https://support.minitab.com/en-us/minitab/19/help-and-how-to/statistical-modeling/regression/supporting-topics/logistic-regression/what-is-a-generalized-linear-model/
[ "# What is a generalized linear model?\n\nBoth generalized linear models and least squares regression investigate the relationship between a response variable and one or more predictors. A practical difference between them is that generalized linear model techniques are usually used with categorical response variables, while least squares regression is usually used with continuous response variables. For a thorough description of generalized linear models, see McCullagh and Nelder [1].\n\nBoth generalized linear model techniques and least squares regression techniques estimate parameters in the model so that the fit of the model is optimized. Least squares minimizes the sum of squared errors to obtain maximum likelihood estimates of the parameters. Generalized linear models obtain maximum likelihood estimates of the parameters using an iteratively reweighted least squares (IRLS) algorithm.\n\nFor example, you could use a generalized linear model to study the relationship between machinists' years of experience (a nonnegative continuous variable), and their participation in an optional training program (a binary variable: either yes or no), to predict whether their products meet specifications (a binary variable: either yes or no). The first two variables are the predictors; the third is the categorical response.\n\n## Types of logistic regression\n\nMinitab Statistical Software provides four generalized linear model techniques that you can use to assess the relationship between one or more predictor variables and a response variable of the following types. 
The previous example uses binary logistic regression because the response variable has two levels.\n\n| Variable type | Number of categories | Characteristics | Examples |\n| --- | --- | --- | --- |\n| Binary | 2 | Two levels | Pass/Fail; Yes/No; High/Low |\n| Ordinal | 3 or more | Natural ordering of the levels | Taste (Mild, Medium, Hot); Medical condition (Critical, Serious, Stable, Good); Survey results (Disagree, Neutral, Agree) |\n| Nominal | 3 or more | No natural ordering of the levels | Taste (Bitter, Sweet, Sour); Color (Red, Blue, Black); School subject (Math, Science, Art) |\n| Poisson | 3 or more | The response variable describes the number of times an event occurs in a finite observation space | 0, 1, 2, ... |\n\n###### Note\n\nFor a model that has one continuous predictor and a binary response variable, Minitab provides a fifth technique. A Binary Fitted Line Plot quickly describes the relationship between the predictor and the response.\n\n[1] P. McCullagh and J. A. Nelder (1992). Generalized Linear Models. Chapman & Hall." ]
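The iteratively reweighted least squares fitting mentioned in the article can be illustrated with a small, self-contained sketch. This is not Minitab's implementation: it fits a binary logistic regression with one predictor plus an intercept by Newton's method (which coincides with IRLS for this model), and the data are invented for the example.

```python
import math

def irls_logistic(xs, ys, iters=25):
    """Fit P(y=1) = 1/(1+exp(-(b0 + b1*x))) by Newton's method,
    which for logistic regression is the same update as IRLS."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)              # IRLS working weight
            g0 += y - p                    # gradient of the log-likelihood
            g1 += (y - p) * x
            h00 += w                       # Hessian (information matrix)
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step: H^-1 * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Invented data: pass/fail outcome vs. years of experience
b0, b1 = irls_logistic([0, 1, 2, 3, 4, 5, 6, 7], [0, 0, 0, 1, 0, 1, 1, 1])
```

A positive slope `b1` here means the fitted probability of passing rises with experience, which is the qualitative pattern built into the made-up data.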
https://notebook.community/Akira794/ProbabilisticRobot/MCL/mcl_py2
[ "# mcl_py2\n\n``````\n\nIn [ ]:\n\nimport numpy as np\nimport copy\nimport math, random\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\nfrom scipy.stats import norm\n\ndef draw_landmarks(landmarks):\n    xs = [ e[0] for e in landmarks ]\n    ys = [ e[1] for e in landmarks ]\n    plt.scatter(xs, ys, s=300, marker=\"*\", label=\"landmarks\", color=\"orange\")\n\ndef draw_robot(pose):\n    plt.quiver([pose[0]], [pose[1]], [math.cos(pose[2])], [math.sin(pose[2])], color=\"red\", label=\"actual robot motion\")\n\ndef relative_landmark_pos(pose, landmark):\n    x, y, th = pose\n    lx, ly = landmark\n    distance = math.sqrt((x-lx)**2 + (y-ly)**2)\n    direction = math.atan2(ly-y, lx-x) - th\n    return (distance, direction, lx, ly)\n\ndef draw_observation(pose, measurement):\n    x, y, th = pose\n    distance, direction, lx, ly = measurement\n    lx = distance * math.cos(th + direction) + x\n    ly = distance * math.sin(th + direction) + y\n    plt.plot([x, lx], [y, ly], color=\"pink\")\n\ndef draw_observations(pose, measurements):\n    for m in measurements:\n        draw_observation(pose, m)\n\ndef observation(pose, landmark):\n    actual_distance, actual_direction, lx, ly = relative_landmark_pos(pose, landmark)\n    if math.cos(actual_direction) < 0.0:   # landmark is behind the robot\n        return None\n    measured_distance = random.gauss(actual_distance, actual_distance*0.1)   # 10% error\n    measured_direction = random.gauss(actual_direction, 5.0/180.0*math.pi)   # 5 deg error\n    return (measured_distance, measured_direction, lx, ly)\n\ndef observations(pose, landmarks):\n    return filter(lambda x: x != None, [ observation(pose, e) for e in landmarks ])\n\ndef likelihood(pose, measurement):\n    x, y, th = pose\n    distance, direction, lx, ly = measurement\n    rel_distance, rel_direction, tmp_x, tmp_y = relative_landmark_pos(pose, (lx, ly))\n    return norm.pdf(x = distance - rel_distance, loc = 0.0, scale = rel_distance / 10.0) \\\n        * norm.pdf(x = direction - rel_direction, loc = 0.0, scale = 5.0/180.0 * math.pi)\n\ndef change_weights(particles, measurement):\n    for p in particles:\n        p.weight *= likelihood(p.pose, 
measurement)\n\n    ws = [ p.weight for p in particles ]\n    s = sum(ws)\n    for p in particles:\n        p.weight = p.weight / s\n\nclass Particles:\n    def __init__(self, pose, w):\n        self.pose = np.array(pose)\n        self.weight = w\n\n    def __repr__(self):\n        return \"pose: \" + str(self.pose) + \" weight: \" + str(self.weight)\n\ndef motion(pose, u):\n    p_x, p_y, p_th = pose\n    fw, rot = u\n\n    actual_fw = random.gauss(fw, fw/10)\n    dir_error = random.gauss(0.0, math.pi / 180.0 * 3.0)\n    actual_rot = random.gauss(rot, rot/10)\n\n    p_x += actual_fw * math.cos(p_th + dir_error)\n    p_y += actual_fw * math.sin(p_th + dir_error)\n    p_th += actual_rot + dir_error\n\n    return np.array([p_x, p_y, p_th])\n\ndef draw(pose, particles):\n    fig = plt.figure(i, figsize=(8, 8))\n    sp = fig.add_subplot(111)\n    sp.set_xlim(-1.0, 1.0)\n    sp.set_ylim(-0.5, 1.5)\n\n    xs = [e.pose[0] for e in particles]\n    ys = [e.pose[1] for e in particles]\n    vxs = [math.cos(e.pose[2])*e.weight for e in particles]\n    vys = [math.sin(e.pose[2])*e.weight for e in particles]\n    plt.quiver(xs, ys, vxs, vys, color=\"blue\", label=\"particles\")\n\n    plt.quiver([pose[0]], [pose[1]], [math.cos(pose[2])], [math.sin(pose[2])], color=\"red\", label=\"actual robot motion\")\n\nif __name__ == '__main__':\n    actual_x = np.array([0.0, 0.0, 0.0])\n    u = np.array([0.2, math.pi / 180.0 * 20])\n    actual_landmarks = [np.array([-0.5, 0.0]), np.array([0.5, 0.0]), np.array([0.0, 0.5])]\n    particles = [Particles([0.0, 0.0, 0.0], 1.0/100) for i in range(100)] ## 100 ##\n\n    path = [actual_x]\n    particle_path = [copy.deepcopy(particles)]\n    measurements = [observations(actual_x, actual_landmarks)]\n\n    for i in range(15):\n        actual_x = motion(actual_x, u)\n        path.append(actual_x)\n\n        ms = observations(actual_x, actual_landmarks)\n        measurements.append(ms)\n\n        for p in particles:\n            p.pose = motion(p.pose, u)\n\n        for m in ms:\n            change_weights(particles, m)\n\n        pointer = random.uniform(0.0, 1.0/len(particles))\n        new_particles = []\n        particles_num = len(particles)\n\n        accum = []\n        sm = 0.0\n        for p in particles:\n            accum.append(p.weight + sm)\n            sm += p.weight\n\n        while pointer < 1.0:\n            if accum[0] >= 
pointer:\n                new_particles.append(\n                    Particles(copy.deepcopy(particles[0].pose), 1.0/particles_num)\n                )\n                pointer += 1.0/particles_num\n            else:\n                accum.pop(0)\n                particles.pop(0)\n\n        particles = new_particles\n\n        particle_path.append(copy.deepcopy(particles))\n\n    for i, p in enumerate(path):\n        draw(path[i], particle_path[i])\n        draw_landmarks(actual_landmarks)\n        draw_observations(path[i], measurements[i])\n        plt.show()\n\n``````\n``````\n\nIn [ ]:\n\n``````" ]
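The tail of the notebook's main loop is a low-variance (systematic) resampler written with `pop(0)` on the particle and accumulator lists. The same idea, stated as a standalone function over a list of weights (a sketch, not the notebook's exact code):

```python
import random

def systematic_resample(weights):
    """Return n indices chosen by low-variance (systematic) resampling:
    one random start, then n equally spaced pointers swept through the
    cumulative weights."""
    n = len(weights)
    step = sum(weights) / n
    pointer = random.uniform(0.0, step)    # the single random draw
    indices, cum, i = [], 0.0, 0
    for _ in range(n):
        # advance until the cumulative weight covers the pointer
        while i < n - 1 and cum + weights[i] < pointer:
            cum += weights[i]
            i += 1
        indices.append(i)
        pointer += step
    return indices
```

Because only one random number is drawn, a particle with weight w is copied either floor(n·w) or ceil(n·w) times, which gives lower variance than drawing n independent samples.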
https://help.imsl.com/java/7.2/manual/api/com/imsl/datamining/KohonenSOM.html
[ "JMSLTM Numerical Library 7.2.0\ncom.imsl.datamining\n\n## Class KohonenSOM\n\n• All Implemented Interfaces:\nSerializable, Cloneable\n\n```public class KohonenSOM\nextends Object\nimplements Serializable, Cloneable```\nA Kohonen self organizing map.\n\nA self-organizing map (SOM), also known as a Kohonen map or Kohonen SOM, is a technique for gathering high-dimensional data into clusters that are constrained to lie in low dimensional space, usually two dimensions. A Kohonen map is a widely used technique for the purpose of feature extraction and visualization for very high dimensional data in situations where classifications are not known beforehand. The Kohonen SOM is equivalent to an artificial neural network having inputs linked to every node in the network. Self-organizing maps use a neighborhood function to preserve the topological properties of the input space.\n\nIn a Kohonen map, nodes are arranged in a rectangular or hexagonal grid or lattice. The input is connected to each node, and the output of the Kohonen map is the zero-based (i, j) index of the node that is closest to the input. A Kohonen map involves two steps: training and forecasting. Training builds the map using input examples (vectors), and forecasting classifies a new input.\n\nDuring training, an input vector is fed to the network. The input's Euclidean distance from all the nodes is calculated. The node with the shortest distance is identified and is called the Best Matching Unit, or BMU. After identifying the BMU, the weights of the BMU and the nodes closest to it in the SOM lattice are updated towards the input vector. The magnitude of the update decreases with time and with distance (within the lattice) from the BMU. 
The weights of the nodes surrounding the BMU are updated according to W(t+1) = W(t) + α(t) θ(d, t) (D(t) - W(t)), where W represents the node weights, α(t) is the monotonically decreasing learning coefficient function, θ(d, t) is the neighborhood function, d is the lattice distance between the node and the BMU, and D(t) is the input vector.\n\nThe monotonically decreasing learning coefficient function α(t) is a scalar factor that defines the size of the update correction. The value of α(t) decreases with the step index t.\n\nThe neighborhood function θ(d, t) depends on the lattice distance d between the node and the BMU, and represents the strength of the coupling between the node and BMU. In the simplest form, the value of θ is 1 for all nodes closest to the BMU and 0 for others, but a Gaussian function is also commonly used. Regardless of the functional form, the neighborhood function shrinks with time (Hollmén, 15.2.1996). Early on, when the neighborhood is broad, the self-organizing takes place on the global scale. When the neighborhood has shrunk to just a couple of nodes, the weights converge to local estimates.\n\nNote that in a rectangular grid, the BMU has four closest nodes for the Von Neumann neighborhood type, or eight closest nodes for the Moore neighborhood type. 
In a hexagonal grid, the BMU has six closest nodes.\n\nDuring training, this process is repeated for a number of iterations on all input vectors.\n\nDuring forecasting, the node with the shortest Euclidean distance is the winning node, and its (i, j) index is the output.\n\nExample, Serialized Form\n• ### Field Summary\n\nFields\nModifier and Type Field and Description\n`static int` `GRID_HEXAGONAL`\nIndicates a hexagonal grid.\n`static int` `GRID_RECTANGULAR`\nIndicates a rectangular grid.\n`static int` `TYPE_MOORE`\nIndicates a Moore neighborhood type.\n`static int` `TYPE_VON_NEUMANN`\nIndicates a Von Neumann neighborhood type.\n• ### Constructor Summary\n\nConstructors\nConstructor and Description\n```KohonenSOM(int dim, int nrow, int ncol)```\nConstructor for a `KohonenSOM` object.\n• ### Method Summary\n\nMethods\nModifier and Type Method and Description\n`int[]` `forecast(double[] input)`\nReturns a forecast computed using the `KohonenSOM` object.\n`int[][]` `forecast(double[][] input)`\nReturns forecasts computed using the `KohonenSOM` object.\n`int` `getDimension()`\nReturns the number of weights for each node.\n`int` `getGridType()`\nReturns the grid type.\n`int` `getNeighborhoodType()`\nReturns the neighborhood type for the rectangular grid.\n`int` `getNumberOfColumns()`\nReturns the number of columns of the node grid.\n`int` `getNumberOfRows()`\nReturns the number of rows of the node grid.\n`double[][][]` `getWeights()`\nReturns the weights of the nodes.\n`double[]` ```getWeights(int i, int j)```\nReturns the weights of the node at (i, j) in the node grid.\n`boolean` `isWrapAround()`\nReturns whether the opposite edges are connected or not.\n`void` `setGridType(int type)`\nSets the grid type.\n`void` `setNeighborhoodType(int type)`\nSets the neighborhood type.\n`void` `setWeights()`\nSets the weights of the nodes using random numbers.\n`void` `setWeights(double[][][] weights)`\nSets the weights of the nodes.\n`void` ```setWeights(int i, int j, double[] 
weights)```\nSets the weights of the node at (i, j) in the node grid.\n`void` `setWeights(Random random)`\nSets the weights of the nodes using a `Random` object.\n`void` `wrapAround()`\nSets a flag to indicate the map should wrap around or connect opposite edges.\n• ### Methods inherited from class java.lang.Object\n\n`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`\n• ### Field Detail\n\n• #### GRID_HEXAGONAL\n\n`public static final int GRID_HEXAGONAL`\nIndicates a hexagonal grid.\nConstant Field Values\n• #### GRID_RECTANGULAR\n\n`public static final int GRID_RECTANGULAR`\nIndicates a rectangular grid.\nConstant Field Values\n• #### TYPE_MOORE\n\n`public static final int TYPE_MOORE`\nIndicates a Moore neighborhood type.\nConstant Field Values\n• #### TYPE_VON_NEUMANN\n\n`public static final int TYPE_VON_NEUMANN`\nIndicates a Von Neumann neighborhood type.\nConstant Field Values\n• ### Constructor Detail\n\n• #### KohonenSOM\n\n```public KohonenSOM(int dim,\nint nrow,\nint ncol)```\nConstructor for a `KohonenSOM` object.\nParameters:\n`dim` - An `int` scalar containing the number of weights for each node in the node grid. `dim` must be greater than zero.\n`nrow` - An `int` scalar containing the number of rows in the node grid. `nrow` must be greater than zero.\n`ncol` - An `int` scalar containing the number of columns in the node grid. `ncol` must be greater than zero.\n• ### Method Detail\n\n• #### forecast\n\n`public int[] forecast(double[] input)`\nReturns a forecast computed using the `KohonenSOM` object.\nParameters:\n`input` - A `double` array containing the input data. `input.length` must be equal to `dim`.\nReturns:\nAn `int` array of length 2 containing the (i, j) index of the output node.\n• #### forecast\n\n`public int[][] forecast(double[][] input)`\nReturns forecasts computed using the `KohonenSOM` object.\nParameters:\n`input` - A `double` matrix containing `input.length` observations of data. 
`input[i].length` must be equal to `dim`.\nReturns:\nAn `int` matrix containing the output indices of the nodes. The i-th row contains the (i, j) index of the output node for `input[i]`.\n• #### getDimension\n\n`public int getDimension()`\nReturns the number of weights for each node.\nReturns:\nAn `int` scalar containing the number of weights for each node.\n• #### getGridType\n\n`public int getGridType()`\nReturns the grid type.\nReturns:\nAn `int` scalar containing the grid type. The return value is either `KohonenSOM.GRID_RECTANGULAR` or `KohonenSOM.GRID_HEXAGONAL`\n• #### getNeighborhoodType\n\n`public int getNeighborhoodType()`\nReturns the neighborhood type for the rectangular grid.\nReturns:\nAn `int` scalar containing the neighborhood type. The return value is either `KohonenSOM.TYPE_VON_NEUMANN` or `KohonenSOM.TYPE_MOORE`\n• #### getNumberOfColumns\n\n`public int getNumberOfColumns()`\nReturns the number of columns of the node grid.\nReturns:\nAn `int` scalar containing the number of columns of the node grid.\n• #### getNumberOfRows\n\n`public int getNumberOfRows()`\nReturns the number of rows of the node grid.\nReturns:\nAn `int` scalar containing the number of rows of the node grid.\n• #### getWeights\n\n`public double[][][] getWeights()`\nReturns the weights of the nodes.\nReturns:\nAn `nrow` by `ncol` matrix of double arrays containing the weights of the nodes.\n• #### getWeights\n\n```public double[] getWeights(int i,\nint j)```\nReturns the weights of the node at (i, j) in the node grid.\nParameters:\n`i` - An `int` scalar containing the row index of the node in the node grid, where 0 <= i < `nrow`.\n`j` - An `int` scalar containing the column index of the node in the node grid, where 0 <= j < `ncol`.\nReturns:\nA `double` array containing the weights of the node at (i, j) in the node grid.\n• #### isWrapAround\n\n`public boolean isWrapAround()`\nReturns whether the opposite edges are connected or not.\nReturns:\nA `boolean` indicating whether or not the 
opposite edges are connected. It is true if the opposite edges are connected. Otherwise, it is false.\n• #### setGridType\n\n`public void setGridType(int type)`\nSets the grid type.\nParameters:\n`type` - An `int` scalar containing the grid type, rectangular (`KohonenSOM.GRID_RECTANGULAR`) or hexagonal (`KohonenSOM.GRID_HEXAGONAL`).\n\nDefault: `type` = `GRID_RECTANGULAR`.\n\n `type` Description `GRID_RECTANGULAR` Use a rectangular grid (`type` = 0). `GRID_HEXAGONAL` Use a hexagonal grid (`type` = 1).\n• #### setNeighborhoodType\n\n`public void setNeighborhoodType(int type)`\nSets the neighborhood type.\nParameters:\n`type` - An `int` scalar containing the neighborhood type, Von Neumann (`KohonenSOM.TYPE_VON_NEUMANN`) or Moore (`KohonenSOM.TYPE_MOORE`). This method is ignored for a hexagonal grid.\n\nDefault: `type` = `TYPE_VON_NEUMANN`.\n\n `type` Description `TYPE_VON_NEUMANN` Use the Von Neumann (`type` = 0) neighborhood type. `TYPE_MOORE` Use the Moore (`type` = 1) neighborhood type.\n• #### setWeights\n\n`public void setWeights()`\nSets the weights of the nodes using random numbers. The weights are in [0.0, 1.0].\n• #### setWeights\n\n`public void setWeights(double[][][] weights)`\nSets the weights of the nodes.\nParameters:\n`weights` - An `nrow` by `ncol` matrix of double arrays containing the weights of the nodes. `weights[i][j].length` must be equal to `dim`.\n• #### setWeights\n\n```public void setWeights(int i,\nint j,\ndouble[] weights)```\nSets the weights of the node at (i, j) in the node grid.\nParameters:\n`i` - An `int` scalar containing the row index of the node in the node grid, where 0 <= i < `nrow`.\n`j` - An `int` scalar containing the column index of the node in the node grid, where 0 <= j < `ncol`.\n`weights` - A `double` array containing the weights. `weights.length` must be equal to `dim`.\n• #### setWeights\n\n`public void setWeights(Random random)`\nSets the weights of the nodes using a `Random` object. 
The weights are generated using the `Random.nextDouble` method.\nParameters:\n`random` - A `Random` object used to generate random numbers for the nodes.\n• #### wrapAround\n\n`public void wrapAround()`\nSets a flag to indicate the map should wrap around or connect opposite edges. A hexagonal grid must have an even number of rows to wrap around. By default, opposite edges are not connected." ]
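The training procedure the Javadoc describes — find the BMU, then pull it and its lattice neighbors toward the input with a decaying learning rate and a shrinking neighborhood — can be sketched in a few lines. This is an illustrative implementation, not IMSL's: the Gaussian neighborhood, the decay schedules, and the names `train_som` and `forecast` are choices made for this example.

```python
import math, random

def train_som(data, nrow, ncol, dim, iters=200):
    """Train a rectangular-grid SOM; returns nrow x ncol x dim weights."""
    w = [[[random.random() for _ in range(dim)] for _ in range(ncol)]
         for _ in range(nrow)]
    for t in range(iters):
        x = random.choice(data)
        alpha = 0.5 * (1.0 - t / iters)                    # decreasing learning rate
        sigma = 1.0 + (max(nrow, ncol) / 2.0) * (1.0 - t / iters)  # shrinking neighborhood
        # locate the Best Matching Unit by Euclidean distance
        bi, bj = min(((i, j) for i in range(nrow) for j in range(ncol)),
                     key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                        for k in range(dim)))
        for i in range(nrow):
            for j in range(ncol):
                d2 = (i - bi) ** 2 + (j - bj) ** 2         # squared lattice distance
                theta = math.exp(-d2 / (2.0 * sigma * sigma))
                for k in range(dim):
                    # W(t+1) = W(t) + alpha * theta * (D(t) - W(t))
                    w[i][j][k] += alpha * theta * (x[k] - w[i][j][k])
    return w

def forecast(w, x):
    """Return the (i, j) index of the winning node, as in KohonenSOM.forecast."""
    nrow, ncol, dim = len(w), len(w[0]), len(w[0][0])
    return min(((i, j) for i in range(nrow) for j in range(ncol)),
               key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                  for k in range(dim)))
```

With inputs drawn from two well-separated clusters, the trained map should send the two cluster centers to different nodes.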
https://telliott99.blogspot.com/2009/08/
[ "## Monday, August 31, 2009\n\n### Connecting ln(x) and 1/x\n\nI have more from Strang. We're going to look at the limit of Δy/Δx in the context of the exponential. Following the book, we use the symbol h for Δx, the small change in x. So we have:\n\n `dy/dx = lim(h->0) [y(x+h) - y(x)] / h`\n\nIn this case, y(x) = b^x so we want\n\n `dy/dx = lim(h->0) [b^(x+h) - b^x] / h`\n\nAs he says, the key idea is that we can split the first part\n\n `b^(x+h) = b^x b^h`\n\nand then factor out b^x and move it outside the limit:\n\n `dy/dx = b^x lim(h->0) [b^h - 1] / h`\n\nThe term in the limit is something. It depends on what the limit converges to. However, the important thing is that it is not a variable but a constant. Thus,\n\n `dy/dx = c b^x`\n\nThis is the equation that I used in the last post. Starting from here there is a quick demonstration of something that we proved in a more roundabout fashion before. Start with the equation above and invert it:\n\n `dx/dy = 1 / (c b^x)`\n\nbut\n\n `y = b^x\nx = log_b(y)\ndx/dy = 1/(cy)\nd/dy log_b(y) = 1/(cy)`\n\nBeautiful! Strang says \"that proof was...powerful too quick.\" So he does the following.\n\n `y = b^x\nf(y) = x\nf(b^x) = x\nf'(b^x) (c b^x) = 1 # this is the chain rule\nf'(b^x) = 1 / (c b^x)\nf'(y) = 1/(cy) # identify b^x as y`\n\nf(y) converts y to x; it takes the logarithm to base b of y.\n\n### More about e\n\nPreviously I posted about a few of the remarkable properties of e, the base of natural logarithms. I said that e^x is the function that is the derivative of itself.\n\nActually, that's not the whole truth. Every function b^x is the derivative of itself, you just have to multiply by a constant. The thing is that the constant is 1 when b = e. As you can see in the code below, we obtain the slope of the curve b^x at x as log_e(b) * b^x. 
The constant to multiply by is log_e(b), which equals 1 when b = e.\n\nFor more details, see Chapter 6 in Strang.\n\n `color.list = c('red', 'magenta', 'cyan', 'thistle4', 'salmon')\nplot(1:15, xlim=c(0.5,1.8), ylim=c(0,15), type='n')\nL = 2:6\nL2 = c(1, 1.5)\nfor (i in 1:length(L)) {\n    b = L[i]\n    col = color.list[i]\n    curve(b**x, lwd=10, add=T, col=col)\n    for (j in 1:length(L2)) {\n        x = L2[j]\n        dx = 0.35\n        x0 = x - dx\n        x1 = x + dx\n        y = b**x\n        points(x, y, pch=16, col='blue', cex=2)\n        m = log(b)*y # slope is const * y\n        y0 = y - m*dx\n        y1 = y + m*dx\n        lines(c(x0,x1), c(y0,y1), lwd=3, col='blue')\n    }\n}`\n\n## Sunday, August 30, 2009\n\n### Slope of the sine curve\n\nIn another post about representing the sine and cosine as infinite series, I tried to show how the series form easily demonstrates the correctness of the derivatives of these functions, that if\n\n `y = sin(x)\ndy/dx = cos(x)\ny = cos(x)\ndy/dx = -sin(x)`\n\nBut unfortunately this argument is circular (I think), since the series comes from a Taylor series, which is generated using the first derivative (and the second, third and more derivatives as well).\n\nIn this post, I want to do two things. Strang has a nice picture which makes clear the relationship between position on the unit circle and speed. This is it:\n\n[figure: position and velocity vectors on the unit circle]\n\nThe triangles for velocity and position are similar, just rotated by π / 2.\n\nIt is clear from the diagram that at any point, the y-component of the velocity is cos(t), while the y-position is sin(t). Thus, the rate of change of sin(t) is cos(t). This is the result we've been seeking. Similarly, the rate of change of the x-position, cos(t), is -sin(t).\n\nStrang also derives this result more rigorously starting on p. 64. That derivation is a bit complicated, although not too bad, and I won't follow the whole thing here. It uses his standard approach as follows:\n\n `dy / dx = lim(h -> 0) [sin(x + h) - sin(x)] / h`\n\nApplying a result found using just the Pythagorean theorem earlier (p. 
31) for sin (s + t):\n\n `sin(s + t) = sin(s) cos(t) + cos(s) sin(t)\ncos(s + t) = cos(s) cos(t) - sin(s) sin(t)`\n\nHe comes up with this expression for Δy/Δx:\n\n `Δy / Δx = sin x [(cos(h) - 1) / h] + cos x (sin h / h)`\n\nThe problem is then to determine what happens to these two expressions in the limit as h -> 0. The first one is more interesting. As h gets small, | cos(h)-1 | gets smaller like h^2, so the ratio goes to 0.\n\nR can help us see better.\n\n `h = 1\nfor (i in 1:6) {\n    h = h/10\n    print(h)\n    print(cos(h)-1)\n}`\n\n `0.1    -0.004995835\n0.01   -4.999958e-05\n0.001  -5e-07\n1e-04  -5e-09\n1e-05  -5e-11\n1e-06  -5.000445e-13`\n\nHere is the plot for the second one, which converges to 1, leaving us with simply cos(x):\n\n `f <- function(x) { x }\ncurve(sin(x), from=0, to=0.5, ylim=c(0,0.6), lwd=3, col='red')\ncurve(f, from=0, to=0.5, lwd=3, col='blue', add=T)`\n\n## Friday, August 28, 2009\n\n### Feynman and Kepler\n\nWhen I was in college I bought a book containing lectures that Richard Feynman gave at Cornell in 1964 (the Messenger series). The book is called The Character of Physical Law. Quite simply, it was and is wonderful.\n\nNow, years later it turns out that the lectures were taped, and that after some wrangling, a famous guy (let's call him Bill) bought the rights to this material. Recently, Bill made the video available online. There's just one small problem. I promised myself I would never again install any software made by Bill and his friends on my computer. But, in order to view the lectures, he requires me to install the Silverlight plug-in for my browser (it's undoubtedly a DRM thing). In typical Bill fashion, if you go to the link for the videos, the download of the plug-in starts automatically. Long story short---I held my nose and did it. 
So, thanks for the lectures (link).\n\nIn Feynman's second lecture (the Relation of Mathematics and Physics) he talks about Kepler's Second Law: \"A line joining a planet and the sun sweeps out equal areas during equal intervals of time.\" Feynman uses an argument based on vectors to show this. I didn't understand what Feynman said (and the book is just a transcription of the talk), but I googled around and found a post here. Unfortunately, the post is truncated at the critical point. The author was kind enough to write me with a detailed explanation. In the spirit of this blog as the place I post my homework for a self-taught course in anything I'm interested in, here is my version of his explanation of Feynman's explanation of Kepler's law.\n\n[figure: vectors A and B separated by angle θ]\n\nFeynman uses the cross-product of two vectors.\n\nIn the diagram, the vectors A and B originate at the same point and the angle between them is θ. The cross-product A x B is a vector with magnitude = |A| |B| sin θ and its direction is perpendicular to the plane formed by A and B. The area of the triangle formed by A and B is one-half the magnitude of the cross-product. (I wish I knew how to produce the arrows over the labels for the vectors using html but I don't, sorry).\n\nIn the context of our problem, A is the vector from the sun to our planet at time-zero, and B is the vector at time dt. The magnitude of B is not equal to that of A, in general, because the orbit is an ellipse. So we can replace the labels by A = r and B = r + dr. The area being swept out (or its tiny component dA in a very short time dt) is proportional to the cross-product (neglect the factor of one-half):\n\n `dA = r x (r + dr)`\n\nor, as Feynman wrote using the dot notation:\n\n `Ȧ = r x ṙ`\n\nNow, what we are interested in is the proposition that the rate-of-change of the area is constant, that the rate-of-change of the rate-of-change of the area is zero.\n\n `Ä = d²A/dt² = 0`\n\nSo, we need to differentiate again with respect to time, and, apparently, we can use the product rule from ordinary differentiation on this cross-product:\n\n `Ä = d/dt (r x ṙ)\n  = r x r̈ + ṙ x ṙ`\n\n(As Feynman says: it's just playing with dots). The second term has the cross-product of a vector with itself, θ is zero and sin θ is zero so the whole thing equals zero.\n\nThe first term has the cross-product of r with the acceleration, the rate-of-change of the rate-of-change of r with time. That is equal to F/m.\n\nBut the thing is that the force acts along the radius, so now we have the cross-product of a vector with another vector that is turned by 180° from it. This product is also zero, and therefore Ä = 0.\n\n### Area of a circle: simple calculus\n\n[figure: circle area by slices, rings, and pie wedges]\n\nI posted some time ago about a formula to find the area of the circle (knowing its circumference). I think it's credited to Euclid, although Google isn't being my friend at the moment, so I'm not sure.\n\nTo brush up on calculus, let's explore methods to find the area, as described in Strang's Calculus. He says:\n\"The goal here is to take a first step away from rectangles...that is our first step toward freedom, away from rectangles to rings.\"\n\nIn the example on the left panel of the figure, imagine slicing up the circle in the way of any standard integration problem to find the area under a curve. Simplify by considering only the area above the dotted line (y = 0). If the radius is R, then we can write y as a function of x:\n\n `y = (R^2 - x^2)^(1/2)`\n\nImmediately, we run into a problem. I don't know a function which gives this as its derivative, so I don't know how to integrate it. Let's leave it until the end.\n\nIn the second approach, we picture the circle as being built up out of rings. 
As the radius r varies from 0 to R, the circumference of the ring is 2πr and we need to integrate:\n\n `A = ∫ 2πr dr = πr2 + C`\n\nevaluated between r = 0 and r = R:\n\n `A = πR2`\n\nAs Strang emphasizes, now we can see the geometrical reason why the derivative of the area is the circumference! How fast does the area grow? It grows in proportion to the circumference.\n\nExactly the same argument explains why the surface area of a sphere (4πr2) is the derivative of the volume (4/3πr3).\n\nThe third approach is to visualize the circle being sliced like a pie. This is really the same method we used in the previous post, which employed calculus on the sly. Now we have a series of triangles. The angle at the vertex of each thin triangle is dθ. The height is just R, and the base of each triangle is proportional to R, it is R dθ. So the area of the thin triangle is:\n\n `dA = 1/2 R2 dθ`\n\nWe integrate (R is a constant) and evaluate between θ = 0 and θ = 2π:\n\n `A = 1/2 R2 θ\nA = 1/2 R2 2π\nA = π R2`\n\nBack to the first method. We will try numerical integration. We will compute the area of the quarter-circle between x = 0 and x = R, and then simplify even further by considering just the unit circle with R = 1. We do this in a naive way, dividing the figure into a large number of thin segments, and calculate:\n\n `y = (1 - x2)1/2`\n\nWe get a better approximation if we divide the very first interval in half. In R:\n\n `y <- function(x) { sqrt(1 - x**2) }\nd = 0.00001\nx = seq(0,1,by=d)\nf = y(x)\nS = sum(f[-1]) + 0.5*f[1]\nS = S*d`\n\n `> S\n0.7853982\n> S*4\n3.141593`\n\n## Thursday, August 27, 2009\n\n### Series for sine and cosine\n\nYou probably know that the sine and cosine functions can also be given as a series:\n\n `sin(x) = x - x3/3! + x5/5! - x7/7! ...\ncos(x) = 1 - x2/2! + x4/4! - x6/6! ...`\n\nAnd furthermore, that the exponential function can also be given as a series:\n\n `ex = 1 + x/1! + x2/2! + x3/3! + x4/4! ...`\n\nThese series are really interesting.
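They are also easy to check numerically. Here is a quick sketch (Python, my addition, not from the original post) comparing truncated sums against the math-library functions:

```python
import math

def sin_series(x, terms=12):
    # partial sum of x - x^3/3! + x^5/5! - ...
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

def exp_series(x, terms=20):
    # partial sum of 1 + x/1! + x^2/2! + ...
    return sum(x**n / math.factorial(n) for n in range(terms))

print(abs(sin_series(1.0) - math.sin(1.0)))   # essentially zero
print(abs(exp_series(1.0) - math.e))          # essentially zero
```

A dozen terms already agree with math.sin to near machine precision at x = 1.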
One reason is that they make clear that the formulas for the derivatives of these functions are obviously correct.\n\nTake e first:\n\n `ex = 1 + x/1! + x2/2! + x3/3! + x4/4! ...\nd/dx ex = 0 + 1/1 + 2x/2 + 3x2/(3*2!) + 4x3/(4*3!) ...\n = 1 + x/1! + x2/2! + x3/3! ...\n = ex`\n\nNow, the sine:\n\n `sin(x) = x - x3/3! + x5/5! - x7/7! ...\nd/dx sin(x) = 1 - 3x2/(3*2!) + 5x4/(5*4!) - 7x6/(7*6!) ...\n = 1 - x2/2! + x4/4! - x6/6! ...\n = cos(x)`\n\nThat's really spectacular. So the question I had was: where do these series come from? And I'm not sure this is the only way, but I think they come from a Taylor series. The Taylor series approximates a function f(x) at a particular point x = a by the series:\n\n `f(x) ≈ Σ (n=0 to n=∞) f(n)(a)/n! * (x-a)n`\n\nwhere f(n)(a) is the nth derivative of f(x). For ex, all of these derivative terms are just ex. Evaluating the series at a = 0, these derivatives are all 1, and the series becomes (as we had above):\n\n `f(x) ≈ Σ (n=0 to n=∞) xn / n!`\n\nNow, I still have a couple of problems. For example, in order to get the terms of the Taylor series for sine and cosine we will have to know the derivatives, which is also what we seek, so that is a circular argument. Also, I'm not very clear on what the a is about. The approximation to f(x) works for the "neighborhood of a."\n\nR code:\n\n `plot(1:2*pi,ylim=c(-1.2,1.2),type='n')\ncurve(sin,from=0,to=2*pi,\n  col='blue',lwd=5)\ncurve(cos,from=0,to=2*pi,\n  col='red',lwd=5,add=T)\nlines(c(0,2*pi),c(0,0),lty=2,lwd=3)`\n\n## Wednesday, August 26, 2009\n\n### Duly Quoted\n\nHere's a funny (and telling) story about a job interview someone had with Johnny von Neumann (and my post here):\n\n"Von Neumann lived in this elegant lodge house on Westcott Road in Princeton... As I parked my car and walked in, there was this very large Great Dane dog bouncing around on the front lawn.
I knocked on the door and von Neumann, who was a small, quiet, modest kind of a man, came to the door and bowed to me and said, 'Bigelow, won't you come in,' and so forth, and this dog brushed between our legs and went into the living room. He proceeded to lie down on the rug in front of everybody, and we had the entire interview---whether I would come, what I knew, what the job was going to be like---and this lasted maybe forty minutes, with the dog wandering all around the house. Towards the end of it, von Neumann asked me if I always traveled with the dog. But of course it wasn't my dog, and it wasn't his either, but von Neumann---being a diplomatic, middle-European type person---he kindly avoided mentioning it until the end.\"\n\n--Julian Bigelow\nquoted in:\nWho got Einstein's Office\nEd Regis\np 110\n\n### Towers of Hanoi", null, "This famous mathematical game is described in wikipedia. Briefly, we have three pegs and N disks. We wish to move the whole stack of disks to a different peg, let's say # 3. The rules are:\n\n• Only one disk may be moved at a time.\n• Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.\n• No disk may be placed on top of a smaller disk.\n\nFor example, here is an intermediate stage in the game with 4 disks. Can you see the three moves that brought us to this point?", null, "A couple of interesting things: one is the recursive nature of Towers of Hanoi. If you already know how to solve the game with N disks, then how do you solve the game with N+1?\n\nEasy, move the first N disks to peg # 2, then move disk N+1 to peg # 3, then move all the other N disks on top. This stereotyped pattern leads to the following visual aid. (I've forgotten where I saw it). It looks like a binary ruler.", null, "This version of the ruler describes the series of moves (though not the target pegs) for the N=4 game. 
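The recursive rule above ("move the first N disks to the spare peg, move disk N+1, then move the N disks on top") translates almost directly into code. A minimal sketch (Python, my addition; the peg numbering is arbitrary):

```python
def hanoi(n, src=1, aux=2, dst=3):
    # returns the list of moves (from_peg, to_peg) that transfers n disks
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)      # park n-1 disks on the spare peg
            + [(src, dst)]                   # move the largest disk
            + hanoi(n - 1, aux, src, dst))   # stack the n-1 disks back on top

print(len(hanoi(4)))   # 15 moves, i.e. 2**4 - 1
```

The move count doubles-plus-one with each added disk, which is where the 2^N - 1 totals below come from.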
To extend it, add a new bar of the correct height for N = 5, then duplicate all the bars we already have.\n\nThe number of moves grows rapidly:\n\n `N   moves\n1   1\n2   3\n3   7\n4   15\nN   2N-1`\n\nAccording to wikipedia:\n\nThe puzzle was invented by the French mathematician Édouard Lucas in 1883. There is a legend about a Vietnamese temple which contains a large room with three time-worn posts in it surrounded by 64 golden disks. The monks of Hanoi, acting out the command of an ancient prophecy, have been moving these disks, in accordance with the rules of the puzzle, since that time. The puzzle is therefore also known as the Tower of Brahma puzzle. According to the legend, when the last move of the puzzle is completed, the world will end.\n\nWe can calculate, at one move per second, the full 2**64 - 1 moves will take roughly 585 billion years, about 40 times the current age of the universe. (The session below computes the years in 2**63 seconds; double the result for 2**64.)\n\n `>>> x = 3600*24*365\n>>> x\n31536000\n>>> 2**63/x\n292471208677L`\n\n### Volume of a sphere: calculus\n\nThis derivation is straight from the book Strang's Calculus. We want to calculate the volume of a hemi-sphere as shown in the figure. The cross-sections of this solid are semi-circles. The area of each semicircle is:\n\n `A = 1/2 πr2 = 1/2 π(R2-x2)`\n\nTo find the volume, we add up (integrate) the areas of each little slice:\n\n ```V = ∫ A(x) dx\n  = ∫ 1/2 π(R2-x2) dx\n  = 1/2 π [ R2x - 1/3 x3 ]```\n\nEvaluate the expression in brackets between x = -R and +R:\n\n ```= [R3 - 1/3 R3] - [-R3 + 1/3 R3]\n= 2/3 R3 + 2/3 R3\n= 4/3 R3\n\nV = 1/2 π 4/3 R3 = 2/3 π R3```\n\nThe total volume is twice this, or\n\n `V = 4/3 π R3`\n\nThe volume can be found in either cylindrical or spherical coordinates. For spherical coordinates we have:\n\n ```x = r cosθ sinφ\ny = r sinθ sinφ\nz = r cosφ```\n\nThat is, φ is the angle of the vector r with the z-axis, so the z-component is r cosφ. The component orthogonal to that is the projection of r on the x,y-plane, and that is r sinφ.
Then, θ is the angle of that projection (and r) with respect to the x-axis. The x- component is r sinφ cosθ (the order is not important), while the y-component is r sinφ sinθ.\n\nIn this case, the integral is a triple integral:\n\n ```2π π R V = ∫ ∫ ∫ r2 sinφ dr dφ dθ 0 0 0```", null, "From reading Chapter 14 on multiple integrals in Strang, it seems that what we do here is to first integrate with respect to r, holding the angles constant:\n\n ```2π π V = 1/3 R3 ∫ ∫ sinφ dφ dθ 0 0```\n\nUPDATE: A sharp-eyed reader caught a silly error in the part that follows, so it's been edited. I had set up the problem in the first half (see the figure at the beginning of the post) to use the hemisphere. But in the second part we have two angles, θ and φ, with different limits of integration.\n\nContinuing with the standard approach in two dimensions, we let θ go from 0 to 2π. Now φ will go from 0 to π. Imagine rotating the great circle in the xy-plane around the x-axis: we need only to go from 0 to π. We could do the hemisphere by having φ go from 0 to π/2, but it would be a weird volume, with two wedges joined at the x-axis.\n\nSo, we have the integral of sinφ dφ, which is - cosφ, evaluated between φ = 0 and φ = π =>\n ```= -cos(pi) - -(cos(0)) = -(-1) + 1 = 2 2π V = 2/3 R3 ∫ dθ 0```\n\nBut this is just θ evaluated between θ = 0 and θ = 2π => 2π, so finally we have:\n\n `V = 4/3 R3 π`\n\n(Figures are from the book)\n\nUPDATE 2: Coming back a couple of years later, I notice Google has made changes to blogger (editing the html) that retroactively messed up the formatting on this post. The limits of integration aren't shown at the correct column offset any more. My apologies, but I don't see a simple fix at the moment.\n\n### Archimedes: volume of a sphere\n\nArchimedes was one of those rare people in the history of the world who overwhelm you with their genius. Like few others, he discovered not just one but a large number of really important things. 
The discovery of which he was probably most proud was the method to find the volume (and surface area) of a sphere. He found that the volume of a sphere is 2/3 the volume of the cylinder that just contains it.\n\nThis was symbolized by the sphere and cylinder on his tombstone, as witnessed (years later) by Cicero. We have no idea what Archimedes looked like, but that doesn't keep people from drawing his portrait!\n\nWhat strikes me most vividly about the discovery is that Archimedes found the correct relationship by experiment---he weighed the solids, and not only that, he used the law of the lever. According to this page, the balancing was done in a fairly complex manner. We have a cylinder that can just contain the sphere and a cone whose radius and height are equal and twice the radius of the sphere. Moreover, the density of the cylinder is four times that of the other two objects.", null, "Then, by the law of the lever, the weight of the cylinder is twice the combined weights of the sphere and the cone together (an equal force from gravity when suspended at half the distance from the fulcrum). Because the density of the cylinder is 4 times greater, its volume must be also one-half the combined volumes of the sphere and the cone.\n\nWe can check that using the (now) known formulas (see my post about the cone):\n\n `Vcylinder = 2r*πr2 = 2 πr3Vsphere = 4/3 πr3Vcone = 1/3 π(2r)(2r)2 = 8/3 πr3Vsphere + Vcone = 4 πr3`\n\nAccording to Archimedes in the Method (translation by Heath):\n\"For certain things which first became clear to me by a mechanical method had afterward to be demonstrated by geometry...it is of course easier, when we have previously acquired by the method some knowledge of questions, to supply the proof than it is to find the proof without any previous knowledge. 
This is a reason why, in the case of the theorems the proof of which Eudoxus was the first to discover, namely, that the cone is a third part of the cylinder, and the pyramid a third part of the prism, having the same base and equal height, we should give no small share of the credit to Democritus, who was the first to assert this truth...though he did not prove it."\n\nOnce he knew the correct answer, he was able to find his way to a rigorous derivation. Very smart.\n\nThere are two things that make me wonder about the story. One is: why not just weigh objects of equal density like this:\n\nWe get 4/3 for the sphere, 1/3 for the cone, and 2 for the cylinder. It should work. (Oops, see below). My guess is that Archimedes is just showing off. Note that he would have used his principle of buoyancy to determine the correct densities (and for that matter, could use the law of the lever to correct if the cylinder's density was not precisely 4x the others).\n\nThe second question is: what materials would he use? What has a density 4 times something else? From wikianswers we have:\n\n `Sand    2.80\nCopper  8.63\nSilver  10.40\nGold    19.30\nMarble  2.56`\n\nHow about marble and silver?\n\n[UPDATE: Almost two years later, I find a silly error in this post. You'd need two cones, or put the one out at 2x the distance on the lever. Why didn't someone tell me?]\n\n### Volume of a cone: simple calculus\n\nI've been brushing up on my calculus skills, though it can be challenging for an old fart like me. (In one ear and out the other). I saw a book online (Strang's Calculus), which I like because it is succinct, and also the author has a very interesting, idiosyncratic style. I liked it so much I got a hard copy. Now that I've studied a bit, I realize that the book is more than succinct, it's dense, filled with math, and contains a lot more than just calculus. It's a great book.\n\nIn Chapter 8, Applications of the Integral, we encounter Example 11, to find the volume of a cone.
I've redrawn the diagram from the book, below. The idea is to move vertically from the top to the bottom, letting x be equal to the radius at each point. At each value for x, we draw a shell (the outside surface of a cylinder) as shown in red. As we move from top to bottom, we accumulate a collection of these cylinders whose volumes are summed to get the volume of the cone.\n\nThe key is to recognize that the distance from the top of the cone to the top of each cylinder has the same relationship to x as b does to r (e.g. when x = r, then this distance = b). The smallest and largest rectangles in the figure are similar.\n\nThe height of the cylinder plus this distance equals b.", null, "So the height of each cylinder is\n\n `h = b-bx/r`\n\nFrom this point it's easy. The area of the cross-section of each shell is its diameter (2*x), times &pi times the small width dx; the volume is this area times the height. I'm going to leave out the multiplication symbol I usually use (*) for clarity:\n\n ` = 2πxhdx = 2πx(b-bx/r)dx`\n\nWe sum (integrate) all these cylinders:\n\n ` = ∫2πx(b-bx/r) dx = ∫2πxb dx - ∫2πxb(x/r) dx = πx2b - (2/3)πx3b/r`\n\nevaluated between x = 0 and x = r:\n\n ` = πr2b - (2/3)πr2b = (1/3)πr2b`\n\n### Volume of a cone: geometric method\n\nWe're on the trail of Archimedes. Last time we found a formula for the sum of the squares of the integers from 1 to n:\n\nS = n*(n+1)*(2n+1)/6\n\nWe're going to use that in a derivation of the formula for the volume of a cone. To start with, we slice the cone horizontally. 
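Before carrying out the limit, the plan can be tested numerically: build the stack of cylinders for a large N and compare the total to πR²H/3, and check the sum-of-squares formula from the last post while we are at it. A sketch in Python (my addition):

```python
import math

def sum_sq(n):
    # formula from the last post: 1^2 + 2^2 + ... + n^2
    return n * (n + 1) * (2 * n + 1) // 6

def cone_volume(R, H, N=100000):
    # N stacked cylinders: radius k*R/N, height H/N, for k = 0 .. N-1
    h = H / N
    return sum(math.pi * (k * R / N) ** 2 * h for k in range(N))

print(sum_sq(8))               # 204
print(cone_volume(1.0, 1.0))   # close to pi/3, about 1.0472
```

The cylinder total undershoots by a factor of roughly (1 - 3/2N), so it converges to the exact answer as N grows.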
In the limit as the number of slices gets very large and each individual slice gets very thin, the triangular shaped wedge pieces and the little bit at the top won't contribute significantly to the volume.\n\nHere is the diagram from the post:\n\n ` + /|\\ / |h\\ +--+--+ /| |h |\\ / | |r1| \\ +--+--+--+--+ /| |h |\\ / | | r2 | \\ +--+-----+-----+--+ /| |h |\\ / | | r3 | \\ +--+--------+--------+--+ /| |h |\\ / | | r4 | \\ +--+-----------+-----------+--+ R`\n\nAs Dr. Peterson explains\n\nIf the cone has base radius R and height H, and we've cut it into N slices (including that empty slice at the top, with radius r0 = 0), then each cylinder will have height h = H/N, and radius r[k] = kR/N, where k is the number of the cylinder, starting with 0 at the top and ending with N-1 for the bottom cylinder.\n\nThe only thing that's tricky about this is the numbering. The triangle at the top will be ignored. This includes the cylinder numbered k = 0 with radius = kR/N = 0. The next section below it contains a cylinder with h for its height and r1 for its radius. Since we started counting at 0, the last of the N segments has k = N-1.\n\nThe volume of each individual cylinder will be:\n\n ` = π * h * rk2 = π * H/N * (kR/N)2 = π * R2 * H * k2 / N3`\n\nThe total volume will be the sum of these, for all k from 0 to N-1; since only k is different from one cylinder to the next, we can factor everything else out from the sum and get:\n\n `V = [π * R2 * H / N3 ] * Sum(k2)Sum(k2) = 0 + 1 + 4 + ... + (N-1)2`\n\nUse our formula from last time:\n\n `Sum(k2) = (N-1)*(N)*(2N-1) / 6`\n\nDivide by N3:\n\n `Sum(k2) = (1-1/N)*(1)*(2-1/N) / 6`\n\nAs N gets very large this converges to 2/6 = 1/3, so we have finally:\n\n `V = π*R2*H / 3`\n\nVery nice.\n\n## Tuesday, August 25, 2009\n\n### Sum of squares\n\nI got interested in learning more about Archimedes. In particular, I was amazed by his derivation of the formula for the volume of a sphere. 
Actually he was pretty proud of it himself---according to legend and confirmed by Cicero's testimony his gravestone "was surmounted by a sphere and a cylinder."\n\nSo that's where I'm headed, but in order to get there, we need to start with a simpler problem. What is the sum of all the values n2 for n = 1 to n = k? The squares and the running sum go like this:\n\n `1    1\n4    5\n9    14\n16   30\n25   55\n36   91\n49   140\n64   204`\n\nIt is hard to see the pattern, so let's look up the answer; the formula is:\n\nsum of squares = k * (k+1) * (2k+1) / 6\n\nThere is a neat proof by induction in the post. We assume that the formula is correct for n = k and then prove that if so, it is also true for n = k+1. We must also prove that it works for the "base case", n = 1, but that is self-evident.\n\nWe have:\n\n `n = k\n Σ  n2 = k * (k+1) * (2k+1) / 6\nn = 1`\n\nTo find the sum of squares for k+1, we add (k+1)2 to both sides. Here is the right-hand expression:\n\n `k * (k+1) * (2k+1) / 6 + (k+1)(k+1)`\n\nFactor out (k+1):\n\n `(k+1) * [ k * (2k+1) / 6 + (k+1) ]\n(k+1) * [ k * (2k+1) + 6k + 6 ] / 6\n(k+1) * [ 2k2 + 7k + 6 ] / 6`\n\nThe term in the brackets can be factored to give:\n\n `(k+2) * (2k+3)`\n\nwhich can be rearranged to\n\n `[(k+1) + 1] * [2*(k+1) + 1]`\n\nSo we have, finally:\n\n `(k+1) * [(k+1) + 1] * [2*(k+1) + 1] / 6`\n\nwhich is what we wanted to prove.\n\nThere is an even better proof in the link which uses a "telescoping sum."\n\n## Monday, August 24, 2009\n\n### Duly quoted\n\nDo you have ideas in your head that have been there forever, and then when you really need to know, where did it come from, you can't find the source? That's the way I feel about the subject of this post. I searched for the source of one such idea, but found only this:\n\n"Life is an endless series of experiments."\n\n---Mohandas K Gandhi (link)\n\nWhile this sounds nice, of course it is fundamentally wrong.
As one of my early mentors, Bart Sefton, often told me:\n\n"Every good experiment comes with a control."\n\nAnd the quote I was searching for, which I have always attributed to Milan Kundera, goes like this:\n\n"That's the trouble with life, you never do the control."\n\nI thought it was in The Unbearable Lightness of Being (link), but I can't find it.\n\n## Saturday, August 22, 2009\n\n### The fly and the train, contd.\n\nThinking about the famous story involving von Neumann and the infinite series, I guessed that if we have a series where each succeeding term xn+1 is produced from the preceding term xn by multiplying by the factor 1/r, then the sum of all the terms xn+1 + xn+2 ... is equal to xn * 1/(r-1).\n\nSo, I ran a Python simulation and it turns out to be correct!\n\n `def test(r):\n    n = 100    # works for other n as well\n    rL = [n]\n    f = 1.0/r\n    for i in range(10):\n        rL.append(rL[-1]*f)\n    print str(r).ljust(3),\n    print round(n*1.0/sum(rL[1:]),2)\n\nfor r in range(2,10):\n    test(r)`\n\n `2   1.0\n3   2.0\n4   3.0\n5   4.0\n6   5.0\n7   6.0\n8   7.0\n9   8.0`\n\nIf I'm reading the article correctly, this turns out to be a simple result from the theory of geometric series, and it looks like it works even for fractional r. Modifying the code above to test fractional r shows this is true. Department of "learn something new every day"....\n\nMy guess is that von Neumann saw all three pieces instantly:\n• the first term and step size of the series\n• the formula for the terms after the first one\n• the easy way\n\n### von Neumann\n\nI'm reading a biography of Johnny von Neumann by Norman Macrae. Or maybe I should say the biography, it's not like there are many choices for biographies of this great mathematician and early computer scientist. The book is not horrible and it's the only game in town, so I am reading it.
It's based on material collected by Stephen White---who was somehow involved with Stan Ulam in an earlier version of the project.\n\nThe basic problem with the book is that Macrae is a whack job. To quote Mark Yasuda's review at Amazon:\nBy far the biggest problem, however, comes from MacRae's approach to the book - he insists upon inserting so much of his own world views and dogma into the body of the book, that we no longer have a biography on Von Neumann - we have Von Neumann's life used as a vehicle for MacRae's own personal views on education, politics, the Japanese economy of the 1960's through the 1980's (I never expected to see this in a Von Neumann biography), and cold war history. He takes time out to provide slanted views of Bertrand Russell and Norbert Wiener, for no reason (they barely figure in the book beyond his distorted descriptions of them) other than to insinuate that their liberal viewpoints are due to poor parenting. In sum, the book's most fatal flaw is that there's entirely too much of MacRae, and not enough Von Neumann.\n\nLeaving all that aside, the book contains yet another retelling of the famous story about von Neumann and his approach to this problem:\nTwo trains (or bicycles) are 20 miles apart and headed directly toward each other at a speed of 10 mph. A fly starts from the front of the train on the left, flies at a speed of 15 mph to the tip of the train on the right, then turns around immediately and flies back to the first. This cycle continues until the trains meet, ending everything. How far does the fly fly?\n\nThe story is that when posed this problem, von Neumann thought for a second and gave the answer (below). When asked how he did it, he answered \"I summed the infinite series, of course.\"\n\nThe easy way to do the calculation is to focus on the trains:\n\n1. The trains will meet in one hour.\n2. 
The fly will fly 15 miles in that hour.\n\nWhat was interesting to me is that, when you think about the series method, it is not that hard, and converges rapidly.\n\nIn the first cycle the fly and its target train start with a separation of 20 miles. The fly's trajectory covers the initial distance times the ratio of its velocity to the total velocity of approach (15/(10+15)):\n\nd1 = 20 miles * 0.6 = 12\n\nHowever, each of the trains also moves 12 miles * 10/15 = 8 miles during the same period, so the new distance for the second cycle is 20 - 2*8 = 4, or one-fifth of the original. If you follow this out you can see that the series we need to sum has its first term equal to 12 and each succeeding term is 1/5 of the preceding one. The first four terms sum to:\n\n12 + 2.4 + 0.48 + 0.096 = 14.976\n\nI think Johnny guessed at this point.\n\nCome to think of it, there is probably a theorem about infinite series that says if we add terms like this, the sum of all the terms obtained from 12 by multiplying by 1/5, that will be equal to 12 * 1/4. Anybody know?\n\n[UPDATE: Not sure if this was laziness (I was in a hurry) or being incredibly dumb. See wikipedia. This is the geometric series with r = 1/5 and a = 12. The sum is a times 1/(1-r) = 12*5/4 = 15. ]\n\n## Thursday, August 20, 2009\n\n### Duly Quoted\n\n\"Perhaps the worst plight of a vessel is to be caught in a gale on a lee shore. In this connection the following...rules should be observed:\n1. Never allow your vessel to be found in such a predicament...\"\n\nCallingham, Seamanship: Jottings for the Young Sailor\nas quoted in:\nLen Deighton, Horse Under Water\n\n## Tuesday, August 18, 2009\n\n### Duly Quoted\n\nFrom Ezra Klein:\n\nLove is what we call it when our neuroses are compatible.\n\n(link)\n\n### Gamma distribution", null, "I posted previously about the beta distribution here, here and here. We ran into it because it is the conjugate prior for Bayesian analysis of problems in binomial proportion. 
That's because the likelihood function is in the same form.\n\nThe beta distribution is a continuous probability distribution with two shape parameters α and β (or a and b). If we consider p as the random variable (and q = 1-p), then the unnormalized distribution is just:\n\n `f(p) = pa-1 qb-1`\n\nThe function has symmetry: if we switch a and b, and also p and q, everything would look the same. Varying the values for a and b can yield a wide variety of shapes. For large a and b, the plot approaches a normal distribution with mean = a / (a+b).\n\nThe need to normalize the beta distribution brings us first to the gamma function, Γ. It can be viewed in a simple way but also can be fairly complex. The simple version is that, for positive integers, Γ(x) is just (x-1)!.\n\nThe normalizing constant for the beta distribution with parameters a and b is:\n\n `Γ(a+b) / (Γ(a) * Γ(b))`\n\nThe gamma distribution, which is built from the same function, is frequently used in phylogenetic models, for example, to model the distribution of variability during evolutionary time among different positions in a protein. The gamma distribution also has two parameters but these apparently have quite different roles.\n\nThey are called the shape parameter (k) and the scale parameter (θ). More generally, the unnormalized gamma distribution is:\n\n `f(x) = xk-1 exp { -x / θ }`\n\nWe can get an idea of the roles of the two variables by considering this:\n\n `mean = kθ\nvariance = kθ2`\n\nLet's look at some plots obtained using different values for the two parameters.
In the series below, k goes from 1 to 4, and then for each plot theta varies from 0.25, 0.5, 1, 2, 3 as the color goes from blue to green to magenta to red to maroon.\n\n `N = 6\nthetaL = c(0.25,0.5,1,2,3)\ncolor.list = c('blue','darkgreen',\n  'magenta','red','darkred')\nk = 1    # rerun with k = 2, 3, 4\nplot(1:N,ylim=c(0,0.8),type='n')\nfor (i in 1:5) {\n    curve(dgamma(x,\n      shape=k,scale=thetaL[i]),\n      from=0,to=N,lwd=3,\n      col=color.list[i],add=T)\n}`\n\n## Sunday, August 16, 2009\n\n### Geometric distribution\n\nAccording to wikipedia, the geometric distribution is the probability distribution which describes the number of Bernoulli trials required to obtain a single success.\n\nSo, in the simple case of a fair coin (p = 1/2):\n\n `P(X=1) = 1/2\nP(X=2) = 1/4  (1/2 times 1/2)\nP(X=3) = 1/8  (1/2 times 1/4)`\n\nAccording to mathworld, the geometric distribution is the only discrete memoryless random distribution, and is a discrete analog of the exponential distribution.\n\nThe memoryless property can be seen easily if we take the geometric series:\n\n `1/2 + 1/4 + 1/8 ...`\n\nIf we have already obtained a failure on the first trial, then we remove the first term (corresponding to success on the first trial) and then normalize by dividing by the sum of all remaining terms (1 - 1/2):\n\n ` = 2/4 + 2/8 + 2/16 ...\n = 1/2 + 1/4 + 1/8 ...`\n\nThe normalization is needed because we require the sum of all the terms to add up to 1 for a proper probability distribution. In general, for a process with probability of success p and failure q = 1 - p:\n\n `P(X=k) = qk-1 * p`\n\nAccording to wikipedia, the mean is 1/p and the variance is q/p2.
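Those two moment formulas are easy to confirm by simulation. A quick sketch (Python, my addition; the seed and sample size are arbitrary):

```python
import random

random.seed(7)
p = 0.5
n = 100000

def geom_trial():
    # count Bernoulli(p) trials up to and including the first success
    k = 1
    while random.random() >= p:
        k += 1
    return k

xs = [geom_trial() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)   # both should be close to 2, i.e. 1/p and q/p**2
```

With p = 1/2 the two values coincide (1/p = q/p² = 2), so the sample mean and variance should land close together.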
We can compare that to the exponential distribution with a pdf of:\n\n `λe-λx`\n\nand mean = 1/λ and variance = 1/λ2.\n\nIt is not clear to me at present why the expressions for the variance don't match up.\n\nR code:\n\n `x = numeric(7)\nx[1] = 1\nfor (i in 2:length(x)) {\n    x[i] = x[i-1]/2\n}\nplot(x,type='s',\n  ylim=c(0,1.0),\n  col='red',lwd=3)`\n\n## Saturday, August 15, 2009\n\n### Normally squared\n\nAccording to wikipedia, the chi-square distribution (with df = k) arises as the sum of the squares of k independent, normally distributed random variables with mean 0 and variance 1. For example, in a bivariate distribution where each variable is normally distributed, the square of the distance from the origin should be distributed in this way with df = 2.\n\n `u = rnorm(10000)\nv = rnorm(10000)\nw = u**2 + v**2\nB = seq(0,max(w)+1,by=1)\nhist(w,freq=F,ylim=c(0,0.5),\n  breaks=B,col='cyan')\ncurve(dchisq(x,df=2),\n  lwd=5,col='red',\n  from=0,to=10,add=T)`\n\nI found that I could also get something close to the chi-squared distribution by multiplying two random variables (with mean not equal to 0) together, and in this case the df that fits is the product of the means (here 3 * 3 = 9, which is also the mean of the product). The product of two normals is not exactly chi-squared, though, so this is an empirical, approximate match.\n\n `u = rnorm(10000,3)\nv = rnorm(10000,3)\nw = u*v\nB = seq(-5,max(w)+1,by=1)\nhist(w,freq=F,ylim=c(0,0.18),\n  breaks=B,col='cyan')\ncurve(dchisq(x,df=9),\n  lwd=5,col='red',\n  from=0,to=30,add=T)`\n\nLet's explore how to use R to solve the problem of the grade distribution from the last post. We have males and females with grades A, B, C, D in order, stored in a matrix M:\n\n `m = c(56,60,43,8)\nf = c(37,63,47,5)\nM = t(rbind(m,f))`\n\n `> M\n      m  f\n[1,] 56 37\n[2,] 60 63\n[3,] 43 47\n[4,]  8  5`\n\n `r = chisq.test(M)`\n\n `> r\n        Pearson's Chi-squared test\ndata:  M\nX-squared = 4.1288, df = 3, p-value = 0.2479`\n\nThis matches the value given in Grinstead & Snell.
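The same statistic can also be computed from scratch in a few lines. A sketch (Python, my addition), reproducing the X-squared value from the R session:

```python
m = [56, 60, 43, 8]    # males,   grades A, B, C, D
f = [37, 63, 47, 5]    # females
N = sum(m) + sum(f)

chisq = 0.0
for o_m, o_f in zip(m, f):
    row = o_m + o_f
    e_m = row * sum(m) / N    # expected count under independence
    e_f = row * sum(f) / N
    chisq += (o_m - e_m) ** 2 / e_m + (o_f - e_f) ** 2 / e_f

print(round(chisq, 4))   # 4.1288, matching chisq.test
```

Each term of the loop is one (O - E)²/E contribution, summed over both sexes for each grade.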
We can explore contributions of the individual categories to the statistic as follows.\n\n `E = r$expected\nO = r$observed\n(O-E)**2/E`\n\nWe see that the disparity in A's was certainly higher than for the other categories, but the p-value (above) is not significant.\n\n `> (O-E)**2/E\n             m         f\n[1,] 1.0985994 1.2070139\n[2,] 0.2995463 0.3291068\n[3,] 0.3595670 0.3950506\n[4,] 0.2096039 0.2302885`\n\nWe see that the 95th percentile of the chi-squared distribution for df=3 is just a bit larger than 7.8:\n\n `> S = seq(7,8,by=0.1)\n> pchisq(S,df=3)\n0.9281022 0.9312222 0.9342109 0.9370738\n0.9398157 0.9424415 0.9449561 0.9473637\n0.9496689 0.9518757 0.9539883`\n\nWe can do a Monte Carlo simulation (I actually do not know yet how the preceding function works, but the simulation should be just like what we did yesterday):\n\n `r = chisq.test(M,simulate.p.value=T)`\n\n `> r\n        Pearson's Chi-squared test with\n        simulated p-value (based on\n        2000 replicates)\ndata:  M\nX-squared = 4.1288, df = NA, p-value = 0.2414`\n\nAnd Fisher's exact test (also on my study list for the future):\n\n `r = fisher.test(M)`\n\n `> r\n        Fisher's Exact Test for Count Data\ndata:  M\np-value = 0.2479\nalternative hypothesis: two.sided`\n\n## Friday, August 14, 2009\n\n### Grade disparity - chi-squared analysis\n\nI recently posted about two common statistical problems: the first involves making an estimate of the population mean when the sample size is small, involving the t statistic of "Student", as well as the related problem of deciding whether the means for two small samples are different, including the example of paired samples. These arise commonly in biology. The second problem is deciding whether two sets of paired values are independent, or instead whether given the value for x, we can predict y.
This problem involves covariance and the simplest approach is linear regression.\n\nThe third basic problem in statistics for which I want to improve my understanding is that of the independence of two discrete distributions. This will lead us to the chi-squared density as a special case of the gamma distribution.\n\nThere is a very nice discussion of this in Grinstead & Snell (pdf). Here is their table which gives a grade distribution, comparing the results for females and males in a particular class.", null, "If the distributions are independent (e.g. P(grade = A) = P(grade = A | sex = female), then we can predict the most likely distribution, but also expect that there will be some spread in the observed breakdown by grade and sex due to random sampling. How much deviation should we expect?", null, "We can model the problem using colored balls. For example, we put 152 pink balls and 167 baby blue balls in to an urn, mix well, and then draw out in succession 93 'A's, 123 'B's, 90 'C's and 13 'D's.\n\nWe calculate for each category:\n\n(O - E)2/ E\n\nwhere O is the observed value and E the expected value based on the assumption of independence. We sum the value over all categories to obtain the statistic. According to theory, this statistic has a chi-squared (Χ2) distribution with degrees of freedom df = 3. (For this problem df is the number of grades - 1 * the number of sexes - 1).\n\nIf the calculated value for the statistic exceeds 95 % of the values in the distribution for this df, we reject the null hypothesis that the two probability distributions are independent. In other words, we suspect that males and females have different grade distributions for this course.\n\nFor this data, the value of Χ2 that is exceeded 5 % of the time is 7.8, so the calculated value of 4.13 does not allow rejection.\n\nHere is code for a Python simulation (R code to plot is at the end). 
I redirect the output to a file like so:

```
python grades.py > results.txt
```

```
import random

rL = list() # for statistic values
F = 152; M = 167
N = M + F
f = F*1.0/N; m = M*1.0/N
v = False
grades = [93,123,90,13]
# expected values:
EF = [f*g for g in grades]
EM = [m*g for g in grades]

def test():
    print 'EF',
    for e in EF:  print round(e,2),
    print
    print 'EM',
    for e in EM:  print round(e,2),
    print

R = 10000 # number of trials
for i in range(R):
    if v:  test()
    chisq = 0
    pL = ['F']*F + ['M']*M
    random.shuffle(pL)
    mL = list()
    fL = list()
    for j in range(4): # grades A-D
        n = grades[j]  # how many 'A's...
        fL.append(pL[:n].count('F'))
        mL.append(pL[:n].count('M'))
        pL = pL[n:]
    if v:  print 'OF', ' '.join([str(e).rjust(2) for e in fL])
    if v:  print 'OM', ' '.join([str(e).rjust(2) for e in mL])
    for j in range(4):
        chisq += (fL[j]-EF[j])**2/EF[j]
        chisq += (mL[j]-EM[j])**2/EM[j]
    print round(chisq,2)
    if v:  print
```

```
setwd('Desktop')
v = read.table('results.txt',head=F)
B = seq(0,20,by=0.25)
hist(v[,1],freq=F,breaks=B)
curve(dchisq(x,3),from=0,to=20,add=T,col='blue',lwd=3)
```

### Regression corrected

If you read my post about regression from yesterday, you may have noticed that it has a serious problem with the way the errors were generated. What I did was this:

```
set.seed(157)
x = runif(10)
e = x
for (i in 1:length(x)) {
    e[i] = rnorm(1,mean=x[i],sd=0.3) }
y = 2.4*x + e
plot(y~x,pch=16,col='blue',cex=1.8)
```

If we look at the errors, we see that they are dependent on the value of x!
Naturally (because of mean=x[i]).

```
plot(x,e,pch=16,col='blue',cex=2)
```

What I should have done is something like this:

```
set.seed(1357)
e = rnorm(10)/3
y = 2.4*x + e
plot(x,e,pch=16,col='magenta',cex=2)
```

```
plot(x,y,pch=16,col='darkred',cex=2)
```

Nevertheless, I hope the essential points are clear:

- covariance is related to variance: cov(x,x) = var(x)
- correlation is the covariance of z-scores
- the slope of the regression line is: cov(x,y) / var(x)
- the regression line goes through the point (mean(x), mean(y))
- r = cov(x,y) / sqrt(var(x)*var(y))
- the call to plot the line is abline(lm(y~x))

I originally wrote the relation for r as a proportionality because it didn't seem to match R's output. In fact the identity is exact; the number in R's summary is Multiple R-squared, which is r squared: 0.9515839^2 = 0.9055.

With the change, we do a little better on guessing the slope:

```
> lm(y~x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
     0.2625       2.0039
```

## Thursday, August 13, 2009

### Limited power

I first came across Andrew Gelman's name because he co-wrote a book called 'Red State, Blue State'. I'm interested in politics and follow a few bloggers I think are smart---funny that they all turn out to be "progressives." What are the chances?

I haven't read that book, though I do have a book of his with ideas for teaching statistics, and I aspire to read his book on Bayesian analysis. He also has a blog.

Anyway, somewhere I came across an article of his on the web (pdf). It's an analysis prompted by the publication of a set of articles in the Journal of Theoretical Biology with titles such as "Beautiful Parents Have More Daughters," which are (as you might guess by this point) fatally flawed. It's a great read.

### Basic regression

Continuing with my education in statistics, I'm reading Dalgaard's book, Introductory Statistics with R.
The topic of this post is linear regression by least squares.

We're going to model errors in the data as follows:

```
set.seed(157)
x = runif(10)
e = x
for (i in 1:length(x)) {
    e[i] = rnorm(1,mean=x[i],sd=0.3) }
y = 2.4*x + e
plot(y~x,pch=16,col='blue',cex=1.8)
```

[UPDATE: this is not the correct way to model the errors. See here.]

```
dx = x-mean(x)
dy = y-mean(y)
n = length(x)
sum(dx**2)/(n-1)
var(x)
```

The variance of x is computed in the usual way:

```
> sum(dx**2)/(n-1)
0.1182592
> var(x)
0.1182592
```

The covariance of x and y is defined in such a way that the variance of x is equal to the covariance of x with itself. That makes it easy to remember!

```
sum(dx*dy)/(n-1)
cov(x,y)
```

```
> sum(dx*dy)/(n-1)
0.4309724
> cov(x,y)
0.4309724
```

Correlation is just like covariance, only it is computed using z-scores (normalized data).

```
zx = (x-mean(x))/sd(x)
zy = (y-mean(y))/sd(y)
cov(zx,zy)
cor(x,y)
```

```
> cov(zx,zy)
0.9515839
> cor(x,y)
0.9515839
```

As discussed in Dalgaard (Ch. 5), the estimated slope is:

β = cov(x,y) / var(x)

```
> cov(x,y) / var(x)
3.644302
```

while the intercept is:

α = mean(y) − β*mean(x)

The linear regression line goes through the point (mean(x), mean(y)).

Let R do the work:

```
> lm(y~x)

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)            x
   -0.04804      3.64430
```

The estimated slope is 3.64, while the true value is 2.4. I thought the problem was the last point, but it goes deeper than that.

```
> # doing this by hand
> i = 7 # index of max y value
> lm(y[-i]~x[-i])

Call:
lm(formula = y[-i] ~ x[-i])

Coefficients:
(Intercept)        x[-i]
    -0.1053       3.5803
```

Much more information is available using the "extractor function" summary:

```
> summary(xy.model)

Call:
lm(formula = y ~ x)

Residuals:
      Min        1Q    Median        3Q       Max
-0.456059 -0.369435 -0.006057  0.239960  0.814548

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.04804    0.28819  -0.167    0.872
x            3.64430    0.41621   8.756 2.27e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.4294 on 8 degrees of freedom
Multiple R-squared: 0.9055,  Adjusted R-squared: 0.8937
F-statistic: 76.67 on 1 and 8 DF,  p-value: 2.267e-05
```

The correlation coefficient is

r = sum(dx*dy) / sqrt( sum(dx**2) * sum(dy**2) )

That is

r = cov(x,y) / sqrt(var(x)*var(y))

```
> cov(x,y) / sqrt(var(x)*var(y))
0.9515839
```

This doesn't match the output above because summary reports Multiple R-squared, which is the square of r: 0.9515839^2 = 0.9055.

To get the plot shown at the top of the post we use another extractor function on the linear model:

```
xy.model = lm(y~x)
w = fitted(xy.model)
```

w contains the predicted values of y for each x according to the model.

```
plot(y~x,pch=16,col='blue',cex=1.8)
xy.model = lm(y~x)
abline(xy.model)
segments(x,w,x,y,col='blue',lty=2)
```

We can do an even fancier plot, with confidence bands, as follows:

```
df = data.frame(x)
pp = predict(xy.model,int='p',newdata=df)
pc = predict(xy.model,int='c',newdata=df)
plot(y~x,pch=16,col='blue',cex=1.8)
matlines(x,pc,lty=2,col='black')
matlines(x,pp,lty=3,col='black')
```

This is promising but it is obviously going to take more work.

### Core statistics for bioinformatics

I found a nice overview of statistics on the internet. The author's name is Woon Wei Lee. The course page is here, and here is a link to the pdf.
There are folders for other lectures which likely contain interesting stuff.

### Student's t test 3

The paired t test is used when two sets of values are related, for example because each of a pair of measurements was made on the same subject.

In this case, it is the mean of the difference between the two values that is distributed according to the t distribution.

This example is from Dalgaard.

```
pre = c(5260,5470,5640,6180,6390,6515,6805,7515,7515,8230,8770)
post = c(3910,4220,3885,5160,5645,4680,5265,5975,6790,6900,7335)
plot(pre,post,pch=16,col='blue',cex=2)
diff = post-pre
```

Not only are the values correlated, but the difference is always negative:

```
> diff
-1350 -1250 -1755 -1020  -745 -1835 -1540 -1540  -725 -1330 -1435
```

```
t.test(pre,post,paired=T)
```

```
> t.test(pre,post,paired=T)

	Paired t-test

data:  pre and post
t = 11.9414, df = 10, p-value = 3.059e-07
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 1074.072 1566.838
sample estimates:
mean of the differences
               1320.455
```

We can do the test by hand, as follows. Note that the standard error divides by sqrt(n) with n = 11 (the number of pairs), not by sqrt(df):

```
> mean(diff)
-1320.455
> sd(diff)
366.7455
> x = sd(diff)/sqrt(11)
> x
110.5779
> abs(mean(diff))/x
11.9414
```

The question now is, what fraction of the values from the t distribution with df = 10 are greater than 11.94?

```
S = seq(0,1,by=0.001)
w = rt(1000000,df=10)
y = quantile(w,S)
round(tail(y))
```

```
> round(tail(y))
 99.5%  99.6%  99.7%  99.8%  99.9% 100.0%
     3      3      3      4      4     11
```

The short answer: not very many!
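The same by-hand calculation can be cross-checked in Python with nothing but the standard library; it reproduces the t = 11.9414 printed by t.test:

```python
import math

pre  = [5260, 5470, 5640, 6180, 6390, 6515, 6805, 7515, 7515, 8230, 8770]
post = [3910, 4220, 3885, 5160, 5645, 4680, 5265, 5975, 6790, 6900, 7335]
d = [b - a for a, b in zip(pre, post)]

n = len(d)                                   # 11 pairs -> df = n - 1 = 10
mean = sum(d) / n
sd = math.sqrt(sum((x - mean)**2 for x in d) / (n - 1))
t = abs(mean) / (sd / math.sqrt(n))          # standard error uses sqrt(n), not sqrt(df)
print(round(mean, 3), round(sd, 4), round(t, 4))   # -1320.455 366.7455 11.9414
```

The paired test is just a one-sample t test on the differences, which is why only mean(d), sd(d), and n appear in the formula.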
# Newton's first law of motion

Noun: Newton's first law of motion: a body remains at rest or in motion with a constant velocity unless acted upon by an external force. It is one of the three basic laws of classical mechanics (see also: law of motion, Newton's law of motion). Based on WordNet 3.0, Farlex Inc.

References in periodicals archive:

'If you search the text of Newton's first law of motion you will find million hits; does that mean Newton was guilty of plagiarism?' CIIT's rector told The Express Tribune.

According to Newton's First Law of Motion, an object moving in space tends to continue in the same direction at a constant velocity in the absence of an unbalanced force.

When your plane is stationary, it's a good time to consider Isaac Newton's First Law of Motion, regarding inertia.

Newton's First Law of Motion: Every object in a state of uniform motion (or at rest) tends to remain in that state of motion (or at rest) unless an external force is applied to it.

The group has met weekly, covering a variety of STEM-focused topics such as the Scientific Method, Bernoulli's Principle, Pascal's Law, Newton's first law of motion and more.

That's because each time the sled stops, a player has to contend with Newton's First Law of Motion, which states that an object at rest has the tendency to remain at rest.

NEWTON'S FIRST LAW OF MOTION, also known as the principle of inertia, says "Every body perseveres in its state of being at rest or of moving uniformly straight forward, except insofar as it is compelled to change its state by forces impressed." (1) I will argue that inertia is an inherent principle and that inertia and Newton's First Law are in this way natural in the Aristotelian sense.

Participants learned that an actor's
movements onstage were more effective when they followed Newton's First Law of Motion (inertia and momentum: a body will either remain at rest or in motion unless acted upon by an outside force).

Sir Isaac Newton's First Law of Motion states that an object at rest tends to stay at rest and that an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force.

ASK any paratrooper how softly he lands while parachuting and he'll quickly explain Sir Isaac Newton's First Law of Motion: "An object in motion will remain in motion until an external force is applied." In other words, something has to stop the movement.

Newton's First Law of Motion states that unless acted upon by an external force, a body at rest will remain at rest and a body in motion will remain in motion.

Newton's first law of motion states, "Every object in a state of uniform motion tends to remain in that state of motion, unless an external force is applied to it." Folks, your car is not equipped with an ejection seat to get you out a nanosecond before a crash.
# A Modified STAP Estimator for Superresolution of Multiple Signals

Zhongbao Wang, Junhao Xie, Zilong Ma, and Taifan Quan. Department of Electronic Engineering, Harbin Institute of Technology, Harbin 150001, China. Academic Editor: Ulrich Nickel. Research Article, International Journal of Antennas and Propagation, vol. 2013, article ID 837639, doi:10.1155/2013/837639. Received 22 January 2013; accepted 8 April 2013. Copyright © 2013 Zhongbao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A modified space-time adaptive processing (STAP) estimator is described in this paper. The estimator combines the incremental multiparameter (IMP) algorithm and the existing beam-space preprocessing techniques, yielding a computationally cheap algorithm for the superresolution of multiple signals. It is a potential technique for the remote sensing of ocean currents from the broadened first-order Bragg sea echo spectrum of shipborne high-frequency surface wave radar (HFSWR). Some simulation results and real-data analysis are shown to validate the proposed algorithm.

1. Introduction

The measurement of near-surface currents is a very difficult task using conventional methods, especially under harsh sea conditions. Many advanced marine measurement instruments, such as drifting buoys and acoustic current meters, have been used to collect sea state information. However, collecting and interpreting data from these devices would be very expensive, given the sparse spatial sampling they provide.
Therefore, it is virtually impossible to form timely and accurate current maps over a widespread ocean surface with these conventional meters.

In recent years, HFSWR has become a powerful remote-sensing tool that receives increasing attention from oceanographers and research groups for its ability to determine the large-scale sea state under all weather conditions [3, 4]. The measurement principle of HFSWR rests mainly on Bragg resonant scattering and the Doppler effect in the radar's sea echo spectrum. In the absence of ocean currents, the first-order Bragg lines appear symmetrically above and below the zero Doppler frequency; they are caused by the ocean waves of precisely one-half the radar wavelength moving towards and away from the radar. In practical application, the first-order Bragg lines are displaced, since near-surface currents are always present on the ocean.

As an extension of shore-based HFSWR, shipborne HFSWR not only inherits all the advantages of shore-based HFSWR but also offers some outstanding features, such as flexibility and mobility. However, new problems emerge when the radar is on board a moving ship. One of the worst is that the first-order Bragg lines are broadened into two pass bands (when the ship's speed is slow) or a single pass band (when the ship is sailing at high speed) in the first-order Bragg sea echo spectrum of shipborne HFSWR. In these cases, it is hard to determine the ocean currents from the broadened sea echo spectrum. Furthermore, the moving ship imparts different Doppler shifts to sea echoes from different azimuths.
Thus, there exists a certain space-time coupling relation in the received sea echo spectrum of shipborne HFSWR.

The space-time IMP algorithm was proposed by Clarke and Spence; based on the one-dimensional IMP algorithm, it was modified to detect and estimate multiple signals within a conventional beamwidth and (or) Doppler resolution bin. Although the space-time IMP estimator effectively improves the robustness of the detection and estimation of multiple signals, the computational load of its two-dimensional search process is too heavy for real-time application. Later, Chadwick used an eigendecomposition method instead of the full-search process to reduce the heavy computational burden in his polarisation-sensitive IMP algorithm. However, for surface wave radar, this method becomes invalid, since the returns of HFSWR are mostly vertically polarized components.

Many researchers, such as Shaw and Wilkins, used beam-space preprocessing techniques to reduce the computational load and improve the robustness of high-resolution DOA (direction-of-arrival) estimation algorithms. Recently, Hassanien et al. [10, 11] proposed a new beam-space preprocessing concept that suppresses out-of-sector interferences using the updated array data. This technique was proven to be more robust than the aforementioned beam-space methods.

In this paper, we combine the space-time IMP algorithm and the adaptive beam-space preprocessing technique, yielding a computationally cheap space-time adaptive estimator for the detection, estimation, and superresolution of multiple signals. The proposed algorithm is validated by simulation results as well as experimental examples.

This paper is organized as follows. First we introduce the signal model. In the following section, the proposed algorithm and some simulation results are presented.
The measurement of the near-surface currents of the ocean by shipborne HFSWR and some real-data analysis are presented in Section 4. The final section concludes the study.

2. Signal Model

Consider a uniform linear array (ULA) of M omnidirectional antennas with antenna spacing d, receiving N far-field sources with different relative delays and attenuations. The received data X are given by

    X = A Λ B^H + N,                                                    (1)

where

    X = [x_1 x_2 ... x_M]^T,   x_i = [x_i(1) x_i(2) ... x_i(L)],       (2)

x_i(l), i = 1, ..., M, l = 1, ..., L, denotes the data received at the ith sensor at the lth sampling time, [·]^T denotes the transpose operation, A = [a(φ_1) a(φ_2) ... a(φ_N)]^T is the array manifold matrix,

    a(φ_i) = [1 ... e^(-j2πd(M-1)sin(φ_i)/λ)]^T                         (3)

is the steering vector pointing to direction φ_i, i = 1, ..., N, λ denotes the radar wavelength, Λ is the (N×N) diagonal matrix containing the signal magnitudes, B = [b(f_1) b(f_2) ... b(f_N)] is the (L×N) matrix comprising the normalized source waveforms, b(f_i) = [1 e^(-j2πf_i) ... e^(-j2πf_i(L-1))]^T is the normalized source waveform, f_i, i = 1, ..., N, is the frequency of the ith source, [·]^H denotes the Hermitian transpose, and N is the (M×L) matrix of zero-mean Gaussian noise with variance σ².

The (M×M) covariance matrix of the received data is given by

    R = E{X X^H} = A S A^H + σ² I,                                      (4)

where E{·} is the statistical expectation operator, S = E{Λ B^H B Λ^H} is the (N×N) source covariance matrix, and I is the identity matrix.

3. Modified STAP Estimator

3.1. Space-Time IMP

Space-time IMP is a two-dimensional maximum likelihood method which matches a set of space-time calibration response vectors against the received data. The primary objective of space-time IMP is therefore to maximize the "signal plus noise" to "expected noise" power ratio (SNR). If the maximum output power exceeds the threshold, a target is detected and the corresponding space-time calibration response vector is recorded.
In order to reduce the sidelobe leakage of the detected targets and improve the detection and estimation of potential signals in the residual data, the detected targets are removed from the original data through an orthogonal subspace projection matrix before each iterative stage.

According to the definition of the SNR in space-time IMP, we have

    F(θ, f) = [W^H(θ,f) Q vec(X) vec(X)^H Q W(θ,f)] / [W^H(θ,f) Q W(θ,f)],   (5)

where

    W(θ, f) = vec{ [a(θ) / (a^H(θ) a(θ))] [b(f) / (b^H(f) b(f))]^H }          (6)

is the (ML×1) space-time calibration response vector, vec{·} is the vectorization operation, and Q is the (ML×ML) orthogonal projection matrix given by

    Q = I − M [M^H M]^(-1) M^H,                                               (7)

where M is the matrix whose columns are the space-time calibration response vectors corresponding to the detected signals; that is, Q projects the received data onto a subspace orthogonal to the detected signals.

3.2. Adaptive Beam-Space Preprocessing

The adaptive beam-space preprocessing technique, first proposed by Hassanien et al., uses the updated array data for adaptive suppression of out-of-sector interferences.
This technique has been shown to be more robust than the aforementioned beam-space methods.

The primary objective of data-adaptive beam-space preprocessing is to solve the optimal beam-space matrix design problem by minimizing the output power of the transformed data, which can be expressed as

    min_C  tr(C^H R C)
    subject to  C^H C = I,
                ‖C^H a(θ_b)‖ = 1,   b = 1, ..., B,                            (8)
                ‖C^H a(θ̄_k)‖ ≤ γ,  θ̄_k ∈ Θ̄,  k = 1, ..., K,

where tr{·} is the trace of a matrix, C is the (M×B) beam-space matrix, B (B ≤ M) is the beam-space dimension, ‖·‖ is the vector 2-norm, Θ̄ denotes the out-of-sector region, discretized into K angular grid points, {θ_b}, b = 1, ..., B, and {θ̄_k}, k = 1, ..., K, are the angles corresponding to the in-sector and out-of-sector directions, and γ is the stopband attenuation parameter, which should meet the requirement

    min_C  γ
    subject to  C^H C = I,
                ‖C^H a(θ_b)‖ = 1,   b = 1, ..., B,                            (9)
                ‖C^H a(θ̄_k)‖ ≤ γ,  θ̄_k ∈ Θ̄,  k = 1, ..., K.

After the beam-space transformation, the array steering vector a and the manifold matrix A become

    a~ = C^H a,   A~ = C^H A.                                                 (10)

Then the (B×B) covariance matrix R~ in beam-space can be written as

    R~ = A~ S A~^H + σ² I.                                                    (11)

Obviously, the dimension of the matrix R~ is lower than that of R.
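The dimension reduction can be made concrete with a small numpy sketch. The sizes (M = 8 sensors, B = 2 beams) match the later simulations, but the beam-space matrix here is a fixed pair of orthonormal DFT beams rather than the adaptive C obtained by solving (8), so this only illustrates the shapes and the C^H C = I constraint:

```python
import numpy as np

M, B, L = 8, 2, 32                     # sensors, beams, snapshots
m = np.arange(M)

# Two orthonormal DFT beams as a stand-in for the adaptive beam-space matrix C.
C = np.stack([np.exp(2j * np.pi * k * m / M) for k in (0, 1)], axis=1) / np.sqrt(M)

rng = np.random.default_rng(1)
X = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
R = X @ X.conj().T / L                 # (M x M) element-space covariance
R_bs = C.conj().T @ R @ C              # (B x B) beam-space covariance, as in eq. (11)

print(R.shape, R_bs.shape)             # (8, 8) (2, 2)
print(np.allclose(C.conj().T @ C, np.eye(B)))   # True: first constraint of (8) holds
```

Any subsequent search now works with 2×2 rather than 8×8 matrices, which is exactly the saving the beam-space methods exploit.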
This fact is exploited in all beam-space-based methods to reduce the computational load compared with element-space algorithms.

3.3. Space-Time Adaptive Beam-Space IMP

In this section, we show how the conventional space-time estimator combines with the adaptive beam-space preprocessing technique to yield a computationally cheap space-time adaptive beam-space IMP estimator.

Following the discussion above, the two-dimensional discriminant shown in (5) should be modified as

    F~(θ, f) = [W~^H(θ,f) Q~ vec(C^H X) vec(C^H X)^H Q~ W~(θ,f)] / [W~^H(θ,f) Q~ W~(θ,f)],   (12)

where

    W~(θ, f) = vec{ [C^H a(θ) / (a^H(θ) C C^H a(θ))] [b(f) / (b^H(f) b(f))]^H },             (13)

and Q~ reduces to a (BL×BL) matrix in the beam-space domain, given by

    Q~ = I − M~ [M~^H M~]^(-1) M~^H,                                                         (14)

where M~ is the matrix whose columns are the space-time calibration response vectors corresponding to the signals detected in the beam-space domain.

3.4. Threshold Setting

Selecting an appropriate threshold to terminate the iterative process in the IMP algorithm is very important. Theoretically, when all "significant peaks" have been detected and cleared out of the received data, only a completely flat plane remains in the residual scan. However, it is impossible to estimate the noise statistics accurately from the limited received data. Furthermore, the definition of a "significant peak" in the IMP algorithm has not been clearly reported.

In this paper, we use a double-threshold setting method to ensure that the iterative process is halted in time. First, we check two successive scans before the next iteration. If the difference between the two scans is comparable to the "expected noise" level, that is, no "significant peak" appeared during the last scan, then the iterative process is halted. In addition, if the difference between two successive estimates falls below a preset threshold, which suggests that the iterations are estimating the same target, the iterative process is halted as well.

3.5. Simulation Results

Several simulation results are shown in this section to test the performance of the modified algorithm by comparing it with several conventional algorithms.

In the simulations, we assume that the radar works at f = 6 MHz, with a ULA of M = 8 omnidirectional sensors spaced one-half wavelength apart. The half-power beamwidth is approximately 13°. The number of snapshots is L = 32, and the beam-space dimension is B = 2. The adaptive beam-space matrix has been solved using the CVX optimization MATLAB toolbox. Since the minimum value in (9) is γ_min = 0.0676, we take the parameter γ = 0.07 in (8). Furthermore, two simulation targets, (0.2 Hz, 86°) and (0.3 Hz, 90°), are used in the simulations.

To define a successful experiment, we use the following criterion: if

    Σ_{i=1}^{2} |θ̂_i − θ_i| < |θ_1 − θ_2|,                                                   (15)

where θ̂_i and θ_i (i = 1, 2) are, respectively, the estimated and true values, then the two signals are successfully resolved.

Figures 1 and 2 illustrate the probability of source resolution and the root mean square error (RMSE) versus SNR in the Doppler domain, respectively. The conventional space-time IMP, the space-time beam-space IMP (which combines space-time IMP with a discrete Fourier transform (DFT) beam-space matrix), and 64-point and 256-point FFT results are used for comparison in the figures. We find that the beam-space-based algorithms show better resolution and smaller RMSE than the other algorithms in resolving the two simulation targets. Thus, it is reasonable to conclude that the beam-space-based methods require less observation time while maintaining high Doppler accuracy compared with the conventional algorithms.
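The resolution criterion of (15) is simple enough to state directly in code; a sketch for the two-source case:

```python
def resolved(est, truth):
    """Success criterion of eq. (15): the summed absolute DOA errors must be
    smaller than the angular separation of the two true sources."""
    return sum(abs(e - t) for e, t in zip(est, truth)) < abs(truth[0] - truth[1])

# The two simulation targets sit at 86 and 90 degrees (4 degrees apart):
print(resolved([85.0, 91.0], [86.0, 90.0]))   # True  (total error 2 deg < 4 deg)
print(resolved([83.0, 93.0], [86.0, 90.0]))   # False (total error 6 deg > 4 deg)
```

Counting the fraction of Monte Carlo trials for which this returns True gives the probability-of-resolution curves in Figures 1 and 3.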
By the way, all the simulation results shown in this section have averaged over 1000 independent Monte Carlo experiments.\n\nProbability of source resolution versus SNR in Doppler domain.\n\nDoppler estimated RMSE versus SNR.\n\nFigures 3 and 4 are the probability of source resolution and their RMSE versus SNR in azimuth domain, respectively. According to these figures, the beam-space-based algorithms show better performance than the other algorithms. Comparing these methods, the space-time adaptive beam-space IMP algorithm shows the highest robust and lowest RMSE in resolving the two simulation targets. Thus, it is reasonable to conclude that the proposed algorithm requires smaller antenna array and lower SNR threshold for detection and estimation of multiple signals when compared with the conventional DOA algorithms.\n\nProbability of source resolution versus SNR in azimuth domain.\n\nAzimuth estimated RMSE versus SNR.\n\n4. Shipborne HFSWR 4.1. Space-Time Coupling Relation\n\nIn , Xie et al. had proven that the first-order Bragg spectrums were broadened along the azimuthal directions from the real-data analysis. The authors concluded that there was a space-time coupling relation existed in the first-order Bragg sea echo spectrum of shipborne HFSWR.\n\nAssuming that both the transmitting and the receiving antennas of HFSWR are mounted on a ship which is moving in the positive direction of the x-axis at a constant speed vs(m/s), as shown in Figure 5.\n\nSchematic of shipborne HFSWR and current vector.\n\nIn the absence of ocean current, the space-time coupling relation in the first-order Bragg sea echo spectrum of shipborne HFSWR can be expressed as follows : (16)fd=2vsλcosϕ+fB, where ϕ[0,π] is the azimuth direction, fB=±g/πλ are the first-order Bragg frequencies, the positive and negative signs are, respectively, the Bragg waves moving towards and away from the radar.\n\n4.2. 
Current Measurement\n\nIn the presence of ocean currents, the first-order Bragg lines in (16) are shifted from the theoretical positions. The displacements are proportional to the radial current velocities. Thus, (16) should be rewritten as (17) fd=(2vs/λ)cosϕ+fB+2vc(ϕ)/λ.\n\nAs shown in (17), the first-order Bragg lines are related to the azimuth directions as well as to the speeds of the ship and the currents. Therefore, the first-order Bragg peaks are not only displaced, but also broadened into two pass bands (in the slow ship speed case) in the first-order Bragg sea echo spectrum of shipborne HFSWR.\n\n4.3. Real-Data Analysis\n\nThe real data used in this paper were recorded during the shipborne HFSWR experiments conducted in the Yellow Sea of China on September 8, 1998 . Figures 6 and 7 show the space-time coupling relation in the sixth range bin of the real-data file 1128 (containing 7 channels × 256 samples × 32 range bins; the ship speed was about 4.8 m/s), processed through the DFT plus DBF (Digital Beamforming) cascade processing and the proposed algorithm, respectively. As shown in the figures, the first-order Bragg lines are broadened along the azimuth directions, which tallies well with the theoretical lines in (16). The displacements may be caused by ocean currents or interferences.\n\nSpace-time spectrum of real data processed by DFT and DBF cascaded processing. – – denotes the theoretical value (f0=5.283MHz, Tp=0.262s, d=14m).\n\nSpace-time spectrum of real data processed by IMP.\n\nTable 1 is an example of the DOA and Doppler estimations. The modified STAP estimator is used to process the above-mentioned real data within the section between the azimuths 40° and 90°. Since no information about the sea state was recorded during the experiment, we assume here that the detected targets near the theoretical first-order Bragg lines correspond to ocean currents and that their displacements are proportional to their radial velocities. 
Based on this assumption, five currents have been detected and estimated. All of them are very close to the theoretical first-order Bragg frequencies, and their corresponding radial velocities are calculated in the table.\n\nDOA and Doppler estimations.\n\nTarget 1 2 3 4 5\nDOA (Deg) 62.72 50.12 64.06 52.56 63.35\nDoppler (Hz) -0.1582 -0.1154 -0.1596 -0.1292 -0.1589\nTheoretical (Hz) -0.1541 -0.1220 -0.1577 -0.1278 -0.1558\nVelocity (m/s) 0.1166 -0.1869 0.0519 0.0398 0.0885\n\nTable 2 gives another example that illustrates the robustness of the proposed algorithm, where we add a simulation target (-0.18Hz,80°) to the real data used previously, as shown in Figure 8. The added simulation target is detected correctly, and the estimations of DOA and Doppler frequency are within 2° and 0.0025Hz of the true signal position.\n\nDOA and Doppler estimations.\n\nTarget 1 2 3 4 5\nDOA (Deg) 82.97 67.33 64.19 81.97 52.56\nDoppler (Hz) -0.1833 -0.1627 -0.1567 -0.1825 -0.1176\nTheoretical (Hz) -0.2130 -0.1668 -0.1581 -0.2100 -0.1238\n\nSpace-time spectrum of real data with the added simulation target.\n\n5. Conclusion\n\nIn this paper, we have introduced a modified space-time adaptive processing estimator that can be used for the detection, estimation, and superresolution of multiple signals. The method combines the conventional IMP method and the existing adaptive beam-space preprocessing techniques, yielding a computationally cheap algorithm for estimating the near-surface currents of the ocean from the broadening of the first-order Bragg sea echo spectrum of shipborne HFSWR. The proposed algorithm is validated by simulation results as well as experimental examples.\n\nAcknowledgment\n\nThis work is supported by the State Key Program of National Natural Science of China (Grant no. 61132005)." ]
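The radial velocities in Table 1 can be reproduced from the Bragg-line displacements using the wavelength implied by the carrier f0=5.283MHz given in the figure caption, via v_c = λ·Δf/2. A quick check (the sign convention is my inference from the tabulated values, and the small residuals come from the 4-decimal rounding of the Doppler entries):

```python
C = 3.0e8              # speed of light, m/s (assumed)
F0 = 5.283e6           # radar carrier frequency from the figure caption, Hz
lam = C / F0           # radar wavelength, roughly 56.8 m

doppler     = [-0.1582, -0.1154, -0.1596, -0.1292, -0.1589]  # measured, Hz
theoretical = [-0.1541, -0.1220, -0.1577, -0.1278, -0.1558]  # Bragg lines, Hz
velocity    = [ 0.1166, -0.1869,  0.0519,  0.0398,  0.0885]  # Table 1, m/s

for fm, ft, v in zip(doppler, theoretical, velocity):
    # displacement of the Bragg line -> radial current speed
    v_calc = -lam * (fm - ft) / 2.0
    print(f"{v_calc:+.4f}  (table: {v:+.4f})")  # agrees to within a few mm/s
```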
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87870914,"math_prob":0.9637755,"size":20697,"snap":"2022-05-2022-21","text_gpt3_token_len":5134,"char_repetition_ratio":0.13241193,"word_repetition_ratio":0.07009511,"special_character_ratio":0.2427888,"punctuation_ratio":0.120236866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9860633,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T23:06:30Z\",\"WARC-Record-ID\":\"<urn:uuid:ca16c971-0c64-439c-87cf-f17e5510cc70>\",\"Content-Length\":\"102862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:450d40f3-754b-44c8-8cbe-b72ab37a6ff7>\",\"WARC-Concurrent-To\":\"<urn:uuid:93dd1c9e-34bf-47c9-9256-9507d6402a7c>\",\"WARC-IP-Address\":\"13.249.38.31\",\"WARC-Target-URI\":\"https://downloads.hindawi.com/journals/ijap/2013/837639.xml\",\"WARC-Payload-Digest\":\"sha1:GGQT3ZLOCCGXAZYSGFGEJY72FSZJNPZ7\",\"WARC-Block-Digest\":\"sha1:7BC6K57SLTNBWNEPSDWNOPN3DI6VNWRR\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304876.16_warc_CC-MAIN-20220125220353-20220126010353-00569.warc.gz\"}"}
https://mathoverflow.net/questions/240299/how-do-i-complete-the-sketch-of-proof-in-fga-explained-5-5-8
[ "How do I complete the sketch of proof in 'FGA explained', 5.5.8?\n\nIt seems to me that the one part that is difficult to transfer to general coherent sheaves is given only one sentence: \"Because we have such a common $m$, we get as before an injective morphism from the functor $\mathfrak{Quot}^{\Phi,L}_{E/X/S}$ into the Grassmannian functor $\mathfrak{Grass}(\pi_\ast E(r), \Phi(r))$.\" But I see at least two serious difficulties with this proof: First, why is $R^1 \pi_{T,\ast} \left( \mathcal{G}_t (r) \right) = 0$? Second, even if that is true, the values on $T$ are quotients of $\pi_{T,\ast} E_T$, not of $(\pi_\ast E)_T$, where the latter is the pullback of the pushforward. These two sheaves are the same for some $r$ by Theorem 5.4 (3.1 in the standalone paper), but that $r$ might depend on $T$.\n\nEdit: The statement of the theorem that has a more or less thorough proof in the book \"Fundamental algebraic geometry: Grothendieck's FGA explained.\", in part 5, by Nitin Nitsure (it exists as a separate paper [Nitsure, Nitin (2005), \"Construction of Hilbert and Quot schemes\", Fundamental algebraic geometry, Math. Surveys Monogr. 123, Providence, R.I.: Amer. Math. Soc., pp. 105–137, arXiv:math/0504590, MR 2223407]), is as follows: (theorem 5.2 in the paper, 5.15 in the book)\n\nLet $S$ be a Noetherian scheme, and $X$ a closed subscheme in $\mathbb{P}(V)$ for some vector bundle $V$, and let $\pi$ denote the structural morphism $X \to S$. Let $E$ be a coherent quotient sheaf of $\pi^\ast(W)(\nu)$, where $W$ is an $S$-vector bundle and $\nu$ is an integer. 
Then for any integer-valued polynomial $\Phi$, the functor $\mathfrak{Quot}_{E/X/S}^{\Phi,\mathcal{O}(1)}$ is representable by a closed subscheme of the Grassmannian $Gr(W \otimes Sym^r(V),\Phi(r))$ for large enough $r$.\n\nIt differs from Grothendieck's original statement in that, in general, not all projective schemes embed into projectivisations of vector bundles (if there is an ample bundle on $S$, they do). Also, the bundle $E$ is just any coherent sheaf in Grothendieck's version of the theorem.\n\nBecause I don't know what the author meant when he wrote that the part that I pointed out generalises easily, I guess I will just write down that part of the proof (it is implicit in the text, I had to guess the specifics myself) and note where I see difficulty.\n\nFirst of all, $X$ is obviously replaced with $\mathbb{P}(V)$. Then the quotient sheaf of $\pi^\ast (W) (\nu)$ is replaced by $\pi^\ast(W)$ - this induces a closed embedding on the $\mathfrak{Quot}$ sheaves. Here I don't see how to generalise - to express an arbitrary coherent sheaf as a quotient of a vector bundle. So, over an $S$-scheme $T$, on $\mathbb{P}(H) \times_S T$ there is an exact sequence $0 \to \mathcal{G} \to E_T \to \mathcal{F} \to 0.$ And what seems crucial is that all these three sheaves are flat over $T$. On fibers $s$, $\mathcal{G}_s$ is a subsheaf of a free sheaf $E_s$, and we know the Hilbert polynomial of $\mathcal{G}$, so Mumford's theorem (5.3 in the book, 2.3 in the paper) is used to show that there exists an $m$ which does not depend on $s$ or the particular $\mathcal{F}$ such that all three sheaves in the exact sequence on the fiber are (Castelnuovo-Mumford) $m$-regular. (Perhaps for general $E$ one can emulate the proof of Mumford's theorem, uniformly bounding the dimensions of global sections of $E$ on subspaces in fibers.) Then after twisting by $r$ for $r \geq m$, the three sheaves have no higher cohomology on fibers. 
Then by the Flatness and Base Change Theorem (strongly using that the sheaves are flat over $S$), the higher cohomology groups vanish. They are also relatively globally generated (maybe after a twist by $n$); I have thought of the following argument: they become \"relatively 0-regular\" after that twist, so they are relatively globally generated. Since $R^1\pi_{T,\ast}(\mathcal{G(r)})=0$, we get an exact sequence $0 \to \pi_{T,\ast}(\mathcal{G}(r)) \to \pi_{T,\ast}(E(r)) \to \pi_{T,\ast}(\mathcal{F}(r)) \to 0$, and thus a morphism to the Grassmannian. The global generation (of $\mathcal{G}(r)$) is used to show that the morphism is injective and to find its image.\n\nThe statement of Mumford's theorem is as follows: There exists a polynomial with integer coefficients $F_{d,n}$ in $n+1$ variables such that for any coherent subsheaf $\mathcal{F}$ of $\oplus^d \mathcal{O}_{\mathbb{P}^n_k}$, $\mathcal{F}$ is $F_{d,n}(a_0, \cdots, a_n)$-regular, where the $a_i$ are the coordinates of the Hilbert polynomial of $\mathcal{F}$ in the binomial basis.\n\n• You're far more likely to get an answer to your question if you make it self-contained. Maybe someone knows some alg geom but doesn't have FGA explained (e.g. maybe they read FGA!) and they're not going to be answering your question as it stands. – znt Jun 2 '16 at 21:52\n• As @znt says, you should make your question self-contained. As a minimum, you should include the statement of the result that you are asking about, with a proper reference to the document in which it appears, preferably with a link. – Neil Strickland Jun 3 '16 at 6:13\n• I hope the text that I wrote is helpful. It is really hard to make the question self-contained because the statements in the text are interconnected and many of them I have not met before. Maybe I should have just posted the link to the paper, which contains the text available for anyone. – Alexei Tsybyshev Jun 3 '16 at 14:20" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8858688,"math_prob":0.9973656,"size":4443,"snap":"2019-43-2019-47","text_gpt3_token_len":1336,"char_repetition_ratio":0.11308853,"word_repetition_ratio":0.005722461,"special_character_ratio":0.28561783,"punctuation_ratio":0.10964912,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997904,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T20:15:04Z\",\"WARC-Record-ID\":\"<urn:uuid:8359e382-0db8-4c01-ab3b-078aedbef9c9>\",\"Content-Length\":\"114403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6baf46db-b375-4640-b511-d5a9fe817c92>\",\"WARC-Concurrent-To\":\"<urn:uuid:b58e52c0-bb05-469b-8225-b29b147bbd59>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/240299/how-do-i-complete-the-sketch-of-proof-in-fga-explained-5-5-8\",\"WARC-Payload-Digest\":\"sha1:YAQU3TRNLNO6ZNULUBDWMEMXC6KTTJ3K\",\"WARC-Block-Digest\":\"sha1:6IIO4DHFUZPDBB5WA7H3FZKU7J7CVUVA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986718918.77_warc_CC-MAIN-20191020183709-20191020211209-00388.warc.gz\"}"}
https://support.zabbix.com/browse/ZBX-14697
[ "", null, "# Memory leak in alert manager when zabbix database is not available\n\n#### Details\n\n• Type:", null, "Problem report\n• Status: Closed\n• Priority:", null, "Trivial\n• Resolution: Fixed\n• Affects Version/s: 3.4.12, 4.0.0alpha9\n• Fix Version/s:\n• Component/s:\n• Labels:\n• Team:\nTeam A\n• Sprint:\nSprint 40\n• Story Points:\n0.125\n\n#### Description\n\nThe alert manager does not close the previous connection; this leads to a memory leak after each successful reconnect.\n\n```==613== For counts of detected and suppressed errors, rerun with: -v\n==613== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)\n==637== at 0x4C30B06: calloc (vg_replace_malloc.c:711)\n==637== by 0x4E58C16: mysql_init (in /usr/lib64/libmariadb.so.3)\n==637== by 0x7B505E: zbx_db_connect (db.c:494)\n==637== by 0x6EA30C: DBconnect (db.c:73)\n==637== by 0x41E986: MAIN_ZABBIX_ENTRY (server.c:1178)\n==637== by 0x62FBAA: daemon_start (daemon.c:392)\n==637== by 0x41D14D: main (server.c:854)\n==637==\n==637== 173,765 (6,360 direct, 167,405 indirect) bytes in 5 blocks are definitely lost in loss record 140 of 140\n==637== at 0x4C30B06: calloc (vg_replace_malloc.c:711)\n==637== by 0x4E58C16: mysql_init (in /usr/lib64/libmariadb.so.3)\n==637== by 0x7B505E: zbx_db_connect (db.c:494)\n==637== by 0x6EA30C: DBconnect (db.c:73)\n==637== by 0x41E986: MAIN_ZABBIX_ENTRY (server.c:1178)\n==637== by 0x62FBAA: daemon_start (daemon.c:392)\n==637== by 0x41D14D: main (server.c:854)\n==637==\n==637== LEAK SUMMARY:\n==637== definitely lost: 7,632 bytes in 6 blocks\n==637== indirectly lost: 200,886 bytes in 120 blocks\n==637== possibly lost: 0 bytes in 0 blocks\n==637== still reachable: 146,440 bytes in 244 blocks\n==637== suppressed: 0 bytes in 0 blocks\n==637== Reachable blocks (those to which a pointer was found) are not shown.\n==637== To see them, rerun with: --leak-check=full --show-leak-kinds=all\n```\n\nPostgreSQL\n\n```==22638== 35,532 (896 direct, 34,636 indirect) bytes 
in 1 blocks are definitely lost in loss record 103 of 104\n==22638== at 0x4C2EBAB: malloc (vg_replace_malloc.c:299)\n==22638== by 0x4E4767A: ??? (in /usr/lib64/libpq.so.5.10)\n==22638== by 0x4E4DD3A: PQsetdbLogin (in /usr/lib64/libpq.so.5.10)\n==22638== by 0x783163: zbx_db_connect (db.c:542)\n==22638== by 0x6E96AF: DBconnect (db.c:73)\n==22638== by 0x43966D: MAIN_ZABBIX_ENTRY (server.c:1130)\n==22638== by 0x61A2F0: daemon_start (daemon.c:392)\n==22638== by 0x437F5D: main (server.c:832)\n==22638==\n==22638== LEAK SUMMARY:\n==22638== definitely lost: 896 bytes in 1 blocks\n==22638== indirectly lost: 34,636 bytes in 31 blocks\n==22638== possibly lost: 0 bytes in 0 blocks\n==22638== still reachable: 147,066 bytes in 251 blocks\n==22638== suppressed: 0 bytes in 0 blocks\n==22638== Reachable blocks (those to which a pointer was found) are not shown.\n==22638== To see them, rerun with: --leak-check=full --show-leak-kinds=all\n```\n\nThe following patch fixes the issue but needs proper review and testing\n\n```Index: src/zabbix_server/alerter/alert_manager.c\n===================================================================\n@@ -1960,6 +1960,7 @@\n\nif (ZBX_DB_DOWN == manager.dbstatus && time_connect + ZBX_DB_WAIT_DOWN <= now)\n{\n+\t\t\tDBclose();\nif (ZBX_DB_DOWN == (manager.dbstatus = DBconnect(ZBX_DB_CONNECT_ONCE)))\n{\n\n```\n\n#### People\n\nAssignee:", null, "Vladislavs Sokurenko\nReporter:", null, "Vladislavs Sokurenko" ]
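The bug pattern here is generic: a retry loop allocates a fresh connection handle on every attempt without releasing the previous one. A toy Python model of the fix (names and structure are mine, not Zabbix code; the close-before-reconnect mirrors the `DBclose()` added before `DBconnect()` in the patch):

```python
class Handle:
    """Stand-in for a DB connection handle; tracks live allocations."""
    live = 0

    def __init__(self):
        Handle.live += 1

    def close(self):
        Handle.live -= 1


def reconnect(handle, buggy=False):
    """Retry helper: the fixed variant releases the stale handle first."""
    if handle is not None and not buggy:
        handle.close()          # the one-line fix
    return Handle()             # allocate a new connection handle


h = None
for _ in range(5):              # five reconnect attempts while the DB is down
    h = reconnect(h)
print(Handle.live)              # fixed variant keeps 1 live handle, not 5
```

With `buggy=True` the live count grows by one per attempt, which is exactly the per-reconnect growth valgrind reports above.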
[ null, "https://support.zabbix.com/secure/projectavatar", null, "https://support.zabbix.com/secure/viewavatar", null, "https://support.zabbix.com/images/icons/priorities/trivial.svg", null, "https://support.zabbix.com/secure/useravatar", null, "https://support.zabbix.com/secure/useravatar", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7544458,"math_prob":0.7653331,"size":3617,"snap":"2019-51-2020-05","text_gpt3_token_len":1250,"char_repetition_ratio":0.20619984,"word_repetition_ratio":0.2685185,"special_character_ratio":0.4547968,"punctuation_ratio":0.22886297,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97040325,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T12:11:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1d51cadc-ae59-4b9e-925a-a5afe0582c54>\",\"Content-Length\":\"81173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f53c8479-2808-4b5c-aa12-f7e50a92f175>\",\"WARC-Concurrent-To\":\"<urn:uuid:950b08f7-afbf-478a-92c5-d0aead747400>\",\"WARC-IP-Address\":\"87.110.183.173\",\"WARC-Target-URI\":\"https://support.zabbix.com/browse/ZBX-14697\",\"WARC-Payload-Digest\":\"sha1:NY2KYG7M3IAO3OSURAOMRBCZW2HAIWHJ\",\"WARC-Block-Digest\":\"sha1:6TYM6MDLJ4WVR3TKQ6NEBPZFWJG2YUZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250610004.56_warc_CC-MAIN-20200123101110-20200123130110-00374.warc.gz\"}"}
https://openlab.citytech.cuny.edu/mat1175fa2011/student-learning-outcomes/
[ "# Student Learning Outcomes\n\nContent-Specific Learning Outcomes\n\nBy the end of the course, the student will :\n\n1. demonstrate the ability to manipulate algebraic expressions including polynomials, rational and radical expressions.\n2. demonstrate the ability to solve equations including linear, quadratic, rational and radical equations, as well as systems of linear equations in two variables.\n3. demonstrate the ability to apply theorems and solve problems in geometry including parallel and perpendicular lines, congruent and similar triangles, and special right triangles.\n4. identify abstract mathematical relationships between quantities.\n5. identify concrete mathematical relationships in other disciplines and outside the classroom.\n6. apply algebraic and geometric principles to analyze and solve problems in other disciplines and outside the classroom.\n\nGeneral Education Learning Outcomes\n\nBy the end of the course, the student will :\n\n1. use the scientific method (make observations, perform experiments, record and process data, and draw conclusions);\n2. make diagrams and graphically present data as a tool for problem solving;\n3. read for details, big picture concepts and themes;\n4. listen to instructor as well as peers;\n5. speak one-on-one, in meetings and formally;\n6. incorporate information from various sources;\n7. write descriptively and reflectively;\n8. effectively collaborate and work as a team.\n\nMathematics Department\nMAT 1175, Fall 2011\nProfessors E. Halleck and J. Reitz" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91219753,"math_prob":0.9603095,"size":1451,"snap":"2022-27-2022-33","text_gpt3_token_len":272,"char_repetition_ratio":0.09951624,"word_repetition_ratio":0.13043478,"special_character_ratio":0.18538938,"punctuation_ratio":0.13913043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96783304,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T07:52:15Z\",\"WARC-Record-ID\":\"<urn:uuid:64195e72-7e3a-4c88-b644-90c70c21b0f6>\",\"Content-Length\":\"107150\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1486b12-6ef7-4c45-95d3-bb194be4b810>\",\"WARC-Concurrent-To\":\"<urn:uuid:19f15652-0efb-4627-8785-fdcbf17cb6ba>\",\"WARC-IP-Address\":\"209.68.53.53\",\"WARC-Target-URI\":\"https://openlab.citytech.cuny.edu/mat1175fa2011/student-learning-outcomes/\",\"WARC-Payload-Digest\":\"sha1:GI5ESOR5TDUY3AHDFADFQTD3REFSRLKL\",\"WARC-Block-Digest\":\"sha1:4ZYA5CZDBGCE5PO6YQDJSZJX7QGVJREF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571996.63_warc_CC-MAIN-20220814052950-20220814082950-00353.warc.gz\"}"}
https://federalprism.com/does-koh-and-cuso4-form-a-precipitate/
[ "# Does KOH and CuSO4 form a precipitate?\n\n## Does KOH and CuSO4 form a precipitate?\n\nDescription: Copper hydroxide precipitate, 1 of 3. Copper (II) hydroxide precipitate (Cu(OH)2) formed by adding 0.5 M copper sulfate (CuSO4) solution to a 0.2 M solution of potassium hydroxide (KOH). The reaction is CuSO4 + 2KOH -> Cu(OH)2 + K2SO4. This is an example of a double replacement reaction.\n\n## What happens when you mix HCl and KOH?\n\nPotassium hydroxide (KOH) reacts with hydrochloric acid (HCl) to produce potassium chloride (KCl), a salt, and water (H2O). This is a neutralization reaction.\n\nDoes CuSO4 react with HCl?\n\nWhen concentrated hydrochloric acid is added to a very dilute solution of copper sulfate, the pale blue solution slowly turns yellow-green on the formation of a copper chloride complex.\n\nWhat happens when you mix CuSO4 and KOH?\n\n[Note: May be used as an aqueous solution.]…Search by products (Cu(OH)2, K2SO4)\n\n1. 2KOH + CuSO4 → K2SO4 + Cu(OH)2\n2. 2KOH + CuSO4*5H2O → 5H2O + K2SO4 + Cu(OH)2\n\n### What is the reaction between Cu and H in HCl?\n\nIn the case of Cu + HCl we can see that Cu is below H on the activity series. Because of this the Cu will not be able to replace the H in HCl and there will be NO REACTION. If you are unsure if a compound is soluble when writing net ionic equations you should consult a solubility table for the compound.\n\n### How to use the net ionic equation calculator?\n\nThe method to use the net ionic equation calculator is as follows: 1: Enter the chemical equation in the "Enter the chemical equation" field. 2: Click the "Balance" button to get the balanced equation. 3: Finally, for the specified chemical equation, a window will pop up with the output.\n\nWhat is the net ionic equation for a protium ion?\n\nKOH(aq) + HCl(aq) → KCl(aq) + H2O(l) …the net ionic equation is…. These days, commonly, we represent the protium ion, H+, in aqueous solution as… H3O+ …i.e. a protonated water molecule. 
The following is taken from a previous answer…\n\nWhat happens when HCl(g) is bled into water?\n\nWe may take a tank of HCl(g), and we can bleed it into water to give an AQUEOUS solution that we could represent as HCl(aq) OR H3O+ and Cl−. In each case this is a REPRESENTATION of what occurs in solution. If we bleed enough gas in, we achieve saturation at a concentration of approx. 10.6 mol⋅L−1 with respect to hydrochloric acid." ]
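The elided net ionic equation above can be recovered by writing the strong electrolytes as ions and cancelling the spectator ions K⁺ and Cl⁻; this is standard strong-acid/strong-base chemistry rather than a quotation from the original answer:

```latex
\begin{align*}
\text{molecular:}      &\quad \mathrm{KOH(aq) + HCl(aq) \to KCl(aq) + H_2O(l)} \\
\text{complete ionic:} &\quad \mathrm{K^+ + OH^- + H^+ + Cl^- \to K^+ + Cl^- + H_2O(l)} \\
\text{net ionic:}      &\quad \mathrm{OH^-(aq) + H^+(aq) \to H_2O(l)}
\end{align*}
```

With the hydronium representation mentioned in the answer, the same equation reads H3O+ + OH− → 2H2O.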
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85892355,"math_prob":0.96366566,"size":2357,"snap":"2023-40-2023-50","text_gpt3_token_len":681,"char_repetition_ratio":0.11644709,"word_repetition_ratio":0.018691588,"special_character_ratio":0.25922784,"punctuation_ratio":0.10612245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98425734,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T12:46:27Z\",\"WARC-Record-ID\":\"<urn:uuid:bf3c7492-9eba-43ad-b2d5-026679886122>\",\"Content-Length\":\"59218\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ccbf081-0cc0-463a-82eb-fe2e9a27b29c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a499c36-bf3e-47a0-9c49-83c329c4bbad>\",\"WARC-IP-Address\":\"172.67.181.88\",\"WARC-Target-URI\":\"https://federalprism.com/does-koh-and-cuso4-form-a-precipitate/\",\"WARC-Payload-Digest\":\"sha1:4LRYIWSWOH432WBA6JRQ4BFSEFYZOMCK\",\"WARC-Block-Digest\":\"sha1:ZEPV45M32DVTTSZPOZIVG4BJ74P2CHWE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102469.83_warc_CC-MAIN-20231210123756-20231210153756-00842.warc.gz\"}"}
https://www.colorhexa.com/030f4a
[ "# #030f4a Color Information\n\nIn a RGB color space, hex #030f4a is composed of 1.2% red, 5.9% green and 29% blue. Whereas in a CMYK color space, it is composed of 95.9% cyan, 79.7% magenta, 0% yellow and 71% black. It has a hue angle of 229.9 degrees, a saturation of 92.2% and a lightness of 15.1%. #030f4a color hex could be obtained by blending #061e94 with #000000. Closest websafe color is: #000033.\n\n• R 1\n• G 6\n• B 29\nRGB color chart\n• C 96\n• M 80\n• Y 0\n• K 71\nCMYK color chart\n\n#030f4a color description : Very dark blue.\n\n# #030f4a Color Conversion\n\nThe hexadecimal color #030f4a has RGB values of R:3, G:15, B:74 and CMYK values of C:0.96, M:0.8, Y:0, K:0.71. Its decimal value is 200522.\n\nHex triplet RGB Decimal 030f4a `#030f4a` 3, 15, 74 `rgb(3,15,74)` 1.2, 5.9, 29 `rgb(1.2%,5.9%,29%)` 96, 80, 0, 71 229.9°, 92.2, 15.1 `hsl(229.9,92.2%,15.1%)` 229.9°, 95.9, 29 000033 `#000033`\nCIE-LAB 7.726, 21.573, -37.527 1.444, 0.855, 6.567 0.163, 0.096, 0.855 7.726, 43.286, 299.894 7.726, -2.794, -24.282 9.248, 11.689, -35.628 00000011, 00001111, 01001010\n\n# Color Schemes with #030f4a\n\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #4a3e03\n``#4a3e03` `rgb(74,62,3)``\nComplementary Color\n• #03334a\n``#03334a` `rgb(3,51,74)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #1b034a\n``#1b034a` `rgb(27,3,74)``\nAnalogous Color\n• #334a03\n``#334a03` `rgb(51,74,3)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #4a1b03\n``#4a1b03` `rgb(74,27,3)``\nSplit Complementary Color\n• #0f4a03\n``#0f4a03` `rgb(15,74,3)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #4a030f\n``#4a030f` `rgb(74,3,15)``\n• #034a3e\n``#034a3e` `rgb(3,74,62)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #4a030f\n``#4a030f` `rgb(74,3,15)``\n• #4a3e03\n``#4a3e03` `rgb(74,62,3)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #010519\n``#010519` `rgb(1,5,25)``\n• #020a31\n``#020a31` `rgb(2,10,49)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #041463\n``#041463` `rgb(4,20,99)``\n• #05197b\n``#05197b` 
rgb(5,25,123)``\n• #061e94\n``#061e94` `rgb(6,30,148)``\nMonochromatic Color\n\n# Alternatives to #030f4a\n\nBelow, you can see some colors close to #030f4a. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #03214a\n``#03214a` `rgb(3,33,74)``\n• #031b4a\n``#031b4a` `rgb(3,27,74)``\n• #03154a\n``#03154a` `rgb(3,21,74)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #03094a\n``#03094a` `rgb(3,9,74)``\n• #03034a\n``#03034a` `rgb(3,3,74)``\n• #09034a\n``#09034a` `rgb(9,3,74)``\nSimilar Colors\n\n# #030f4a Preview\n\nText with hexadecimal color #030f4a\n\nThis text has a font color of #030f4a.\n\n``<span style=\"color:#030f4a;\">Text here</span>``\n#030f4a background color\n\nThis paragraph has a background color of #030f4a.\n\n``<p style=\"background-color:#030f4a;\">Content here</p>``\n#030f4a border color\n\nThis element has a border color of #030f4a.\n\n``<div style=\"border:1px solid #030f4a;\">Content here</div>``\nCSS codes\n``.text {color:#030f4a;}``\n``.background {background-color:#030f4a;}``\n``.border {border:1px solid #030f4a;}``\n\n# Shades and Tints of #030f4a\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. 
In this example, #010411 is the darkest color, while #fefeff is the lightest one.\n\n• #010411\n``#010411` `rgb(1,4,17)``\n• #010724\n``#010724` `rgb(1,7,36)``\n• #020b37\n``#020b37` `rgb(2,11,55)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #04135d\n``#04135d` `rgb(4,19,93)``\n• #051770\n``#051770` `rgb(5,23,112)``\n• #051a83\n``#051a83` `rgb(5,26,131)``\n• #061e95\n``#061e95` `rgb(6,30,149)``\n• #0722a8\n``#0722a8` `rgb(7,34,168)``\n• #0826bb\n``#0826bb` `rgb(8,38,187)``\n• #082ace\n``#082ace` `rgb(8,42,206)``\n• #092ee1\n``#092ee1` `rgb(9,46,225)``\n• #0a31f4\n``#0a31f4` `rgb(10,49,244)``\n• #1b40f6\n``#1b40f6` `rgb(27,64,246)``\n• #2e50f7\n``#2e50f7` `rgb(46,80,247)``\n• #4160f7\n``#4160f7` `rgb(65,96,247)``\n• #5470f8\n``#5470f8` `rgb(84,112,248)``\n• #677ff9\n``#677ff9` `rgb(103,127,249)``\n• #7a8ffa\n``#7a8ffa` `rgb(122,143,250)``\n• #8c9ffa\n``#8c9ffa` `rgb(140,159,250)``\n• #9faffb\n``#9faffb` `rgb(159,175,251)``\n• #b2bffc\n``#b2bffc` `rgb(178,191,252)``\n• #c5cefd\n``#c5cefd` `rgb(197,206,253)``\n• #d8defd\n``#d8defd` `rgb(216,222,253)``\n• #ebeefe\n``#ebeefe` `rgb(235,238,254)``\n• #fefeff\n``#fefeff` `rgb(254,254,255)``\nTint Color Variation\n\n# Tones of #030f4a\n\nA tone is produced by adding gray to any pure hue. 
In this case, #242529 is the least saturated color, while #000d4d is the most saturated one.\n\n• #242529\n``#242529` `rgb(36,37,41)``\n• #21232c\n``#21232c` `rgb(33,35,44)``\n• #1e212f\n``#1e212f` `rgb(30,33,47)``\n• #1b1f32\n``#1b1f32` `rgb(27,31,50)``\n• #181d35\n``#181d35` `rgb(24,29,53)``\n• #151b38\n``#151b38` `rgb(21,27,56)``\n• #12193b\n``#12193b` `rgb(18,25,59)``\n• #0f173e\n``#0f173e` `rgb(15,23,62)``\n• #0c1541\n``#0c1541` `rgb(12,21,65)``\n• #091344\n``#091344` `rgb(9,19,68)``\n• #061147\n``#061147` `rgb(6,17,71)``\n• #030f4a\n``#030f4a` `rgb(3,15,74)``\n• #000d4d\n``#000d4d` `rgb(0,13,77)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #030f4a is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
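The HSL and CMYK figures quoted at the top of the page are reproducible with Python's standard `colorsys` module plus the usual RGB-to-CMYK formula (a sketch; rounding follows the page's one-decimal convention):

```python
import colorsys

r, g, b = 3, 15, 74                       # RGB components of #030f4a

# HSL (colorsys uses HLS ordering, with hue in [0, 1])
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # 229.9 92.2 15.1

# CMYK from RGB; max(0.0, ...) guards against tiny negative rounding residue
k = 1 - max(r, g, b) / 255
c, m, y = (max(0.0, (1 - ch / 255 - k) / (1 - k)) for ch in (r, g, b))
print(round(c * 100, 1), round(m * 100, 1), round(y * 100, 1), round(k * 100, 1))
# 95.9 79.7 0.0 71.0
```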
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5321583,"math_prob":0.7304466,"size":3633,"snap":"2019-13-2019-22","text_gpt3_token_len":1632,"char_repetition_ratio":0.12510334,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5549133,"punctuation_ratio":0.23489933,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9893631,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T20:32:27Z\",\"WARC-Record-ID\":\"<urn:uuid:71a2c123-e49e-4acc-8ccc-d2ec10ac41b0>\",\"Content-Length\":\"36281\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:343775d3-fff3-4a22-809b-b00b41fec6c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:33cbdcda-81f6-4dd3-b8ad-c7608aa0f9ac>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/030f4a\",\"WARC-Payload-Digest\":\"sha1:OX4YGJL3IKV6SN4O42MKMKSGR6DG6QTK\",\"WARC-Block-Digest\":\"sha1:BMZWEBUE5FOZPQ625ZU4XARZYVNNKMYN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204300.90_warc_CC-MAIN-20190325194225-20190325220225-00112.warc.gz\"}"}
https://www.mathworks.com/matlabcentral/cody/problems/49-sums-with-excluded-digits/solutions/1603736
[ "Cody\n\n# Problem 49. Sums with Excluded Digits\n\nSolution 1603736\n\nSubmitted on 8 Aug 2018 by Shawn Neal\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nn = 20; m = 5; total = 190; assert(isequal(no_digit_sum(n,m),total))\n\ns = 2\n\n2   Pass\nn = 10; m = 5; total = 50; assert(isequal(no_digit_sum(n,m),total))\n\ns = 1\n\n3   Pass\nn = 33; m = 3; total = 396; assert(isequal(no_digit_sum(n,m),total))\n\ns = 3" ]
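The test suite pins down the task: sum the integers from 1 to n, skipping any number whose decimal digits contain m. The Cody solution itself is hidden (and would be MATLAB); a Python re-implementation for illustration:

```python
def no_digit_sum(n, m):
    """Sum of 1..n over the numbers that do not contain the digit m."""
    return sum(k for k in range(1, n + 1) if str(m) not in str(k))

print(no_digit_sum(20, 5))  # 190: 5 and 15 are dropped from the total 210
print(no_digit_sum(10, 5))  # 50
print(no_digit_sum(33, 3))  # 396
```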
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62429255,"math_prob":0.9993197,"size":510,"snap":"2020-10-2020-16","text_gpt3_token_len":172,"char_repetition_ratio":0.13043478,"word_repetition_ratio":0.02247191,"special_character_ratio":0.3882353,"punctuation_ratio":0.14678898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990136,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-29T21:16:44Z\",\"WARC-Record-ID\":\"<urn:uuid:c355a4c9-3b6c-412c-bf25-d3efe28964bf>\",\"Content-Length\":\"73383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:333c4011-9138-46fc-bda1-9280fffa5476>\",\"WARC-Concurrent-To\":\"<urn:uuid:69c257ea-ff8f-4310-9bb9-16cd18be4021>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://www.mathworks.com/matlabcentral/cody/problems/49-sums-with-excluded-digits/solutions/1603736\",\"WARC-Payload-Digest\":\"sha1:3OQP3PP77MM4ZAMD6PCU2BHLQTNLPH2N\",\"WARC-Block-Digest\":\"sha1:ZP6YBDI3KEJ5R6JTM7N7QKOTJC5N26KW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370496227.25_warc_CC-MAIN-20200329201741-20200329231741-00522.warc.gz\"}"}
http://cogent.psyc.bbk.ac.uk/help/version2.4/classes/buffer_graph.html
# Buffer/Graph

*COGENT Version 2.4 Help*

## Introduction

Graph buffers provide the functionality of a buffer with an option to view and print the buffer contents as a graph or chart. Elements of a graph buffer may take one of two forms: `type(DataSet, Type, Properties)` or `data(DataSet, X, Y)`, where `DataSet` identifies a particular data set, `Type` specifies the type of graph required for that data set (see below), `Properties` is a list of secondary properties specifying aspects of the appearance of the particular data set (see below), and `X` and `Y` are numeric values for the data set. Graph buffer elements may be matched, added or deleted, in a way analogous to elements of other sub-types of buffer.

There is no limit to the number of data sets that may be stored in one graph buffer. Normally there will be one `type/3` element and several `data/3` elements for each data set. Multiple data sets within a single graph buffer are treated independently, with the limitation that the graph axes and labels are determined by graph properties (see below), and are hence shared by all data sets within a graph.

## Element Types

A `type/3` term specifies the style or type of graph to be drawn for a given data set. The second argument must be one of `scatter`, `line`, or `bar`. The third argument specifies graph properties such as colour and marker style, in the form of a list of secondary properties. Thus, the following `type/3` element:

    type(frequency, bar, [colour(blue), filled(true)])

specifies that the graph associated with `frequency` should be a bar-chart drawn with blue filled bars. This could be changed to a line graph using red filled square markers (and a red line) by replacing the above element with:

    type(frequency, line, [colour(red), marker(square), filled(true)])

Three kinds of secondary property are recognised: `colour`, `filled`, and `marker`. There are four marker types: `square`, `circle`, `cross` and `plus`.

## Properties

Graph buffers inherit all of the properties associated with the parent buffer class, plus additional properties for controlling the appearance of the graph. The full set of properties of a graph buffer is:

**Initialise** (possible values: Each Trial/Each Block/Each Subject/Each Experiment/Each Session; default: Each Trial)
The timing of buffer initialisation is determined by this property. When the value is Each Trial, the buffer will automatically initialise itself at the beginning of each trial. When the value is Each Block, the buffer will initialise itself at the beginning of each block of trials (i.e., contents will be preserved across trials within a block). Similarly, when the value is Each Subject, contents will be preserved across simulated blocks; when the value is Each Experiment, contents will be preserved across simulated subjects; and when the value is Each Session, the contents will be preserved across experiments.

**Decay** (possible values: None/Half-Life/Linear/Fixed; default: None)
This property specifies whether the elements of a buffer decay with time, and if so what pattern of decay is observed.
When the value is None, elements will remain in a buffer until they are either explicitly deleted or are forced out by the buffer overflowing.
When the value is Half-Life, elements decay in a random fashion, with the probability of decay being constant on each cycle. This probability is specified in terms of a half-life (specified by the Decay Constant property). The half-life is the number of cycles it takes, on average, for half of the elements to decay.
When the value is Linear, elements decay in a probabilistic fashion, but with the probability of an element decaying increasing linearly with each cycle it remains in the buffer. The maximum number of cycles an element can spend in the buffer is given by the value of the Decay Constant property. (So if decay is Linear and the decay constant is 10, the probability of an element decaying on the first cycle will be 0.1, the probability of it decaying within two cycles will be 0.2, and so on, with the probability of it decaying within 10 cycles being 1.0.)
When the value is Fixed, elements remain in the buffer for a fixed number of cycles (specified by the Decay Constant property).

**Decay Constant** (possible values: 1 -- 9999; default: 20)
This property determines the decay rate if decay is specified for the buffer. In the case of Half-Life decay, the constant specifies the half-life (in cycles) of buffer elements. A larger number will result in a longer half-life and so a slower decay. In the case of Linear decay, the constant specifies the maximum number of cycles an element may remain in the buffer. In the case of Fixed decay, the constant specifies the number of cycles an element will remain present in the buffer after the element is added to the buffer (provided it is not "refreshed" via Duplicates: No, as discussed above). Again, a larger number will lead to slower decay.

**Limited Capacity** (possible values: on/off; default: off)
This property determines whether the buffer has limited or unlimited capacity. If the value is on, the buffer's capacity is limited to the value specified by the Capacity property, and its behaviour when that capacity is exceeded is governed by the On Excess property; if the value is off, these two properties are ignored.

**Capacity** (possible values: 1 -- 9999; default: 7)
This property specifies the capacity of a buffer in terms of the number of items it may hold. If Limited Capacity is not selected, this property has no effect.

**On Excess** (possible values: Random/Youngest/Oldest/Ignore; default: Random)
The value of this property determines how the buffer behaves when its capacity is exceeded. If the value is Random, then a random element will be deleted from the buffer to make way for the new element. If the value is Youngest, then the most recently added element will be deleted to make way for the new element. If the value is Oldest, then the least recently added element will be deleted to make way for the new element. If the value is Ignore, then the new element will be discarded and the buffer contents will not be altered. If Limited Capacity is not selected, this property has no effect.

**Grounded** (Boolean; default: TRUE)
If this property is set, attempts to add ungrounded terms (i.e., terms containing variables) to the buffer will result in an error message. The rationale for this property is that in most applications adding an ungrounded term to a buffer probably results from a bug in the model, so flagging such occurrences is useful. In some applications, however, it may be reasonable to have ungrounded terms in buffers (e.g., where the terms represent production-like rules). In these cases, the property should not be set.

**Access** (values: Random/FIFO/LIFO; default: Random)
The order in which the buffer's elements are accessed by match operations. Interpretation of the property's values follows that of propositional buffers.

**Title** (values: an arbitrary character string; default: "Title")
The graph's title, which is centred above the graph when the graph is viewed in the Current Graph page or when the graph is printed.

**X Label** (values: an arbitrary character string; default: "X")
The label drawn beside the graph's horizontal axis.

**Autoscale X** (Boolean; default: FALSE)
If this property is set, COGENT will automatically select suitable values for the minimum and maximum coordinates on the X axis (based on the data present on the graph). If the property is not set, the values of the X Min and X Max properties will be used instead.

**X Min** (values: a real number; default: 0.0)
The minimum value of the horizontal coordinate of the graph.

**X Max** (values: a real number; default: 10.0)
The maximum value of the horizontal coordinate of the graph.

**X Units** (values: a positive integer; default: 5)
The number of units into which the horizontal axis of the graph is divided.

**Y Label** (values: an arbitrary character string; default: "Y")
The label drawn beside the graph's vertical axis.

**Autoscale Y** (Boolean; default: FALSE)
If this property is set, COGENT will automatically select suitable values for the minimum and maximum coordinates on the Y axis (based on the data present on the graph). If the property is not set, the values of the Y Min and Y Max properties will be used instead.

**Y Min** (values: a real number; default: 0.0)
The minimum value of the vertical coordinate of the graph.

**Y Max** (values: a real number; default: 100.0)
The maximum value of the vertical coordinate of the graph.

**Y Units** (values: a positive integer; default: 5)
The number of units into which the vertical axis of the graph is divided.

## The Graph View

Graphical buffers may be viewed as graphs by selecting the Current Graph page of the box's notebook. Each `data/3` element is used to construct a data point on the graph for a particular data set, with the second and third arguments specifying the coordinates (and the first argument specifying the data set). The standard printing procedures may be used to print these graphs.

## The Buffer Element Editor

Recall that all information must be represented in COGENT via Prolog terms. Buffer elements are no exception, but they are perhaps the simplest sorts of box elements in COGENT. This is reflected in the simplicity of the buffer element editor. Apart from the comment line, it contains a single text field into which the buffer element should be typed. The contents of this field should be a valid Prolog term; COGENT performs automatic syntax checking (and attempted correction) of editor elements, however, so any error will be noted and (possibly) corrected.
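The Linear decay rule described above gives only cumulative probabilities (an element with decay constant N has decayed within c cycles with probability c/N). One hazard schedule consistent with those numbers is a per-cycle conditional decay probability of 1/(N − c + 1). The following standalone Python sketch (not COGENT code; the help text does not specify the implementation) checks that this hazard reproduces the stated cumulative values:

```python
def linear_decay_cumulative(decay_constant):
    """Cumulative probability that a buffer element has decayed within c cycles,
    under Linear decay as described in the help text (uniform over 1..N cycles)."""
    N = decay_constant
    survival = 1.0
    cumulative = []
    for c in range(1, N + 1):
        hazard = 1.0 / (N - c + 1)   # conditional decay probability at cycle c
        survival *= 1.0 - hazard
        cumulative.append(1.0 - survival)
    return cumulative

cum = linear_decay_cumulative(10)
# cum is [0.1, 0.2, ..., 1.0] up to floating-point rounding, matching the
# example in the Decay property description.
```

The telescoping product makes the survival probability after c cycles exactly (N − c)/N, so the cumulative decay probability is c/N, as the documentation states.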
https://au.mathworks.com/help/optim/ug/generate-and-plot-a-pareto-front.html
# Generate and Plot Pareto Front

This example shows how to generate and plot a Pareto front for a 2-D multiobjective function using `fgoalattain`.

The two objective functions in this example are shifted and scaled versions of the convex function $\sqrt{1+x^2}$. The code for the objective functions appears in the `simple_mult` helper function at the end of this example.

Both objective functions decrease in the region $x \le 0$ and increase in the region $x \ge 1$. In between 0 and 1, $f_1(x)$ increases and $f_2(x)$ decreases, so a tradeoff region exists. Plot the two objective functions for $x$ ranging from $-1/2$ to $3/2$.

```matlab
t = linspace(-1/2,3/2);
F = simple_mult(t);
plot(t,F,'LineWidth',2)
hold on
plot([0,0],[0,8],'g--');
plot([1,1],[0,8],'g--');
plot([0,1],[1,6],'k.','MarkerSize',15);
text(-0.25,1.5,'Minimum(f_1(x))')
text(.75,5.5,'Minimum(f_2(x))')
hold off
legend('f_1(x)','f_2(x)')
xlabel({'x';'Tradeoff region between the green lines'})
```

*(Figure: plot of $f_1(x)$ and $f_2(x)$ with the tradeoff region between the green lines.)*

To find the Pareto front, first find the unconstrained minima of the two objective functions. In this case, you can see in the plot that the minimum of $f_1(x)$ is 1 and the minimum of $f_2(x)$ is 6, but in general you might need to use an optimization routine to find the minima.

In general, write a function that returns a particular component of the multiobjective function. (The `pickindex` helper function at the end of this example returns the $k$th objective function value.) Then find the minimum of each component using an optimization solver. You can use `fminbnd` in this case, or `fminunc` for higher-dimensional problems.

```matlab
k = 1;
[min1,minfn1] = fminbnd(@(x)pickindex(x,k),-1,2);
k = 2;
[min2,minfn2] = fminbnd(@(x)pickindex(x,k),-1,2);
```

Set goals that are the unconstrained optima for each objective function. You can simultaneously achieve these goals only if the objective functions do not interfere with each other, meaning there is no tradeoff.

```matlab
goal = [minfn1,minfn2];
```

To calculate the Pareto front, take weight vectors $[a, 1-a]$ for $a$ from 0 through 1. Solve the goal attainment problem, setting the weights to the various values.

```matlab
nf = 2; % Number of objective functions
N = 50; % Number of points for plotting
onen = 1/N;
x = zeros(N+1,1);
f = zeros(N+1,nf);
fun = @simple_mult;
x0 = 0.5;
options = optimoptions('fgoalattain','Display','off');
for r = 0:N
    t = onen*r; % 0 through 1
    weight = [t,1-t];
    [x(r+1,:),f(r+1,:)] = fgoalattain(fun,x0,goal,weight,...
        [],[],[],[],[],[],[],options);
end
figure
plot(f(:,1),f(:,2),'ko');
xlabel('f_1')
ylabel('f_2')
```

*(Figure: the computed Pareto front, $f_2$ plotted against $f_1$.)*

You can see the tradeoff between the two objective functions.

## Helper Functions

The following code creates the `simple_mult` function.

```matlab
function f = simple_mult(x)
f(:,1) = sqrt(1+x.^2);
f(:,2) = 4 + 2*sqrt(1+(x-1).^2);
end
```

The following code creates the `pickindex` function.

```matlab
function z = pickindex(x,k)
z = simple_mult(x); % evaluate both objectives
z = z(k); % return objective k
end
```
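The same front can be sketched outside MATLAB. The Python snippet below substitutes a simple weighted-sum scalarization over a grid for `fgoalattain` — a cruder technique than goal attainment, but sufficient here because both objectives are convex:

```python
import math

def simple_mult(x):
    """The two objectives from the example."""
    return (math.sqrt(1 + x**2), 4 + 2 * math.sqrt(1 + (x - 1)**2))

def weighted_sum_front(n_weights=50, grid=2001):
    """Approximate the Pareto front by minimizing t*f1 + (1-t)*f2
    over a grid of x values in [-0.5, 1.5], for t from 0 through 1."""
    xs = [-0.5 + 2.0 * i / (grid - 1) for i in range(grid)]
    front = []
    for r in range(n_weights + 1):
        t = r / n_weights
        best = min(xs, key=lambda x: t * simple_mult(x)[0]
                                     + (1 - t) * simple_mult(x)[1])
        front.append((best, simple_mult(best)))
    return front

# Every minimizer falls in the tradeoff region [0, 1], and the front runs
# from f1 = 1 (at x = 0) to f2 = 6 (at x = 1).
```

Because each weighted sum is convex and the two objectives have their minima at $x=0$ and $x=1$, every scalarized minimizer lands in the tradeoff region, mirroring the MATLAB result.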
https://nontrivialproblems.wordpress.com/tag/scientific-units/
# When Less is More, part 2: Flipping the Sign

In part 1, we looked at absolute temperature scales. Before an aside about not setting your kitchen on fire, we defined absolute zero as the point where atoms have no more energy from their motion (or kinetic energy). We've now also nicely set ourselves up for the "big deal". If absolute zero is where there's no kinetic energy, how do you get below that? Can atoms somehow have negative kinetic energy? The simple (and relevant to this discussion) answer is no.

Instead, we need to add something we've been neglecting from our discussion of temperature. So far we've only talked about heat. But another concept is used in the "relevant" definition (I'll have more to say on this below) of temperature for this experiment: entropy, which is basically the amount of disorder in a system. If you took an advanced physics or chemistry class in high school (or maybe a first-year class in either of those fields in college), you might have learned the second law of thermodynamics, which states that entropy tends to increase, and must increase in closed systems. It also defines how entropy changes in physical processes: at a given (absolute) temperature T, if an amount of heat dQ flows into a system (we can also use a negative sign for heat flowing out), then the entropy S changes by an amount dS given by

$$dS = \frac{dQ}{T}$$

Of course, definitions can sometimes work both ways, and that's what physicists decided to do. Based on this, they solve for T, and we "officially" define temperature as

$$T = \frac{dQ}{dS}$$

And so with this definition, we can see where a negative temperature comes from: anytime the heat flow and entropy change have different signs, T should be negative.

We can also be more general in that last equation, and look at temperature as a function of how entropy changes with respect to energy (if you've taken calculus, I mean the derivative of entropy with respect to energy), and temperature is

$$\frac{1}{T} = \frac{d}{dE}S(E)$$

Of course, "anytime" would still be rare in our everyday experience. At the low energies and macroscopic scales of normal human life, heat (or energy) and entropy increase together. But scientists can set up quantum systems (read: basically any system on an atomic scale) where we break that trend. This is where a more basic understanding of entropy comes in. If you have a system where atoms can have two different energies, the greatest entropy is when half of the atoms are in each energy state. This is because that allows for the greatest number of combinations of atoms. Look at this plot of the total number of possible ways of choosing a combination of k atoms from a set of n atoms, based on the formula $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$, in this case where n is 50.

*(Figure: plot of the number of combinations $\binom{50}{k}$ against k.)*

As you can see, the number of combinations explodes close to the halfway point. Also note the symmetry of the combination curve in this case. So let's say with our 50 atoms, k equals the number of atoms in the second, higher energy state. (For those who know about electron orbitals, realize here that we're not talking about the energy levels of the electron; we're actually talking about the energy state of the whole atom. This can come up in systems where atoms are placed in magnetic fields.) In most everyday systems, you'd have a lot of atoms in the lower state, and so we have a small k. Adding energy kicks atoms up to the higher state, so k increases, and you see the number of possible configurations (and therefore the entropy) also increases. But the opposite happens if we start with a high energy system, where k is already close to 50. In that case, adding energy would decrease the entropy, and so by the third equation, we have a negative temperature. Ta da!

The idea of a "simple" energy state system having a negative temperature is actually kind of old hat; i.e., we did it already. (In fact, if you know how lasers work, this basically describes the population inversion of electrons; we just don't typically ascribe a "temperature" to electrons confined in solids.) So what's the big deal about this new research? While the atoms were in a gas state, the team was able to get them in a lattice arrangement in space and actually prevented motion of atoms. By rapidly adjusting the magnetic field and lasers to trap most of the atoms in a high energy state, they made what should have been an energetically unstable arrangement (like a pencil balanced on its tip) stable.

Now that we have a spatially spread out system at a negative temperature, we can also test lots of interesting physics. For example, the combination of stability on millisecond time scales (which is LONG for particle physics) and high energies can allow for unique probing of the Standard Model, such as the formation of structures defined by the strong force. The researchers also point out that in this negative temperature regime, they also experience negative pressure, which is similar to the force believed to drive the expansion of the universe.

So that paragraph ends the NEW science, but I promised I would go more into the definition of temperature. Or more accurately, why many people (look at the comments on the review) seem to complain about how this really isn't a negative absolute temperature. Especially as those slightly more in the know freak out when they learn that systems with negative absolute temperature behave as if they were hotter than a temperature of positive infinity. I'll explain that confusing fact briefly. If you put a negative temperature object next to a positive temperature object and there's no energy source, heat will flow from the negative temperature object. As I mentioned, the 2nd Law of Thermodynamics says entropy increases in closed systems. Adding energy to the negative temperature system would decrease that system's entropy (positive temperature systems lose entropy when they lose energy, and the negative system loses entropy when it gains energy). To maximize the entropy, heat needs to flow out of the negative temperature object to the positive temperature object, bringing both objects closer to that giant entropy peak in the middle.

Those who complain that the temperature is only negative because of a weird definition resort to arguments about kinetic energy. Unless you take a course on statistical mechanics, you almost never see the definition of temperature we just derived here. Instead, you typically talk about how temperature is a measure of the average kinetic energy of the atoms in a system, and absolute zero is then described as the point where all atomic motion stops. This then leads people to an obvious question: "How can atoms move less than being stopped?"

It's a good question. The problem is that the basic definition people are using isn't the right one. As one of the wonderful SciBloggers explains, the kinetic energy is important, but not in the way that simplification leads most people to believe. It's not the mere average of atomic energies that matters, it's the nature of the probability distribution of energy around the average. In statistical physics, this is described by the (Maxwell-)Boltzmann distribution.

*(Figure: plot of the number of atoms at each velocity for a variety of temperatures based on the Boltzmann distribution (given in Celsius, not Kelvin). A plot for energy would be similar. From Wikipedia.)*

In the figure, you'll see that the average velocity does increase with temperature. Another trend is that the distribution widens with increasing temperature. And if you analyzed the graphs, you'd see that most atoms have less energy than the average energy. For a negative temperature system, these features are slightly tweaked. The average energy increases with temperature (so -1 K has a higher average energy than -100 K). The distribution widens at lower temperatures. And the big deal is that most particles in a negative temperature system have a higher kinetic energy than the average. This would look like the graph of the Boltzmann distribution if we say the temperature is negative, and so that's why we go with that. So it's not that physicists have lied, it's just kind of bizarre compared to our normal experience.

There's also one slightly more intuitive explanation that I've been using to explain the concept of negative temperature. First, I need to dispel another aspect of the definition that people are confused about. Atoms at absolute zero don't have zeroes of everything else. Quantum mechanics says there is a zero-point, or minimum, energy that all objects have in a system. And I'm pretty sure the uncertainty principle requires anything with a non-zero energy to have a momentum, which means it must move. Instead, physicists define absolute zero as a minimum entropy point. In order to minimize the entropy of a negative temperature system, we have to add energy to it. In other words, we have to ADD heat to a negative temperature system to bring it to absolute zero.

# When Less Is More, part 1: What Is Temperature?

So the science regions of the blogosphere recently exploded with a new paper claiming to have made a material with a negative absolute temperature. Those last three words together may sound weird to the uninitiated (and residents of the Plains and Northeast may wonder why physicists need all that newfangled equipment to go below zero degrees), but they need to be combined in that phrase in physics. If you've ever been prone to intense existential thought over your weather forecast, you might realize the way we think of 0 degrees in everyday life seems a bit arbitrary compared to what 0 means for other units. Most of the time when we say something is 0, we mean there's none of that quality. A length or mass of 0 means there is nothing there. A speed of 0 means something isn't moving. Zero current means there's no electricity. A bank account with $0 means you have no money. And in units that allow negative amounts, these counteract positive values. A negative velocity means you're going in the direction opposite your reference. A negative financial statement means you've lost money or owe money (and so take away from positive values of money you had).

So what do these mean for temperature? First, it's important to note that while we deal with temperature every day, it's actually kind of an ephemeral unit. It's not the same thing as energy, but it is related to it. And in scientific contexts, temperature is not the same as heat. Heat is defined as the transfer of energy between bodies by some thermal process, like radiation (basically how old light bulbs work), conduction (touching), or convection (heat transfer by a fluid moving, like the way you might see soup churn on a stove). So as a kind of approximate definition, we can think of temperature as a measure of how much energy something could give away as heat.* (Edit: A friend of mine has come up with a good analogy: temperature is like pressure, but for heat. Mass moves from high to low pressure areas, and heat flows from high to low temperature objects.) And so now, we realize there's nothing particularly special about 0 degrees in our everyday scales, in both Fahrenheit and Celsius. I mean, sure, there is a marker that these definitions came from. Fahrenheit's 0 point is the temperature to which a mixture of water ice and ammonium chloride (which Scandinavians might recognize as the flavoring of salty licorice) always stabilizes. And as nearly all middle schoolers know, Celsius is 0 at the temperature at which water freezes.

While these are cold temperatures to our human flesh, you know things at 0 degrees in either scale can still manage to give up heat (just not to you and your warm 98.6 °F body). This is where absolute temperature comes into play. For the field of thermodynamics, where temperatures and heat transfer are important in studying energy, scientists quickly realized that their old zero points were meaningless when trying to talk intelligently about energy. So they made new scales with "absolute zero", a point where there is no more energy (at least from the motion of atoms). This point is about -273 °C and -460 °F. This is also partially why anyone attempting to cook a recipe twice as fast by "doubling" the temperature on their oven/stove/whatever is doomed to failure: going from, say, 250 °F to 500 °F on your oven hasn't doubled the temperature. Instead, you would need to go from 710 degrees Rankine (the absolute temperature scale based on the Fahrenheit intervals; ask a chemical engineering friend if they've worked in Rankine) to 1420 °R, which would be about 960 °F and would lead to a really amusing story with your oven manufacturer and/or the fire department.

You'll notice a first on the blog here, and that's the presence of a "part 1". I haven't actually gotten to the new science of the article yet, but I wanted to set up the background here. Scientific units actually have fascinating histories and I got incredibly caught up researching temperature for this part, so I ended up with way more information than I originally planned on.

*This is a really loose definition of temperature. Don't ever use this on homework, ever. Consider it more of a guideline.
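The two-level counting argument from part 2 is easy to check numerically. Using Boltzmann's entropy S = ln(number of configurations) with k_B = 1, and taking the system's energy to be proportional to k (a simplification of mine, not from the original posts), the sign of 1/T = dS/dE flips exactly at the halfway point:

```python
import math

def entropy(n, k):
    """S(k) = ln C(n, k): log of the number of ways to put k of n two-level
    atoms in the upper energy state (Boltzmann entropy with k_B = 1)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

n = 50
S = [entropy(n, k) for k in range(n + 1)]

# The entropy peaks when half the atoms are in each state (k = 25).
peak = max(range(n + 1), key=lambda k: S[k])

# Below the peak, adding energy (raising k) raises S, so T > 0;
# above the peak, adding energy lowers S, so T < 0.
```

Here `S[k+1] - S[k]` plays the role of dS/dE: it is positive for k below 25 and negative above, which is exactly the sign flip that makes the temperature negative.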
https://www.arxiv-vanity.com/papers/hep-th/0301111/
[ "# Charge density and electric charge in quantum electrodynamics\n\nG. Morchio,\nDip. di Fisica, Universita’ di Pisa and INFN, Pisa, Italy\nF. Strocchi,\nScuola Normale Superiore and INFN, Pisa, Italy\n###### Abstract\n\nThe convergence of integrals over charge densities is discussed in relation with the problem of electric charge and (non local) charged states in Quantum Electrodynamics (QED). Delicate, but physically relevant, mathematical points like the domain dependence of local charges as quadratic forms and the time smearing needed for strong convergence of integrals of charge densities are analyzed. The results are applied to QED and the choice of time smearing is shown to be crucial for the removal of the vacuum polarization effects responsible for the time dependence of the charge (Swieca phenomenon). The possibility of constructing physical charged states in the Feynman-Gupta-Bleuler gauge as limits of local state vectors is discussed, compatibly with the vanishing of the Gauss charge on local states. 
A modification by a gauge term of the Dirac exponential factor which yields the physical Coulomb fields from the Feynman-Gupta-Bleuler fields is shown to remove the infrared divergence of scalar products of local and physical charged states, allowing for a construction of physical charged fields with well defined correlation functions with local fields.

## 1 Introduction

The simple relation between charge density and electric charge in classical electrodynamics does not extend trivially to the quantum case, because of problems due to vacuum polarization and infinite volume integration.

Quite generally, the relation between local charges and global conserved charges has been extensively discussed in the seventies, in relation with the proof of the Goldstone theorem [1, 2, 3, 4], and it has become standard wisdom in the quantum field theory (QFT) framework, in which all the relevant information is carried by the local states.

The problem changes substantially if the relevant charged states are non local, as is the case in Quantum Electrodynamics (QED). As a consequence, one cannot rely on the standard strategy of controlling the convergence of local charges on the domain of local states, and in fact the limit of local charges, as quadratic forms, crucially depends on the domain which is considered. Moreover, as discussed by Swieca, on the charged states obtained by applying Coulomb fields to the vacuum, the local charge given by the integral of the density with the standard smearing in space and time does not converge to the electric charge and its limit is even time dependent.

This difficulty requires an analysis of the convergence of suitably time-smeared integrals of the charge density; as we shall see, not only does the standard time smearing not work, but also Requardt’s space-time smearing prescription requires a modification in order to obtain the correct result for the renormalized charge.
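For orientation, the two competing time smearings discussed in Sections 2 and 3 can be put side by side in the paper's own notation; this merely restates eqs. (2.3) and (3.1) below, with f_R the spatial cutoff function, and adds no new assumption:

```latex
% Standard smearing (eq. (2.3)): test function \alpha fixed in time
\tilde{Q}_R \;=\; j_0(f_R\,\alpha)
           \;=\; \int d^4x\; j_0(\mathbf{x},x_0)\, f_R(\mathbf{x})\,\alpha(x_0),
\qquad \int dt\,\alpha(t) = 1 ;

% Requardt smearing (eq. (3.1)): time support scaled with R
Q_R \;=\; j_0(f_R\,\alpha_R),
\qquad \alpha_R(x_0) \;=\; \alpha(|x_0|/R)/R .
```

On local states the two prescriptions give the same, α-independent limit; on the non local Coulomb charged states only a (suitably modified) scaled smearing removes the α-dependence responsible for the Swieca phenomenon.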
Actually, the basic point is the control of the construction of charged states, which is related to the infrared problem and is a deep non perturbative problem, both in the general algebraic approach and in the approach which uses field operators.

Even in perturbation theory a rigorous control on the construction of charged states is far from trivial. In the (positive) Coulomb gauge the (non local) charged fields are difficult to handle and the standard strategy is to use a local formulation at the expense of positivity, as in the Feynman-Gupta-Bleuler gauge. In this case, the charged states should be obtained by an appropriate construction in terms of local (unphysical) states. Such a possibility has been advocated by Dirac and Symanzik [9, 8], who proposed explicit formulas for non local charged (Coulomb) fields in terms of the local Feynman-Gupta-Bleuler fields. Such a construction, which involves non trivial ultraviolet and infrared problems, has recently been refined by Steinmann [10, 13] on the basis of a perturbative expansion.

An important issue is whether the above states can be constructed only in terms of expectations of the observables or whether they exist as vectors in a space in which local states are dense. In the latter case, the control of limits of local states requires a topology, and the topology defined by the Wightman functions of the local fields is too weak to give a unique space; thus, the possibility of reaching the physical charged states, characterized by a Coulomb delocalization, depends on the choice of a topology. For example, the implicit use of the standard (Krein) metric on the asymptotic fields excludes the presence of charged states in the corresponding physical space, as pointed out by Zwanziger in his investigations on the infrared problem in QED. A possible non perturbative construction of physical charged states as limits of local states was discussed in Ref.
, with the use of a Hilbert-Krein topology which takes into account the effects of the infrared problem. In our opinion, the non uniqueness of such Hilbert-Krein majorant topologies, which are associated to the Wightman functions of the local fields in order to obtain weakly complete inner product spaces of states, should not be regarded as a mathematical oddness, being related to the allowed large distance behaviour or ”boundary conditions” at infinity.\n\nThe possibility of constructing physical charged states as limits of local state vectors in a weak topology has been recently denied   on the basis of an argument by which the local Gauss charge, corresponding to the integral of , vanishes on the local states and therefore on any weak closure of them; thus no weak closure of the local states could contain physical charged states. A main conclusion of our analysis is that the assumptions involved in the argument underestimate the delicate rôle of such topologies for the convergence of local charges in QED.\n\nIn view of the problems which arise in QED, in Section 2 we discuss in general charges defined as limits of quadratic forms, their crucial dependence on the domain and their relation to global charge operators; in particular, attention is paid to the case in which the relevant domains arise by applying non local field operators to the vacuum.\n\nIn Section 3 we consider the problem of weak convergence of local charges, which is shown to be very relevant for the Steinmann argument. Strong convergence on the vacuum is shown to be a general consequence of a stronger version of Requardt’s theorem, which also allows for an improved time smearing procedure, necessary for obtaining the correct value of the charge on Coulomb charged states. 
Such a time smearing procedure avoids the time dependence effects due to vacuum polarization while preserving the correct value of the charge.\n\nIn Section 4, we discuss convergence of Gauss local charges on physical charged states, on the basis of the standard local formulations of QED, the Feynman-Gupta-Bleuler gauge. Quite generally, independently of the use of a Hilbert-Krein topology, it is shown that the construction of physical charged state vectors as limits of local states in a weak topology is incompatible with convergence, in the same weak topology, of the Gauss local charges, even with a time smearing a la Requardt, on local states. A simple model is discussed which mimics the relation between charge density and charge in QED and displays the compatibility between the vanishing of the Gauss charge on a dense domain of local states and its strong convergence to a non zero electric charge on the physical space. In Section 5 we compare the construction of physical charged states of Ref. with the DSS construction analyzed by Steinmann  [10, 13]. We show that the infrared divergence in the matrix elements of physical charged states with local states is avoided by the modified DSS exponential used in Ref. , which only differs from the standard factor by a gauge term. In this way one removes the obstruction pointed out by Steinmann   as an argument for the impossibility of constructing physical charged fields with well defined correlation functions with local fields.\n\n## 2 Charges as limits of quadratic forms\n\nThe analysis of the charge operator in QED presents subtle features arising from the Coulomb delocalization of charged states  [5, 12]. 
It is therefore convenient to start with an analysis of charges as integrals over a local density on general (not necessarily local) domains.

In this section we shall show that i) charges defined as limits of quadratic forms on dense domains in general (including the quantum field theory case) crucially depend on the domain; e.g. may converge to zero on and have a non zero limit on if is not dense, ii) such a phenomenon cannot occur if converges weakly on and on .

Quite generally, in quantum field theory the problem of associating an (unbroken) charge to the integral over a local density

$Q_R=\int_{|\mathbf{x}|\le R} d\mathbf{x}\; j_0(\mathbf{x},0),\qquad \partial^\mu j_\mu=0,$

is delicate and deserves special attention. Intuitively, one thinks of defining a state of charge , as satisfying

$Q\Psi=\lim_{R\to\infty} Q_R\Psi = q\,\Psi,$

but as emphasized by Schroer and Stichel, the limit does not exist as a weak limit, even if some smearing in time is made with and even if is a local state, briefly . In the latter case, the limit exists as a sesquilinear form on

$\lim_{R\to\infty}(\Phi,\,Q_R\Psi)=Q(\Phi,\Psi),\qquad \Phi,\Psi\in D_0.$

Furthermore, if defines an unbroken symmetry on the local fields the limit sesquilinear form defines an (hermitean) operator on .
i) Domains and limits of quadratic forms
In general, the limit of hermitean operators as forms on domains , crucially depends on the domain ; in particular, the limit on does not constrain the limit on , .

Such a domain dependence in general persists, as shown by the example below, even if converges to an hermitean sesquilinear form on satisfying the boundedness condition

$|Q(\Phi,\Psi)|\le C_\Psi\,\|\Phi\|,\qquad \forall\,\Phi,\Psi\in D,$ (2.1)

and therefore identifies an (hermitean) operator with domain . Furthermore, even if eq.(2.1) holds, it is not at all guaranteed that, ,

$(\chi,\,Q\Psi)=\lim_{R\to\infty}(\chi,\,Q_R\Psi),\qquad \forall\,\Psi\in D.$ (2.2)

In fact, such an equation means that converge weakly.
By the convergence of on , weak convergence of is equivalent to the boundedness of the norms , for each fixed .

In particular, as shown by the example below, even if converges to zero , one cannot conclude that , . (This also shows that the failure of eq.(2.2) does not depend on converging to an unbounded or a bounded operator.)

The general phenomenon is that, if converge to an operator on and to an operator on , the two operators and are in general not related, in the sense of the following

###### Definition 2.1

Two densely defined hermitean operators are said to be related if there is an hermitean operator of which and are restrictions. They will be said to be weakly related if there is a densely defined hermitean operator to which both and are related.

The above relations are symmetric and the second notion is strictly weaker, since e.g. different self adjoint extensions of an hermitean operator are not restrictions of the same hermitean operator. An example of limits of quadratic forms which define operators that are not weakly related is given below.

Example. Let us consider the space of functions vanishing at the origin, the linear span of and

Clearly, both and are dense domains; in fact, if is orthogonal to one has

$c_n\equiv(f,\sin nx)=\alpha_n\,(f,\sin x)=\alpha_n c_1,\qquad n\ge 2.$

Furthermore

$0=(\pi/2)\int_0^\pi dx\, f(x)=\sum_{n\ge 1}c_n\int_0^\pi dx\,\sin nx=2\Big(\sum_{n\ge 2}\alpha_{2n}c_1+c_1\Big)$

implies , i.e. . Now, let be the multiplication operator by a regular function converging to as a distribution; then

$(D_0,\,Q_R D_0)\to 0,\qquad (D_1,\,Q_R D_1)\to (D_1,\,P_1 D_1)\neq 0,$

with the projection on .
Thus, the limits of the hermitean operators define two bounded operators which are not even weakly related.\n\nConvergence of on to an operator constrains convergence to an operator on any domain , such that is dense.\n\n###### Proposition 2.1\n\nLet the hermitean operators converge to an operator on and to an operator on ;\ni) if is dense, then and are weakly related\nii) if , then and are related\niii) in both cases, if is essentially selfadjoint on , then is contained in the closure of\niv) if and are not related, then does not converge to an operator on .\n\nProof   The hermiticity of implies that both are densely defined hermitean operators and so is their restriction to . In case ii) extends , in case i) and extend . If converge to on , both and are restrictions of , so that and are related.\n\n###### Proposition 2.2\n\nIf both and converge weakly, then the two limits define hermitean operators and which are related\n\nProof   Hermiticity of the limit forms follows from that of and the existence of weak limits implies that the limit forms define operators on and on . The weak limit of exists also on and by the same argument defines an hermitean operator which extends and .\n\nAs a result, if is essentially selfadjoint, is contained in its closure and in particular if , also , in other terms if converges to zero weakly and converges weakly, then .\nii) Convergence of local charges in quantum field theory\nA general situation which occurs in quantum field theory is described in terms of translational invariant (field) algebras , a (unique translationally invariant) cyclic vector , domains\n\n D0=A0Ψ0,D1=A1Ψ0,\n\nand local hermitean charges , with domains containing  and   and with . 
In general,   is the integral of the zero component of a local conserved (operator valued tempered distribution) current with suitable smearing:\n\n QR=∫d4xj0(x,x0)fR(x)α(x0)=j0(fRα), (2.3)\n fR(x)=f(|x|/R)∈D(R3),f(x)=1,if|x|≤1,f(x)=0,if|x|≥2,\n α∈D(R),suppα⊂[−a,a],a<1,∫dtα(t)=1.\n\nIf is a local (field) algebra and converges as , the limit defines an operator iff\n\n limR→∞(Ψ0,[QR,A0]Ψ0)=0, (2.4)\n\nequivalently   iff\n\n limR→∞(D0,QRΨ0)=0. (2.5)\n\nNon local algebras may be relevant in the discussion of non local states, e.g. asymptotic states, or charged states in the Coulomb gauge; a local and a non local field algebra, and , occur in the construction of charged states in QED.\n\n###### Proposition 2.3\n\nLet an algebra invariant under translations; if on  , converge to an operator , then\n\n limR→∞(D,QRΨ0)=0. (2.6)\n\nProof   The spectral representations of the space translations gives\n\n ((U(a)−1)4AΨ0,QRΨ0)=∫dJA(k)(eik⋅a−1)4R3~f(Rk),∀A∈A (2.7)\n\nwhere\n\n |(eik⋅a−1)4R3~f(Rk)P(k)|≤|Rk⋅a|4R|~f(Rk)P(Rk)||P(k)P(Rk)|≤CR→0,\n\nin the limit , the r.h.s. of eq.(2.6) converges to zero and therefore, by the density of , one has\n\n (U(a)−1)4QΨ0=0,∀a.\n\nThen, since is a normal operator, it follows that and by the uniqueness of the translationally invariant state ; actually , because .\n\nThus, under the same assumptions, one has that the charge defined in terms of the limit of the commutator  , coincides with , i.e.\n\n (D,Q′AΨ0)≡limR→∞(D,[QR,A]Ψ0)=limR→∞(D,QRAΨ0). (2.8)\n\nThe domain dependence of charge operators obtained as limits of quadratic forms appears also in the above quantum field theory framework. In particular, as a result of Proposition 2.1, if converges to zero on , the convergence to a non zero operator on is excluded if is dense, but may be allowed if is not dense, even if .\n\nSuch features are illustrated and displayed by the following Example.\nExample. 
Let be a massless scalar field, a free Dirac field, the algebra generated by and by and the algebra generated by and by\n\n ψd(x)=ψ(x)U(x),U(x)=eiϕ(fx)\n ϕ(fx)=∫dyϕ(y)f(y−x),f∈D(R4),∫dxf(x)=1.\n\nThen we consider the local charges\n\n QRϕ≡∂0ϕ(fRα,),QRψ≡j0(fRα),jμ(x)=:¯ψγμψ:,\n QR=QRψ+QRϕ\n\nand the Fock representation of , with Fock vacuum . Since by locality\n\n limR→∞[QRϕ,A0]=0,limR→∞(D0,QRΨ0)=0,\n\nwe have\n\n limR→∞(D0,QRD0)=(D0,QψD0),\n\nwhere is the unbroken fermionic charge. On the other hand, since we have\n\n limR→∞(D,QRD)=0.\n\nIn conclusion converge to the unbroken fermionic charge on and to the zero charge on .\n\nIt is worthwhile to note that the limit of the operators does not define an operator on , where , (since the corresponding bilinear form is discontinuous on the left). Moreover, one has a symmetry breaking condition on the algebra generated by and : converges weakly (actually strongly) and\n\n limR→∞(Ψ0,[QR,ψ†ψd]Ψ0)≠0.\n\nThis fact is actually a consequence of and being not related. In general if converges on to operators which are not related, then, for the algebra generated by and , one cannot have both weak convergence of and\n\n limR→∞(Ψ0,[QR,A]Ψ0)=0. (2.9)\n\nIn fact, by eq.(2.6),\n\n (Di,QiAΨ0)≡limR→∞(Di,[QR,A]Ψ0),∀A∈Ai.\n\nNow, if eq.(2.9) holds, by a standard argument   one gets an hermitean operator on , which extends and , in contrast with their being not related.\n\n## 3 Convergence of time smeared integral of charge density. The vacuum sector of QED\n\nIn this section we discuss weak and strong convergence of local charges, in particular in the vacuum sector of QED.\n\nAs found by Requardt  , the weak limit of on local states can be obtained under general conditions by a suitable time smearing of the charge density, namely by considering, with as in eq.(2.3),\n\n QR≡j0(fRαR),αR(x0)≡α(|x0|/R)/R. 
(3.1)\n\nActually, one can strengthen Requardt’s theorem and obtain strong convergence (Proposition 3.1), also with a more general time smearing , which will prove necessary in the charged sectors of QED.\n\nWe recall that if is a Lorentz covariant conserved tempered current, the two point function of the charge density is of the form\n\n =−ΔJ(x−y),\n\nwith a Lorentz invariant tempered distribution of positive type; we denote by the spectral measure defined by .\n\n###### Proposition 3.1\n\nIf the spectral measure satisfies the (infrared) regularity condition\n\n dν(k2)=k2dσ(k2),dσameasure,\n\nthen, putting one has\ni)\nii) for all functions with , satisfying and ,\niii) if, for , , the above strong convergence to zero is obtained by choosing .\n\nProof   In fact, one has\n\n ||QR,TΨ0||2=∫dν(k2)d3q|q~f(q)|22√(|q|/R)2+k2)R|~α(T√(|q|/R)2+k2)|2.\n\nSince is of fast decrease,\n\n |~α(T√(|q|/R)2+k2)|2≤CN1+((T|q|/R)2+T2k2)N≤CN1+(T2k2)N,\n\nand since is tempered there is an such that is a finite measure. Then, by taking , one has\n\n ||QR,TΨ0||2≤C′RT∫dσ′(s2)Ts(1+T2s2)2≡RTG(T).\n\nThe integrand function is bounded and converges to zero pointwise, when , so that by the dominated convergence theorem . Thus i) is proved; moreover strong convergence to zero holds if one chooses and ii) follows since ,\n\n ∫∞εdσ′(k2)T√k2/(1+T2k2)2=O(1/T3).\n\nIf the hypothesis of iii) holds one can bound the integral from to by\n\n C∫ε0ds2Ts(1+T2s2)2≤CT2∫∞0du2u(1+u2)2=O(1/T2).\n\nThus, the strong convergence to zero is obtained if .\n\nIn the physical vacuum sector of QED the assumptions of Proposition 3.1 for the spectral measure of the electric current are satisfied since\n\n <∂F0(x)∂F0(y)>=∫k2dρ(k2)d3k|2√k2+k2|−1k2eik(x−y),∂Fμ≡∂νFμν,\n\nwith the spectral measure of the two point function of . 
Hence, for as in ii) of Proposition 3.1,\n\n limR→∞||∂F0(fR,αT(R))Ψ0||2=0 (3.2)\n\nand therefore converges strongly to zero on the dense domain obtained by applying local bounded observable operators to the vacuum. Eq.(3.2) with was also obtained by D’Emilio  .\n\nThe situation is completely different if one adopts the standard smearing  [1, 2], with a fixed ,\n\n ~QR=j0(fRα).\n###### Proposition 3.2\n\nThe operators have the following properties\ni) they converge to zero on\nii) does not converge weakly in , nor does , a bounded local operator\niii) there are vectors such that\n\n limR→∞<Ψ,~QRΨ0>\n\ndepends on the time smearing test function (time dependence of the charge)\niv) there are operators such that,\n\n limR→∞<Ψ0,[~QR,F]Ψ0>≠0\n\n(Swieca phenomenon  )\n\nProof   Since in the physical vacuum sector , i) follows by locality and Maison theorem  .\n\nFor ii), the same calculation done above for now gives\n\n ||~QRΨ0||2=R∫k2dρ(k2)d3q|q~f(q)|22√(|q|/R)2+k2)R|~α(√(|q|/R)2+k2)|2,\n\nso that cannot converge weakly. Furthermore,\n\n ~QRUΨ0=[~QR,U]Ψ0+U~QRΨ0\n\nand the first term on the r.h.s converges by locality; since the second term does not converge weakly, neither does the l.h.s.\n\nIn order to construct the vector of iii) we consider\n\n ΨR≡F0i((∂iΔ−1g)fRh)Ψ0,g∈D(R3),h∈D(R).\n\nSuch vectors converge strongly to a vector , for , since the Fourier transform of is square integrable with respect to the measure defined by the Fourier transform of . Then, we have\n\n limR→∞<Ψ,~QRΨ0>=limR→∞∫dρ(k2)d3k|2k0|−1k2~fR(k)~α(k0)¯~g(k)¯~h(k0)→\n ¯~g(0)∫d ρ(m2)m~α(m)¯~h(m),\n\nwhich displays the dependence on .\n\nThe operators converge strongly to an operator on the dense domain the algebra of strictly localized (bounded) observables, since they converge strongly on and becomes independent of , for sufficiently large by locality. 
Then, we have\n\n limR→∞<Ψ0,[~QR,F]Ψ0>=¯~g(0)∫dρ(m2)m(~α(m)¯~h(m)−~α(−m)¯~h(−m))\n\nwhich does not vanish in general.\n\nThe vector reflects the infrared behaviour of ”dipole states” of the form , where is the electron field in the Coulomb gauge, constructed, e.g., according to the Dirac-Symanzik-Steinmann  [8, 10] prescription. Thus, in QED, even in the vacuum sector, the naive idea of the charge as the integral of the charge density gives rise to substantial problems because of vacuum polarization effects which disappear only with a suitable time smearing. The same problems arise in the charged sectors of the Coulomb gauge, as stressed by Swieca  ; they are a general consequence of the non locality of the charged Coulomb fields.\n\nIn general, the standard procedure, eq.(2.3), corresponds to taking, in the corresponding correlation functions in momentum space, the limit and gives a function in only in expectations on local states. On the other hand, Requardt time smearing corresponds to taking a limit on the light cone; in expectations on local states, it coincides with that of the standard smearing and it is independent. As discussed in the Appendix, independence does not hold on the (non local) charged states of QED and therefore a modification of Requardt’s prescription is required for QED.\n\n## 4 Charge density and charge in local formulations of QED\n\nThe relation between charge density and charge presents further subtle aspects in the charged sectors. As a consequence of the local Gauss’ law, charged states cannot be local. 
In this section we discuss the limit of the Gauss charges

$Q^G_R=(\partial F)_0(f_R\,\alpha_R)$

as quadratic forms on local and on physical charged states in the Feynman-Gupta-Bleuler formulation of QED, and the implications on the possibility of constructing physical state vectors as weak limits of local states.

In the Coulomb gauge, since the charged fields are not local, one has to discuss the limit of local charges on domains obtained from the vacuum by a non local algebra, giving rise to the problems discussed in Sect. 2.

Even in perturbation theory the control of the Coulomb gauge is difficult and the standard strategy is to use a local formulation at the expense of positivity; this is the case of the Feynman or Gupta-Bleuler gauge. In this case, the charged fields and the vector potential are local but their vacuum expectation values cannot satisfy positivity; the corresponding Wightman functions define an indefinite inner product space (with the local field algebra), with inner product denoted by , which does not contain physical charged states [5, 12].

As suggested by perturbation theory, non local physical charged states may be obtained as suitable limits of local unphysical charged state vectors. A possible non perturbative construction of physical charged state vectors along these lines was discussed in .

Quite generally, a crucial issue is that the definition and the control of the limit of local charged state vectors require a topology; even in the positive case the weak topology on defined by the seminorms , i.e. by the Wightman functions, is too weak; on the other hand, the inner product space does not identify a unique Hilbert-Krein majorant topology and one has different closures .
For the physical interpretation, the relevant space is the physical subspace , identified by a subsidiary condition (which in QED selects gauge invariant states), and different topologies may give rise to isomorphic physical spaces.

In general, the dependence of the space on the topology should not be regarded as a mathematical oddness, since different closures of reflect different ”boundary conditions” at infinity. Even in the standard theory of unbounded hermitean operators the local domain of functions of compact support may allow different self adjoint extensions, corresponding to different boundary conditions; in the physical applications the choice of one instead of the other is dictated by physical considerations [12, 17]. In the QED case the non uniqueness reflects the physical fact that different Hilbert-Krein topologies, defined by majorant inner products , correspond to different large distance behaviours of the limit states, classified in particular by the velocity parameter of their Liénard-Wiechert electromagnetic fields at large distances . Thus, the choice of the Hilbert-Krein topology is governed by physical considerations, since it determines the class of vector states which one can constructively associate to the Wightman functions, i.e. the corresponding closure of the vector space . For these reasons it should not be a surprise that may allow different extensions.
Even in the algebraic approach the construction of the charged states, which correspond to non local morphisms of the algebra of observables, is not under sharp control and in any case does not resolve the multiplicity associated to the large distance behaviour  .\n\nThe choice of the Hilbert-Krein topology in local formulations of QED was discussed at length in   also in connection with Zwanziger unsuccessful attempt to construct physical charged states, as a result of a too restrictive Hilbert-Krein topology.\n\nIt has been argued   that the Gauss charge converges weakly to zero on the local states as a consequence of the vanishing of the Gauss charge commutators with local fields, and that this prevents the construction of physical state with non zero Gauss charge as limits of local states. We shall examine the weak points of this argument in order.\n\nFirst, the vanishing of the Gauss charge commutators with local fields implies the vanishing of the Gauss charge as a quadratic form on , (see eq.(2.8) and the Appendix). The vanishing of the Gauss charge on a closure of would follow (see Proposition 4.2 below) if one had weak convergence of in the topology which defines such a closure of .\n\nAs we shall see the validity of such a property is not constrained by the correlation functions of the local fields and does not hold in general. Actually, (see the Example below and the following Section) one may find a Hilbert-Krein topology which avoids the weak convergence of and allows for the construction of physical charged state vectors.\n\nThe failure of the -weak convergence of should not appear strange, since it involves a topology whose rôle is merely that of linking the physical non local charged states to the unphysical local states. It should be stressed that the Gauss charge may well converge weakly or even strongly on a dense domain of physical states, with respect to the intrinsic Hilbert topology of the physical space. 
This means that , (equivalently where denotes the distinguished subspace of satisfying the subsidiary condition and a dense subspace of ), one has that

$\lim_{R\to\infty}\langle\Phi,\,Q_R\Psi\rangle=\lim_{R\to\infty}\langle\Phi,\,Q^G_R\Psi\rangle,\qquad Q_R\equiv j_0(f_R\,\alpha_R)$

exists, equivalently

$=\|Q^G_R\Psi\|^2$ (4.1)

are bounded. This, however, does not mean that or converge weakly with respect to the Hilbert-Krein closure , since weak convergence in amounts to the boundedness of

$\|Q^G_R\Psi\|^2_{HK}\equiv(Q^G_R\Psi,\,Q^G_R\Psi),$

where is the majorant inner product which defines the Hilbert-Krein topology and the corresponding closure of the local states .

Actually, independently of any Hilbert-Krein majorant, there is a conflict between the construction of the physical charged states in terms of the Wightman functions of the local field algebra and the weak convergence of in the corresponding extension of . This difficulty is an intrinsic one, since it only involves the Wightman functions of and the existence of the physical charged states in an extension of compatible with the inner product defined by the Wightman functions, namely such that the sequences of elements of which define the extension have convergent inner products . No reference is needed to a Hilbert-Krein majorant topology, even if, clearly, any Hilbert-Krein majorant defines a weak extension. To clarify this point we introduce the following

###### Definition 4.1

Given two vector spaces and , with inner products and , we say that can be realized in a weak extension of if there exists an inner product vector space containing a weakly dense inner product subspace isomorphic to and a subspace isomorphic to .

If and are defined by the vacuum correlation functions of two field algebras , the property of being realized in an extension of is implied by the existence of joint vacuum correlation functions of and . In the case of local formulations of QED, if the correlation functions of the physical field algebra , e.g.
of the field algebra of the Coulomb gauge, can be constructed in terms of the correlation functions of the local field algebra , one has an extended field algebra generated by and , and is realized in an extension of .

###### Proposition 4.1

Let be a non degenerate vector space with inner product , a weakly dense subspace and ; let be hermitean charges and

 limR→∞=0, (4.2)
 limR→∞≠0. (4.3)

Then, cannot converge in the weak topology defined by .

Concretely, if physical charged states may be obtained as limits of the local states of in a Hilbert-Krein topology , i.e. they belong to a (Hilbert-Krein) extension of and

 limR→∞=≠0,limR→∞=0, (4.4)

then cannot converge weakly with respect to .

Proof   Since is dense and is non degenerate, eq.(4.2) and weak convergence imply that converges weakly to zero. Thus

 =limR→∞=0

and again by the density of , converges weakly to zero, which is incompatible with eq.(4.3).

By eqs.(4.4) and locality and therefore, by the density of , weak convergence implies and

 =limR→∞=limR→∞=0.

Thus, the construction of physical charged states in a Hilbert-Krein extension of is incompatible with weak convergence of the Gauss charge on .

The failure of weak convergence of gives rise to the same problems and features discussed in Sect. 2; in particular the domain dependence of the limits of allows the vanishing of such a limit on compatibly with its being non zero on a domain containing non local states (as are the physical charged states).

A Hilbert-Krein topology which allows the construction of physical charged states, avoiding the weak convergence of , was discussed in in terms of the properties of the asymptotic fields . The mechanism is clearly displayed by the following
Example.
Let be a (canonical) free massive Dirac field and two massless scalar fields satisfying the following (equal times) commutation relations\n\n [ϕ1,ϕ2]=0,[π1,π2]=0,[ϕi,πi]=0,πi≡∂0ϕi,i=1,2,\n [π1(x),ϕ2(y)]=[π2(x),ϕ1(y)]=−iδ(x−y).\n\nThen, the fields\n\n ϕ±≡(ϕ1±ϕ2)/√2,π±≡(π1±π2)/√2,\n ψ(x)≡U(x)ψ0(x),U(x)≡:eiϕ2:(x) (4.5)\n\nsatisfy the following commutators and anti commutators\n\n [ϕ±(x),ϕ±(y)]=±iD(x−y),[ϕ±(x),ϕ∓(y)]=0,\n [ϕ±(x),ψ(y)]=±iD(x−y)ψ(y),{ψ(x),¯ψ(y)}=iS(x−y), (4.6)\n\nwhere are the standard commutator functions for massless scalar and Dirac fields. Thus, and are local fields.\n\nOur field theory model is defined by the vacuum correlation functions of the field algebra generated by and and their Wick products; such correlation functions do not satisfy positivity.\n\nNow, we consider the following local charges\n\n QRϕ≡∂0ϕ1(fRαR),QR≡j0(fRαR),QRG≡QR−QRϕ, (4.7)\n\nwhere\n\n jμ(x)=:¯ψγμψ:(x)=:¯ψ0γμψ0:(x).\n\nThe factorization of the correlation functions of and implies that converges to an unbroken (non zero) ”electron” charge in sense of quadratic forms on and in fact the correlation functions with unequal numbers of and vanish. Actually, converges strongly with respect to any Hilbert-Krein topology chosen to turn into a pre-Hilbert space, provided it is a product over fermion and boson Fock spaces since, by positivity of the correlation functions of ,\n\n ||QRΨ0||2HK=→0. (4.8)\n\nThe charge requires a quite different discussion. The field algebra is neutral under\n\n limR→∞[QRG,F]=0. (4.9)\n\nTherefore, putting , by the argument at the beginning of Sect.2, ii), one has\n\n limR→∞=limR→∞=0.\n\nIn the analogy with the local formulation of QED, the local charge plays the rôle of the Gauss charge, plays the rôle of the electron charge and" ]
https://mne.tools/dev/generated/mne.simulation.metrics.precision_score.html
[ "# mne.simulation.metrics.precision_score#\n\nmne.simulation.metrics.precision_score(stc_true, stc_est, threshold='90%', per_sample=True)[source]#\n\nCompute the precision.\n\nThe precision is the ratio `tp / (tp + fp)` where `tp` is the number of true positives and `fp` the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.\n\nThe best value is 1 and the worst value is 0.\n\nThreshold is used first for data binarization.\n\nParameters:\nstc_trueinstance of (Vol|Mixed)SourceEstimate\n\nThe source estimates containing correct values.\n\nstc_estinstance of (Vol|Mixed)SourceEstimate\n\nThe source estimates containing estimated values e.g. obtained with a source imaging method.\n\nthreshold\n\nThe threshold to apply to source estimates before computing the precision. If a string the threshold is a percentage and it should end with the percent character.\n\nper_sample`bool`\n\nIf True the metric is computed for each sample separately. If False, the metric is spatio-temporal.\n\nReturns:\nmetric`float` | `array`, shape (n_times,)\n\nThe metric. float if per_sample is False, else array with the values computed for each time point.\n\nNotes\n\nNew in version 1.2.\n\n## Examples using `mne.simulation.metrics.precision_score`#", null, "Compare simulated and estimated source activity\n\nCompare simulated and estimated source activity" ]
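As a rough illustration of the formula above, the sketch below binarizes two plain NumPy arrays with a threshold and computes `tp / (tp + fp)`. This is not the MNE-Python implementation itself; the function name, the "fraction of the array maximum" reading of the threshold, and the sample arrays are all assumptions for the example.

```python
import numpy as np

def precision_after_threshold(true_data, est_data, threshold=0.9):
    # Binarize: a source counts as "active" when its magnitude exceeds
    # the given fraction of that array's own maximum (illustrative
    # reading of the "90%" string threshold in the docs above).
    t = np.abs(true_data) > threshold * np.abs(true_data).max()
    e = np.abs(est_data) > threshold * np.abs(est_data).max()
    tp = np.sum(t & e)    # true positives: detected and truly active
    fp = np.sum(~t & e)   # false positives: detected but truly silent
    return tp / (tp + fp) if (tp + fp) else 0.0

true_act = np.array([0.0, 1.0, 0.0, 0.9])
est_act = np.array([0.0, 1.0, 0.8, 0.0])
# One of the two detections is correct:
print(precision_after_threshold(true_act, est_act, threshold=0.5))  # 0.5
```

With `per_sample=True` the real metric repeats this computation for every time point, returning an array of shape `(n_times,)` instead of a single float.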
https://marvelousessays.co.uk/essays/analysis/behavior-of-compression-members.html
[ "## Free «Behavior of Compression Members» UK Essay Paper\n\nThe elasticity of an object determines the extent to which the object can withstand the tensile stress imposed in the form of an external force. Elasticity depends on the material from which the member is fabricated: some materials pass from the plastic to the elastic state at lower values of stress, while others do so at higher values. The range of such values exhibited by a member once subjected to an external force forms the definition of the eccentricity. A bar placed in an axial position can be used to determine its eccentricity, since this position allows uniform application of external forces, and a slender bar bends as a result of the added force. This form of bending is known as buckling, while the behaviour of the bar reveals an imperfection in the member, which is termed its elasticity. For instance, the figure gives the curvature of a bar in the axial position subjected to an external force, where the variations in the force applied give the ideal buckling (Wiliams, 2000).\n\nThe bar bends depending on the amount of force applied, and varying the eccentricities makes it possible to calculate the Euler buckling load. This can be done experimentally, where the experimental results of the zero-eccentricity experiment and the maximum-eccentricity experiment are valuable in revealing the Euler buckling load. By consideration, the Euler buckling load is proportional to the product of the square of pi with both the modulus of elasticity and the second moment of area about the buckling axis, while it is also inversely proportional to the square of the effective length of the strut. Additionally, the effective length of the strut depends on the flexibility of the rotation. 
The experimental investigation of the eccentricity and the calculation of the Euler buckling load form the essence of this paper (Wiliams, 2000).\n\nBehavior of the Compression Members\n\nThe experimental analysis for determining the behavior of compression members involves setting up a strut with a known modulus of elasticity. The strut is placed in an axial position against pivots of equal force, where the lower and upper pivots holding the strut in the axial position are adjusted in such a way that their individual weights have little influence on the strut. This leaves the bending motions to the force exerted by the weights on the spring; in this case, weights of 10 N were used, incremented in steps of 5 N. This gave the resultant buckling strengths, which were proportional to the modulus of the strut used and also varied with the amount of effort loaded. For instance, the ability to attain elasticity, that is, the transition of the strut from the plastic state to the elastic state, was influenced by an increase in the amount of applied force resulting from an increase in the weights of the loads. The consideration of zero eccentricity involved the use of knife-edge brackets housed in the groove. Conversely, the maximum eccentricity was obtained by removing the housing in the grooves of the knife-edge brackets (Megson, 2005).\n\nDifferent compression members harbour different moduli of elasticity. This implies that the Euler buckling load for these members differs at varied eccentricities. 
For instance, varying the weights as external forces increases the extent of deflection, and a plot of the deflection versus the quotient of the deflection and the Euler buckling load gives a straight line. This curve is also essential for calculating the initial curvature, which is in this case termed the eccentricity. The initial shape of the strut is defined by the product of the maximum central amplitude and the sine of the quotient of the product of pi and the displacement along the x-axis with the length of the strut. Variations of the strut due to the increase in the external forces accrued from an increase in the load of the weights give the final position of the strut, defined by the central amplitude. This central amplitude is vital in the calculation of the Euler buckling load by use of the gradient of the graph of the central amplitude versus the quotient of the curvature and the displacement along the x-axis (Megson, 2005).\n\nOn the other hand, the eccentricity of the strut is calculated from the additional central deflection obtained by increasing the load of the sources of force or stress on the strut. The theoretical value of the Euler buckling load is calculated by the formula F = (pi^2)*(E)*(I)/(L^2), where F is the Euler buckling load, I is the cross-sectional moment of inertia and E is the Young's modulus of elasticity of the strut. By consideration, the experiment involved the use of a strut whose initial curved position is represented by Y = yo/(1 - P/PE), where Y is the deflected shape of the strut, yo is the distance of curvature in the initial position, and P is the vertical load arising from the boundary conditions. This formula is a viable tool for obtaining the initial eccentricity, or the zero eccentricity in this case. 
Consequently, the maximum curvature is represented by the equation Y* = ao/(1 - P/PE), where Y* is the maximum deflection reached under the highest weights accrued from the increased loads. This formula is vital in synthesizing the maximum eccentricity.\n\nThe essence of increasing the weights in terms of loads is the realization of the effect of buckling, which is the critical amount of force that the strut can withstand in the transition from the plastic to the elastic mode of deformation. A plot of the graph of the eigenvalues versus the weights gives a straight line, where the gradient of the line is suitable for obtaining the Euler buckling load. This is also important in the determination of the initial and final positions of curvature, since the y-intercepts of the graphs give both the initial and final positions of the curvature (Ghali, 2003).\n\nZero Eccentricity\n\nBy consideration, from the formula F = (pi^2)*(E)*(I)/(L^2), using the initial curvature and replacing the values gives an Euler buckling load of 136.903. This value is obtained by using 600 mm as the length of the strut with a cross-section of 25.4 mm by 1.6 mm. Taking the value of E as 205 kN/mm2 shows that the modulus of elasticity of the strut is constant. This implies that in the theoretical calculation of the Euler buckling load for the strut, the modulus of elasticity is constant. This gives Po = (3.142*205 kN/mm2*25.4 mm*1.6 mm)/600 mm, which is equal to 136.9038 kips. The value shows a slight deviation from the conventional graphical analysis, which gives 135.089 kips as the gradient of the slope corresponding to the graphical representation of the Euler buckling load. 
The differences emanate from the fact that the measurements for the graphical analysis vary with the net weights of the load, which might not be linear (Ghali, 2003).\n\nMoreover, the differences might have been a result of the lack of uniformity in the application of the forces emanating from the bending curvature. This results in unequal application of the forces that are geared towards producing the buckling effects. On the other hand, the plot gives the initial curvature as 1.067, which is obtained from the y-intercept of the graph. This implies that the members of the same group portray similar characteristics, in such a way that there is a prior exposure to tensile stress amid the constraints of the curvature effects from the load. This implies that members made of the same elements always contain some form of curvature even without the application of an external bending force. This is also the value of the eccentricity of the strut at zero eccentricity, where the y-intercept represents the initial curvature (Milliams, 2005).\n\nThe above graph shows that structural members made up of different elements exhibit a form of tensional force that results in the plastic characteristics. These characteristics change upon exertion of an external force, giving rise to the transition of the elements into the elastic state. This elastic state is what defines the curvature, which is obtained through the calculation of the Euler buckling load. This also shows that materials of the same cross-sectional area and the same modulus of elasticity harbour the same value of the Euler buckling load at zero eccentricity. 
Moreover, the values of the external force need to be uniform, and it is necessary to use the axial position for the strut to depict the linearity of the applied external force (Ghali, 2003).\n\nMaximum Eccentricity\n\nThis type of eccentricity involves the use of the final curved position of the strut, which is elucidated by the bending motion due to the external force. Once the load is placed onto the spring, there is an exertion of external force that culminates in the change of the strut from the plastic state to the elastic state. This gives rise to the bending motion, where the plot of Y* versus Y*/PE gives a straight line. The slope of the line gives the Euler buckling load, while the y-intercept of the graph gives the final curvature position. This is the amount of force required for the strut to gain the transition from the plastic state to the elastic state, which varies with the material making up the strut. For instance, the use of a strut with a modulus of elasticity of 205 kN/mm2 and a cross-section of 25.4 mm by 1.6 mm gives a lead into the calculation of the Euler buckling load.\n\nFrom the formula F = (pi^2)*(E)*(I)/(L^2), the resultant Euler buckling load is equal to 135.88 kips at the maximum eccentricity. By consideration, the value is obtained from the synthesis that leads to the calculation (3.142^2*205 kN/mm2*25.4 mm*1.6 mm)/600 mm. This gives 136.903 kips as the theoretical value of the Euler buckling load. This is the value of the Euler buckling load for the strut at the maximum eccentricity, which corresponds with the value obtained from the calculation at zero eccentricity. 
The differences in the values obtained lie in the graphical analysis, where the value in this case is synthesized from the slope of the straight line obtained from the plot of the final curvature against the quotient of the final curvature and the eccentric force (Wiliams, 2000).\n\nFrom the graphical analysis, it is evident that the y-intercept, which represents the final curvature, lies at 0.60. This value is higher than the initial value obtained from the experimentation with zero eccentricity. It is evident that the values differ in terms of the eccentricity, which also provides a platform for varied Euler buckling loads. The graphs differ in terms of the gradient, where the graphical representation for the zero eccentricity is more pronounced than that for the maximum eccentricity. This is due to the increase in the load of force, and the result is an increase in the applied force. As the stretching force increases, it leads to an increase in the extent of stretch. Moreover, this results in a reduction of the distance of curvature in the resulting PE values (Megson, 2005).\n\nIt is also evident that there is a slight deviation in the values of the Euler buckling constant obtained from the synthesis of the graphical analysis, for both the zero eccentricity and the maximum eccentricity, from the theoretical value. The value from the theoretical analysis is slightly greater than that from the graphical analysis in both cases. This is due to the clarity of the theoretical value, which is determined directly from the values of the buckling constant. The deviations are a result of the measurements of the weights, which do not give a linear observation in relation to the extent of stretch caused by the force. 
In addition, the element of the force of gravity, which in this case is not taken into consideration, influences the resulting data (Megson, 2005).\n\nThe data collected from the analysis are a viable tool for reflecting both the eccentricity of the strut and the Euler buckling load, where the variation in the loads is in sound correlation with the influence of stretch. For instance, the initial curvature of the strut is due to the normal situation of structural elements belonging to the same group. The natural behaviour of these members shows that they exhibit a form of stretch in the initial position. This force can be accelerated by increasing the external force; in this case, the external force was increased through an increase in the weights of the loads. This resulted in an increase in the amount of stretch, culminating in an increase in the extent of curvature. This also confirms the linearity of the resulting data with the theoretical basis of the experiment (Milliams, 2005).\n\nThe use of standardized loads of known weights takes care of the influence of the gravitational pull, where the mass is defined in terms of the force of gravity acting on the load. On the other hand, standardization of the results was achieved by ensuring that the supporting beams did not exert an external force on the spring balance. This was done by ensuring that the parts of the supporting beams were not part of the experimental analysis (Megson, 2005).\n\nConclusion\n\nThe obtained value of the eccentricity, which is the y-intercept of the graph of the curvature versus the quotient of the curvature and the displacement resulting from the stretch, is 0.17 for the zero-eccentricity experiment and 0.6 for the maximum eccentricity. 
On the other hand, the value of the Euler buckling load from the graphical analysis is obtained as 135.88 kips, which is slightly lower than the theoretical value of the Euler buckling load, 136.9038. These values differ because of the influence of the measurement of the weights in relation to the gravitational pull. The results obtained from the experimental analysis are consistent with the fact that the extent of curvature is proportional to the force of stretch, while the force increases with the increase in the mass of the load. This leads to a straight-line graph depicting the initial curvature versus the quotient of the initial curvature and the extent of stretch, whose gradient gives the Euler buckling load while the y-intercept gives the eccentricity." ]
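The Euler formula used throughout the essay, F = (pi^2)*(E)*(I)/(L^2), can be evaluated directly. The sketch below is an illustration, not a reproduction of the experiment: it assumes a pinned-pinned strut (effective length factor K = 1), a rectangular cross-section buckling about its weak axis with I = b*h^3/12, and consistent units of newtons and millimetres, using the dimensions quoted in the text.

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical load F = pi^2 * E * I / (K*L)^2 for an ideal strut."""
    return math.pi**2 * E * I / (K * L)**2

# Strut dimensions quoted in the essay.
b, h = 25.4, 1.6          # cross-section, mm
I = b * h**3 / 12.0       # second moment of area (rectangle, weak axis), mm^4
E = 205e3                 # Young's modulus, N/mm^2 (205 kN/mm^2)
L = 600.0                 # strut length, mm

print(euler_buckling_load(E, I, L))  # ≈ 48.7 N
```

Working consistently in N and mm makes the unit of the result unambiguous, and doubling the length quarters the critical load, as the 1/L^2 dependence in the formula requires.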
https://www.allkidsnetwork.com/search?ty=4%2C10%2C24%2C27&mltid=2be93742-234a-4a16-8922-0805ce172eb8
[ "# Search\n\nAbout 3,743 Search Results Matching Types of Worksheet, Worksheet Section, Generator, Generator Section, Similar to Triangle Worksheet", null, "## Triangle Worksheet\n\nTrace the triangles, draw some on your own, the...", null, "## Triangle Worksheet\n\nLearn about the triangles, then trace the word ...", null, "## Triangle Worksheet\n\nTrace triangles, then draw some on your own. T...", null, "## Traceable Triangles Worksheet\n\nTrace the different size triangles and then dra...", null, "## Angles in a Triangle Worksheet\n\nKids learn that the angles of a triangle always...", null, "## Shape Names Recognition Worksheet\n\nDraw a line to match each shape with its name.", null, "## Preschool Shapes Worksheets\n\nThis collection of free worksheets will help yo...", null, "## Draw and Find Shapes Worksheets\n\nThis is our best set of shapes worksheet yet! I...", null, "## Subtraction Problems Worksheet - Shapes Theme\n\nUse the pictures to count and subtract and get ..." ]
https://socratic.org/questions/how-do-you-factor-8y-3-125
[ "# How do you factor 8y^3-125?\n\nFeb 27, 2017\n\n$8 {y}^{3} - 125 = \\left(2 y - 5\\right) \\left(4 {y}^{2} + 10 y + 25\\right)$\n\n#### Explanation:\n\nThe difference of cubes identity can be written:\n\n${a}^{3} - {b}^{3} = \\left(a - b\\right) \\left({a}^{2} + a b + {b}^{2}\\right)$\n\nUse this with $a = 2 y$ and $b = 5$ as follows:\n\n$8 {y}^{3} - 125 = {\\left(2 y\\right)}^{3} - {5}^{3}$\n\n$\\textcolor{w h i t e}{8 {y}^{3} - 125} = \\left(2 y - 5\\right) \\left({\\left(2 y\\right)}^{2} + \\left(2 y\\right) \\left(5\\right) + {5}^{2}\\right)$\n\n$\\textcolor{w h i t e}{8 {y}^{3} - 125} = \\left(2 y - 5\\right) \\left(4 {y}^{2} + 10 y + 25\\right)$" ]
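The factorization can also be spot-checked numerically. This short sketch (plain Python, an addition to the original answer) compares both sides of the identity at several points; since two cubics that agree at four or more points are identical, this verifies the factorization.

```python
# Check the difference-of-cubes factorization
# 8y^3 - 125 = (2y - 5)(4y^2 + 10y + 25).
def lhs(y):
    return 8 * y**3 - 125

def rhs(y):
    return (2 * y - 5) * (4 * y**2 + 10 * y + 25)

# A degree-3 identity that holds at more than 3 points holds everywhere.
for t in (-2, -1, 0, 1, 2, 3.5):
    assert abs(lhs(t) - rhs(t)) < 1e-9

print(lhs(2.5), rhs(2.5))  # 0.0 0.0 — both vanish at the real root y = 5/2
```

The quadratic factor 4y^2 + 10y + 25 has negative discriminant (100 - 400 = -300), so y = 5/2 is the only real root, matching the single real zero of 8y^3 - 125.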
https://math.libretexts.org/Courses/University_of_California_Davis/UCD_Mat_21A%3A_Differential_Calculus/1%3A_Functions/1.2%3A_Combining_Functions%3B_Shifting_and_Scaling_Graphs
[ "$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n# 1.2: Combining Functions; Shifting and Scaling Graphs\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\nMany functions in applications are built up from simple functions by inserting constants in various places. It is important to understand the effect such constants have on the appearance of the graph.\n\n## Horizontal shifts\n\nIf we replace $$x$$ by $$x-C$$ everywhere it occurs in the formula for $$f(x)$$, then the graph shifts over $$C$$ to the right. (If $$C$$ is negative, then this means that the graph shifts over $$|C|$$ to the left.) 
For example, the graph of $$y=(x-2)^2$$ is the $$x^2$$-parabola shifted over to have its vertex at the point 2 on the $$x$$-axis. The graph of $$y=(x+1)^2$$ is the same parabola shifted over to the left so as to have its vertex at $$-1$$ on the $$x$$-axis. Note well: when replacing $$x$$ by $$x-C$$ we must pay attention to meaning, not merely appearance. Starting with $$y=x^2$$ and literally replacing $$x$$ by $$x-2$$ gives $$y=x-2^2$$. This is $$y=x-4$$, a line with slope 1, not a shifted parabola.\n\n## Vertical shifts\n\nIf we replace $$y$$ by $$y-D$$, then the graph moves up $$D$$ units. (If $$D$$ is negative, then this means that the graph moves down $$|D|$$ units.) If the formula is written in the form $$y=f(x)$$ and if $$y$$ is replaced by $$y-D$$ to get $$y-D=f(x)$$, we can equivalently move $$D$$ to the other side of the equation and write $$y=f(x)+D$$. Thus, this principle can be stated: to get the graph of $$y=f(x)+D$$, take the graph of $$y=f(x)$$ and move it $$D$$ units up. For example, the function $$y=x^2-4x=(x-2)^2-4$$ can be obtained from $$y=(x-2)^2$$ (see the last paragraph) by moving the graph 4 units down. The result is the $$x^2$$-parabola shifted 2 units to the right and 4 units down so as to have its vertex at the point $$(2,-4)$$.\n\nWarning. Do not confuse $$f(x)+D$$ and $$f(x+D)$$. For example, if $$f(x)$$ is the function $$x^2$$, then $$f(x)+2$$ is the function $$x^2+2$$, while $$f(x+2)$$ is the function $$(x+2)^2=x^2+4x+4$$.\n\nAn important example of the above two principles starts with the circle $$x^2+y^2=r^2$$. This is the circle of radius $$r$$ centered at the origin. (As we saw, this is not a single function $$y=f(x)$$, but rather two functions $$y=\pm\sqrt{r^2-x^2}$$ put together; in any case, the two shifting principles apply to equations like this one that are not in the form $$y=f(x)$$.) 
If we replace $$x$$ by $$x-C$$ and replace $$y$$ by $$y-D$$---getting the equation $$(x-C)^2+(y-D)^2=r^2$$---the effect on the circle is to move it $$C$$ to the right and $$D$$ up, thereby obtaining the circle of radius $$r$$ centered at the point $$(C,D)$$. This tells us how to write the equation of any circle, not necessarily centered at the origin.\n\nWe will later want to use two more principles concerning the effects of constants on the appearance of the graph of a function.\n\n## Horizontal dilation\n\nIf $$x$$ is replaced by $$x/A$$ in a formula and $$A>1$$, then the effect on the graph is to expand it by a factor of $$A$$ in the $$x$$-direction (away from the $$y$$-axis). If $$A$$ is between 0 and 1 then the effect on the graph is to contract by a factor of $$1/A$$ (towards the $$y$$-axis). We use the word \"dilate'' to mean expand or contract.\n\nFor example, replacing $$x$$ by $$x/0.5=x/(1/2)=2x$$ has the effect of contracting toward the $$y$$-axis by a factor of 2. If $$A$$ is negative, we dilate by a factor of $$|A|$$ and then flip about the $$y$$-axis. Thus, replacing $$x$$ by $$-x$$ has the effect of taking the mirror image of the graph with respect to the $$y$$-axis. For example, the function $$y=\\sqrt{-x}$$, which has domain $$\\{x\\in R\\mid x\\le 0\\}$$, is obtained by taking the graph of $$\\sqrt{x}$$ and flipping it around the $$y$$-axis into the second quadrant.\n\n## Vertical dilation\n\nIf $$y$$ is replaced by $$y/B$$ in a formula and $$B>0$$, then the effect on the graph is to dilate it by a factor of $$B$$ in the vertical direction. As before, this is an expansion or contraction depending on whether $$B$$ is larger or smaller than one. Note that if we have a function $$y=f(x)$$, replacing $$y$$ by $$y/B$$ is equivalent to multiplying the function on the right by $$B$$: $$y=Bf(x)$$. 
The effect on the graph is to expand the picture away from the $$x$$-axis by a factor of $$B$$ if $$B>1$$, to contract it toward the $$x$$-axis by a factor of $$1/B$$ if $$0 < B < 1$$, and to dilate by $$|B|$$ and then flip about the $$x$$-axis if $$B$$ is negative.\n\n$\\left({x\\over a}\\right)^2+\\left({y\\over b}\\right)^2=1 \\qquad\\hbox{or}\\qquad {x^2\\over a^2}+{y^2\\over b^2}=1.$\n\nFinally, if we want to analyze a function that involves both shifts and dilations, it is usually simplest to work with the dilations first, and then the shifts. For instance, if we want to dilate a function by a factor of $$A$$ in the $$x$$-direction and then shift $$C$$ to the right, we do this by replacing $$x$$ first by $$x/A$$ and then by $$(x-C)$$ in the formula. As an example, suppose that, after dilating our unit circle by $$a$$ in the $$x$$-direction and by $$b$$ in the $$y$$-direction to get the ellipse in the last paragraph, we then wanted to shift it a distance $$h$$ to the right and a distance $$k$$ upward, so as to be centered at the point $$(h,k)$$. The new ellipse would have equation $$\\left({x-h\\over a}\\right)^2+\\left({y-k\\over b}\\right)^2=1.$$ Note well that this is different than first doing shifts by $$h$$ and $$k$$ and then dilations by $$a$$ and $$b$$:\n\n$\\left({x\\over a}-h\\right)^2+\\left({y\\over b}-k\\right)^2=1.$\n\nSee figure 1.4.1.", null, "Figure 1.4.1. Ellipses: $$\\left({x-1\\over 2}\\right)^2+\\left({y-1\\over 3}\\right)^2=1$$ on the left, $$\\left({x\\over 2}-1\\right)^2+\\left({y\\over 3}-1\\right)^2=1$$ on the right.\n\n## Contributors\n\n• Integrated by Justin Marshall." ]
[ null, "https://math.libretexts.org/@api/deki/files/2578/1.4.1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89720064,"math_prob":1.0000091,"size":5576,"snap":"2021-43-2021-49","text_gpt3_token_len":1666,"char_repetition_ratio":0.14985642,"word_repetition_ratio":0.07204301,"special_character_ratio":0.33662122,"punctuation_ratio":0.0813278,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000087,"pos_list":[0,1,2],"im_url_duplicate_count":[null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T03:56:15Z\",\"WARC-Record-ID\":\"<urn:uuid:5b94e07b-b4a6-4765-b61d-8331bc1785ef>\",\"Content-Length\":\"101204\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35cebba0-65f9-45c0-b243-1da7f13f8aa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:96a012d7-5ce8-4d91-a3e4-0764a25644e3>\",\"WARC-IP-Address\":\"13.249.38.5\",\"WARC-Target-URI\":\"https://math.libretexts.org/Courses/University_of_California_Davis/UCD_Mat_21A%3A_Differential_Calculus/1%3A_Functions/1.2%3A_Combining_Functions%3B_Shifting_and_Scaling_Graphs\",\"WARC-Payload-Digest\":\"sha1:WUDWDPPR6RIUNPGI5RVCW46SW4WFAF2R\",\"WARC-Block-Digest\":\"sha1:7NGSUOYI5BRCAIV6BHDYUPH4GRO62Y6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588053.38_warc_CC-MAIN-20211027022823-20211027052823-00200.warc.gz\"}"}
[ null, "https://img-aws.ehowcdn.com/360x267p/s3-us-west-1.amazonaws.com/contentlab.studiod/getty/cache.gettyimages.com/9ee968d00b5d4dd89ba8b9d8c44dd459.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89102185,"math_prob":0.96905303,"size":1456,"snap":"2023-40-2023-50","text_gpt3_token_len":345,"char_repetition_ratio":0.11707989,"word_repetition_ratio":0.008474576,"special_character_ratio":0.2445055,"punctuation_ratio":0.15254237,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9749779,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T12:22:03Z\",\"WARC-Record-ID\":\"<urn:uuid:e4d2995d-032b-4024-a65c-b6c0ac7e9470>\",\"Content-Length\":\"407470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e27e1b4-ad9a-4f67-b3fe-f943e6d82c19>\",\"WARC-Concurrent-To\":\"<urn:uuid:74b987fc-fa77-448a-b625-aa1ee38a030e>\",\"WARC-IP-Address\":\"23.205.107.80\",\"WARC-Target-URI\":\"https://sciencing.com/how-8086143-calculate-fan-output.html\",\"WARC-Payload-Digest\":\"sha1:4SAON7IPCTPEBJANR7BNV4VJXAVIQJ4X\",\"WARC-Block-Digest\":\"sha1:Q33ZM6XBDKB2G6MGFMHFGQ3ZT43KDSKB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510208.72_warc_CC-MAIN-20230926111439-20230926141439-00516.warc.gz\"}"}
https://flylib.com/books/en/2.582.1/logical_functions.html
[ "# Logical Functions\n\nThe category of functions known as the logical functions contains a strange hodgepodge of things. Chapter 8 discussed two of them: the If and Case conditional functions. The logical functions covered here include several that are new to FileMaker Pro 8.\n\nThe Let Function\n\nThe Let function enables you to simplify complex calculations by declaring variables to represent subexpressions. These variables exist only within the scope of the formula and can't be referenced in other places. As an example, this is a formula presented in Chapter 8 for extracting the last line of a text field:\n\n`Right(myText; Length(myText) - Position(myText; \"¶\"; 1; PatternCount(myText; \"¶\")))`\n\nWith the Let function, this formula could be rewritten this way:\n\n```Let ([fieldLength = Length(myText) ;\nreturnCount = PatternCount(myText; \"¶\") ;\npositionOfLastReturn = Position (myText; \"¶\"; 1; returnCount) ;\ncharactersToGrab = fieldLength - positionOfLastReturn];\nRight (myText, charactersToGrab)\n)```\n\nThe Let function takes two parameters. The first is a list of variable declarations. If you want to declare multiple variables, you need to enclose the list within square brackets and separate the individual declarations within the list with semicolons. The second parameter is some formula you want evaluated. That formula can reference any of the variables declared in the first parameter, just as it would reference any field value.", null, "If you experience unexpected behavior of a Let function, the trouble might be your variable names. For more information, see \"Naming Variables in Let Functions\" in the \"Troubleshooting\" section at the end of this chapter.\n\nNotice in this example that the third variable declared, positionOfLastReturn, references the returnCount variable, which was the second variable declared. 
This capability to have subsequent variables reference previously defined ones is one of the powerful aspects of the Let function because it enables you to build up a complex formula via a series of simpler ones.

It is fair to observe that the Let function is never necessary; you could rewrite any formula that uses the Let function, without using Let, either as a complex nested formula or by explicitly defining or setting fields to contain subexpressions. The main benefits of using the Let function are simplicity, clarity, and ease of maintenance. For instance, a formula that returns a person's age expressed as a number of years, months, and days could be written as shown here:

```
Year (Get (CurrentDate)) - Year (birthDate) -
   (DayOfYear (Get (CurrentDate)) < DayOfYear (birthDate)) & " years, " &
Mod ( Month (Get (CurrentDate)) - Month (birthDate) -
   (Day (Get (CurrentDate)) < Day (birthDate)) ; 12) & " months, and " &
(Get (CurrentDate) - Date ( Month (Get (CurrentDate)) -
   (Day (Get (CurrentDate)) < Day (birthDate)) ; Day (birthDate) ;
   Year (Get (CurrentDate)))) & " days"
```

This is a fairly complex nested formula, and many subexpressions appear multiple times. Writing and debugging this formula is difficult, even when you understand the logic on which it's based. With the Let function, the formula could be rewritten this way:

```
Let ( [
   C = Get (CurrentDate) ;
   yC = Year (C) ; mC = Month (C) ; dC = Day (C) ; doyC = DayOfYear (C) ;
   B = birthDate ;
   yB = Year (B) ; mB = Month (B) ; dB = Day (B) ; doyB = DayOfYear (B) ;
   num_years = yC - yB - (doyC < doyB) ;
   num_months = Mod (mC - mB - (dC < dB) ; 12) ;
   num_days = C - Date (mC - (dC < dB) ; dB ; yC) ] ;
   num_years & " years, " & num_months & " months, and " & num_days & " days"
)
```

Because of the extra space we've put in the formula, it's a bit longer than the original, but it's vastly easier to comprehend.
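The borrow logic behind the age formula's years, months, and days pieces is easy to sanity-check outside FileMaker. The following Python sketch mirrors the same arithmetic; the function name and sample dates are our own illustration, not from the book, and it compares month/day pairs rather than DayOfYear, which can differ only in rare leap-year edge cases:

```python
from datetime import date

def age_parts(birth: date, today: date) -> tuple[int, int, int]:
    # num_years: subtract one year if this year's birthday hasn't arrived yet.
    years = today.year - birth.year - (
        (today.month, today.day) < (birth.month, birth.day))
    # num_months: borrow a month if the day-of-month hasn't been reached yet.
    borrow = today.day < birth.day
    months = (today.month - birth.month - borrow) % 12
    # num_days: days since the most recent month-anniversary of the birth date.
    # (FileMaker's Date() rolls month 0 back to December on its own; datetime
    # does not, so we handle that case. Assumes birth.day exists in that month.)
    anchor_month, anchor_year = today.month - borrow, today.year
    if anchor_month == 0:
        anchor_month, anchor_year = 12, anchor_year - 1
    days = (today - date(anchor_year, anchor_month, birth.day)).days
    return years, months, days
```

For example, `age_parts(date(1990, 5, 20), date(2005, 11, 28))` works out to 15 years, 6 months, and 8 days, just as the borrow logic above predicts.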
If you were a developer needing to review and understand a formula written by someone else, we're sure you'd agree that you'd prefer seeing the Let version of this rather than the first version.

Besides simplicity and clarity, there are also performance benefits to using the Let function. If you have a complex subexpression that you refer to multiple times during the course of a calculation, FileMaker Pro evaluates it anew each time it's referenced. If you create the subexpression as a variable within a Let statement, the subexpression is evaluated only once, no matter how many times it is subsequently referenced. In the example just shown, for instance, FileMaker would evaluate Get(CurrentDate) eight times in the first version. In the version that uses Let, it's evaluated only once. In many cases, the performance difference may be trivial or imperceptible. But other times, optimizing the evaluation of calculation formulas may be just the answer for increasing your solution's performance.

The more you use the Let function, the more likely it is that it will become one of the core functions you use. To help you become more familiar with it, we use it frequently throughout the examples in the rest of this chapter.

## Quick Calculation Testing Using Let

The Let function makes it much easier to debug calculation formulas. It used to be that if you wanted to make sure that a subexpression was evaluating correctly, you'd need to create a separate field to investigate it. Using Let, you can just comment out the second parameter of the Let function and have the function return one or more of the subexpressions directly. When you've got each subexpression working as intended, just comment out the test code and uncomment the original code.

Tip

It's not uncommon that you may want to set the same variable several times within a Let statement.
A typical example occurs when you want to perform a similar operation several times on the same variable, without excessive nesting. For example, in FileMaker 7, a fragment of a Let statement that's involved in some complex text parsing might look like this:

```
result = _TextColor ( text; RGB ( 255; 0; 0 ));
result1 = _TextFont ( result; "TimesNewRoman");
result2 = _TextSize ( result1; 14);
```

Here, we want to apply several text formatting operations to the value of text. We'd like to put them on successive rows, rather than building a big nested expression. We'd prefer to just keep naming the output result, but in FileMaker 7, we'll be prevented from setting a variable with the same name twice. In FileMaker 8 this behavior is permitted, and we could rewrite the code fragment as something like this:

```
result = _TextColor ( text; RGB ( 255; 0; 0 ));
result = _TextFont ( result; "TimesNewRoman");
result = _TextSize ( result; 14);
```

Although this is a great convenience when you need it, be aware that calculations and custom functions that use this technique will not execute correctly if the file is accessed via FileMaker Pro 7.

## The Choose Function

The If and Case functions are sufficiently robust and elegant for most conditional tests that you'll write. For several types of conditional tests, however, the Choose function is a more appropriate option. As with If and Case, the value returned by the Choose function is dependent on the result of some test. What makes the Choose function different is that the test should return an integer rather than a true/false result. The test is followed by a number of possible results. The one that's chosen depends on the numeric result of the test. If the test result is 0, the first result is used. If the test result is 1, the second result is used, and so on.
The syntax for Choose is as follows:

`Choose (test ; result if test=0 ; result if test=1 ; result if test=2 ...)`

A classic example of when a Choose function comes in handy is when you have categorical data stored as a number and you need to represent it as text. For instance, you might import demographic data in which the ethnicity of an individual is represented by an integer from 1 to 5. The following formula might be used to represent it to users:

```
Choose (EthnicityCode; ""; "African American"; "Asian"; "Caucasian"; "Hispanic";
   "Native American")
```

Of course, the same result could be achieved with the following formula:

```
Case (EthnicityCode = 1; "African American"; EthnicityCode = 2; "Asian";
   EthnicityCode = 3; "Caucasian"; EthnicityCode = 4; "Hispanic";
   EthnicityCode = 5; "Native American")
```

You should consider the Choose function in several other situations. The first is for generating random categorical data. Say your third-grade class is doing research on famous presidents, and you want to randomly assign each student one of the six presidents you have chosen. By first generating a random number from 0 to 5, you can then use the Choose function to select a president. The formula would be this:

```
Let ( r = Random * 6; // A random number from 0 up to (but not including) 6
Choose (r; "Washington"; "Jefferson"; "Lincoln"; "Roosevelt"; "Truman"; "Kennedy"))
```

Don't worry that r isn't an integer; the Choose function ignores everything but the integer portion of a number.

Several FileMaker Pro functions return integer numbers from 1 to n, so these naturally work well as the test for a Choose function. Most notable are the DayOfWeek function, which returns an integer from 1 to 7, and the Month function, which returns an integer from 1 to 12.
As an example, you could use the Month function within a Choose to figure out within which quarter of the year a given date falls:

```
Choose (Month (myDate) - 1; "Q1"; "Q1"; "Q1"; "Q2"; "Q2"; "Q2";
   "Q3"; "Q3"; "Q3"; "Q4"; "Q4"; "Q4")
```

The -1 shifts the range of the output from 1-12 to 0-11, which is more desirable because the Choose function is zero-based, meaning that the first result corresponds to a test value of zero. There are more compact ways of determining the calendar quarter of a date, but this version is very easy to understand and offers much flexibility.

Another example of when Choose works well is when you need to combine the results of some number of Boolean tests to produce a distinct result. As an example, imagine that you have a table that contains results of Myers-Briggs personality tests. For each test given, you have scores for four pairs of personality traits (E/I, S/N, T/F, J/P). Based on which score in each pair is higher, you want to classify each participant as one of 16 personality types. Using If or Case statements, you would need a very long, complex formula to do this. With Choose, you can treat the four tests as a binary number, and then simply do a conversion back to base-10 to decode the results. The formula might look something like this:

```
Choose ( (8 * (E > I)) + (4 * (S > N)) + (2 * (T > F)) + (J > P);
"Type 1 - INFP" ; "Type 2 - INFJ" ; "Type 3 - INTP" ; "Type 4 - INTJ" ;
"Type 5 - ISFP" ; "Type 6 - ISFJ" ; "Type 7 - ISTP" ; "Type 8 - ISTJ" ;
"Type 9 - ENFP" ; "Type 10 - ENFJ" ; "Type 11 - ENTP" ; "Type 12 - ENTJ" ;
"Type 13 - ESFP" ; "Type 14 - ESFJ" ; "Type 15 - ESTP" ; "Type 16 - ESTJ")
```

Each greater-than comparison is evaluated as a 1 or 0 depending on whether it represents a true or false statement for the given record.
By multiplying each result by successive powers of 2, you end up with an integer from 0 to 15 that represents each of the possible outcomes. (This is similar to how flipping a coin four times generates 16 possible outcomes.)

As a final example, the Choose function can also be used anytime you need to "decode" a set of abbreviations into their expanded versions. Take, for example, a situation in which survey respondents have entered SA, A, N, D, or SD as a response to indicate Strongly Agree, Agree, Neutral, Disagree, or Strongly Disagree. You could map from the abbreviation to the expanded text by using a Case function like this:

```
Case (ResponseAbbreviation = "SA"; "Strongly Agree";
ResponseAbbreviation = "A"; "Agree" ;
ResponseAbbreviation = "N"; "Neutral" ;
ResponseAbbreviation = "D"; "Disagree" ;
ResponseAbbreviation = "SD"; "Strongly Disagree" )
```

You can accomplish the same mapping by using a Choose function if you treat the two sets of choices as ordered lists. You simply find the position of an item in the abbreviation list, and then find the corresponding item from the expanded text list. The resulting formula would look like this:

```
Let ( [a = "|SA||A||N||D||SD|" ;
   r = "|" & ResponseAbbreviation & "|" ;
   pos = Position (a ; r ; 1 ; 1) ;
   itemNumber = PatternCount (Left (a ; pos - 1) ; "|") / 2 ] ;
Choose (itemNumber ; "Strongly Agree" ; "Agree" ; "Neutral" ; "Disagree" ;
   "Strongly Disagree")
)
```

In most cases, you'll probably opt for using the Case function for simple decoding of abbreviations. Sometimes, however, the list of choices isn't something you can explicitly test against (such as with the contents of a value list), and finding one's position within the list may suffice to identify a parallel position in some other list.
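Outside FileMaker, this parallel-lists idea is the familiar pattern of indexing one list by a position found in another. A brief Python sketch of the same survey example (the helper name `decode` is ours):

```python
abbreviations = ["SA", "A", "N", "D", "SD"]
expanded = ["Strongly Agree", "Agree", "Neutral", "Disagree", "Strongly Disagree"]

def decode(abbr: str) -> str:
    # Find the item's position in one ordered list, then take the item at
    # the same position in the parallel list -- the Position/Choose idea.
    return expanded[abbreviations.index(abbr)]
```

Calling `decode("N")` yields "Neutral", exactly as the Let/Choose formula would for a ResponseAbbreviation of N.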
Having the Choose function in your toolbox may offer an elegant solution to such challenges.

## The GetField Function

When writing calculation formulas, you use field names to refer abstractly to the contents of particular fields in the current record. That is, the formula for a FullName calculation might be FirstName & " " & LastName. FirstName and LastName are abstractions; they represent data contained in particular fields.

Imagine, however, that instead of knowing in advance what fields to refer to in the FullName calculation, you wanted to let users pick any fields they wanted to. So you set up two fields, which we'll call UserChoice1 and UserChoice2. How can you rewrite the FullName calculation so that it's not hard-coded to use FirstName and LastName, but rather uses the fields that users type in the two UserChoice fields?

The answer, of course, is the GetField function. GetField enables you to add another layer of abstraction to your calculation formulas. Instead of hard-coding field names in a formula, GetField allows you to place into a field the name of the field you're interested in accessing. That sounds much more complicated than it actually is. Using GetField, we might rewrite our FullName formula as shown here:

`GetField (UserChoice1) & " " & GetField (UserChoice2)`

The GetField function takes just one parameter. That parameter can be either a literal text string or a field name. Having it be a literal text string, although possible, is not particularly useful. The function GetField("FirstName") would certainly return the contents of the FirstName field, but you can achieve the same thing simply by using FirstName by itself. It's only when the parameter of the GetField function is a field or formula that it becomes interesting. In that case, the function returns the contents of the field referred to by the parameter.

There are many potential uses of GetField in a solution.
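At its core, GetField is one level of indirection: the data itself names the field to fetch. As a loose cross-language analogy, here is the same double lookup in Python, with a record modeled as a dict (the record values are invented sample data):

```python
# One field (UserChoice1/UserChoice2) stores the *name* of another field.
record = {"FirstName": "Wilma", "Nickname": "Will", "LastName": "Flintstone",
          "UserChoice1": "Nickname", "UserChoice2": "LastName"}

def get_field(rec: dict, name_holder: str) -> str:
    # Read the field name out of name_holder, then fetch that field --
    # the extra layer of abstraction GetField provides.
    return rec[rec[name_holder]]

# The analogue of GetField (UserChoice1) & " " & GetField (UserChoice2)
full_name = get_field(record, "UserChoice1") + " " + get_field(record, "UserChoice2")
```

With these sample values, `full_name` comes out as "Will Flintstone"; changing what the user typed into UserChoice1 changes which field is displayed, without touching the formula.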
Imagine, for instance, that you have a Contact table with fields called First Name, Nickname, and Last Name (among others). Sometimes contacts prefer to have their nickname appear on badges and in correspondence, and sometimes the first name is desired. To deal with this, you could create a new text field called Preferred Name and format that field as a radio button containing First Name and Nickname as the choices. When doing data entry, a user could simply check off which name should be used for correspondence. When it comes time to make a Full Name calculation field, one of your options would be the following:

```
Case ( Preferred Name = "First Name"; First Name;
Preferred Name = "Nickname"; Nickname) &
" " & Last Name
```

Another option, far more elegant and extensible, would be the following:

`GetField (Preferred Name) & " " & Last Name`

When there are only two choices, the Case function certainly isn't cumbersome. But if there were dozens or hundreds of fields to choose from, GetField clearly has an advantage.

## Building a Customizable List Report

One of the common uses of GetField is for building user-customizable list reports. It's really nothing more than an extension of the technique shown in the preceding example, but it's still worth looking at in depth. The idea is to have several global text fields where a user can select from a pop-up list of field names. The global text fields can be defined in any table you want. Remember, in calculation formulas, you can refer to a globally stored field from any table, even without creating a relationship to that table. The following example uses two tables: SalesPeople and Globals.
The SalesPeople table has the following data fields:

- SalesPersonID
- FirstName
- LastName
- Territory
- CommissionRate
- Phone
- Email
- Sales_2005
- Sales_2006

The Globals table has six global text fields named gCol1 through gCol6.

With these in place, you can now create six display fields in the SalesPeople table (named ColDisplay1 through ColDisplay6) that will contain the contents of the field referred to in one of the global fields. For instance, ColDisplay1 has the following formula:

`GetField (Globals::gCol1)`

ColDisplay2 through 6 will have similar definitions. The next step is to create a value list that contains all the fields you want the user to be able to select. The list used in this example is shown in Figure 14.1. Keep in mind that because the selection is used as part of a GetField function, the field names must appear exactly as they have been defined, and any change to the underlying field names will cause the report to malfunction.

Figure 14.1. Define a value list containing a list of the fields from which you want to allow a user to select for the custom report.

The final task is to create a layout where users can select and see the columns for their custom list report. You might want to set up one layout where the user selects the fields and another for displaying the results, but we think it's better to take advantage of the fact that in FileMaker 8, fields in header parts of list layouts can be edited. The column headers of your report can simply be pop-up lists. Figure 14.2 shows how you would set up your layout this way.

Figure 14.2. The layout for your customizable list report can be quite simple. Here, the selection fields act also as field headers.

Back in Browse mode, users can now click into a column heading and select what data they want to appear there. This one layout can thus serve a wide variety of needs.
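Stripped of layout details, the report mechanism boils down to: for each record, fetch the fields whose names the user picked. A compact Python sketch of that core idea (the rows and column picks here are invented sample data standing in for SalesPeople records and gCol1 through gCol3):

```python
rows = [
    {"FirstName": "Ann", "LastName": "Lee", "Territory": "East", "Sales_2006": 150},
    {"FirstName": "Bob", "LastName": "Ortiz", "Territory": "West", "Sales_2006": 140},
]

# What the user selected in the global "column" fields.
chosen = ["FirstName", "Territory", "Sales_2006"]

# Each display column is just the GetField idea applied to a chosen name.
report = [[row[col] for col in chosen] for row in rows]
```

Changing `chosen` reshapes every row of `report` at once, which is exactly why a single ColDisplay layout can serve so many different reports.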
Figures 14.3 and 14.4 show two examples of the types of reports that can be made.

Figure 14.3. A user can customize the contents of a report simply by selecting fields from pop-up lists in the header.

Figure 14.4. Here's another example of how a user might configure the customizable list report.

## Extending the Customizable List Report

After you have the simple custom report working, there are many ways you can extend it to add even more value and flexibility for your users. For instance, you might add a subsummary part that's also based on a user-specified field. A single layout can thus be a subsummary based on any field the user wants. One way to implement this is to add another pop-up list in the header of your report and a button to sort and preview the subsummary report. Figure 14.5 shows what your layout would look like after adding the subsummary part and pop-up list. BreakField is a calculation in the SalesPeople table that's defined as shown here:

`GetField (Globals::gSummarizeBy)`

The Preview button performs a script that sorts by the BreakField and goes to Preview mode. Figure 14.6 shows the result of running the script when Territory has been selected as the break field.

Figure 14.6. Sorting by the break field and previewing shows the results of the dynamic subsummary.

Caution

To be fully dynamic, any calculations you write using the GetField function are probably going to need to be unstored. Unstored calculations will not perform well over very large data sets when searching and sorting, so use caution when creating GetField routines that might need to handle large data sets.

## The Evaluate Function

The Evaluate function is one of the most intriguing functions in FileMaker Pro 8. In a nutshell, it enables you to evaluate a dynamically generated or user-generated calculation formula. With a few examples, you'll easily understand what this function does.
It may, however, take a bit more time and thought to understand why you'd want to use it in a solution. We start by explaining the what, and then suggest a few potential whys.

The syntax for the Evaluate function is as follows:

`Evaluate ( expression {; [field1 ; field2 ; ...]} )`

The expression parameter is a text string representing some calculation formula that you want evaluated. The optional additional parameter is a list of fields whose modification triggers the reevaluation of the expression.

For example, imagine that you have a text field named myFormula and another named myTrigger. You then define a new calculation field called Result, using the following formula:

`Evaluate (myFormula; myTrigger)`

Figure 14.7 shows some examples of what Result will contain for various entries in myFormula.

Figure 14.7. Using the Evaluate function, you can have a calculation field evaluate a formula contained in a field.

There's something quite profound going on here. Instead of having to "hard-code" calculation formulas, you can evaluate a formula that's been entered as field data. In this way, Evaluate provides an additional level of logic abstraction similar to the GetField function. In fact, if myFormula contained the name of a field, Evaluate (myFormula) and GetField (myFormula) would return exactly the same result. It might help to think of Evaluate as the big brother of GetField. Whereas GetField can return the value of a dynamically specified field, Evaluate can return the value of a dynamically specified formula.

Uses for the Evaluate Function

A typical use for the Evaluate function is to track modification information about a particular field or fields. A timestamp field defined to auto-enter the modification time is triggered anytime any field in the record is modified. There may be times, however, when you want to know the last time that the Comments field was modified, without respect to other changes to the record.
To do this, you would define a new calculation field called CommentsModTime with the following formula:

`Evaluate ("Get(CurrentTimestamp)" ; Comments)`

The quotes around Get(CurrentTimestamp) are important, and are apt to be a source of confusion. The Evaluate function expects to be fed either a quote-enclosed text string (as shown here) or a formula that yields a text string (as in the Result field earlier). For instance, if you want to modify the CommentsModTime field so that rather than just returning a timestamp, it returns something like Record last modified at: 11/28/2005 12:23:58 PM by Fred Flintstone, you would need to modify the formula to the following:

`Evaluate ("\"Record modified at: \" & Get (CurrentTimeStamp) & \" by \" & Get (AccountName)" ; Comments)`

Here, because the formula you want to evaluate contains quotation marks, you must escape them by preceding them with a backslash. For a formula of any complexity, this becomes difficult both to write and to read. There is, fortunately, a function called Quote that eliminates all this complexity. The Quote function returns the parameter it is passed as a quote-wrapped text string, with all internal quotes properly escaped.
Therefore, you could rewrite the preceding function more simply as this:

`Evaluate (Quote ("Record modified at: " & Get (CurrentTimeStamp) & " by " & Get (AccountName)) ; Comments)`

In this particular case, using the Let function further clarifies the syntax:

```
Let ( [
time = Get ( CurrentTimeStamp ) ;
account = Get ( AccountName ) ;
myExpression = Quote ( "Record modified at: " & time & " by " & account ) ] ;

Evaluate ( myExpression ; Comments )
)
```

Evaluation Errors

You typically find two other functions used in conjunction with the Evaluate function: IsValidExpression and EvaluationError.

IsValidExpression takes as its parameter an expression, and it returns a 1 if the expression is valid and a 0 if it isn't. An invalid expression is any expression that can't be evaluated by FileMaker Pro, whether due to syntax errors or other runtime errors. If you plan to allow users to type calculation expressions into fields, be sure to use IsValidExpression to test their input to be sure it's well formed. In fact, you probably want to include a check of some kind within your Evaluate formula itself:

```
Let ( valid = IsValidExpression ( myFormula ) ;
If ( not valid ; "Your expression was invalid" ; Evaluate ( myFormula ) )
)
```

The EvaluationError function is likewise used to determine whether there's some problem with evaluating an expression. However, it returns the actual error code corresponding to the problem. One thing to keep in mind, however, is that rather than testing the expression, you want to test the evaluation of the expression. So, as an error trap used in conjunction with an Evaluate function, you might have the following:

```
Let ( [ result = Evaluate ( myFormula ) ;
error = EvaluationError ( result ) ] ;
If ( error ; "Error: " & error ; result )
)
```

Customizable List Reports Redux

We mentioned previously that Evaluate could be thought of as an extension of GetField.
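The "validate first, then trap runtime errors" pattern behind IsValidExpression and EvaluationError maps naturally onto compile/try-except-style checks. A Python sketch of the same idea (the helper names are invented; this is an analogy, not FileMaker's error model):

```python
# Sketch of the "validate, then evaluate" pattern used with
# IsValidExpression and EvaluationError, transplanted to Python.

def is_valid_expression(formula):
    """Roughly IsValidExpression: 1 if the text parses, 0 otherwise."""
    try:
        compile(formula, "<formula>", "eval")
        return 1
    except SyntaxError:
        return 0

def safe_evaluate(formula, fields):
    """Roughly Let(valid = IsValidExpression(...); If(not valid; ...)),
    with a runtime-error trap playing the role of EvaluationError."""
    if not is_valid_expression(formula):
        return "Your expression was invalid"
    try:
        return eval(formula, {"__builtins__": {}}, dict(fields))
    except Exception as exc:  # runtime failures, like a nonzero EvaluationError
        return "Error: " + type(exc).__name__
```

Note the two distinct failure modes: a malformed expression is caught before evaluation, while a well-formed expression can still fail at run time.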
In an example presented in the GetField section, we showed how you could use the GetField function to create user-customizable report layouts. One of the drawbacks of that method that we didn't discuss at the time is that your field names need to be user- and display-friendly. However, there is an interesting way to get around this limitation that also happens to showcase the Evaluate function. We discuss that solution here as a final example of Evaluate.

Another use of Evaluate is presented in "Passing Multivalued Parameters," p. 438.

To recap the earlier example, imagine that you have six global text fields (gCol1 through gCol6) in a table called Globals. Another table, called SalesPeople, has demographic and sales-related data for your salespeople. Six calculation fields in SalesPeople, called ColDisplay1 through ColDisplay6, display the contents of the demographic or sales data fields, based on a user's selection from a pop-up list containing field names. ColDisplay1, for instance, has the following formula:

`GetField (Globals::gCol1)`

We now extend this solution in several ways. First, create a new table in the solution called FieldNames with the following text fields: FieldName and DisplayName. Figure 14.8 shows the data that might be entered in this table.

Figure 14.8. The data in FieldName represents fields in the SalesPeople table; the DisplayName field shows more user-friendly labels that will stand in for the actual field labels.

Earlier, we suggested using a hard-coded value list for the pop-up lists attached to the column selection fields. Now you'll want to change that value list so that it contains all the items in the DisplayName column of the FieldNames table. Doing this, of course, causes all the ColDisplay fields to malfunction. There is, for instance, no field called Ph. Number, so GetField ("Ph. Number") will not function properly.
What we want now is for the GetField function not to operate on the user's entry, but rather on the FieldName that corresponds to the user's DisplayName selection. That is, when the user selects Ph. Number in gCol1, ColDisplay1 should display the contents of the Phone field.

You can accomplish this result by creating a relationship from the user's selection over to the DisplayName field. Because there are six user selection fields, there need to be six relationships. This requires that you create six occurrences of the FieldNames table. Figure 14.9 shows the Relationships Graph after you have set up the six relationships. The six new table occurrences are named Fields1 through Fields6. Notice that there's also a cross-join relationship between SalesPeople and Globals. This relationship allows you to look from SalesPeople all the way over to the FieldNames table.

Figure 14.9. To create six relationships from the Globals table to the FieldNames table, you need to create six occurrences of FieldNames.

The final step is to alter the calculation formulas in the ColDisplay fields. Remember, instead of "getting" the field specified by the user, we now want to get the field related to the field label specified by the user. At first thought, you might be tempted to redefine ColDisplay1 this way:

`GetField (Fields1::FieldName)`

The problem with this is that the only way that ColDisplay1 updates is if the FieldName field changes. Changing gCol1 doesn't have any effect on it. This, finally, is where Evaluate comes in. To force ColDisplay1 to update, you can use the Evaluate function instead of GetField. The second parameter of the formula can reference gCol1, thus triggering the reevaluation of the expression every time gCol1 changes. The new formula for ColDisplay1 is therefore this:

`Evaluate (Fields1::FieldName ; Globals::gCol1)`

There is, in fact, still a slight problem with this formula.
Even though the calculation is unstored, the field values don't refresh onscreen. The solution is to refer not merely to the related FieldName, but rather to use a Lookup function (which is covered in depth in the next section) to explicitly grab the contents of FieldName. The final formula, therefore, is the following:

`Evaluate (Lookup (Fields1::FieldName) ; Globals::gCol1)`

There's one final interesting extension we will make to this technique. At this point, the Evaluate function is used simply to grab the contents of a field. It's quite possible, however, to add a field called Formula to the FieldNames table, and have the Evaluate function return the results of some formula that you define there. The formula in ColDisplay1 would simply be changed to this:

`Evaluate (Lookup (Fields1::Formula) ; Globals::gCol1)`

One reason you might want to do this is to be able to add some text formatting to particular fields. For instance, you might want the Sales_2005 field displayed with a leading dollar sign. Because all the ColDisplay fields yield text results, you can't do this with ordinary field formatting. Instead, in the Formula field on the Sales_2005 record, you could type the following formula:

`"$ " & Sales_2005`

There's no reason, of course, why a formula you write can't reference multiple fields. This means that you can invent new fields for users to reference simply by adding a new record to the FieldNames table. For example, you could invent a new column called Initials, defined this way:

`Left (FirstName; 1) & Left (LastName; 1)`

You could even invent a column called Percent Increase that calculates the percent sales increase from 2005 to 2006. This would be the formula for that:

`Round ((Sales_2006 - Sales_2005) / Sales_2005 * 100; 2) & " %"`

Figure 14.10 shows the contents of the FieldNames table.
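Storing formulas, not just field names, turns the report into a tiny expression engine: each column is an expression evaluated per record. A Python sketch of that idea, with invented formulas paralleling the Initials and Percent Increase examples (again an analogy, not FileMaker syntax):

```python
# Sketch: columns defined by formulas stored as data, evaluated per record.
# The FieldNames-style table maps a display name to a formula string.

formulas = {
    "Initials": "FirstName[:1] + LastName[:1]",
    "Percent Increase":
        "str(round((Sales_2006 - Sales_2005) / Sales_2005 * 100, 2)) + ' %'",
}

record = {"FirstName": "Fred", "LastName": "Flintstone",
          "Sales_2005": 120000, "Sales_2006": 135000}

def column_value(display_name, fields):
    """Evaluate the stored formula for a column against one record,
    like Evaluate(Lookup(Fields1::Formula); Globals::gCol1)."""
    allowed = {"__builtins__": {"str": str, "round": round}}
    return eval(formulas[display_name], allowed, dict(fields))
```

As in the chapter, "Initials" and "Percent Increase" exist only as rows in the formula table, not as defined fields anywhere.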
Note that for columns where you just want to retrieve the value of a field (for example, FirstName), the field name itself is the entire formula.

Figure 14.10. The expression in the Formula field is dynamically evaluated when a user selects a column in the customizable report.

This technique is quite powerful. You can cook up new columns for the customizable report just by adding records to the FieldNames table. Figure 14.11 shows an example of a report that a user could create based on the formulas defined in FieldNames. Keep in mind that Initials, $ Increase, and Percent Increase have not been defined as fields anywhere.

Figure 14.11. In the finished report, users can select from any of the columns defined in the FieldNames table, even those that don't explicitly exist as defined fields.

The Lookup Functions

In versions of FileMaker before version 7, lookups were exclusively an auto-entry option. FileMaker 7 added two lookup functions, Lookup and LookupNext, and both are useful additions to any developer's toolkit.

The two lookup functions operate quite similarly to their cousin, the auto-entry lookup option. In essence, a lookup is used to copy a related value into the current table. Lookups (all kinds) have three necessary components: a relationship, a trigger field, and a target field. When the trigger field is modified, the target field is set to some related field value.

It's important to understand the functional differences between the lookup functions and the auto-entry option. Although they behave similarly, they're not quite equivalent. Some of the key differences include the following:

• Auto-entry of a looked-up value is an option for regular text, number, date, time, or timestamp fields, which are subsequently modifiable by the user. A calculation field that includes a lookup function is not user modifiable.
• The lookup functions can be used anywhere, not just in field definitions.
For instance, they can be used in formulas in scripts, record-level security settings, and calculated field validation. Auto-entering a looked-up value is limited to field definitions.
• The lookup functions can be used in conjunction with other functions to create more complex logic rules. The auto-entry options are comparatively limited.

Lookup

The syntax of the Lookup function is as follows:

`Lookup ( sourceField {; failExpression} )`

The sourceField is the related field whose value you want to retrieve. The optional failExpression parameter is returned if there is no related record or if the sourceField is blank for the related record. If the specified relationship matches multiple related records, the value from the first related record is returned.

There are two main differences between using the Lookup function and simply referencing a related field in a formula. The first is that calculations that simply reference related fields must be unstored, but calculations that use the Lookup function to access related fields can be stored and indexed. The other difference is that changing the sourceField in the related table does not cause the Lookup to retrigger. Just as with auto-entry of a looked-up value, the Lookup function captures the sourceField as it existed at a moment in time. The alternative, simply referencing the related field, causes all the values to remain perfectly in sync: When the related value is updated, any calculations that reference it are updated as well. (The downside is that, as with all calculations that directly reference related data, such a calculation cannot be stored.)

LookupNext

The LookupNext function is designed to allow you to map continuous data elements to categorical results. It has the same effect as checking the Copy Next Lower Value or Copy Next Higher Value options when specifying an auto-entry lookup field option.
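The snapshot-versus-live distinction between Lookup and a plain related-field reference can be sketched as copying a value at trigger time versus reading through a reference each time. A minimal, dict-based illustration (names invented; conceptual only):

```python
# Sketch: Lookup-style snapshot vs. live reference to related data.

related = {"CommissionRate": 0.05}

# "Lookup": copy the related value when the trigger fires; later changes
# to the related record do not retrigger the copy.
snapshot = related["CommissionRate"]

# Plain related-field reference: read through every time, staying in sync.
def live_value():
    return related["CommissionRate"]

related["CommissionRate"] = 0.07  # the related record changes afterward
```

The snapshot keeps the old value, while the live read reflects the change, which is exactly the stored-and-indexed versus always-in-sync trade-off described above.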
Here is its syntax:

`LookupNext ( sourceField ; lower/higherFlag )`

The acceptable values for the second parameter are Lower and Higher. These are keywords and shouldn't be placed in quotes.

An example should help clarify what we mean about mapping continuous data to categorical results. Imagine that you have a table that contains information about people, and that one of the fields is the person's birth date. You want to have some calculation fields that display the person's astrological information, such as a zodiac sign and ruling planet. Birth dates mapping to zodiac signs is a good example of continuous data mapping to categorical results: A range of birth dates corresponds to each zodiac sign.

In practice, two small but instructive complications arise when you try to look up zodiac signs. The first complication is that the zodiac date ranges are expressed not as full dates, but merely as months and days (for example, Cancer starts on June 22 regardless of what year it is). This means that when you set up your zodiac table, you'll use text fields rather than date fields for the start and end dates. The second complication is that Capricorn wraps around the end of the year. The easiest way to deal with this is to have two records in the Zodiac table for Capricorn, one that spans December 22 through December 31, and the other that spans January 1 through January 20.

Figure 14.12 shows the full data of the Zodiac table. The StartDate and EndDate fields, remember, are actually text fields. The leading zeros are important for proper sorting.

Figure 14.12. The data from the Zodiac table is looked up and is transferred to a person record based on the person's birth date.

In the Person table, you need to create a calculation formula that generates a text string containing the month and date of the person's birth date, complete with leading zeros so that it's consistent with the way dates are represented in the Zodiac table.
The DateMatch field is defined this way:

`Right ("00" & Month (Birthdate); 2) & "/" & Right ("00" & Day (Birthdate); 2)`

Next, create a relationship between the Person and Zodiac tables, matching the DateMatch field in Person to the StartDate field in Zodiac. This relationship is shown in Figure 14.13.

Figure 14.13. By relating the Person table to Zodiac, you can look up any information you want based on the person's birth date.

Obviously, many birth dates aren't start dates for one of the zodiac signs. To match to the correct zodiac record, you want to find the next lower match when no exact match is found. For instance, with a birth date of February 13 (02/13), there is no matching record where the StartDate is 02/13, so the next lowest StartDate, which is 01/21 (Aquarius), should be used.

In the Person table, therefore, you can grab any desired zodiac information by using the LookupNext function. Figure 14.14 shows an example of how this data might be displayed on a person record. The formula for ZodiacInfo is as follows:

```
"Sign: " & LookupNext (Zodiac::ZodiacSign; Lower) & "¶" &
"Symbol: " & LookupNext (Zodiac::ZodiacSymbol; Lower) & "¶" &
"Ruling Planet: " & LookupNext (Zodiac::RulingPlanet; Lower)
```

Figure 14.14. Using the LookupNext function, you can create a calculation field in the Person table that contains information from the next lower matching record.

It would have been possible in the previous examples to match to the EndDate instead of the StartDate. In that case, you would simply need to match to the next higher instead of the next lower matching record.

An entirely different but perfectly valid way of approaching the problem would have been to define a more complex relationship between Person and Zodiac, in which the DateMatch was greater than or equal to the StartDate and less than or equal to the EndDate.
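LookupNext's next-lower matching is the classic "find the greatest key not exceeding the lookup value" problem, which a binary search solves directly. A Python sketch using a trimmed, hypothetical subset of the zodiac table (the bisect mechanics are an analogy for, not a description of, FileMaker's matching):

```python
import bisect

# Hypothetical subset of the Zodiac table: (StartDate as "MM/DD", sign),
# sorted by StartDate; leading zeros keep the text sort correct.
zodiac = [
    ("01/01", "Capricorn"),  # wrap-around record, as described in the text
    ("01/21", "Aquarius"),
    ("02/20", "Pisces"),
    ("03/21", "Aries"),
    ("12/22", "Capricorn"),
]

starts = [start for start, _ in zodiac]

def lookup_next_lower(date_match):
    """Next-lower match, like LookupNext(...; Lower): use the record whose
    StartDate is the greatest one not exceeding DateMatch."""
    i = bisect.bisect_right(starts, date_match) - 1
    return zodiac[i][1]
```

An exact match on a StartDate returns that record, while any other date falls back to the nearest earlier start, mirroring the 02/13 → Aquarius example above.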
Doing this would allow you to use the fields from the Zodiac table as plain related fields; no lookup would have been required. There are no clear advantages or disadvantages of this method over the one discussed previously.

Note

Other typical scenarios for using LookupNext are for things such as shipping rates based on weight ranges, price discounts based on quantity ranges, and defining cut scores based on continuous test score ranges.

Special Edition Using FileMaker 8. ISBN 0789735121. Year: 2007. Pages: 296.
### Budget Deficits and Exchange-Rate Crises

#### ABSTRACT

This paper investigates currency crises in an optimizing general equilibrium model with overlapping generations. It is shown that a rise in government budget deficits financed by future taxes generates a decumulation of external assets, leading up to a speculative attack and forcing the monetary authorities to abandon the peg.

#### KEYWORDS

Budget deficits, foreign exchange reserves, currency crises

### 1. Introduction

The financial and currency turmoils of the 1990s in many European, Latin-American and Asian countries have called into question the viability of fixed exchange-rate regimes and led to the development of new models on the causes of currency collapses. We can now identify at least two main theoretical explanations of crises, one based on the view that a collapse is the inevitable outcome of inconsistent macroeconomic policies and the other based on the view that a collapse results from self-fulfilling expectations. According to the so-called 'first-generation' models of currency crises, if a government finances its fiscal deficits by printing money in excess of money demand growth, while following a fixed exchange-rate policy, a gradual loss of reserves will occur, leading to a speculative attack against the currency that forces the abandonment of the fixed-rate regime eventually (see, for example, Krugman, 1979; Flood & Garber, 1984; Obstfeld, 1986; Calvo, 1987; van Wijnbergen, 1991; Calvo & Végh, 1999).

The 'second-generation' models of currency crises, on the other hand, show that the government's decision to give up a fixed exchange rate depends on the net benefits of pegging; hence, the fixed rate is likely to be maintained as long as the benefits of devaluing are smaller than the costs. However, changes in market beliefs about the currency sustainability can force the government to go out of the peg.
For example, if agents expect a devaluation, a speculative attack will start, forcing the government to abandon the peg, since the costs of keeping the exchange rate fixed outweigh the benefits. On the other hand, if agents expect no change in the currency rate the fixed peg will be preserved. Private expectations are self-fulfilling and multiple equilibria can occur, for given fundamentals (see, for example, Obstfeld, 1996; Velasco, 1996; Cole & Kehoe, 1996; Jeanne, 1997; Jeanne & Masson, 2000).1

More recently, the debate about the role played by fundamentals and/or self-fulfilling expectations in triggering a speculative attack has been enriched by a new set of models analyzing currency crises in the context of a change in the expectations of future policy (see, for example, Burnside et al., 2001, 2003, 2004, 2006; Daniel, 2000, 2001; Corsetti & Maćkowiak, 2006; Maćkowiak, 2007; Singh, 2009). According to this view, explanations of crises do not necessarily require a period of fundamental misalignments. All that is needed is that the path of current and future government policies becomes inconsistent with the fixed peg.

This literature typically uses static models, or models with extrinsic dynamics; that is, models where the dynamics of the system come exclusively from current or anticipated future changes in exogenous variables. Such systems are, in fact, always in steady-state equilibrium in the absence of external shocks. This is in contrast to the so-called intrinsic dynamics of the system, where the economy evolves from some initial stationary state due to, for example, the accumulation of capital stock or foreign assets.2 Models with intrinsic dynamics are useful to understand currency crises and to predict the exact time of an attack. They enable us to study the dynamics of relevant macroeconomic variables.

The purpose of this paper is to deal with this kind of dynamics using a modified version of the Yaari (1965)–Blanchard (1985) model.
Our approach has three main advantages. First, it allows a 'nondegenerate' dynamic adjustment in the basic monetary model of exchange rate determination. Second, it shows that the macroeconomic equilibrium is dependent on the timing of fiscal policies. Third, the current account is allowed to play a crucial role in transmitting fiscal disturbances to the rest of the economy.

A central finding of the paper is that a collapse may occur even as a consequence of a temporary tax cut fully financed by future taxes. In particular, following a fiscal expansion, current account imbalances and the expected depletion of foreign assets will lead to a currency crisis, forcing the monetary authorities to adopt a floating exchange-rate regime. This result is in sharp contrast with the existing literature, where monetary or fiscal policies are inconsistent or expected to be inconsistent with the exchange-rate policy. On the other hand, our theoretical results are consistent with the evidence that Asian countries that came under attack in 1997 were those that had experienced larger current account deficits on the eve of the crisis.3

The main conclusion that emerges from our analysis is that crises can occur in a flexible-price, fully optimizing framework, even when both monetary and fiscal policies are correctly designed; that is, when the intertemporal budget constraint of the government is always respected and monetary policy obeys the rules of the game. The crisis, however, is not driven by self-fulfilling expectations, as the collapse results from well-defined dynamics in the fundamentals. From this point of view, the crisis is basically real and is triggered by the macroeconomic adjustment path associated with a consistent and flexible policy rule that, nonetheless, gives rise to the conditions for a run on the central bank's foreign reserves.
The main implication of our model is that the sustainability of a fixed exchange-rate system may require not only giving up monetary sovereignty, but also strongly restraining the conduct of fiscal policy.4

The paper is organized as follows. Section 2 presents the theoretical model. Section 3 describes the dynamics of the model and the time of the speculative attack. Section 4 concludes.

1. The partition between first and second generation models is consistent with the classification scheme of currency crises proposed by Flood and Marion (1999), and Jeanne (2000). From this point of view, the so-called 'third-generation' models, elaborated to explain the more recent financial turmoils in Asia, Latin America and Russia, are considered extensions of the existing setups that explicitly include the financial side of the economy.
2. This distinction may be found, for example, in Turnovsky (1977, 1997) and Obstfeld and Stockman (1985).
3. For an exhaustive overview of the economic fundamentals in Asian countries in the years preceding the financial and currency crisis, see Corsetti et al. (1999) and World Bank (1999).
4. Similar implications are also found in Cook and Devereux (2006).

### 2. The Model

Consider a small open economy described as follows. Agents have perfect foresight and consume a single tradeable good. Domestic supply of the good is exogenous. Household's financial wealth is divided between domestic money (which is not held abroad) and internationally traded bonds. There are no barriers to trade, so that purchasing power parity (PPP) holds at all times, that is $P = SP^{*}$, where S is the nominal exchange rate (defined as units of domestic currency per unit of foreign currency), P is the domestic price level and $P^{*}$ is the foreign price level.
There is perfect capital mobility and domestic and foreign non-monetary assets are perfect substitutes; thus, uncovered interest parity (UIP) is always verified:

$$i_t = i^{*} + \frac{\dot{S}_t}{S_t}$$

where $i$ and $i^{*}$ are the domestic and foreign (constant) nominal interest rate, respectively, and $\dot{S}/S$ is the rate of exchange depreciation. In the absence of foreign inflation the external nominal interest rate is equal to the real rate.

The demand side of the economy is described by an extended version of the Yaari–Blanchard perpetual youth model with money in the utility function.5 There is no bequest motive, and financial wealth for newly born agents is assumed to be zero. The birth and death rates are the same, so that population is constant. Let $\delta$ denote the instantaneous constant probability of death and $\beta$ the subjective discount rate. For convenience, the size of each generation at birth is normalized to $\delta$; hence total population is equal to unity.

Each individual of the generation born at time $s$ at each time period $t \geq s$ faces the following maximization problem:

$$\max \int_t^{\infty} \left[\xi \log c_{s,v} + (1-\xi) \log m_{s,v}\right] e^{-(\beta+\delta)(v-t)}\,dv \qquad (1)$$

subject to the individual consumer's flow budget constraint

$$\dot{w}_{s,t} = (i^{*}-\pi^{*}+\delta)\,w_{s,t} + y - \tau_t - c_{s,t} - i_t m_{s,t} \qquad (2)$$

and to the transversality condition

$$\lim_{v\to\infty} w_{s,v}\, e^{-(i^{*}-\pi^{*}+\delta)(v-t)} = 0 \qquad (3)$$

where $\pi^{*} \equiv \dot{P}^{*}/P^{*}$ is the external inflation rate,6 while $c_s$, $y_s$, $m_s$, $w_s$ and $\tau_s$ denote consumption, endowment, real money balances, total financial wealth and lump-sum taxes, respectively. Notice that the effective discount rate of consumers is given by $\beta + \delta$.7 Each individual is assumed to receive for every period of her life an actuarially fair premium equal to a fraction $\delta$ of her financial wealth from a life insurance company operating in a perfectly competitive market. At the time of her death the remaining individual's net wealth goes to the insurance company. For simplicity, both the endowment and the amount of lump-sum taxes are age-independent; hence, individuals of all generations have the same human wealth.
In addition, the endowment is assumed to be constant over time.

The representative consumer of generation s chooses a sequence for consumption and money balances in order to maximize equation (1) subject to equations (2) and (3) for an initial level of wealth. Solving the dynamic optimization problem and aggregating the results across cohorts yield the expressions (4)–(7) for the time path of aggregate consumption, the portfolio balance condition, the aggregate flow budget constraint of the households and the transversality condition, respectively, where η ≡ (1 − ξ)/ξ. Upper-case letters denote the population aggregates of the corresponding individual variables: for any generic economic variable at the individual level, say x, the population aggregate X can be obtained as X(t) = δ ∫_{−∞}^{t} x_s(t) e^{−δ(t−s)} ds.

Letting B denote traded bonds denominated in foreign currency, total financial wealth of the private sector can be expressed as in equation (8), where we have used the PPP condition.

The public sector is viewed as a composite entity consisting of a government and a central bank. Let D denote the net stock of government debt in terms of foreign currency, given by the stock of foreign-currency denominated government bonds (DG) net of official foreign reserves (R), that is D = DG − R. Under the assumption that government bonds and foreign reserves yield the same interest rate, the public sector flow budget constraint can be expressed as in equation (9), where G is public spending, μ is the growth rate of nominal money and M ≡ MH + SR, with MH being the domestic component of the money supply (domestic credit) and SR the stock of official reserves valued in home currency. Henceforth, without loss of generality, we assume that Gt = 0.

Subtracting equation (9) from equation (6) yields the current account balance (equation (10)), where F = B − D is the net stock of the external assets of the economy.
Since D = DG − R, the net stock of external assets can also be expressed as F = B + R − DG.

### 2.1 Fiscal and Monetary Regimes

In order to close the model we need to specify the fiscal and the monetary regimes. The government is assumed to adopt a tax rule of the form given in equation (11), where Z is a transfer and α > i∗: taxes are an increasing function of net government debt adjusted for seigniorage.8 The parameter restriction on α rules out any explosive paths for the public debt.

The monetary regime is described by a fixed exchange-rate system. The central bank pegs on each date t the exchange rate at the constant level S̄, standing ready to accommodate any change in money demand in order to keep the relative price of the currency fixed; that is, to accommodate any change in money demand by selling or buying foreign currency bonds. Thus, money supply is endogenously determined according to equation (5) for a given S̄. Under a permanently fixed exchange rate, Ṡ/S = 0, so that i = i∗, and the domestic price level is P = S̄P∗. The foreign price P∗ is assumed constant and normalized to one. It follows that P = S̄ and that domestic inflation is zero.

### 2.2 Macroeconomic Equilibrium

By combining equations (4) and (5) with equations (8)–(11), under the fixed exchange-rate regime, the macroeconomic equilibrium of the model is described by the set of equations (12)–(14), given the initial conditions on net public debt, official foreign reserves, the stock of foreign assets and nominal money balances: D0 = 0, R0, F0 and M0. The last term of equation (12) results from the redistribution of financial wealth across generations. If the probability of death δ were equal to zero, the economy would collapse into the standard infinitely-lived representative agent model.9

The dynamic system described by equations (12)–(14) consists of one jump variable (C) and two predetermined, or sluggish, variables (D and F). The system must have one positive and two negative roots in order to generate a unique stable saddle-point equilibrium path.
In the Appendix it is shown that this condition is always satisfied.

5 The approach of entering money in the utility function to allow for money-holding behavior within a Yaari–Blanchard framework is common to a number of papers, including Spaventa (1987), Marini and van der Ploeg (1988), van der Ploeg (1991), and Kawai and Maccini (1990, 1995). Similar results could also be obtained by use of cash-in-advance models (see Feenstra, 1986). For a Yaari–Blanchard model with cash-in-advance, see Petrucci (2003).
6 Following Blanchard (1985), this condition ensures that consumers are relatively patient, in order to ensure that the steady-state level of aggregate financial wealth is positive.
7 This assumption ensures that savings are decreasing in wealth and that a steady-state value of aggregate consumption exists. See Blanchard (1985) for details.
8 Similar fiscal rules are frequently adopted in the literature. See Benhabib et al. (2001) among others.
9 However, for δ = 0 model stability requires that β = i∗.

### 3. Fiscal Deficits and Currency Crises

In this section, we examine the dynamic effects of fiscal policy on the macrovariables of the model to derive the links between future budget deficits and currency crises in a pegged exchange-rate economy. The policy is centered on an unanticipated lump-sum tax cut, that is, a once-and-for-all increase in Z.10 There is a fiscal deficit at time t = 0, generated by the tax cut, followed by future surpluses as debt accumulates, so as always to satisfy the intertemporal government budget constraint without recourse to seigniorage revenues. For the sake of simplicity it is assumed that up to time zero the economy has been in steady state.

According to the Salant–Henderson criterion, the peg will be abandoned when the shadow exchange rate, i.e. the exchange rate that would prevail in the economy under a flexible exchange-rate regime, is equal to the prevailing fixed rate.
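The Salant–Henderson criterion just stated can be written compactly. The notation here is assumed (S̃ for the shadow rate, S̄ for the peg, t∗ for the attack time), since the paper's own displays are not reproduced in the source:

```latex
\tilde S(t^{*}) = \bar S ,
\qquad \tilde S(t) < \bar S \quad \text{for } t < t^{*} .
```

Speculators attack at the first instant the shadow rate reaches the peg: attacking earlier, while S̃ is still below S̄, would imply buying reserves at a price above their post-collapse value, i.e. a capital loss.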
The hypothesis of perfect foresight implies that traders will exchange domestic currency for foreign currency before reserves run out, in order to avoid capital losses. The flexible exchange-rate regime that follows the fixed-rate regime's collapse is assumed to be permanent. For the sake of simplicity, we assume that following the collapse the monetary authorities will adopt a monetary targeting regime characterized by a zero growth rate of money, that is, μ = 0.

Since the analysis is based on the assumption of perfect foresight, the transitional dynamics of the economy depend on the expectations of the long-run steady-state relationships.

The steady-state equilibrium is described by the set of equations (12)–(14) when Ċ = Ḋ = Ḟ = 0, together with the portfolio balance condition (15). Let the long-run levels of consumption, public debt, net foreign assets and real money balances be denoted by overbars. The Appendix shows that the long-run effects of a tax cut are such that, in the new steady-state equilibrium, consumption, real money balances and foreign assets are below their original levels, while government debt is higher.11 Also, notice that the tax cut implies a jump in consumption (through the wealth effect) and hence an increase in real money balances at t = 0. Given the fixed nominal exchange rate S̄, the increase in real money balances is accomplished by a rise in the nominal money supply. This occurs as households accommodate any changes in money demand with portfolio shifts from foreign bonds to money (i.e., by selling foreign bonds to the central bank for domestic money). Hence, foreign-currency assets at the central bank rise on impact, whereas the net stock of the economy's external assets is unchanged, as the adjustments in B and R net out in the aggregate.
Thereafter, following the contraction in consumption and real money demand along the transitional path, a reverse portfolio shift occurs, as households force the central bank to sell off its reserves for domestic money. Therefore, the model implies that the rise in the budget deficit at t = 0 generates a continuous depletion of foreign assets (or current account deficits) along the transition to the new steady-state equilibrium, thus leaving the central bank open to speculative attacks.12 As shown below, this may indeed happen if the shadow exchange rate crosses the pegged rate at some point in time along the economy's adjustment path. At the time of the attack there will be a jump increase in net government debt and a decline in both net foreign assets and the money supply.13

### 3.1 Solution Procedure and the Time of the Speculative Attack

In order to analyze the adjustment of the economy to the initial tax cut, when the public anticipates the collapse of the exchange-rate regime at some point in the future, we need to proceed backward in time. In particular, we first solve the model under the floating exchange-rate regime and find the exact time of the attack, say t∗. We can then use the results to solve the model under the fixed exchange-rate regime, that is, for 0 < t < t∗. Notice that perfect foresight requires that all jump variables be continuous at t = t∗.

Consider first the model under the floating regime, which applies for t > t∗. The relevant equations of the system are equations (13)–(14) together with equations (17) and (18), where Φ = M/S. Since under the floating regime it is only the exchange rate that responds to variations in money demand, all the observed changes in the level of real money balances are due to changes in the exchange rate, that is, Φ̇/Φ = −Ṡ/S. From equation (5), equation (18) immediately follows.
Notice that the system of equations describing the economy under the peg, (12)–(14), and under the float, (13), (14), (17) and (18), have the same steady-state solution. As shown in the Appendix, the system of differential equations (13), (14), (17) and (18) is saddle-path stable. The solution can easily be obtained by using the initial conditions on the predetermined variables, D, F and M, under a zero level of official reserves. In this way, it is possible to compute the time path for the floating exchange rate before and after the speculative attack. According to the Salant–Henderson criterion, the currency crisis will occur at the point where the shadow exchange rate (i.e., the exchange rate that would prevail in the economy after the fiscal expansion under a flexible regime if the official reserves had fallen to zero) is equal to the fixed rate, that is, S̃ = S̄.

The time path of the shadow floating rate S̃(t) is given by equation (19) (see the Appendix). Using this result and imposing the condition S̃(t∗) = S̄, one can compute the time of the speculative attack. Clearly, the time of the attack also depends on the size of δ: the larger δ, the sooner the crisis will occur. For δ = 0, instead, the initial fiscal expansion will not give rise to any change in consumption, money demand or the current account, so that equation (19) becomes an identity.

After the attack the current exchange rate will depreciate, steadily converging toward a steady-state value whose level crucially depends on the initial tax cut,14 with Δ ≡ (α − i∗)(β + δ − i∗)(i∗ + δ)i∗ > 0.
Of course, the attack will take place only if the initial tax cut is sufficiently large to bring about a depreciation of the shadow exchange rate such that at a certain point S̃ = S̄. By contrast, the crisis will not take place if the shadow exchange rate converges towards a level below the peg. It can easily be shown that the currency crisis will occur only if the initial tax reduction Z exceeds a critical threshold: the lower the probability of death δ, the larger the initial tax cut able to trigger a crisis.15 Similarly, we can compute the critical level of the initial stock of reserves. For a given Z, speculators will attack the domestic currency only if the initial stock of reserves lies below a critical threshold, and it can be shown that this threshold is increasing in the probability of death δ.16 Proceeding backward in time, it is now possible to describe the time path of the economy under the fixed regime immediately after the fiscal expansion by solving the system of differential equations (12)–(14), given the initial conditions on the predetermined variables, and by imposing the condition associated with the assumption of perfect foresight, which requires that consumption be continuous at the time of the attack t∗. The analytical solutions are described in detail in the Appendix.

### 3.2 Numerical Example

The adjustment process can be better described by making use of a simple numerical example.17

Consider Figures 1–4 and assume, for example, an unanticipated tax cut at t = 0. Figures 1 and 2 plot the time paths for consumption and the nominal exchange rate (bold lines) and for their related 'shadow' levels before the attack occurring at time t∗.
There is, on impact, an increase in consumption, while the shadow exchange rate, after an initial appreciation, starts depreciating steadily. Current generations profit from the lump-sum tax cut, since they share the burden of future increases in taxation with as-yet-unborn individuals.

The dynamics of net foreign assets are depicted in Figure 3. Following the tax cut, the economy starts reducing its holdings of foreign assets to finance the higher consumption level along the transitional path. Agents obtain the additional money needed for consumption by initially selling Bt for Mt at the central bank's window. Next, as consumption and real money holdings decline, they exchange domestic money for foreign reserves, using up foreign-currency assets at the central bank. At time t∗ there is an abrupt decline in net foreign assets as a consequence of the speculative attack and the depletion of the remaining official reserves. The peg is abandoned and the economy shifts to a flexible exchange-rate regime, where the money supply is under the control of the monetary authorities adopting a simple targeting rule, as already mentioned; foreign assets decline towards the new long-run equilibrium, and the nominal exchange rate depreciates until the current account is brought back into equilibrium.

Figure 4 plots the time path of net government debt, which increases gradually, jumps upward at the time of the attack because of the sudden exhaustion of official reserves, and then continues increasing, converging to a new long-run equilibrium above its starting level.

Under the baseline calibration, Figures 5–8 show how the time of the attack t∗ is affected by the initial level of reserves R0, the amount of the tax cut Z, the responsiveness of taxes to public debt α and the liquidity preference parameter η, respectively. The time of the attack depends positively on the level of official foreign reserves and on the responsiveness of taxes to public
debt α, but negatively on the magnitude of the fiscal expansion and on the liquidity preference parameter.

Notice that in this model a crisis may occur even when the fiscal budget is showing a surplus.18 This is because, along the transitional path, a sequence of fiscal surpluses will replace the initial sequence of deficits in order to satisfy the government's intertemporal budget constraint.

10 The effects of government deficits in optimizing models can be found, for example, in Frenkel and Razin (1987), Obstfeld (1989), and Turnovsky and Sen (1991).
11 It can easily be shown that if δ = 0, the fiscal expansion has no long-run effects on consumption, foreign assets or real money balances. That is because when the probability of death is equal to zero, Ricardian equivalence is restored and the time profile of lump-sum taxes does not affect consumption. In such circumstances a tax cut will not give rise to any current account imbalance or to any change in the demand for real money balances, which is why the currency crisis will not take place.
12 Strong empirical support for a positive relationship between the current account deficit and current and expected future budget deficits is found by Piersanti (2000). See Baxter (1995) for a more general discussion of this issue.
13 Clearly, since movements in fundamentals, and hence the speculative attack, are predictable in this model, the government could avoid the sharp loss in reserves by abandoning the peg just before the attack.
However, if the government did so, then speculators would incorporate this into their strategies, introducing strategic interactions and uncertainty into the time of collapse. We do not consider these issues here, which would greatly complicate the model and make it intractable (as there is no equilibrium in pure strategies) without adding anything new to Pastine's (2002) results; nor do we consider the optimal exit-time issue as in Rebelo and Végh (2006), which nonetheless also involves strategic interactions. We simply focus on the less intricate and narrower target of describing the macroeconomic adjustment path associated with a consistent and more flexible policy rule – in contrast to those considered in the existing literature – which gives rise to the possibility, but not the certainty, of a currency crash, i.e., on the conditions for an attack.
14 Recall that under the floating regime all the observed variations in the level of real money balances are due only to changes in the exchange rate.
15 Letting Z∗ denote the critical level of the tax cut above which the peg will collapse, it can easily be shown that Z∗ is decreasing in δ.
16 Letting R∗0 be the critical stock of reserves below which the crisis will occur, it can be shown that R∗0 is increasing in δ.
17 Figures 1–4 illustrate a numerical example based on the following parametrization: i∗ = 0.03, η = 0.25, δ = 0.02, α = 0.04, Y = 1, G = D0 = 0, F0 = 0.5 and R0 = 0.2. The rate of time preference, β = 0.024, and the initial stock of nominal money balances, M0 = 8.46, are implied. The fiscal expansion consists of an increase in Z of 0.05 from zero.
18 This result is consistent with the evidence that in most Asian countries, during the years preceding the crisis, fiscal imbalances were either in surplus or in modest deficit. See World Bank (1999).

### 4. Conclusion

In this paper we have used an optimizing general equilibrium model with overlapping generations to investigate the relation between fiscal deficits and currency crises.
It is shown that a rise in current and expected future budget deficits generates a depletion of foreign reserves, leading up to a currency crisis.

Crises can thus occur even when policies are 'correctly' designed; that is, with a monetary policy fully committed to maintaining the peg and the fiscal authorities respecting the intertemporal government budget constraint. The sustainability of fixed exchange-rate systems may thus require not only giving up monetary sovereignty but also imposing a more severe degree of fiscal discipline than implied by the standard solvency conditions.

The authors are very grateful to two anonymous referees for their extremely useful suggestions. The authors also wish to thank Fabio C. Bagliano, Marianne Baxter, Leonardo Becchetti, Anna Rita Bennato, Lorenzo Bini Smaghi, Giancarlo Corsetti, Andrea Costa, Alberto Dalmazzo, Bassam Fattouh, Laurence Harris, Fabrizio Mattesini, Pedro M. Oviedo, Alessandro Piergallini, Cristina M. Rossi, Pasquale Scaramozzino, Salvatore Vinci and participants at the II RIEF Conference on International Economics and Finance, University of Rome 'Tor Vergata', for helpful comments on previous versions of this paper. The financial support of the MIUR (Grant No. 2007P8MJ7P_003) is gratefully acknowledged.
The usual disclaimer applies.

Figure 1. Exchange rate dynamics, S.
Figure 2. Consumption dynamics, C.
Figure 3. Net foreign assets dynamics, F.
Figure 4. Net government debt dynamics, D.
Figure 5. The time of the attack and R0.
Figure 6. The time of the attack and Z.
Figure 7. The time of the attack and α.
Figure 8. The time of the attack and η.
[ null, "http://oak.go.kr/central/images/2015/cc_img.png", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0001.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0002.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0003.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0004.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0005.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0006.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0007.jpg", null, "http://oak.go.kr//repository/journal/16825/NRF002_2011_v25n2_285_f0008.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0001.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0002.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0003.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0004.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0005.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0006.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0049.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0007.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0008.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0009.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0010.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0011.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0012.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0013.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0014.jpg", null, 
"http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0015.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0016.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0017.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0018.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0019.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0020.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0021.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0022.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0023.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0024.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0025.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0026.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0027.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0028.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0029.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0030.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0031.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0032.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0033.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0034.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0035.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0036.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0037.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0038.jpg", null, 
"http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0039.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0040.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0041.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0042.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0043.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0044.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0045.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0046.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0047.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_e0048.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0001.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0002.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0003.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0004.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0005.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0006.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0007.jpg", null, "http://oak.go.kr/repository/journal/16825/NRF002_2011_v25n2_285_f0008.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90963465,"math_prob":0.94903624,"size":29377,"snap":"2020-45-2020-50","text_gpt3_token_len":6215,"char_repetition_ratio":0.14353317,"word_repetition_ratio":0.049675122,"special_character_ratio":0.21166219,"punctuation_ratio":0.10123223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95811725,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T09:58:48Z\",\"WARC-Record-ID\":\"<urn:uuid:434cb426-7bfa-465c-a703-2a1f82b57ca8>\",\"Content-Length\":\"429311\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e973df4-f7db-4619-ae34-5373e5b89505>\",\"WARC-Concurrent-To\":\"<urn:uuid:106f7284-121c-4b59-aa84-1fcf66708528>\",\"WARC-IP-Address\":\"124.137.58.153\",\"WARC-Target-URI\":\"http://oak.go.kr/central/journallist/journaldetail.do?article_seq=16825\",\"WARC-Payload-Digest\":\"sha1:WNL56QADN3MEMJRVHKOYJOHJCZA67PVR\",\"WARC-Block-Digest\":\"sha1:PB7O5QV4EXPUCHYCBZWKAAKYATUSHESH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107910204.90_warc_CC-MAIN-20201030093118-20201030123118-00461.warc.gz\"}"}
https://www.mql5.com/en/code/20927
[ "Interesting script?\nSo post a link to it -\nlet others appraise it\n\nYou liked the script? Try it in the MetaTrader 5 terminal", null, "# WEVOMO - indicator for MetaTrader 5\n\nViews:\n1494\nRating:\nPublished:\n2018.06.18 16:25\nUpdated:\n2018.10.09 11:05\n\nThe indicator displays in the price chart two moving averages: Volume Move-Adjusted Moving Average and Weight Volume Move-Adjusted Moving Average, calculated by the formula:\n\n```VOMOMA = (MOMA + VOMA)/2\nWEVOMO = (MOMA + VOMA + WMA)/3\n```\n\nwhere:\n\n```MOMA[i] = (Close[i-Period+1]*Diff[i-Period+1] + Close[i-Period+2]*Diff[i-Period+2] +... + Close[i]*Diff[i])/Sum(Diff)\nVOMA[i] = (Close[i-Period+1]*Volume[i-Period+1] + Close[i-Period+2]*Volume[i-Period+2] + ... + Close[i]*Volume[i])/Sum(Volume)\nWMA[i] = (Close[i-Period+1]*1 + Close[i-Period+2]*2 + ... + Close[i]*Period)/LSum\nLSum = (Period+1)*Period/2\nDiff[i] = Abs(Close[i] - Close[i-1])\n```\n\nThe indicator has three input parameters:\n\n• Period - calculation period;\n• Show Volume Move MA - display Volume Move-Adjusted MA (VOMOMA);\n• Show Weighted Volume Move MA - display Weight Volume Move-Adjusted MA (WEVOMO).", null, "Translated from Russian by MetaQuotes Software Corp.\nOriginal code: https://www.mql5.com/ru/code/20927", null, "Wiseman_HTF\n\nIndicator Wiseman with the timeframe selection option in its input parameters.", null, "RVRResistance\n\nAn indicator of the volume / bar price range ratio with a signal line and with the option of identifying the maximum/minimum price change resistance.", null, "SSD_With_Histogram\n\nA slow stochastic with a histogram.", null, "Smart_Money_Pressure_Oscillator\n\nSmart Money Pressure Oscillator" ]
[ null, "https://c.mql5.com/i/code/indicator.png", null, "https://c.mql5.com/18/68/WEVOMO.png", null, "https://c.mql5.com/i/code/indicator.png", null, "https://c.mql5.com/i/code/indicator.png", null, "https://c.mql5.com/i/code/indicator.png", null, "https://c.mql5.com/i/code/indicator.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7128863,"math_prob":0.88854045,"size":1113,"snap":"2020-10-2020-16","text_gpt3_token_len":323,"char_repetition_ratio":0.17853923,"word_repetition_ratio":0.0,"special_character_ratio":0.27583107,"punctuation_ratio":0.10837439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9845416,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-04T10:27:38Z\",\"WARC-Record-ID\":\"<urn:uuid:aaa45580-b8fa-457d-9ff8-f84c6dbf5729>\",\"Content-Length\":\"32053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:325269c3-cf27-4260-a710-d853b7b7f9ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:829b0984-a4a1-4ca6-a3c9-4825ce8d31f4>\",\"WARC-IP-Address\":\"78.140.180.100\",\"WARC-Target-URI\":\"https://www.mql5.com/en/code/20927\",\"WARC-Payload-Digest\":\"sha1:PEDXLQS6YDG2RUXQY3P2Q5NKHEQB4OKY\",\"WARC-Block-Digest\":\"sha1:LC3NGIDA2USGU62O5NKBCWIGK55HDKUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370521574.59_warc_CC-MAIN-20200404073139-20200404103139-00252.warc.gz\"}"}
https://www.geeksforgeeks.org/tcs-nqt-coding-questions-how-coding-task-evaluated-in-tcs-nqt/
[ "# TCS NQT Coding Questions & How Coding Task Evaluated in TCS NQT\n\n• Difficulty Level : Medium\n• Last Updated : 24 Jan, 2022\n\nTo know more about the TCS NQT: TCS NQT – National Qualifier Test\n\n1. How to solve the Coding Section in TCS NQT 2022?\n\nSTEP by STEP guide to solving the coding section in TCS NQT 2022.\n\nSTEP 1: Understanding the storyYou are given 30 minutes to write a program and clear all the test cases. Of this, spend the first five to six minutes on understanding the story given and figuring out what needs to be calculated. If your English is not very strong, you need to spend more time in reading and understanding the task. Don’t jump into typing your program before you understand what is going on.\n\nThese are the points that you need to understand from the task.\n\n• There are coins of several denominations.\n• Initially, there are an Even number of coins of each type.\n• One of the coins is lost. When you remove 1 from an even number, we get an Odd number. So, finally we have many denominations occurring an Even number of times but one particular denomination occurring an Odd number of times.\n• The first input we get is the total number of coins. Let’s call this N.\n• In the second line we get only N-1 values because one of the coins is missing.\n• The output is the denomination (value) of the missing coin.\n\nSTEP 2: Reading the inputs Once you know what is needed, we need to think in terms of how you write the code. At this time we don’t know how to find the answer. But we know how to take the inputs. So, the first part of the program is to read the value of N, declare an array and read N-1 values into the array. Even though the task keeps changing, reading a set of values in a loop is required in many programs. 
You will need to practice several programs so that you won't waste your time on simple tasks like this in the actual exam.

STEP 3: Cracking the core logic

The next step is to figure out how to convert this data into an answer. By the time we reach this step in the exam, we should have about 20 minutes left, so let's race against the time. Consider the example given in the question. We see that the denomination Rs.2 appears two times, Rs.1 appears two times and Rs.5 appears three times. Why is that? We know that originally every denomination was present an even number of times, but one of the coins got lost, and that is the denomination which now appears an odd number of times. In this example we can infer that originally there were four Rs.5 coins but one of them fell down; that is why we finally have three Rs.5 coins instead of four. Here is a direct way of solving this problem. Towards the end of this article, we will see a more efficient way of solving the same problem.

Method 1:

• Task 1: In an outer loop, pick one coin value at a time. Let's call it a[i].
• Task 2: In an inner loop, go through each coin in the list and count how many times a[i] occurs. For this, first initialize count to zero, then increment it whenever a[j] == a[i] is true.
• Task 3: Once the inner loop is completed, the value of count tells us how many times a[i] occurs in the array.
• Task 4: Check whether count is odd. We are told that exactly one denomination occurs an odd number of times. Once we find it, we can print it and exit the program.

When we divide an odd number by 2, we get 1 as the remainder. This can be checked using the % operator.

Method 2: Once we realize that we need to find the number occurring an odd number of times, some of us can come up with an alternative method to identify it. This method is based on the XOR operation, a bit-wise operation performed using the ^ symbol.
Here is the truth table for the XOR operation:

```
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
```

From this truth table we can conclude that N ^ N = 0 and 0 ^ N = N. Say we have three integers A, B and C. Here are a few interesting results of XOR:

```
A ^ A = 0
A ^ B ^ A = A ^ A ^ B = B
A ^ B ^ C ^ B ^ A ^ C = 0
```

• If we XOR a series of numbers together, we notice some interesting results.
• If a particular number (say A) occurs an even number of times, the XOR of all its occurrences is 0, i.e. A ^ A ^ ... ^ A = 0 when A occurs an even number of times.
• If a number occurs an odd number of times, the XOR of all its occurrences is the number itself, so A ^ A ^ ... ^ A = A when A occurs an odd number of times.
• The order in which we apply XOR doesn't matter: A ^ B = B ^ A.
• Putting these properties together: when we XOR all the inputs, every number that occurs an even number of times contributes 0. If A is the number that occurs an odd number of times, its occurrences contribute A and everything else contributes 0, so the final result is A ^ 0, which equals A.

Once we know this, we can go ahead and implement it in code. We just need a temporary variable to store the result. Let's call it E and initialize it to 0. We then go through a loop and XOR all the given elements into E. The final value of E is the answer we need.

Step 4: Validating the code

The remaining time in the exam can be spent verifying that the code clears all the test cases. In case it fails, try giving your own inputs to find out where it is failing, and then correct the algorithm. In TCS NQT 2022, you might not have a big penalty if your code is slow.
So, it makes a lot of sense for you to write working code before you worry about efficiency.

2. What is TCS NQT 2022?

TCS NQT 2022 (https://learning.tcsionhub.in/hub/national-qualifier-test/) is the exam conducted by TCS for recruiting freshers who are going to graduate in the year 2022. Make sure that you understand the eligibility criteria for the test, the syllabus and the test pattern.

3. What is the Coding section in NQT 2022?

The coding section of the TCS NQT 2022 has one question, usually in the form of a case study or a story. At the end of the caselet, you are asked to write a program which takes the input in a particular format and produces the output in the required format.

4. What are the other rules for the Coding section?

You can attempt the coding task in any of the 5 languages offered by TCS: C, C++, Java, Python and Perl. You have a total of 30 minutes to solve the question. These are the most important points to know before you attempt the coding section of TCS NQT 2022.

5. How is the coding task evaluated?

The TCS NQT 2022 is attempted by lakhs of students, so the examiners are not going to read everyone's code. Instead, they use a computerized evaluation which automatically assigns a score based ONLY ON THE OUTPUT. The main part of the coding section is "Test Cases": your code is validated against test cases, and you are given partial marks based on how many test cases it clears.

6. Do I need to make my program so efficient?

While it is always good to write efficient programs, you need to control your greed. Before you worry about efficiency, ensure that your program clears at least some of the test cases. Finally, nobody reads your code; they just look at the number of test cases cleared. Make sure that your code clears as many test cases as possible. However, if you know an efficient method, there is no reason not to use it. Go ahead, give it your best shot. This is your playground.

7.
In which language should I write the code?

The advantage of choosing C, C++ or Java over scripting languages is that the compiler is very strict. The probability of mistakes being caught by the compiler is very high, which means the probability of passing the test cases is also higher. In scripting languages like Python or Perl, on the other hand, many mistakes are not caught until the code actually runs, so some of your coding errors may survive into the evaluation stage. As we keep saying, don't try to learn a new programming language now. Stick with a language you already know, and build your confidence by practicing multiple tasks in it.

8. Is it necessary to Validate Against Test Cases?

TCS NQT 2022 has introduced the facility of validating your solution code against the actual test cases. Make sure that your code clears them, and keep tweaking it until it does. However, in the last 2 minutes you need to ensure that your code is stable: stop making further changes and read your code a few times to make sure you haven't done anything careless in your excitement.

9. Where can I get more questions like this?

You can prepare yourself for the TCS NQT with us by following this course: TCS NQT Preparation Test Series." ]
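Putting Method 2 from the walkthrough together, a minimal Python sketch might look like the following. The function name `find_missing_coin` and the sample data are ours for illustration; the sample mirrors the Rs.2/Rs.1/Rs.5 example from the article, not actual exam input.

```python
# Hedged sketch of Method 2 (XOR): denominations occurring an even
# number of times cancel to 0 under XOR, leaving the denomination
# with an odd count, i.e. the coin that was lost.
def find_missing_coin(coins):
    e = 0            # the temporary variable E from the walkthrough
    for value in coins:
        e ^= value   # XOR each coin value into the running result
    return e

sample = [2, 2, 1, 1, 5, 5, 5]    # Rs.5 originally had 4 coins; one is lost
print(find_missing_coin(sample))  # prints 5
```

This is a single pass over the input, so unlike the nested loops of Method 1 it runs in linear time, which is helpful when the test cases are large.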
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93398494,"math_prob":0.9481603,"size":8753,"snap":"2022-27-2022-33","text_gpt3_token_len":2105,"char_repetition_ratio":0.13144359,"word_repetition_ratio":0.029218843,"special_character_ratio":0.23820405,"punctuation_ratio":0.104210526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888412,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T09:49:42Z\",\"WARC-Record-ID\":\"<urn:uuid:4aca3994-e785-48a9-b6bb-60fa054f2eea>\",\"Content-Length\":\"129920\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68e1a8f8-f8b2-4162-8e77-06b668e2f684>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c45626d-ebd0-4156-94e3-a88bd504ccdb>\",\"WARC-IP-Address\":\"23.45.180.234\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/tcs-nqt-coding-questions-how-coding-task-evaluated-in-tcs-nqt/\",\"WARC-Payload-Digest\":\"sha1:TBIGWTK4CEY3VKCTN66Y4TKCY5AYFTMF\",\"WARC-Block-Digest\":\"sha1:BXDQMAEUKCC6W65733JACYMKAUF47TEM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215805.66_warc_CC-MAIN-20220703073750-20220703103750-00184.warc.gz\"}"}
https://zbmath.org/authors/?q=ai%3Ajiang.yaolin
[ "## Jiang, Yaolin\n\nCompute Distance To:\n Author ID: jiang.yaolin", null, "Published as: Jiang, Yao-Lin; Jiang, Yaolin; Jiang, Yao-lin; Jiang, Yao Lin; Jiang, Y.-L. more...less Further Spellings: 蒋耀林 Homepage: http://gr.xjtu.edu.cn/web/yljiang/english External Links: MGP\n Documents Indexed: 234 Publications since 1991 Co-Authors: 92 Co-Authors with 206 Joint Publications 3,593 Co-Co-Authors\nall top 5\n\n### Co-Authors\n\n 9 single-authored 17 Xu, Kangli 15 Li, Hongli 13 Teng, Zhi-dong 13 Yang, Yunbo 12 Xiao, Zhihua 11 Ding, Xiaoli 11 Wang, Xiaolong 10 Zhang, Long 9 Song, Bo 8 Chen, Cheng 8 Chen, Fang 8 Wang, Zuolei 7 Chen, Richard M. M. 7 Miao, Zhen 7 Qi, Zhen-Zhong 7 Wing, Omar 7 Xu, Zongben 7 Yang, Ping 6 Zhang, Hui 5 Chen, Haibao 5 Gao, Jing 5 Hu, Cheng 5 Kong, Xu 4 Chen, Chunyue 4 Kang, Yanmei 4 Kong, Qiongxiang 4 Lu, Yi 4 Peng, Guojun 3 Deng, Dingwen 3 Jia, Jiteng 3 Li, Yanpeng 3 Lin, Xiaolin 3 Sun, Wei 3 Wang, Yan 3 Wang, Zhaohong 3 Yang, Jun-Man 3 Yang, Zhixia 3 You, Zhaoyong 3 Yuan, Jiawei 2 Gander, Martin Jakob 2 Huang, Zu-Lan 2 Jiang, Haijun 2 Li, Jicheng 2 Li, Zhen 2 Liang, Dong 2 Liu, Wei 2 Roach, Gary Francis 2 Shi, Xuerong 2 Wang, Chen-Ye 2 Wang, Wei 2 Wang, Weigang 2 Wang, Yuying 2 Xu, Hong-Kun 1 Antillon, Armando 1 Bao, Yangjuan 1 Chen, Haodong 1 Chen, Yonghong 1 Chen, Yuxian 1 Feng, Xiaomei 1 Gao, Ning 1 Huang, Fenfen 1 Jiang, Chen 1 Kao, Yonggui 1 Kohn, Kathlén 1 Li, Baocheng 1 Li, Changpin 1 Li, Zi-Xue 1 Liang, Dongyue 1 Lin, Xiaola 1 Liu, Gui-Rong 1 Liu, Qingquan 1 Liu, Wei 1 Liu, Xianliang 1 Liu, Yaowu 1 Lu, Hongliang 1 Mei, Kenneth K. 
1 Mu, Hong-liang 1 Muhammadhaji, Ahmadjan 1 Parmananda, Punit 1 Qiu, Zhiyong 1 Song, Qiu-Yan 1 Wang, Dengshan 1 Wang, Hongyong 1 Wang, Xiaoqin 1 Wang, Xiaoyun 1 Wang, Zhen 1 Wei, Guangsheng 1 Wen, Xiaoyong 1 Xie, Jianqiang 1 Xu, Jianxue 1 Xu, Jiaojiao 1 Xu, Ling 1 Xu, Wei-Wei 1 Yang, Jingyu 1 Yong, Xie 1 You, Zaoyong 1 Yu, Bo-Hao 1 Yu, Qingjian 1 Zeng, Wei 1 Zhang, Wei ...and 2 more Co-Authors\nall top 5\n\n### Serials\n\n 15 Applied Mathematics and Computation 13 Journal of the Franklin Institute 9 Numerical Algorithms 9 International Journal of Computer Mathematics 8 Journal of Computational Mathematics 8 International Journal of Systems Science. Principles and Applications of Systems and Integration 6 Computers & Mathematics with Applications 6 Journal of Computational and Applied Mathematics 5 International Journal of Control 5 Mathematical and Computer Modelling of Dynamical Systems 4 Chaos, Solitons and Fractals 4 IEEE Transactions on Circuits and Systems. I: Fundamental Theory and Applications 4 Journal on Numerical Methods and Computer Applications 4 Communications in Nonlinear Science and Numerical Simulation 4 Journal of Applied Mathematics and Computing 4 International Journal of Biomathematics 4 East Asian Journal on Applied Mathematics 3 International Journal of Modern Physics B 3 SIAM Journal on Numerical Analysis 3 Applied Numerical Mathematics 3 IMA Journal of Mathematical Control and Information 3 Applied Mathematical Modelling 3 Nonlinear Dynamics 3 Chinese Journal of Engineering Mathematics 2 Physics Letters. 
A 2 IEEE Transactions on Automatic Control 2 Mathematics and Computers in Simulation 2 Numerical Mathematics 2 Mathematica Numerica Sinica 2 Systems & Control Letters 2 Applied Mathematics Letters 2 SIAM Journal on Matrix Analysis and Applications 2 Mathematica Applicata 2 Journal of Scientific Computing 2 SIAM Journal on Scientific Computing 2 Journal of Difference Equations and Applications 2 Fractional Calculus & Applied Analysis 2 Discrete Dynamics in Nature and Society 2 Nonlinear Analysis. Real World Applications 2 Nonlinear Analysis. Modelling and Control 2 Computational Methods in Applied Mathematics 2 Scientia Sinica. Mathematica 2 Asian Journal of Control 1 Bulletin of the Australian Mathematical Society 1 Discrete Mathematics 1 Indian Journal of Pure & Applied Mathematics 1 Journal of Mathematical Physics 1 Linear and Multilinear Algebra 1 Mathematical Methods in the Applied Sciences 1 Mathematics of Computation 1 Acta Mathematica Sinica 1 Journal of the Korean Mathematical Society 1 Le Matematiche 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Numerical Functional Analysis and Optimization 1 Proceedings of the American Mathematical Society 1 Proceedings of the Edinburgh Mathematical Society. Series II 1 Journal of Xi’an Jiaotong University 1 Acta Mathematicae Applicatae Sinica 1 Journal of Mathematical Research & Exposition 1 Applied Mathematics and Mechanics. (English Edition) 1 Chinese Annals of Mathematics. Series A 1 Journal of Engineering Mathematics (Xi’an) 1 Acta Mathematicae Applicatae Sinica. 
English Series 1 Journal of Biomathematics 1 Numerical Methods for Partial Differential Equations 1 Japan Journal of Industrial and Applied Mathematics 1 Applications of Mathematics 1 Neurocomputing 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Journal of Nonlinear Science 1 Numerical Linear Algebra with Applications 1 Pure and Applied Mathematics 1 Engineering Analysis with Boundary Elements 1 Communications on Applied Nonlinear Analysis 1 Journal of Mathematical Chemistry 1 Differential Equations and Dynamical Systems 1 Taiwanese Journal of Mathematics 1 Journal of Combinatorial Optimization 1 Chaos 1 European Journal of Mechanics. A. Solids 1 International Journal of Nonlinear Sciences and Numerical Simulation 1 Journal of Nonlinear Mathematical Physics 1 Mathematical Modelling and Analysis 1 Journal of Applied Mathematics 1 Acta Mathematica Scientia. Series A. (Chinese Edition) 1 Acta Mathematica Scientia. Series B. (English Edition) 1 Waves in Random and Complex Media 1 Frontiers of Mathematics in China 1 Applied Mathematical Sciences (Ruse) 1 Journal of Physics A: Mathematical and Theoretical 1 Communications in Theoretical Physics 1 IET Control Theory & Applications 1 Advances in Applied Mathematics and Mechanics 1 Communication on Applied Mathematics and Computation 1 Analysis and Mathematical Physics 1 Numerical Algebra, Control and Optimization 1 Journal of Theoretical Biology 1 Operations Research Transactions 1 Nonlinear Analysis. 
Theory, Methods & Applications ...and 1 more Serials\nall top 5\n\n### Fields\n\n 104 Numerical analysis (65-XX) 67 Ordinary differential equations (34-XX) 65 Systems theory; control (93-XX) 38 Partial differential equations (35-XX) 22 Biology and other natural sciences (92-XX) 15 Dynamical systems and ergodic theory (37-XX) 13 Linear and multilinear algebra; matrix theory (15-XX) 9 Operator theory (47-XX) 8 Fluid mechanics (76-XX) 7 Special functions (33-XX) 7 Integral equations (45-XX) 6 Information and communication theory, circuits (94-XX) 5 Probability theory and stochastic processes (60-XX) 5 Computer science (68-XX) 4 Real functions (26-XX) 4 Mechanics of deformable solids (74-XX) 4 Optics, electromagnetic theory (78-XX) 3 Algebraic geometry (14-XX) 3 Statistics (62-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Combinatorics (05-XX) 2 Difference and functional equations (39-XX) 2 Approximations and expansions (41-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Global analysis, analysis on manifolds (58-XX) 2 Mechanics of particles and systems (70-XX) 2 Classical thermodynamics, heat transfer (80-XX) 2 Operations research, mathematical programming (90-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 General and overarching topics; collections (00-XX) 1 Functions of a complex variable (30-XX) 1 Integral transforms, operational calculus (44-XX) 1 Differential geometry (53-XX) 1 Quantum theory (81-XX)\n\n### Citations contained in zbMATH Open\n\n148 Publications have been cited 932 times in 607 Documents Cited by Year\nDynamical analysis of a fractional-order predator-prey model incorporating a prey refuge. Zbl 1377.34062\nLi, Hong-Li; Zhang, Long; Hu, Cheng; Jiang, Yao-Lin; Teng, Zhidong\n2017\nA generalization of the inexact parameterized Uzawa methods for saddle point problems. 
Zbl 1159.65034\nChen, Fang; Jiang, Yao-Lin\n2008\nTime domain model order reduction of general orthogonal polynomials for linear input-output systems. Zbl 1369.93117\nJiang, Yao-Lin; Chen, Hai-Bao\n2012\nGlobal Mittag-Leffler stability of coupled system of fractional-order differential equations on network. Zbl 1410.34025\nLi, Hong-Li; Jiang, Yao-Lin; Wang, Zuolei; Zhang, Long; Teng, Zhidong\n2015\nOn time-domain simulation of lossless transmission lines with nonlinear terminations. Zbl 1116.78029\nJiang, Yao-Lin\n2004\nBifurcations of a Holling-type II predator-prey system with constant rate harvesting. Zbl 1175.34062\nPeng, Guojun; Jiang, Yaolin; Li, Changpin\n2009\nAnalytical solutions for the multi-term time-space fractional advection-diffusion equations with mixed boundary conditions. Zbl 1260.35241\nDing, Xiao-Li; Jiang, Yao-Lin\n2013\nPinning adaptive and impulsive synchronization of fractional-order complex dynamical networks. Zbl 1372.34086\nLi, Hong-Li; Hu, Cheng; Jiang, Yao-Lin; Wang, Zuolei; Teng, Zhidong\n2016\nA general approach to waveform relaxation solutions of nonlinear differential-algebraic equations: the continuous-time and discrete-time cases. Zbl 1374.94906\nJiang, Yao-Lin\n2004\nGeneralized projective synchronization of a class of fractional-order chaotic systems via a scalar transmitted signal. Zbl 1220.34060\nPeng, Guojun; Jiang, Yaolin\n2008\nWaveform relaxation methods for fractional differential equations with the Caputo derivatives. Zbl 1259.65113\nJiang, Yao-Lin; Ding, Xiao-Li\n2013\nModel reduction of bilinear systems based on Laguerre series expansion. Zbl 1273.93034\nWang, Xiaolong; Jiang, Yaolin\n2012\nAnalysis of two parareal algorithms for time-periodic problems. Zbl 1283.65077\nGander, Martin J.; Jiang, Yao-Lin; Song, Bo; Zhang, Hui\n2013\nSynchronization of fractional-order complex dynamical networks via periodically intermittent pinning control. 
Zbl 1375.93060\nLi, Hong-Li; Hu, Cheng; Jiang, Haijun; Teng, Zhidong; Jiang, Yao-Lin\n2017\nA note on the spectra and pseudospectra of waveform relaxation operators for linear differential-algebraic equations. Zbl 0980.65081\nJiang, Yao-Lin; Wing, Omar\n2000\nWaveform relaxation for reaction-diffusion equations. Zbl 1230.65111\nLiu, Jun; Jiang, Yao-Lin\n2011\nImproving convergence performance of relaxation-based transient analysis by matrix splitting in circuit simulation. Zbl 0999.94574\nJiang, Yao-Lin; Chen, Richard M. M.; Wing, Omar\n2001\nBifurcation of a delayed gause predator-prey model with Michaelis-Menten type harvesting. Zbl 1394.92109\nLiu, Wei; Jiang, Yaolin\n2018\nGlobal stability of an SI epidemic model with feedback controls in a patchy environment. Zbl 1426.92077\nLi, Hong-Li; Zhang, Long; Teng, Zhidong; Jiang, Yao-Lin; Muhammadhaji, Ahmadjan\n2018\nA note on convergence conditions of waveform relaxation algorithms for nonlinear differential-algebraic equations. Zbl 0984.65079\nJiang, Yao-Lin; Wing, Omar\n2001\nAnti-synchronization and intermittent anti-synchronization of two identical hyperchaotic Chua systems via impulsive control. Zbl 1345.34095\nLi, Hong-Li; Jiang, Yao-Lin; Wang, Zuo-Lei\n2015\nTrigonometric Hermite wavelet approximation for the integral equations of second kind with weakly singular kernel. Zbl 1350.65144\nGao, Jing; Jiang, Yao-Lin\n2008\nMonotone waveform relaxation for systems of nonlinear differential-algebraic equations. Zbl 1049.65077\nJiang, Yao-Lin; Wing, Omar\n2000\nComputing periodic solutions of linear differential-algebraic equations by waveform relaxation. Zbl 1115.37068\nJiang, Yao-Lin; Chen, Richard M. M.\n2005\nSimplest equation method for some time-fractional partial differential equations with conformable derivative. Zbl 1415.35275\nChen, Cheng; Jiang, Yao-Lin\n2018\nAnalysis of a new parareal algorithm based on waveform relaxation method for time-periodic problems. 
Zbl 1308.65117\nSong, Bo; Jiang, Yao-Lin\n2014\nA parareal waveform relaxation algorithm for semi-linear parabolic partial differential equations. Zbl 1248.65090\nLiu, Jun; Jiang, Yao-Lin\n2012\n$$H_2$$ optimal reduced models of general MIMO LTI systems via the cross Gramian on the Stiefel manifold. Zbl 1364.93114\nJiang, Yaolin; Xu, Kangli\n2017\nWaveform relaxation method for fractional differential-algebraic equations. Zbl 1305.26017\nDing, Xiao-Li; Jiang, Yao-Lin\n2014\nA superlinear convergence estimate for the parareal Schwarz waveform relaxation algorithm. Zbl 1414.65018\nGander, Martin J.; Jiang, Yao-Lin; Song, Bo\n2019\nOn HSS and AHSS iteration methods for nonsymmetric positive definite Toeplitz systems. Zbl 1191.65027\nChen, Fang; Jiang, Yao-Lin\n2010\nExotic localized vector waves in a two-component nonlinear wave system. Zbl 1440.37068\nXu, Ling; Wang, Deng-Shan; Wen, Xiao-Yong; Jiang, Yao-Lin\n2020\nNumerical analysis and computation of a type of IMEX method for the time-dependent natural convection problem. Zbl 1336.65155\nYang, Yun-Bo; Jiang, Yao-Lin\n2016\nConvergence analysis of waveform relaxation for nonlinear differential-algebraic equations of index one. Zbl 1001.94058\nJiang, Yao-Lin; Chen, Richard M. M.; Wing, Omar\n2000\nSemilinear fractional differential equations based on a new integral operator approach. Zbl 1263.35215\nDing, Xiao-Li; Jiang, Yao-Lin\n2012\nNonnegative solutions of fractional functional differential equations. Zbl 1247.34007\nJiang, Yao-Lin; Ding, Xiao-Li\n2012\nA parareal algorithm based on waveform relaxation. Zbl 1256.65071\nLiu, Jun; Jiang, Yao-Lin\n2012\nModel order reduction of MIMO bilinear systems by multi-order Arnoldi method. Zbl 1344.93025\nXiao, Zhi-Hua; Jiang, Yao-Lin\n2016\nModel order reduction methods for coupled systems in the time domain using Laguerre polynomials. Zbl 1232.65119\nWang, Xiao-Long; Jiang, Yao-Lin\n2011\nDynamic analysis of a fractional-order single-species model with diffusion. 
Zbl 1416.92138\nLi, Hong-Li; Zhang, Long; Hu, Cheng; Jiang, Yao-Lin; Teng, Zhidong\n2017\nTime domain model order reduction using general orthogonal polynomials for K-power bilinear systems. Zbl 1338.93095\nQi, Zhen-Zhong; Jiang, Yao-Lin; Xiao, Zhi-Hua\n2016\nA new parareal waveform relaxation algorithm for time-periodic problems. Zbl 1308.65131\nSong, Bo; Jiang, Yao-Lin\n2015\nOn the determinant evaluation of quasi penta-diagonal matrices and quasi penta-diagonal Toeplitz matrices. Zbl 1286.65057\nJiang, Yao-Lin; Jia, Ji-Teng\n2013\nPractical computation of normal forms of the Bogdanov-Takens bifurcation. Zbl 1286.34059\nPeng, Guojun; Jiang, Yaolin\n2011\nTwo-sided projection methods for model reduction of MIMO bilinear systems. Zbl 1305.93046\nWang, Xiao-Long; Jiang, Yao-Lin\n2013\nAn effective fracture analysis method based on the virtual crack closure-integral technique implemented in CS-FEM. Zbl 1459.74168\nZeng, W.; Liu, G. R.; Jiang, C.; Dong, X. W.; Chen, H. D.; Bao, Y.; Jiang, Y.\n2016\nModel reduction of discrete-time bilinear systems by a Laguerre expansion technique. Zbl 1465.93022\nWang, Xiaolong; Jiang, Yaolin\n2016\nA further necessary and sufficient condition for strong convergence of nonlinear contraction semigroups and of iterative methods for accretive operators in Banach spaces. Zbl 0820.47074\nXu, Zong-Ben; Jiang, Yao-Lin; Roach, G. F.\n1995\nPeriodic waveform relaxation solutions of nonlinear dynamic equations. Zbl 1088.34520\nJiang, Yao-Lin\n2003\nModel-order reduction of coupled DAE systems via $$\\varepsilon$$ technique and Krylov subspace method. Zbl 1255.93030\nJiang, Yao-Lin; Chen, Chun-Yue; Chen, Hai-Bao\n2012\nWaveform relaxation methods for fractional functional differential equations. Zbl 1312.34011\nDing, Xiao-Li; Jiang, Yao-Lin\n2013\nOn the uniqueness and perturbation to the best rank-one approximation of a tensor. Zbl 1317.65111\nJiang, Yao-Lin; Kong, Xu\n2015\nWindowing waveform relaxation of initial value problems. 
Zbl 1113.65081\nJiang, Yaolin\n2006\nArnoldi-based model order reduction for linear systems with inhomogeneous initial conditions. Zbl 1380.93071\nSong, Qiu-Yan; Jiang, Yao-Lin; Xiao, Zhi-Hua\n2017\n$$H_{2}$$ optimal model order reduction by two-sided technique on Grassmann manifold via the cross-Gramian of bilinear systems. Zbl 1359.93086\nXu, Kang-Li; Jiang, Yao-Lin; Yang, Zhi-Xia\n2017\nLaguerre functions approximation for model reduction of second order time-delay systems. Zbl 1347.93072\nWang, Xiaolong; Jiang, Yaolin; Kong, Xu\n2016\nOn model reduction of $$K$$-power bilinear systems. Zbl 1290.93031\nWang, Xiao-Long; Jiang, Yao-Lin\n2014\nLie group analysis method for two classes of fractional partial differential equations. Zbl 1440.35340\nChen, Cheng; Jiang, Yao-Lin\n2015\nSchwarz waveform relaxation methods for parabolic equations in space-frequency domain. Zbl 1142.65411\nJiang, Yao-Lin; Zhang, Hui\n2008\nSymbolic algorithm for solving cyclic penta-diagonal linear systems. Zbl 1269.65027\nJia, Ji-Teng; Jiang, Yao-Lin\n2013\nMulti-order Arnoldi-based model order reduction of second-order time-delay systems. Zbl 1347.93073\nXiao, Zhi-Hua; Jiang, Yao-Lin\n2016\nComputation of universal unfolding of the double-zero bifurcation in $$Z_2$$-symmetric systems by a homological method. Zbl 1282.34047\nPeng, Guojun; Jiang, Yao Lin\n2013\nRiemannian modified Polak-Ribière-Polyak conjugate gradient order reduced model by tensor techniques. Zbl 1441.93045\nJiang, Yao-Lin; Xu, Kang-Li\n2020\nFinite-time balanced truncation for linear systems via shifted Legendre polynomials. Zbl 1425.93067\nXiao, Zhi-Hua; Jiang, Yao-Lin; Qi, Zhen-Zhong\n2019\nWaveform relaxation methods of nonlinear integral-differential-algebraic equations. Zbl 1072.65166\nJiang, Yaolin\n2005\nRunge-Kutta methods of dynamic iteration for index-2 differential-algebraic equations. Zbl 1080.65070\nSun, Wei; Jiang, Yao-Lin\n2005\nMathematical modelling on $$RLCG$$ transmission lines. 
Zbl 1121.93032\nJiang, Y.-L.\n2005\nDimension reduction for second-order systems by general orthogonal polynomials. Zbl 1298.93100\nXiao, Zhi-Hua; Jiang, Yao-Lin\n2014\nA general method for solving singular perturbed impulsive differential equations with two-point boundary conditions. Zbl 1090.65094\nWang, Xiao-Yun; Jiang, Yao-Lin\n2005\nModel order reduction based on general orthogonal polynomials in the time domain for coupled systems. Zbl 1291.93062\nQi, Zhen-Zhong; Jiang, Yao-Lin; Xiao, Zhi-Hua\n2014\nA trust-region method for $$H_2$$ model reduction of bilinear systems on the Stiefel manifold. Zbl 1409.93017\nYang, Ping; Jiang, Yao-Lin; Xu, Kang-Li\n2019\nA parareal approach of semi-linear parabolic equations based on general waveform relaxation. Zbl 1431.65135\nLi, Jun; Jiang, Yao-lin; Miao, Zhen\n2019\n$$H_2$$ optimal model order reduction of the discrete system on the product manifold. Zbl 1470.93034\nJiang, Yao-Lin; Wang, Wei-Gang\n2019\nAnalysis of two decoupled time-stepping finite-element methods for incompressible fluids with microstructure. Zbl 1387.65108\nYang, Yun-Bo; Jiang, Yao-Lin\n2018\nSemi-discrete Galerkin finite element method for the diffusive Peterlin viscoelastic model. Zbl 1391.76337\nJiang, Yao-Lin; Yang, Yun-Bo\n2018\nAn explicitly uncoupled VMS stabilization finite element method for the time-dependent Darcy-Brinkman equations in double-diffusive convection. Zbl 1402.65139\nYang, Yun-Bo; Jiang, Yao-Lin\n2018\nAn $$\\varepsilon$$-embedding model-order reduction approach for differential-algebraic equation systems. Zbl 1251.93042\nChen, Chun-Yue; Jiang, Yao-Lin; Chen, Hai-Bao\n2012\nExistence of periodic solutions in predator-prey with Watt-type functional response and impulsive effects. Zbl 1203.34073\nLin, Xiaolin; Jiang, Yaolin; Wang, Xiaoqin\n2010\nStructure-preserving model order reduction by general orthogonal polynomials for integral-differential systems. 
https://afontaine.dev/blog/advent-of-code-days-4-5-6
# Advent of Code: Days 4, 5, 6

Andrew Fontaine <[email protected]>

I’ve been busy, so this one is going to be a bit longer but will also contain 3 whole solutions!

## Day four

Diving deeper and deeper, we come across a giant squid! To distract it, we decide to play bingo.

An important rule in this game of bingo is that diagonals don’t count, and don’t need to be considered. I am given a set of calls and boards, and need to figure out which board will win first.

Step one is parsing the input. The first line contains the calls for the game, and the rest of the file consists of separate boards. As I want to be able to keep track of whether or not a number has been called, I make a type to hold that information. Then, I split up the file into the different boards:

``````
exception Empty

type i = Marked of int | Unmarked of int

let add_to_board line = function
  | [] -> [ [ line ] ]
  | last :: rest -> (line :: last) :: rest

let parse_line line =
  String.split_on_char ' ' line
  |> List.filter (fun s -> s <> "")
  |> List.map (fun s -> Unmarked (int_of_string s))

let input =
  (* read_lines is a small helper that returns the input file's lines *)
  match read_lines "input.txt" with
  | [] -> raise Empty
  | calls :: rest ->
      let boards =
        List.fold_left
          (fun b l ->
            match l with
            | "" | "\n" -> [] :: b
            | line -> add_to_board (parse_line line) b)
          [ [] ] rest
        |> List.filter (fun x -> x <> [])
      in
      let numbers = List.map int_of_string (String.split_on_char ',' calls) in
      (numbers, boards)
``````

My type `i` indicates whether or not a number has been marked. Because OCaml requires exhaustive pattern matching, I have to make sure my list of lines from the file is not empty. Then, I pull the first line out with pattern matching (noted as `calls` here), and fold over the rest of the list to construct the boards. There are a few empty boards left over that need to be filtered out. Finally, I split the calls on commas, and parse those strings into proper numbers.

Part one just requires I play the game on all the boards.
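As a quick standalone sanity check (the type and helper are repeated so the snippet runs on its own, and the row is a made-up example), `parse_line` should drop the padding spaces and wrap every number as `Unmarked`:

```ocaml
type i = Marked of int | Unmarked of int

let parse_line line =
  String.split_on_char ' ' line
  |> List.filter (fun s -> s <> "")
  |> List.map (fun s -> Unmarked (int_of_string s))

let () =
  (* single-digit numbers are padded with an extra space in the puzzle input *)
  match parse_line " 8  2 23  4 24" with
  | [ Unmarked 8; Unmarked 2; Unmarked 23; Unmarked 4; Unmarked 24 ] ->
      print_endline "parse_line ok"
  | _ -> failwith "unexpected parse"
```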
There are some small helper functions I need first:

``````
let check_wins board =
  let row_win =
    List.exists
      (fun row ->
        List.for_all
          (fun m -> match m with Marked _ -> true | Unmarked _ -> false)
          row)
      board
  in
  let column_win =
    List.exists
      (fun i ->
        List.for_all
          (fun row ->
            match List.nth row (i - 1) with
            | Marked _ -> true
            | Unmarked _ -> false)
          board)
      (List.init (List.length (List.nth board 0)) (fun x -> x + 1))
  in
  row_win || column_win

let mark i board =
  List.map
    (fun row ->
      List.map
        (fun x -> match x with Unmarked y when y = i -> Marked y | y -> y)
        row)
    board
``````

`check_wins` checks to see if any row or column has been completely marked, and `mark` updates the board to set a number to `Marked` if it is found.

All that’s left is to play and compute the score:

``````
let compute_score board call =
  let sum =
    List.fold_left
      (fun sum row ->
        List.fold_left
          (fun s x -> match x with Marked _ -> s | Unmarked y -> s + y)
          sum row)
      0 board
  in
  call * sum

let rec play boards = function
  | [] -> -100
  | call :: rest -> (
      let marks = mark call in
      let b = List.map marks boards in
      match List.find_opt check_wins b with
      | None -> play b rest
      | Some board -> compute_score board call)

let p1 =
  let calls, boards = input in
  play boards calls |> string_of_int
``````

`play` recursively iterates over the called numbers, marking boards until a winner is found. It returns a score of `-100` if I run out of numbers, to make it obvious (while still an integer) that something went wrong. While not the best error handling method, it works for such a small script. For more complicated problems later on, I plan on looking to Vladimir Keleshev’s Composable Error Handling in OCaml for guidance.

Once a winner is found, the score is computed. ⭐ one done!

### Part two

Part two is a small extension to part one, which suggests that I should pick the board that will win last to ensure the squid wins and won’t crush us for losing.
As such, it only requires a small extension.

``````
let rec play boards = function
  | [] -> -100
  | call :: rest -> (
      let marks = mark call in
      let new_boards = List.map marks boards in
      match new_boards with
      | [] -> -100
      | [ b ] -> if check_wins b then compute_score b call else play [ b ] rest
      | boards ->
          let losers = List.filter (fun b -> not (check_wins b)) boards in
          play losers rest)

let p2 =
  let calls, boards = input in
  play boards calls |> string_of_int
``````

Instead of stopping as soon as a winner is found, I continue until only one board is left to win.

That’s day four ✔️

## Day 5

Every year involves at least one problem about interesting lines. Every one of them is a pain to solve.

This year, the submarine is avoiding large, opaque clouds spewing out of hydrothermal vents on the ocean floor. Fortunately for us, the vents exist in extremely straight lines.

I first attempted to do this the mathematically clever way, using the determinant of the two lines and solving for their intersection point, if any, but that turned into a total bust that I could not puzzle out, and so I shall speak no more of it.

Instead, it was simply easier to walk the line segments and remember all the points I’ve visited.

OCaml handles generic data structures such as maps and sets via functors, which are functions that take and return modules instead of normal values. To make a set module, the functor `Set.Make` is used. `Set.Make` takes a module that specifies a type, `t`, and a function to compare values, `compare`. This is a module type named `OrderedType`, where `t` is the type to order, and `compare` lets us order values of that type.
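To make the functor machinery concrete before the solution, here is a tiny standalone example: the standard library’s `Int` module already provides a `t` and a `compare`, so it can be fed straight to `Set.Make`:

```ocaml
module IntSet = Set.Make (Int)

let () =
  (* duplicates collapse on insertion, and ordering comes from Int.compare *)
  let s = IntSet.of_list [ 3; 1; 2; 3 ] in
  assert (IntSet.cardinal s = 3);
  assert (IntSet.mem 2 s);
  assert (not (IntSet.mem 7 s))
```

The anonymous struct for points does exactly the same thing, just with a hand-written `compare`.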
I set up a set to track the points I’ve visited like so:

``````
type point = Point of int * int

module PointSet = Set.Make (struct
  type t = point

  let compare (Point (x1, y1)) (Point (x2, y2)) =
    if x1 < x2 then -1
    else if x1 = x2 then if y1 < y2 then -1 else if y1 = y2 then 0 else 1
    else 1
end)
``````

`compare` returns `-1` if the first point has a smaller `x` value, `1` if a larger `x` value, and defers to the `y` value if they are equal. If both are equal, `0` is returned. This is how the set knows if it already has a given point.

I also need a line type to keep track of all my points:

``type line = Line of point * point``

As the first problem only requires I deal with horizontal and vertical lines, those should be quick to set up:

``````
let is_horizontal (Line (Point (_, y1), Point (_, y2))) = y1 = y2

let is_vertical (Line (Point (x1, _), Point (x2, _))) = x1 = x2

let is_p1 line = is_vertical line || is_horizontal line
``````

All that’s left is to walk the lines:

``````
let walk_line acc (Line (Point (x1, y1), Point (x2, y2))) =
  let points =
    if x1 = x2 then
      List.init (abs (y2 - y1) + 1) (fun y -> Point (x1, min y1 y2 + y))
    else
      List.init (abs (x2 - x1) + 1) (fun x -> Point (min x1 x2 + x, y1))
  in
  List.fold_left
    (fun (s1, s2) point ->
      if PointSet.mem point s1 then (s1, PointSet.add point s2)
      else (PointSet.add point s1, s2))
    acc points

let walk_lines lines =
  List.fold_left walk_line (PointSet.empty, PointSet.empty) lines

let p1 =
  let _, points = input |> List.filter is_p1 |> walk_lines in
  points |> PointSet.elements |> List.length |> string_of_int
``````

`walk_line` makes a list covering every point contained in the line, and then adds each one to the visited set. If a point is already in the visited set, it is instead added to the final set, as it matches our criteria.

`walk_lines` iterates over all the lines until the final set of points is found.

⭐ one done!

### Part two

Part two, again, is only a small extension of part one.
It demands we include the diagonal lines to check as well. Simple enough to add to `walk_line`:

``````let walk_line acc (Line (Point (x1, y1), Point (x2, y2))) =
  let points =
    if x1 = x2 then
      List.init (abs (y2 - y1) + 1) (fun y -> Point (x1, min y1 y2 + y))
    else if y1 = y2 then
      List.init (abs (x2 - x1) + 1) (fun x -> Point (min x1 x2 + x, y1))
    else
      List.init
        (abs (x2 - x1) + 1)
        (fun z ->
          let x = if x1 < x2 then x1 + z else x1 - z in
          let y = if y1 < y2 then y1 + z else y1 - z in
          Point (x, y))
  in
  List.fold_left
    (fun (s1, s2) point ->
      if PointSet.mem point s1 then (s1, PointSet.add point s2)
      else (PointSet.add point s1, s2))
    acc points``````

The `else` expression now knows how to follow along a diagonal! Then to just run `walk_lines` on the whole dataset and count up the points:

``````let p2 =
  let _, points = input |> walk_lines in
  points |> PointSet.elements |> List.length |> string_of_int``````

That’s day five ✔️

## Day six

Day six requires we do some biological work by counting fish.

Part one is simple enough, so let’s write some code for it:

``````let rec grow_fish old_fish new_fish = function
  | [] -> List.rev_append old_fish new_fish
  | h :: rest ->
      if h = 0 then grow_fish (6 :: old_fish) (8 :: new_fish) rest
      else grow_fish ((h - 1) :: old_fish) new_fish rest

let p1_sol input count =
  List.init count (fun x -> x + 1)
  |> List.fold_left (fun fish _ -> grow_fish [] [] fish) input

let p1 =
  let sol = p1_sol input 80 |> List.length in
  Int.to_string sol``````

For 80 iterations, I go through the list of fish, decrementing the days until they spawn. Once a fish hits 0, I bump it back up to 6 and add a new fish with a countdown starting at 8. All that’s left is to count up the fish!

⭐ one done!

### Part two

I was hopeful I could just run my code for 256 days, but soon realized why it was a “part two”: bottlenecks and stack overflows.

The list was growing so much that my original solution to part one (not posted) would blow up with a stack overflow.
The one posted above is tail-call optimized, as I thought stack depth would be my only issue. It turns out the list was also growing so much that iterating through it for every day took ages.

Back to the drawing board.

I had noticed a lot of the fish ended up on the same cycle. There were several fish that each had 0 to 8 days left on their spawn timers, which got me thinking… if I could group the fish, I wouldn’t have to add to the list and could instead just increment a count!

I initially started off trying to make a map to handle this, but as I don’t quite understand how OCaml’s maps work, it was easier for me to use a list of tuples instead. I also needed to be able to update both the key and the value, so a tuple felt better.

``````let update x up list = up (List.find_opt (fun (y, _) -> x = y) list)

let dedup list =
  List.fold_left
    (fun deduped (x, z) ->
      update x
        (function
          | None -> (x, z) :: deduped
          | Some (_, y) ->
              (x, y + z) :: List.filter (fun (a, _) -> x <> a) deduped)
        deduped)
    [] list

let grow_fish_2 fishes _ =
  List.fold_left
    (fun fish (x, y) ->
      match x with 0 -> (6, y) :: (8, y) :: fish | z -> (z - 1, y) :: fish)
    [] fishes
  |> dedup

let p2_sol (input : int list) count =
  let fishes =
    List.fold_left
      (fun fishes x ->
        update x
          (function
            | None -> (x, 1) :: fishes
            | Some (_, y) ->
                (x, y + 1) :: List.filter (fun (y, _) -> x <> y) fishes)
          fishes)
      [] input
  in
  List.init count (fun x -> x + 1)
  |> List.fold_left grow_fish_2 fishes
  |> List.fold_left (fun sum (_, x) -> sum + x) 0``````

`update` replicates a map’s `update` function, where you provide a function that takes an option type. The option either contains the value the key points to, or nothing, and you must handle both cases. `dedup` takes all the buckets and merges the matching ones back together.

`grow_fish_2`, then, goes over the list of fish buckets, decrementing the number of days until they spawn.
If a bucket is at 0, it gets bumped back up to 6, and a new bucket of fish is added. This new bucket starts at 8, their spawn countdown, with the same number of fish as was in the first bucket. Once that is complete, I `dedup` the buckets, as it was easier to `dedup` the list once the fish spawning was done.

I repeat this for the 256 days to get… `1.73e12` fish!

That’s a lot of fish!

I’m trying to stay on top of these posts this year, but the weekends are hectic, as usual. I hope to not have to cram 3 days of updates into one day again, though!

#### Want to discuss this post?

Reach out via email to ~afontaine/[email protected], and be sure to follow the mailing list etiquette." ]
https://www.javaexercise.com/python/find-element-in-python-list
[ "# Python Program to Find an Element in Python List

To find an element in a list, Python provides several built-in ways, such as the `in` operator, the count() function, and the index() function. Apart from that, we will use custom code as well to find an element. We will learn to use all of these in our examples to find an element in the list. So, let's get started.

## Find an element in Python List

This is an iterative approach where we use a loop to traverse each element of the list and then check via an if statement. If the element is matched, we display a found message; otherwise, we display not found. See the below Python code.

``````# Python program to find an element in a list

# Take a list
list1 = [45, 11, 15, 9, 56, 17]
print(list1)

val = 15
flag = 0
for i in list1:
    if i == val:
        flag = 1

if flag:
    print("Element exists")
else:
    print("Element does not exist")``````

Output:

Element exists

## Find an element in List by using the count() method in Python

We can use the count() method to check whether the element is present in the list or not. The count() method returns the frequency of an element in the list. So, if the count is greater than 0, then the element is present in the list. See the Python code.

``````# Python program to find an element in a list

# Take a list
list1 = [45, 11, 15, 9, 56, 17]
print(list1)

val = 15 # Element to find

# Count method returns frequency of a number
result = list1.count(val)

if result:
    print("Element exists")
else:
    print("Element does not exist")``````

Output:

Element exists

## Find an element in List by using an in operator in Python

We can use the `in` operator to check whether the specified element is present in a list. It is a membership operator that works for all iterables like list, set, tuple, etc.
See the Python code.

``````# Python program to find an element in a list

# Take a list
list1 = [45, 11, 15, 9, 56, 17]
print(list1)

val = 15

# Python in operator
if val in list1:
    print("Element exists")
else:
    print("Element does not exist")``````

Output:

Element exists

## Find an element in List by using index() Method in Python

The Python index() method is used to find the index of an element in an iterable (list, tuple, etc.). We can use it to find an element in the list. This method raises a ValueError if the element is not found, so you should use it within a try-except statement. See the Python code.

``````# Python program to find an element in a list

# Take a list
list1 = [45, 11, 15, 9, 56, 17]
print(list1)

val = 15

try:
    result = list1.index(val)
except ValueError:
    print("Element does not exist")
else:
    print("Element exists")``````

Output:

Element exists

If you don't use a try-except statement with the index() method, then it raises an error:

result = list1.index(111)

ValueError: 111 is not in list" ]
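As a follow-up to the index() example above, here is a small reusable wrapper (my own addition, not from the article) that folds the try-except pattern into one function returning -1 for a missing element:

```python
# Wrap list.index() so a missing element yields -1 instead of raising.
def find(lst, value):
    try:
        return lst.index(value)
    except ValueError:
        return -1

list1 = [45, 11, 15, 9, 56, 17]
print(find(list1, 15))   # 2, because 15 sits at index 2
print(find(list1, 111))  # -1, because 111 is not in the list
```

This mirrors the behavior of str.find(), which also returns -1 when the substring is absent.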
https://richelbilderbeek.nl/CppRescale.htm
[ "# (C++) Rescale

Rescale is a math code snippet to rescale a double, a 1D std::vector or a 2D std::vector from a certain range to a new range.

## Rescale on a double

```
#include <cassert>

//From http://www.richelbilderbeek.nl/CppRescale.htm
const double Rescale(
  const double value,
  const double oldMin,
  const double oldMax,
  const double newMin,
  const double newMax)
{
  assert(value >= oldMin);
  assert(value <= oldMax);
  const double oldDistance = oldMax - oldMin;
  //At which relative distance is value on oldMin to oldMax ?
  const double distance = (value - oldMin) / oldDistance;
  assert(distance >= 0.0);
  assert(distance <= 1.0);
  const double newDistance = newMax - newMin;
  const double newValue = newMin + (distance * newDistance);
  assert(newValue >= newMin);
  assert(newValue <= newMax);
  return newValue;
}
```

## Rescale on a 1D std::vector

```
#include <algorithm>
#include <cassert>
#include <vector>

//From http://www.richelbilderbeek.nl/CppRescale.htm
const std::vector<double> Rescale(
  std::vector<double> v,
  const double newMin,
  const double newMax)
{
  const double oldMin = *std::min_element(v.begin(),v.end());
  const double oldMax = *std::max_element(v.begin(),v.end());
  typedef std::vector<double>::iterator Iter;
  Iter i = v.begin();
  const Iter j = v.end();
  for ( ; i!=j; ++i)
  {
    *i = Rescale(*i,oldMin,oldMax,newMin,newMax);
  }
  return v;
}
```

## Rescale on a 2D std::vector

```
#include <algorithm>
#include <cassert>
#include <vector>

//From http://www.richelbilderbeek.nl/CppRescale.htm
//MinElement and MaxElement are helpers that find the extremes of a 2D std::vector
const std::vector<std::vector<double> > Rescale(
  std::vector<std::vector<double> > v,
  const double newMin,
  const double newMax)
{
  const double oldMin = MinElement(v);
  const double oldMax = MaxElement(v);
  typedef std::vector<std::vector<double> >::iterator RowIter;
  RowIter y = v.begin();
  const RowIter maxy = v.end();
  for ( ; y!=maxy; ++y)
  {
    typedef std::vector<double>::iterator ColIter;
    ColIter x = y->begin();
    const ColIter maxx = y->end();
    for ( ; x!=maxx; ++x)
    {
      *x = Rescale(*x,oldMin,oldMax,newMin,newMax);
    }
  }
  return v;
}
```
" ]
https://aleph0.blog/2022/10/29/tilings-and-integer-programming/
[ "# Tilings and Integer Programming

This is the story behind sequence A355477 from the Online Encyclopedia of Integer Sequences.

Let’s say you have an unlimited supply of “zig-zag” Tetris pieces:

How many can you pack into an n x n square?

For example, if you’re trying to fill a 4×4 square, then you can fit three such pieces:

but, try as you might, you’ll never get a fourth piece in there (without any of the pieces overlapping or poking out the sides of the square). What about a 5×5 square, a 6×6 square, etc.?

Our goal is to answer this question (reasonably) efficiently to find, for example, that this packing of a 16×16 square with 60 tiles is optimal:

It turns out that we know such optimal packings for squares up to 21 x 21, as well as a handful of larger values, but the problem is not completely solved yet. The rest of this post explains what’s known and how we know it.

#### Set Packing

This tiling problem is a nice example of the “Maximum Set Packing” problem, which is a classic optimization problem that has many applications. Specifically, if we label each square with a letter:

then any placement of a tile corresponds to a 4-element set of squares. For example, the three tiles in Figure 1 correspond to the sets:

$$\begin{array}{rcl} S_1 &=& \{B, C, E, F\} \\ S_2 &=& \{D, G, H, K\} \\ S_3 &=& \{I, J, N, O\} \end{array}.$$

And, in addition to these three sets, there are 21 other sets corresponding to other placements of zig-zag tiles (e.g., $\{J, K, O, P\}$ for a tile in the bottom-right corner, etc.)
that we didn’t end up using in Figure 1, for a total of 24 possible placements of tiles within a 4×4 square.

The goal of the set packing problem is to find a collection of non-overlapping sets (from this family of 24 sets) that is as large as possible, which is equivalent to trying to squeeze as many non-overlapping tiles as possible into the square.

Unfortunately, the maximum set packing problem is NP-complete, so we’re not going to find an efficient algorithm that works in all cases, but there is still hope: Integer Programming.

#### Integer Programming

Integer programming is also a very hard computational problem, but there are excellent “mixed integer programming solvers” that work very well in practice on set-packing/tiling problems like these, so we’re going to take advantage of that as we try to understand this tiling question.

Let’s return to the 4×4 tiling example we looked at above and see how it can be transformed into an “integer programming” problem. First, it will be convenient to express this problem as a matrix with 16 columns (each corresponding to one of the cells in the 4×4 square) and with rows corresponding to each possible placement of a tile.
For example, these four placements of tiles:

correspond to the first four rows of the matrix $X$:

$$X = \begin{array}{cccccccccccccccc} A & B & C & D & E & F & G & H & I & J & K & L & M & N & O & P \\ \hline 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{array}$$

The complete matrix has $24$ rows corresponding to all the different possible placements/orientations of the zig-zag tile.

Now, the tiling problem is equivalent to finding as many rows of $X$ as possible that have at most one $1$ in each column. This is easily expressed as an integer programming problem as follows:

Define binary variables

$$s_1, \ldots, s_n \in \{0, 1\}$$

where $n$ is the number of rows in the matrix $X$, and then

$$\textrm{maximize } s_1 + \cdots + s_n$$

$$\textrm{subject to}$$

$$\vec{s} \cdot X \leq 1$$

In code (using the ‘mip’ package for Python, with X as a numpy array) this would look something like:

    import mip
    import numpy as np

    def maxSetPacking(X):
        # Find the largest subset of disjoint rows of X
        m = mip.Model(sense=mip.MAXIMIZE)
        # One binary variable per row, i.e. per possible tile placement
        S = np.array([m.add_var(var_type=mip.BINARY) for _ in range(X.shape[0])])
        m.objective = mip.xsum(S)
        # Each cell (column of X) may be covered by at most one chosen row
        for j in range(X.shape[1]):
            m += np.dot(S, X[:, j]) <= 1
        m.optimize()
        return np.array([s.x for s in S]).round().astype(bool)

The result of this optimization is a boolean vector $\vec{s}$ indicating which of the rows of $X$ form an optimal set packing.

Unleashing our Integer Programming solver on our tiling problem, we can find optimal packings of an $n \times n$ square for various values of $n$.
Here are examples of optimal packings for $n \leq 18$:

#### Hmmmm… that’s odd

Here’s one interesting pattern that emerges from the optimal packings shown above: whenever $n$ is odd, then the optimal packing contains exactly $$\left( \frac{n-1}{2}\right)^2$$ tiles. And, in fact, this is true for all odd $n$. Here’s a simple proof, using the $7 \times 7$ case as an example:

First off, we can simply arrange the tiles in an interlocking $3 \times 3$ array to fit $9$ tiles into the $7 \times 7$ grid:

And, in general, the same approach will allow $$\left( \frac{n-1}{2}\right)^2$$ tiles to fit within an $n \times n$ square whenever $n$ is odd, so we have:

$$a(n) \geq \left( \frac{n-1}{2}\right)^2$$

Now we have to show that no more than $9$ tiles can fit into a $7 \times 7$ grid. To prove this, take the $7 \times 7$ grid and place an “X” in the cells that are in both an even-numbered row and an even-numbered column:

Now observe that no matter where a tile is placed it will cover one of the “X”s, for example:

Since there are only $9$ “X”s, no more than $9$ tiles can be placed in the grid. (Otherwise, at least two of them would have to share an “X”, which would mean they overlap.)

Together with the previous bound, this shows that $$a(n) = \left( \frac{n-1}{2}\right)^2$$ whenever $n$ is odd, so we completely understand the odd-numbered terms of this sequence.

#### Getting even…

The even-numbered terms, however, seem more subtle. The integer programming approach handily deals with all cases up to $n=20$, but gets a bit stuck on $n = 22$. It finds the following packing with $116$ tiles:

but (after running for days) it can’t quite rule out the possibility that there’s an even better solution with $117$ tiles.
That being said, we do know that there’s no solution with more than $117$ tiles. This is because the integer programming solver uses a “primal-dual” algorithm which, roughly, means that as it’s performing the search it’s keeping track not only of the best solution it’s found so far (which is a lower-bound for the value of $a(n)$) but also the best solution to the “dual” version of the problem (which, it turns out, is an upper-bound for the value of $a(n)$). And the search did find a “dual” solution that was strictly smaller than $118$, so we know that $a(22) \leq 117$.

So, even in cases when the algorithm doesn’t find the optimal solution, it can give us useful bounds on the optimal solution. This is another reason why the integer programming approach is particularly appealing for this kind of tiling problem.

After $n = 22$, the next “hard” case is $n = 26$, where we know that $$163 \leq a(26) \leq 165,$$ but we don’t know the exact value. Similarly, for other even $n \geq 28$ we have upper and lower bounds, but don’t (yet) know the exact values.

#### Going Further

Undoubtedly, as computers get faster and as even better integer programming solvers are developed, we’ll come to learn the true values of $a(22), a(26)$ and other terms of A355477. (Some readers may even have access to “industrial grade” (mixed) integer programming solvers that can already solve the instances that stumped the open-source tools I used…) And perhaps there are some clever mathematical insights that will lead to a better understanding of the even terms of this series. Time will tell!" ]
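To sanity-check the small cases discussed above, here is a brute-force search (my own addition, not from the post; the coordinate offsets encoding the four zig-zag orientations are my own). It enumerates every placement of the piece in an n x n grid, reproducing the 24 placements of the 4×4 case, and exhaustively looks for the largest pairwise-disjoint collection:

```python
from itertools import combinations

def placements(n):
    """All 4-cell sets covered by a zig-zag tetromino inside an n x n grid."""
    shapes = [                              # (row, col) offsets for the
        [(0, 0), (0, 1), (1, 1), (1, 2)],   # four orientations of the piece
        [(0, 1), (0, 2), (1, 0), (1, 1)],
        [(0, 0), (1, 0), (1, 1), (2, 1)],
        [(0, 1), (1, 1), (1, 0), (2, 0)],
    ]
    sets = []
    for shape in shapes:
        height = 1 + max(r for r, _ in shape)
        width = 1 + max(c for _, c in shape)
        for r0 in range(n - height + 1):
            for c0 in range(n - width + 1):
                sets.append(frozenset((r0 + r) * n + (c0 + c) for r, c in shape))
    return sets

def max_packing(n):
    """Largest number of pairwise-disjoint placements, by exhaustive search."""
    cand = placements(n)
    k = 1
    # k disjoint placements cover exactly 4*k distinct cells.
    while any(len(frozenset().union(*combo)) == 4 * k
              for combo in combinations(cand, k)):
        k += 1
    return k - 1
```

For the 4×4 square this finds 24 candidate placements and a maximum packing of 3 tiles, matching Figure 1. It is only viable for tiny n, which is exactly why the integer programming formulation is needed for the larger boards.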
https://www.colorhexa.com/1a3936
[ "# #1a3936 Color Information

In an RGB color space, hex #1a3936 is composed of 10.2% red, 22.4% green and 21.2% blue. Whereas in a CMYK color space, it is composed of 54.4% cyan, 0% magenta, 5.3% yellow and 77.6% black. It has a hue angle of 174.2 degrees, a saturation of 37.3% and a lightness of 16.3%. #1a3936 color hex could be obtained by blending #34726c with #000000. Closest websafe color is: #333333.

• R 10
• G 22
• B 21
RGB color chart
• C 54
• M 0
• Y 5
• K 78
CMYK color chart

#1a3936 color description: Very dark desaturated cyan.

# #1a3936 Color Conversion

The hexadecimal color #1a3936 has RGB values of R:26, G:57, B:54 and CMYK values of C:0.54, M:0, Y:0.05, K:0.78. Its decimal value is 1718582.

• Hex triplet: 1a3936 `#1a3936`
• RGB: 26, 57, 54 `rgb(26,57,54)`
• RGB percent: 10.2, 22.4, 21.2 `rgb(10.2%,22.4%,21.2%)`
• CMYK: 54, 0, 5, 78
• HSL: 174.2°, 37.3, 16.3 `hsl(174.2,37.3%,16.3%)`
• HSV (or HSB): 174.2°, 54.4, 22.4
• Web safe: 333333 `#333333`
• CIE-LAB: 21.624, -12.396, -1.694
• XYZ: 2.555, 3.412, 4.014
• xyY: 0.256, 0.342, 3.412
• CIE-LCH: 21.624, 12.511, 187.78
• CIE-LUV: 21.624, -11.941, -0.416
• Hunter-Lab: 18.472, -7.637, 0.047
• Binary: 00011010, 00111001, 00110110

# Color Schemes with #1a3936

• #1a3936
``#1a3936` `rgb(26,57,54)``
• #391a1d
``#391a1d` `rgb(57,26,29)``
Complementary Color
• #1a3927
``#1a3927` `rgb(26,57,39)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #1a2d39
``#1a2d39` `rgb(26,45,57)``
Analogous Color
• #39271a
``#39271a` `rgb(57,39,26)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #391a2d
``#391a2d` `rgb(57,26,45)``
Split Complementary Color
• #39361a
``#39361a` `rgb(57,54,26)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #361a39
``#361a39` `rgb(54,26,57)``
• #1d391a
``#1d391a` `rgb(29,57,26)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #361a39
``#361a39` `rgb(54,26,57)``
• #391a1d
``#391a1d` `rgb(57,26,29)``
• #020404
``#020404` `rgb(2,4,4)``
• #0a1615
``#0a1615` `rgb(10,22,21)``
• #122725
``#122725` `rgb(18,39,37)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #224b47
``#224b47`
`rgb(34,75,71)``\n• #2a5c57\n``#2a5c57` `rgb(42,92,87)``\n• #326e68\n``#326e68` `rgb(50,110,104)``\nMonochromatic Color\n\n# Alternatives to #1a3936\n\nBelow, you can see some colors close to #1a3936. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #1a392e\n``#1a392e` `rgb(26,57,46)``\n• #1a3931\n``#1a3931` `rgb(26,57,49)``\n• #1a3933\n``#1a3933` `rgb(26,57,51)``\n• #1a3936\n``#1a3936` `rgb(26,57,54)``\n• #1a3939\n``#1a3939` `rgb(26,57,57)``\n• #1a3739\n``#1a3739` `rgb(26,55,57)``\n• #1a3439\n``#1a3439` `rgb(26,52,57)``\nSimilar Colors\n\n# #1a3936 Preview\n\nText with hexadecimal color #1a3936\n\nThis text has a font color of #1a3936.\n\n``<span style=\"color:#1a3936;\">Text here</span>``\n#1a3936 background color\n\nThis paragraph has a background color of #1a3936.\n\n``<p style=\"background-color:#1a3936;\">Content here</p>``\n#1a3936 border color\n\nThis element has a border color of #1a3936.\n\n``<div style=\"border:1px solid #1a3936;\">Content here</div>``\nCSS codes\n``.text {color:#1a3936;}``\n``.background {background-color:#1a3936;}``\n``.border {border:1px solid #1a3936;}``\n\n# Shades and Tints of #1a3936\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010303 is the darkest color, while #f5fafa is the lightest one.\n\n• #010303\n``#010303` `rgb(1,3,3)``\n• #081110\n``#081110` `rgb(8,17,16)``\n• #0e1e1c\n``#0e1e1c` `rgb(14,30,28)``\n• #142c29\n``#142c29` `rgb(20,44,41)``\n• #1a3936\n``#1a3936` `rgb(26,57,54)``\n• #204643\n``#204643` `rgb(32,70,67)``\n• #265450\n``#265450` `rgb(38,84,80)``\n• #2c615c\n``#2c615c` `rgb(44,97,92)``\n• #336f69\n``#336f69` `rgb(51,111,105)``\n• #397c76\n``#397c76` `rgb(57,124,118)``\n• #3f8a83\n``#3f8a83` `rgb(63,138,131)``\n• #45978f\n``#45978f` `rgb(69,151,143)``\n• #4ba59c\n``#4ba59c` `rgb(75,165,156)``\n• #53b1a7\n``#53b1a7` `rgb(83,177,167)``\n• #60b7ae\n``#60b7ae` `rgb(96,183,174)``\n• #6ebdb5\n``#6ebdb5` `rgb(110,189,181)``\n• #7bc3bc\n``#7bc3bc` `rgb(123,195,188)``\n• #89c9c3\n``#89c9c3` `rgb(137,201,195)``\n• #96cfca\n``#96cfca` `rgb(150,207,202)``\n• #a4d5d1\n``#a4d5d1` `rgb(164,213,209)``\n• #b1dcd7\n``#b1dcd7` `rgb(177,220,215)``\n• #bfe2de\n``#bfe2de` `rgb(191,226,222)``\n• #cce8e5\n``#cce8e5` `rgb(204,232,229)``\n• #daeeec\n``#daeeec` `rgb(218,238,236)``\n• #e7f4f3\n``#e7f4f3` `rgb(231,244,243)``\n• #f5fafa\n``#f5fafa` `rgb(245,250,250)``\nTint Color Variation\n\n# Tones of #1a3936\n\nA tone is produced by adding gray to any pure hue. 
In this case, #272c2c is the least saturated color, while #00534b is the most saturated one.

• #272c2c
``#272c2c` `rgb(39,44,44)``
• #242f2e
``#242f2e` `rgb(36,47,46)``
• #203331
``#203331` `rgb(32,51,49)``
• #1d3633
``#1d3633` `rgb(29,54,51)``
• #1a3936
``#1a3936` `rgb(26,57,54)``
• #173c39
``#173c39` `rgb(23,60,57)``
• #143f3b
``#143f3b` `rgb(20,63,59)``
• #10433e
``#10433e` `rgb(16,67,62)``
• #0d4640
``#0d4640` `rgb(13,70,64)``
• #0a4943
``#0a4943` `rgb(10,73,67)``
• #074c45
``#074c45` `rgb(7,76,69)``
• #044f48
``#044f48` `rgb(4,79,72)``
• #00534b
``#00534b` `rgb(0,83,75)``
Tone Color Variation

# Color Blindness Simulator

Below, you can see how #1a3936 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
• Achromatopsia 0.005% of the population
• Atypical Achromatopsia 0.001% of the population
Dichromacy
• Protanopia 1% of men
• Deuteranopia 1% of men
• Tritanopia 0.001% of the population
Trichromacy
• Protanomaly 1% of men, 0.01% of women
• Deuteranomaly 6% of men, 0.4% of women
• Tritanomaly 0.01% of the population" ]
http://www.sport-tx.com/w_general.html
[ "CDCVF310PW Texas Instruments", "Semiconductors, Integrated Circuits - IC clock buffer hipform1:10clckbffr generalpur+eapp", "Buy", "750XCXC-6A Magnecraft / Schneider Electric", "Electromechanical Products, Relays - general purpose relays: general purpose relay, 3PDT, 16 A, 6 VAC coil", "Buy", "G2R-2-S DC110(S) Omron Automation and Safety", "Electromechanical Products, Relays - general purpose relays: DPST-NO, 240 VAC, 5 A general purpose relay", "Buy", "G2R-2-S DC110(S) Omron Electronics", "Electromechanical Products, Relays & I/O Modules - General Purpose/Industrial Relays: DPST-NO, 240 VAC, 5 A general purpose relay", "Buy", "RLD60P050X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P017X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P110X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P135X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P065X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P075X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P160X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P250X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P020X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P040X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P010X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P090X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P025X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P030X Littelfuse Inc - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P185X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy", "RLD60P300X Littelfuse Inc - Circuit Protection - these 60 V RLDs were designed to be general purpose resettable fuses", "Buy" ]
[ null, "http://www.sport-tx.com/images/rohs.png", null, "http://www.sport-tx.com/images/pb.png", null, "http://p.datasheet.21ic.com/pdf2014/pic/2014081914/04/1bec3803-061b-4bd9-918f-3c4fcd72bcc7.jpeg", null, "http://www.sport-tx.com/images/rohs.png", null, "http://www.sport-tx.com/images/pb.png", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/rohs.png", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/pdf2014/pic/2014081819/42/57e1cc7d-074d-48c0-8ff9-aa2da6dcad36.jpeg", null, "http://www.sport-tx.com/images/rohs.png", null, "http://www.sport-tx.com/images/pb.png", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/rohs.png", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, 
"http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://www.sport-tx.com/images/xiaotu.jpg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null, "http://p.datasheet.21ic.com/source3/img/partimg/2014081819/19/61ee0b76-73c6-4d70-b9b7-03366642f483.jpeg", null, "http://www.sport-tx.com/images/arrow_icon.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8449658,"math_prob":0.60648805,"size":2156,"snap":"2019-43-2019-47","text_gpt3_token_len":823,"char_repetition_ratio":0.2002788,"word_repetition_ratio":0.5264901,"special_character_ratio":0.26391464,"punctuation_ratio":0.0063091484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9603514,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96],"im_url_duplicate_count":[null,null,null,null,null,1,null,null,null,null,null,null,null,9,null,null,null,null,null,1,null,null,null,null,null,null,null,9,null,null,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null,9,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T23:38:51Z\",\"WARC-Record-ID\":\"<urn:uuid:95fac2fb-6435-4670-a30e-445097e46805>\",\"Content-Length\":\"50439\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60e14a7f-cede-446f-b395-276a0802a37a>\",\"WARC-Concurrent-To\":\"<urn:uuid:50c97f3e-a105-4dc8-8d1b-c0b6380608ba>\",\"WARC-IP-Address\":\"175.29.149.17\",\"WARC-Target-URI\":\"http://www.sport-tx.com/w_general.html\",\"WARC-Payload-Digest\":\"sha1:B44YIRVGWFD4OZAEKCV3BUOTDKASWNHL\",\"WARC-Block-Digest\":\"sha1:GBYFTWCX5HUUAHFXZ54BZDFWRL6YX3IE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986660829.5_warc_CC-MAIN-20191015231925-20191016015425-00464.warc.gz\"}"}
https://answers.launchpad.net/yade/+question/690942
[ "# Concrete cube dimension, aggregate particle and peri3dController\n\nAsked by Faqih Maarif on 2020-05-25\n\nDear All,\n\nI am new to YADE. I have studied some code from this forum. I have some questions related to concrete cube testing.\n\n1. How to determine the dimensions of the concrete cube (150x150mm) in the modeling? In the code there is an initial size = 1.2, but apparently this is not the cube dimension.\n\n2. How to determine the variation of aggregate granules if specified:\na. 0-4mm: 40%\nb. 4-8 mm: 22%\nc. 8-16mm: 38%\n\n3. How to determine the values (goal, xxPath, yyPath, zzPath, zxPath, xyPath)? I'm confused about how these numbers are determined. I have read about it in the YADE book (pg. 280), but I don't understand it.\n\ngoal=(20e-4,-6e-4,0, -2e6,3e-4,2e6)\n# the prescribed path (step,value of stress/strain) can be defined in absolute values\nxxPath=[(465,5e-4),(934,-5e-4),(1134,10e-4)],\n# or in relative values\nyyPath=[(2,4),(7,-2),(11,0),(14,4)],\n# if the goal value is 0, the absolute stress/strain values are always considered (step values remain relative)\nzzPath=[(5,-1e7),(10,0)],\n# if ##Path is not explicitly defined, it is considered as linear function between (0,0) and (nSteps,goal)\n# as in yzPath and xyPath\n# the relative values are really relative (zxPath gives the same - except of the sign from goal value - result as yyPath)\nzxPath=[(4,2),(14,-1),(22,0),(28,2)],\nxyPath=[(1,1),(2,-1),(3,1),(4,-1),(5,1)],\n\nRegards,\nFaqih Ma’arif\n\n****************************************************************************************************************************************************************************************************************\nthe complete code is as follows:\n\n# peri3dController_example1.py\n# script that explains the functionality and input parameters of Peri3dController\n\nimport string\n\n# create some 
material\n#O.materials.append(CpmMat(neverDamage=True,young=25e9,frictionAngle=.7,poisson=.2,sigmaT=3e6,epsCrackOnset=1e-4,relDuctility=30))\nO.materials.append(CpmMat(neverDamage=True,young=25e9,frictionAngle=.7,poisson=.2,sigmaT=3e6,epsCrackOnset=1e-4,relDuctility=30))\n\n# create periodic assembly of particles\ninitSize=1.2 #old\n\nsp.toSimulation()\n\n# plotting\n#plot.live=False\nplot.plots={'progress':('sx','sy','sz','syz','szx','sxy',),'progress_':('ex','ey','ez','eyz','ezx','exy',)}\nprogress=p3d.progress,progress_=p3d.progress,\nsx=p3d.stress,sy=p3d.stress,sz=p3d.stress,\nsyz=p3d.stress,szx=p3d.stress,sxy=p3d.stress,\nex=p3d.strain,ey=p3d.strain,ez=p3d.strain,\neyz=p3d.strain,ezx=p3d.strain,exy=p3d.strain,\n)\n\n# in how many time steps should be the goal state reached\nnSteps=4000 #new\n\nO.dt=PWaveTimeStep()/2\nEnlargeFactor=1.5\nEnlargeFactor=1.0\nO.engines=[\nForceResetter(),\nInsertionSortCollider([Bo1_Sphere_Aabb(aabbEnlargeFactor=EnlargeFactor,label='bo1s')]),\nInteractionLoop(\n[Ig2_Sphere_Sphere_ScGeom(interactionDetectionFactor=EnlargeFactor,label='ig2ss')],\n[Ip2_CpmMat_CpmMat_CpmPhys()],[Law2_ScGeom_CpmPhys_Cpm()]),\nNewtonIntegrator(),\nPeri3dController(\ngoal=(20e-4,-6e-4,0, -2e6,3e-4,2e6), # Vector6 of prescribed final values (xx,yy,zz, yz,zx,xy)\nstressMask=0b101100, # prescribed ex,ey,sz,syz,ezx,sxy; e..strain; s..stress\nnSteps=nSteps, # how many time steps the simulation will last\n# after reaching nSteps do doneHook action\ndoneHook='print \"Simulation with Peri3dController finished.\"; O.pause()',\n\n# the prescribed path (step,value of stress/strain) can be defined in absolute values\nxxPath=[(465,5e-4),(934,-5e-4),(1134,10e-4)],\n# or in relative values\nyyPath=[(2,4),(7,-2),(11,0),(14,4)],\n# if the goal value is 0, the absolute stress/strain values are always considered (step values remain relative)\nzzPath=[(5,-1e7),(10,0)],\n# if ##Path is not explicitly defined, it is considered as linear function between (0,0) and 
(nSteps,goal)\n# as in yzPath and xyPath\n# the relative values are really relative (zxPath gives the same - except of the sign from goal value - result as yyPath)\nzxPath=[(4,2),(14,-1),(22,0),(28,2)],\nxyPath=[(1,1),(2,-1),(3,1),(4,-1),(5,1)],\n# variables used in the first step\nlabel='p3d'\n),\n]\n\nO.step()\nbo1s.aabbEnlargeFactor=ig2ss.interactionDetectionFactor=1.\nO.run(); #O.wait()\n\nplot.plot(subPlots=False)\n\n****************************************************************************************************************************************************************************************************************\n\n## Question information\n\nLanguage:\nEnglish\nStatus:\nSolved\nAssignee:\nNo assignee\nSolved by:\nJan Stránský\nSolved:\n2020-05-26\nLast query:\n2020-05-26\n\n Bruno Chareyre (bruno-chareyre) said on 2020-05-25: #1\n\nHi,\n\n> there is an initial size = 1.2, but apparently, this is not the cube dimension\n\nWhy not?\n\n> How to determine the value (goal, xxPath, yyPath, zzPath, zxPath, xyPath)\n\nCould you explain the loading path you would like to impose? Then maybe understanding this part will be easier.\n\nRegards\n\nBruno\n\n Jan Stránský (honzik) said on 2020-05-25: #2\n\nHello,\n\n> I am new to YADE\n\nwelcome :-)\n\n> There are some of the questions ...\n\nnext time please open a separate \"ask a question\" for each question \n\n> 1...\n> peri3dController\n> How to determine the dimensions of the concrete cube\n> there is an initial size = 1.2, but apparently, this is not the cube dimension.\n\nthere is no \"cube\", the simulation is periodic, simulating infinite space.\nTo get the dimension of the periodic cell (which is in no way related to physical dimensions - as it is infinite), you can use O.cell.size .\nThe initSize is passed to randomPeriPack. It creates a loose packing using makeCloud with the initSize size, then compresses it. 
The parameter basically controls the number of particles in the simulation.\n\nHave a look at yade/examples/concrete for simulations of \"physical\" samples.\n\n> 2. How to determine the variation of aggregate granules if specified;\n\nIt is not possible \"as is\"; randomPeriPack just supports a uniform distribution (defined by the radius and rRelFuzz parameters).\nTo use a defined particle size distribution, one option is to \"rewrite\" the randomPeriPack function with an adjusted makeCloud call.\n\nit is better to use links to online documentation, like \n\n> 3. How to determine the values (goal, xxPath, yyPath, zzPath, zxPath, xyPath)? I'm confused about how these numbers are determined. I have read about it in the YADE book (pg. 280), but I don't understand it.\n\nPlease describe more what you do not understand.\nThe path arguments are just a list of (time,value) pairs, i.e. what value is prescribed at what time/iter.\nIn certain cases, the values may be relative (e.g. the time on scale [0,1], internally rescaling the values with the nSteps argument).\nThe prescribed value is linearly interpolated between these defined points.\nHave a look at the comments in the example you have posted and compare them with the simulation results.\n\nA note about modeling of concrete using periodic simulations: in case of strain localization, the periodicity has an influence on cracks/strain localized areas and therefore on the overall response.\nThis is not a problem \"by default\", one just needs to (should) keep it in mind while using it.\n\ncheers\nJan\n\n Faqih Maarif (faqih07) said on 2020-05-26: #3\n\nDear All,\nFirst of all, I would like to thank you for the enlightening answers, and I would like to apologize for not obeying the rules.\n\nThank you also for the answers about the concrete cube's dimensions and the variation of aggregate particles. 
I will ask separately, as the forum rules require.\n\nMy case is the following:\nI have tested the compressive strength of a concrete cube (150mmx150mm), and I will model it in YADE. In the modeling, I will consider stress and strain periodically. I am still confused about how to determine the paths (xxPath, yyPath, zzPath, zxPath, xyPath), as written below.\n\nxxPath=[(465,5e-4),(934,-5e-4),(1134,10e-4)], #old\nyyPath=[(2,4),(7,-2),(11,0),(14,4)],\nzzPath=[(5,-1e7),(10,0)],\nzxPath=[(4,2),(14,-1),(22,0),(28,2)],\nxyPath=[(1,1),(2,-1),(3,1),(4,-1),(5,1)],\n\nRegards,\nFaqih Ma’arif\n\nJan Stránský (honzik) said on 2020-05-26: #4\n\n> I have tested the compressive strength of the concrete cube (150mmx150mm)\n> In modeling, I will consider stress and strain periodically.\n\nThe strain field in the post-peak range in uniaxial compression is not uniform, so using periodicity needs some caution (!)\n\n> I am still confused about how to determine the (xxPath, yyPath,zzPath,zxPath,xyPath), as written below.\n\nthe easiest is not to bother with paths at all :-) by default they are linear from zero to the goal value, which should be OK for a uniaxial test - just define an appropriate goal and stressMask.\n\ncheers\nJan\n\n Faqih Maarif (faqih07) said on 2020-05-26: #5\n\nDear Mr. Jan and Mr. Bruno,\n\nThank you very much for your attention and cooperation" ]
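To make Jan's description of the path arguments concrete, here is a plain-Python sketch (my illustration, not YADE code; the helper name `path_value` is made up) of how a `(step, value)` path is read: a list of points, starting implicitly from `(0, 0)`, with the prescribed value linearly interpolated between the listed points.

```python
def path_value(path, step):
    """Return the prescribed value at a given step; path is [(step, value), ...]."""
    prev_s, prev_v = 0, 0.0  # the path implicitly starts at (0, 0)
    for s, v in path:
        if step <= s:
            # linear interpolation between the previous and the current point
            t = (step - prev_s) / (s - prev_s)
            return prev_v + t * (v - prev_v)
        prev_s, prev_v = s, v
    return prev_v  # past the last point, hold the final value

xxPath = [(465, 5e-4), (934, -5e-4), (1134, 10e-4)]
path_value(xxPath, 465)    # 5e-4 exactly at the first listed point
path_value(xxPath, 699.5)  # 0.0, halfway between 5e-4 and -5e-4
```

This only illustrates the absolute-value case; for relative paths, the step and value scales are rescaled internally with `nSteps` and `goal`, as Jan notes above.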
[ null, "https://answers.launchpad.net/@@/favourite-yes", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82947075,"math_prob":0.9073839,"size":6943,"snap":"2020-24-2020-29","text_gpt3_token_len":1830,"char_repetition_ratio":0.10376135,"word_repetition_ratio":0.26801407,"special_character_ratio":0.2529166,"punctuation_ratio":0.14864865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9605663,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T00:43:44Z\",\"WARC-Record-ID\":\"<urn:uuid:8997e860-37cd-4173-84de-87ce282a759b>\",\"Content-Length\":\"42401\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6ef6d13-7a83-427d-a07d-a7f1524d522c>\",\"WARC-Concurrent-To\":\"<urn:uuid:9943ff83-ca61-4b21-a54c-60eff9661d5e>\",\"WARC-IP-Address\":\"91.189.89.225\",\"WARC-Target-URI\":\"https://answers.launchpad.net/yade/+question/690942\",\"WARC-Payload-Digest\":\"sha1:JKNZU727V43GZQWP4LS5IB77472GQVPE\",\"WARC-Block-Digest\":\"sha1:QG2BU5N7J26AUOUQQ5R3Q6WMWMYAEXP4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657147031.78_warc_CC-MAIN-20200713225620-20200714015620-00421.warc.gz\"}"}
https://www.tgmarinho.com/high-order-functions-%E2%80%94-easy-mode/
[ "# High Order Functions — Easy Mode\n\n## I intend to explain a little bit about High Order Functions with Javascript.\n\nA higher order function is a function that takes a function as an argument, or returns a function. That's it, no big deal.\n\nThis allows JS to be more flexible and pluggable.\n\nLook at the code below; it is easy. I declare three constants, each receiving a function that shows the user a prompt and waits for an answer.\n\n``const fnName = () => prompt('Tell me your name.') ``\n``const fnMiddle = () => prompt('Tell me your middle name.') ``\n``const fnLastName = () => prompt('Tell me your last name.')``\n\nThen, if you run fnName() in the console, a small window (modal) like this appears:\n\nWhen you type your name and click OK, the process finishes.\n\nNow, below, I have a new function that receives functions; look at it:\n\n``const withNameComplete = (...funcs) => funcs.map(func => func()).join(' ')``\n\nThanks to ES6+ I may use this: ...funcs (rest parameters) to receive any number of arguments inside an array, so I have an array of functions.\n\nIn the code above, the const withNameComplete receives other functions that do the work for us. 
Here I add a bit of complexity with map on top of the HOF.\n\nThis function can receive many arguments, like withNameComplete(a,b,c,d,e,f), and I get Array(a,b,c,d,e,f) when I use ...funcs as in my example above.\n\n``````// example: declare 3 funcs and pass them to withNameComplete\n\nconst fnName = () => prompt('Tell me your name')\nconst fnMiddle = () => prompt('Tell me your middle name')\nconst fnLastName = () => prompt('Tell me your lastname')\n\nconst withNameComplete = (...funcs) => console.log(funcs);\n\nwithNameComplete(fnName, fnMiddle, fnLastName)``````\n\nThe code above produces this:\n\nAbove I'm logging an array with three functions, as I said.\n\nSo, all I need to do is call each function inside the array.\n\nNow I have a couple of alternatives: in the first I use map, in the second I use reduce. Please, if you have a better solution I would love to know it; share your knowledge as well.\n\nFirst solution:\n\n``````const fnName = () => prompt('tell me your name')\nconst fnMiddle = () => prompt('tell me your middle name')\nconst fnLastName = () => prompt('tell me your lastname')\n\n// using Map\nconst withNameComplete = (...funcs) => funcs.map(func => func()).join(' ')\n\nconsole.log(withNameComplete(fnName, fnMiddle, fnLastName))``````\n\nYou can see in line eight that I call the function passing the other functions as arguments; note that I only pass the references, I don't call them there. 
For example, I can't do the following, or an error occurs:\n\n``````const fnName = () => prompt('tell me your name')\nconst fnMiddle = () => prompt('tell me your middle name')\nconst fnLastName = () => prompt('tell me your lastname')\n\n// using Map\nconst withNameComplete = (...funcs) => funcs.map(func => func()).join(' ')\n\n// wrong: the functions are already called here, so strings are passed instead\nconsole.log(withNameComplete(fnName(), fnMiddle(), fnLastName()))``````\n\nUsing map I iterate over each function inside the funcs array and execute it; this returns a new array (thanks, immutability; that is maybe another post), and then I use join(' '), which extracts the values of the new array and puts them in a string.\n\nSecond solution:\n\nThis time I will use reduce:\n\n``````const fnName = () => prompt('Tell me your name')\nconst fnMiddle = () => prompt('Tell me your middle name')\nconst fnLastName = () => prompt('Tell me your lastname')\n\n// using Reduce\nconst withNameComplete = (...funcs) => funcs.reduce((acc, cv) => `${acc} ${cv()}`.trimStart(), '')\n\nconsole.log(withNameComplete(fnName, fnMiddle, fnLastName))``````\n\nCheck it out: I only changed map to reduce; I use a template string to concatenate the strings, and the function trimStart to remove the leading white space, because the accumulator starts as an empty string.\n\nWith reduce, the return value of each step is the accumulated value (acc) plus the current value (cv). 
Since I am using a template string, it is a little hard to see at first, though.\n\nFinally, HOFs, Array.map and Array.reduce are subjects that are hard to understand at the beginning, but with theory and practice you'll get them easily; don't give up, keeping at your studies is the best way.\n\nSo, again, if you have a better solution I would love to know it.\n\nAll in one:\n\n``````// Training HOF => High Order Functions\n\nconst fnName = () => prompt('tell me your name')\nconst fnMiddle = () => prompt('tell me your middle name')\nconst fnLastName = () => prompt('tell me your lastname')\n\n// Debugging\n//const withNameComplete = (...funcs) => console.log(funcs)\n\n// using Map\n//const withNameComplete = (...funcs) => funcs.map(func => func()).join(' ')\n\n// using Reduce\n\nconst withNameComplete = (...funcs) => funcs.reduce((acc, cv) => `${acc} ${cv()}`.trimStart(), '')\n\nconsole.log(withNameComplete(fnName, fnMiddle, fnLastName))``````\n\nLast but not least, if you want to see something cool and advanced about HOFs and reduce, take a look at this: https://github.com/acdlite/recompose/blob/master/src/packages/recompose/compose.js" ]
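Since the post invites alternatives, here is a third option (my sketch, not from the original article): plain iteration with for...of, avoiding both map and reduce. The stand-in functions replace prompt() so the snippet also runs outside the browser.

```javascript
// A third alternative: plain iteration, no map or reduce.
const withNameComplete = (...funcs) => {
  const parts = [];
  for (const func of funcs) parts.push(func()); // call each function in order
  return parts.join(' ');
};

// Stand-ins for prompt(), so this runs in Node as well as in the browser:
const fnName = () => 'Ada';
const fnLastName = () => 'Lovelace';

console.log(withNameComplete(fnName, fnLastName)); // Ada Lovelace
```

It is more verbose than map, but it makes the "call each function in the array" step explicit, which may help while learning.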
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7585449,"math_prob":0.85920286,"size":5246,"snap":"2019-51-2020-05","text_gpt3_token_len":1240,"char_repetition_ratio":0.18237314,"word_repetition_ratio":0.25665858,"special_character_ratio":0.26401067,"punctuation_ratio":0.14994934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9510637,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T08:38:54Z\",\"WARC-Record-ID\":\"<urn:uuid:07603b25-be08-401f-83d6-8f5a02adf402>\",\"Content-Length\":\"27607\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8308cd17-7fae-4967-8f80-0463f805df4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:26b6486e-f487-4e96-8657-af4e2c67f827>\",\"WARC-IP-Address\":\"157.245.130.6\",\"WARC-Target-URI\":\"https://www.tgmarinho.com/high-order-functions-%E2%80%94-easy-mode/\",\"WARC-Payload-Digest\":\"sha1:SJVGZ2XUVYIM3ZJ3NW35A2OQANXY6EK2\",\"WARC-Block-Digest\":\"sha1:MNFH3TN2W6IS32TWS2I6AYMJE4LCVMEU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594333.5_warc_CC-MAIN-20200119064802-20200119092802-00441.warc.gz\"}"}
https://terrytao.wordpress.com/tag/bolzano-weierstrass-theorem/
[ "You are currently browsing the tag archive for the ‘Bolzano-Weierstrass theorem’ tag.\n\nNonstandard analysis is a mathematical framework in which one extends the standard mathematical universe ${{\mathfrak U}}$ of standard numbers, standard sets, standard functions, etc. into a larger nonstandard universe ${{}^* {\mathfrak U}}$ of nonstandard numbers, nonstandard sets, nonstandard functions, etc., somewhat analogously to how one places the real numbers inside the complex numbers, or the rationals inside the reals. This nonstandard universe enjoys many of the same properties as the standard one; in particular, we have the transfer principle that asserts that any statement in the language of first order logic is true in the standard universe if and only if it is true in the nonstandard one. (For instance, because Fermat’s last theorem is known to be true for standard natural numbers, it is automatically true for nonstandard natural numbers as well.) However, the nonstandard universe also enjoys some additional useful properties that the standard one does not, most notably the countable saturation property, which is a property somewhat analogous to the completeness property of a metric space; much as metric completeness allows one to assert that the intersection of a countable family of nested closed balls is non-empty, countable saturation allows one to assert that the intersection of a countable family of nested satisfiable formulae is simultaneously satisfiable. (See this previous blog post for more on the analogy between the use of nonstandard analysis and the use of metric completions.) 
Furthermore, by viewing both the standard and nonstandard universes externally (placing them both inside a larger metatheory, such as a model of Zermelo-Fraenkel-Choice (ZFC) set theory; in some more advanced set-theoretic applications one may also wish to add some large cardinal axioms), one can place some useful additional definitions and constructions on these universes, such as defining the concept of an infinitesimal nonstandard number (a number which is smaller in magnitude than any positive standard number). The ability to rigorously manipulate infinitesimals is of course one of the most well-known advantages of working with nonstandard analysis.\n\nTo build a nonstandard universe ${{}^* {\mathfrak U}}$ from a standard one ${{\mathfrak U}}$, the most common approach is to take an ultrapower of ${{\mathfrak U}}$ with respect to some non-principal ultrafilter over the natural numbers; see e.g. this blog post for details. Once one is comfortable with ultrafilters and ultrapowers, this becomes quite a simple and elegant construction, and greatly demystifies the nature of nonstandard analysis.\n\nOn the other hand, nonprincipal ultrafilters do have some unappealing features. The most notable one is that their very existence requires the axiom of choice (or more precisely, a weaker form of this axiom known as the boolean prime ideal theorem). Closely related to this is the fact that one cannot actually write down any explicit example of a nonprincipal ultrafilter, but must instead rely on nonconstructive tools such as Zorn’s lemma, the Hahn-Banach theorem, Tychonoff’s theorem, the Stone-Cech compactification, or the boolean prime ideal theorem to locate one. As such, ultrafilters definitely belong to the “infinitary” side of mathematics, and one may feel that it is inappropriate to use such tools for “finitary” mathematical applications, such as those which arise in hard analysis. 
From a more practical viewpoint, because of the presence of the infinitary ultrafilter, it can be quite difficult (though usually not impossible, with sufficient patience and effort) to take a finitary result proven via nonstandard analysis and coax an effective quantitative bound from it.\n\nThere is however a “cheap” version of nonstandard analysis which is less powerful than the full version, but is not as infinitary in that it is constructive (in the sense of not requiring any sort of choice-type axiom), and which can be translated into standard analysis somewhat more easily than a fully nonstandard argument; indeed, a cheap nonstandard argument can often be presented (by judicious use of asymptotic notation) in a way which is nearly indistinguishable from a standard one. It is obtained by replacing the nonprincipal ultrafilter in fully nonstandard analysis with the more classical Fréchet filter of cofinite subsets of the natural numbers, which is the filter that implicitly underlies the concept of the classical limit ${\lim_{{\bf n} \rightarrow \infty} a_{\bf n}}$ of a sequence when the underlying asymptotic parameter ${{\bf n}}$ goes off to infinity. As such, “cheap nonstandard analysis” aligns very well with traditional mathematics, in which one often allows one’s objects to be parameterised by some external parameter such as ${{\bf n}}$, which is then allowed to approach some limit such as ${\infty}$. The catch is that the Fréchet filter is merely a filter and not an ultrafilter, and as such some of the key features of fully nonstandard analysis are lost. 
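For concreteness (this display is an added gloss, not part of the original post), the Fréchet filter and its connection to the classical limit can be written as:

```latex
% The Fréchet (cofinite) filter on the natural numbers:
\mathcal{F} \;=\; \{\, A \subseteq \mathbb{N} \;:\; \mathbb{N} \setminus A \text{ is finite} \,\},
% and a sequence (a_n) converges classically to L precisely when
\{\, n \;:\; |a_n - L| < \varepsilon \,\} \in \mathcal{F}
\quad \text{for every } \varepsilon > 0.
```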
Most notably, the law of the excluded middle does not transfer over perfectly from standard analysis to cheap nonstandard analysis; much as there exist bounded sequences of real numbers (such as $0,1,0,1,\ldots$) which do not converge to a (classical) limit, there exist statements in cheap nonstandard analysis which are neither true nor false (at least without passing to a subsequence, see below). The loss of such a fundamental law of mathematical reasoning may seem like a major disadvantage for cheap nonstandard analysis, and it does indeed make cheap nonstandard analysis somewhat weaker than fully nonstandard analysis. But in some situations (particularly when one is reasoning in a "constructivist" or "intuitionistic" fashion, and in particular if one is avoiding too much reliance on set theory) it turns out that one can survive the loss of this law; and furthermore, the law of the excluded middle is still available for standard analysis, and so one can often proceed by working from time to time in the standard universe to temporarily take advantage of this law, and then transferring the results obtained there back to the cheap nonstandard universe once one no longer needs to invoke the law of the excluded middle. Furthermore, the law of the excluded middle can be recovered by adopting the freedom to pass to subsequences with regard to the asymptotic parameter ${\bf n}$; this technique is already in widespread use in the analysis of partial differential equations, although it is generally referred to by names such as "the compactness method" rather than as "cheap nonstandard analysis".

Below the fold, I would like to describe this cheap version of nonstandard analysis, which I think can serve as a pedagogical stepping stone towards fully nonstandard analysis, as it is formally similar to (though weaker than) fully nonstandard analysis, but on the other hand is closer in practice to standard analysis.
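The $0,1,0,1,\ldots$ example above can be played out in the same toy spot-check model (again a hypothetical finite caricature, not the actual infinitary definition): the statement "$a = 0$" about the alternating sequence is neither eventually true nor eventually false, but it becomes decidable after passing to a subsequence.

```python
# The alternating sequence 0,1,0,1,... viewed as a cheap nonstandard number:
# "a = 0" is not eventually true, and "a = 1" is not eventually true either,
# so the excluded middle fails in the cheap model.
START, HORIZON = 100, 10_000   # arbitrary finite spot-check window

def eventually(pred):
    return all(pred(n) for n in range(START, HORIZON))

alt = lambda n: n % 2

assert not eventually(lambda n: alt(n) == 0)   # fails on all odd n
assert not eventually(lambda n: alt(n) == 1)   # fails on all even n
# Passing to the subsequence n -> 2n (the "compactness method") restores a
# definite truth value:
assert eventually(lambda n: alt(2 * n) == 0)
```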
As we shall see below, the relation between cheap nonstandard analysis and standard analysis is analogous in many ways to the relation between probabilistic reasoning and deterministic reasoning; it also resembles somewhat the preference in much of modern mathematics for viewing mathematical objects as belonging to families (or to categories) to be manipulated en masse, rather than treating each object individually. (For instance, nonstandard analysis can be used as a partial substitute for scheme theory in order to obtain uniformly quantitative results in algebraic geometry, as discussed for instance in this previous blog post.)

Many structures in mathematics are incomplete in one or more ways. For instance, the field of rationals ${\bf Q}$ or the reals ${\bf R}$ are algebraically incomplete, because there are some non-trivial algebraic equations (such as $x^2=2$ in the case of the rationals, or $x^2=-1$ in the case of the reals) which could potentially have solutions (because they do not imply a necessarily false statement, such as $1=0$, just using the laws of algebra), but do not actually have solutions in the specified field.

Similarly, the rationals ${\bf Q}$, when viewed now as a metric space rather than as a field, are also metrically incomplete, because there exist sequences in the rationals (e.g. the decimal approximations $3, 3.1, 3.14, 3.141, \ldots$ of the irrational number $\pi$) which could potentially converge to a limit (because they form a Cauchy sequence), but do not actually converge in the specified metric space.

A third type of incompleteness is that of logical incompleteness, which applies now to formal theories rather than to fields or metric spaces.
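The metric incompleteness of the rationals described above can be illustrated with exact rational arithmetic (a small sketch added for illustration, not part of the original post): the truncated decimals of $\pi$ form a Cauchy sequence in ${\bf Q}$, but their limit exists only in the completion ${\bf R}$.

```python
from fractions import Fraction
import math

# Truncated decimal expansions of pi: a Cauchy sequence of rationals.
approx = [Fraction(math.floor(math.pi * 10**k), 10**k) for k in range(8)]

# Consecutive terms get arbitrarily close (the Cauchy property) ...
gaps = [abs(approx[k + 1] - approx[k]) for k in range(7)]
assert all(gap <= Fraction(1, 10**k) for k, gap in enumerate(gaps))

# ... and the sequence approaches pi, which is irrational, so the limit
# lives in the metric completion R rather than in Q itself.
assert all(abs(float(a) - math.pi) < 10.0 ** (-k) for k, a in enumerate(approx))
```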
For instance, Zermelo-Fraenkel-Choice (ZFC) set theory is logically incomplete, because there exist statements (such as the consistency of ZFC) which could potentially be provable in the theory (because they do not lead to a contradiction, or at least so we believe, just from the axioms and deductive rules of the theory), but are not actually provable in this theory.

A fourth type of incompleteness, which is slightly less well known than the above three, is what I will call elementary incompleteness (and which model theorists call the failure of the countable saturation property). It applies to any structure that is describable by a first-order language, such as a field, a metric space, or a universe of sets. For instance, in the language of ordered real fields, the real line ${\bf R}$ is elementarily incomplete, because there exists a sequence of statements (such as the statements $0 < x < 1/n$ for natural numbers $n=1,2,\ldots$) in this language which are potentially simultaneously satisfiable (in the sense that any finite number of these statements can be satisfied by some real number $x$) but are not actually simultaneously satisfiable in this theory.

In each of these cases, though, it is possible to start with an incomplete structure and complete it to a much larger structure to eliminate the incompleteness. For instance, starting with an arbitrary field $k$, one can take its algebraic completion (or algebraic closure) $\overline{k}$; for instance, ${\bf C} = \overline{{\bf R}}$ can be viewed as the algebraic completion of ${\bf R}$. This field is usually significantly larger than the original field $k$, but contains $k$ as a subfield, and every element of $\overline{k}$ can be described as the solution to some polynomial equation with coefficients in $k$.
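For the concrete case $k = {\bf R}$, the passage to the algebraic completion ${\bf C}$ can be checked numerically (an illustrative sketch using numpy, not part of the original text): $x^2 = -1$ has no real solution, but acquires two solutions in the closure.

```python
import numpy as np

# x^2 + 1 = 0 is potentially satisfiable (it implies no contradiction such
# as 1 = 0), yet it has no root in R; in the algebraic completion C it has
# exactly two roots, +i and -i.
roots = np.roots([1.0, 0.0, 1.0])        # coefficients of x^2 + 0x + 1

assert all(abs(r.imag) > 0.5 for r in roots)        # no real roots
assert all(abs(r**2 + 1) < 1e-10 for r in roots)    # both satisfy x^2 = -1
```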
Furthermore, $\overline{k}$ is now algebraically complete (or algebraically closed): every polynomial equation in $\overline{k}$ which is potentially satisfiable (in the sense that it does not lead to a contradiction such as $1=0$ from the laws of algebra), is actually satisfiable in $\overline{k}$.

Similarly, starting with an arbitrary metric space $X$, one can take its metric completion $\overline{X}$; for instance, ${\bf R} = \overline{{\bf Q}}$ can be viewed as the metric completion of ${\bf Q}$. Again, the completion $\overline{X}$ is usually much larger than the original metric space $X$, but contains $X$ as a subspace, and every element of $\overline{X}$ can be described as the limit of some Cauchy sequence in $X$. Furthermore, $\overline{X}$ is now a complete metric space: every sequence in $\overline{X}$ which is potentially convergent (in the sense of being a Cauchy sequence), is now actually convergent in $\overline{X}$.

In a similar vein, we have the Gödel completeness theorem, which implies (among other things) that for any consistent first-order theory $T$ for a first-order language $L$, there exists at least one completion $\overline{T}$ of that theory $T$, which is a consistent theory in which every sentence in $L$ which is potentially true in $\overline{T}$ (because it does not lead to a contradiction in $\overline{T}$) is actually true in $\overline{T}$.
Indeed, the completeness theorem provides at least one model (or structure) ${\mathfrak U}$ of the consistent theory $T$, and then the completion $\overline{T} = \hbox{Th}({\mathfrak U})$ can be formed by interpreting every sentence in $L$ using ${\mathfrak U}$ to determine its truth value. Note, in contrast to the previous two examples, that the completion is usually not unique in any way; a theory $T$ can have multiple inequivalent models ${\mathfrak U}$, giving rise to distinct completions of the same theory.

Finally, if one starts with an arbitrary structure ${\mathfrak U}$, one can form an elementary completion ${}^* {\mathfrak U}$ of it, which is a significantly larger structure which contains ${\mathfrak U}$ as a substructure, and such that every element of ${}^* {\mathfrak U}$ is an elementary limit of a sequence of elements in ${\mathfrak U}$ (I will define this term shortly). Furthermore, ${}^* {\mathfrak U}$ is elementarily complete; any sequence of statements that are potentially simultaneously satisfiable in ${}^* {\mathfrak U}$ (in the sense that any finite number of statements in this collection are simultaneously satisfiable), will actually be simultaneously satisfiable. As we shall see, one can form such an elementary completion by taking an ultrapower of the original structure ${\mathfrak U}$. If ${\mathfrak U}$ is the standard universe of all the standard objects one considers in mathematics, then its elementary completion ${}^* {\mathfrak U}$ is known as the nonstandard universe, and is the setting for nonstandard analysis.

As mentioned earlier, completion tends to make a space much larger and more complicated. If one algebraically completes a finite field, for instance, one necessarily obtains an infinite field as a consequence.
If one metrically completes a countable metric space with no isolated points, such as ${\bf Q}$, then one necessarily obtains an uncountable metric space (thanks to the Baire category theorem). If one takes a logical completion of a consistent first-order theory that can model true arithmetic, then this completion is no longer describable by a recursively enumerable schema of axioms, thanks to Gödel's incompleteness theorem. And if one takes the elementary completion of a countable structure, such as the integers ${\bf Z}$, then the resulting completion ${}^* {\bf Z}$ will necessarily be uncountable.

However, there are substantial benefits to working in the completed structure which can make it well worth the massive increase in size. For instance, by working in the algebraic completion of a field, one gains access to the full power of algebraic geometry. By working in the metric completion of a metric space, one gains access to powerful tools of real analysis, such as the Baire category theorem, the Heine-Borel theorem, and (in the case of Euclidean completions) the Bolzano-Weierstrass theorem. By working in a logically and elementarily completed theory (aka a saturated model) of a first-order theory, one gains access to the branch of model theory known as definability theory, which allows one to analyse the structure of definable sets in much the same way that algebraic geometry allows one to analyse the structure of algebraic sets.
Finally, when working in an elementary completion of a structure, one gains a sequential compactness property, analogous to the Bolzano-Weierstrass theorem, which can be interpreted as the foundation for much of nonstandard analysis, as well as providing a unifying framework to describe various correspondence principles between finitary and infinitary mathematics.

In this post, I wish to expand upon these above points with regard to elementary completion, and to present nonstandard analysis as a completion of standard analysis in much the same way as, say, complex algebra is a completion of real algebra, or real metric geometry is a completion of rational metric geometry.
https://www.freethesaurus.com/Holomorphic
# analytic

(redirected from Holomorphic)
Also found in: Dictionary, Medical, Encyclopedia, Wikipedia.

## Synonyms for analytic

### of a proposition that is necessarily true independent of fact or experience

## References in periodicals archive

LIU, Modified Roper-Suffridge operator for some holomorphic mappings, Front.

Skrypnik, "On holomorphic solutions of the Darwin equations of motion of point charges," Ukrainian Mathematical Journal, vol.

(Idea of the proof) By the standard holomorphic coordinate changes, $r(w)$ has the Taylor series expansion as in (8).

We denote the Smirnov class by $N_*(U)$, which consists of all holomorphic functions $f$ on $U$ such that $\log(1 + |f(z)|) \le Q[\varphi](z)$ ($z \in U$) for some $\varphi \in L^1(T)$, $\varphi \ge 0$, where the right side denotes the Poisson integral of $\varphi$ on $U$.

Holland and Walsh characterized the holomorphic Bloch space in $D$ in terms of weighted Euclidean Lipschitz functions of indices $(1/2, 1/2)$.

in $\epsilon$ with coefficients $a_i(z)$ in the ring $O(r)$ of holomorphic functions on $D_r$, continuous in its closure, satisfying

Shafikov, "Analytic continuation of holomorphic mappings from nonminimal hypersurfaces," Indiana University Mathematics Journal, vol.

where a tangent space index $A = 1, \ldots, 6$ has been split into a holomorphic index $i = 1,2,3$ and an antiholomorphic index $\bar{i} = 1,2,3$.

If f: D \ E c O is holomorphic and bounded, then f has a unique holomorphic extension to D.

Consider the holomorphic function $F(z) := [??](z) - z$ defined on $B_\delta$.

If $P$ (or $F$) is parallel, then the holomorphic distribution $H$ is integrable.

On $R^2$, a 1-quasiconformal mapping is holomorphic or antiholomorphic.

If D is a non-empty simply connected open subset of the complex plane C which is not all of C,
then there exists a biholomorphic (bijective and holomorphic) mapping $f$ from $D$ onto the open unit disk $U = \{z \in C : |z| < 1\}$ (Krantz, 1999, Section 6.4.3, p.

Let $A^*_{n\zeta} = \{f \in H(U \times \bar{U}),\ f(z,\zeta) = z + a_{n+1}(\zeta) z^{n+1} + \cdots,\ z \in U,\ \zeta \in \bar{U}\}$, with $A^*_{1\zeta} = A^*_\zeta$, where the $a_k(\zeta)$ are holomorphic functions in $\bar{U}$ for $k \ge 2$, and

with holomorphic coefficients $a_k(t, z)$ on some domain $D \subset C$ with respect to $t$ and near the origin in $C$ with respect to $z$.
https://planetmath.org/polynomialfunctionalcalculus
# polynomial functional calculus

Let $\mathcal{A}$ be a unital associative algebra over $\mathbb{C}$ with identity element $e$ and let $a\in\mathcal{A}$.

The polynomial functional calculus is the most basic form of a functional calculus. It allows the expression

 $\displaystyle p(a)$

to make sense as an element of $\mathcal{A}$, for any polynomial $p:\mathbb{C}\longrightarrow\mathbb{C}$.

This is achieved in the following natural way: for any polynomial $p(\lambda):=\sum c_{n}\,\lambda^{n}$ we define the element $p(a):=\sum c_{n}\,a^{n}\in\mathcal{A}$.

## 1 Definition

Recall that the set of polynomial functions in $\mathbb{C}$, denoted by $\mathbb{C}[\lambda]$, is an associative algebra over $\mathbb{C}$ under pointwise operations and is generated by the constant polynomial $1$ and the variable $\lambda$ (corresponding to the identity function in $\mathbb{C}$).

Consider the algebra homomorphism $\pi:\mathbb{C}[\lambda]\longrightarrow\mathcal{A}$ such that $\pi(1)=e$ and $\pi(\lambda)=a$. This homomorphism is denoted by

 $\displaystyle p\longmapsto p(a)$

and it is called the polynomial functional calculus for $a$.

It is clear that for any polynomial $p(\lambda):=\sum c_{n}\lambda^{n}$ we have $p(a)=\sum c_{n}\,a^{n}$.

## 2 Spectral Properties

We will denote by $\sigma(x)$ the spectrum (http://planetmath.org/Spectrum) of an element $x\in\mathcal{A}$.

Let $\mathcal{A}$ be a unital associative algebra over $\mathbb{C}$ and $a$ an element in $\mathcal{A}$. For any polynomial $p$ we have that

 $\displaystyle\sigma(p(a))=p(\sigma(a))$

Proof: Let us first prove that $\sigma(p(a))\subseteq p(\sigma(a))$. Suppose $\widetilde{\lambda}\in\sigma(p(a))$, which means that $p(a)-\widetilde{\lambda}e$ is not invertible.
Now consider the polynomial in $\mathbb{C}$ given by $q:=p-\widetilde{\lambda}$. It is clear that $q(a)=p(a)-\widetilde{\lambda}e$, and therefore $q(a)$ is not invertible. Since $\mathbb{C}$ is algebraically closed (http://planetmath.org/FundamentalTheoremOfAlgebra), we have that

 $\displaystyle q(\lambda)=(\lambda-\lambda_{1})^{n_{1}}\cdots(\lambda-\lambda_{k})^{n_{k}}$

for some $\lambda_{1},\dots,\lambda_{k}\in\mathbb{C}$ and $n_{1},\dots,n_{k}\in\mathbb{N}$. Thus, we can also write a similar product for $q(a)$ as

 $\displaystyle q(a)=(a-\lambda_{1}e)^{n_{1}}\cdots(a-\lambda_{k}e)^{n_{k}}$

Now, since $q(a)$ is not invertible we must have that at least one of the factors $(a-\lambda_{i}e)$ is not invertible, which means that for that particular $\lambda_{i}$ we have $\lambda_{i}\in\sigma(a)$. But we also have that $q(\lambda_{i})=0$, i.e. $p(\lambda_{i})=\widetilde{\lambda}$, and hence $\widetilde{\lambda}\in p(\sigma(a))$.

We now prove the inclusion $\sigma(p(a))\supseteq p(\sigma(a))$. Suppose $\widetilde{\lambda}\in p(\sigma(a))$, which means that $\widetilde{\lambda}=p(\lambda_{0})$ for some $\lambda_{0}\in\sigma(a)$. The polynomial $p-\widetilde{\lambda}$ has a zero at $\lambda_{0}$, hence there is a polynomial $d$ such that

 $\displaystyle p(\lambda)-\widetilde{\lambda}=d(\lambda)(\lambda-\lambda_{0})\,,\qquad\qquad\lambda\in\mathbb{C}$

Thus, we can also write a similar factorization for $p(a)-\widetilde{\lambda}e$ as

 $\displaystyle p(a)-\widetilde{\lambda}e=d(a)(a-\lambda_{0}e)$

If $p(a)-\widetilde{\lambda}e$ were invertible, then we would see that $a-\lambda_{0}e$ had a left (http://planetmath.org/InversesInRings) and a right inverse (http://planetmath.org/InversesInRings), thus being invertible.
But we know that $\lambda_{0}\in\sigma(a)$, hence we conclude that $p(a)-\widetilde{\lambda}e$ cannot be invertible, i.e. $\widetilde{\lambda}\in\sigma(p(a))$. $\square$

Title: polynomial functional calculus. Canonical name: PolynomialFunctionalCalculus. Date: 2013-03-22. Author: asteroid (17536). MSC classification: 46H30, 47A60. Related: FunctionalCalculus, ContinuousFunctionalCalculus2, BorelFunctionalCalculus. Synonym: polynomial spectral mapping theorem.
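The spectral mapping theorem above can be sanity-checked numerically in the algebra of complex matrices (an illustrative sketch with numpy; the matrix and polynomial below are arbitrary choices, not taken from the original entry):

```python
import numpy as np

# p(x) = x^2 - 3x + 2 applied to a matrix A via the polynomial functional
# calculus; the spectral mapping theorem predicts sigma(p(A)) = p(sigma(A)).
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])              # triangular, so sigma(A) = {2, 5}

pA = A @ A - 3 * A + 2 * np.eye(2)      # p(A) in the matrix algebra

spec_pA = sorted(np.linalg.eigvals(pA).real)
p_of_spec = sorted(lam**2 - 3 * lam + 2 for lam in np.linalg.eigvals(A).real)

assert np.allclose(spec_pA, p_of_spec)  # {p(2), p(5)} = {0, 12}
```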
https://physics.stackexchange.com/questions/446164/how-does-entropy-exactly-relate-to-the-heat-flow-of-a-system
# How does entropy exactly relate to the heat flow of a system

Entropy is this awesome concept with many faces.

From a classical mechanics point of view, it would represent all the possible properties of a physical system, which are supposed to be unknown.

From a quantum mechanics point of view, it would represent all the possible microstates of a system.

To my knowledge, I assume that the first is a simpler approach than the second, and the second approach contains the first in it.

So, there would be one basic definition of entropy for both together:

Entropy: the number of possible microstates a system can have

This is pretty understandable; also I can see that if it is a made-up concept, it can be as abstract as we want, because yeah, words may go beyond reality as much as we want them to.

But doubts come into the game when we start measuring entropy.

In thermodynamics, we will say that this measure equals the sum of all the infinitely small quotients of transferred heat by the temperature of the system, in a reversible process.

$$\Delta S_{12}=\int_1^2 \left(\frac{\delta Q}{T}\right)_{rev}$$

But how does that relate to the word definition of entropy? How is that quotient of heat divided by temperature related to the microstates?

And, if we cannot speak about stuff having heat, but only about energy flowing in the form of heat, and entropy being a heat-related concept, why do we speak about stuff having entropy?

• It is always a good idea to have a look at previous similar questions and related answers. In particular, "How does the statistical definition of entropy reduce to heat engine entropy?" physics.stackexchange.com/questions/301477/… and its answer, should work for your question too. – GiorgioP Dec 9 '18 at 17:00
• @GiorgioP Which takes us to this ( physics.meta.stackexchange.com/questions/10898/… ).
It "could work" but a) I did not find that question when searching with my expected keywords and b) Chester's answer goes to the semantic, conceptual approach that I was actually looking for, while your suggested post takes an advanced mathematical approach that is only partially related to my question. I understand and appreciate your intention, but in this case I think this new Q&A post makes this a richer community now. All the best. – Alvaro Franz Dec 9 '18 at 17:23
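One concrete way to connect the Clausius integral to microstate counting (a minimal numerical sketch added for illustration, with a reversible isothermal ideal-gas expansion chosen purely as an example): along the quasi-static path, $\int \delta Q / T$ evaluates to $nR\ln(V_2/V_1)$, which is exactly the change in $k \ln \Omega$ if the number of accessible microstates scales like $V^N$.

```python
import math

# Reversible isothermal expansion of n moles of ideal gas from V1 to V2.
# Clausius route:  dQ = p dV = nRT dV/V, so Delta S = integral of dQ/T.
# Boltzmann route: S = k ln(Omega) with Omega ~ V^N, so
#                  Delta S = N k ln(V2/V1).
R, N_A = 8.314462618, 6.02214076e23      # J/(mol K), 1/mol
k_B = R / N_A
n, T, V1, V2 = 1.0, 300.0, 1.0, 2.0      # illustrative values

# Numerically integrate dQ/T over the quasi-static path (midpoint rule).
steps = 100_000
dV = (V2 - V1) / steps
dS_clausius = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV / T
                  for i in range(steps))

# Microstate count: Delta S = k ln(Omega2/Omega1) = N k ln(V2/V1).
dS_boltzmann = n * N_A * k_B * math.log(V2 / V1)

assert abs(dS_clausius - dS_boltzmann) < 1e-3   # both ~ 5.76 J/K
```

The two routes agree because the heat absorbed in a reversible process is precisely what lets the system spread over more microstates; this is one face of the bridge the question asks about.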
https://www.varsitytutors.com/psat_math-help/plane-geometry/geometry/quadrilaterals
[ "## Example Questions\n\n← Previous 1 3 4 5 6\n\n### Example Question #1 : How To Find The Perimeter Of A Rhombus\n\nIf the area of a rhombus is 24 and one diagonal length is 6, find the perimeter of the rhombus.\n\n8\n\n12\n\n20\n\n16\n\n24\n\n20\n\nExplanation:\n\nThe area of a rhombus is found by\n\nA = 1/2(d1)(d2)\n\nwhere d1 and d2 are the lengths of the diagonals.  Substituting for the given values yields\n\n24 = 1/2(d1)(6)\n\n24 = 3(d1)\n\n8 = d1\n\nNow, use the facts that diagonals are perpendicular in a rhombus, diagonals bisect each other in a rhombus, and the Pythagorean Theorem to determine that the two diagonals form 4 right triangles with leg lengths of 3 and 4.  Since 32 + 42 = 52, each side length is 5, so the perimeter is 5(4) = 20.\n\n### Example Question #1 : Quadrilaterals", null, "Note: Figure NOT drawn to scale.\n\nCalculate the perimeter of Quadrilateral", null, "in the above diagram if:", null, "", null, "", null, "", null, "", null, "", null, "Insufficient information is given to answer the question.", null, "", null, "Explanation:", null, ", so Quadrilateral", null, "is a rhombus. Its diagonals are therefore perpendicular to each other, and the four triangles they form are right triangles. Therefore, the Pythagorean theorem can be used to determine the common sidelength of Quadrilateral", null, ".\n\nWe focus on", null, ". The diagonals are also each other's bisector, so", null, "", null, "By the Pythagorean Theorem,", null, "26 is the common length of the four sides of Quadrilateral", null, ", so its perimeter is", null, ".\n\n### Example Question #242 : Geometry\n\nA rhombus has a side length of 5. Which of the following is NOT a possible value for its area?\n\n24\n\n25\n\n10\n\n30\n\n15\n\n30\n\nExplanation:\n\nThe area of a rhombus will vary as the angles made by its sides change. The \"flatter\" the rhombus is (with two very small angles and two very large angles, say 2, 178, 2, and 178 degrees), the smaller the area is. 
There is, of course, a lower bound of zero for the area, but the area can get arbitrarily small. This implies that the correct answer would be the largest choice. In fact, the largest area of a rhombus occurs when all four angles are equal, i.e. when the rhombus is a square. The area of a square of side length 5 is 25, so any value bigger than 25 is impossible to acheive.\n\n### Example Question #2 : Quadrilaterals\n\nIn Rhombus", null, "", null, ". If", null, "is constructed, which of the following is true about", null, "?", null, "obtuse and scalene", null, "is obtuse and isosceles, but not equilateral", null, "is acute and equilateral", null, "is acute and isosceles, but not equilateral", null, "is acute and scalene", null, "is acute and equilateral\n\nExplanation:\n\nThe figure referenced is below.", null, "Consecutive angles of a rhombus are supplementary - as they are with all parallelograms - so", null, "A diagonal of a rhombus bisects its angles, so", null, "A similar argument proves that", null, ".\n\nSince all three angles of", null, "measure", null, ", the triangle is acute. It is also equiangular, and, subsequently, equilateral.\n\n### Example Question #1 : How To Find An Angle In A Rhombus\n\nIn Rhombus", null, "", null, ". If", null, "is constructed, which of the following is true about", null, "?", null, "is right and scalene", null, "is acute and scalene", null, "is right and isosceles, but not equilateral", null, "is acute and isosceles, but not equilateral", null, "is acute and equilateral", null, "is acute and isosceles, but not equilateral\n\nExplanation:\n\nThe figure referenced is below.", null, "The sides of a rhombus are congruent by definition, so", null, ", making", null, "isosceles. 
It is not equilateral, since", null, ", and an equilateral triangle must have three", null, "angles.\n\nAlso, consecutive angles of a rhombus are supplementary - as they are with all parallelograms - so", null, "A diagonal of a rhombus bisects its angles, so", null, "Similarly,", null, "This makes", null, "acute.\n\nThe correct response is that", null, "is acute and isosceles, but not equilateral.\n\n### Example Question #1 : Other Quadrilaterals\n\nQuadrilateral ABCD contains four ninety-degree angles. Which of the following must be true?\n\nI. Quadrilateral ABCD is a rectangle.\n\nII. Quadrilateral ABCD is a rhombus.\n\nIII. Quadrilateral ABCD is a square.\n\nII only\n\nI and II only\n\nII and III only\n\nI, II, and III\n\nI only\n\nI only\n\nExplanation:\n\nQuadrilateral ABCD has four ninety-degree angles, which means that it has four right angles because every right angle measures ninety degrees. If a quadrilateral has four right angles, then it must be a rectangle by the definition of a rectangle. This means statement I is definitely true.\n\nHowever, just because ABCD has four right angles doesn't mean that it is a rhombus. In order for a quadrilateral to be considered a rhombus, it must have four congruent sides. It's possible to have a rectangle whose sides are not all congruent. For example, if a rectangle has a width of 4 meters and a length of 8 meters, then not all of the sides of the rectangle would be congruent. In fact, in a rectangle, only opposite sides need be congruent. This means that ABCD is not necessarily a rhombus, and statement II does not have to be true.\n\nA square is defined as a rhombus with four right angles. In a square, all of the sides must be congruent. In other words, a square is both a rectangle and a rhombus. However, we already established that ABCD doesn't have to be a rhombus. This means that ABCD need not be a square, because, as we said previously, not all of its sides must be congruent. 
Therefore, statement III isn't necessarily true either.\n\nThe only statement that has to be true is statement I.\n\n### Example Question #9 : How To Find The Area Of A Trapezoid\n\nA trapezoid has a base of length 4, another base of length s, and a height of length s. A square has sides of length s. What is the value of s such that the area of the trapezoid and the area of the square are equal?", null, "", null, "", null, "", null, "", null, "", null, "Explanation:\n\nIn general, the formula for the area of a trapezoid is (1/2)(a + b)(h), where a and b are the lengths of the bases, and h is the length of the height. Thus, we can write the area for the trapezoid given in the problem as follows:\n\narea of trapezoid = (1/2)(4 + s)(s)\n\nSimilarly, the area of a square with sides of length a is given by a2. Thus, the area of the square given in the problem is s2.\n\nWe now can set the area of the trapezoid equal to the area of the square and solve for s.\n\n(1/2)(4 + s)(s) = s2\n\nMultiply both sides by 2 to eliminate the 1/2.\n\n(4 + s)(s) = 2s2\n\nDistribute the s on the left.\n\n4s + s2 = 2s2\n\nSubtract s2 from both sides.\n\n4s = s2\n\nBecause s must be a positive number, we can divide both sides by s.\n\n4 = s\n\nThis means the value of s must be 4.\n\n### Example Question #1 : How To Find The Area Of A Trapezoid", null, "Note: Figure NOT drawn to scale.\n\nThe white region in the above diagram is a trapezoid. 
What percent of the above rectangle, rounded to the nearest whole percent, is blue?", null, "", null, "", null, "", null, "", null, "", null, "Explanation:\n\nThe area of the entire rectangle is the product of its length and width, or", null, ".\n\nThe area of the white trapezoid is one half the product of its height and the sum of its base lengths, or", null, "Therefore, the blue polygon has area", null, "This is", null, "of the rectangle.\n\nRounded, this is 70%.\n\n### Example Question #2 : How To Find The Area Of A Trapezoid", null, "Refer to the above diagram.", null, ".\n\nGive the area of Quadrilateral", null, ".", null, "", null, "", null, "", null, "", null, "", null, "Explanation:", null, ", since both are right; by the Corresponding Angles Theorem,", null, ", and Quadrilateral", null, "is a trapezoid.\n\nBy the Angle-Angle Similarity Postulate, since", null, "and", null, "(by reflexivity),", null, "and since corresponding sides of similar triangles are in proportion,", null, "", null, "", null, "", null, ", the larger base of the trapozoid;\n\nThe smaller base is", null, ".", null, ", the height of the trapezoid.\n\nThe area of the trapezoid is", null, "", null, "", null, "### Example Question #5 : Quadrilaterals\n\nA circle with a radius 2 in is inscribed in a square. What is the perimeter of the square?\n\n32 in\n\n16 in\n\n24 in\n\n12 in\n\n28 in", null, "" ]
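A compact check of the first rhombus problem above (an editorial addition restating the explanation's steps):

```latex
A = \tfrac{1}{2} d_1 d_2 \;\Longrightarrow\; 24 = \tfrac{1}{2} d_1 (6) \;\Longrightarrow\; d_1 = 8
```

The perpendicular, mutually bisecting diagonals cut the rhombus into four right triangles with legs $d_1/2 = 4$ and $d_2/2 = 3$, so each side and the perimeter are

```latex
s = \sqrt{3^2 + 4^2} = 5, \qquad P = 4s = 20.
```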
[ null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/problem_question_image/image/3773/Rhombus.jpg", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221692/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225249/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225250/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221695/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225216/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225218/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225217/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225215/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225215/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221696/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221697/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221698/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221699/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225251/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225252/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225253/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/221703/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225254/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224248/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225859/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225860/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225861/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225858/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225856/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225854/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225855/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225857/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225854/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/problem_question_image/image/3929/Rhombus.jpg", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225862/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225863/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/230438/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225865/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225866/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224248/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225849/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224250/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224247/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224245/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224245/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224244/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224245/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224247/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224245/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/problem_question_image/image/3926/Rhombus.jpg", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224251/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224252/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225850/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225834/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225851/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225852/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225853/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224252/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/224245/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177322/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177323/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177321/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177324/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177320/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/177323/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/problem_question_image/image/3446/Rectangle_3.jpg", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/208166/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/218305/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/217147/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/217148/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/208165/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/218305/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/208170/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/217149/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/217150/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/217151/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/problem_question_image/image/3918/Thingy_3.jpg", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225569/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225570/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225551/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225549/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225552/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225548/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225550/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225548/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225571/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225572/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225573/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225574/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225575/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225576/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225577/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225578/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225579/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225580/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225581/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225582/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225583/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225584/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/225585/gif.latex", null, "https://vt-vtwa-app-assets.varsitytutors.com/assets/problems/og_image_practice_problems-9cd7cd1b01009043c4576617bc620d0d5f9d58294f59b6d6556fd8365f7440cf.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90978575,"math_prob":0.9884433,"size":7584,"snap":"2021-43-2021-49","text_gpt3_token_len":2004,"char_repetition_ratio":0.15488127,"word_repetition_ratio":0.10407876,"special_character_ratio":0.24353904,"punctuation_ratio":0.12262958,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992729,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194],"im_url_duplicate_count":[null,null,null,null,null,9,null,9,null,9,null,8,null,8,null,8,null,null,null,null,null,9,null,9,null,null,null,null,null,9,null,9,null,9,null,null,null,9,null,null,null,9,null,9,null,9,null,8,null,8,null,null,null,8,null,8,null,null,null,9,null,9,null,9,null,9,null,9,null,9,null,null,null,10,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,10,null,null,null,null,null,10,null,null,null,10,null,10,null,10,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,8,null,null,null,8,null,8,null,8,null,null,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,null,null,8,null,null,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T06:39:06Z\",\"WARC-Record-ID\":\"<urn:uuid:c742caf1-a948-4035-90ad-
0d291777fa24>\",\"Content-Length\":\"234127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bed19004-a6ba-48fc-b680-972de1445dd4>\",\"WARC-Concurrent-To\":\"<urn:uuid:510c6a1f-15be-417a-9370-58658e0c28ef>\",\"WARC-IP-Address\":\"18.67.65.37\",\"WARC-Target-URI\":\"https://www.varsitytutors.com/psat_math-help/plane-geometry/geometry/quadrilaterals\",\"WARC-Payload-Digest\":\"sha1:BOWNG64ARA7FWUR25FLP3KFGT4MI2KG3\",\"WARC-Block-Digest\":\"sha1:VMQG4LT4ND73CHTFWQ2P5RK27UGCJPMY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358688.35_warc_CC-MAIN-20211129044311-20211129074311-00217.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/171602/reducing-expression-to-simplest-form-involving-variables-in-radicals
[ "# Reducing expression to simplest form involving variables in radicals\n\nIn my notebook, I obtain an expression that I can't figure out how to reduce to it's simplest form. I've recreated it below to show what I am trying to simplify.\n\nIratio2to1 = -(L1/Sqrt[L1 L2])\nL2=(N2/N1)^2*L1\nAssuming[{L1, L2} > 0 && {L1, L2} \\[Element] Reals,Cancel[Iratio2to1]]\n\n\nthis returns:\n\n$$-\\frac{L1}{4\\sqrt{L1^2}}$$\n\nI cannot figure out how to have Mathematica reduce this to $-\\frac{1}{4}$.\n\nI've tried Simplify, FullSimplify, Cancel, now Assuming... I dont' get it!\n\n• Since Greater does not have the attribute Listable then {L1, L2} > 0 does not do what you expect. Use either L1 > 0 && L2 > 0 or And@@Thread[{L1,L2} > 0]. Cancel does not use the option Assumptions so the Assuming has no effect. Use Simplify or FullSimplify since they use the option Assumptions. Any variable used in an inequality (e.g., Greater) is assumed real so the {L1, L2} \\[Element] Reals is redundant. – Bob Hanlon Apr 20 '18 at 21:35\n• Thank you for your help. – jrive Apr 21 '18 at 11:56\n\nBecause it's equal to:\n\n$$-\\mathrm{sgn}(L1)\\frac{1}{4}$$\n\nYou can approach in multiple ways:\n\nSimplify for real L1:\n\nFullSimplify[Iratio2to1, Element[L1, Reals]]\n(* -(Sign[L1]/4) *)\n\n\nOr for positive L1:\n\nFullSimplify[Iratio2to1, L1 > 0]\n(* -(1/4) *)\n\n\nBut for complex L1 it won't cancel:\n\nFullSimplify[Iratio2to1, Element[L1, Complexes]]\n(* -(L1/(4 Sqrt[L1^2])) *)\n\n• Thank you for the detailed explanation. – jrive Apr 21 '18 at 11:57\n\nNot sure why your Assuming doesn't work, but this works:\n\n\\$Assumptions = L1 > 0\n\nSimplify[-L1/(4 Sqrt[L1^2])]\n(* -(1/4) *)" ]
https://denniskubes.com/2012/08/14/do-you-know-what-p-does-in-c/
[ "When first learning C pointers there is one thing I wish had been better explained; operator precedence vs order of operations.\n\nThe above example prints out 1 2 3. Code like *p++ is a common sight in C so it is important to know what it does. The int pointer p starts out pointing to the first address of myarray, &myarray. On each pass through the loop the p pointer address is incremented, moves up one index in the array, but the previous p unincremented address (index) is dereferenced and assigned to the out variable. This happens until it hits the fourth element in the array, 0 at which point the while loop stops. But what does *p++ do? And how does it move from one element in the array to the next.\n\nIn terms of operator precedence the postfix operator (++) binds tighter than the dereference operator (*). If we were reading this wrong we might think that we are incrementing the value pointed to by our p int pointer. But what is actually happening is four separate operations and a clash between operator precedence and order of operations.\n\nFirst x is set to 0. Then x is incremented by the postfix operator but y is assigned the old x value of 0. As the print shows y = 0 and x = 1. The postfix operator is actually a shorthand for 4 different steps. It first makes a copy of x in memory, then increments the copy of x’s value by one, then it assigns the incremented value back to the original x address, and finally returns the old value of x. If we were using the prefix operator ++x instead of the postfix operator x++ then the newly incremented value of x would have been assigned to y and y would have printed out 1.\n\nThe shortcut postfix code is the same as if we did this.\n\nThink of order of operations like driving a car home from work.  There are multiple steps and some steps have to happen before others.  First you pull out of the parking lot, then drive down street A, then street B, and so on until you get home. 
You can’t pull into your driveway before you pull out of the work parking lot. It doesn’t work that way. In the above example the x + 1 has to happen before the x = . The operations have to go in that order for the program to even work. Where people get confused is thinking about the ++ as an operator instead of as a shortcut for multiple operations.\n\nThe p++ in our first example acts the same way as the x code above. The * vs ++ in *p++ is a question of which to do first. Do we do all of the operations of ++ first or the dereference first? The postfix operator (++) binds tighter so we do all of the ++ operations first, then we do the dereference (*). This is order of operations vs operator precendence.\n\nThe postfix and prefix operators always follow four steps.\n\n1. Make a copy of the variable\n2. Increment the copy of the variable\n3. Assign the incremented copy back to the original variable\n4. Prefix ++x returns the incremented variable, Postfix x++ returns the unincremented variable\n\nWith an expression like *p++ there is an implicit return from p++ to the dereference operator.\n\nA fully expanded version of *p++ in code might look like this. The p pointer is assigned to x, then p is incremented. But x is still pointing at the original unincremented p address.\n\nNow that we know ++ is multiple operations and is actually about an order of operations, let’s play around with operator precendece. What happens if we use parentheses around the p++?\n\nHere we are performing the postfix operations first. This increments the p pointer address but returns the original unincremented pointer address, which is then dereferenced. This is the same thing that happened previously. *(p++) is equal to *p++ even though parentheses make the code more explicit to read.\n\nIf we have the parentheses around (*p) first what happens?\n\nHere the parentheses cause the pointer to be dereferenced first. Then the value pointed at is incremented and reassigned. 
As the output shows we never move to the next pointer address, we just keep incrementing the myarray value.\n\nThis will print out 2 and 3 but not 1. Why? Because here we are using the prefix operator which does the increment and returns the incremented pointer address, not the original unincremented address. Still order of operations, just a different operation. Then the dereference happens on the incremented pointer address. If we used parentheses for *(++p) nothing would change.\n\nWhat about if we changed the order and used parentheses?\n\nNow we get an entirely different result, it prints out 2 3 4 5. Why? The parentheses (*p) says first dereference the p pointer to get its value. Then perform the prefix ++ to increment and reassign the dereferenced value. Then return the incremented value to the out variable. The first derefereced value is the int at myarray = 1, that get incremented to 2 and reassigned back into &myarray. On the next pass of the loop the same int get incremented to 3, 4 and so on. And as the output shows we never move to the next pointer address.\n\nGetting this understanding of order of operations vs operator precedence makes understanding pointers in C much easier. Thinking about postfix and prefix operators in terms of all of their steps makes understanding common C idioms more clear." ]
https://kernel.googlesource.com/pub/scm/linux/kernel/git/tip/tip/+/WIP.sched/core/lib/checksum.c
[ "blob: d3ec93f9e5f3e61900c8268c883010433a08f567
/*
 * INET		An implementation of the TCP/IP protocol suite for the LINUX
 *		operating system.  INET is implemented using the BSD Socket
 *		interface as the means of communication with the user level.
 *
 *		IP/TCP/UDP checksumming routines
 *
 * Authors:	Jorge Cwik,
 *		Arnt Gulbrandsen,
 *		Tom May,
 *		Andreas Schwab,
 *		Lots of code moved from tcp.c and ip.c; see those files
 *		for more names.
 *
 * 03/02/96	Jes Sorensen, Andreas Schwab, Roman Hodek:
 *		Fixed some nasty bugs, causing some horrible crashes.
 *		A: At some points, the sum (%0) was used as
 *		length-counter instead of the length counter
 *		(%1). Thanks to Roman Hodek for pointing this out.
 *		B: GCC seems to mess up if one uses too many
 *		data-registers to hold input values and one tries to
 *		specify d0 and d1 as scratch registers. Letting gcc
 *		choose these registers itself solves the problem.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

/* Revised by Kenneth Albanowski for m68knommu. Basic problem: unaligned access
   kills, so most of the assembly has to go. */

#include
#include
#include

#ifndef do_csum
static inline unsigned short from32to16(unsigned int x)
{
	/* add up 16-bit and 16-bit for 16+c bit */
	x = (x & 0xffff) + (x >> 16);
	/* add up carry.. */
	x = (x & 0xffff) + (x >> 16);
	return x;
}

static unsigned int do_csum(const unsigned char *buff, int len)
{
	int odd;
	unsigned int result = 0;

	if (len <= 0)
		goto out;
	odd = 1 & (unsigned long) buff;
	if (odd) {
#ifdef __LITTLE_ENDIAN
		result += (*buff << 8);
#else
		result = *buff;
#endif
		len--;
		buff++;
	}
	if (len >= 2) {
		if (2 & (unsigned long) buff) {
			result += *(unsigned short *) buff;
			len -= 2;
			buff += 2;
		}
		if (len >= 4) {
			const unsigned char *end = buff + ((unsigned)len & ~3);
			unsigned int carry = 0;
			do {
				unsigned int w = *(unsigned int *) buff;
				buff += 4;
				result += carry;
				result += w;
				carry = (w > result);
			} while (buff < end);
			result += carry;
			result = (result & 0xffff) + (result >> 16);
		}
		if (len & 2) {
			result += *(unsigned short *) buff;
			buff += 2;
		}
	}
	if (len & 1)
#ifdef __LITTLE_ENDIAN
		result += *buff;
#else
		result += (*buff << 8);
#endif
	result = from32to16(result);
	if (odd)
		result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
out:
	return result;
}
#endif

#ifndef ip_fast_csum
/*
 * This is a version of ip_compute_csum() optimized for IP headers,
 * which always checksum on 4 octet boundaries.
 */
__sum16 ip_fast_csum(const void *iph, unsigned int ihl)
{
	return (__force __sum16)~do_csum(iph, ihl*4);
}
EXPORT_SYMBOL(ip_fast_csum);
#endif

/*
 * computes the checksum of a memory block at buff, length len,
 * and adds in \"sum\" (32-bit)
 *
 * returns a 32-bit number suitable for feeding into itself
 * or csum_tcpudp_magic
 *
 * this function must be called with even lengths, except
 * for the last fragment, which may be odd
 *
 * it's best to have buff aligned on a 32-bit boundary
 */
__wsum csum_partial(const void *buff, int len, __wsum wsum)
{
	unsigned int sum = (__force unsigned int)wsum;
	unsigned int result = do_csum(buff, len);

	/* add in old sum, and carry.. */
	result += sum;
	if (sum > result)
		result += 1;
	return (__force __wsum)result;
}
EXPORT_SYMBOL(csum_partial);

/*
 * this routine is used for miscellaneous IP-like checksums, mainly
 * in icmp.c
 */
__sum16 ip_compute_csum(const void *buff, int len)
{
	return (__force __sum16)~do_csum(buff, len);
}
EXPORT_SYMBOL(ip_compute_csum);

/*
 * copy from fs while checksumming, otherwise like csum_partial
 */
__wsum csum_partial_copy_from_user(const void __user *src, void *dst, int len,
				   __wsum sum, int *csum_err)
{
	int missing;

	missing = __copy_from_user(dst, src, len);
	if (missing) {
		memset(dst + len - missing, 0, missing);
		*csum_err = -EFAULT;
	} else
		*csum_err = 0;

	return csum_partial(dst, len, sum);
}
EXPORT_SYMBOL(csum_partial_copy_from_user);

/*
 * copy from ds while checksumming, otherwise like csum_partial
 */
__wsum csum_partial_copy(const void *src, void *dst, int len, __wsum sum)
{
	memcpy(dst, src, len);
	return csum_partial(dst, len, sum);
}
EXPORT_SYMBOL(csum_partial_copy);

#ifndef csum_tcpudp_nofold
static inline u32 from64to32(u64 x)
{
	/* add up 32-bit and 32-bit for 32+c bit */
	x = (x & 0xffffffff) + (x >> 32);
	/* add up carry.. */
	x = (x & 0xffffffff) + (x >> 32);
	return (u32)x;
}

__wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
			  __u32 len, __u8 proto, __wsum sum)
{
	unsigned long long s = (__force u32)sum;

	s += (__force u32)saddr;
	s += (__force u32)daddr;
#ifdef __BIG_ENDIAN
	s += proto + len;
#else
	s += (proto + len) << 8;
#endif
	return (__force __wsum)from64to32(s);
}
EXPORT_SYMBOL(csum_tcpudp_nofold);
#endif" ]
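The fold step in from32to16 above is the core of the one's-complement Internet checksum (RFC 1071). As a user-space sketch of the same technique, here is a minimal big-endian 16-bit checksum; the function names are my own, and the loop assumes an even-length buffer for simplicity:

```c
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit accumulator into 16 bits, adding the carries back in,
 * the same trick as the kernel's from32to16(). */
static uint16_t fold32to16(uint32_t x)
{
    x = (x & 0xffff) + (x >> 16);
    x = (x & 0xffff) + (x >> 16);
    return (uint16_t)x;
}

/* One's-complement checksum over big-endian 16-bit words. */
static uint16_t csum16(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)(buf[i] << 8 | buf[i + 1]);
    return (uint16_t)~fold32to16(sum);
}
```

On the sample bytes from RFC 1071 (00 01 f2 03 f4 f5 f6 f7) the folded sum is 0xddf2, so the transmitted checksum is its complement, 0x220d.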
https://www.r-bloggers.com/2016/09/new-features-in-imager-0-30/
[ "imager is an R package for image processing, based on CImg. This new release brings many new features, including:\n\n• Support for automatic parallel processing using OpenMP.\n• A new S3 class, imlist, which makes it easy to work with image lists\n• New functions for interactively selecting image regions (grabRect,grabPoint,grabLine)\n• Experimental support for CImg’s byte-compiled DSL via imeval, patch_summary.\n• Improved plotting, API consistency, and documentation\n\nTo get started with imager, see the tutorial. Some of the new features are detailed below the fold.\n\nAnd now, for your viewing pleasure, the following piece of code downloads a random cat picture, and makes a video of a bouncing kitten:\n\n```library(rvest)\nlibrary(imager)\n#Run a search query (returning html content)\n#Grab all <img> tags, get their \"src\" attribute, a URL to an image\nurls <- search %>% html_nodes(\"img\") %>% html_attr(\"src\") #Get urls of parrot pictures\n\n#Load the first image, and resize\n\n#We'll use 30 frames\nt <- seq(0,1,l=30)\n\n#Equations of motion\nxt <- function(t) 250*t\nyt <- function(t) 400- 1100*abs(t-.5)\nalpha <- function(t) 1-1.8*abs(t-.5)\n\n#An empty frame for our cat\nim <- imfill(400,400,val=rep(0,3))\n\n#Let's make our video\nvid <- lapply(t,function(t) imdraw(im,sprite,x=xt(t),y=yt(t),opacity=alpha(t))) %>% imappend(\"z\")\n\nplay(vid,loop=TRUE,normalise=FALSE)\n\n```", null, "A new class for image lists\n\nImage lists (S3 class “imlist”) are simply lists of images, but they come with appropriate generics for plotting, converting, etc. For example, calling imhessian automatically produces an image list\n\n```imhessian(boats) %>% plot\nimhessian(boats) %>% display\n```", null, "There’s also a “map_il” function, inspired by the purrr package. 
It works essentially like “sapply” but returns an image list:\n\n```#View image at different blur levels\nmap_il(seq(1,14,l=5),~ isoblur(boats,.),.id=\"v\") %>% plot(layout=\"row\")\n```", null, "To make an image list, run “imlist” on a list of images:\n\n```list(a= imnoise(10,10),b= boats) %>% imlist\n```\n\nParallel processing\n\nIf possible, imager now enables CImg’s parallel processing features. “If possible” means you need a compiler that supports OpenMP, which includes recent versions of gcc and very recent versions of clang (>= 3.7.0). Many image processing primitives (filters, transformations, etc.) will run in parallel on CPU cores. See here for more on parallel computations.\n\nInteractive selection of image regions\n\nIt’s often useful to be able to select image regions for further processing. You can now do so interactively and easily using grabRect:\n\n```r = grabRect(boats)\n```\n\nHere “r” will contain the coordinates of the rectangle you selected. If you want the contents of the rectangle itself, run:\n\n```im = grabRect(boats,coord=FALSE)\n```\n\nExperimental support for CImg’s DSL\n\nCImg includes a byte-compiled mini-language that’s well-suited for simple non-linear filters and image generators. It’s documented here.\nHere’s a simple box filter you can try:\n\n```boxf <- \"v=0;for(iy=y-3,iy% plot\n```\n\nA more impressive example is due to David Tschumperlé (the creator of CImg): the Julia set\n\n```julia <- \"\nzr = -1.2 + 2.4*x/w;\nzi = -1.2 + 2.4*y/h;\nfor (iter = 0, zr^2+zi^2<=4 && iter<256, iter++, t = zr^2 - zi^2 + 0.5; (zi *= 2*zr) += 0.2; zr = t ); iter\" imfill(500,500) %>% imeval(julia) %>% plot\n```", null, "I’m still not completely sure how useful this is. On the one hand, R is pretty poor at computations that involve looping over every pixel, and CImg’s DSL is much better. On the other, there’s a lot of ways of getting around having to write pixel loops, and I explore some of them here. Feedback appreciated.", null, "", null, "" ]
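The truncated DSL snippet above is a box filter: for each pixel it averages a fixed neighbourhood. Written out as an explicit per-pixel loop in plain C (my own illustration of the idea, not CImg code; it assumes a 7×7 window with clamped borders, matching the `iy = y-3 … y+3` bounds visible in the snippet):

```c
/* 7x7 box filter over a single-channel image stored row-major in src;
 * coordinates are clamped at the borders; dst must hold w*h floats. */
static void box7(const float *src, float *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float v = 0.0f;

            for (int iy = y - 3; iy <= y + 3; iy++) {
                for (int ix = x - 3; ix <= x + 3; ix++) {
                    /* clamp the window to the image bounds */
                    int cx = ix < 0 ? 0 : (ix >= w ? w - 1 : ix);
                    int cy = iy < 0 ? 0 : (iy >= h ? h - 1 : iy);
                    v += src[cy * w + cx];
                }
            }
            dst[y * w + x] = v / 49.0f;  /* 49 = 7*7 samples */
        }
    }
}
```

This is exactly the kind of pixel loop that is slow in interpreted R but cheap in C or in CImg's byte-compiled DSL, which is the trade-off the post discusses.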
https://www.colorhexa.com/063b40
[ "# #063b40 Color Information\n\nIn a RGB color space, hex #063b40 is composed of 2.4% red, 23.1% green and 25.1% blue. Whereas in a CMYK color space, it is composed of 90.6% cyan, 7.8% magenta, 0% yellow and 74.9% black. It has a hue angle of 185.2 degrees, a saturation of 82.9% and a lightness of 13.7%. #063b40 color hex could be obtained by blending #0c7680 with #000000. Closest websafe color is: #003333.\n\n• R 2\n• G 23\n• B 25\nRGB color chart\n• C 91\n• M 8\n• Y 0\n• K 75\nCMYK color chart\n\n#063b40 color description : Very dark cyan.\n\n# #063b40 Color Conversion\n\nThe hexadecimal color #063b40 has RGB values of R:6, G:59, B:64 and CMYK values of C:0.91, M:0.08, Y:0, K:0.75. Its decimal value is 408384.\n\nHex triplet RGB Decimal 063b40 `#063b40` 6, 59, 64 `rgb(6,59,64)` 2.4, 23.1, 25.1 `rgb(2.4%,23.1%,25.1%)` 91, 8, 0, 75 185.2°, 82.9, 13.7 `hsl(185.2,82.9%,13.7%)` 185.2°, 90.6, 25.1 003333 `#003333`\nCIE-LAB 22.076, -14.161, -7.822 2.564, 3.537, 5.398 0.223, 0.308, 3.537 22.076, 16.178, 208.915 22.076, -15.784, -7.195 18.806, -8.571, -3.854 00000110, 00111011, 01000000\n\n# Color Schemes with #063b40\n\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #400b06\n``#400b06` `rgb(64,11,6)``\nComplementary Color\n• #064028\n``#064028` `rgb(6,64,40)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #061e40\n``#061e40` `rgb(6,30,64)``\nAnalogous Color\n• #402806\n``#402806` `rgb(64,40,6)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #40061e\n``#40061e` `rgb(64,6,30)``\nSplit Complementary Color\n• #3b4006\n``#3b4006` `rgb(59,64,6)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #40063b\n``#40063b` `rgb(64,6,59)``\n• #06400b\n``#06400b` `rgb(6,64,11)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #40063b\n``#40063b` `rgb(64,6,59)``\n• #400b06\n``#400b06` `rgb(64,11,6)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #021011\n``#021011` `rgb(2,16,17)``\n• #042629\n``#042629` `rgb(4,38,41)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #085057\n``#085057` `rgb(8,80,87)``\n• #0a666f\n``#0a666f` 
`rgb(10,102,111)``\n• #0d7b86\n``#0d7b86` `rgb(13,123,134)``\nMonochromatic Color\n\n# Alternatives to #063b40\n\nBelow, you can see some colors close to #063b40. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #064037\n``#064037` `rgb(6,64,55)``\n• #06403b\n``#06403b` `rgb(6,64,59)``\n• #064040\n``#064040` `rgb(6,64,64)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #063640\n``#063640` `rgb(6,54,64)``\n• #063140\n``#063140` `rgb(6,49,64)``\n• #062d40\n``#062d40` `rgb(6,45,64)``\nSimilar Colors\n\n# #063b40 Preview\n\nText with hexadecimal color #063b40\n\nThis text has a font color of #063b40.\n\n``<span style=\"color:#063b40;\">Text here</span>``\n#063b40 background color\n\nThis paragraph has a background color of #063b40.\n\n``<p style=\"background-color:#063b40;\">Content here</p>``\n#063b40 border color\n\nThis element has a border color of #063b40.\n\n``<div style=\"border:1px solid #063b40;\">Content here</div>``\nCSS codes\n``.text {color:#063b40;}``\n``.background {background-color:#063b40;}``\n``.border {border:1px solid #063b40;}``\n\n# Shades and Tints of #063b40\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #01090a is the darkest color, while #f7fefe is the lightest one.\n\n• #01090a\n``#01090a` `rgb(1,9,10)``\n• #031a1c\n``#031a1c` `rgb(3,26,28)``\n• #042a2e\n``#042a2e` `rgb(4,42,46)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #084c52\n``#084c52` `rgb(8,76,82)``\n• #095c64\n``#095c64` `rgb(9,92,100)``\n• #0b6d76\n``#0b6d76` `rgb(11,109,118)``\n• #0d7d88\n``#0d7d88` `rgb(13,125,136)``\n• #0e8e9a\n``#0e8e9a` `rgb(14,142,154)``\n• #109eac\n``#109eac` `rgb(16,158,172)``\n• #12afbe\n``#12afbe` `rgb(18,175,190)``\n• #13bfcf\n``#13bfcf` `rgb(19,191,207)``\n• #15d0e1\n``#15d0e1` `rgb(21,208,225)``\n• #20d9ea\n``#20d9ea` `rgb(32,217,234)``\n• #32dcec\n``#32dcec` `rgb(50,220,236)``\n• #44dfed\n``#44dfed` `rgb(68,223,237)``\n• #56e2ef\n``#56e2ef` `rgb(86,226,239)``\n• #68e5f1\n``#68e5f1` `rgb(104,229,241)``\n• #7ae8f3\n``#7ae8f3` `rgb(122,232,243)``\n• #8cebf4\n``#8cebf4` `rgb(140,235,244)``\n• #9eeef6\n``#9eeef6` `rgb(158,238,246)``\n• #b0f1f8\n``#b0f1f8` `rgb(176,241,248)``\n• #c1f4f9\n``#c1f4f9` `rgb(193,244,249)``\n• #d3f8fb\n``#d3f8fb` `rgb(211,248,251)``\n• #e5fbfd\n``#e5fbfd` `rgb(229,251,253)``\n• #f7fefe\n``#f7fefe` `rgb(247,254,254)``\nTint Color Variation\n\n# Tones of #063b40\n\nA tone is produced by adding gray to any pure hue. 
In this case, #212525 is the less saturated color, while #013f45 is the most saturated one.\n\n• #212525\n``#212525` `rgb(33,37,37)``\n• #1e2728\n``#1e2728` `rgb(30,39,40)``\n• #1c292a\n``#1c292a` `rgb(28,41,42)``\n• #192b2d\n``#192b2d` `rgb(25,43,45)``\n• #162e30\n``#162e30` `rgb(22,46,48)``\n• #133033\n``#133033` `rgb(19,48,51)``\n• #113235\n``#113235` `rgb(17,50,53)``\n• #0e3438\n``#0e3438` `rgb(14,52,56)``\n• #0b373b\n``#0b373b` `rgb(11,55,59)``\n• #09393d\n``#09393d` `rgb(9,57,61)``\n• #063b40\n``#063b40` `rgb(6,59,64)``\n• #033d43\n``#033d43` `rgb(3,61,67)``\n• #013f45\n``#013f45` `rgb(1,63,69)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #063b40 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
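The page's shade and tint tables follow directly from its own definitions: a shade mixes the color toward black, a tint toward white. A minimal sketch of that arithmetic (helper names are my own, not from the page):

```c
/* Parse a "#rrggbb" string into 0..255 channel values. */
static void hex_to_rgb(const char *hex, int *r, int *g, int *b)
{
    unsigned v = 0;

    for (const char *p = hex + 1; *p; p++) {
        v <<= 4;
        v |= (*p >= '0' && *p <= '9') ? (unsigned)(*p - '0')
                                      : (unsigned)((*p | 32) - 'a' + 10);
    }
    *r = (v >> 16) & 0xff;
    *g = (v >> 8) & 0xff;
    *b = v & 0xff;
}

/* Mix one channel toward a target: target 0 gives shades,
 * target 255 gives tints; amount is 0.0 .. 1.0. */
static int mix(int c, int target, double amount)
{
    return (int)(c + (target - c) * amount + 0.5);
}
```

For example, hex_to_rgb on "#063b40" yields (6, 59, 64), matching the RGB values quoted at the top of the page, and mixing each channel halfway toward 0 produces the kind of darker step seen in the shade column.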
https://dfc-org-production.my.site.com/forums/ForumsMain?id=906F00000008wx8IAA
[ "", null, "diwoman\n\n# Calculating close Week based on opportunity close date\n\nI am trying to create a formula to calculate the Week in the Year that an opportunity will close, based on the Close Date.\n\nThere was a post earlier in April on the same subject, but it remains unsolved.  I have searched the help section and blogs, but cannot create the winning formula....any help is much appreciated!", null, "Best Answer chosen by Admin (Salesforce Developers)", null, "", null, "Buell\n\nAlready found an issue.  Here is an updated version of the formula.  Also, not sure what you are saying the issue is.  If you create a 'Number' formula field in the oppty and plug this in it will do the trick.\n\nFLOOR(((CASE( MONTH( CloseDate ), 1,0, 2,31, 3,60, 4,91, 5,121, 6,152, 7,182, 8,213, 9,244, 10,274, 11,305, 12,335, 0)) - IF( MONTH( CloseDate) >= 3, (IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 400) = 0, 0, IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 100) = 0, 1, IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 4) = 0, 0, 1)))), 0) + DAY( CloseDate )) / 7) + IF(MOD(((CASE( MONTH( CloseDate ), 1,0, 2,31, 3,60, 4,91, 5,121, 6,152, 7,182, 8,213, 9,244, 10,274, 11,305, 12,335, 0)) - IF( MONTH( CloseDate) >= 3, (IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 400) = 0, 0, IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 100) = 0, 1, IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 4) = 0, 0, 1)))), 0) + DAY( CloseDate )), 7) = 0, 0,1)", null, "Buell\nAre you just looking for a number, for example, week 32?", null, "diwoman\nYes, I just want a number, such as \"32\".  The field name will say \"Close Week\".", null, "Buell\n\nIt is ugly but ought to do the trick.  
Will even account for leap years.\n\nFLOOR(((CASE( MONTH( CloseDate ),\n1,0,\n2,31,\n3,60,\n4,91,\n5,121,\n6,152,\n7,182,\n8,213,\n9,244,\n10,274,\n11,305,\n12,335,\n0)) - IF( MONTH( CloseDate) >= 3, (IF( MOD( YEAR( DATEVALUE(CreatedDate) ), 400) = 0, 0,\nIF( MOD( YEAR( DATEVALUE(CreatedDate) ), 100) = 0, 1,\nIF( MOD( YEAR( DATEVALUE(CreatedDate) ), 4) = 0, 0,\n1)))), 0) + DAY( CloseDate )) / 7) + 1\n\nMessage Edited by Buell on 06-25-2009 11:10 AM", null, "diwoman\n\nGood grief, I was way off base with my attempts.\n\nMany thanks...the formula has been accepted, but it does not calculate a value in the close week field based on the close date.  I've updated the close date on an existing opportunity and I've created a new one, but the field is not populating the week number.\n\nI am looking for the same behavior of the Probability field and how it relates to the Stage field.", null, "Buell\n\n
Thank you...your effort is much appreciated!\n\nCheers.", null, "Buell\n\nCleaned it up a bit.\n\n((CloseDate - DATE(YEAR(CloseDate), 1, 1) + 1) / 7) + IF(MOD((CloseDate - DATE(YEAR(CloseDate), 1, 1) + 1), 7) = 0, 0, 1)\n\nMessage Edited by Buell on 06-26-2009 01:47 PM", null, "diwoman\nPerfect...thanks again!", null, "diwoman\nWe've been using this formula and it's working well until we set a close date past 2009.  We set one for 2/25/2010 and a close week of 62 was calculated.  I reverted to the code that was posted earlier and that is calculating properly.  Just thought I would mention it....thanks!", null, "jbrew\n\nI just found this posting and it is almost exactly what I need.  I have no idea how to read the formula itself but it works great for the most part.  Is there anything that I can add to it that takes into account a fiscal 52 week year?  For example, this year December 27 - 31 will end on week 53.  I want to make sure that these dates are accounted for as week 1 and not week 53.\n\nThanks so much for the formula that has already been built.  Has been a life saver for me!!!", null, "jbrew\nOh also, I just noticed that it seems that it is miscalculating Wednesdays.  It reverts each Wednesday back to the week prior.  So for this week it calculates all days at week 34 except for Wednesday which it calculates as 33.  Thank you so much to anyone who is a formula expert!", null, "Buell\nSo are you looking for the first week of next year to be counted as week 2 then?", null, "jbrew\nI'm looking for the first full week to be week 2.  So for my example for the end of this year, I would want December 27 - January 2 to be considered week 1, Week 2 would be January 3 - 9, etc.", null, "Buell\nAnd I'm guessing you want your weeks to start on Sundays?", null, "jbrew\nyes, that would be great", null, "Buell\n\nIt's not pretty but it works.\n\nCALCULATING WORK WEEK IN YEAR (with weeks running sun. - sat. 
and the 53rd week of the year counting as the 1st of the following year...)\n\nIF(\n((CloseDate - DATE(YEAR(CloseDate), 1, 1) + (CASE( MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7),\n0, + 1,\n6, + 7,\n2, + 3,\n4, + 5,\n5, + 6,\n1, + 2,\n+ 4))) / 7) + IF(MOD((CloseDate - DATE(YEAR(CloseDate), 1, 1) + (CASE( MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7),\n0, + 1,\n6, + 7,\n2, + 3,\n4, + 5,\n5, + 6,\n1, + 2,\n+ 4))), 7) = 0, 0, 1)\n\n>= 53, 1,\n\n((CloseDate - DATE(YEAR(CloseDate), 1, 1) + (CASE( MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7),\n0, + 1,\n6, + 7,\n2, + 3,\n4, + 5,\n5, + 6,\n1, + 2,\n+ 4))) / 7) + IF(MOD((CloseDate - DATE(YEAR(CloseDate), 1, 1) + (CASE( MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7),\n0, + 1,\n6, + 7,\n2, + 3,\n4, + 5,\n5, + 6,\n1, + 2,\n+ 4))), 7) = 0, 0, 1)\n)\n\nThis should take care of the wednesday's issue you were seeing too.\nMessage Edited by Buell on 08-20-2009 05:09 PM", null, "jbrew\n\nThis is very close...\n\nIt is still calculating a 53 week year though and the weird part is that it seems to now be starting it's new week on Wednesdays rather than on Sundays.  Saturdays seem to be an issue too.  We don't typically have weekend dates but every once in a while it comes up.  
Thank you again for your help with this.\n\nHere's an excerpt that I pulled to help illustrate what's happening:\n\nTue, Dec 16, 2008: 51\nWed, Dec 17, 2008: 52\nThur, Dec 18, 2008: 52\nFri, Dec 19, 2008: 52\nSat, Dec 20, 2008: 51\nSun, Dec 21, 2008: 52\nMon, Dec 22, 2008: 52\nTue, Dec 23, 2008: 52\nWed, Dec 24, 2008: 53\nFri, Dec 26, 2008: 53\nMon, Dec 29, 2008: 1\nTue, Dec 30, 2008: 1\nWed, Dec 31, 2008: 1\nFri, Jan 02, 2009: 2\nSun, Jan 04, 2009: 2\nMon, Jan 05, 2009: 2\nTue, Jan 06, 2009: 2\nWed, Jan 07, 2009: 3\nThur, Jan 08, 2009: 3\nFri, Jan 09, 2009: 3\nMon, Jan 12, 2009: 3\nTue, Jan 13, 2009: 3\nWed, Jan 14, 2009: 4\nThur, Jan 15, 2009: 4\nFri, Jan 16, 2009: 4\nSat, Jan 17, 2009: 3\nMon, Jan 19, 2009: 4\nTue, Jan 20, 2009: 4\nWed, Jan 21, 2009: 5\nThur, Jan 22, 2009: 5\nFri, Jan 23, 2009: 5\nSun, Jan 25, 2009: 5\nMon, Jan 26, 2009: 5\nTue, Jan 27, 2009: 5", null, "Buell\nAh, I see the problem.  When testing I'm showing the number field with 2 decimal places which will calculate things out correctly.  When you tell it to show no decimal places SFDC is kind enough to round things up for you...  
You'll either have to show 2 decimal places and ignore them or rewrite the formula as adding FLOOR() will cause it to compile too large.", null, "Buell\n\nActually, give this one a go, set it up as you did before, a number formula field with 0 decimal places.\n\nIF( ((CloseDate - DATE(YEAR(CloseDate), 1, 1) + MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7) + 1 ) / 7) + IF(MOD((CloseDate - DATE(YEAR(CloseDate), 1, 1) + MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7) + 1 ), 7) = 0, 0, 1) >= 53, 1, (FLOOR((CloseDate - DATE(YEAR(CloseDate), 1, 1) + MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7) + 1 ) / 7)) + IF(MOD((CloseDate - DATE(YEAR(CloseDate), 1, 1) + MOD( DATE( YEAR( CloseDate ), 01, 01 ) - DATE(1900, 1, 7), 7) + 1 ), 7) = 0, 0, 1) )", null, "jbrew\n\nThis worked perfectly!!!  Thank you so much for all of your help!", null, "JAW99\nVery cool. Would love to see something that read \"Week of (1/31/2010)\" for example..." ]
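The cleaned-up formula in the thread boils down to ceil(day-of-year / 7), with the leap-year correction supplied by a cumulative-days lookup like the CASE(MONTH(...)) table. The same logic in C, as a rough cross-check (my own sketch; it ignores the Sunday-start fiscal variant discussed later in the thread):

```c
/* Gregorian leap-year rule, mirroring the MOD(YEAR, 4/100/400) tests. */
static int is_leap(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

/* 1-based day of year, using a cumulative-days table like the
 * CASE(MONTH(CloseDate), ...) lookup in the formula. */
static int day_of_year(int y, int m, int d)
{
    static const int days_before[12] =
        {0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334};
    return days_before[m - 1] + d + (m > 2 && is_leap(y) ? 1 : 0);
}

/* Close week = ceil(day_of_year / 7), matching the FLOOR + remainder check. */
static int close_week(int y, int m, int d)
{
    int doy = day_of_year(y, m, d);
    return doy / 7 + (doy % 7 ? 1 : 0);
}
```

For example, 2/25/2010 is day 56 of the year, so close_week gives week 8; the faulty intermediate formula in the thread reported 62 for that date.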
https://www.hugh360.co.uk/measurement/equirectangular-size-and-accuracy/
[ "For any application using photographs for measurement (Photogrammetry) the size and resolution of the image has an impact on the accuracy of the results.\n\nWhen using 360° Panoramas for measurement the size or the Equirectangular projection, and projections derived from it such as Cube Faces, Sphere, Little Planet, excetera, will impact on the accuracy of the results.\n\nPTGui uses the following formula to dervis the size of the Equirectangular at 100%:\nw  =  2π  x  f  x  (px/mm)\nwhere:\nw  =  width of the equirectangular in pixels\nf  =  focal length of lens in mm\n(px/mm)  = pixel density of the sensor where:\npx  =  pixel width of the sensor and\nmm  =  width of the sensor in mm\n\nThe size for the Equirectangular can also be read from tables derived from this formula.\n\nThe formula and tables should be used as a guide rather than definitive as the value for f (focal length of the lens) quoted for a particular lens is not always accurate.\nThe focal length of the lens used by PTGui in the formula is determined when Aligning the images and generating Control Points as alluded to in entry 3.2 in the PTGui Support FAQs and can differ from the value quoted for some lenses.\n\nFor example:\nThe Pergear 7.5mm Fisheye (Nikon Z fit) has a quoted focal length of 7.5mm, but the focal length determined by PTGui is 10.5mm.\nUsing 7.5mm in the formula gives an Equirectangular of 7900 x 3950 = 31Mp, whilst using 10.56mm in the formula gives an Equirectangular of 10900 x 5450 = 59Mp;\nThe TTArtizan 11mm Fisheye (Nikon Z fit) has a quoted focal length of 11mm, but the focal length determined by PTGui is 15.0mm.\nUsing 11.0mm in the formula gives an Equirectangular of 11580 x 5790 = 67Mp, whilst using 15.0mm in the formula gives an Equirectangular of 15760 x 7880 = 124Mp.\n\nConclusion: If you are looking to purchase a new camera and/or lens the formula and tables should be used as a guide, but it is preferable to obtain the data from an actual user if possible.\n\nIt is 
also important to give cognisance to the pixel density (px/mm) of the sensor as a larger sensor does not necessarily mean a larger Equirectangular and it will often be the case that the reverse is true.\nFor example:\nA 24Mp DX sensor (23.5mm x 15.6mm) will have a pixel density of 255 resulting in an Equirectangular of 16820 x 8410 = 142Mp with a 10.5mm lens;\nA 24Mp FX sensor (35.9mm x 23.9mm) will have a pixel density of 168 resulting in an Equirectangular of 11080 x 5540 = 61Mp with a 10.5mm lens.\n\nHowever, a larger sensor will enable a longer focal length lens to be used for a given shooting pattern.\nFor example:\nA 24Mp DX sensor (23.5mm x 15.6mm) will have a pixel density of 255 resulting in an Equirectangular of 17646 x 8823 = 156Mp with an 11mm lens;\nA 24Mp FX sensor (35.9mm x 23.9mm) will have a pixel density of 168 resulting in an Equirectangular of 17900 x 8850 = 160Mp with a 17mm lens.\n\nFrom communication with PTGui Support I have learned that the optimum size for the Equirectangular is not “set in stone” and is a possible interpretation. as the lens and the Equirectangular projection do not have a uniform resolution.\nThe Equirectangular can output any desired size by changing the value in the “fit at:” box in the Create Panorama dialogue.\nIncreasing the value in the “fit at:” box in the Create Panorama dialogue does not appear to have any impact on the visual appearance of the Panorama, but does reduce the angle subtended by one pixel so may be a way of increasing the accuracy of measurements made using 360° Panoramas.", null, "" ]
[ null, "https://www.hugh360.co.uk/wp-content/uploads/2017/05/Back-to-the-Top-of-the-Page-300x34.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8799329,"math_prob":0.95482105,"size":3499,"snap":"2023-40-2023-50","text_gpt3_token_len":943,"char_repetition_ratio":0.15450644,"word_repetition_ratio":0.2,"special_character_ratio":0.26464704,"punctuation_ratio":0.08534851,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97638273,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T22:48:00Z\",\"WARC-Record-ID\":\"<urn:uuid:92ebb685-62af-4948-835e-334f100bccd7>\",\"Content-Length\":\"69817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:becff0c7-0c17-4157-b4f1-93310d55b7e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ec2cacc-ad81-4416-8541-3e866d513eb9>\",\"WARC-IP-Address\":\"92.205.147.95\",\"WARC-Target-URI\":\"https://www.hugh360.co.uk/measurement/equirectangular-size-and-accuracy/\",\"WARC-Payload-Digest\":\"sha1:JJ52CHSZOUR3KS6VVBV5IVMPIV5J46SD\",\"WARC-Block-Digest\":\"sha1:56S3VX5GOD53TQFF5OMM5T7FDTBILAE5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100016.39_warc_CC-MAIN-20231128214805-20231129004805-00691.warc.gz\"}"}
https://itssmee.wordpress.com/tag/levenshtein/
[ "## Posts Tagged ‘Levenshtein’\n\n### Similarity algorithms\n\nJune 28, 2010\n\nI have recently been researching a record linkage techniques, and part of this process I have been reminding myself of certain algorithms, and in other case learning these for the first time. As is my way, I typically try and turn the algorithm into code to allow me to understand and learn it. I have coded up examples of the following algorithms in Java:\n\nOut of curiosity I decided to publish each one separately so I can see from the stats which is the most popular. In a month or so I will add a histogram of the hits for each one (if there are any!).\n\nI hope this is useful.\n\nBe warned, I offer no warranty or guarantee on this code, any changes / enhancements / corrections / or use of this code should be attributed to itssmee, CHIME, UCL (where I am doing my research) and shared with the community.\n\n### Java example of Damerau Levenshtein distance\n\nJune 28, 2010\n\nSimilar to Levenshtein, Damerau-Levenshtein also calculates the distances between two strings. 
It based around comparing two string and counting the number of insertions, deletions, and substitution of single characters, and transposition of two characters.\n\nThis was, originally, aimed at spell checkers, it is also used for DNA sequences.\n\nWikipedia entry found be here:\n\n```\npublic class DamerauLevenshtein\n{\nprivate String compOne;\nprivate String compTwo;\nprivate int[][] matrix;\nprivate Boolean calculated = false;\n\npublic DamerauLevenshtein(String a, String b)\n{\nif ((a.length() > 0 || !a.isEmpty()) || (b.length() > 0 || !b.isEmpty()))\n{\ncompOne = a;\ncompTwo = b;\n}\n}\n\npublic int[][] getMatrix()\n{\nsetupMatrix();\nreturn matrix;\n}\n\npublic int getSimilarity()\n{\nif (!calculated) setupMatrix();\n\nreturn matrix[compOne.length()][compTwo.length()];\n}\n\nprivate void setupMatrix()\n{\nint cost = -1;\nint del, sub, ins;\n\nmatrix = new int[compOne.length()+1][compTwo.length()+1];\n\nfor (int i = 0; i <= compOne.length(); i++)\n{\nmatrix[i] = i;\n}\n\nfor (int i = 0; i <= compTwo.length(); i++)\n{\nmatrix[i] = i;\n}\n\nfor (int i = 1; i <= compOne.length(); i++)\n{\nfor (int j = 1; j <= compTwo.length(); j++)\n{\nif (compOne.charAt(i-1) == compTwo.charAt(j-1))\n{\ncost = 0;\n}\nelse\n{\ncost = 1;\n}\n\ndel = matrix[i-1][j]+1;\nins = matrix[i][j-1]+1;\nsub = matrix[i-1][j-1]+cost;\n\nmatrix[i][j] = minimum(del,ins,sub);\n\nif ((i > 1) && (j > 1) && (compOne.charAt(i-1) == compTwo.charAt(j-2)) && (compOne.charAt(i-2) == compTwo.charAt(j-1)))\n{\nmatrix[i][j] = minimum(matrix[i][j], matrix[i-2][j-2]+cost);\n}\n}\n}\n\ncalculated = true;\ndisplayMatrix();\n}\n\nprivate void displayMatrix()\n{\nSystem.out.println(\" \"+compOne);\nfor (int y = 0; y <= compTwo.length(); y++)\n{\nif (y-1 < 0) System.out.print(\" \"); else System.out.print(compTwo.charAt(y-1));\nfor (int x = 0; x <= compOne.length(); x++)\n{\nSystem.out.print(matrix[x][y]);\n}\nSystem.out.println();\n}\n}\n\nprivate int minimum(int d, int i, int s)\n{\nint m = 
Integer.MAX_VALUE;\n\nif (d < m) m = d;\nif (i < m) m = i;\nif (s < m) m = s;\n\nreturn m;\n}\n\nprivate int minimum(int d, int t)\n{\nint m = Integer.MAX_VALUE;\n\nif (d < m) m = d;\nif (t < m) m = t;\n\nreturn m;\n}\n}\n\n```\n\nFurther to the comments and observations of @zooz (see comments below), I have to apologise and advise that the code above is actually the Optimal String Alignment Distance Algorithm rather than Damerau Levenshtein. Here is the Damerau Levenshtein code in Java:\n\n```public int getDHSimilarity()\n{\nint res = -1;\nint INF = compOne.length() + compTwo.length();\n\nmatrix = new int[compOne.length()+1][compTwo.length()+1];\n\nfor (int i = 0; i < compOne.length(); i++)\n{\nmatrix[i+1] = i;\nmatrix[i+1] = INF;\n}\n\nfor (int i = 0; i < compTwo.length(); i++)\n{\nmatrix[i+1] = i;\nmatrix[i+1] = INF;\n}\n\nint[] DA = new int;\n\nfor (int i = 0; i < 24; i++)\n{\nDA[i] = 0;\n}\n\nfor (int i = 1; i < compOne.length(); i++)\n{\nint db = 0;\n\nfor (int j = 1; j < compTwo.length(); j++)\n{\n\nint i1 = DA[compTwo.indexOf(compTwo.charAt(j-1))];\nint j1 = db;\nint d = ((compOne.charAt(i-1)==compTwo.charAt(j-1))?0:1);\nif (d == 0) db = j;\n\nmatrix[i+1][j+1] = Math.min(Math.min(matrix[i][j]+d, matrix[i+1][j]+1),Math.min(matrix[i][j+1]+1,matrix[i1][j1]+(i - i1-1)+1+(j-j1-1)));\n}\nDA[compOne.indexOf(compOne.charAt(i-1))] = i;\n}\n\nreturn matrix[compOne.length()][compTwo.length()];\n}\n```\n\n### Java example of Levenshtein’s distance algorithm\n\nJune 28, 2010\n\nThe purpose of this algorithm is to measure the difference between two sequences/strings. 
It is based around the number of changes required to make one string equal to the other.\n\nIt is aimed at short strings, it usage is spell checkers, optical character recognition, etc.\n\nWikipedia entry can be found here:\n\n```\npublic class Levenshtein\n{\nprivate String compOne;\nprivate String compTwo;\nprivate int[][] matrix;\nprivate Boolean calculated = false;\n\npublic Levenshtein(String one, String two)\n{\ncompOne = one;\ncompTwo = two;\n}\n\npublic int getSimilarity()\n{\nif (!calculated)\n{\nsetupMatrix();\n}\nreturn matrix[compOne.length()][compTwo.length()];\n}\n\npublic int[][] getMatrix()\n{\nsetupMatrix();\nreturn matrix;\n}\n\nprivate void setupMatrix()\n{\nmatrix = new int[compOne.length()+1][compTwo.length()+1];\n\nfor (int i = 0; i <= compOne.length(); i++)\n{\nmatrix[i] = i;\n}\n\nfor (int j = 0; j <= compTwo.length(); j++)\n{\nmatrix[j] = j;\n}\n\nfor (int i = 1; i < matrix.length; i++)\n{\nfor (int j = 1; j < matrix[i].length; j++)\n{\nif (compOne.charAt(i-1) == compTwo.charAt(j-1))\n{\nmatrix[i][j] = matrix[i-1][j-1];\n}\nelse\n{\nint minimum = Integer.MAX_VALUE;\nif ((matrix[i-1][j])+1 < minimum)\n{\nminimum = (matrix[i-1][j])+1;\n}\n\nif ((matrix[i][j-1])+1 < minimum)\n{\nminimum = (matrix[i][j-1])+1;\n}\n\nif ((matrix[i-1][j-1])+1 < minimum)\n{\nminimum = (matrix[i-1][j-1])+1;\n}\n\nmatrix[i][j] = minimum;\n}\n}\n}\ncalculated = true;\ndisplayMatrix();\n}\n\nprivate void displayMatrix()\n{\nSystem.out.println(\" \"+compOne);\nfor (int y = 0; y <= compTwo.length(); y++)\n{\nif (y-1 < 0) System.out.print(\" \"); else System.out.print(compTwo.charAt(y-1));\nfor (int x = 0; x <= compOne.length(); x++)\n{\nSystem.out.print(matrix[x][y]);\n}\nSystem.out.println();\n}\n}\n}\n\n```" ]
https://www.accountantforums.com/threads/simple-loan-interest-calculation.148920/
[ "# New ZealandSimple Loan Interest Calculation\n\n#### Cory\n\nCan anyone help me?\nI am not an accountant, but I am a programmer. I have made a program that allows me to calculate and forecast budgets. And, only now have I decided to plug in a loan interest calculator, that shows me each and every single payment to be made and interest charged.\nHowever, I cannot get the numbers to match on the loan as per attachments.\n\nThe finance company tells me the method of charging interest (in their exact words):\nInterest charges are calculated and charged at the end of each month by multiplying the average daily balance for the preceding month by a interest rate. The interest rate is calculated by dividing the annual interest rate by 48.\n\nThe contract says 48 monthly payments of 263.24 starting from 14 January 2014.\nTotal amount of payments \\$12,635.52\nAnnual interest rate(s): 16.9% fixed for the 48 months.\nTotal interest charges: \\$3,430.46\n\\$5.00 per month for account maintenance fee.\n\nI use the calculation (which I tried to ascertain from their jargon):\n\ninterest = average_daily_balance x number_of_days x daily_interest_rate\nwhich I get 127.39\n\nfor the first interest charge I get\naverage_daily_balance = 8875.58\nnumber_of_days = 31\ndaily_interest_rate = (16.9 / 365) / 100 OR 16.9% / 365\n\nnothing I do will resolve the numbers, any ideas??", null, "", null, "#### kirby\n\nVIP Member\nFirst, look at this analytically. The March 14th 2014 charge for interest was \\$112.84. Now the April interest charge was HIGHER than March's EVEN THOUGH the April payments were HIGHER (total \\$332) than those of March (\\$264). That will give you a lower daily average balance in April than March. Should have had a lower interest in April given the higher payments. But again April's interest was higher than March's. 
So don't bother trying to calculate this - go ask Finance Company to explain cause it makes no sense.\n\n#### Cory\n\nThanks Kirby, I've talked to them multiple times and they're all over the place with their answers. I have the right under New Zealand law to be supplied the interest calculations, but I wanted to make sure it wasn't just my lack of understanding before I went that direction.\n\nHowever, I think I may have found what the calculation is based on. It seems their interest is calculated based on if I were simply paying the contract amount of 263.24 per month only--rather then calculated by the average daily balance the contract interest calculation says. When I do that calculation, I seem to get consistently close to their interest rate. Can anyone confirm this?" ]
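Cory's average-daily-balance formula is easy to reproduce. This sketch (Python, an illustration using only the figures quoted in the thread) confirms the 127.39 he computes for the first month, which is the number the lender's statement fails to match:

```python
def adb_interest(avg_daily_balance, days, annual_rate):
    """interest = average_daily_balance * number_of_days * daily_interest_rate"""
    return avg_daily_balance * days * (annual_rate / 365)

# First interest charge from the thread: ADB 8875.58, 31 days, 16.9% p.a.
first_month = adb_interest(8875.58, 31, 0.169)
print(round(first_month, 2))   # 127.39
```

Note this uses the annual rate divided by 365, as Cory does; the lender's own wording ("dividing the annual interest rate by 48") would give a different and much odder monthly rate, which is part of why the statement figures look inexplicable.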
[ null, "https://www.accountantforums.com/data/attachments/0/101-57f4fceab08d636690328e34123399f4.jpg", null, "https://www.accountantforums.com/data/attachments/0/102-230243c21ae6b77c1402360f9431cd5e.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94210947,"math_prob":0.79225117,"size":2539,"snap":"2019-51-2020-05","text_gpt3_token_len":604,"char_repetition_ratio":0.14911243,"word_repetition_ratio":0.9810427,"special_character_ratio":0.26782197,"punctuation_ratio":0.1262525,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9696561,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T10:02:34Z\",\"WARC-Record-ID\":\"<urn:uuid:f5fa8693-a59e-42db-9175-068472300033>\",\"Content-Length\":\"59070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6f10564-146c-4404-950e-316638f258e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1f8f6a1-acfc-4b88-884c-b5474a61ecd2>\",\"WARC-IP-Address\":\"209.133.199.162\",\"WARC-Target-URI\":\"https://www.accountantforums.com/threads/simple-loan-interest-calculation.148920/\",\"WARC-Payload-Digest\":\"sha1:YVEJYIQ7T6LWCBFJGSUSW5GE6JHYTBW6\",\"WARC-Block-Digest\":\"sha1:D74IJLRYFZ3JBRZIRB7GB5MIJNSRD7AL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592394.9_warc_CC-MAIN-20200118081234-20200118105234-00198.warc.gz\"}"}
https://kipdf.com/further-inference-in-the-multiple-regression-model_5aef68b57f8b9a81328b45ea.html
[ "## Further Inference in the Multiple Regression Model\n\nCHAPTER 6 Further Inference in the Multiple Regression Model CHAPTER OUTLINE 6.1 The F-Test 6.1.1 Testing the Significance of the Model 6.1.2 Relat...\nAuthor: Ralf Bruce\nCHAPTER\n\n6\n\nFurther Inference in the Multiple Regression Model\n\nCHAPTER OUTLINE 6.1 The F-Test 6.1.1 Testing the Significance of the Model 6.1.2 Relationship between t- and F-tests 6.1.3 More General F-tests 6.2 Nonsample Information 6.3 Model Specification 6.3.1 Omitted Variables\n\n6.3.2 Irrelevant Variables 6.3.3 Choosing the Model Model Selection Criteria RESET 6.4 Poor Data, Collinearity, and Insignificance Key Terms Chapter 6 do-file\n\n6.1 THE F-TEST The example used in this chapter is a model of sales for Big Andy's Burger Barn considered in Chapter 5. The model includes three explanatory variables and a constant: SALESi  1  2 PRICEi  3 ADVERTi  4 ADVERTi 2  ei\n\nwhere SALESi is monthly sales in a given city and is measured in \\$1,000 increments, PRICEi is price of a hamburger measured in dollars, ADVERTi is the advertising expenditure also measured in thousands of dollars and i=1, 2, … , N. The null hypothesis is that advertising has no effect on average sales. For this marginal effect to be zero for all values of advertising requires 3  0 and 4  0. The alternative is 3  0 or 4  0. The parameters of the model under the null hypothesis are restricted to be zero and the parameters under the alternative are unrestricted. The F-test compares the sum of squared errors from the unrestricted model to that of the restricted model. A large difference is taken as evidence that the restrictions are false. The statistic used to test the null hypothesis (restrictions) is\n\n1\n\n2 Chapter 6\n\nF\n\n SSER  SSEU  J SSEU  N  K \n\n,\n\nwhich has an F-distribution with J numerator and N−K denominator degrees of freedom when the restrictions are true. The statistic is computed by running two regressions. 
The first is unrestricted; the second has the restrictions imposed. Save the sum of squared errors from each regression, the degrees of freedom from the unrestricted regression (N − K), and the number of independent restrictions imposed (J). Then compute:

F = [(SSE_R − SSE_U)/J] / [SSE_U/(N − K)] = [(1896.391 − 1532.084)/2] / [1532.084/(75 − 4)] = 8.44

To estimate this model, load the data file andy.dta:

use andy, clear

In Stata's Variables window you'll see that the data contain three variables: sales, price, and advert. These are used with the regress function to estimate the unrestricted model:

regress sales price advert c.advert#c.advert

Save the sum of squared errors into a new scalar called sseu using e(ssr), and the residual degrees of freedom from the analysis of variance table into a scalar called df_unrest using e(df_r):

scalar sseu = e(ssr)
scalar df_unrest = e(df_r)

Next, impose the restrictions (drop the two advertising terms) and reestimate the model using least squares. Again, save the sum of squared errors and the residual degrees of freedom:

regress sales price
scalar sser = e(ssr)
scalar df_rest = e(df_r)

The saved residual degrees of freedom from the restricted model can be used to obtain the number of restrictions imposed. Each unique restriction in a linear model reduces the number of parameters by one, so imposing one restriction on a three-parameter unrestricted model reduces the restricted model to two parameters. Let K_r be the number of regressors in the restricted model and K_u the number in the unrestricted model. Subtracting the degrees of freedom in the unrestricted model (N − K_u) from those of the restricted model (N − K_r) yields the number of restrictions you've imposed, i.e., (N − K_r) − (N − K_u) = (K_u − K_r) = J.
In Stata:

scalar J = df_rest - df_unrest

Then the F-statistic can be computed:

scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))

The critical value from the F(J, N−K) distribution and the p-value for the computed statistic can be obtained in the usual way. Here invFtail(J,N-K,a) generates the a-level critical value from the F-distribution with J numerator and N−K denominator degrees of freedom, and Ftail(J,N-K,fstat) works similarly to return the p-value for the computed statistic fstat:

scalar crit1 = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit1

The output:

. scalar list sseu sser J df_unrest fstat pvalue crit1
      sseu =  1532.0844
      sser =  1896.3906
         J =  2
 df_unrest =  71
     fstat =  8.4413577
    pvalue =  .00051416
     crit1 =  3.1257642

The dialog boxes can also be used to test restrictions on the parameters of the model. The first step is to estimate the model using regress; this proceeds just as it did in Section 5.1 above. Select Statistics > Linear models and related > Linear regression from the pull-down menu to reveal the regress dialog box. Using sales as the dependent variable and price, advert, and the interaction c.advert#c.advert as independent variables, run the regression by clicking OK. Once the regression is estimated, postestimation commands are used to test the hypothesis. From the pull-down menu select Statistics > Postestimation > Tests > Test parameters, which brings up the testparm dialog box.

One can also use the test dialog box by selecting Statistics > Postestimation > Tests > Test linear hypotheses. The test dialog is harder to use: each linear hypothesis must be entered as a Specification.
For Specification 1 (required), type in advert=0 and make sure that either the "Coefficients are zero" or "Linear expressions are equal" radio button is selected. Then highlight Specification 2, type in c.advert#c.advert=0, and click Submit.

In both cases, the Command window is much easier to use. The testparm statement is the simplest for testing zero restrictions on the parameters. The syntax is:

testparm varlist

That means one can simply list the variables that have zero coefficients under the null. It can also be coaxed into testing that coefficients are equal to one another using the equal option.

The test command can be used to test joint hypotheses about the parameters of the most recently fit model using a Wald test. There are several ways to specify the hypotheses, and a couple of these are explored here. The general syntax is:

test (hypothesis 1) (hypothesis 2)

Each of the joint hypotheses is enclosed in a set of parentheses. In a linear model the coefficients can be identified by their variable names, since their meaning is unambiguous; more generally, one can also use the _b[variable name] syntax. Here are equivalent ways to test the joint null after estimating the model:

regress sales price advert c.advert#c.advert

6.1.1 TESTING THE SIGNIFICANCE OF THE MODEL

In this application of the F-test, you determine whether your model is significant at the desired level of statistical significance. Consider the general linear model with K regressors:

y_i = β1 + β2·x_i2 + β3·x_i3 + ... + βK·x_iK + e_i

If the explanatory variables have no effect on the average value of y, then each of the slopes will be zero, leading to the null and alternative hypotheses:

H0: β2 = 0, β3 = 0, ..., βK = 0
H1: at least one of the βk is nonzero, for k = 2, 3, ..., K

This amounts to J = K − 1 restrictions.
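The F formula itself is worth checking independently of Stata. A minimal Python sketch (an illustration, using the sums of squares reported in this chapter) reproduces both the advertising test and the overall-significance test:

```python
def f_stat(sse_r, sse_u, J, df_u):
    """F = [(SSE_R - SSE_U)/J] / [SSE_U/(N - K)]"""
    return ((sse_r - sse_u) / J) / (sse_u / df_u)

# Advertising test: J = 2 restrictions, N - K = 75 - 4 = 71
print(round(f_stat(1896.391, 1532.084, 2, 71), 2))   # 8.44

# Overall significance: restricted SSE equals SST, J = K - 1 = 3
print(round(f_stat(3115.485, 1532.084, 3, 71), 2))   # 24.46
```

Both values match the fstat scalars that Stata reports below, which is a useful sanity check that sseu, sser, J, and df_unrest were captured correctly.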
Again, estimate the model unrestricted and restricted, saving the degrees of freedom for each. Then use the Stata code from above to compute the test statistic:

F = [(SST − SSE)/(K − 1)] / [SSE/(N − K)] = [(3115.485 − 1532.084)/3] / [1532.084/(75 − 4)] = 24.459

The Stata code is:

* Unrestricted Model (all variables)
regress sales price advert c.advert#c.advert
scalar sseu = e(rss)
scalar df_unrest = e(df_r)

* Restricted Model (no explanatory variables)
regress sales
scalar sser = e(rss)
scalar df_rest = e(df_r)
scalar J = df_rest - df_unrest

* F-statistic, critical value, pvalue
scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))
scalar crit2 = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit2

which produces:

. scalar list sseu sser J df_unrest fstat pvalue crit2
      sseu =  1532.0844
      sser =  3115.482
         J =  3
 df_unrest =  71
     fstat =  24.459321
    pvalue =  5.600e-11
     crit2 =  2.7336472

This particular test of regression significance is important enough that it appears in the default output of every linear regression estimated using Stata. In the output below, the F-statistic for this test is 24.4595 and its p-value is well below 5%; therefore, we reject the null hypothesis that the model is insignificant at the five percent level.

. regress sales price advert c.advert#c.advert

      Source |       SS       df       MS              Number of obs =      75
-------------+------------------------------           F(  3,    71) =   24.46
       Model |  1583.39763     3  527.799209           Prob > F      =  0.0000
    Residual |  1532.08439    71  21.5786533           R-squared     =  0.5082
-------------+------------------------------           Adj R-squared =  0.4875
       Total |  3115.48202    74  42.1011083           Root MSE      =  4.6453

-----------------------------------------------------------------------------------
            sales |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
------------------+----------------------------------------------------------------
            price |  -7.640002   1.045939    -7.30   0.000    -9.725545   -5.554459
           advert |   12.15123   3.556164     3.42   0.001     5.060445    19.24202
c.advert#c.advert |  -2.767963    .940624    -2.94   0.004    -4.643514   -.8924117
            _cons |    109.719   6.799046    16.14   0.000     96.16213     123.276
-----------------------------------------------------------------------------------

6.1.2 Relationship between t- and F-tests

In this example, the equivalence of a t-test for significance and an F-test is shown. The basic model is

SALES_i = β1 + β2·PRICE_i + β3·ADVERT_i + β4·ADVERT_i² + e_i

The t-ratio for β2 is −7.30 (see the regression output above). The F-test can be used to test the hypothesis that β2 = 0 against the two-sided alternative that it is not zero. The restricted model is

SALES_i = β1 + β3·ADVERT_i + β4·ADVERT_i² + e_i

Estimating the unrestricted model, then the restricted model, and computing the F-statistic in Stata:

* Unrestricted Regression
regress sales price advert c.advert#c.advert
scalar sseu = e(rss)
scalar df_unrest = e(df_r)
scalar tratio = _b[price]/_se[price]
scalar t_sq = tratio^2

* Restricted Regression
regress sales advert c.advert#c.advert
scalar sser = e(rss)
scalar df_rest = e(df_r)
scalar J = df_rest - df_unrest

* F-statistic, critical value, pvalue
scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))
scalar crit = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit tratio t_sq

This produces the output:

. scalar list sseu sser J df_unrest fstat pvalue crit tratio t_sq
      sseu =  1532.0844
      sser =  2683.4111
         J =  1
 df_unrest =  71
     fstat =  53.354892
    pvalue =  3.236e-10
      crit =  3.9758102
    tratio = -7.3044433
      t_sq =  53.354892

The F-statistic is 53.35. It is no coincidence that the square of the t-ratio equals the F: (−7.304)² = 53.35. The reason is the exact relationship between the t- and F-distributions: the square of a t random variable with df degrees of freedom is an F random variable with 1 degree of freedom in the numerator and df degrees of freedom in the denominator.

6.1.3 More General F-Tests

The F-test can also be used to test hypotheses that are more general than ones involving zero restrictions on the coefficients of regressors.
Up to K conjectures involving linear hypotheses with equal signs can be tested. The test is performed in the same way, by comparing the restricted sum of squared errors to its unrestricted value. Doing this by hand requires some algebra by the user; fortunately, Stata provides a couple of alternatives that avoid it.

The example considered is based on the optimal level of advertising first considered in Chapter 5. If the returns to advertising diminish, then the optimal level of advertising occurs when the next dollar spent on advertising generates only one more dollar of sales. Setting the marginal effect of another (thousand) dollar of advertising on sales equal to 1:

β3 + 2·β4·A_O = 1

and solving for A_O yields Â_O = (1 − b3)/(2·b4), where b3 and b4 are the least squares estimates. Plugging in the results from the estimated model yields an estimated optimal level of advertising of 2.014 ($2,014).

Suppose that Andy wants to test the conjecture that the optimal level of advertising is $1,900. Substituting 1.9 (remember, advertising in the data is measured in $1,000) leads to the null and alternative hypotheses:
So, you can recall them and use the scalar command to compute a t-ratio manually.\n\nt\n\n(b3  3.8b4 )  1 se(b3  3.8b4 )\n\nThe commands to do this are: scalar t = r(estimate)/r(se)\n\nscalar pvalue2tail = 2*ttail(e(df_r),t) scalar pvalue1tail = ttail(e(df_r),t) scalar list t pvalue2tail pvalue1tail\n\nThe ttail() command is used to obtain the one-sided p-value for the computed t-ratio. It uses e(df_r) which saves the residual degrees of freedom from the sales regression that precedes its use. The output is: . scalar list t pvalue2tail pvalue1tail t = .53395657 pvalue2tail = .59501636 pvalue1tail = .29750818\n\nFurther Inference in the Multiple Regression Model 9 An algebraic trick can be used that will enable you to rearrange the model in terms of a new parameter that embodies the desired restriction. This is useful if using software that does not contain something like the lincom command. Let be the restriction. Solve for  substitute this into the model and rearrange and you’ll get\n\nThese use these in a regression. regress ystar price advert xstar\n\nThe t-ratio on the variable advert is the desired statistic. Its two-sided p-value is given in the output. If you want to compute this manually, try the following scalar t = (_b[advert])/_se[advert] scalar pvalue = ttail(e(df_r),t) scalar list t pvalue\n\nThe output for the entire routine follows: . regress ystar price advert xstar Source\n\nSS\n\ndf\n\nMS\n\nModel Residual\n\n1457.21501 1532.08474\n\n3 71\n\n485.738336 21.5786583\n\nTotal\n\n2989.29974\n\n74\n\n40.3959425\n\nystar\n\nCoef.\n\n-7.640002 .6329752 -2.767962 109.719\n\nStd. Err. 1.045939 .6541902 .9406242 6.799047\n\n. scalar t = (_b[advert])/_se[advert] . scalar pvalue = ttail(e(df_r),t) . 
scalar list t pvalue t = .96757063 pvalue = .16827164\n\nt -7.30 0.97 -2.94 16.14\n\nNumber of obs F( 3, 71) Prob > F R-squared Adj R-squared Root MSE P>|t| 0.000 0.337 0.004 0.000\n\n= = = = = =\n\n75 22.51 0.0000 0.4875 0.4658 4.6453\n\n[95% Conf. Interval] -9.725545 -.671443 -4.643514 96.16213\n\n-5.554458 1.937393 -.892411 123.276\n\n10 Chapter 6 The t-ratio in the regression table is 0.97 and has a two-sided p-value of 0.337. The t-ratio computed using the scalar command is the same (though carried to more digits) and its one-sided p-value is half that of the two-sided one in the table. The results match. This section concludes with a joint test of two of Big Andy’s conjectures. In addition to proposing that the optimal level of monthly advertising expenditure is \\$1,900, Big Andy is planning the staffing and purchasing of inputs on the assumption that a price of PRICE  \\$6 and advertising expenditure of ADVERT  1.9 will, on average, yield sales revenue of \\$80,000. The joint null hypothesis is\n\nH0 : 3  3.84  1 and\n\n1  62  1.93  3.614  80\n\nThis example uses the test command, which is followed by both restrictions, each contained in a separate set of parentheses. Notice that test uses the saved coefficient estimates _b[varname] from the preceding regression. Once again, this can be simplified in a linear regression by using the variable names alone.\n\n2, 71) = Prob > F =\n\n5.74 0.0049\n\nSince the p-value is 0.0049 and less than 5%, the null (joint) hypothesis is rejected at that level of significance.\n\n6.2 Nonsample Information Sometimes you have exact nonsample information that you want to use in the estimation of the model. Using nonsample information improves the precision with which you can estimate the remaining parameters. In this example from POE4, the authors consider a model of beer sales as a function of beer prices, liquor prices, prices of other goods, and income. 
The variables appear in their natural logarithms:

    ln(Qt) = β1 + β2·ln(PBt) + β3·ln(PLt) + β4·ln(PRt) + β5·ln(It) + et

Economic theory suggests that

    β2 + β3 + β4 + β5 = 0

The beer.dta data file is used to estimate the model. Open the data file, then generate the natural logarithm of each variable. The Stata function ln(variable) takes the natural logarithm of variable. So, to generate natural logs of each variable, use:

    use beer, clear
    gen lq = ln(q)
    gen lpb = ln(pb)
    gen lpl = ln(pl)
    gen lpr = ln(pr)
    gen li = ln(i)

In order to impose linear restrictions you will use what Stata calls constrained regression. Stata calls the restriction a constraint, and the procedure it uses to impose those constraints on a linear regression model is cnsreg. The syntax looks like this:

    constraint 1 ...
    constraint 2 ...
    cnsreg depvar indepvars [if] [in] [weight], constraints(1 2)

Each of the restrictions (constraints) is listed first and given a unique number. Once these are in memory, the cnsreg command is used like regress; follow the regression model with a comma and the list of constraint numbers, constraints(1 2 ...), and Stata will impose the enumerated constraints and use least squares to estimate the remaining parameters. The constraints() option can be abbreviated c(1 2) as shown below. For the beer example the syntax is:

    constraint 1 lpb+lpl+lpr+li=0
    cnsreg lq lpb lpl lpr li, c(1)

The result is:

    . constraint 1 lpb+lpl+lpr+li=0
    . cnsreg lq lpb lpl lpr li, c(1)

    Constrained linear regression                      Number of obs =      30
                                                       F(  3,    26) =   36.46
                                                       Prob > F      =  0.0000
                                                       Root MSE      =  0.0617

     ( 1)  lpb + lpl + lpr + li = 0

              lq |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
             lpb |  -1.299386   .1657378    -7.84   0.000    -1.640064   -.9587067
             lpl |   .1868179   .2843835     0.66   0.517    -.3977407    .7713765
             lpr |   .1667423   .0770752     2.16   0.040     .0083121    .3251726
              li |   .9458253    .427047     2.21   0.036     .0680176    1.823633
           _cons |  -4.797769   3.713906    -1.29   0.208    -12.43181    2.836275

The pull-down menus can also be used to obtain these results, though with more effort. First, the constraint must be defined. Select Statistics > Other > Manage Constraints. Click on the Create button to bring up the dialog box used to number and define the constraints. Choose the constraint number and type the desired restriction in the Define expression box. Click OK to accept the constraint and to close the box. To add constraints, click Create again in the constraint — Manage constraints box. When finished, click Close to close the box. To estimate the restricted model, select Statistics > Linear models and related > Constrained linear regression from the pull-down menu. Click OK or Submit to estimate the constrained model.

6.3 MODEL SPECIFICATION

Three essential features of model choice are (1) choice of functional form, (2) choice of explanatory variables (regressors) to be included in the model, and (3) whether the multiple regression model assumptions MR1–MR6, listed in Chapter 5, hold. In this section the first two of these are explored.

6.3.1 Omitted Variables

If you omit relevant variables from your model, then least squares is biased. To introduce the omitted variable problem, we consider a sample of married couples where both husbands and wives work. The data are stored in the file edu_inc.dta. Open the data file and clear any previously held data from Stata's memory:

    use edu_inc, clear

The first regression includes family income as the dependent variable (faminc) and husband's education (he) and wife's education (we) as explanatory variables. From the command line:

    regress faminc he we

The result is:
    . regress faminc he we

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  2,   425) =   40.87
           Model |  1.3405e+11     2  6.7027e+10           Prob > F      =  0.0000
        Residual |  6.9703e+11   425  1.6401e+09           R-squared     =  0.1613
    -------------+------------------------------           Adj R-squared =  0.1574
           Total |  8.3109e+11   427  1.9463e+09           Root MSE      =   40498

          faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |   3131.509    802.908     3.90   0.000     1553.344    4709.674
              we |   4522.641   1066.327     4.24   0.000     2426.711    6618.572
           _cons |  -5533.631   11229.53    -0.49   0.622    -27605.97    16538.71

Omitting wife's education (we) yields:

    . regress faminc he

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  1,   426) =   61.30
           Model |  1.0455e+11     1  1.0455e+11           Prob > F      =  0.0000
        Residual |  7.2654e+11   426  1.7055e+09           R-squared     =  0.1258
    -------------+------------------------------           Adj R-squared =  0.1237
           Total |  8.3109e+11   427  1.9463e+09           Root MSE      =   41297

          faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |   5155.484   658.4573     7.83   0.000     3861.254    6449.713
           _cons |   26191.27   8541.108     3.07   0.002     9403.308    42979.23

Simple correlation analysis reveals that husband's and wife's education levels are positively correlated. As suggested in the text, this implies that omitting we from the model is likely to cause positive bias in the he coefficient. This is borne out in the estimated models.

    . correlate
    (obs=428)

                 |   faminc       he       we      kl6       x5       x6
    -------------+------------------------------------------------------
          faminc |   1.0000
              he |   0.3547   1.0000
              we |   0.3623   0.5943   1.0000
             kl6 |  -0.0720   0.1049   0.1293   1.0000
              x5 |   0.2898   0.8362   0.5178   0.1487   1.0000
              x6 |   0.3514   0.8206   0.7993   0.1595   0.9002   1.0000

Including wife's education and number of preschool age children (kl6) yields:

    . regress faminc he we kl6

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  3,   424) =   30.43
           Model |  1.4725e+11     3  4.9082e+10           Prob > F      =  0.0000
        Residual |  6.8384e+11   424  1.6128e+09           R-squared     =  0.1772
    -------------+------------------------------           Adj R-squared =  0.1714
           Total |  8.3109e+11   427  1.9463e+09           Root MSE      =   40160

          faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |   3211.526   796.7026     4.03   0.000     1645.547    4777.504
              we |   4776.907   1061.164     4.50   0.000     2691.111    6862.704
             kl6 |  -14310.92   5003.928    -2.86   0.004    -24146.52   -4475.325
           _cons |  -7755.331   11162.93    -0.69   0.488    -29696.91    14186.25

Notice that compared to the preceding regression, the coefficient estimates for he and we have not changed much. This occurs because kl6 is not strongly correlated with either of the education variables. It implies that useful results can still be obtained even if a relevant variable is omitted. What is required is that the omitted variable be uncorrelated with the included variables of interest, which in this example are the education variables. If this is the case, omitting a relevant variable will not affect the validity of the tests and confidence intervals involving we or he.

6.3.2 Irrelevant Variables

Including irrelevant variables in the model diminishes the precision of the least squares estimator. Least squares is unbiased, but the standard errors of the coefficients will be bigger than necessary. In this example, two irrelevant variables (x5 and x6) are added to the model. These variables are correlated with he and we, but they are not related to the mean of family income. Estimate the model using linear regression to obtain:

    . regress faminc he we kl6 x5 x6

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  5,   422) =   18.25
           Model |  1.4776e+11     5  2.9553e+10           Prob > F      =  0.0000
        Residual |  6.8332e+11   422  1.6192e+09           R-squared     =  0.1778
    -------------+------------------------------           Adj R-squared =  0.1681
           Total |  8.3109e+11   427  1.9463e+09           Root MSE      =   40240

          faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |   3339.792   1250.039     2.67   0.008     882.7131    5796.871
              we |   5868.677   2278.067     2.58   0.010     1390.906    10346.45
             kl6 |  -14200.18    5043.72    -2.82   0.005    -24114.13   -4286.241
              x5 |   888.8431    2242.49     0.40   0.692    -3518.999    5296.685
              x6 |  -1067.186   1981.685    -0.54   0.590    -4962.388    2828.017
           _cons |  -7558.615   11195.41    -0.68   0.500    -29564.33     14447.1

Notice how much larger the estimated standard errors become compared to those in the preceding regression. If x5 and x6 had been uncorrelated with he and we, then we would expect to see very little effect on their standard errors.

One statistic that can be used to compare models is the adjusted R-squared:

    R̄² = 1 − [SSE/(N − K)] / [SST/(N − 1)]

This statistic is reported by default by Stata's regress command. The other model selection rules considered are the Akaike information criterion (AIC), given by

    AIC = ln(SSE/N) + 2K/N

and the Bayesian information criterion (SC), given by

    SC = ln(SSE/N) + K·ln(N)/N

The two statistics are very similar and consist of two terms. The first is a measure of fit; the better the fit, the smaller the SSE and the smaller its natural logarithm. Adding a regressor cannot increase the size of this term. The second term is a penalty imposed on the criterion for adding a regressor. As K increases, the penalty gets larger. The idea is to pick the model among competing ones that minimizes either AIC or SC. They differ only in how large the penalty is, with SC's being slightly larger. These criteria are available in Stata, but are computed differently. Stata's versions were developed for use under a larger set of data generation processes than the one considered here, so by all means use them if the need arises.¹ These criteria are used repeatedly in Principles of Econometrics, 4th Edition, and one goal of this manual is to replicate their results. Therefore, it is a good idea to write a program to compute and display the three model selection rules; once written, the program can be run multiple times to compare various model specifications. In Chapter 9, the model selection program is revisited and used within programming loops.
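The AIC and SC formulas above are simple enough to check by hand. The sketch below (Python, illustrative only) evaluates them for the regression of faminc on he shown earlier, taking SSE = 7.2654e+11, N = 428, and K = 2 from that regression's output.

```python
import math

# Values from the regression of faminc on he: e(rss), e(N), e(rank)
sse, n, k = 7.2654e11, 428, 2

aic = math.log(sse / n) + 2 * k / n             # Akaike information criterion
sc = math.log(sse / n) + k * math.log(n) / n    # Schwarz/Bayesian criterion (SC)
```

These reproduce the aic and bic values that the modelsel program prints for Model 1 (21.261776 and 21.280744) to the precision of the inputs.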
In Stata a program is a structure that allows one to execute blocks of code by simply typing the program's name. In the example below, a program called modelsel is created. Each time modelsel is typed in the Command window, the lines of code within the program will run. In this case, the program will compute AIC and SC, and print out the value of adjusted R², all based on the previously run regression.

Here's how programming works in Stata. A program starts by issuing the program command and giving it a name, e.g., progname. A block of Stata commands to be executed each time the program is run is then written. The program is closed by end. Here's the basic structure:

    program progname
        Stata commands
    end

After writing the program, it must be compiled. If the program is put in a separate .do file, then just run the .do file in the usual way. If the program resides along with other code in a .do file, then highlight the program code and execute the fragment in the usual way. The program only needs to be compiled once. The program is executed by typing the program's name, progname, at Stata's dot prompt. The modelsel program is:

    program modelsel
        scalar aic = ln(e(rss)/e(N))+2*e(rank)/e(N)
        scalar bic = ln(e(rss)/e(N))+e(rank)*ln(e(N))/e(N)
        di "r-square = " e(r2) " and adjusted r_square " e(r2_a)
        scalar list aic bic
    end

¹ In fact, Stata's post-estimation command estat ic uses AIC = −2·ln(L) + 2k and BIC = −2·ln(L) + k·ln(N), where L is the value of the maximized likelihood function when the errors of the model are normally distributed.

The program will reside in memory until you end your Stata session or tell Stata to drop the program from memory. This is accomplished in either of two ways. First, program drop progname will drop the given program (i.e., progname) from memory. The other method is to drop all programs from memory using program drop _all. Only use this method if you want to clear all user-defined programs from Stata's memory.
This particular program uses results that are produced and stored by Stata after a regression is run. Several of these will be familiar already: e(rss) contains the sum of squared errors and e(N) the sample size. The new result used is e(rank), which basically measures how many independent variables you have in the model, excluding any that are perfectly collinear with the others. In an identified regression model, this generally measures the number of coefficients in the model, K. Within the body of the program the scalars aic and bic (sometimes called SC — the Schwarz criterion) are computed, and a display command is issued to print out the value of adjusted R² in the model. Finally, the scalar list command is given to print out the computed values of the scalars.

To estimate a model and compute the model selection rules derived from it, run the modelsel program if you haven't already. Then, estimate the regression and type modelsel. For instance:

    regress faminc he
    estimates store Model1
    modelsel

This produces:

    . regress faminc he

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  1,   426) =   61.30
           Model |  1.0455e+11     1  1.0455e+11           Prob > F      =  0.0000
        Residual |  7.2654e+11   426  1.7055e+09           R-squared     =  0.1258
    -------------+------------------------------           Adj R-squared =  0.1237
           Total |  8.3109e+11   427  1.9463e+09           Root MSE      =   41297

          faminc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |   5155.484   658.4573     7.83   0.000     3861.254    6449.713
           _cons |   26191.27   8541.108     3.07   0.002     9403.308    42979.23

    . modelsel
    r-square = .12580103 and adjusted r_square .12374892
          aic =  21.261776
          bic =  21.280744

To use the model selection rules, run modelsel after each model and choose the one that either has the largest adjusted R² (usually a bad idea) or the smallest AIC or BIC (better, but not a great idea). Refer to the .do file at the end of the chapter for an example of this in use.
For the family income example the model selection code produces:

    Model 1 (he)
    r-square = .12580103 and adjusted r_square .12374892
    aic = 21.261776   bic = 21.280744

    Model 2 (he, we)
    r-square = .16130045 and adjusted r_square .15735363
    aic = 21.224993   bic = 21.253445

    Model 3 (he, we, kl6)
    r-square = .17717332 and adjusted r_square .17135143
    aic = 21.210559   bic = 21.248495

    Model 4 (he, we, kl6, x5, x6)
    r-square = .17779647 and adjusted r_square .16805472
    aic = 21.219148   bic = 21.276051

In the example, Stata's estimates store command is issued after each model and the results are accumulated using the estimates table command:

    estimates table Model1 Model2 Model3 Model4, b(%9.3f) stfmt(%9.3f) ///
        se stats(N r2 r2_a aic bic)

    Variable |     Model1        Model2        Model3        Model4
    ---------+------------------------------------------------------
          he |   5155.484      3131.509      3211.526      3339.792
             |    658.457       802.908       796.703      1250.039
          we |                 4522.641      4776.907      5868.677
             |                 1066.327      1061.164      2278.067
         kl6 |                              -1.43e+04     -1.42e+04
             |                               5003.928      5043.720
          x5 |                                              888.843
             |                                             2242.490
          x6 |                                            -1067.186
             |                                             1981.685
       _cons |  26191.269     -5533.631     -7755.331     -7558.615
             |   8541.108     11229.533     11162.934     11195.411
    ---------+------------------------------------------------------
           N |        428           428           428           428
          r2 |      0.126         0.161         0.177         0.178
        r2_a |      0.124         0.157         0.171         0.168
         aic |  10314.652     10298.909     10292.731     10296.407
         bic |  10322.770     10311.086     10308.967     10320.761
                                                       legend: b/se

In this table, Stata's own versions of the aic and bic statistics are computed for each regression. Obviously, Stata is using a different computation! No worries though; both sets of computations are valid and lead to the same conclusion. The largest adjusted R² is from Model 3, as are the smallest aic and bic statistics. It is clear that Model 3 is the preferred specification in this example.

Functional Form

Although theoretical considerations should be your primary guide to functional form selection, there are many instances when economic theory or common sense isn't enough.
This is where the RESET test is useful. RESET can be used as a crude check to determine whether you've made an obvious error in specifying the functional form. It is NOT really a test for omitted variables; instead it is a test of the adequacy of your functional form. The test is simple. The null hypothesis is that your functional form is adequate; the alternative is that it is not. Estimate the regression assuming that the functional form is correct and obtain the predicted values. Square and cube these, add them back to the model, reestimate the regression, and perform a joint test of the significance of ŷ² and ŷ³.

There are actually several variants of this test. The first adds only ŷ² to the model and tests its significance using either an F-test or the equivalent t-test. The second adds both ŷ² and ŷ³ and then does a joint test of their significance. We'll refer to these as RESET(1) and RESET(2), respectively.

The example is again based on the family income regression. Estimate the model using least squares and use the predict statement to save the linear predictions from the regression:

    regress faminc he we kl6
    predict yhat

Recall that the syntax to obtain the in-sample predicted values from a regression, ŷi, is predict yhat, xb. In this command yhat is a name that you designate. We can safely omit the xb option since this is Stata's default setting. Now, generate the squares and cubes of ŷi using:

    gen yhat2 = yhat^2
    gen yhat3 = yhat^3

Estimate the original regression with yhat2 added to the model. Test yhat2's significance using a t-test or an F-test. For the latter use Stata's test command as shown:

    regress faminc he we kl6 yhat2
    test yhat2

The test result is:

    . test yhat2

     ( 1)  yhat2 = 0
           Constraint 1 dropped

           F(  0,   423) =        .
                Prob > F =        .

Obviously there is a problem with this formulation. Stata tells us that the constraint was dropped, leaving nothing to test! The problem is that the data are ill-conditioned.
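A rough sense of what "ill-conditioned" means here: least squares must invert the cross-product matrix X'X, and with unscaled predictions its entries span more orders of magnitude than double-precision arithmetic can represent. The back-of-envelope sketch below (Python; the magnitudes are illustrative, taken from the scale of faminc) contrasts the unscaled and rescaled cases.

```python
n = 428          # observations
yhat = 9e4       # a typical fitted value of faminc (faminc averages ~91,213)

# Two entries of X'X when yhat2 and yhat3 are regressors:
small = n                        # the intercept block, a sum of 1's
large = n * (yhat ** 3) ** 2     # the sum of yhat3 * yhat3

# Doubles carry roughly 16 significant digits; a spread wider than that
# makes X'X numerically singular, which is why Stata drops the constraint.
spread = large / small

# After gen faminc_sc = faminc/10000 the fitted values are ~9:
spread_scaled = (n * ((yhat / 1e4) ** 3) ** 2) / n
```

The unscaled spread is on the order of 10^29, far past the ~10^16 limit of a double, while the rescaled spread is about 5 × 10^5, which is entirely manageable.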
For the computer to be able to do the arithmetic, it needs the variables to be of a similar magnitude in the dataset. Take a look at the summary statistics for the variables in the model:

    . summarize faminc he we kl6

        Variable |     Obs        Mean    Std. Dev.       Min        Max
    -------------+--------------------------------------------------------
          faminc |     428       91213    44117.35       9072   344146.3
              he |     428    12.61215    3.035163          4         17
              we |     428    12.65888    2.285376          5         17
             kl6 |     428    .1401869    .3919231          0          2

The magnitude of faminc is 1,000s of times larger than the other variables. The predictions from a linear regression will be of similar scale. When these are squared and cubed as required by the RESET tests, the conditioning worsens to the point that your computer can't do the arithmetic. The solution is to rescale faminc so that its magnitude is more in line with that of the other variables. Recall that in linear regression, rescaling dependent and independent variables only affects the magnitudes of the coefficients, not any of the substantive outcomes of the regression. So, drop the ill-conditioned predictions from the data and rescale faminc by dividing it by 10,000:

    drop yhat yhat2 yhat3
    gen faminc_sc = faminc/10000

Now, estimate the model, save the predictions, and generate the squares and cubes:

    regress faminc_sc he we kl6
    predict yhat
    gen yhat2 = yhat^2
    gen yhat3 = yhat^3

For RESET(1), add yhat2 to the model and test its significance using its t-ratio or an F-test:

    . regress faminc_sc he we kl6 yhat2

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  4,   423) =   24.59
           Model |   1567.8552     4  391.963801           Prob > F      =  0.0000
        Residual |  6743.01804   423   15.940941           R-squared     =  0.1887
    -------------+------------------------------           Adj R-squared =  0.1810
           Total |  8310.87325   427  19.4634034           Root MSE      =  3.9926

       faminc_sc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |  -.2381464   .2419692    -0.98   0.326    -.7137582    .2374655
              we |  -.4235106   .3832141    -1.11   0.270    -1.176752    .3297303
             kl6 |   1.088733   1.143928     0.95   0.342    -1.159758    3.337224
           yhat2 |    .099368   .0406211     2.45   0.015     .0195236    .1792123
           _cons |   8.724295    4.03894     2.16   0.031     .7854029    16.66319

    . test yhat2

     ( 1)  yhat2 = 0

           F(  1,   423) =     5.98
                Prob > F =   0.0148

Once again, the squared value of the t-ratio is equal to the F-statistic and they have the same p-value. For RESET(2), add yhat3 and test the joint significance of the squared and cubed predictions:

    . regress faminc_sc he we kl6 yhat2 yhat3

          Source |       SS       df       MS              Number of obs =     428
    -------------+------------------------------           F(  5,   422) =   19.69
           Model |  1572.19024     5  314.438048           Prob > F      =  0.0000
        Residual |  6738.68301   422  15.9684431           R-squared     =  0.1892
    -------------+------------------------------           Adj R-squared =  0.1796
           Total |  8310.87325   427  19.4634034           Root MSE      =  3.9961

       faminc_sc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
              he |  -.8451418   1.189891    -0.71   0.478    -3.183993    1.493709
              we |  -1.301616    1.72841    -0.75   0.452    -4.698981    2.095748
             kl6 |    3.74098   5.217533     0.72   0.474     -6.51461    13.99657
           yhat2 |   .3234706   .4320295     0.75   0.454    -.5257272    1.172668
           yhat3 |  -.0085692   .0164465    -0.52   0.603    -.0408964    .0237581
           _cons |   15.01851   12.73868     1.18   0.239    -10.02065    40.05767

    . test yhat2 yhat3

     ( 1)  yhat2 = 0
     ( 2)  yhat3 = 0

           F(  2,   422) =     3.12
                Prob > F =   0.0451

Both RESET(1) and RESET(2) are significant at the 5% level and you can conclude that the original linear functional form is not adequate to model this relationship.

Stata includes a post-estimation command that will perform a RESET(3) test after a regression. The syntax is:

    regress faminc he we kl6
    estat ovtest

This version of RESET adds ŷ², ŷ³, and ŷ⁴ to the model and tests their joint significance. Technically there is nothing wrong with this.
However, including this many powers of ŷ is not often recommended, since RESET loses statistical power rapidly as powers of ŷ are added.

6.4 POOR DATA, COLLINEARITY AND INSIGNIFICANCE

In the preceding section we mentioned that one of Stata's computations fails due to poor conditioning of the data. This is similar to what collinearity does to a regression. Collinearity makes it difficult or impossible to compute the parameter estimates and various other statistics with much precision. In a statistical model collinearity arises because of poor experimental design, or in our case, because of data that don't vary enough to permit precise measurement of the parameters. Unfortunately, there is no simple cure for this; rescaling the data has no effect on the linear relationships contained therein.

The example here uses cars.dta. Load the cars data, clearing any previous data out of memory:

    use cars, clear

A look at the summary statistics (summarize) reveals reasonable variation in the data:

    . summarize

        Variable |     Obs        Mean    Std. Dev.       Min        Max
    -------------+--------------------------------------------------------
             mpg |     392    23.44592    7.805007          9       46.6
             cyl |     392    5.471939    1.705783          3          8
             eng |     392     194.412     104.644         68        455
             wgt |     392    2977.584    849.4026       1613       5140

Each of the variables contains variation as measured by their range and standard deviations. Simple correlations (corr) reveal a potential problem:

    . corr
    (obs=392)

             |      mpg      cyl      eng      wgt
    ---------+------------------------------------
         mpg |   1.0000
         cyl |  -0.7776   1.0000
         eng |  -0.8051   0.9508   1.0000
         wgt |  -0.8322   0.8975   0.9330   1.0000

Notice that among the potential explanatory variables (cyl, eng, wgt), the correlations are very high; the smallest occurs between cyl and wgt and it is nearly 0.9. Estimating independent effects of each of these variables on miles per gallon will prove challenging.

First, estimate a simple model of miles per gallon (mpg) as a function of the number of cylinders (cyl) in the engine:

    . regress mpg cyl

          Source |       SS       df       MS              Number of obs =     392
    -------------+------------------------------           F(  1,   390) =  596.56
           Model |  14403.0829     1  14403.0829           Prob > F      =  0.0000
        Residual |  9415.91022   390  24.1433595           R-squared     =  0.6047
    -------------+------------------------------           Adj R-squared =  0.6037
           Total |  23818.9931   391   60.918141           Root MSE      =  4.9136

             mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
             cyl |  -3.558078   .1456755   -24.42   0.000    -3.844486   -3.271671
           _cons |   42.91551   .8348668    51.40   0.000      41.2741    44.55691

Add the car's engine displacement in cubic inches (eng) and weight (wgt) to the model:

    . regress mpg cyl eng wgt

          Source |       SS       df       MS              Number of obs =     392
    -------------+------------------------------           F(  3,   388) =  300.76
           Model |  16656.4441     3  5552.14802           Prob > F      =  0.0000
        Residual |  7162.54906   388   18.460178           R-squared     =  0.6993
    -------------+------------------------------           Adj R-squared =  0.6970
           Total |  23818.9931   391   60.918141           Root MSE      =  4.2965

             mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
    -------------+----------------------------------------------------------------
             cyl |  -.2677968   .4130673    -0.65   0.517    -1.079927    .5443336
             eng |   -.012674   .0082501    -1.54   0.125    -.0288944    .0035465
             wgt |  -.0057079   .0007139    -8.00   0.000    -.0071115   -.0043043
           _cons |   44.37096   1.480685    29.97   0.000     41.45979    47.28213

Now, test a series of hypotheses. The first is for the significance of cyl, the second for the significance of eng, and the third is of their joint significance:

    test cyl
    test eng
    test cyl eng

The results are:

    . test cyl
     ( 1)  cyl = 0
           F(  1,   388) =     0.42
                Prob > F =   0.5172

    . test eng
     ( 1)  eng = 0
           F(  1,   388) =     2.36
                Prob > F =   0.1253

    . test eng cyl
     ( 1)  eng = 0
     ( 2)  cyl = 0
           F(  2,   388) =     4.30
                Prob > F =   0.0142

Essentially, neither of the variables is individually significant, but they are jointly significant at the 5% level. This can happen because you were not able to measure their separate influences precisely enough. As revealed by the simple correlations, the independent variables cyl, eng, and wgt are highly correlated with one another.
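The p-values that test reports can be verified with a little arithmetic. When the numerator degrees of freedom equal 2, the F survival function has the closed form P(F > f) = (1 + 2f/d2)^(−d2/2); the Python sketch below uses it to reproduce the p-value for the joint test of eng and cyl, and for the earlier joint test of Big Andy's two conjectures.

```python
def f_sf_df1_2(f, d2):
    # P(F(2, d2) > f): closed-form survival function of the F
    # distribution when the numerator degrees of freedom equal 2
    return (1 + 2 * f / d2) ** (-d2 / 2)

p_cars = f_sf_df1_2(4.30, 388)   # joint test of eng and cyl
p_andy = f_sf_df1_2(5.74, 71)    # joint test of Andy's two conjectures
```

Both values agree with Stata's output: about 0.0142 for the cars test and 0.0049 for the advertising example.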
This can be verified by estimating several auxiliary regressions where each of the independent variables is regressed on all of the others:

    regress cyl eng wgt
    scalar r1 = e(r2)
    regress eng wgt cyl
    scalar r2 = e(r2)
    regress wgt eng cyl
    scalar r3 = e(r2)

An R² above 0.8 indicates strong collinearity, which may adversely affect the precision with which you can estimate the parameters of a model that contains all the variables. In the example, the R²s are 0.90, 0.94, and 0.87, all well above the 0.8 threshold. This is further confirmation that it will be difficult to differentiate the individual contributions of displacement and number of cylinders to a car's gas mileage.

    . scalar list r1 r2 r3
            r1 =  .90490236
            r2 =  .93665456
            r3 =  .87160914

The advantage of using auxiliary regressions instead of simple correlations to detect collinearity is not that obvious in this particular example. Collinearity may be hard to detect using correlations when there are many variables in the regression. Although no two variables may be highly correlated, several variables may be linearly related in ways that are not apparent.
Looking at the R²s from the auxiliary multiple regressions will be more useful in these situations.

KEY TERMS

AIC, BIC, cnsreg, collinearity, constraint, e(df_r), e(r2), e(r2_a), e(rank), estat ovtest, estimates store, estimates table, F-statistic, Ftail(J,N-K,fstat), functional form, invFtail(J,N-K,alpha), invttail(df,alpha), irrelevant variables, joint significance test, lincom, Manage constraints, model selection, omitted variables, overall F-test, predict, xb, program, program drop progname, program drop _all, regress, RESET, restricted regression, restricted sum of squares, Schwarz criterion, t-ratio, test (hypoth 1)(hypoth 2), testparm varlist, ttail(df,tstat), unrestricted sum of squares

CHAPTER 6 DO-FILE [CHAP06.DO]

* file chap06.do for Using Stata for Principles of Econometrics, 4e
* cd c:\data\poe4stata
* Stata do-file
* copyright C 2011 by Lee C. Adkins and R. Carter Hill
* used for "Using Stata for Principles of Econometrics, 4e"
* by Lee C. Adkins and R. Carter Hill (2011)
* John Wiley and Sons, Inc.

* setup
version 11.1
capture log close
set more off

* open log
log using chap06, replace text

use andy, clear

* -------------------------------------------
* The following block estimates Andy's sales
* and uses the difference in SSE to test a
* hypothesis using an F-statistic
* -------------------------------------------

* Unrestricted Model
regress sales price advert c.advert#c.advert
scalar sseu = e(rss)
scalar df_unrest = e(df_r)

* Restricted Model
regress sales price
scalar sser = e(rss)
scalar df_rest = e(df_r)
scalar J = df_rest - df_unrest

* F-statistic, critical value, pvalue
scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))
scalar crit1 = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit1

* -------------------------------------------
* Here, we use Stata's test statement to test
* hypothesis using an F-statistic
* Note: Three versions of the syntax
* -------------------------------------------

* -------------------------------------------
* Overall Significance of the Model
* Uses same Unrestricted Model as above
* -------------------------------------------

* Unrestricted Model (all variables)
regress sales price advert c.advert#c.advert
scalar sseu = e(rss)
scalar df_unrest = e(df_r)

* Restricted Model (no explanatory variables)
regress sales
scalar sser = e(rss)
scalar df_rest = e(df_r)
scalar J = df_rest - df_unrest

* F-statistic, critical value, pvalue
scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))
scalar crit2 = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit2

* -------------------------------------------
* Relationship between t and F
* -------------------------------------------

* Unrestricted Regression
regress sales price advert c.advert#c.advert
scalar sseu = e(rss)
scalar df_unrest = e(df_r)
scalar tratio = _b[price]/_se[price]
scalar t_sq = tratio^2

* Restricted Regression
regress sales advert c.advert#c.advert
scalar sser = e(rss)
scalar df_rest = e(df_r)
scalar J = df_rest - df_unrest

* F-statistic, critical value, pvalue
scalar fstat = ((sser-sseu)/J)/(sseu/(df_unrest))
scalar crit = invFtail(J,df_unrest,.05)
scalar pvalue = Ftail(J,df_unrest,fstat)
scalar list sseu sser J df_unrest fstat pvalue crit tratio t_sq

* -------------------------------------------
* Optimal Advertising
* Uses both syntaxes for test
* -------------------------------------------

use beer, clear
gen lq = ln(q)
gen lpb = ln(pb)
gen lpl = ln(pl)
gen lpr = ln(pr)
gen li = ln(i)

constraint 1 lpb+lpl+lpr+li=0
cnsreg lq lpb lpl lpr li, c(1)

* -------------------------------------------
* MROZ Examples
* -------------------------------------------
use edu_inc, clear
regress faminc he we
regress faminc he

* correlations among regressors
correlate

* Irrelevant variables
regress faminc he we kl6 x5 x6

* -------------------------------------------
* Stata uses the estat ovtest following a
* regression to do a RESET(3) test.
* -------------------------------------------
regress faminc he we kl6
estat ovtest

program modelsel
    scalar aic = ln(e(rss)/e(N))+2*e(rank)/e(N)
    scalar bic = ln(e(rss)/e(N))+e(rank)*ln(e(N))/e(N)
    di "r-square = "e(r2) " and adjusted r-square " e(r2_a)
    scalar list aic bic
end

quietly regress faminc he
di "Model 1 (he) "
modelsel
estimates store Model1
quietly regress faminc he we
di "Model 2 (he, we) "
modelsel
estimates store Model2
quietly regress faminc he we kl6
di "Model 3 (he, we, kl6) "
modelsel
estimates store Model3
quietly regress faminc he we kl6 x5 x6
di "Model 4 (he, we, kl6, x5, x6) "
modelsel
estimates store Model4

estimates table Model1 Model2 Model3 Model4, b(%9.3f) stfmt(%9.3f) se ///
    stats(N r2 r2_a aic bic)

regress faminc he we kl6
predict yhat
gen yhat2=yhat^2
gen yhat3=yhat^3
summarize faminc he we kl6

*-------------------------------
* Data are ill-conditioned
* Reset test won't work here
* Try it anyway!
*-------------------------------
regress faminc he we kl6 yhat2
test yhat2
regress faminc he we kl6 yhat2 yhat3
test yhat2 yhat3

*----------------------------------------
* Drop the previously defined predictions
* from the dataset
*----------------------------------------
drop yhat yhat2 yhat3

*-------------------------------
* Recondition the data by
* scaling FAMINC by 10000
*-------------------------------
gen faminc_sc = faminc/10000
regress faminc_sc he we kl6
predict yhat
gen yhat2 = yhat^2
gen yhat3 = yhat^3
summarize faminc_sc faminc he we kl6 yhat yhat2 yhat3
regress faminc_sc he we kl6 yhat2
test yhat2
regress faminc_sc he we kl6 yhat2 yhat3
test yhat2 yhat3

* Extraneous regressors
regress faminc he we kl6 x5 x6

* -------------------------------------------
* Cars Example
* -------------------------------------------
use cars, clear
summarize
corr
regress mpg cyl
regress mpg cyl eng wgt
test cyl
test eng
test eng cyl

* Auxiliary regressions for collinearity
* Check: r2 > .8 means severe collinearity
regress cyl eng wgt
scalar r1 = e(r2)
regress eng wgt cyl
scalar r2 = e(r2)
regress wgt eng cyl
scalar r3 = e(r2)
scalar list r1 r2 r3

log close
program drop modelsel
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7859296,"math_prob":0.9730712,"size":51840,"snap":"2021-04-2021-17","text_gpt3_token_len":14938,"char_repetition_ratio":0.17244773,"word_repetition_ratio":0.13847806,"special_character_ratio":0.3336227,"punctuation_ratio":0.13817877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99596745,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T23:01:07Z\",\"WARC-Record-ID\":\"<urn:uuid:8f6cfc04-9650-4618-8683-22780c9902c8>\",\"Content-Length\":\"111890\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a0ea82d-4d70-4ceb-8ea6-6fb0e7389d77>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c522370-253d-4252-92fe-1261049200e2>\",\"WARC-IP-Address\":\"104.21.23.229\",\"WARC-Target-URI\":\"https://kipdf.com/further-inference-in-the-multiple-regression-model_5aef68b57f8b9a81328b45ea.html\",\"WARC-Payload-Digest\":\"sha1:BQGYRMZ5IBSFW227K3SWIEAXAUHXY4GI\",\"WARC-Block-Digest\":\"sha1:A6UT4GU45HWAZYWTZWOLO2IYFHX4MUS7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538741.56_warc_CC-MAIN-20210123222657-20210124012657-00771.warc.gz\"}"}
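The do-file above computes an F-statistic by hand from restricted and unrestricted sums of squared errors. The same logic can be sketched in Python with SciPy; the SSE and degrees-of-freedom numbers below are hypothetical placeholders, since the `andy` dataset is not reproduced here:

```python
from scipy import stats

def f_test(sse_restricted, sse_unrestricted, df_unrestricted, num_restrictions):
    """F-test of J linear restrictions, mirroring the Stata listing's
    fstat = ((sser - sseu)/J)/(sseu/df_unrest)."""
    fstat = ((sse_restricted - sse_unrestricted) / num_restrictions) / (
        sse_unrestricted / df_unrestricted)
    # invFtail(J, df, .05) -> upper 5% critical value of F(J, df)
    crit = stats.f.ppf(0.95, num_restrictions, df_unrestricted)
    # Ftail(J, df, fstat) -> right-tail p value
    pvalue = stats.f.sf(fstat, num_restrictions, df_unrestricted)
    return fstat, crit, pvalue

# Hypothetical values standing in for e(rss) and e(df_r) from the two regressions
fstat, crit, pvalue = f_test(sse_restricted=2254.0, sse_unrestricted=1532.1,
                             df_unrestricted=71, num_restrictions=2)
```

Stata's `invFtail(J, df, .05)` corresponds to `stats.f.ppf(0.95, J, df)`, and `Ftail` to the survival function `stats.f.sf`.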
https://brajasorensen.com/3-digit-multiplication-worksheets-for-grade-2/
[ "# 3 Digit Multiplication Worksheets For Grade 2", null, "Braja Sorensen Team November 23, 2020 Worksheet\n\nWhat's more, these worksheets with answer keys are absolutely free! Worksheets by grade math tutorials geometry arithmetic pre algebra & algebra.", null, "3 Free Math Worksheets Second Grade 2 Addition Add 2 Digit\n\n### The coloring portion makes a symmetric design that helps students self check their answers and makes it easy for teachers to grade.", null, "3 digit multiplication worksheets for grade 2. Tap on print, pdf or image button to print or download this 4th grade worksheet for practice multiplying 3 digit multiplicand and 2 digit multiplier. In particular recall of the 2, 5 and 10 'times tables', multiplying by whole tens and solving missing factor problems. Some of the worksheets for this concept are 3 digit by 2 digit multiplication 1, multiplication word problems, multiplication, multiplication 3 digit by 2 digit multiplication, grade 4 multiplication work, long multiplication work multiplying 3 digit by 2, two digit multiplication work, two.\n\nFor more multiplication worksheets grade 5 3 digit by 2 digit, please read more here and subscribe to our mailing list! Practice your multiplication and hone your skills with these printable timestable worksheets for 2 and 3 digits. Remember to put the extra zero in the ones place on the second line of multiplication.\n\nSome of the worksheets displayed are multiplication 3 digit by 2 digit multiplication, grade 4 multiplication work, grade 4 multiplication work, multiplication, long multiplication work multiplying 3 digit by 2, two digit multiplication work, 3 digit by 2 digit multiplication 1, two digit multiplication work. Children will solve a total of 15 problems using the standard algorithm, regrouping as needed. 
Here is a collection of our printable worksheets for topic 2 and 3 digit multiplication of chapter multiply by 1 digit in section multiplication.\n\nThere are activities with vertical problems, horizontal problems, and lattice grids. Worksheets > math > grade 2 > multiplication. A brief description of the worksheets is on each of the worksheet widgets.\n\n25 whole number problems that are well balanced and gradually increase in difficulty. Addition worksheets and subtraction worksheets aren’t what most youngsters wish to be doing throughout their time. 3rd grade math worksheets for problem solving with multiplication.\n\nIt may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Click on the images to view, download, or print. It can also be used as an assessment or quiz.\n\nEnjoy a variety of crossword puzzles, math riddles, word problems, a scoot game, and a custom worksheet generator tool. (456 x 365) includes vertical, horizontal, and lattice problems. Single digit multiplication, 2 digit by 1 digit multiplication, 2 digit by 2 digit multiplication, 3 digit by 1 digit multiplication, 3 digit by 2 digit multiplication, 3 digit by 3 digit multiplication, 4 digit by 3 digit multiplication\n\nThese grade 2 multiplication worksheets emphasize early multiplication skills; 1 x 2 and 2 x 3 digit multiplication worksheets. These worksheets cover most multiplication subtopics and were also conceived in line with common core state standards.\n\n1, 2 and 3 digit word problems for multiplication. Quick links to download / preview the below listed worksheets : Math in basic phrases is often everything an easy task to instruct, but in relation to educating the more.\n\n2 and 3 digit multiplication : 3 by 2 digit multiplication color worksheet. 
Get printable multiplication worksheets here including the worksheets for multiplication worksheets grade 5 3 digit by 2 digit as well as other multiplication worksheets.\n\nThis provides great extra practice for kids. 3 digit x 2 digit and 3 digit x 1 digit multiplication worksheets for 3rd and 4th grade. These worksheets are pdf files.\n\nTap on print, pdf or image button to print or download this 5th grade worksheet for practice multiplying 3 digit multiplicand and 3 digit multiplier.", null, "Multiplication with a Riddle Multiply one and three digit", null, "4 Free Math Worksheets Second Grade 2 Addition Adding 3", null, "3 Digit Addition With Regrouping Carrying 6 Worksheets", null, "Double Digit Multiplication With Regrouping, Two Digit", null, "Double Digit Multiplication With Regrouping, Two Digit", null, "Free 3.NBT.2 Halloween Themed 3Digit Addition with", null, "The Multiplying 2Digit by 2Digit Numbers (A) Math", null, "5 Free Math Worksheets Third Grade 3 Multiplication", null, "3.NBT.2 Valentine's Day Themed 3Digit Addition with", null, "Math Worksheets Adding And Subtracting Three Digit Numbers", null, "3 Digit by 1 Digit Multiplication Riddle I love how this", null, "Fill in Multiplication Worksheets 10 Multiplication", null, "Summer Review Packets! 3rd grade math, Multiplication", null, "February FUNFilled Learning! Math literacy, Math", null, "Collection Of Three Digit Subtraction With Regrouping", null, "Requested and added. 3Digit by 3Digit Multiplication", null, "The 3Digit by 1Digit Multiplication with Grid Support (A", null, "Two Digit Multiplication With Regrouping, Valentine's Day", null, "5 Free Math Worksheets Second Grade 2 Addition Add 3 3\n\n### RELATED ARTICLES", null, "Feb 03, 2021\n\n### 1st Grade Language Arts Worksheets Free Printables", null, "Feb 03, 2021\n\n### Photos of 3 Digit Multiplication Worksheets For Grade 2\n\n### 1st Grade Language Arts Worksheets Free Printables", null, "Feb 03, 2021\n\n#### Single Digit Addition Worksheets Printable", null, "Feb 03, 2021" ]
[ null, "https://brajasorensen.com/k/2020/12/b403e7d670b8ec164009a2bee347e638-4-692x980.png", null, "https://brajasorensen.com/k/2020/12/367dd2869f4b3abeeb2c71713723cb63.jpg", null, "https://brajasorensen.com/k/2020/12/9aa4a07aacbc2d779c3ce5798992b6aa-2.png", null, "https://i.pinimg.com/originals/34/79/ca/3479ca6ec186b22339dafddc4451acc9.png", null, "https://i.pinimg.com/originals/c3/d2/55/c3d25539ae6d383d2c06475394ec0036.jpg", null, "https://brajasorensen.com/k/2020/12/9aa4a07aacbc2d779c3ce5798992b6aa-2.png", null, "https://i.pinimg.com/originals/14/b6/04/14b6045be28ca764ab7996e3cdef80d9.jpg", null, "https://brajasorensen.com/k/2020/12/04872dbba17d9fc3bb0738fed0198c9d.jpg", null, "https://brajasorensen.com/k/2020/12/3572e79954579dec76e8df30cc952ea4.jpg", null, "https://brajasorensen.com/k/2020/12/c4a45110e16462cec7f8d7dad780f8ed.png", null, "https://brajasorensen.com/k/2020/12/a98a69cf9fbb5ebe10adb298f3aa07ce.jpg", null, "https://i.pinimg.com/736x/20/f1/d9/20f1d9d99267f6c98c8a1a778af0245f.jpg", null, "https://i.pinimg.com/originals/2c/b3/56/2cb356ffa8efd400c40412b0c8983412.jpg", null, "https://i.pinimg.com/originals/28/07/23/2807234da83798af4032b21898a4ebdd.png", null, "https://brajasorensen.com/k/2020/12/b403e7d670b8ec164009a2bee347e638-6.png", null, "https://i.pinimg.com/originals/a3/d2/31/a3d231f909631e51dc6765dd84d790e4.png", null, "https://i.pinimg.com/originals/e3/24/9c/e3249c1e9460a08cddb76d6a92a2b481.png", null, "https://i.pinimg.com/originals/fe/a3/5d/fea35d649ae713a4c0b4175cfa75aa3d.jpg", null, "https://i.pinimg.com/originals/1c/14/84/1c1484ce3b0e6473df605233abdc2379.jpg", null, "https://i.pinimg.com/originals/ef/67/a0/ef67a083ff8a6d8388d85576d9ae3a18.jpg", null, "https://i.pinimg.com/originals/d8/22/f9/d822f9b8ae46678244f982074c73eda8.png", null, "https://i.pinimg.com/originals/7e/62/96/7e6296118150f34dbac9ea21fba816e7.jpg", null, "https://brajasorensen.com/k/2020/12/318b2c8e6b372ee1e304e11a96fa87f4-98x98.jpg", null, 
"https://brajasorensen.com/k/2020/12/7d6c1fd0dd9c5e81a8c78acc5d06cf46-98x98.jpg", null, "https://brajasorensen.com/k/2020/12/7d6c1fd0dd9c5e81a8c78acc5d06cf46-98x98.jpg", null, "https://brajasorensen.com/k/2020/12/0e9be430523dac4c436d0cae93967076-98x98.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83152884,"math_prob":0.88943064,"size":5129,"snap":"2021-43-2021-49","text_gpt3_token_len":1096,"char_repetition_ratio":0.29912195,"word_repetition_ratio":0.12911393,"special_character_ratio":0.19984402,"punctuation_ratio":0.10520362,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931832,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,1,null,4,null,null,null,null,null,null,null,null,null,8,null,9,null,null,null,4,null,7,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,8,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T20:26:02Z\",\"WARC-Record-ID\":\"<urn:uuid:df9d27ed-833c-4200-96be-b647b78d85e8>\",\"Content-Length\":\"40923\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32c26dbf-42d0-4d54-b94c-3a70331d59d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f6ffba0-1b78-4252-9c3b-295c75e612aa>\",\"WARC-IP-Address\":\"104.21.64.75\",\"WARC-Target-URI\":\"https://brajasorensen.com/3-digit-multiplication-worksheets-for-grade-2/\",\"WARC-Payload-Digest\":\"sha1:5KVPCZNEWZJZSUHZFJK7NJYS4HYKTR46\",\"WARC-Block-Digest\":\"sha1:BQHIYLDI32T4OE4F664M5CI32PTK3TAH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588242.22_warc_CC-MAIN-20211027181907-20211027211907-00633.warc.gz\"}"}
https://statkat.org/stattest.php?t=34&t2=44&t3=35&t4=40
[ "# Sign test - overview\n\nThis page offers structured overviews of one or more selected methods.\n\nSign test\nBinomial test for a single proportion\nWilcoxon signed-rank test\nCochran's Q test\nIndependent variable | Independent variable | Independent variable | Independent/grouping variable\n2 paired groups | None | 2 paired groups | One within subject factor ($\\geq 2$ related groups)\nDependent variable | Dependent variable | Dependent variable | Dependent variable\nOne of ordinal level | One categorical with 2 independent groups | One quantitative of interval or ratio level | One categorical with 2 independent groups\nNull hypothesis | Null hypothesis | Null hypothesis | Null hypothesis\n• H0: P(first score of a pair exceeds second score of a pair) = P(second score of a pair exceeds first score of a pair)\nIf the dependent variable is measured on a continuous scale, this can also be formulated as:\n• H0: the population median of the difference scores is equal to zero\nA difference score is the difference between the first score of a pair and the second score of a pair.\nH0: $\\pi = \\pi_0$\n\nHere $\\pi$ is the population proportion of 'successes', and $\\pi_0$ is the population proportion of successes according to the null hypothesis.\nH0: $m = 0$\n\nHere $m$ is the population median of the difference scores. A difference score is the difference between the first score of a pair and the second score of a pair.\n\nSeveral different formulations of the null hypothesis can be found in the literature, and we do not agree with all of them. 
Make sure you (also) learn the one that is given in your text book or by your teacher.\nH0: $\\pi_1 = \\pi_2 = \\ldots = \\pi_I$\n\nHere $\\pi_1$ is the population proportion of 'successes' for group 1, $\\pi_2$ is the population proportion of 'successes' for group 2, and $\\pi_I$ is the population proportion of 'successes' for group $I.$\nAlternative hypothesis | Alternative hypothesis | Alternative hypothesis | Alternative hypothesis\n• H1 two sided: P(first score of a pair exceeds second score of a pair) $\\neq$ P(second score of a pair exceeds first score of a pair)\n• H1 right sided: P(first score of a pair exceeds second score of a pair) > P(second score of a pair exceeds first score of a pair)\n• H1 left sided: P(first score of a pair exceeds second score of a pair) < P(second score of a pair exceeds first score of a pair)\nIf the dependent variable is measured on a continuous scale, this can also be formulated as:\n• H1 two sided: the population median of the difference scores is different from zero\n• H1 right sided: the population median of the difference scores is larger than zero\n• H1 left sided: the population median of the difference scores is smaller than zero\nH1 two sided: $\\pi \\neq \\pi_0$\nH1 right sided: $\\pi > \\pi_0$\nH1 left sided: $\\pi < \\pi_0$\nH1 two sided: $m \\neq 0$\nH1 right sided: $m > 0$\nH1 left sided: $m < 0$\nH1: not all population proportions are equal\nAssumptions | Assumptions | Assumptions | Assumptions\n• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another\n• Sample is a simple random sample from the population. That is, observations are independent of one another\n• The population distribution of the difference scores is symmetric\n• Sample of difference scores is a simple random sample from the population of difference scores. 
That is, difference scores are independent of one another\nNote: sometimes it is considered sufficient for the data to be measured on an ordinal scale, rather than an interval or ratio scale. However, since the test statistic is based on ranked difference scores, we need to know whether a change in scores from, say, 6 to 7 is larger than/smaller than/equal to a change from 5 to 6. This is impossible to know for ordinal scales, since for these scales the size of the difference between values is meaningless.\n• Sample of 'blocks' (usually the subjects) is a simple random sample from the population. That is, blocks are independent of one another\nTest statistic | Test statistic | Test statistic | Test statistic\n$W =$ number of difference scores that is larger than 0\n$X$ = number of successes in the sample\nTwo different types of test statistics can be used, but both will result in the same test outcome. We will denote the first option the $W_1$ statistic (also known as the $T$ statistic), and the second option the $W_2$ statistic. In order to compute each of the test statistics, follow the steps below:\n1. For each subject, compute the sign of the difference score $\\mbox{sign}_d = \\mbox{sgn}(\\mbox{score}_2 - \\mbox{score}_1)$. The sign is 1 if the difference is larger than zero, -1 if the difference is smaller than zero, and 0 if the difference is equal to zero.\n2. For each subject, compute the absolute value of the difference score $|\\mbox{score}_2 - \\mbox{score}_1|$.\n3. Exclude subjects with a difference score of zero. This leaves us with a remaining number of difference scores equal to $N_r$.\n4. Assign ranks $R_d$ to the $N_r$ remaining absolute difference scores. The smallest absolute difference score corresponds to a rank score of 1, and the largest absolute difference score corresponds to a rank score of $N_r$. 
If there are ties, assign them the average of the ranks they occupy.\nThen compute the test statistic:\n\n• $W_1 = \\sum\\, R_d^{+}$\nor\n$W_1 = \\sum\\, R_d^{-}$\nThat is, sum all ranks corresponding to a positive difference or sum all ranks corresponding to a negative difference. Theoretically, both definitions will result in the same test outcome. However:\n• tables with critical values for $W_1$ are usually based on the smaller of $\\sum\\, R_d^{+}$ and $\\sum\\, R_d^{-}$. So if you are using such a table, pick the smaller one.\n• If you are using the normal approximation to find the $p$ value, it makes things most straightforward if you use $W_1 = \\sum\\, R_d^{+}$ (if you use $W_1 = \\sum\\, R_d^{-}$, the right and left sided alternative hypotheses 'flip').\n• $W_2 = \\sum\\, \\mbox{sign}_d \\times R_d$\nThat is, for each remaining difference score, multiply the rank of the absolute difference score by the sign of the difference score, and then sum all of the products.\nIf a failure is scored as 0 and a success is scored as 1:\n\n$Q = k(k - 1) \\dfrac{\\sum_{groups} \\Big (\\mbox{group total} - \\frac{\\mbox{grand total}}{k} \\Big)^2}{\\sum_{blocks} \\mbox{block total} \\times (k - \\mbox{block total})}$\n\nHere $k$ is the number of related groups (usually the number of repeated measurements), a group total is the sum of the scores in a group, a block total is the sum of the scores in a block (usually a subject), and the grand total is the sum of all the scores.\n\nBefore computing $Q$, first exclude blocks with equal scores in all $k$ groups.\nSampling distribution of $W$ if H0 were true | Sampling distribution of $X$ if H0 were true | Sampling distribution of $W_1$ and of $W_2$ if H0 were true | Sampling distribution of $Q$ if H0 were true\nThe exact distribution of $W$ under the null hypothesis is the Binomial($n$, $P$) distribution, with $n =$ number of positive differences $+$ number of negative differences, and $P = 0.5$.\n\nIf $n$ is large, $W$ is approximately 
normally distributed under the null hypothesis, with mean $nP = n \\times 0.5$ and standard deviation $\\sqrt{nP(1-P)} = \\sqrt{n \\times 0.5(1 - 0.5)}$. Hence, if $n$ is large, the standardized test statistic $$z = \\frac{W - n \\times 0.5}{\\sqrt{n \\times 0.5(1 - 0.5)}}$$ follows approximately the standard normal distribution if the null hypothesis were true.\nBinomial($n$, $P$) distribution.\n\nHere $n = N$ (total sample size), and $P = \\pi_0$ (population proportion according to the null hypothesis).\nSampling distribution of $W_1$:\nIf $N_r$ is large, $W_1$ is approximately normally distributed with mean $\\mu_{W_1}$ and standard deviation $\\sigma_{W_1}$ if the null hypothesis were true. Here $$\\mu_{W_1} = \\frac{N_r(N_r + 1)}{4}$$ $$\\sigma_{W_1} = \\sqrt{\\frac{N_r(N_r + 1)(2N_r + 1)}{24}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \\frac{W_1 - \\mu_{W_1}}{\\sigma_{W_1}}$$ follows approximately the standard normal distribution if the null hypothesis were true.\n\nSampling distribution of $W_2$:\nIf $N_r$ is large, $W_2$ is approximately normally distributed with mean $0$ and standard deviation $\\sigma_{W_2}$ if the null hypothesis were true. 
Here $$\\sigma_{W_2} = \\sqrt{\\frac{N_r(N_r + 1)(2N_r + 1)}{6}}$$ Hence, if $N_r$ is large, the standardized test statistic $$z = \\frac{W_2}{\\sigma_{W_2}}$$ follows approximately the standard normal distribution if the null hypothesis were true.\n\nIf $N_r$ is small, the exact distribution of $W_1$ or $W_2$ should be used.\n\nNote: if ties are present in the data, the formula for the standard deviations $\\sigma_{W_1}$ and $\\sigma_{W_2}$ is more complicated.\nIf the number of blocks (usually the number of subjects) is large, approximately the chi-squared distribution with $k - 1$ degrees of freedom\nSignificant? | Significant? | Significant? | Significant?\nIf $n$ is small, the table for the binomial distribution should be used:\nTwo sided:\n• Check if $W$ observed in sample is in the rejection region or\n• Find two sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\\alpha$\nRight sided:\n• Check if $W$ observed in sample is in the rejection region or\n• Find right sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\\alpha$\nLeft sided:\n• Check if $W$ observed in sample is in the rejection region or\n• Find left sided $p$ value corresponding to observed $W$ and check if it is equal to or smaller than $\\alpha$\n\nIf $n$ is large, the table for standard normal probabilities can be used:\nTwo sided:\nRight sided:\nLeft sided:\nTwo sided:\n• Check if $X$ observed in sample is in the rejection region or\n• Find two sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\\alpha$\nRight sided:\n• Check if $X$ observed in sample is in the rejection region or\n• Find right sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than $\\alpha$\nLeft sided:\n• Check if $X$ observed in sample is in the rejection region or\n• Find left sided $p$ value corresponding to observed $X$ and check if it is equal to or smaller than 
$\\alpha$\nFor large samples, the table for standard normal probabilities can be used:\nTwo sided:\nRight sided:\nLeft sided:\nIf the number of blocks is large, the table with critical $X^2$ values can be used. If we denote $X^2 = Q$:\n• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or\n• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\\alpha$\nEquivalent to | n.a. | n.a. | Equivalent to\nTwo sided sign test is equivalent to the Friedman test with two paired groups; Cochran's Q test is equivalent to the Friedman test, with a categorical dependent variable consisting of two independent groups.\nExample context | Example context | Example context | Example context\nDo people tend to score higher on mental health after a mindfulness course?\nIs the proportion of smokers amongst office workers different from $\\pi_0 = 0.2$?\nIs the median of the differences between the mental health scores before and after an intervention different from 0?\nSubjects perform three different tasks, which they can either perform correctly or incorrectly. 
Is there a difference in task performance between the three different tasks?\nSPSS | SPSS | SPSS | SPSS\nAnalyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...\n• Put the two paired variables in the boxes below Variable 1 and Variable 2\n• Under Test Type, select the Sign test\nAnalyze > Nonparametric Tests > Legacy Dialogs > Binomial...\n• Put your dichotomous variable in the box below Test Variable List\n• Fill in the value for $\\pi_0$ in the box next to Test Proportion\nAnalyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...\n• Put the two paired variables in the boxes below Variable 1 and Variable 2\n• Under Test Type, select the Wilcoxon test\nAnalyze > Nonparametric Tests > Legacy Dialogs > K Related Samples...\n• Put the $k$ variables containing the scores for the $k$ related groups in the white box below Test Variables\n• Under Test Type, select Cochran's Q test\nJamovi | Jamovi | Jamovi | Jamovi\nJamovi does not have a specific option for the sign test. However, you can do the Friedman test instead. The $p$ value resulting from this Friedman test is equivalent to the two sided $p$ value that would have resulted from the sign test. Go to:\n\nANOVA > Repeated Measures ANOVA - Friedman\n• Put the two paired variables in the box below Measures\nFrequencies > 2 Outcomes - Binomial test\n• Put your dichotomous variable in the white box at the right\n• Fill in the value for $\\pi_0$ in the box next to Test value\n• Under Hypothesis, select your alternative hypothesis\nT-Tests > Paired Samples T-Test\n• Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line\n• Under Tests, select Wilcoxon rank\n• Under Hypothesis, select your alternative hypothesis\nJamovi does not have a specific option for the Cochran's Q test. However, you can do the Friedman test instead. 
The $p$ value resulting from this Friedman test is equivalent to the $p$ value that would have resulted from the Cochran's Q test. Go to:\n\nANOVA > Repeated Measures ANOVA - Friedman\n• Put the $k$ variables containing the scores for the $k$ related groups in the box below Measures\nPractice questions | Practice questions | Practice questions | Practice questions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8261666,"math_prob":0.99804157,"size":7456,"snap":"2021-31-2021-39","text_gpt3_token_len":1962,"char_repetition_ratio":0.12412775,"word_repetition_ratio":0.1921141,"special_character_ratio":0.26971567,"punctuation_ratio":0.09640398,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988866,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T09:23:44Z\",\"WARC-Record-ID\":\"<urn:uuid:9fdf39ce-52cc-4b52-9400-04631273b532>\",\"Content-Length\":\"34733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff6a6ce8-4824-4247-b892-8dbc0bad213c>\",\"WARC-Concurrent-To\":\"<urn:uuid:6356fc57-6283-430c-9c6d-d7e624b6a8ca>\",\"WARC-IP-Address\":\"141.138.168.125\",\"WARC-Target-URI\":\"https://statkat.org/stattest.php?t=34&t2=44&t3=35&t4=40\",\"WARC-Payload-Digest\":\"sha1:I2NOCCHK4A2Z46HQFHNTKH5JNVZ6HHKR\",\"WARC-Block-Digest\":\"sha1:KEHUO6ZJN4GV6BDJSTCTDRILPGAYY5IH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153223.30_warc_CC-MAIN-20210727072531-20210727102531-00288.warc.gz\"}"}
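The overview above describes the sign test as a Binomial(n, 0.5) test on the number of positive difference scores, and the Wilcoxon signed-rank test as its rank-based counterpart. A minimal sketch of both (the paired scores are made-up illustration data, not from the page):

```python
import math
from scipy import stats

before = [4, 5, 3, 6, 2, 5, 4, 7, 3, 5]
after = [6, 6, 4, 5, 4, 7, 6, 8, 5, 6]

diffs = [a - b for a, b in zip(after, before)]
pos = sum(d > 0 for d in diffs)  # W = number of positive difference scores
neg = sum(d < 0 for d in diffs)
n = pos + neg                    # difference scores of zero are dropped

# Two-sided sign test: W ~ Binomial(n, 0.5) under H0, doubling the
# probability of the more extreme tail
tail = sum(math.comb(n, k) for k in range(min(pos, neg) + 1))
sign_p = min(1.0, 2 * tail / 2**n)

# Wilcoxon signed-rank test on the same pairs (ranks the |differences|)
w_stat, wilcoxon_p = stats.wilcoxon(after, before)
```

With many tied absolute differences, as here, SciPy falls back to the normal approximation described in the table, with a tie-corrected standard deviation.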
https://www.geeksforgeeks.org/python-evaluate-expression-given-in-string/
[ "Python – Evaluate Expression given in String\n• Difficulty Level : Hard\n• Last Updated : 15 Oct, 2020\n\nSometimes, while working with Python Strings, we can have certain computations in string format and we need to formulate its result. This can occur in domains related to Mathematics and data. Let's discuss certain ways in which we can perform this task.\nMethod #1 : Using regex + map() + sum()\nThe combination of above functions can be used to solve this problem. In this, we perform the task of computation using sum() and mapping of operator and operation using map(). This method can be used if the string has only + or -. Method #2 can be used for other operations as well.\n\n## Python3\n\n# Python3 code to demonstrate working of\n# Expression evaluation in String\n# Using regex + map() + sum()\nimport re\n\n# initializing string\ntest_str = \"45 + 98-10\"\n\n# printing original string\nprint(\"The original string is : \" + test_str)\n\n# Expression evaluation in String\n# Using regex + map() + sum()\nres = sum(map(int, re.findall(r'[+-]?\\\\d+', test_str)))\n\n# printing result\nprint(\"The evaluated result is : \" + str(res))\nOutput :\nThe original string is : 45 + 98-10\nThe evaluated result is : 133\n\nMethod #2 : Using eval()\nThis is one of the ways in which this task can be performed. In this, we perform computation internally using eval().\n\n## Python3\n\n# Python3 code to demonstrate working of\n# Expression evaluation in String\n# Using eval()\n\n# initializing string\ntest_str = \"45 + 98-10\"\n\n# printing original string\nprint(\"The original string is : \" + test_str)\n\n# Expression evaluation in String\n# Using eval()\nres = eval(test_str)\n\n# printing result\nprint(\"The evaluated result is : \" + str(res))\nOutput :\nThe original string is : 45 + 98-10\nThe evaluated result is : 133" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67214036,"math_prob":0.88393325,"size":1954,"snap":"2021-21-2021-25","text_gpt3_token_len":462,"char_repetition_ratio":0.12410256,"word_repetition_ratio":0.37900874,"special_character_ratio":0.2707267,"punctuation_ratio":0.09638554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590586,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T23:09:42Z\",\"WARC-Record-ID\":\"<urn:uuid:0f68577d-4dd0-4c1d-8b57-8972ec2f1371>\",\"Content-Length\":\"105662\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e9f5966-dd3b-43b0-93ea-19ca0185ffed>\",\"WARC-Concurrent-To\":\"<urn:uuid:316c4062-881e-40e4-afba-b6f01b6af1ff>\",\"WARC-IP-Address\":\"23.218.216.148\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/python-evaluate-expression-given-in-string/\",\"WARC-Payload-Digest\":\"sha1:L7XRVFPFM37ZYPZ76RJTYT22JL5FVMBA\",\"WARC-Block-Digest\":\"sha1:RJGSUKKGYM7CVJUOXTKOZ2VOBBNV2WR6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487626122.27_warc_CC-MAIN-20210616220531-20210617010531-00333.warc.gz\"}"}
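A caution the article does not mention: `eval()` will execute arbitrary Python, so it should never be fed untrusted input. One common hardening, sketched here with the standard-library `ast` module (this is not from the article), is to parse the string and allow only arithmetic nodes:

```python
import ast
import operator

# Whitelist of arithmetic operators; anything else is rejected.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a string like '45 + 98-10' without the risks of eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("45 + 98-10"))  # 133
```

Anything outside the whitelist, such as a function call like `__import__('os')`, parses to a node type the walker does not recognize and raises `ValueError` instead of executing.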
https://physics.stackexchange.com/questions/174665/how-can-the-schwarzschild-radius-of-the-universe-be-13-7-billion-light-years
[ "# How can the Schwarzschild radius of the universe be 13.7 billion light years?\n\n• It says that the S. radius of the universe is as big as the size of the universe?\n\n• How is this possible?\n\n• Since most of the universe is empty space shouldn't the S. radius of our universe be significantly smaller than 13.7 billion light years?\n\nFirstly we should note that the universe as a whole is not described by the Schwarzschild metric, so the Schwarzschild radius of the universe is a meaningless concept. However if you take the mass of the observable universe you could ask what the Schwarzschild radius of a black hole of this mass is.\n\nFor a mass $M$ the Schwarzschild radius is:\n\n$$r_s = \\frac{2GM}{c^2} \\tag{1}$$\n\nIf the radius of the observable universe is $R$, and the density is $\\rho$, then the mass is:\n\n$$M = \\tfrac{4}{3}\\pi R^3 \\rho$$\n\nand we can substitute in equation (1) to get:\n\n$$r_s = \\frac{8G}{3c^2} \\pi R^3 \\rho \\tag{2}$$\n\nNow we believe that the density of the universe is the critical density, and from the FLRW metric with some hair pulling we can obtain a value for the critical density:\n\n$$\\rho_c = \\frac{3H^2}{8\\pi G}$$\n\nAnd we can substitute for $\\rho$ in equation (2) to get:\n\n$$r_s = \\frac{H^2}{c^2} R^3 \\tag{3}$$\n\nNow, Hubble's law tells us that the velocity of a distant object is related to its distance $r$ by:\n\n$$v \\approx Hr$$\n\nand since the edge of the universe, $r_e$, is where the recession velocity is $c$ we get:\n\n$$r_e \\approx \\frac{c}{H}$$\n\nand substituting this in equation (3) gives:\n\n$$r_s = \\frac{1}{r_e^2} R^3 \\tag{4}$$\n\nIf $r_e = R$ then we'd be left with $r_s = R$ and we'd have shown that the Schwarzschild radius of the mass of the observable universe is equal to its radius. Sadly it doesn't quite work. 
The dimension $R$ is the current size of the observable universe, which is around 46.6 billion light years, while the size used in Hubble's law, $r_e$, is the current apparent size 13.7 billion light years.\n\nIf I take equation (3) and put in $R$ = 46.6 billion light years and $H$ = 68 km/sec/megaParsec I get $r_s$ to be around 500 billion light years or a lot larger than the size of the observable universe.\n\n• The recession velocity at the edge of the Universe (the particle horizon) is not $c$, but more like $3.3c$. Your $r_e$ is simply the same as your $R$, so there's no need to introduce $r_e$. That is, if in Eq. 3 you substitute $c=HR/3.3$, you get that $r_s = 3.3^2R\\simeq 11 R \\sim 500\\,\\mathrm{Gly}$. – pela Dec 2 '16 at 10:06" ]
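The numbers in the last paragraph are easy to check. A quick sketch of equation (3), not part of the original answer, using the quoted values and standard conversion factors:

```python
# Evaluate equation (3), r_s = (H^2 / c^2) * R^3, with R = 46.6 Gly
# and H = 68 km/s/Mpc, as quoted in the answer.
c = 2.998e8                  # speed of light, m/s
ly = 9.461e15                # one light year, m
Mpc = 3.086e22               # one megaparsec, m

H = 68e3 / Mpc               # Hubble constant, 1/s
R = 46.6e9 * ly              # radius of the observable universe, m

r_s = (H**2 / c**2) * R**3   # equation (3)
print(r_s / (1e9 * ly))      # Schwarzschild radius in Gly, roughly 500
```

The result comes out a little under 500 Gly, consistent with the rounded figure in the answer.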
https://astronomy.stackexchange.com/questions/40376/thought-question-what-would-a-giant-far-away-mirror-look-like-to-a-telescope
# (Thought Question) What would a giant far away mirror look like to a telescope?

If there was a theoretically perfect mirror the size of, say, our solar system somewhere out in space that had a focal point of literally Earth, what would that look like to a space or Earth telescope? Could it be distinguished from other parts of space? Could you identify Earth in it? (A la "By putting a mirror in space, would we be able to see into the past?")

- Comment: If the mirror had a focal point of Earth, that would stand out, as it would increase the light reflected to Earth. If it didn't have the focal point of Earth and just reflected stars, I think it still might be visible, as mirrors don't reflect all wavelengths of light. With a good enough look, it should be recognizable as a mirror, I would think.

> If there was a theoretically perfect mirror the size of say, our solar system, somewhere out in space...

Great thought experiment so far!

> ...that had a focal point of literally earth...

Rats! I was hoping for a flat mirror so the answer would have been slightly simpler.

It is true that concave mirrors are "magnifying mirrors", and if we stay closer to the mirror than twice its focal length (see below) we can see a magnified image. I don't believe that it provides better optical resolution than a normal mirror, but since the construction of our eyes is fixed (we can't change the optical system nor the "pixel density") we use the mirror to "blow up" the image, in the same way that a photographic enlarger blows up the image on a negative without improving the resolution of the image.

In optics, image information is encoded in the wavefront at any point, whether in focus at that point or not. If we know the mirror's diameter and distance from Earth, we can apply the principle of diffraction in a simple way no matter how complicated the rest of the optical system.

For a hard-edged circular mirror (top-hat shaped apodization) we know that the Airy disk is the right principle to apply, and a simple definition of resolution gives us

$$\theta \approx 1.22 \frac{\lambda}{d}$$

for the angular resolution. Let's use 500 nm green light for $\lambda$ and twice Neptune's orbital radius (60 AU) for $d$. That gives us about $7 \times 10^{-20}$ radians.

If the mirror were at the distance of Proxima Centauri, only $4 \times 10^{16}$ meters, then the impact on resolving a wavefront from the perfect mirror, based only on diffraction from a circular aperture, would be of the order of millimetres!

The focal length of a concave mirror is the parallel-to-point distance, so we would really need the mirror to be at roughly the point-to-point distance of twice the focal length.

If the concave mirror brought our image back to us, say onto a sheet of paper, it would be incredibly dim, i.e. pretty much no photons except from the Sun itself. But if we ignore that, then we would see ourselves blurred by a few millimetres.

If the image was kilometers in front of us, then we could focus a telescope on that image in space and re-image it at the entrance pupil of an eyepiece or onto a sensor.

If the telescope were 1 meter in diameter, we could see roughly a 1-meter-wide swath of Earth.

Of course this doesn't work so simply, because the mirror would have to be pointed such that the location of Earth 8.5 years ago is imaged where we are today.

### Conclusions

Yes, this is kind of possible from a Gedankenexperiment point of view; a 60 AU mirror out at Proxima Centauri with a focal length of half that distance could produce an image of Earth near Earth (light-time considerations in force), and we could look at that location in space with a large-diameter telescope and see a patch of Earth from 8.5 years ago, roughly vignetted by the size of the telescope's aperture.
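The diffraction estimate can be sketched in a few lines (an illustrative aside, not part of the original answer; it simply evaluates $1.22\lambda/d$ with the 60 AU aperture and the Proxima distance quoted above):

```python
# Airy resolution theta = 1.22 * lambda / d for a 60 AU mirror,
# and the resulting linear blur at the distance of Proxima Centauri.
AU = 1.496e11              # metres
wavelength = 500e-9        # green light, m
d = 60 * AU                # mirror diameter, m
distance = 4.0e16          # Earth to Proxima Centauri, m

theta = 1.22 * wavelength / d   # angular resolution, radians
blur = theta * distance         # linear blur at that distance, m
print(theta, blur)              # ~7e-20 rad; a few millimetres
```

Evaluating with these inputs gives an angular resolution near $7\times10^{-20}$ rad and a blur of a few millimetres, which is what the estimate above relies on.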
https://www.r-bloggers.com/2016/07/the-trick-to-understanding-nas-missing-values-in-r/
Here's a little puzzle that might shed some light on some apparently confusing behaviour by missing values (NAs) in R:

What is NA^0 in R?

You can get the answer easily by typing at the R command line:

```r
> NA^0
[1] 1
```

But the interesting question that arises is: why is it 1? Most people might expect the answer to be NA, like most expressions that include NA. But here's the trick to understanding this outcome: think of NA not as a number, but as a placeholder for a number that exists, but whose value we don't know.

Now think of all of the numbers that could replace NA in the expression NA^0. Any positive number to the power zero is 1. Same goes for any negative number. Even zero to the power zero is defined by mathematicians to be 1 (for reasons I'm not going to go into here). So whatever number you substitute for NA in the expression NA^0, the answer will be 1. And so that's the answer R gives.

There are a few other instances where using the indeterminate NA in an expression can lead to a specific non-NA result. Consider this example:

```r
> NA || TRUE
[1] TRUE
```

Here, the NA is holding the place of a logical value¹, so it could be representing only TRUE or FALSE. But whatever it represents, the answer will be the same:

```r
> TRUE || TRUE
[1] TRUE
> FALSE || TRUE
[1] TRUE
```

By the same token, any(x) can return TRUE even if the logical vector x includes NAs, as long as x includes at least one TRUE value. Similarly, NA && FALSE is always FALSE.

There are a few other examples as well (if you know some, share them in the comments). But always remember: if you're ever confused by the behaviour of NA in R, think about what values it might contain, and whether changing them changes the outcome. That might explain what's going on. For more on how R handles NAs, see the R Language Definition.

¹ Footnote: I'm deliberately ignoring the storage mode of NA, which can come in logical, integer, double and character flavours. In all the examples above, it gets coerced to the type of the other elements in the expression.
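As an aside for comparison (Python rather than R, and not part of the original post): IEEE-754 floating-point NaN is "not a number" rather than R's "value we don't know", yet it obeys the same convention for this particular case, because pow(x, 0) is defined to be 1 for every x.

```python
# NaN raised to the power zero is 1.0, by the same "whatever the value
# is, x**0 == 1" reasoning; most other operations on NaN stay NaN.
nan = float('nan')
print(nan ** 0)    # 1.0
print(nan + 1)     # nan
```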
https://www.mybaseline.org/maths1/textbook?topicID=9
# 9 Matrices

15th century printers pioneered the use of movable type printing. Mirror images of each letter were cast in metal. These were arranged, letter by letter, in frames which printed one page at a time. A frame was called a matrix; several frames were called matrices. In mathematics, a matrix is a frame used to hold numbers or variables. Matrices were used before computers were invented, but the advent of computers has made their use much more common.

A 2x2 matrix looks like this: $\begin{bmatrix} 1 & 2 \\ 4 & -3 \end{bmatrix}$

Matrices are always rectangular and can be any size. The size of a matrix is called its order.

Matrices are generally represented by capital letters and the individual numbers, called elements, are represented by lowercase letters. Annoyingly, the subscripts are written row then column.

$A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$

Figure 9.0: Matrix $A$ with elements $a_{ij}$

## 9.1 Addition and Subtraction of Matrices

You can add and subtract matrices of the same order. To add two matrices together you add the corresponding terms.

If $A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $B=\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$ then

$A+B=\begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}$

Example 9.1: Find $A+B$ given $A=\begin{bmatrix} 1 & 3 \\ -5 & 2 \end{bmatrix}$ and $B=\begin{bmatrix} -2 & 4 \\ 3 & 1 \end{bmatrix}$

$A+B=\begin{bmatrix} 1+(-2) & 3+4 \\ -5+3 & 2+1 \end{bmatrix}=\begin{bmatrix} -1 & 7 \\ -2 & 3 \end{bmatrix}$

To subtract one matrix from another you subtract the corresponding terms:

$A-B=\begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} \\ a_{21}-b_{21} & a_{22}-b_{22} \end{bmatrix}$

Example 9.1a: Find $A-B$ given the same $A$ and $B$ as in Example 9.1.

$A-B=\begin{bmatrix} 1-(-2) & 3-4 \\ -5-3 & 2-1 \end{bmatrix}=\begin{bmatrix} 3 & -1 \\ -8 & 1 \end{bmatrix}$

## 9.2 Multiplication of a Matrix by a Scalar

To multiply a matrix by a scalar you multiply each of the elements by the scalar.

If $A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $c$ is a scalar, then $cA=\begin{bmatrix} ca_{11} & ca_{12} \\ ca_{21} & ca_{22} \end{bmatrix}$

Example 9.2: Given $A=\begin{bmatrix} 1 & 3 \\ -5 & 2 \end{bmatrix}$ and $c=3$, find $cA$.

$cA=\begin{bmatrix} 3\times1 & 3\times3 \\ 3\times(-5) & 3\times2 \end{bmatrix}=\begin{bmatrix} 3 & 9 \\ -15 & 6 \end{bmatrix}$

## 9.3 Multiplication of a Matrix by another Matrix

You can multiply two matrices iff (iff means if and only if) the number of columns in the first matrix is equal to the number of rows in the second.

If $A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and $B=\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$, then the matrices can be multiplied together because the number of columns in $A$ is the same as the number of rows in $B$.

To multiply $A$ by $B$ we take the first column of $B$ and put it over $A$, multiply, then sum the corresponding terms; repeating for each column of $B$ gives

$A \times B=\begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$

Example 9.3: Find $A \times B$ given $A=\begin{bmatrix} 1 & 3 \\ -5 & 2 \end{bmatrix}$ and $B=\begin{bmatrix} -2 & 4 \\ 3 & 1 \end{bmatrix}$

$A \times B=\begin{bmatrix} 1 \times (-2) + 3 \times 3 & 1 \times 4 + 3 \times 1 \\ -5 \times (-2) + 2 \times 3 & -5 \times 4 + 2 \times 1 \end{bmatrix}=\begin{bmatrix} 7 & 7 \\ 16 & -18 \end{bmatrix}$

## 9.4 Division of a Matrix by another Matrix

Matrix division does not exist. If you want to find $X$ in the matrix equation $AX=C$, where $A$, $X$ and $C$ are matrices, you first need to find the inverse of matrix $A$. When you multiply $A$ by its inverse, $A^{-1}$, the result is the identity matrix, that is, a matrix with $1$s on the main diagonal and $0$s everywhere else.

The inverse of $A$ is $A^{-1}$, where $A \times A^{-1} = I$. If $A$ were a $3\times3$ matrix, $I$ would be $\begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}$

Having found $A^{-1}$ we can multiply both sides on the left by it (the order matters, because matrix multiplication is not commutative):

$AX = C$
$A^{-1}AX = A^{-1}C$
$A^{-1}A = I$, so
$X = A^{-1}C$

## 9.5 Determinants

### 9.5.1 2x2 Determinants

The determinant of a matrix can be thought of as the magnitude of the matrix. You can only calculate determinants for square matrices. For a 2x2 matrix $A=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ the determinant is given by $a_{11} a_{22} - a_{12} a_{21}$.

Example 9.5.1: Find the determinant of $A=\begin{bmatrix} 1 & 3 \\ -5 & 2 \end{bmatrix}$.

$|A| = 1 \times 2 - 3 \times (-5) = 17$

### 9.5.2 3x3 Determinants

To find the determinant of a 3x3 matrix, take each term in the top row one at a time. Ignore the row and column that contain the term (figure: 3x3Matrix1). What is left is a 2x2 matrix that we can evaluate as shown above.

At the end of this first step we have $a_{11}(a_{22} a_{33} - a_{23} a_{32})$.

For the next step we take $a_{12}$. We ignore the top row and middle column, which gives us $a_{12}(a_{21} a_{33} - a_{23} a_{31})$.

There is an added complication: the signs on the top row alternate (figure: 3x3Matrix2). This means we need to change the sign of the middle term. If it is negative we make it positive; if it is positive we make it negative.

We repeat the process for the last term, giving us the determinant of $A$ as:

$|A| = a_{11}(a_{22} a_{33} - a_{23} a_{32}) - a_{12}(a_{21} a_{33} - a_{23} a_{31}) + a_{13}(a_{21} a_{32} - a_{22} a_{31})$

Example 9.5.3: Find the determinant of $A=\begin{bmatrix} 2 & 5 & 3\\ -1 & 6 & 2\\ 3 & -1 & 2 \end{bmatrix}$.

$|A| = 2(6 \times 2 - 2 \times (-1)) - 5((-1) \times 2 - 2 \times 3) + 3((-1) \times (-1) - 6 \times 3) = 2 \times 14 - 5 \times (-8) + 3 \times (-17) = 28 + 40 - 51 = 17$

## 9.6 Where Do We Use Matrices?

The question should really be 'Where do we not use matrices?' There was a time when matrices were only used in specialist applications like mathematics and quantum mechanics, but since computers became commonplace, matrices underpin pretty much every computer application.

The name of the popular program MatLab is a concatenation of Matrix Laboratory. Artificial intelligence, neural networks, and statistical, drafting, analysis and modelling software are all based on matrices.

Computer graphics would be virtually impossible without the use of matrices. You can easily imagine a computer display as a 2D matrix, but computer graphics, using matrices, take it a lot further than that. Movement in 3D, reflection of light and shadows are all calculated using matrices. In the still from the film Gravity (figure: Gravity), the only real items in the image are the faces of the actors. Everything else looks real and moves as if it is real, but is computer generated.

Imagine you have the 2D coordinates of an object in an array $A$. To scale the object you would multiply $A$ by the scaling matrix $S=\begin{bmatrix}S_x&0\\ 0&S_y\end{bmatrix}$.

To rotate $A$ you would multiply by the rotation matrix $R=\begin{bmatrix}\cos(\theta)&\sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{bmatrix}$

This video shows how characters are animated by the film company Pixar.
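The worked examples in this chapter can be checked with a few lines of NumPy (a sketch, not part of the textbook):

```python
# Verify the chapter's worked examples with NumPy.
import numpy as np

A = np.array([[1, 3], [-5, 2]])
B = np.array([[-2, 4], [3, 1]])

print(A + B)             # Example 9.1:  [[-1  7] [-2  3]]
print(A - B)             # Example 9.1a: [[ 3 -1] [-8  1]]
print(3 * A)             # Example 9.2:  [[  3  9] [-15  6]]
print(A @ B)             # Example 9.3:  [[ 7  7] [16 -18]]
print(np.linalg.det(A))  # Example 9.5.1: 17 (up to rounding)

# "Division": solve AX = C as X = A^{-1} C
C = np.array([[1], [2]])
X = np.linalg.inv(A) @ C
print(A @ X)             # recovers C
```

In practice `np.linalg.solve(A, C)` is preferred over computing the inverse explicitly, but the inverse form mirrors the derivation in section 9.4.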
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-1-chemical-foundations-additional-exercises-page-39/98
## Chemistry 9th Edition

Density of solid $= 2.155\ \frac{g}{mL}$

We can find the mass of the benzene as: mass of benzene $= 58.8 - 25 = 33.8\ g$.

Volume of benzene $= \frac{mass}{density} = \frac{33.8\ g}{0.880\ \frac{g}{cm^3}} = 38.4\ mL$ (recall that $1\ cm^3 = 1\ mL$).

Now, volume of solid $= 50 - 38.4 = 11.6\ mL$.

Density of solid $= \frac{mass}{volume}$; we plug in the known values to obtain: Density of solid $= \frac{25}{11.6} = 2.155\ \frac{g}{mL}$
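The same displacement calculation as a short script (all values taken from the problem statement; keeping full precision throughout gives about 2.157 g/mL, while the solution above rounds the benzene volume to 38.4 mL first):

```python
# Density by displacement: solid submerged in benzene.
total_mass = 58.8           # g, solid plus benzene
solid_mass = 25.0           # g
benzene_density = 0.880     # g/cm^3, and 1 cm^3 = 1 mL
total_volume = 50.0         # mL

benzene_mass = total_mass - solid_mass            # 33.8 g
benzene_volume = benzene_mass / benzene_density   # ~38.4 mL
solid_volume = total_volume - benzene_volume      # ~11.6 mL
density = solid_mass / solid_volume               # ~2.16 g/mL
print(round(density, 3))
```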
https://www.hindawi.com/journals/mpe/2012/985429/
[ "Special Issue\n\n## Mathematical Methods Applied to the Celestial Mechanics of Artificial Satellites\n\nView this Special Issue\n\nResearch Article | Open Access\n\nVolume 2012 |Article ID 985429 | https://doi.org/10.1155/2012/985429\n\nRoberta Veloso Garcia, Helio Koiti Kuga, Maria Cecilia F. P. S. Zanardi, \"Unscented Kalman Filter Applied to the Spacecraft Attitude Estimation with Euler Angles\", Mathematical Problems in Engineering, vol. 2012, Article ID 985429, 12 pages, 2012. https://doi.org/10.1155/2012/985429\n\n# Unscented Kalman Filter Applied to the Spacecraft Attitude Estimation with Euler Angles\n\nAccepted18 Sep 2011\nPublished14 Dec 2011\n\n#### Abstract\n\nThe aim of this work is to test an algorithm to estimate, in real time, the attitude of an artificial satellite using real data supplied by attitude sensors that are on board of the CBERS-2 satellite (China Brazil Earth Resources Satellite). The real-time estimator used in this work for attitude determination is the Unscented Kalman Filter. This filter is a new alternative to the extended Kalman filter usually applied to the estimation and control problems of attitude and orbit. This algorithm is capable of carrying out estimation of the states of nonlinear systems, without the necessity of linearization of the nonlinear functions present in the model. This estimation is possible due to a transformation that generates a set of vectors that, suffering a nonlinear transformation, preserves the same mean and covariance of the random variables before the transformation. The performance will be evaluated and analyzed through the comparison between the Unscented Kalman filter and the extended Kalman filter results, by using real onboard data.\n\n#### 1. Introduction\n\nThe attitude of a spacecraft is defined by its orientation in space related to some reference system. 
The importance of determining the attitude is related not only to the performance of attitude control but also to the precise usage of information obtained by payload experiments performed by the satellite. The attitude estimation is the process of calculating the orientation of the spacecraft in relation to a reference system from data supplied by attitude sensors. Chosen the vector of reference, an attitude sensor measures the orientation of these vectors with respect to the satellite reference system . Once these one or more vectors measurements are known, it is possible to compute the orientation of the satellite processing these vectors, using methods of attitude estimation. There are several methods for determining the attitude of a satellite. Each method is appropriate to a particular type of application and meets the needs such as available time for processing and accuracy to be attained. However, all methods need observations that are obtained by means of sensors installed on the satellite. The sensors are essential for attitude estimation, because they measure its orientation relative to some referential, for example, the Earth, the sun, or a star. In this work, the satellite attitude is described by Euler angles, due to its easy geometric interpretation, and the method to estimate the attitude used is the Unscented Kalman Filter. This method is capable of performing state estimation in nonlinear systems, besides taking into account measurements provided by different attitude sensors. In this work there were considered real data supplied by gyroscopes, infrared Earth sensors, and digital sun sensors. These sensors are on board of the CBERS-2 satellite (China-Brazil Earth Resources Satellite), and the measurements were collected by the Satellite Control Centre of INPE (Brazilian Institute for Space Research).\n\n#### 2. Representation of Attitude by Euler Angles\n\nThe attitude of an artificial satellite is directly related to its orientation in space. 
Through the attitude one can know the spatial orientation of the satellite, since in most cases it can be considered as a rigid body, where the attitude is expressed by the relationship between two coordinate systems, one fixed on the satellite and another associated with a reference system, for example, inertial system . For a good performance of a mission it is essential that the satellite be stabilized in relation to a specified attitude. The attitude stabilization is achieved by the on-board attitude control, which is designed to acquire and maintain the satellite in a predefined attitude. The CBERS-2 attitude is stabilized in three axes nominally geo-pointed and can be described with respect to the orbital system. In this reference system, the movement around the direction of the orbital velocity is called roll. The movement around the direction normal to the orbit is called pitch, and finally the movement around the direction Nadir/Zenith is called yaw. To transform a vector represented in a given reference to another it is necessary to define a matrix of direction cosines (), where its elements are written in terms of Euler angles (, , ) . The rotation sequence used in this work for the Euler angles was the 3-2-1, where the coordinate system fixed in the body of the satellite (, , ) is related to the orbital coordinate system (, , ) through the following sequence of rotations: (i)1st rotation of an angle (yaw angle) around the axis,(ii)2nd rotation of an angle (roll angle) around an intermediate axis ,(iii)3rd rotation of an angle (pitch angle) around the -axis.\n\nThe matrix obtained through the 3-2-1 rotation sequence is given by where is the matrix of direction cosines with , , and .\n\nBy representing the attitude of a satellite with Euler angles, the set of kinematic equations are given by where is the orbital angular velocity and , , and are the components of the angular velocity on the satellite system.\n\n#### 3. 
The Measurements System of Satellite\n\nIn order to estimate the satellite attitude accurately, several types of sensors, including gyros, earth sensors, and solar sensors, are used in the attitude determination system. The equations of these sensors are introduced here.\n\n##### 3.1. The Model for Gyros\n\nThe advantage of a gyro is that it can provide the angular displacement and/or angular velocity of the satellite directly. However, gyros have an error due to drifting, meaning that their measurement error increases with time. In this work, the rate-integration gyros (RIGs) are used to measure the angular velocities of the roll, pitch, and yaw of the satellite. The mathematical model of the RIGs is where are the angular displacement of the satellite in a time interval , and are components of bias of the gyroscope.\n\nThus, the measured components of the angular velocity of the satellite are given by where is the output vector of the gyroscope, and represents a Gaussian white noise process covering all the remaining unmodelled effects:\n\n##### 3.2. The Measurement Model for Infrared Earth Sensors (IRESs)\n\nOne way to compensate for the drifting errors present in gyros is to use the earth sensors. These sensors are located on the satellite and aligned with their axes of roll and pitch. In the work, two earth sensors are used, with one measuring the roll angle and the other measuring the pitch angle. In principle, an earth sensor cannot measure the yaw angle. The measurement equations for the earth sensors are given as where and are the white noise representing the small remaining misalignment, installation, and/or assembly errors assumed to be gaussian:\n\n##### 3.3. The Measurement Model for Digital Solar Sensors (DSSs)\n\nSince an earth sensor is not able to measure the yaw angle, the solar sensors are used by the Attitude Control System in order to overcome this problem. 
However, these sensors do not provide direct measurements, but rather the coupled pitch (α_θ) and yaw (α_ψ) angles. The measurement equations for the solar sensor hold when the corresponding visibility conditions are satisfied, where the associated white noise terms represent the small remaining misalignment, installation, and/or assembly errors, assumed Gaussian. The conditions are such that the solar vector is in the field of view of the sensor, and S_x, S_y, and S_z are the components of the unit vector associated with the sun vector in the satellite system, written in terms of S_{x0}, S_{y0}, and S_{z0}, the components of the sun vector in the orbital coordinate system, and the estimated attitude Euler angles φ, θ, and ψ.

#### 4. Attitude Estimation Methods

The goal of an estimator is to calculate the state vector (attitude) based on a set of observations (sensor measurements). In other words, it is an algorithm capable of processing measurements to produce, according to a given criterion, a minimum-error estimate of the state of a system. In this paper, the real-time estimator used to estimate the satellite attitude is a variant of the Kalman filter, applied to problems that present some nonlinearity. This estimator is described as follows.

##### 4.1. Unscented Kalman Filter

The basic premise behind the Unscented Kalman Filter (UKF) is that it is easier to approximate a Gaussian distribution than it is to approximate an arbitrary nonlinear function. Instead of linearizing to first order using Jacobian matrices, the UKF uses a deterministic sampling approach to capture the mean and covariance estimates with a minimal set of sample points. The nonlinear function is applied to each point, in turn, to yield a cloud of transformed points. The statistics of the transformed points can then be calculated to form an estimate of the nonlinearly transformed mean and covariance. 
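The unscented-transform idea described above can be illustrated with a short sketch (an independent toy example, not code from the paper): deterministically chosen sigma points are propagated through a nonlinear function, and their weighted statistics recover the transformed mean more faithfully than propagating the prior mean alone.

```python
import numpy as np

def unscented_mean(mean, var, f, kappa=2.0):
    # 1-D unscented transform: 2n + 1 = 3 sigma points for n = 1
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigmas = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa),
                        0.5 / (n + kappa)])
    return float(np.sum(weights * f(sigmas)))

m, v = 1.0, 0.5
ut = unscented_mean(m, v, np.square)

# For f(x) = x^2 the true transformed mean is m^2 + v = 1.5;
# the unscented estimate matches it, while f(m) = 1.0 misses the
# variance contribution entirely.
assert abs(ut - (m**2 + v)) < 1e-9
```

Here `kappa` plays the role of the spread parameter; the value 2.0 is only an illustrative choice, not one taken from the paper.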
We present an algorithmic description of the UKF, omitting the theoretical considerations, which are left to [7, 8].

Consider the system model given by

x_{k+1} = f(x_k) + w_k,    y_k = h(x_k) + v_k,

where x_k is the state vector and y_k is the measurement vector. We assume that the process noise w_k and the measurement noise v_k are zero-mean Gaussian noise processes with covariances given by Q and R, respectively. In this work the state vector at time t_k is defined by the Euler angles and the gyro biases:

x_k = [φ θ ψ b_x b_y b_z]^T.

Performing the necessary simplifications (small Euler angles) in the set of (2.2), the attitude angles and gyro angular velocity biases are modelled as in (4.3).

Given the state vector at step k − 1, we compute a collection of 2n + 1 sigma points, stored in the columns of the n × (2n + 1) sigma-point matrix χ_{k−1}, where n is the dimension of the state vector. In our case n = 6, so χ_{k−1} is a 6 × 13 matrix. The columns of χ_{k−1} are computed by

χ_0 = x̂_{k−1},
χ_i = x̂_{k−1} + (√((n + λ) P_{k−1}))_i,    i = 1, …, n,
χ_{n+i} = x̂_{k−1} − (√((n + λ) P_{k−1}))_i,    i = 1, …, n,

in which (√((n + λ) P_{k−1}))_i is the ith column of the matrix square root of (n + λ) P_{k−1}.

Once χ_{k−1} is computed, we perform the prediction step by first propagating each column of χ_{k−1} through time by using

χ̇_i = f(χ_i),

where f is the differential equation defined in (2.2) or (4.3). In our formulation, we perform a numerical Runge-Kutta integration.

With χ_k^− calculated, the a priori state estimate is

x̂_k^− = Σ_{i=0}^{2n} W_i^m χ_{i,k}^−,

where the W_i^m are weights defined by

W_0^m = λ/(n + λ),    W_i^m = 1/(2(n + λ)),    i = 1, …, 2n.

As the last part of the prediction step, we calculate the a priori error covariance with

P_k^− = Σ_{i=0}^{2n} W_i^c (χ_{i,k}^− − x̂_k^−)(χ_{i,k}^− − x̂_k^−)^T + Q,

where Q is the process error covariance matrix, and the weights W_i^c are defined by

W_0^c = λ/(n + λ) + (1 − α² + β),    W_i^c = 1/(2(n + λ)),    i = 1, …, 2n,

where α is a scaling parameter that determines the spread of the sigma points and β is a parameter used to incorporate any prior knowledge about the distribution of x.

To compute the correction step, we first must transform the columns of χ_k^− through the measurement function h to Y_{i,k}. In this way

Y_{i,k} = h(χ_{i,k}^−).

With the mean measurement vector ŷ_k = Σ_{i=0}^{2n} W_i^m Y_{i,k}, we compute the a posteriori state estimate using

x̂_k = x̂_k^− + K_k (y_k − ŷ_k),

where K_k is the Kalman gain. 
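The sigma-point construction and weighted mean used in the prediction step above can be sketched as follows (a generic implementation under standard UKF conventions, with illustrative numbers rather than the paper's values):

```python
import numpy as np

def sigma_points(x, P, lam=1.0):
    # Columns: the mean itself, then mean +/- the columns of the
    # matrix square root of (n + lam) * P (Cholesky factor here)
    n = x.size
    S = np.linalg.cholesky((n + lam) * P)
    chi = np.empty((n, 2 * n + 1))
    chi[:, 0] = x
    for i in range(n):
        chi[:, 1 + i] = x + S[:, i]
        chi[:, 1 + n + i] = x - S[:, i]
    return chi

# Six-state example matching the paper's dimension
# (three Euler angles plus three gyro biases)
x_hat = np.zeros(6)
P = 0.25 * np.eye(6)
chi = sigma_points(x_hat, P)

n, lam = 6, 1.0
w_m = np.full(2 * n + 1, 0.5 / (n + lam))
w_m[0] = lam / (n + lam)

assert chi.shape == (6, 13)           # 2n + 1 = 13 sigma points
assert np.allclose(chi @ w_m, x_hat)  # weighted mean recovers the state
```

Because the plus and minus columns are symmetric about the mean and carry equal weights, the weighted sample mean of the sigma points reproduces the prior state exactly, as the final check confirms.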
In the UKF formulation, K_k is defined by

K_k = P_{xy} P_{yy}^{−1},

where

P_{yy} = Σ_{i=0}^{2n} W_i^c (Y_{i,k} − ŷ_k)(Y_{i,k} − ŷ_k)^T + R,
P_{xy} = Σ_{i=0}^{2n} W_i^c (χ_{i,k}^− − x̂_k^−)(Y_{i,k} − ŷ_k)^T,

and R represents the measurement error covariance matrix.

Finally, the last calculation in the correction step is to compute the a posteriori estimate of the error covariance, given by

P_k = P_k^− − K_k P_{yy} K_k^T.

#### 5. Results

Here, the results and the analysis from the algorithm developed to estimate the attitude are presented. To validate and analyze the performance of the estimator, real sensor data from the CBERS-2 satellite were used (see [10, 11]). The CBERS-2 satellite was launched on October 21, 2003. The measurements are for the month of April 2006, available to the ground system at a sampling rate of about 8.56 s. The algorithm was implemented in MATLAB. To check the performance of the UKF, its results were compared with the attitude estimated by the more conventional extended Kalman filter (EKF), considering the following set of initial conditions:

(i) initial attitude: deg;
(ii) initial gyro bias: deg/hour, deg/hour, and deg/hour;
(iii) initial covariance (P₀): (0.5 deg)² (error related to the attitude) and (1 deg/hour)² (error related to the gyro drift);
(iv) observation noise covariance (R): (0.3 deg)² (sun sensor) and (0.03 deg)² (earth sensor);
(v) dynamic noise covariance (Q): (0.1 deg)² (noise related to the attitude), (0.01 deg/hour)² and (0.005 deg/hour)² (noise related to the gyro drift).

The real measurements obtained by the attitude sensors (digital sun sensors, infrared Earth sensors, and gyroscopes) are shown in Figure 1.

Figures 2 and 3 show the behavior of the attitude and the gyro biases during the period analyzed. The average estimated values for the roll and pitch axes, considering the Unscented Kalman Filter, are on the order of −0.47 deg and −0.45 deg, respectively, and their standard deviations are about 0.02 deg. 
For the yaw axis the estimate does not appear to behave randomly, and its average estimated value is about −1.47 deg with a standard deviation of 0.3 deg. The attitude estimated by the extended Kalman filter had values for the roll and pitch axes on the order of −0.49 deg and −0.43 deg, with standard deviations of about 0.05 deg. For the yaw axis, its average value is −1.42 deg and the standard deviation is 0.36 deg.

Figures 4 and 5 present the standard deviations of the attitude and gyro-bias estimates for both estimators. It is observed that the attitude standard deviations and the standard deviations of the gyro bias decrease, with a tendency to stabilize around a fixed value for both filters. However, the graphs show the superiority of the UKF, because in most cases it operates within a tighter uncertainty range than the EKF.

In Figures 6 and 7, we can see that the residuals of the sun sensors and Earth sensors have the same behavior for both estimators. For the Earth sensors, the residuals obtained by both estimators are smaller and show a tendency toward zero mean. However, the UKF residuals are still lower, at about −0.009 deg for roll and 0.004 deg for pitch. The EKF residuals, in turn, are approximately 0.01 deg and −0.027 deg for roll and pitch, respectively.

These results appear consistent, although in this case it is not possible to compare the estimated values with true values, since the true values are not known.

#### 6. Conclusions

In this paper, the Unscented Kalman Filter estimator applied to nonlinear systems was presented for use in real-time attitude estimation. The main objective was to estimate the attitude of a CBERS-2-like satellite, using real data provided by sensors on board the satellite. To verify the consistency of the estimator, the attitude was estimated by two different methods. The usage of real data from on-board attitude sensors poses difficulties such as mismodelling, mismatch of sizes, misalignments, unforeseen systematic errors, and post-launch calibration errors. 
However, it is observed that, although the EKF and UKF have roughly the same accuracy, the UKF leads to faster convergence of the state vector than the EKF. This was expected, since the UKF avoids the linearizations required by the EKF when the system has nonlinearities in its equations.

1. V. L. Pisacane and R. C. Moore, Fundamentals of Space Systems, Oxford University Press, New York, NY, USA, 1994.
2. M. D. Shuster, “Survey of attitude representations,” Journal of the Astronautical Sciences, vol. 41, no. 4, pp. 439–517, 1993.
3. M. C. Zanardi, Dinâmica de atitude de satélites artificiais, Ph.D. thesis, FEG-UNESP, Guaratinguetá, São Paulo, Brazil, 2005.
4. J. R. Wertz, Spacecraft Attitude Determination and Control, D. Reidel, Dordrecht, The Netherlands, 1978.
5. H. Fuming and H. K. Kuga, “CBERS simulator mathematical models,” CBTT Project, CBTT/2000/MM/001, INPE, São José dos Campos, 1999.
6. R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Wiley, New York, NY, USA, 1996.
7. S. J. Julier and J. K. Uhlmann, “Reduced sigma point filters for the propagation of means and covariances through nonlinear transformations,” in Proceedings of the American Control Conference, vol. 2, pp. 887–892, Anchorage, Alaska, USA, May 2002.
8. S. J. Julier and J. K. Uhlmann, “Unscented filtering and nonlinear estimation,” Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, 2004.
9. S. J. Julier and J. K. Uhlmann, “A new extension of the Kalman filter for nonlinear systems,” in Proceedings of the International Symposium on Aerospace/Defense Sensing, Simulation and Controls (SPIE '99), SPIE, Orlando, Fla, USA, April 1997.
10. H. K. Kuga, R. V. F. Lopes, and A. R. 
Silva, “On board attitude reconstitution of CBERS-2 satellite,” in Proceedings of the XIII Brazilian Colloquium on Orbital Dynamics, p. 109, 2006.
11. R. V. F. Lopes and H. K. Kuga, “CBERS-2: on ground attitude determination from telemetry data,” Internal report C-ITRP, INPE, 2005.
# What is the chemical formula of copper oxide?

Well... you'll have to specify the oxidation state of copper. Is it $+1$ or $+2$? Is it copper(I) or copper(II)? Your pick.

Depending on your choice, it may be $\text{CuO}$ or $\text{Cu}_2\text{O}$... Which one is which, if oxygen in this compound, the oxide anion $\text{O}^{2-}$, has an oxidation state of $-2$?
# More loops and more secrets

You've already noticed that the `for` loop is a pretty useful command.

But, you can make it even more useful if you nest one loop command inside another.

When you nest loops, it means that the inner loop will run N times for each time the outer loop does a cycle.

Try typing this code into Komodo Edit and see what it does.

```tcl
for {set x 1} {$x <= 5} {set x [expr $x + 1]} {
    for {set y 1} {$y <= 3} {set y [expr $y + 1]} {
        tk_messageBox -type ok -message "X is $x and Y is $y"
    }
}
```

You have to click the OK button a lot of times before you finish the two loops.

You can figure out how many times the innermost part of a loop will be executed. It will be the product of all the times you go through each loop. In the example above, that's 5 "X" loops and 3 "Y" loops. So there will be `3 x 5` passes for 15 `tk_messageBox` commands.

There's a couple of rules for using nested loops.

1. Each loop needs its own loop variable.
2. Inner loops can't change the values of outer loop variables.

Now, in fact, neither of these rules is absolute, but if you break them, you'd better know just what you're doing and why.

For starters, let's look at the number guessing game where you use numbers from 0 to 100 and put 10 guesses on each row.

You can do that with a single loop, like the code just below this.

Dividing the contents of the variable `count` by 10 returns the row to put each button on. If you divide one whole number by another whole number in Tcl/Tk, it returns a whole number back to you. This is the quotient without any remainder or fractional part.

When `$count` is less than 10, `$count / 10` will be 0. 
When `$count` is between 10 and 19, `$count / 10` will be 1, and so forth.

The `expr` command's `%` operator divides one number by another and gives you the remainder instead of the quotient.

If you divide 23 by 10, the quotient is 2 with a remainder of 3. So `expr 23 % 10` returns 3.

The remainder of `1 % 10` is `1`, the remainder of `12 % 10` is `2`, and so forth.

```tcl
for {set count 0} {$count < 100} {set count [expr $count + 1]} {
    button .b_$count -text $count -command "tk_messageBox -type ok -message $count"
    grid .b_$count -row [expr $count / 10] -column [expr $count % 10]
}
```

This makes a display of 100 numbered buttons arranged in a 10x10 grid.

We can create the same buttons with a pair of nested loops: one loop for the rows and one for the columns.

```tcl
set buttonNum 0
for {set row 1} {$row <= 10} {set row [expr $row + 1]} {
    for {set column 1} {$column <= 10} {set column [expr $column + 1]} {
        button .b_$buttonNum -text "$buttonNum" \
            -command "tk_messageBox -type ok -message $buttonNum"
        grid .b_$buttonNum -row $row -column $column
        set buttonNum [expr $buttonNum + 1]
    }
}
```

This is a little longer than the previous code, but it's also a little simpler. Being able to split the rows and columns into two spots means we don't need to think about the `%` operator any more.

But, it takes a lot of time to type in all of these `expr` commands to change the value of our loop variables.

Computer programmers don't like to waste a lot of time, so there are shorter ways to do things that are very common. One very common thing for computer programs to do is to add a number to the value stored in a variable.

So, there's a command to increment a variable. It's called `incr`.

You use this command by telling `incr` what variable you want to add a value to, and the value to add. If you leave out the value to add, Tcl/Tk assumes you want to add 1. The value `1` is called a default. 
When you don't put some arguments onto a command line, you get a default value.

We can rewrite the previous loop using the `incr` command, and it will look like this:

```tcl
set buttonNum 0
for {set row 1} {$row <= 10} {incr row} {
    for {set column 1} {$column <= 10} {incr column} {
        button .b_$buttonNum -text "$buttonNum" \
            -command "tk_messageBox -type ok -message $buttonNum"
        grid .b_$buttonNum -row $row -column $column
        incr buttonNum
    }
}
```

It's not a big change, but it makes the program easier to read.

Using the nested loops makes it a little simpler to create the buttons on the game board.

When you've got 10 numbers in a row, the "Too High" and "Too Low" clues make sense. When the buttons are in a grid, those clues don't make so much sense. It would make more sense to say "LEFT", "RIGHT", "UP" or "DOWN". And then it's not a number guessing game, it's a position guessing game.

Instead of having a single secret number, we can change the game to have two secret numbers - a secret row and a secret column. When the player clicks too far to the left, they get told to go right; if they click too high, they get told to go down, and so on.

It would be easy to write the loops like the next code. There's a problem with this. Look at it for a moment and see if you can see what the problem is. 
Here's a hint - the code will run just fine - it just won't do the right thing.

You might try typing it into Komodo Edit to see what it does.

```tcl
for {set x 1} {$x <= 10} {incr x} {
    for {set y 1} {$y <= 10} {incr y} {
        if {$x < $secretX} {
            set message "Right"
        }
        if {$x > $secretX} {
            set message "Left"
        }
        if {$y < $secretY} {
            set message "Down"
        }
        if {$y > $secretY} {
            set message "Up"
        }
        button .b_$buttonNum -text "Guess $x $y" \
            -command "tk_messageBox -type ok -message \"$message\""
        grid .b_$buttonNum -column $x -row $y
        incr buttonNum
    }
}
```

You probably noticed that when you select a position to the left and up, the code sets the `message` variable to "Left", and then overwrites that value and sets the `message` variable to "Up".

When you play the game, you need to go up and down until you find the correct row, then left and right to the correct column.

That's not a fun way to play the game. We can make it better.

Just like it's pretty common to want to add one value to the contents of a variable, it's common to want to add a new string onto the end of the contents of a variable.

The command to append one string onto the end of another is `append`.

Like the `incr` command, the `append` command needs to know the variable to append new values to, and the values to append. Unlike `incr`, there is no default value for the `append` command.

To make a variable with the string `first second` in it, you might do something like this:

```tcl
set message "first"
append message " second"
```

Notice that we've got a space after the first quote in the `append` command. The `append` command will append every single character that you give it to the string in your variable. It won't add any new characters (like a space) or skip any. 
If you left out the space, the new contents of `message` would be `firstsecond`.

Here's the code for a position guessing game using the `tk_messageBox` commands. Notice the first `if` command. We'll discuss that right after you look at this example.

```tcl
# Calculate 2 secret numbers between 1 and 10
set secretX [expr 1 + int(rand() * 10)]
set secretY [expr 1 + int(rand() * 10)]

# The variable buttonNum will make it easy to build
# unique names for each button.
set buttonNum 0

for {set x 1} {$x <= 10} {incr x} {
    for {set y 1} {$y <= 10} {incr y} {
        # Initialize the message to be an empty string
        set message {}

        # If the X and Y both match the secrets,
        # the player has won.
        if {($x == $secretX) && ($y == $secretY)} {
            set message "You Win"
        }

        # If x is less than the secret, the player needs to guess
        # further to the right.
        if {$x < $secretX} {
            append message "Right "
        }

        # If x is greater than the secret, the player needs to guess
        # further to the left.
        if {$x > $secretX} {
            append message "Left "
        }

        # If y is less than the secret, the player needs to guess
        # further down.
        if {$y < $secretY} {
            append message "Down"
        }

        # If y is greater than the secret, the player needs to guess
        # further up.
        if {$y > $secretY} {
            append message "Up"
        }

        button .b_$buttonNum -text "Guess $x $y" \
            -command "tk_messageBox -type ok -message {$message}"
        grid .b_$buttonNum -column $x -row $y
        incr buttonNum
    }
}
```

The `if` command in that example has two tests: one to check if `$x` matches `$secretX` and one to check if `$y` matches `$secretY`.

These tests are combined using the `&&` symbols. 
The `&&` means both of the tests have to be true for the `if` command to accept that the test is true.

The `&&` symbols are called boolean operators.

Boolean operators are just like the normal arithmetic operators you're familiar with, like `+`, `-` and so forth.

But while arithmetic operators work with lots of numbers, the boolean operators only work with 2 values: TRUE and FALSE. We sometimes represent FALSE as a 0 and TRUE as any non-zero number, sometimes as the words TRUE and FALSE, and sometimes we use the word yes for TRUE and no for FALSE.

The two boolean operators we use for building `if` command tests are:

| Symbol | Name | Example | Description |
|--------|------|---------|-------------|
| `&&` | AND | `$a && $b` | Every argument must be true. |
| `\|\|` | OR | `$a \|\| $b` | At least one argument must be true. |

You can group sets of boolean expressions just like you can group parts of an arithmetic expression with parentheses.

For instance, if you want to make sure that the addition is done before the multiplication in an arithmetic expression, you'd write `(1 + 2) x 3`.

With boolean algebra, you group the tests in the order you want them done.

This test below is from the `if` command above. Tcl/Tk evaluates it in 3 steps:

```tcl
($x == $secretX) && ($y == $secretY)
```

First Tcl/Tk checks to see if `$x` is the same as `$secretX`. If it is, Tcl/Tk replaces that part of the command with a TRUE.

```tcl
(TRUE) && ($y == $secretY)
```

If `$x` is not the same as `$secretX`, then we're done. A single FALSE means that the `&&` can't be TRUE.

But if `$x` is the same as `$secretX`, Tcl/Tk will check if `$y` is the same as `$secretY`, and then will replace that part of the command as necessary.

If `$y` is the same as `$secretY`, the new command will look like this:

```tcl
(TRUE) && (TRUE)
```

Finally, Tcl/Tk compares the results to see if they are all TRUE. 
In this example, they are both TRUE, so the boolean expression is TRUE.

If both of these tests are TRUE, then the `if` command will look at the action associated with it.

Take another look at the `button` in the command above. The button text is `Guess $x $y`. The `$x` and `$y` will be replaced by the values of the x and y variables, so the actual text will be things like `Guess 2 3`.

I put the `$x` and `$y` into the button command so you can see how the buttons are arranged on the screen.

Do you need that text? What would happen if you had no `-text` argument for the `button` command, or if you just used a question mark, or your name? Try changing these on the button and see what happens.

Try playing with the nested loops in Komodo Edit. Change the program to use a 15x15 grid instead of 10x10.

The important parts of this lesson are:

- You can nest one loop inside another.
- You can add a number to the contents of a variable with the `incr` command.
- You can use `incr` to add a negative number to the contents of a variable to do a subtraction.
- You can make complex tests for `if` commands using boolean operators.
- You can append a string to a variable with the `append` command.

In the next lesson, we'll learn some more things about `snack` and make this program talk to us instead of using the `tk_messageBox` popups.
# Use LSTM Network for Linear System Identification

This example shows how to use long short-term memory (LSTM) neural networks to estimate a linear system and compares this approach to transfer function estimation.

In this example, you investigate the ability of an LSTM network to capture the underlying dynamics of a modeled system. To do this, you train an LSTM network on the input and output signal from a linear transfer function, and measure the accuracy of the network response to a step change.

### Transfer Function

This example uses a fourth-order transfer function with mixed fast and slow dynamics and moderate damping. The moderate damping causes the system dynamics to damp out over a longer time horizon and shows the ability of an LSTM network to capture the mixed dynamics without some of the important response dynamics damping out. Construct the transfer function by specifying the poles and zeros of the system.

```matlab
fourthOrderMdl = zpk(-4,[-9+5i;-9-5i;-2+50i;-2-50i],5e5);
[stepResponse,stepTime] = step(fourthOrderMdl);
```

Plot the step response of the transfer function.

```matlab
plot(stepTime,stepResponse)
grid on
axis tight
title('Fourth-order mixed step response')
```

The Bode plot shows the bandwidth of the system, which is measured as the first frequency where the gain drops below 70.8%, or around 3 dB, of its DC value.

```matlab
bodeplot(fourthOrderMdl)
fb = bandwidth(fourthOrderMdl)
```

```
fb = 62.8858
```

### Generate Training Data

Construct a data set of input and output signals that you can use to train an LSTM network. For the input, generate a random Gaussian signal. 
Simulate the response of the transfer function `fourthOrderMdl` to this input to obtain the output signal.

#### Gaussian Noise Data

Specify properties for the random Gaussian noise training signal.

```matlab
signalType = 'rgs';   % Gaussian
signalLength = 5000;  % Number of points in the signal
fs = 100;             % Sampling frequency
signalAmplitude = 1;  % Maximum signal amplitude
```

Generate the Gaussian noise signal using `idinput` and scale the result.

```matlab
urgs = idinput(signalLength,signalType);
urgs = (signalAmplitude/max(urgs))*urgs';
```

Generate the time signal based on the sample rate.

```matlab
trgs = 0:1/fs:length(urgs)/fs-1/fs;
```

Use the `lsim` function to generate the response of the system and store the result in `yrgs`. Transpose the simulated output so that it corresponds to the LSTM data structure, which requires row vectors and not column vectors.

```matlab
yrgs = lsim(fourthOrderMdl,urgs,trgs);
yrgs = yrgs';
```

Similarly, create a shorter validation signal to use during network training.

```matlab
xval = idinput(100,signalType);
yval = lsim(fourthOrderMdl,xval,trgs(1:100));
```

### Create and Train Network

The following network architecture was determined by using a Bayesian optimization routine where the Bayesian optimization cost function uses independent validation data (see the accompanying `bayesianOptimizationForLSTM.mlx` for the details). Although multiple architectures may work, this optimization provides the most computationally efficient one. The optimization process also showed that as the complexity of the transfer function increases when applying LSTM to other linear transfer functions, the architecture of the network does not change significantly. Rather, the number of epochs needed to train the network increases. The number of hidden units required for modeling a system is related to how long the dynamics take to damp out. In this case there are two distinct parts to the response: a high frequency response and a low frequency response. 
A higher number of hidden units is required to capture the low frequency response. If a lower number of units is selected, the high frequency response is still modeled. However, the estimation of the low frequency response deteriorates.

Create the network architecture.

```matlab
numResponses = 1;
featureDimension = 1;
numHiddenUnits = 100;
maxEpochs = 1000;
miniBatchSize = 200;

Networklayers = [sequenceInputLayer(featureDimension) ...
    lstmLayer(numHiddenUnits) ...
    lstmLayer(numHiddenUnits) ...
    fullyConnectedLayer(numResponses) ...
    regressionLayer];
```

The initial learning rate impacts the success of the network. Using an initial learning rate that is too high results in high gradients, which lead to longer training times. Longer training times can lead to saturation of the fully connected layer of the network. When the network saturates, the outputs diverge and the network outputs a `NaN` value. Hence, use the default value of 0.01, which is a relatively low initial learning rate. This results in monotonically decreasing residual and loss curves. Use a piecewise rate schedule to keep the optimization algorithm from getting trapped in local minima at the start of the optimization routine.

```matlab
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThreshold',10, ...
    'Shuffle','once', ...
    'Plots','training-progress',...
    'ExecutionEnvironment','gpu',...
    'LearnRateSchedule','piecewise',...
    'LearnRateDropPeriod',100,...
    'Verbose',0,...
    'ValidationData',[{xval'} {yval'}]);

loadNetwork = true; % Set to false to train the network using a parallel pool.
if loadNetwork
    load('fourthOrderMdlnet','fourthOrderNet')
else
    poolobj = parpool;
    fourthOrderNet = trainNetwork(urgs,yrgs,Networklayers,options);
    delete(poolobj)
    save('fourthOrderMdlnet','fourthOrderNet','urgs','yrgs');
end
```

### Evaluate Model Performance

A network performs well if it is successful in capturing the system dynamic behavior. 
Evaluate the network performance by measuring the ability of the network to accurately predict the system response to a step input.

Construct a step input.

```
stepTime = 2; % In seconds
stepAmplitude = 0.1;
stepDuration = 4; % In seconds

% Construct step signal and system output.
time = (0:1/fs:stepDuration)';
stepSignal = [zeros(sum(time<=stepTime),1);stepAmplitude*ones(sum(time>stepTime),1)];
systemResponse = lsim(fourthOrderMdl,stepSignal,time);

% Transpose input and output signal for network inputs.
stepSignal = stepSignal';
systemResponse = systemResponse';
```

Use the trained network to evaluate the system response. Compare the system and estimated responses in a plot.

```
fourthOrderMixedNetStep = predict(fourthOrderNet,stepSignal);

figure
title('Step response estimation')
plot(time,systemResponse,'k', time,fourthOrderMixedNetStep)
grid on
legend('System','Estimated')
title('Fourth-Order Step')
```

The plot shows two issues with the fit. First, the initial state of the network is not stationary, which results in transient behavior at the start of the signal. Second, the prediction of the network has a slight offset.

### Initialize Network and Adjust Fit

To initialize the network state to the correct initial condition, you must update the network state to correspond to the state of the system at the start of the test signal.

You can adjust the initial state of the network by comparing the estimated response of the system at the initial condition to the actual response of the system. Use the difference between the estimation of the initial state by the network and the actual response of the initial state to correct for the offset in the system estimation.

#### Set Network Initial State

As the network performs estimation using a step input from 0 to 1, the states of the LSTM network (cell and hidden states of the LSTM layers) drift toward the correct initial condition.
To visualize this, extract the cell and hidden state of the network at every time step using the `predictAndUpdateState` function.

Use only the cell and hidden state values prior to the step, which occurs at 2 seconds. Define a time marker for 2 seconds, and extract the values up to this marker.

```
stepMarker = time <= 2;
yhat = zeros(sum(stepMarker),1);
hiddenState = zeros(sum(stepMarker),200); % 200 LSTM units
cellState = zeros(sum(stepMarker),200);
for ntime = 1:sum(stepMarker)
    [fourthOrderNet,yhat(ntime)] = predictAndUpdateState(fourthOrderNet,stepSignal(ntime)');
    hiddenState(ntime,:) = fourthOrderNet.Layers(2,1).HiddenState;
    cellState(ntime,:) = fourthOrderNet.Layers(2,1).CellState;
end
```

Next, plot the hidden and cell states over the period before the step and confirm that they converge to fixed values.

```
figure
subplot(2,1,1)
plot(time(1:200),hiddenState(1:200,:))
grid on
axis tight
title('Hidden State')
subplot(2,1,2)
plot(time(1:200),cellState(1:200,:))
grid on
axis tight
title('Cell State')
```

To initialize the network state for a zero input signal, choose an input signal of zero and choose the duration so that the signal is long enough for the networks to reach steady state.

```
initializationSignalDuration = 10; % In seconds
initializationValue = 0;
initializationSignal = initializationValue*ones(1,initializationSignalDuration*fs);

fourthOrderNet = predictAndUpdateState(fourthOrderNet,initializationSignal);
```

Even with the correct initial condition, when the network is given a zero signal the output is not quite zero. This is because of an incorrect bias term that the algorithm learned during the training process.

```
figure
zeroMapping = predict(fourthOrderNet,initializationSignal);
plot(zeroMapping)
axis tight
```

Now that the network is correctly initialized, use the network to predict the step response again and plot the results.
The initial disturbance is gone.

```
fourthOrderMixedNetStep = predict(fourthOrderNet,stepSignal);

figure
title('Step response estimation')
plot(time,systemResponse,'k', ...
    time,fourthOrderMixedNetStep,'b')
grid on
legend('System','Estimated')
title('Fourth-Order Step - Adjusted State')
```

Even after you set the network initial state to compensate for the initial condition of the test signal, a small offset is still visible in the predicted response. This is because of the incorrect bias term that the LSTM network learned during training. You can fix the offset by using the same initialization signal that was used for updating the network states. The initialization signal is expected to map the network to zero. The offset between zero and the network estimation is the error in the bias term learned by the network. Summing the bias term calculated at each layer comes close to the bias detected in the response. Adjusting for the network bias term at the network output, however, is easier than adjusting the individual bias terms in each layer of the network.

```
bias = mean(predict(fourthOrderNet,initializationSignal));
fourthOrderMixedNetStep = fourthOrderMixedNetStep-bias;

figure
title('Step response estimation')
plot(time,systemResponse,'k',time,fourthOrderMixedNetStep,'b-')
legend('System','Estimated')
title('Fourth-Order Step - Adjusted Offset')
```

### Move Outside of Training Range

All the signals used to train the network had a maximum amplitude of 1 and the step function had an amplitude of 0.1. Now, investigate the behavior of the network outside of these ranges.

#### Time Shift

Introduce a time shift by adjusting the time of the step. Set the time of the step to 3 seconds, 1 second longer than in the training set.
Plot the resulting network output and note that the output is correctly delayed by 1 second.

```
stepTime = 3; % In seconds
stepAmplitude = 0.1;
stepDuration = 5; % In seconds

[stepSignal,systemResponse,time] = generateStepResponse(fourthOrderMdl,stepTime,stepAmplitude,stepDuration);

fourthOrderMixedNetStep = predict(fourthOrderNet,stepSignal);
bias = fourthOrderMixedNetStep(1) - initializationValue;
fourthOrderMixedNetStep = fourthOrderMixedNetStep-bias;

figure
plot(time,systemResponse,'k', time,fourthOrderMixedNetStep,'b')
grid on
axis tight
```

#### Amplitude Shift

Next, increase the amplitude of the step function to investigate the network behavior as the system inputs move outside of the range of the training data. To measure the drift outside of the training data range, you can measure the probability density function of the amplitudes in the Gaussian noise signal. Visualize the amplitudes in a histogram.

```
figure
histogram(urgs,'Normalization','pdf')
grid on
```

Set the amplitude of the step function according to the percentile of the distribution.
Plot the prediction error as a function of the percentile.

```
pValues = [60:2:98, 90:1:98, 99:0.1:99.9 99.99];
stepAmps = prctile(urgs,pValues); % Amplitudes

stepTime = 3; % In seconds
stepDuration = 5; % In seconds
stepMSE = zeros(length(stepAmps),1);
fourthOrderMixedNetStep = cell(length(stepAmps),1);
steps = cell(length(stepAmps),1);
for nAmps = 1:length(stepAmps)
    % Fourth-order mixed
    [stepSignal,systemResponse,time] = generateStepResponse(fourthOrderMdl,stepTime,stepAmps(nAmps),stepDuration);
    fourthOrderMixedNetStep{nAmps} = predict(fourthOrderNet,stepSignal);
    bias = fourthOrderMixedNetStep{nAmps}(1) - initializationValue;
    fourthOrderMixedNetStep{nAmps} = fourthOrderMixedNetStep{nAmps}-bias;
    stepMSE(nAmps) = sqrt(sum((systemResponse-fourthOrderMixedNetStep{nAmps}).^2));
    steps{nAmps,1} = systemResponse;
end

figure
plot(pValues,stepMSE,'bo')
title('Prediction Error as a Function of Deviation from Training Range')
grid on
axis tight
```

```
subplot(2,1,1)
plot(time,steps{1},'k', time,fourthOrderMixedNetStep{1},'b')
grid on
axis tight
title('Best Performance')
xlabel('time')
ylabel('System Response')
subplot(2,1,2)
plot(time,steps{end},'k', time,fourthOrderMixedNetStep{end},'b')
grid on
axis tight
title('Worst Performance')
xlabel('time')
ylabel('System Response')
```

As the amplitude of the step response moves outside of the range of the training set, the LSTM attempts to estimate the average value of the response.

These results show the importance of using training data that is in the same range as the data that will be used for prediction.
Otherwise, prediction results are unreliable.

### Change System Bandwidth

Investigate the effect of the system bandwidth on the number of hidden units selected for the LSTM network by modeling the fourth-order mixed-dynamics transfer function with four different networks:

• Small network with 5 hidden units and a single LSTM layer

• Medium network with 10 hidden units and a single LSTM layer

• Full network with 100 hidden units and a single LSTM layer

• Deep network with 2 LSTM layers (each with 100 hidden units)

```
load('variousHiddenUnitNets.mat')
```

Generate a step signal.

```
stepTime = 2; % In seconds
stepAmplitude = 0.1;
stepDuration = 4; % In seconds

% Construct step signal.
time = (0:1/fs:stepDuration)';
stepSignal = [zeros(sum(time<=stepTime),1);stepAmplitude*ones(sum(time>stepTime),1)];
systemResponse = lsim(fourthOrderMdl,stepSignal,time);

% Transpose input and output signal for network inputs.
stepSignal = stepSignal';
systemResponse = systemResponse';
```

Estimate the system response using the various trained networks.

```
smallNetStep = predict(smallNet,stepSignal)-smallNetZeroMapping(end);
medNetStep = predict(medNet,stepSignal)-medNetZeroMapping(end);
fullnetStep = predict(fullNet,stepSignal) - fullNetZeroMapping(end);
doubleNetStep = predict(doubleNet,stepSignal) - doubleNetZeroMapping(end);
```

Plot the estimated response.

```
figure
title('Step response estimation')
plot(time,systemResponse,'k', ...
    time,doubleNetStep,'g', ...
    time,fullnetStep,'r', ...
    time,medNetStep,'c', ...
    time,smallNetStep,'b')
grid on
legend({'System','Double Net','Full Net','Med Net','Small Net'},'Location','northwest')
title('Fourth-Order Step')
```

Note that all the networks capture the high frequency dynamics in the response well. However, plot a moving average of the responses in order to compare the slowly varying dynamics of the system.
The ability of the LSTM to capture the longer term dynamics (lower frequency dynamics) of the linear system is directly related to the dynamics of the system and the number of hidden units in the LSTM. The number of layers in the LSTM is not directly related to the long-term behavior but rather adds flexibility to adjust the estimation from the first layer.

```
figure
title('Slow dynamics component')
plot(time,movmean(systemResponse,50),'k')
hold on
plot(time,movmean(doubleNetStep,50),'g')
plot(time,movmean(fullnetStep,50),'r')
plot(time,movmean(medNetStep,50),'c')
plot(time,movmean(smallNetStep,50),'b')
grid on
legend('System','Double Net','Full net','Med Net','Small Net','Location','northwest')
title('Fourth Order Step')
```

### Add Noise to Measured System Response

Add random noise to the system output to explore the effect of noise on the LSTM performance. To this end, add white noise with levels of 1%, 5%, and 10% to the measured system responses. Use the noisy data to train the LSTM network. With the same noisy data sets, estimate linear models by using `tfest`.
Simulate these models and use the simulated responses as the baseline for a performance comparison.

Use the same step function as before:

```
stepTime = 2; % In seconds
stepAmplitude = 0.1;
stepDuration = 4; % In seconds
[stepSignal,systemResponse,time] = generateStepResponse(fourthOrderMdl,stepTime,stepAmplitude,stepDuration);
```

Load the trained networks and estimate the system response.

```
load('noisyDataNetworks.mat')
netNoise1Step = predictAndAdjust(netNoise1,stepSignal,initializationSignal,initializationValue);
netNoise5Step = predictAndAdjust(netNoise5,stepSignal,initializationSignal,initializationValue);
netNoise10Step = predictAndAdjust(netNoise10,stepSignal,initializationSignal,initializationValue);
```

A transfer function estimator (`tfest`) is used to estimate the function at the above noise levels to compare the resilience of the networks to noise (see the accompanying `noiseLevelModels.m` for more details).

```
load('noisyDataTFs.mat')
tfStepNoise1 = lsim(tfNoise1,stepSignal,time);
tfStepNoise5 = lsim(tfNoise5,stepSignal,time);
tfStepNoise10 = lsim(tfNoise10,stepSignal,time);
```

Plot the generated responses.

```
figure
plot(time,systemResponse,'k', ...
    time,netNoise1Step, ...
    time,netNoise5Step, ...
    time,netNoise10Step)
grid on
legend('System Response','1% Noise','5% Noise','10% Noise')
title('Deep LSTM with noisy data')
```

Now, plot the estimated transfer functions.

```
figure
plot(time,systemResponse,'k', ...
    time,tfStepNoise1, ...
    time,tfStepNoise5, ...
    time,tfStepNoise10)
grid on
legend('System Response','1% Noise','5% Noise','10% Noise')
title('Transfer functions fitted to noisy data')
```

Calculate the mean squared error to better assess the performance of the different models at different noise levels.

```
msefun = @(y,yhat) mean(sqrt((y-yhat).^2)/length(y));

% LSTM errors
lstmMSE(1,:) = msefun(systemResponse,netNoise1Step);
lstmMSE(2,:) = msefun(systemResponse,netNoise5Step);
lstmMSE(3,:) = msefun(systemResponse,netNoise10Step);

% Transfer function errors
tfMSE(1,:) = msefun(systemResponse,tfStepNoise1');
tfMSE(2,:) = msefun(systemResponse,tfStepNoise5');
tfMSE(3,:) = msefun(systemResponse,tfStepNoise10');

mseTbl = array2table([lstmMSE tfMSE],'VariableNames',{'LSTMMSE','TFMSE'})
```

```
mseTbl=3×2 table
     LSTMMSE        TFMSE
    __________    __________
    1.0115e-05    8.8621e-07
    2.5577e-05    9.9064e-06
    5.1791e-05    3.6831e-05
```

Noise has a similar effect on both the LSTM and transfer-function estimation results.

### Helper Functions

```
function [stepSignal,systemResponse,time] = generateStepResponse(model,stepTime,stepAmp,signalDuration)
%Generates a step response for the given model.
%
% Check model type
modelType = class(model);

if nargin < 2
    stepTime = 1;
end
if nargin < 3
    stepAmp = 1;
end
if nargin < 4
    signalDuration = 10;
end

% Construct step signal
if model.Ts == 0
    Ts = 1e-2;
    time = (0:Ts:signalDuration)';
else
    time = (0:model.Ts:signalDuration)';
end
stepSignal = [zeros(sum(time<=stepTime),1);stepAmp*ones(sum(time>stepTime),1)];
switch modelType
    case {'tf', 'zpk'}
        systemResponse = lsim(model,stepSignal,time);
    case 'idpoly'
        systemResponse = sim(model,stepSignal,time);
    otherwise
        error('Model passed is not supported')
end
stepSignal = stepSignal';
systemResponse = systemResponse';
end
```
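For readers working outside MATLAB, the step-response construction performed by `generateStepResponse` can be sketched in Python with SciPy. The transfer function below is a placeholder second-order system, since the original `fourthOrderMdl` coefficients are not given in this excerpt; the step timing and amplitude mirror the example above.

```python
import numpy as np
from scipy import signal

def generate_step_response(system, step_time=1.0, step_amp=1.0, duration=10.0, fs=100):
    """Build a step input and simulate the LTI system response.

    Mirrors the MATLAB helper: zero before step_time, step_amp after it.
    """
    time = np.arange(0, duration + 1 / fs, 1 / fs)
    step_signal = np.where(time > step_time, step_amp, 0.0)
    # lsim returns (time, output, state trajectory) for a continuous-time system.
    _, yout, _ = signal.lsim(system, U=step_signal, T=time)
    return step_signal, yout, time

# Placeholder system standing in for fourthOrderMdl (assumption, not the original model).
sys = signal.lti([1.0], [1.0, 0.5, 1.0])
u, y, t = generate_step_response(sys, step_time=2.0, step_amp=0.1, duration=4.0)
```

With `fs = 100` and a 4-second duration this produces 401 samples, matching the `(0:1/fs:stepDuration)'` grid in the MATLAB code.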
https://www.engineeringexpert.net/Engineering-Expert-Witness-Blog/tag/overload
Ever take a peek inside the toaster while you’re waiting for the toast to pop up? If so, you would have noticed a bright orange glow. That glow is produced when the toasting wires heat up, which in turn creates a nice crusty surface on your bread or waffle. It’s the same phenomenon as when the filament inside an incandescent bulb glows. The light and heat produced in both these cases are the result of the Joule, pronounced “jewel,” effect at work.

To understand Joule heating, let’s first refresh our memories as to electrical current resistance. We learned previously that wire is not a perfect conductor, and as such resistance to flow is encountered. This resistance causes power to be lost along the length of wire, in accordance with this equation:

Power Loss = I² × R

Where I is the electric current flowing through a wire, and R is the total electrical resistance of the wire. The power loss is measured in units of Joules per second, otherwise known as watts, “watt” denoting a metric unit of power. It is named after the famed Scottish mechanical engineer, James Watt, who is responsible for inventing the modern steam engine. A Joule is a metric unit of heat energy, named after the English scientist James Prescott Joule. He was a pioneer in the field of thermodynamics, a branch of physics concerned with the relationships between different forms of energy.

Anyway, to see how the equation works, let’s look at an example. Suppose we have 12 feet of 12 AWG copper wire. We are using it to feed power to an appliance that draws 10 amperes of electric current. Going to our handy engineering reference book, we find that the 12 AWG wire has an electrical resistance of 0.001588 ohms per foot, “ohm” being a unit of electrical resistance.
Plugging in the numbers, our equation for total electrical resistance becomes:

R = (0.001588 ohms per foot) × 12 feet = 0.01905 ohms

And we can now calculate power loss as follows:

Power = I² × R = (10 amperes)² × (0.01905 ohms) = 1.905 watts

Instead of using a 12 AWG wire, let’s use a smaller diameter wire, say, 26 AWG. Our engineering reference book says that 26 AWG wire has an electrical resistance of 0.0418 ohms per foot. So let’s see how this changes the power loss:

R = (0.0418 ohms per foot) × 12 feet = 0.5016 ohms

Power = I² × R = (10 amperes)² × (0.5016 ohms) = 50.16 watts

This explains why appliances like space heaters and window unit air conditioners have short, thick power cords. They draw a lot of current when they operate, and a short power cord, precisely because it is short, poses less electrical resistance than a long cord. A thicker cord also helps reduce resistance to power flow. The result is a large amount of current flowing through a superhighway of wire, the wide berth reducing both the amount of power loss and the probability of the dangerous Joule heating effect taking place.

Our example shows that the electric current flowing through the 12 AWG wire loses 1.905 watts of power due to the inconsistencies within the wire, and this in turn causes the wire to heat up. This is Joule heating at work. Joule heating of 50.16 watts in the thinner 26 AWG wire can lead to serious trouble.

When using a power cord, heat moves from the copper wire within it, whose job it is to conduct electricity, and beyond, on to the electrical insulation that surrounds it. There the heat is not trapped, but escapes into the environment surrounding the cord. If the wire has low internal resistance and the amount of current flowing through it is within limits which are deemed to be acceptable, then Joule heating can be safely dissipated and the wire remains cool.
But if the current goes beyond the safe limit, as specified in the American Wire Gauge (AWG) table for that type of wire, then overheating can be the result. The electrical insulation may start to melt and burn, and the local fire department may then become involved.

That’s it for wire sizing and electric current. Next time we’ll slip back into the mechanical world and explore a new topic: the principles of ventilation.

_____________________________________________
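The power-loss arithmetic in the example above is easy to verify programmatically. The sketch below (in Python, for illustration; the article itself uses no code) recomputes the Joule heating for both wire gauges, using the resistance-per-foot values quoted in the article. Note the article rounds the total resistance to 0.01905 ohms before multiplying, so its 1.905 watts differs in the last digit from the unrounded 1.9056 watts.

```python
def joule_power_loss(current_amps, ohms_per_foot, length_ft):
    """Joule heating: P = I^2 * R, with R the total resistance of the wire run."""
    resistance = ohms_per_foot * length_ft
    return current_amps ** 2 * resistance

# 12 feet of wire carrying 10 amperes, per the article's example.
p12 = joule_power_loss(10, 0.001588, 12)  # 12 AWG: about 1.906 watts
p26 = joule_power_loss(10, 0.0418, 12)    # 26 AWG: about 50.16 watts
print(round(p12, 3), round(p26, 2))
```

The hundredfold jump in dissipated power for the thinner wire is what makes undersized cords a fire risk.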
http://st551.cwick.co.nz/lecture/lecture_03/
# Random Sampling Studies ST551 Lecture 3

## Random Sampling studies: General Setting

Key components of the study setting:

• Population(s) of interest
• Variable of interest
• Parameter of interest
• (Specific) Question/Hypothesis of interest

## Random Sampling study - notation

Take a random sample of $$n$$ (sampling) units from the population of interest.

Measure outcome variable of interest on each unit:

$$Y_i =$$ measurement of outcome on $$i$$th unit sampled, $$i = 1, \ldots, n$$.

Maybe also measure some other explanatory/predictor variable on units:

$$X_i =$$ measurement of explanatory variable on $$i$$th unit sampled, $$i = 1, \ldots, n$$

## Data Settings

One sample: One outcome variable (Y) measured on units

• What’s the average rent for OSU students?
• What proportion of ST551 students prefer cats to dogs?
• How large is the average family size of US households?

Two sample: One outcome variable measured on units plus one binary explanatory variable

• How does the average rent ($$Y_i$$) of undergraduate ($$X_i = 1$$) OSU students compare to graduate ($$X_i = 0$$) OSU students?

## Data Settings (cont.)

Multi-sample: One outcome variable measured on units plus one categorical (> 2 levels) explanatory variable

• Is the average rent ($$Y_i$$) of OSU students different for different kinds of accommodation (dorm, apartment, house)?

Regression settings: (ST552)

• Simple: One outcome variable and one continuous explanatory variable
  • How much does rent of OSU students ($$Y_i$$) decrease based on the number of people they live ($$X_i$$) with?
• Multiple: One outcome variable and one or more explanatory variables
  • What’s the average rent that OSU students pay for a $$Z$$ square foot house with $$X$$ bedrooms, $$D$$ miles from campus?

## For the next few weeks…

We will focus on the one sample random sampling setting.

Measure $$Y$$ on $$n$$ randomly sampled units from a population of interest.

Interested in some question/hypothesis about some parameter of the population.

## Parameters of interest

Parameter: some summary measure of $$Y$$ for all units in the population

• Population mean: average of variable of interest for all units in the population
• Population median: median of variable of interest for all units in the population
• Population variance: variance of variable of interest for all units in the population
• … any one number summary of the variable of interest for all units of the population

• Point Estimate: the single best guess of the population parameter value
• Interval Estimate: a range of likely values for the population parameter
• Hypothesis test: is a specific value of the population parameter plausible?

Do people support the idea of a single payer health system?

Discuss with a neighbor: what might be the population, variable, parameter and question/hypothesis?

Population:
Variable:
Parameter:
Question/Hypothesis:

# Probability Review

## Population Distribution

The population distribution is the distribution of $$Y$$ for the entire population.

It tells us how likely values are over the range of $$Y$$.

In particular, it provides us a probability model for $$Y$$, so we can find probabilities such as:

$P(Y \in (a, b]) = P(a < Y \le b)$

In words: the probability, for a random unit drawn from the population, that the value of the variable of interest is between $$a$$ and $$b$$ (technically greater than $$a$$ and less than or equal to $$b$$).

## Common distributions

It’s sometimes convenient to assume mathematical forms for population distributions.

Continuous distributions: the range of possible values is the real line

Normal, Exponential, t, F, Uniform, Gamma

Discrete distributions: range of possible values are distinct separate values

Bernoulli, Binomial, Poisson, Multinomial, Discrete Uniform

# The Normal Distribution

## The Normal Distribution

The classic “Bell-shaped” distribution (but not every “bell-shape” is Normal).

The standard Normal has mean 0 and variance 1.

## The Normal Distribution

Probability is found as areas under the curve of the probability density function.

E.g. $$P(0 < Y \le 1)$$ = shaded area

## The Normal Distribution

There is really a whole family of Normal distributions identified by their mean and variance.

We write $$N(\mu, \sigma^2)$$ to refer to the specific Normal with mean $$\mu$$ and variance $$\sigma^2$$.

## Properties of Normally Distributed variables

If $$X \sim N(0, 1)$$ then $$\sigma X + \mu \sim N(\mu, \sigma^2)$$

Also if $$Y \sim N(\mu, \sigma^2)$$ then $$\frac{Y - \mu}{\sigma} \sim N(0, 1)$$

More generally, if $$X \sim N(\mu, \sigma^2)$$ then

$aX + b \sim N(a\mu + b, a^2\sigma^2)$

## Properties of Normally Distributed variables

If $$X \sim N(\mu_X, \sigma_X^2)$$ and $$Y \sim N(\mu_Y, \sigma_Y^2)$$, independent of $$X$$.

Then,

$Z = X + Y \sim N(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2)$

Independent: knowing the value of one variable doesn’t help to guess the value of the other.

## Why is the Normal so important?

• Some things seem naturally Normally distributed (actually it’s pretty hard to tell)
• It’s easy to work with mathematically (this isn’t generally a good reason in practice)
• The Central Limit Theorem!

# Back to our setting

## Statistic

A statistic is a one number summary of our sample.

Usually, we use a statistic to summarize what we know from our data at hand (our sample).

• Sample mean: average calculated using the sample, $$\overline{Y} = \frac{1}{n}\sum_{i = 1}^n Y_i$$
• Sample median: middle value of the sample
• Sample standard deviation
• pretty much anything…

## Example: Commute time

I want to know the average commute time of students in the class on the first day.

Population: ST551 students present on first day of class Fall 2017
Variable of interest: Commute time in minutes
Parameter: Population mean

I randomly sample 5 index cards from those you filled out on the first day.

___ ___ ___ ___ ___

How would you use the sample to estimate the population mean?

Would your estimate have the same value regardless of the sample we obtained?

## Sampling distribution

We use a sample statistic to estimate a population parameter.

The value of the sample statistic depends on the sample we obtain.

The sample is random $$\implies$$ the sample statistic is random

That means the sample statistic has a probability distribution: the sampling distribution of the statistic

## Example: Commute time (cont.)

6, 10, 10, 15, 15, 30, 5, 25, 20, 10, 10, 20, 12, 8, 10, 15, 10, 15, 8, 8, 10, 5, 15, 18, 20, 15, 2, 15, 15, 2, 30, 7, 7, 28, 30, 10 and 10

One sample:
Next sample:

## Example: Commute time (cont.)

If we take a very large number of samples we would get a good idea of the sampling distribution of the sample mean for samples of size 5 from this population.

## Sampling distributions

Of course we don’t take many samples! So how do we know what the sampling distribution of a statistic looks like?

We’ll see inference in this setting depends on knowing the sampling distribution for the statistic being used, the sample size and the population.

Options for finding the sampling distribution:

• Derive it mathematically
• Can’t derive the distribution?
  • Derive properties of the distribution
  • Simulate
  • Approximate
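The repeated-sampling idea from the commute-time example can be illustrated directly: draw many samples of size 5 from the 37 reported commute times and look at the resulting sample means. (A NumPy sketch for illustration; the lecture itself does not prescribe any software.)

```python
import numpy as np

# Commute times (minutes) for the 37 students in the example.
times = np.array([6, 10, 10, 15, 15, 30, 5, 25, 20, 10, 10, 20, 12, 8, 10,
                  15, 10, 15, 8, 8, 10, 5, 15, 18, 20, 15, 2, 15, 15, 2,
                  30, 7, 7, 28, 30, 10, 10])

rng = np.random.default_rng(0)
n_samples, n = 10_000, 5
# Each draw is one random sample of size 5, without replacement (like drawing index cards).
means = np.array([rng.choice(times, size=n, replace=False).mean()
                  for _ in range(n_samples)])

print(times.mean())               # mean of this finite population
print(means.mean(), means.std())  # center and spread of the simulated sampling distribution
```

A histogram of `means` approximates the sampling distribution of the sample mean for samples of size 5, and its center sits near the population mean, which is the point of the example.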
https://documen.tv/question/a-satellite-orbits-earth-at-a-speed-of-22100-feet-per-second-ft-s-use-the-following-facts-to-con-24231382-67/
[ "## A satellite orbits earth at a speed of 22100 feet per second (ft/s). Use the following facts to convert this speed to miles per hour (mph).\n\nQuestion\n\nA satellite orbits earth at a speed of 22100 feet per second (ft/s). Use the following facts to convert this speed to miles per hour (mph). 1 mile = 5280 ft 1 min = 60 sec 1 hour = 60 min", null, "", null, "" ]
[ null, "https://documen.tv/wp-content/ql-cache/quicklatex.com-19f64280a6a7f4b9ce9b45ce914bcfbb_l3.png", null, "https://documen.tv/wp-content/ql-cache/quicklatex.com-f7b4bf275d3ffd65c44d818d589fa8ce_l3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65213364,"math_prob":0.8823258,"size":499,"snap":"2022-05-2022-21","text_gpt3_token_len":159,"char_repetition_ratio":0.09090909,"word_repetition_ratio":0.52380955,"special_character_ratio":0.38076153,"punctuation_ratio":0.10619469,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96329504,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T08:01:26Z\",\"WARC-Record-ID\":\"<urn:uuid:288d5c1b-80a5-43fe-a0a3-5946ffa90bff>\",\"Content-Length\":\"79894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:306cab35-240b-447d-82b1-326602d32b23>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0f59936-6060-42f6-97e4-4c33f58cbb3d>\",\"WARC-IP-Address\":\"103.57.223.32\",\"WARC-Target-URI\":\"https://documen.tv/question/a-satellite-orbits-earth-at-a-speed-of-22100-feet-per-second-ft-s-use-the-following-facts-to-con-24231382-67/\",\"WARC-Payload-Digest\":\"sha1:Z5GESOOCQ2BQZDJANRSUY7N36TWN4I74\",\"WARC-Block-Digest\":\"sha1:JWJKSABAE3T3SOB73KS7MVE7PBYRAKM4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663013003.96_warc_CC-MAIN-20220528062047-20220528092047-00440.warc.gz\"}"}
http://web.newworldencyclopedia.org/entry/Logic
[ "# Logic\n\nLogic, from Classical Greek λόγος (logos), originally meaning the word, or what is spoken (but coming to mean thought or reason or an explanation or a justification or key), is most often said to be the study of criteria for the evaluation of arguments, although the exact definition of logic is a matter of controversy among philosophers. However the subject is grounded, the task of the logician is the same: to advance an account of valid and fallacious inference, in order to allow one to distinguish good from bad arguments.\n\nTraditionally, logic is studied as a branch of philosophy. Since the mid-1800s logic has also been commonly studied in mathematics, and, more recently, in set theory and computer science. As a science, logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference, often expressed in symbolic or formal language, and through the study of arguments in natural language (a spoken language such as English, Italian, or Japanese). The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialist analyses of reasoning such as probability, correct reasoning, and arguments involving causality.\n\n## Nature of logic\n\nBecause of its fundamental role in philosophy, the nature of logic has been the object of intense dispute; it is not possible clearly to delineate the bounds of logic in terms acceptable to all rival viewpoints. Despite that controversy, the study of logic has been very coherent and technically grounded.
In this article, we first characterize logic by introducing fundamental ideas about form, then by outlining some schools of thought, as well as by giving a brief overview of logic's history, an account of its relationship to other sciences, and finally, an exposition of some of logic's essential concepts.\n\n### Informal, formal and symbolic logic\n\nThe crucial concept of form is central to discussions of the nature of logic, and it complicates exposition that the term 'formal' in \"formal logic\" is commonly used in an ambiguous manner. We shall start by giving definitions that we shall adhere to in the rest of this article:\n\n• Informal logic is the study of arguments expressed in natural language. The study of fallacies—often known as informal fallacies—is an especially important branch of informal logic.\n• An inference possesses a purely formal content if it can be expressed as a particular application of a wholly abstract rule, that is a rule that is not about any particular thing or property. (For example: The argument \"If John was strangled he died. John was strangled. Therefore John died.\" is an example, in English, of the argument form or rule, \"If P then Q. P is true. Therefore Q is true.\" Moreover, this is a valid argument form, known since the Middle Ages as Modus Ponens.) We will see later that on many definitions of logic, logical inference and inference with purely formal content are the same thing. 
This does not render the notion of informal logic vacuous, since one may wish to investigate logic without committing to a particular formal analysis.\n• Formal logic is the field of study in which we are concerned with the form or structure of the inferences rather than the content.\n• Symbolic logic is the study of abstractions, expressed in symbols, that capture the formal features of logical inference.\n\nThe ambiguity is that \"formal logic\" is very often used with the alternate meaning of symbolic logic as we have defined it, with informal logic meaning any logical investigation that does not involve symbolic abstraction; it is this sense of 'formal' that is parallel to the received usages coming from \"formal languages\" or \"formal theory.\"\n\nWhile formal logic is old, on the above analysis, dating back more than two millennia to the work of Aristotle, symbolic logic is comparatively new, and arises with the application of insights from mathematics to problems in logic. The passage from informal logic through formal logic to symbolic logic can be seen as a passage of increasing theoretical sophistication; of necessity, appreciating symbolic logic requires internalizing certain conventions that have become prevalent in the symbolic analysis of logic. Generally, logic is captured by a formal system, comprising a formal language, which describes a set of formulas and a set of rules of derivation. The formulas will normally be intended to represent claims that we may be interested in, and likewise the rules of derivation represent inferences; such systems usually have an intended interpretation.\n\nWithin this formal system, the rules of derivation of the system and its axioms (see the article Axiomatic Systems) then specify a set of theorems, which are formulas that are derivable from the system using the rules of derivation. 
The most essential property of a logical formal system is soundness, which is the property that under interpretation, all of the rules of derivation are valid inferences. The theorems of a sound formal system are then truths of that system. A minimal condition which a sound system should satisfy is consistency, meaning that no theorem contradicts another; another way of saying this is that no statement or formula and its negation are both derivable from the system. Also important for a formal system is completeness, meaning that everything true is also provable in the system. However, when the language of logic reaches a certain degree of expressiveness (say, second-order logic), completeness becomes impossible to achieve in principle.\n\nIn the case of formal logical systems, the theorems are often interpretable as expressing logical truths (tautologies, or statements that are always true), and it is in this way that such systems can be said to capture at least a part of logical truth and inference.\n\nFormal logic encompasses a wide variety of logical systems. Various systems of logic we will discuss later can be captured in this framework, such as term logic, predicate logic and modal logic, and formal systems are indispensable in all branches of mathematical logic. The table of logic symbols describes various widely used notations in symbolic logic.\n\n### Rival conceptions of logic\n\nLogic arose (see below) from a concern with correctness of argumentation. The conception of logic as the study of argument is historically fundamental, and was how the founders of distinct traditions of logic, namely Aristotle, Mozi and Aksapada Gautama, conceived of logic. Modern logicians usually wish to ensure that logic studies just those arguments that arise from appropriately general forms of inference; so for example the Stanford Encyclopedia of Philosophy says of logic that it \"does not, however, cover good reasoning as a whole. 
That is the job of the theory of rationality. Rather it deals with inferences whose validity can be traced back to the formal features of the representations that are involved in that inference, be they linguistic, mental, or other representations\" (Hofweber 2004).\n\nBy contrast Immanuel Kant introduced an alternative idea as to what logic is. He argued that logic should be conceived as the science of judgment, an idea taken up in Gottlob Frege's logical and philosophical work, where thought (German: Gedanke) is substituted for judgment (German: Urteil). On this conception, the valid inferences of logic follow from the structural features of judgments or thoughts.\n\nA third view of logic arises from the idea that logic is more fundamental than reason, and so that logic is the science of states of affairs (German: Sachverhalt) in general. Barry Smith locates Franz Brentano as the source for this idea, an idea he claims reaches its fullest development in the work of Adolf Reinach (Smith 1989). This view of logic appears radically distinct from the first; on this conception logic has no essential connection with argument, and the study of fallacies and paradoxes no longer appears essential to the discipline.\n\nOccasionally one encounters a fourth view as to what logic is about: it is a purely formal manipulation of symbols according to some prescribed rules. This conception can be criticized on the grounds that the manipulation of just any formal system is usually not regarded as logic. Such accounts normally omit an explanation of what it is about certain formal systems that makes them systems of logic.\n\n### History of logic\n\n(see History of Logic)\n\nWhile many cultures have employed intricate systems of reasoning, logic as an explicit analysis of the methods of reasoning received sustained development originally in three places: China in the fifth century B.C.E., Greece in the fourth century B.C.E., and India between the second century B.C.E. 
and the first century B.C.E..\n\nThe formally sophisticated treatment of modern logic apparently descends from the Greek tradition, although it is suggested that the pioneers of Boolean logic were likely aware of Indian logic. (Ganeri 2001) The Greek tradition itself comes from the transmission of Aristotelian logic and commentary upon it by Islamic philosophers to Medieval logicians. The traditions outside Europe did not survive into the modern era; in China, the tradition of scholarly investigation into logic was repressed by the Qin dynasty following the legalist philosophy of Han Feizi, in the Islamic world the rise of the Asharite school suppressed original work on logic.\n\nHowever in India, innovations in the scholastic school, called Nyaya, continued into the early eighteenth century. It did not survive long into the colonial period. In the twentieth century, western philosophers like Stanislaw Schayer and Klaus Glashoff have tried to explore certain aspects of the Indian tradition of logic.\n\nDuring the medieval period a greater emphasis was placed upon Aristotle's logic. During the later period of the medieval ages, logic became a main focus of philosophers, who would engage in critical logical analyses of philosophical arguments, and who developed sophisticated logical analyses and logical methods.\n\n### Relation to other sciences\n\nLogic is related to rationality and the structure of concepts, and so has a degree of overlap with psychology. Logic is generally understood to describe reasoning in a prescriptive manner (i.e. it describes how reasoning ought to take place), whereas psychology is descriptive, so the overlap is not so marked. 
Gottlob Frege, however, was adamant about anti-psychologism: that logic should be understood in a manner independent of the idiosyncrasies of how particular people might reason.\n\n### Deductive and inductive reasoning\n\nOriginally, logic consisted only of deductive reasoning which concerns what follows universally from given premises. However, it is important to note that inductive reasoning has sometimes been included in the study of logic. Correspondingly, although some people have used the term \"inductive validity,\" we must distinguish between deductive validity and inductive strength—from the point of view of deductive logic, all inductive inferences are, strictly speaking, invalid, so some term other than \"validity\" should be used for good or strong inductive inferences. An inference is deductively valid if and only if there is no possible situation in which all the premises are true and the conclusion false. The notion of deductive validity can be rigorously stated for systems of formal logic in terms of the well-understood notions of semantics. But for all inductive arguments, no matter how strong, it is possible for all the premises to be true and the conclusion nevertheless false. So inductive strength requires us to define a reliable generalization of some set of observations, or some criteria for drawing an inductive conclusion (e. g. \"In the sample we examined, 40 percent had characteristic A and 60 percent had characteristic B, so we conclude that 40 percent of the entire population has characteristic A and 60 percent has characteristic B.\"). 
The task of providing this definition may be approached in various ways, some less formal than others; some of these definitions may use mathematical models of probability.\n\nFor the most part our discussion of logic here deals only with deductive logic.\n\n## Topics in logic\n\nThroughout history, there has been interest in distinguishing good from bad arguments, and so logic has been studied in some more or less familiar form. Aristotelian logic has principally been concerned with teaching good argument, and is still taught with that end today, while in mathematical logic and analytical philosophy much greater emphasis is placed on logic as an object of study in its own right, and so logic is studied at a more abstract level.\n\nConsideration of the different types of logic explains that logic is not studied in a vacuum. While logic often seems to provide its own motivations, the subject usually develops best when the reason for the investigator's interest is made clear.\n\n### Syllogistic logic\n\nThe Organon was Aristotle's body of work on logic, with the Prior Analytics constituting the first explicit work in formal logic, introducing the syllogistic. The parts of syllogistic, also known by the name term logic, were the analysis of the judgments into propositions consisting of two terms that are related by one of a fixed number of relations, and the expression of inferences by means of syllogisms that consisted of two propositions sharing a common term as premise, and a conclusion which was a proposition involving the two unrelated terms from the premises.\n\nAristotle's work was regarded in classical times and from medieval times in Europe and the Middle East as the very picture of a fully worked out system. It was not alone; the Stoics proposed a system of propositional logic that was studied by medieval logicians. Nor was the perfection of Aristotle's system undisputed; for example the problem of multiple generality was recognized in medieval times. 
Nonetheless, problems with syllogistic logic were not seen as being in need of revolutionary solutions.\n\nToday, Aristotle's system is mostly seen as of historical value (though there is some current interest in extending term logics), regarded as made obsolete by the advent of sentential logic and the predicate calculus.\n\n### Predicate logic\n\nLogic as it is studied today is a very different subject to that studied before, and the principal difference is the innovation of predicate logic. Whereas Aristotelian syllogistic logic specified the forms that the relevant parts of the involved judgments took, predicate logic allows sentences to be analyzed into subject and argument in several different ways, thus allowing predicate logic to solve the problem of multiple generality that had perplexed medieval logicians. With predicate logic, for the first time, logicians were able to give an account of quantifiers (expressions such as all, some, and none) general enough to express all arguments occurring in natural language.\n\nThe discovery of predicate logic is usually attributed to Gottlob Frege, who is also credited as one of the founders of analytical philosophy, but the formulation of predicate logic most often used today is the first-order logic presented in Principles of Theoretical Logic by David Hilbert and Wilhelm Ackermann in 1928. The analytical generality of the predicate logic allowed the formalization of mathematics, and drove the investigation of set theory, allowed the development of Alfred Tarski's approach to model theory; it is no exaggeration to say that it is the foundation of modern mathematical logic.\n\nFrege's original system of predicate logic was not first-, but second-order. 
Second-order logic is most prominently defended (against the criticism of Willard Van Orman Quine and others) by George Boolos and Stewart Shapiro.\n\n### Modal logic\n\nIn language, modality deals with the phenomenon that subparts of a sentence may have their semantics modified by special verbs or modal particles. For example, \"We go to the games\" can be modified to give \"We should go to the games,\" and \"We can go to the games\" and perhaps \"We will go to the games.\" More abstractly, we might say that modality affects the circumstances in which we take an assertion to be satisfied.\n\nThe logical study of modality dates back to Aristotle, who was concerned with the alethic modalities of necessity and possibility, which he observed to be dual in the sense of De Morgan duality. While the study of necessity and possibility remained important to philosophers, little logical innovation happened until the landmark investigations of Clarence Irving Lewis in 1918, who formulated a family of rival axiomatizations of the alethic modalities. His work unleashed a torrent of new work on the topic, expanding the kinds of modality treated to include deontic logic and epistemic logic. The seminal work of Arthur Prior applied the same formal language to treat temporal logic and paved the way for the marriage of the two subjects. 
Saul Kripke discovered (contemporaneously with rivals) his theory of frame semantics which revolutionized the formal technology available to modal logicians and gave a new graph-theoretic way of looking at modality that has driven many applications in computational linguistics and computer science, such as dynamic logic.\n\n### Deduction and reasoning\n\n(see Deductive reasoning)\n\nThe motivation for the study of logic in ancient times was clear, as we have described: it is so that we may learn to distinguish good from bad arguments, and so become more effective in argument and oratory, and perhaps also, to become a better person.\n\nThis motivation is still alive, although it no longer necessarily takes center stage in the picture of logic; typically dialectical or inductive logic, along with an investigation of informal fallacies, will form much of a course in critical thinking, a course now given at many universities.\n\n### Mathematical logic\n\n(see Mathematical logic)\n\nMathematical logic really refers to two distinct areas of research: the first is the application of the techniques of formal logic to mathematics and mathematical reasoning, and the second, in the other direction, the application of mathematical techniques to the representation and analysis of formal logic.\n\nThe boldest attempt to apply logic to mathematics was undoubtedly the logicism pioneered by philosopher-logicians such as Gottlob Frege and Bertrand Russell with his colleague Alfred North Whitehead: the idea was that—contra Kant's assertion that mathematics is synthetic a priori—mathematical theories were logical tautologies and hence analytic, and the program was to show this by means of a reduction of mathematics to logic.
The various attempts to carry this out met with a series of failures, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's Program by Gödel's incompleteness theorems.\n\nBoth the statement of Hilbert's Program and its refutation by Gödel depended upon their work establishing the second area of mathematical logic, the application of mathematics to logic in the form of proof theory. Despite the negative nature of the incompleteness theorems, Gödel's completeness theorem, a result in model theory and another application of mathematics to logic, can be understood as showing how close logicism came to being true: every rigorously defined mathematical theory can be exactly captured by a first-order logical theory; Frege's proof calculus is enough to describe the whole of mathematics, though not equivalent to it. Thus we see how complementary the two areas of mathematical logic have been.\n\nIf proof theory and model theory have been the foundation of mathematical logic, they have been but two of the four pillars of the subject. Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic, from Cantor's theorem, through the status of the Axiom of Choice and the question of the independence of the continuum hypothesis, to the modern debate on large cardinal axioms.\n\nRecursion theory captures the idea of computation in logical and arithmetic terms; its most classical achievements are the undecidability of the Entscheidungsproblem by Alan Turing, and his presentation of the Church-Turing thesis. Today recursion theory is mostly concerned with the more refined problem of complexity classes—when is a problem efficiently solvable?—and the classification of degrees of unsolvability.\n\n### Philosophical logic\n\n(see Philosophical logic)\n\nPhilosophical logic deals with formal descriptions of natural language. 
Most philosophers assume that the bulk of \"normal\" proper reasoning can be captured by logic, if one can find the right method for translating ordinary language into that logic. Philosophical logic is essentially a continuation of the traditional discipline that was called \"Logic\" before it was supplanted by the invention of mathematical logic. Philosophical logic has a much greater concern with the connection between natural language and logic. As a result, philosophical logicians have contributed a great deal to the development of non-standard logics (e.g., free logics, tense logics) as well as various extensions of classical logic (e.g., modal logics), and non-standard semantics for such logics (e.g., Kripke's technique of supervaluations in the semantics of logic).\n\n### Logic and computation\n\nLogic cut to the heart of computer science as it emerged as a discipline: Alan Turing's work on the Entscheidungsproblem followed from Kurt Gödel's work on the incompleteness theorems, and the notion of a general purpose computer that came from this work was of fundamental importance to the designers of the computer machinery in the 1940s.\n\nIn the 1950s and 1960s, researchers predicted that when human knowledge could be expressed using logic with mathematical notation, it would be possible to create a machine that reasons, or artificial intelligence. This turned out to be more difficult than expected because of the complexity of human reasoning. In logic programming, a program consists of a set of axioms and rules. Logic programming systems such as Prolog compute the consequences of the axioms and rules in order to answer a query.\n\nToday, logic is extensively applied in the fields of artificial intelligence, and computer science, and these fields provide a rich source of problems in formal logic. The ACM Computing Classification System in particular regards:\n\n• Section F.3 on Logics and meanings of programs and F. 
4 on Mathematical logic and formal languages as part of the theory of computer science: this work covers formal semantics of programming languages, as well as work of formal methods such as Hoare logic;\n• Boolean logic as fundamental to computer hardware: particularly, the system's section B.2 on Arithmetic and logic structures;\n• Many fundamental logical formalisms are essential to section I.2 on artificial intelligence, for example modal logic and default logic in Knowledge representation formalisms and methods, and Horn clauses in logic programming.\n\nFurthermore, computers can be used as tools for logicians. For example, in symbolic logic and mathematical logic, proofs by humans can be computer-assisted. Using automated theorem proving the machines can find and check proofs, as well as work with proofs too lengthy to be written out by hand.\n\n## Controversies in logic\n\nJust as we have seen there is disagreement over what logic is about, so there is disagreement about what logical truths there are.\n\n### Bivalence and the law of the excluded middle\n\nThe logics discussed above are all \"bivalent\" or \"two-valued\"; that is, they are to be understood as dividing all propositions into just two groups: those that are true and those that are false. Systems which reject bivalence are known as non-classical logics.\n\nThe law of the excluded middle states that every proposition is either true or false—there is no third or middle possibility. In addition, this view holds that no statement can be both true and false at the same time and in the same manner.\n\nIn the early twentieth century Jan Łukasiewicz investigated the extension of the traditional true/false values to include a third value, \"possible,\" so inventing ternary logic, the first multi-valued logic.\n\nIntuitionistic logic was proposed by L. E. J. Brouwer as the correct logic for reasoning about mathematics, based upon his rejection of the law of the excluded middle as part of his intuitionism. 
Brouwer rejected formalization in mathematics, but his student Arend Heyting studied intuitionistic logic formally, as did Gerhard Gentzen. Intuitionistic logic has come to be of great interest to computer scientists, as it is a constructive logic, and is hence a logic of what computers can do.\n\nModal logic is not truth conditional, and so it has often been proposed as a non-classical logic. However, modal logic is normally formalized with the principle of the excluded middle, and its relational semantics is bivalent, so this inclusion is disputable. On the other hand, modal logic can be used to encode non-classical logics, such as intuitionistic logic.\n\nLogics such as fuzzy logic have since been devised with an infinite number of \"degrees of truth,\" represented by a real number between 0 and 1. Bayesian probability can be interpreted as a system of logic where probability is the subjective truth value.\n\n### Implication: strict or material?\n\nIt is easy to observe that the notion of implication formalized in classical logic does not comfortably translate into natural language by means of \"if___ then...,\" due to a number of problems called the paradoxes of material implication.\n\nMaterial implication holds that in any statement of the form \"If P then Q,\" the entire statement is false only if P (known as the antecedent) is true and Q (the consequent) is false. This means that if P is false, or Q is true, then the statement \"If P then Q\" is necessarily true. The paradoxes of material implication arise from this.\n\nOne class of paradoxes includes those that involve counterfactuals, such as \"If the moon is made of green cheese, then 2+2=5\"—a statement that is true by material implication since the antecedent is false. But many people find this to be puzzling or even false because natural language does not support the principle of explosion.
Eliminating these classes of paradox led to David Lewis's formulation of strict implication, and to more radically revisionist logics such as relevance logic and dialetheism.\n\nA second class of paradoxes comprises those that involve redundant premises, falsely suggesting that we know the consequent because of the antecedent: thus \"if that man gets elected, granny will die\" is materially true if granny happens to be in the last stages of a terminal illness, regardless of the man's election prospects. Such sentences violate the Gricean maxim of relevance, and can be modeled by logics that reject the principle of monotonicity of entailment, such as relevance logic.\n\n### Tolerating the impossible\n\nClosely related to questions arising from the paradoxes of implication comes the radical suggestion that logic ought to tolerate inconsistency. Again, relevance logic and dialetheism are the most important approaches here, though the concerns are different; the key issue that classical logic and some of its rivals, such as intuitionistic logic, have is that they respect the principle of explosion, which means that the logic collapses if it is capable of deriving a contradiction. Graham Priest, the proponent of dialetheism, has argued for paraconsistency on the striking grounds that there are, in fact, true contradictions (Priest 2004).\n\n### Is logic empirical?\n\nWhat is the epistemological status of the laws of logic? What sort of arguments are appropriate for criticizing purported principles of logic? In an influential paper entitled Is logic empirical? Hilary Putnam, building on a suggestion of W.V.O.
Quine, argued that in general the facts of propositional logic have a similar epistemological status as facts about the physical universe, for example as the laws of mechanics or of general relativity, and in particular that what physicists have learned about quantum mechanics provides a compelling case for abandoning certain familiar principles of classical logic: if we want to be realists about the physical phenomena described by quantum theory, then we should abandon the principle of distributivity, substituting for classical logic the quantum logic proposed by Garrett Birkhoff and John von Neumann.\n\nAnother paper by the same name by Sir Michael Dummett argues that Putnam's desire for realism mandates the law of distributivity: distributivity of logic is essential for the realist's understanding of how propositions are true of the world, in just the same way as he has argued the principle of bivalence is. In this way, the question Is logic empirical? can be seen to lead naturally into the fundamental controversy in metaphysics on realism versus anti-realism." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95475584,"math_prob":0.7451463,"size":27546,"snap":"2019-26-2019-30","text_gpt3_token_len":5383,"char_repetition_ratio":0.14443396,"word_repetition_ratio":0.0032198713,"special_character_ratio":0.18271255,"punctuation_ratio":0.09332509,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9631787,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T04:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:1433fbb0-6f91-49b2-be8a-5b260a7dcd3d>\",\"Content-Length\":\"56448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05dd87d0-21d9-45e4-8983-459f24b35509>\",\"WARC-Concurrent-To\":\"<urn:uuid:66db84d5-2652-414a-a615-a46d68cad8bc>\",\"WARC-IP-Address\":\"45.33.101.153\",\"WARC-Target-URI\":\"http://web.newworldencyclopedia.org/entry/Logic\",\"WARC-Payload-Digest\":\"sha1:B3Z3G3EVXTNDCVKVK4RXA2BVSPI3VZVZ\",\"WARC-Block-Digest\":\"sha1:W2QFUEHXUCPTMPZXB6KM3F5LZUOQ5ZWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525974.74_warc_CC-MAIN-20190719032721-20190719054721-00229.warc.gz\"}"}
https://edunews.tech/swapping-two-numbers/
[ "# Swapping two numbers", null, "# Swapping two numbers\n\nIn this example, you will learn to swap two numbers in C programming using two different techniques.\n\n## Swap Numbers Using Temporary Variable\n\n```#include<stdio.h>\n#include<conio.h>\nint main() {\ndouble first, second, temp;\nprintf(\"\\n\\t\\t\\t...Welcome To EduNews.Tech... \");\n\nprintf(\"\\n\\nEnter first number: \");\nscanf(\"%lf\", &first);\nprintf(\"Enter second number: \");\nscanf(\"%lf\", &second);\n\n// Value of first is assigned to temp\ntemp = first;\n\n// Value of second is assigned to first\nfirst = second;\n\n// Value of temp (initial value of first) is assigned to second\nsecond = temp;\n\nprintf(\"\\nAfter swapping, firstNumber = %.2lf\\n\", first);\nprintf(\"After swapping, secondNumber = %.2lf\", second);\nprintf(\"\\n\\n\\n\\t\\t\\tThankyou for Joining Us !\");\nprintf(\"\\n\\t\\t\\t!Regards EduNews !\");\n\ngetch();\nreturn 0;\n}\n```\n\n### Program Output:", null, "In the above program, the temp variable is assigned the value of the first variable.\nThen, the value of the first variable is assigned to the second variable.\nFinally, the temp (which holds the initial value of first) is assigned to second. This completes the swapping process.\n\n## Swap Numbers Without Using Temporary Variables\n\n```#include <stdio.h>\n#include<conio.h>\nint main() {\ndouble a, b;\nprintf(\"\\n\\t\\t\\t...Welcome To EduNews.Tech... \");\n\nprintf(\"\\n\\nEnter value of a: \");\nscanf(\"%lf\", &a);\nscanf(\"%lf\", &a);\nprintf(\"Enter value of b: \");\nscanf(\"%lf\", &b);\n\n// Swapping\n\n// a = (initial_a - initial_b)\na = a - b;\n\n// b = (initial_a - initial_b) + initial_b = initial_a\nb = a + b;\n\n// a = initial_a - (initial_a - initial_b) = initial_b\na = b - a;\n\nprintf(\"\\nAfter swapping, a = %.2lf\\n\", a);\nprintf(\"After swapping, b = %.2lf\", b);\nprintf(\"\\n\\n\\n\\t\\t\\tThankyou for Joining Us !\");\nprintf(\"\\n\\t\\t\\t!Regards EduNews !\");\n\ngetch();\nreturn 0;\n}\n```\n\n### Program Output:", null, "Note that the arithmetic trick in the second program can overflow with integer types and lose precision with floating-point values, so the temporary-variable version is the safer general-purpose choice.\n\nI hope this post helps you understand how to swap two numbers using two different techniques and how to implement them in the C programming language.\n\nKeep coding 🙂" ]
[ null, "https://edunews.tech/wp-content/uploads/2020/08/Swapping-two-numbers-930x620.jpg", null, "https://edunews.tech/wp-content/uploads/2020/07/swaping-two-numbers.png", null, "https://edunews.tech/wp-content/uploads/2020/07/swaping-two-numbers1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63530326,"math_prob":0.98000485,"size":1904,"snap":"2020-45-2020-50","text_gpt3_token_len":490,"char_repetition_ratio":0.16842106,"word_repetition_ratio":0.09252669,"special_character_ratio":0.30357143,"punctuation_ratio":0.2265625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9683826,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T21:30:25Z\",\"WARC-Record-ID\":\"<urn:uuid:fe208d41-8a47-4fbb-ae7c-a679c91de233>\",\"Content-Length\":\"58296\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4fca4ec1-bc0c-44cb-8e99-29ece09e50d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8df7a31-43d0-4da9-9560-7c9ce430aa76>\",\"WARC-IP-Address\":\"159.89.164.176\",\"WARC-Target-URI\":\"https://edunews.tech/swapping-two-numbers/\",\"WARC-Payload-Digest\":\"sha1:PJQ6GNQE5KPYQTJEQPARLFDLBNXT46A5\",\"WARC-Block-Digest\":\"sha1:726BBI3Q3QRHYDMZBIG426A5FSXGCLCE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878633.8_warc_CC-MAIN-20201021205955-20201021235955-00714.warc.gz\"}"}
http://isabelle.in.tum.de/repos/isabelle/file/32b6a8f12c1c/src/HOL/HOL.thy
[ "src/HOL/HOL.thy\n author ballarin Tue Oct 28 11:03:07 2008 +0100 (2008-10-28) changeset 28699 32b6a8f12c1c parent 28682 5de9fc98ad96 child 28741 1b257449f804 permissions -rw-r--r--\nRemoved 'includes meta_term_syntax' and 'includes meta_conjunction_syntax'.\n``` 1 (* Title: HOL/HOL.thy\n```\n``` 2 ID: \\$Id\\$\n```\n``` 3 Author: Tobias Nipkow, Markus Wenzel, and Larry Paulson\n```\n``` 4 *)\n```\n``` 5\n```\n``` 6 header {* The basis of Higher-Order Logic *}\n```\n``` 7\n```\n``` 8 theory HOL\n```\n``` 9 imports Pure\n```\n``` 10 uses\n```\n``` 11 (\"hologic.ML\")\n```\n``` 12 \"~~/src/Tools/IsaPlanner/zipper.ML\"\n```\n``` 13 \"~~/src/Tools/IsaPlanner/isand.ML\"\n```\n``` 14 \"~~/src/Tools/IsaPlanner/rw_tools.ML\"\n```\n``` 15 \"~~/src/Tools/IsaPlanner/rw_inst.ML\"\n```\n``` 16 \"~~/src/Provers/project_rule.ML\"\n```\n``` 17 \"~~/src/Provers/hypsubst.ML\"\n```\n``` 18 \"~~/src/Provers/splitter.ML\"\n```\n``` 19 \"~~/src/Provers/classical.ML\"\n```\n``` 20 \"~~/src/Provers/blast.ML\"\n```\n``` 21 \"~~/src/Provers/clasimp.ML\"\n```\n``` 22 \"~~/src/Provers/coherent.ML\"\n```\n``` 23 \"~~/src/Provers/eqsubst.ML\"\n```\n``` 24 \"~~/src/Provers/quantifier1.ML\"\n```\n``` 25 (\"simpdata.ML\")\n```\n``` 26 \"~~/src/Tools/random_word.ML\"\n```\n``` 27 \"~~/src/Tools/atomize_elim.ML\"\n```\n``` 28 \"~~/src/Tools/induct.ML\"\n```\n``` 29 (\"~~/src/Tools/induct_tacs.ML\")\n```\n``` 30 \"~~/src/Tools/code/code_name.ML\"\n```\n``` 31 \"~~/src/Tools/code/code_funcgr.ML\"\n```\n``` 32 \"~~/src/Tools/code/code_thingol.ML\"\n```\n``` 33 \"~~/src/Tools/code/code_printer.ML\"\n```\n``` 34 \"~~/src/Tools/code/code_target.ML\"\n```\n``` 35 \"~~/src/Tools/code/code_ml.ML\"\n```\n``` 36 \"~~/src/Tools/code/code_haskell.ML\"\n```\n``` 37 \"~~/src/Tools/nbe.ML\"\n```\n``` 38 (\"~~/src/HOL/Tools/recfun_codegen.ML\")\n```\n``` 39 begin\n```\n``` 40\n```\n``` 41 subsection {* Primitive logic *}\n```\n``` 42\n```\n``` 43 subsubsection {* Core syntax *}\n```\n``` 44\n```\n``` 45 classes 
type\n```\n``` 46 defaultsort type\n```\n``` 47 setup {* ObjectLogic.add_base_sort @{sort type} *}\n```\n``` 48\n```\n``` 49 arities\n```\n``` 50 \"fun\" :: (type, type) type\n```\n``` 51 itself :: (type) type\n```\n``` 52\n```\n``` 53 global\n```\n``` 54\n```\n``` 55 typedecl bool\n```\n``` 56\n```\n``` 57 judgment\n```\n``` 58 Trueprop :: \"bool => prop\" (\"(_)\" 5)\n```\n``` 59\n```\n``` 60 consts\n```\n``` 61 Not :: \"bool => bool\" (\"~ _\" 40)\n```\n``` 62 True :: bool\n```\n``` 63 False :: bool\n```\n``` 64\n```\n``` 65 The :: \"('a => bool) => 'a\"\n```\n``` 66 All :: \"('a => bool) => bool\" (binder \"ALL \" 10)\n```\n``` 67 Ex :: \"('a => bool) => bool\" (binder \"EX \" 10)\n```\n``` 68 Ex1 :: \"('a => bool) => bool\" (binder \"EX! \" 10)\n```\n``` 69 Let :: \"['a, 'a => 'b] => 'b\"\n```\n``` 70\n```\n``` 71 \"op =\" :: \"['a, 'a] => bool\" (infixl \"=\" 50)\n```\n``` 72 \"op &\" :: \"[bool, bool] => bool\" (infixr \"&\" 35)\n```\n``` 73 \"op |\" :: \"[bool, bool] => bool\" (infixr \"|\" 30)\n```\n``` 74 \"op -->\" :: \"[bool, bool] => bool\" (infixr \"-->\" 25)\n```\n``` 75\n```\n``` 76 local\n```\n``` 77\n```\n``` 78 consts\n```\n``` 79 If :: \"[bool, 'a, 'a] => 'a\" (\"(if (_)/ then (_)/ else (_))\" 10)\n```\n``` 80\n```\n``` 81\n```\n``` 82 subsubsection {* Additional concrete syntax *}\n```\n``` 83\n```\n``` 84 notation (output)\n```\n``` 85 \"op =\" (infix \"=\" 50)\n```\n``` 86\n```\n``` 87 abbreviation\n```\n``` 88 not_equal :: \"['a, 'a] => bool\" (infixl \"~=\" 50) where\n```\n``` 89 \"x ~= y == ~ (x = y)\"\n```\n``` 90\n```\n``` 91 notation (output)\n```\n``` 92 not_equal (infix \"~=\" 50)\n```\n``` 93\n```\n``` 94 notation (xsymbols)\n```\n``` 95 Not (\"\\<not> _\" 40) and\n```\n``` 96 \"op &\" (infixr \"\\<and>\" 35) and\n```\n``` 97 \"op |\" (infixr \"\\<or>\" 30) and\n```\n``` 98 \"op -->\" (infixr \"\\<longrightarrow>\" 25) and\n```\n``` 99 not_equal (infix \"\\<noteq>\" 50)\n```\n``` 100\n```\n``` 101 notation (HTML output)\n```\n``` 102 
Not (\"\\<not> _\" 40) and\n```\n``` 103 \"op &\" (infixr \"\\<and>\" 35) and\n```\n``` 104 \"op |\" (infixr \"\\<or>\" 30) and\n```\n``` 105 not_equal (infix \"\\<noteq>\" 50)\n```\n``` 106\n```\n``` 107 abbreviation (iff)\n```\n``` 108 iff :: \"[bool, bool] => bool\" (infixr \"<->\" 25) where\n```\n``` 109 \"A <-> B == A = B\"\n```\n``` 110\n```\n``` 111 notation (xsymbols)\n```\n``` 112 iff (infixr \"\\<longleftrightarrow>\" 25)\n```\n``` 113\n```\n``` 114\n```\n``` 115 nonterminals\n```\n``` 116 letbinds letbind\n```\n``` 117 case_syn cases_syn\n```\n``` 118\n```\n``` 119 syntax\n```\n``` 120 \"_The\" :: \"[pttrn, bool] => 'a\" (\"(3THE _./ _)\" [0, 10] 10)\n```\n``` 121\n```\n``` 122 \"_bind\" :: \"[pttrn, 'a] => letbind\" (\"(2_ =/ _)\" 10)\n```\n``` 123 \"\" :: \"letbind => letbinds\" (\"_\")\n```\n``` 124 \"_binds\" :: \"[letbind, letbinds] => letbinds\" (\"_;/ _\")\n```\n``` 125 \"_Let\" :: \"[letbinds, 'a] => 'a\" (\"(let (_)/ in (_))\" 10)\n```\n``` 126\n```\n``` 127 \"_case_syntax\":: \"['a, cases_syn] => 'b\" (\"(case _ of/ _)\" 10)\n```\n``` 128 \"_case1\" :: \"['a, 'b] => case_syn\" (\"(2_ =>/ _)\" 10)\n```\n``` 129 \"\" :: \"case_syn => cases_syn\" (\"_\")\n```\n``` 130 \"_case2\" :: \"[case_syn, cases_syn] => cases_syn\" (\"_/ | _\")\n```\n``` 131\n```\n``` 132 translations\n```\n``` 133 \"THE x. P\" == \"The (%x. P)\"\n```\n``` 134 \"_Let (_binds b bs) e\" == \"_Let b (_Let bs e)\"\n```\n``` 135 \"let x = a in e\" == \"Let a (%x. 
e)\"\n```\n``` 136\n```\n``` 137 print_translation {*\n```\n``` 138 (* To avoid eta-contraction of body: *)\n```\n``` 139 [(\"The\", fn [Abs abs] =>\n```\n``` 140 let val (x,t) = atomic_abs_tr' abs\n```\n``` 141 in Syntax.const \"_The\" \\$ x \\$ t end)]\n```\n``` 142 *}\n```\n``` 143\n```\n``` 144 syntax (xsymbols)\n```\n``` 145 \"_case1\" :: \"['a, 'b] => case_syn\" (\"(2_ \\<Rightarrow>/ _)\" 10)\n```\n``` 146\n```\n``` 147 notation (xsymbols)\n```\n``` 148 All (binder \"\\<forall>\" 10) and\n```\n``` 149 Ex (binder \"\\<exists>\" 10) and\n```\n``` 150 Ex1 (binder \"\\<exists>!\" 10)\n```\n``` 151\n```\n``` 152 notation (HTML output)\n```\n``` 153 All (binder \"\\<forall>\" 10) and\n```\n``` 154 Ex (binder \"\\<exists>\" 10) and\n```\n``` 155 Ex1 (binder \"\\<exists>!\" 10)\n```\n``` 156\n```\n``` 157 notation (HOL)\n```\n``` 158 All (binder \"! \" 10) and\n```\n``` 159 Ex (binder \"? \" 10) and\n```\n``` 160 Ex1 (binder \"?! \" 10)\n```\n``` 161\n```\n``` 162\n```\n``` 163 subsubsection {* Axioms and basic definitions *}\n```\n``` 164\n```\n``` 165 axioms\n```\n``` 166 refl: \"t = (t::'a)\"\n```\n``` 167 subst: \"s = t \\<Longrightarrow> P s \\<Longrightarrow> P t\"\n```\n``` 168 ext: \"(!!x::'a. (f x ::'b) = g x) ==> (%x. f x) = (%x. g x)\"\n```\n``` 169 -- {*Extensionality is built into the meta-logic, and this rule expresses\n```\n``` 170 a related property. It is an eta-expanded version of the traditional\n```\n``` 171 rule, and similar to the ABS rule of HOL*}\n```\n``` 172\n```\n``` 173 the_eq_trivial: \"(THE x. x = a) = (a::'a)\"\n```\n``` 174\n```\n``` 175 impI: \"(P ==> Q) ==> P-->Q\"\n```\n``` 176 mp: \"[| P-->Q; P |] ==> Q\"\n```\n``` 177\n```\n``` 178\n```\n``` 179 defs\n```\n``` 180 True_def: \"True == ((%x::bool. x) = (%x. x))\"\n```\n``` 181 All_def: \"All(P) == (P = (%x. True))\"\n```\n``` 182 Ex_def: \"Ex(P) == !Q. (!x. P x --> Q) --> Q\"\n```\n``` 183 False_def: \"False == (!P. 
P)\"\n```\n``` 184 not_def: \"~ P == P-->False\"\n```\n``` 185 and_def: \"P & Q == !R. (P-->Q-->R) --> R\"\n```\n``` 186 or_def: \"P | Q == !R. (P-->R) --> (Q-->R) --> R\"\n```\n``` 187 Ex1_def: \"Ex1(P) == ? x. P(x) & (! y. P(y) --> y=x)\"\n```\n``` 188\n```\n``` 189 axioms\n```\n``` 190 iff: \"(P-->Q) --> (Q-->P) --> (P=Q)\"\n```\n``` 191 True_or_False: \"(P=True) | (P=False)\"\n```\n``` 192\n```\n``` 193 defs\n```\n``` 194 Let_def: \"Let s f == f(s)\"\n```\n``` 195 if_def: \"If P x y == THE z::'a. (P=True --> z=x) & (P=False --> z=y)\"\n```\n``` 196\n```\n``` 197 finalconsts\n```\n``` 198 \"op =\"\n```\n``` 199 \"op -->\"\n```\n``` 200 The\n```\n``` 201\n```\n``` 202 axiomatization\n```\n``` 203 undefined :: 'a\n```\n``` 204\n```\n``` 205 abbreviation (input)\n```\n``` 206 \"arbitrary \\<equiv> undefined\"\n```\n``` 207\n```\n``` 208\n```\n``` 209 subsubsection {* Generic classes and algebraic operations *}\n```\n``` 210\n```\n``` 211 class default = type +\n```\n``` 212 fixes default :: 'a\n```\n``` 213\n```\n``` 214 class zero = type +\n```\n``` 215 fixes zero :: 'a (\"0\")\n```\n``` 216\n```\n``` 217 class one = type +\n```\n``` 218 fixes one :: 'a (\"1\")\n```\n``` 219\n```\n``` 220 hide (open) const zero one\n```\n``` 221\n```\n``` 222 class plus = type +\n```\n``` 223 fixes plus :: \"'a \\<Rightarrow> 'a \\<Rightarrow> 'a\" (infixl \"+\" 65)\n```\n``` 224\n```\n``` 225 class minus = type +\n```\n``` 226 fixes minus :: \"'a \\<Rightarrow> 'a \\<Rightarrow> 'a\" (infixl \"-\" 65)\n```\n``` 227\n```\n``` 228 class uminus = type +\n```\n``` 229 fixes uminus :: \"'a \\<Rightarrow> 'a\" (\"- _\" 80)\n```\n``` 230\n```\n``` 231 class times = type +\n```\n``` 232 fixes times :: \"'a \\<Rightarrow> 'a \\<Rightarrow> 'a\" (infixl \"*\" 70)\n```\n``` 233\n```\n``` 234 class inverse = type +\n```\n``` 235 fixes inverse :: \"'a \\<Rightarrow> 'a\"\n```\n``` 236 and divide :: \"'a \\<Rightarrow> 'a \\<Rightarrow> 'a\" (infixl \"'/\" 70)\n```\n``` 237\n```\n``` 238 class 
abs = type +\n```\n``` 239 fixes abs :: \"'a \\<Rightarrow> 'a\"\n```\n``` 240 begin\n```\n``` 241\n```\n``` 242 notation (xsymbols)\n```\n``` 243 abs (\"\\<bar>_\\<bar>\")\n```\n``` 244\n```\n``` 245 notation (HTML output)\n```\n``` 246 abs (\"\\<bar>_\\<bar>\")\n```\n``` 247\n```\n``` 248 end\n```\n``` 249\n```\n``` 250 class sgn = type +\n```\n``` 251 fixes sgn :: \"'a \\<Rightarrow> 'a\"\n```\n``` 252\n```\n``` 253 class ord = type +\n```\n``` 254 fixes less_eq :: \"'a \\<Rightarrow> 'a \\<Rightarrow> bool\"\n```\n``` 255 and less :: \"'a \\<Rightarrow> 'a \\<Rightarrow> bool\"\n```\n``` 256 begin\n```\n``` 257\n```\n``` 258 notation\n```\n``` 259 less_eq (\"op <=\") and\n```\n``` 260 less_eq (\"(_/ <= _)\" [51, 51] 50) and\n```\n``` 261 less (\"op <\") and\n```\n``` 262 less (\"(_/ < _)\" [51, 51] 50)\n```\n``` 263\n```\n``` 264 notation (xsymbols)\n```\n``` 265 less_eq (\"op \\<le>\") and\n```\n``` 266 less_eq (\"(_/ \\<le> _)\" [51, 51] 50)\n```\n``` 267\n```\n``` 268 notation (HTML output)\n```\n``` 269 less_eq (\"op \\<le>\") and\n```\n``` 270 less_eq (\"(_/ \\<le> _)\" [51, 51] 50)\n```\n``` 271\n```\n``` 272 abbreviation (input)\n```\n``` 273 greater_eq (infix \">=\" 50) where\n```\n``` 274 \"x >= y \\<equiv> y <= x\"\n```\n``` 275\n```\n``` 276 notation (input)\n```\n``` 277 greater_eq (infix \"\\<ge>\" 50)\n```\n``` 278\n```\n``` 279 abbreviation (input)\n```\n``` 280 greater (infix \">\" 50) where\n```\n``` 281 \"x > y \\<equiv> y < x\"\n```\n``` 282\n```\n``` 283 end\n```\n``` 284\n```\n``` 285 syntax\n```\n``` 286 \"_index1\" :: index (\"\\<^sub>1\")\n```\n``` 287 translations\n```\n``` 288 (index) \"\\<^sub>1\" => (index) \"\\<^bsub>\\<struct>\\<^esub>\"\n```\n``` 289\n```\n``` 290 typed_print_translation {*\n```\n``` 291 let\n```\n``` 292 fun tr' c = (c, fn show_sorts => fn T => fn ts =>\n```\n``` 293 if T = dummyT orelse not (! 
show_types) andalso can Term.dest_Type T then raise Match\n```\n``` 294 else Syntax.const Syntax.constrainC \\$ Syntax.const c \\$ Syntax.term_of_typ show_sorts T);\n```\n``` 295 in map tr' [@{const_syntax HOL.one}, @{const_syntax HOL.zero}] end;\n```\n``` 296 *} -- {* show types that are presumably too general *}\n```\n``` 297\n```\n``` 298\n```\n``` 299 subsection {* Fundamental rules *}\n```\n``` 300\n```\n``` 301 subsubsection {* Equality *}\n```\n``` 302\n```\n``` 303 lemma sym: \"s = t ==> t = s\"\n```\n``` 304 by (erule subst) (rule refl)\n```\n``` 305\n```\n``` 306 lemma ssubst: \"t = s ==> P s ==> P t\"\n```\n``` 307 by (drule sym) (erule subst)\n```\n``` 308\n```\n``` 309 lemma trans: \"[| r=s; s=t |] ==> r=t\"\n```\n``` 310 by (erule subst)\n```\n``` 311\n```\n``` 312 lemma meta_eq_to_obj_eq:\n```\n``` 313 assumes meq: \"A == B\"\n```\n``` 314 shows \"A = B\"\n```\n``` 315 by (unfold meq) (rule refl)\n```\n``` 316\n```\n``` 317 text {* Useful with @{text erule} for proving equalities from known equalities. 
*}\n```\n``` 318 (* a = b\n```\n``` 319 | |\n```\n``` 320 c = d *)\n```\n``` 321 lemma box_equals: \"[| a=b; a=c; b=d |] ==> c=d\"\n```\n``` 322 apply (rule trans)\n```\n``` 323 apply (rule trans)\n```\n``` 324 apply (rule sym)\n```\n``` 325 apply assumption+\n```\n``` 326 done\n```\n``` 327\n```\n``` 328 text {* For calculational reasoning: *}\n```\n``` 329\n```\n``` 330 lemma forw_subst: \"a = b ==> P b ==> P a\"\n```\n``` 331 by (rule ssubst)\n```\n``` 332\n```\n``` 333 lemma back_subst: \"P a ==> a = b ==> P b\"\n```\n``` 334 by (rule subst)\n```\n``` 335\n```\n``` 336\n```\n``` 337 subsubsection {*Congruence rules for application*}\n```\n``` 338\n```\n``` 339 (*similar to AP_THM in Gordon's HOL*)\n```\n``` 340 lemma fun_cong: \"(f::'a=>'b) = g ==> f(x)=g(x)\"\n```\n``` 341 apply (erule subst)\n```\n``` 342 apply (rule refl)\n```\n``` 343 done\n```\n``` 344\n```\n``` 345 (*similar to AP_TERM in Gordon's HOL and FOL's subst_context*)\n```\n``` 346 lemma arg_cong: \"x=y ==> f(x)=f(y)\"\n```\n``` 347 apply (erule subst)\n```\n``` 348 apply (rule refl)\n```\n``` 349 done\n```\n``` 350\n```\n``` 351 lemma arg_cong2: \"\\<lbrakk> a = b; c = d \\<rbrakk> \\<Longrightarrow> f a c = f b d\"\n```\n``` 352 apply (erule ssubst)+\n```\n``` 353 apply (rule refl)\n```\n``` 354 done\n```\n``` 355\n```\n``` 356 lemma cong: \"[| f = g; (x::'a) = y |] ==> f(x) = g(y)\"\n```\n``` 357 apply (erule subst)+\n```\n``` 358 apply (rule refl)\n```\n``` 359 done\n```\n``` 360\n```\n``` 361\n```\n``` 362 subsubsection {*Equality of booleans -- iff*}\n```\n``` 363\n```\n``` 364 lemma iffI: assumes \"P ==> Q\" and \"Q ==> P\" shows \"P=Q\"\n```\n``` 365 by (iprover intro: iff [THEN mp, THEN mp] impI assms)\n```\n``` 366\n```\n``` 367 lemma iffD2: \"[| P=Q; Q |] ==> P\"\n```\n``` 368 by (erule ssubst)\n```\n``` 369\n```\n``` 370 lemma rev_iffD2: \"[| Q; P=Q |] ==> P\"\n```\n``` 371 by (erule iffD2)\n```\n``` 372\n```\n``` 373 lemma iffD1: \"Q = P \\<Longrightarrow> Q \\<Longrightarrow> 
P\"\n```\n``` 374 by (drule sym) (rule iffD2)\n```\n``` 375\n```\n``` 376 lemma rev_iffD1: \"Q \\<Longrightarrow> Q = P \\<Longrightarrow> P\"\n```\n``` 377 by (drule sym) (rule rev_iffD2)\n```\n``` 378\n```\n``` 379 lemma iffE:\n```\n``` 380 assumes major: \"P=Q\"\n```\n``` 381 and minor: \"[| P --> Q; Q --> P |] ==> R\"\n```\n``` 382 shows R\n```\n``` 383 by (iprover intro: minor impI major [THEN iffD2] major [THEN iffD1])\n```\n``` 384\n```\n``` 385\n```\n``` 386 subsubsection {*True*}\n```\n``` 387\n```\n``` 388 lemma TrueI: \"True\"\n```\n``` 389 unfolding True_def by (rule refl)\n```\n``` 390\n```\n``` 391 lemma eqTrueI: \"P ==> P = True\"\n```\n``` 392 by (iprover intro: iffI TrueI)\n```\n``` 393\n```\n``` 394 lemma eqTrueE: \"P = True ==> P\"\n```\n``` 395 by (erule iffD2) (rule TrueI)\n```\n``` 396\n```\n``` 397\n```\n``` 398 subsubsection {*Universal quantifier*}\n```\n``` 399\n```\n``` 400 lemma allI: assumes \"!!x::'a. P(x)\" shows \"ALL x. P(x)\"\n```\n``` 401 unfolding All_def by (iprover intro: ext eqTrueI assms)\n```\n``` 402\n```\n``` 403 lemma spec: \"ALL x::'a. P(x) ==> P(x)\"\n```\n``` 404 apply (unfold All_def)\n```\n``` 405 apply (rule eqTrueE)\n```\n``` 406 apply (erule fun_cong)\n```\n``` 407 done\n```\n``` 408\n```\n``` 409 lemma allE:\n```\n``` 410 assumes major: \"ALL x. P(x)\"\n```\n``` 411 and minor: \"P(x) ==> R\"\n```\n``` 412 shows R\n```\n``` 413 by (iprover intro: minor major [THEN spec])\n```\n``` 414\n```\n``` 415 lemma all_dupE:\n```\n``` 416 assumes major: \"ALL x. P(x)\"\n```\n``` 417 and minor: \"[| P(x); ALL x. 
P(x) |] ==> R\"\n```\n``` 418 shows R\n```\n``` 419 by (iprover intro: minor major major [THEN spec])\n```\n``` 420\n```\n``` 421\n```\n``` 422 subsubsection {* False *}\n```\n``` 423\n```\n``` 424 text {*\n```\n``` 425 Depends upon @{text spec}; it is impossible to do propositional\n```\n``` 426 logic before quantifiers!\n```\n``` 427 *}\n```\n``` 428\n```\n``` 429 lemma FalseE: \"False ==> P\"\n```\n``` 430 apply (unfold False_def)\n```\n``` 431 apply (erule spec)\n```\n``` 432 done\n```\n``` 433\n```\n``` 434 lemma False_neq_True: \"False = True ==> P\"\n```\n``` 435 by (erule eqTrueE [THEN FalseE])\n```\n``` 436\n```\n``` 437\n```\n``` 438 subsubsection {* Negation *}\n```\n``` 439\n```\n``` 440 lemma notI:\n```\n``` 441 assumes \"P ==> False\"\n```\n``` 442 shows \"~P\"\n```\n``` 443 apply (unfold not_def)\n```\n``` 444 apply (iprover intro: impI assms)\n```\n``` 445 done\n```\n``` 446\n```\n``` 447 lemma False_not_True: \"False ~= True\"\n```\n``` 448 apply (rule notI)\n```\n``` 449 apply (erule False_neq_True)\n```\n``` 450 done\n```\n``` 451\n```\n``` 452 lemma True_not_False: \"True ~= False\"\n```\n``` 453 apply (rule notI)\n```\n``` 454 apply (drule sym)\n```\n``` 455 apply (erule False_neq_True)\n```\n``` 456 done\n```\n``` 457\n```\n``` 458 lemma notE: \"[| ~P; P |] ==> R\"\n```\n``` 459 apply (unfold not_def)\n```\n``` 460 apply (erule mp [THEN FalseE])\n```\n``` 461 apply assumption\n```\n``` 462 done\n```\n``` 463\n```\n``` 464 lemma notI2: \"(P \\<Longrightarrow> \\<not> Pa) \\<Longrightarrow> (P \\<Longrightarrow> Pa) \\<Longrightarrow> \\<not> P\"\n```\n``` 465 by (erule notE [THEN notI]) (erule meta_mp)\n```\n``` 466\n```\n``` 467\n```\n``` 468 subsubsection {*Implication*}\n```\n``` 469\n```\n``` 470 lemma impE:\n```\n``` 471 assumes \"P-->Q\" \"P\" \"Q ==> R\"\n```\n``` 472 shows \"R\"\n```\n``` 473 by (iprover intro: assms mp)\n```\n``` 474\n```\n``` 475 (* Reduces Q to P-->Q, allowing substitution in P. 
*)\n```\n``` 476 lemma rev_mp: \"[| P; P --> Q |] ==> Q\"\n```\n``` 477 by (iprover intro: mp)\n```\n``` 478\n```\n``` 479 lemma contrapos_nn:\n```\n``` 480 assumes major: \"~Q\"\n```\n``` 481 and minor: \"P==>Q\"\n```\n``` 482 shows \"~P\"\n```\n``` 483 by (iprover intro: notI minor major [THEN notE])\n```\n``` 484\n```\n``` 485 (*not used at all, but we already have the other 3 combinations *)\n```\n``` 486 lemma contrapos_pn:\n```\n``` 487 assumes major: \"Q\"\n```\n``` 488 and minor: \"P ==> ~Q\"\n```\n``` 489 shows \"~P\"\n```\n``` 490 by (iprover intro: notI minor major notE)\n```\n``` 491\n```\n``` 492 lemma not_sym: \"t ~= s ==> s ~= t\"\n```\n``` 493 by (erule contrapos_nn) (erule sym)\n```\n``` 494\n```\n``` 495 lemma eq_neq_eq_imp_neq: \"[| x = a ; a ~= b; b = y |] ==> x ~= y\"\n```\n``` 496 by (erule subst, erule ssubst, assumption)\n```\n``` 497\n```\n``` 498 (*still used in HOLCF*)\n```\n``` 499 lemma rev_contrapos:\n```\n``` 500 assumes pq: \"P ==> Q\"\n```\n``` 501 and nq: \"~Q\"\n```\n``` 502 shows \"~P\"\n```\n``` 503 apply (rule nq [THEN contrapos_nn])\n```\n``` 504 apply (erule pq)\n```\n``` 505 done\n```\n``` 506\n```\n``` 507 subsubsection {*Existential quantifier*}\n```\n``` 508\n```\n``` 509 lemma exI: \"P x ==> EX x::'a. P x\"\n```\n``` 510 apply (unfold Ex_def)\n```\n``` 511 apply (iprover intro: allI allE impI mp)\n```\n``` 512 done\n```\n``` 513\n```\n``` 514 lemma exE:\n```\n``` 515 assumes major: \"EX x::'a. P(x)\"\n```\n``` 516 and minor: \"!!x. 
P(x) ==> Q\"\n```\n``` 517 shows \"Q\"\n```\n``` 518 apply (rule major [unfolded Ex_def, THEN spec, THEN mp])\n```\n``` 519 apply (iprover intro: impI [THEN allI] minor)\n```\n``` 520 done\n```\n``` 521\n```\n``` 522\n```\n``` 523 subsubsection {*Conjunction*}\n```\n``` 524\n```\n``` 525 lemma conjI: \"[| P; Q |] ==> P&Q\"\n```\n``` 526 apply (unfold and_def)\n```\n``` 527 apply (iprover intro: impI [THEN allI] mp)\n```\n``` 528 done\n```\n``` 529\n```\n``` 530 lemma conjunct1: \"[| P & Q |] ==> P\"\n```\n``` 531 apply (unfold and_def)\n```\n``` 532 apply (iprover intro: impI dest: spec mp)\n```\n``` 533 done\n```\n``` 534\n```\n``` 535 lemma conjunct2: \"[| P & Q |] ==> Q\"\n```\n``` 536 apply (unfold and_def)\n```\n``` 537 apply (iprover intro: impI dest: spec mp)\n```\n``` 538 done\n```\n``` 539\n```\n``` 540 lemma conjE:\n```\n``` 541 assumes major: \"P&Q\"\n```\n``` 542 and minor: \"[| P; Q |] ==> R\"\n```\n``` 543 shows \"R\"\n```\n``` 544 apply (rule minor)\n```\n``` 545 apply (rule major [THEN conjunct1])\n```\n``` 546 apply (rule major [THEN conjunct2])\n```\n``` 547 done\n```\n``` 548\n```\n``` 549 lemma context_conjI:\n```\n``` 550 assumes \"P\" \"P ==> Q\" shows \"P & Q\"\n```\n``` 551 by (iprover intro: conjI assms)\n```\n``` 552\n```\n``` 553\n```\n``` 554 subsubsection {*Disjunction*}\n```\n``` 555\n```\n``` 556 lemma disjI1: \"P ==> P|Q\"\n```\n``` 557 apply (unfold or_def)\n```\n``` 558 apply (iprover intro: allI impI mp)\n```\n``` 559 done\n```\n``` 560\n```\n``` 561 lemma disjI2: \"Q ==> P|Q\"\n```\n``` 562 apply (unfold or_def)\n```\n``` 563 apply (iprover intro: allI impI mp)\n```\n``` 564 done\n```\n``` 565\n```\n``` 566 lemma disjE:\n```\n``` 567 assumes major: \"P|Q\"\n```\n``` 568 and minorP: \"P ==> R\"\n```\n``` 569 and minorQ: \"Q ==> R\"\n```\n``` 570 shows \"R\"\n```\n``` 571 by (iprover intro: minorP minorQ impI\n```\n``` 572 major [unfolded or_def, THEN spec, THEN mp, THEN mp])\n```\n``` 573\n```\n``` 574\n```\n``` 575 subsubsection 
{*Classical logic*}\n```\n``` 576\n```\n``` 577 lemma classical:\n```\n``` 578 assumes prem: \"~P ==> P\"\n```\n``` 579 shows \"P\"\n```\n``` 580 apply (rule True_or_False [THEN disjE, THEN eqTrueE])\n```\n``` 581 apply assumption\n```\n``` 582 apply (rule notI [THEN prem, THEN eqTrueI])\n```\n``` 583 apply (erule subst)\n```\n``` 584 apply assumption\n```\n``` 585 done\n```\n``` 586\n```\n``` 587 lemmas ccontr = FalseE [THEN classical, standard]\n```\n``` 588\n```\n``` 589 (*notE with premises exchanged; it discharges ~R so that it can be used to\n```\n``` 590 make elimination rules*)\n```\n``` 591 lemma rev_notE:\n```\n``` 592 assumes premp: \"P\"\n```\n``` 593 and premnot: \"~R ==> ~P\"\n```\n``` 594 shows \"R\"\n```\n``` 595 apply (rule ccontr)\n```\n``` 596 apply (erule notE [OF premnot premp])\n```\n``` 597 done\n```\n``` 598\n```\n``` 599 (*Double negation law*)\n```\n``` 600 lemma notnotD: \"~~P ==> P\"\n```\n``` 601 apply (rule classical)\n```\n``` 602 apply (erule notE)\n```\n``` 603 apply assumption\n```\n``` 604 done\n```\n``` 605\n```\n``` 606 lemma contrapos_pp:\n```\n``` 607 assumes p1: \"Q\"\n```\n``` 608 and p2: \"~P ==> ~Q\"\n```\n``` 609 shows \"P\"\n```\n``` 610 by (iprover intro: classical p1 p2 notE)\n```\n``` 611\n```\n``` 612\n```\n``` 613 subsubsection {*Unique existence*}\n```\n``` 614\n```\n``` 615 lemma ex1I:\n```\n``` 616 assumes \"P a\" \"!!x. P(x) ==> x=a\"\n```\n``` 617 shows \"EX! x. P(x)\"\n```\n``` 618 by (unfold Ex1_def, iprover intro: assms exI conjI allI impI)\n```\n``` 619\n```\n``` 620 text{*Sometimes easier to use: the premises have no shared variables. Safe!*}\n```\n``` 621 lemma ex_ex1I:\n```\n``` 622 assumes ex_prem: \"EX x. P(x)\"\n```\n``` 623 and eq: \"!!x y. [| P(x); P(y) |] ==> x=y\"\n```\n``` 624 shows \"EX! x. P(x)\"\n```\n``` 625 by (iprover intro: ex_prem [THEN exE] ex1I eq)\n```\n``` 626\n```\n``` 627 lemma ex1E:\n```\n``` 628 assumes major: \"EX! x. P(x)\"\n```\n``` 629 and minor: \"!!x. [| P(x); ALL y. 
P(y) --> y=x |] ==> R\"\n```\n``` 630 shows \"R\"\n```\n``` 631 apply (rule major [unfolded Ex1_def, THEN exE])\n```\n``` 632 apply (erule conjE)\n```\n``` 633 apply (iprover intro: minor)\n```\n``` 634 done\n```\n``` 635\n```\n``` 636 lemma ex1_implies_ex: \"EX! x. P x ==> EX x. P x\"\n```\n``` 637 apply (erule ex1E)\n```\n``` 638 apply (rule exI)\n```\n``` 639 apply assumption\n```\n``` 640 done\n```\n``` 641\n```\n``` 642\n```\n``` 643 subsubsection {*THE: definite description operator*}\n```\n``` 644\n```\n``` 645 lemma the_equality:\n```\n``` 646 assumes prema: \"P a\"\n```\n``` 647 and premx: \"!!x. P x ==> x=a\"\n```\n``` 648 shows \"(THE x. P x) = a\"\n```\n``` 649 apply (rule trans [OF _ the_eq_trivial])\n```\n``` 650 apply (rule_tac f = \"The\" in arg_cong)\n```\n``` 651 apply (rule ext)\n```\n``` 652 apply (rule iffI)\n```\n``` 653 apply (erule premx)\n```\n``` 654 apply (erule ssubst, rule prema)\n```\n``` 655 done\n```\n``` 656\n```\n``` 657 lemma theI:\n```\n``` 658 assumes \"P a\" and \"!!x. P x ==> x=a\"\n```\n``` 659 shows \"P (THE x. P x)\"\n```\n``` 660 by (iprover intro: assms the_equality [THEN ssubst])\n```\n``` 661\n```\n``` 662 lemma theI': \"EX! x. P x ==> P (THE x. P x)\"\n```\n``` 663 apply (erule ex1E)\n```\n``` 664 apply (erule theI)\n```\n``` 665 apply (erule allE)\n```\n``` 666 apply (erule mp)\n```\n``` 667 apply assumption\n```\n``` 668 done\n```\n``` 669\n```\n``` 670 (*Easier to apply than theI: only one occurrence of P*)\n```\n``` 671 lemma theI2:\n```\n``` 672 assumes \"P a\" \"!!x. P x ==> x=a\" \"!!x. P x ==> Q x\"\n```\n``` 673 shows \"Q (THE x. P x)\"\n```\n``` 674 by (iprover intro: assms theI)\n```\n``` 675\n```\n``` 676 lemma the1I2: assumes \"EX! x. P x\" \"\\<And>x. P x \\<Longrightarrow> Q x\" shows \"Q (THE x. P x)\"\n```\n``` 677 by(iprover intro:assms(2) theI2[where P=P and Q=Q] ex1E[OF assms(1)]\n```\n``` 678 elim:allE impE)\n```\n``` 679\n```\n``` 680 lemma the1_equality [elim?]: \"[| EX!x. P x; P a |] ==> (THE x. 
P x) = a"
apply (rule the_equality)
apply assumption
apply (erule ex1E)
apply (erule all_dupE)
apply (drule mp)
apply assumption
apply (erule ssubst)
apply (erule allE)
apply (erule mp)
apply assumption
done

lemma the_sym_eq_trivial: "(THE y. x=y) = x"
apply (rule the_equality)
apply (rule refl)
apply (erule sym)
done


subsubsection {*Classical intro rules for disjunction and existential quantifiers*}

lemma disjCI:
  assumes "~Q ==> P" shows "P|Q"
apply (rule classical)
apply (iprover intro: assms disjI1 disjI2 notI elim: notE)
done

lemma excluded_middle: "~P | P"
  by (iprover intro: disjCI)

text {*
  case distinction as a natural deduction rule.
  Note that @{term "~P"} is the second case, not the first
*}
lemma case_split [case_names True False]:
  assumes prem1: "P ==> Q"
      and prem2: "~P ==> Q"
  shows "Q"
apply (rule excluded_middle [THEN disjE])
apply (erule prem2)
apply (erule prem1)
done

(*Classical implies (-->) elimination. *)
lemma impCE:
  assumes major: "P-->Q"
      and minor: "~P ==> R" "Q ==> R"
  shows "R"
apply (rule excluded_middle [of P, THEN disjE])
apply (iprover intro: minor major [THEN mp])+
done

(*This version of --> elimination works on Q before P.  It works best for
  those cases in which P holds "almost everywhere".  Can't install as
  default: would break old proofs.*)
lemma impCE':
  assumes major: "P-->Q"
      and minor: "Q ==> R" "~P ==> R"
  shows "R"
apply (rule excluded_middle [of P, THEN disjE])
apply (iprover intro: minor major [THEN mp])+
done

(*Classical <-> elimination. *)
lemma iffCE:
  assumes major: "P=Q"
      and minor: "[| P; Q |] ==> R" "[| ~P; ~Q |] ==> R"
  shows "R"
apply (rule major [THEN iffE])
apply (iprover intro: minor elim: impCE notE)
done

lemma exCI:
  assumes "ALL x. ~P(x) ==> P(a)"
  shows "EX x. P(x)"
apply (rule ccontr)
apply (iprover intro: assms exI allI notI notE [of "\<exists>x. P x"])
done


subsubsection {* Intuitionistic Reasoning *}

lemma impE':
  assumes 1: "P --> Q"
    and 2: "Q ==> R"
    and 3: "P --> Q ==> P"
  shows R
proof -
  from 3 and 1 have P .
  with 1 have Q by (rule impE)
  with 2 show R .
qed

lemma allE':
  assumes 1: "ALL x. P x"
    and 2: "P x ==> ALL x. P x ==> Q"
  shows Q
proof -
  from 1 have "P x" by (rule spec)
  from this and 1 show Q by (rule 2)
qed

lemma notE':
  assumes 1: "~ P"
    and 2: "~ P ==> P"
  shows R
proof -
  from 2 and 1 have P .
  with 1 show R by (rule notE)
qed

lemma TrueE: "True ==> P ==> P" .
lemma notFalseE: "~ False ==> P ==> P" .

lemmas [Pure.elim!] = disjE iffE FalseE conjE exE TrueE notFalseE
  and [Pure.intro!] = iffI conjI impI TrueI notI allI refl
  and [Pure.elim 2] = allE notE' impE'
  and [Pure.intro] = exI disjI2 disjI1

lemmas [trans] = trans
  and [sym] = sym not_sym
  and [Pure.elim?] = iffD1 iffD2 impE

use "hologic.ML"


subsubsection {* Atomizing meta-level connectives *}

axiomatization where
  eq_reflection: "x = y \<Longrightarrow> x \<equiv> y" (*admissible axiom*)

lemma atomize_all [atomize]: "(!!x. P x) == Trueprop (ALL x. P x)"
proof
  assume "!!x. P x"
  then show "ALL x. P x" ..
next
  assume "ALL x. P x"
  then show "!!x. P x" by (rule allE)
qed

lemma atomize_imp [atomize]: "(A ==> B) == Trueprop (A --> B)"
proof
  assume r: "A ==> B"
  show "A --> B" by (rule impI) (rule r)
next
  assume "A --> B" and A
  then show B by (rule mp)
qed

lemma atomize_not: "(A ==> False) == Trueprop (~A)"
proof
  assume r: "A ==> False"
  show "~A" by (rule notI) (rule r)
next
  assume "~A" and A
  then show False by (rule notE)
qed

lemma atomize_eq [atomize]: "(x == y) == Trueprop (x = y)"
proof
  assume "x == y"
  show "x = y" by (unfold `x == y`) (rule refl)
next
  assume "x = y"
  then show "x == y" by (rule eq_reflection)
qed

lemma atomize_conj [atomize]:
  fixes meta_conjunction :: "prop => prop => prop"  (infixr "&&" 2)
  shows "(A && B) == Trueprop (A & B)"
proof
  assume conj: "A && B"
  show "A & B"
  proof (rule conjI)
    from conj show A by (rule conjunctionD1)
    from conj show B by (rule conjunctionD2)
  qed
next
  assume conj: "A & B"
  show "A && B"
  proof -
    from conj show A ..
    from conj show B ..
  qed
qed

lemmas [symmetric, rulify] = atomize_all atomize_imp
  and [symmetric, defn] = atomize_all atomize_imp atomize_eq


subsubsection {* Atomizing elimination rules *}

setup AtomizeElim.setup

lemma atomize_exL[atomize_elim]: "(!!x. P x ==> Q) == ((EX x. P x) ==> Q)"
  by rule iprover+

lemma atomize_conjL[atomize_elim]: "(A ==> B ==> C) == (A & B ==> C)"
  by rule iprover+

lemma atomize_disjL[atomize_elim]: "((A ==> C) ==> (B ==> C) ==> C) == ((A | B ==> C) ==> C)"
  by rule iprover+

lemma atomize_elimL[atomize_elim]: "(!!B. (A ==> B) ==> B) == Trueprop A" ..


subsection {* Package setup *}

subsubsection {* Classical Reasoner setup *}

lemma imp_elim: "P --> Q ==> (~ R ==> P) ==> (Q ==> R) ==> R"
  by (rule classical) iprover

lemma swap: "~ P ==> (~ R ==> P) ==> R"
  by (rule classical) iprover

lemma thin_refl:
  "\<And>X. \<lbrakk> x=x; PROP W \<rbrakk> \<Longrightarrow> PROP W" .

ML {*
structure Hypsubst = HypsubstFun(
struct
  structure Simplifier = Simplifier
  val dest_eq = HOLogic.dest_eq
  val dest_Trueprop = HOLogic.dest_Trueprop
  val dest_imp = HOLogic.dest_imp
  val eq_reflection = @{thm eq_reflection}
  val rev_eq_reflection = @{thm meta_eq_to_obj_eq}
  val imp_intr = @{thm impI}
  val rev_mp = @{thm rev_mp}
  val subst = @{thm subst}
  val sym = @{thm sym}
  val thin_refl = @{thm thin_refl};
  val prop_subst = @{lemma "PROP P t ==> PROP prop (x = t ==> PROP P x)"
    by (unfold prop_def) (drule eq_reflection, unfold)}
end);
open Hypsubst;

structure Classical = ClassicalFun(
struct
  val imp_elim = @{thm imp_elim}
  val not_elim = @{thm notE}
  val swap = @{thm swap}
  val classical = @{thm classical}
  val sizef = Drule.size_of_thm
  val hyp_subst_tacs = [Hypsubst.hyp_subst_tac]
end);

structure BasicClassical: BASIC_CLASSICAL = Classical;
open BasicClassical;

ML_Antiquote.value "claset"
  (Scan.succeed "Classical.local_claset_of (ML_Context.the_local_context ())");

structure ResAtpset = NamedThmsFun(val name = "atp" val description = "ATP rules");

structure ResBlacklist = NamedThmsFun(val name = "noatp" val description = "Theorems blacklisted for ATP");
*}

text {*ResBlacklist holds theorems blacklisted for sledgehammer.
  These theorems typically produce clauses that are prolific (match too many equality or
  membership literals) and relate to seldom-used facts.  Some duplicate other rules.*}

setup {*
let
  (*prevent substitution on bool*)
  fun hyp_subst_tac' i thm = if i <= Thm.nprems_of thm andalso
    Term.exists_Const (fn ("op =", Type (_, [T, _])) => T <> Type ("bool", []) | _ => false)
      (nth (Thm.prems_of thm) (i - 1)) then Hypsubst.hyp_subst_tac i thm else no_tac thm;
in
  Hypsubst.hypsubst_setup
  #> ContextRules.addSWrapper (fn tac => hyp_subst_tac' ORELSE' tac)
  #> Classical.setup
  #> ResAtpset.setup
  #> ResBlacklist.setup
end
*}

declare iffI [intro!]
  and notI [intro!]
  and impI [intro!]
  and disjCI [intro!]
  and conjI [intro!]
  and TrueI [intro!]
  and refl [intro!]

declare iffCE [elim!]
  and FalseE [elim!]
  and impCE [elim!]
  and disjE [elim!]
  and conjE [elim!]

declare ex_ex1I [intro!]
  and allI [intro!]
  and the_equality [intro]
  and exI [intro]

declare exE [elim!]
  allE [elim]

ML {* val HOL_cs = @{claset} *}

lemma contrapos_np: "~ Q ==> (~ P ==> Q) ==> P"
  apply (erule swap)
  apply (erule (1) meta_mp)
  done

declare ex_ex1I [rule del, intro! 2]
  and ex1I [intro]

lemmas [intro?] = ext
  and [elim?] = ex1_implies_ex

(*Better than ex1E for classical reasoner: needs no quantifier duplication!*)
lemma alt_ex1E [elim!]:
  assumes major: "\<exists>!x. P x"
      and prem: "\<And>x. \<lbrakk> P x; \<forall>y y'. P y \<and> P y' \<longrightarrow> y = y' \<rbrakk> \<Longrightarrow> R"
  shows R
apply (rule ex1E [OF major])
apply (rule prem)
apply (tactic {* ares_tac @{thms allI} 1 *})+
apply (tactic {* etac (Classical.dup_elim @{thm allE}) 1 *})
apply iprover
done

ML {*
structure Blast = BlastFun
(
  type claset = Classical.claset
  val equality_name = @{const_name "op ="}
  val not_name = @{const_name Not}
  val notE = @{thm notE}
  val ccontr = @{thm ccontr}
  val contr_tac = Classical.contr_tac
  val dup_intr = Classical.dup_intr
  val hyp_subst_tac = Hypsubst.blast_hyp_subst_tac
  val claset = Classical.claset
  val rep_cs = Classical.rep_cs
  val cla_modifiers = Classical.cla_modifiers
  val cla_meth' = Classical.cla_meth'
);
val Blast_tac = Blast.Blast_tac;
val blast_tac = Blast.blast_tac;
*}

setup Blast.setup


subsubsection {* Simplifier *}

lemma eta_contract_eq: "(%s. f s) = f" ..

lemma simp_thms:
  shows not_not: "(~ ~ P) = P"
  and Not_eq_iff: "((~P) = (~Q)) = (P = Q)"
  and
    "(P ~= Q) = (P = (~Q))"
    "(P | ~P) = True"    "(~P | P) = True"
    "(x = x) = True"
  and not_True_eq_False: "(\<not> True) = False"
  and not_False_eq_True: "(\<not> False) = True"
  and
    "(~P) ~= P"  "P ~= (~P)"
    "(True=P) = P"
  and eq_True: "(P = True) = P"
  and "(False=P) = (~P)"
  and eq_False: "(P = False) = (\<not> P)"
  and
    "(True --> P) = P"  "(False --> P) = True"
    "(P --> True) = True"  "(P --> P) = True"
    "(P --> False) = (~P)"  "(P --> ~P) = (~P)"
    "(P & True) = P"  "(True & P) = P"
    "(P & False) = False"  "(False & P) = False"
    "(P & P) = P"  "(P & (P & Q)) = (P & Q)"
    "(P & ~P) = False"    "(~P & P) = False"
    "(P | True) = True"  "(True | P) = True"
    "(P | False) = P"  "(False | P) = P"
    "(P | P) = P"  "(P | (P | Q)) = (P | Q)" and
    "(ALL x. P) = P"  "(EX x. P) = P"  "EX x. x=t"  "EX x. t=x"
  -- {* needed for the one-point-rule quantifier simplification procs *}
  -- {* essential for termination!! *} and
    "!!P. (EX x. x=t & P(x)) = P(t)"
    "!!P. (EX x. t=x & P(x)) = P(t)"
    "!!P. (ALL x. x=t --> P(x)) = P(t)"
    "!!P. (ALL x. t=x --> P(x)) = P(t)"
  by (blast, blast, blast, blast, blast, iprover+)

lemma disj_absorb: "(A | A) = A"
  by blast

lemma disj_left_absorb: "(A | (A | B)) = (A | B)"
  by blast

lemma conj_absorb: "(A & A) = A"
  by blast

lemma conj_left_absorb: "(A & (A & B)) = (A & B)"
  by blast

lemma eq_ac:
  shows eq_commute: "(a=b) = (b=a)"
    and eq_left_commute: "(P=(Q=R)) = (Q=(P=R))"
    and eq_assoc: "((P=Q)=R) = (P=(Q=R))" by (iprover, blast+)
lemma neq_commute: "(a~=b) = (b~=a)" by iprover

lemma conj_comms:
  shows conj_commute: "(P&Q) = (Q&P)"
    and conj_left_commute: "(P&(Q&R)) = (Q&(P&R))" by iprover+
lemma conj_assoc: "((P&Q)&R) = (P&(Q&R))" by iprover

lemmas conj_ac = conj_commute conj_left_commute conj_assoc

lemma disj_comms:
  shows disj_commute: "(P|Q) = (Q|P)"
    and disj_left_commute: "(P|(Q|R)) = (Q|(P|R))" by iprover+
lemma disj_assoc: "((P|Q)|R) = (P|(Q|R))" by iprover

lemmas disj_ac = disj_commute disj_left_commute disj_assoc

lemma conj_disj_distribL: "(P&(Q|R)) = (P&Q | P&R)" by iprover
lemma conj_disj_distribR: "((P|Q)&R) = (P&R | Q&R)" by iprover

lemma disj_conj_distribL: "(P|(Q&R)) = ((P|Q) & (P|R))" by iprover
lemma disj_conj_distribR: "((P&Q)|R) = ((P|R) & (Q|R))" by iprover

lemma imp_conjR: "(P --> (Q&R)) = ((P-->Q) & (P-->R))" by iprover
lemma imp_conjL: "((P&Q) -->R)  = (P --> (Q --> R))" by iprover
lemma imp_disjL: "((P|Q) --> R) = ((P-->R)&(Q-->R))" by iprover

text {* These two are specialized, but @{text imp_disj_not1} is useful in @{text "Auth/Yahalom"}. *}
lemma imp_disj_not1: "(P --> Q | R) = (~Q --> P --> R)" by blast
lemma imp_disj_not2: "(P --> Q | R) = (~R --> P --> Q)" by blast

lemma imp_disj1: "((P-->Q)|R) = (P--> Q|R)" by blast
lemma imp_disj2: "(Q|(P-->R)) = (P--> Q|R)" by blast

lemma imp_cong: "(P = P') ==> (P' ==> (Q = Q')) ==> ((P --> Q) = (P' --> Q'))"
  by iprover

lemma de_Morgan_disj: "(~(P | Q)) = (~P & ~Q)" by iprover
lemma de_Morgan_conj: "(~(P & Q)) = (~P | ~Q)" by blast
lemma not_imp: "(~(P --> Q)) = (P & ~Q)" by blast
lemma not_iff: "(P~=Q) = (P = (~Q))" by blast
lemma disj_not1: "(~P | Q) = (P --> Q)" by blast
lemma disj_not2: "(P | ~Q) = (Q --> P)"  -- {* changes orientation :-( *}
  by blast
lemma imp_conv_disj: "(P --> Q) = ((~P) | Q)" by blast

lemma iff_conv_conj_imp: "(P = Q) = ((P --> Q) & (Q --> P))" by iprover


lemma cases_simp: "((P --> Q) & (~P --> Q)) = Q"
  -- {* Avoids duplication of subgoals after @{text split_if}, when the true and false *}
  -- {* cases boil down to the same thing. *}
  by blast

lemma not_all: "(~ (! x. P(x))) = (? x.~P(x))" by blast
lemma imp_all: "((! x. P x) --> Q) = (? x. P x --> Q)" by blast
lemma not_ex: "(~ (? x. P(x))) = (! x.~P(x))" by iprover
lemma imp_ex: "((? x. P x) --> Q) = (! x. P x --> Q)" by iprover
lemma all_not_ex: "(ALL x. P x) = (~ (EX x. ~ P x ))" by blast

declare All_def [noatp]

lemma ex_disj_distrib: "(? x. P(x) | Q(x)) = ((? x. P(x)) | (? x. Q(x)))" by iprover
lemma all_conj_distrib: "(!x. P(x) & Q(x)) = ((! x. P(x)) & (! x. Q(x)))" by iprover

text {*
  \medskip The @{text "&"} congruence rule: not included by default!
  May slow rewrite proofs down by as much as 50\% *}

lemma conj_cong:
    "(P = P') ==> (P' ==> (Q = Q')) ==> ((P & Q) = (P' & Q'))"
  by iprover

lemma rev_conj_cong:
    "(Q = Q') ==> (Q' ==> (P = P')) ==> ((P & Q) = (P' & Q'))"
  by iprover

text {* The @{text "|"} congruence rule: not included by default! *}

lemma disj_cong:
    "(P = P') ==> (~P' ==> (Q = Q')) ==> ((P | Q) = (P' | Q'))"
  by blast


text {* \medskip if-then-else rules *}

lemma if_True: "(if True then x else y) = x"
  by (unfold if_def) blast

lemma if_False: "(if False then x else y) = y"
  by (unfold if_def) blast

lemma if_P: "P ==> (if P then x else y) = x"
  by (unfold if_def) blast

lemma if_not_P: "~P ==> (if P then x else y) = y"
  by (unfold if_def) blast

lemma split_if: "P (if Q then x else y) = ((Q --> P(x)) & (~Q --> P(y)))"
  apply (rule case_split [of Q])
   apply (simplesubst if_P)
    prefer 3 apply (simplesubst if_not_P, blast+)
  done

lemma split_if_asm: "P (if Q then x else y) = (~((Q & ~P x) | (~Q & ~P y)))"
  by (simplesubst split_if, blast)

lemmas if_splits [noatp] = split_if split_if_asm

lemma if_cancel: "(if c then x else x) = x"
  by (simplesubst split_if, blast)

lemma if_eq_cancel: "(if x = y then y else x) = x"
  by (simplesubst split_if, blast)

lemma if_bool_eq_conj: "(if P then Q else R) = ((P-->Q) & (~P-->R))"
  -- {* This form is useful for expanding @{text "if"}s on the RIGHT of the @{text "==>"} symbol. *}
  by (rule split_if)

lemma if_bool_eq_disj: "(if P then Q else R) = ((P&Q) | (~P&R))"
  -- {* And this form is useful for expanding @{text "if"}s on the LEFT. *}
  apply (simplesubst split_if, blast)
  done

lemma Eq_TrueI: "P ==> P == True" by (unfold atomize_eq) iprover
lemma Eq_FalseI: "~P ==> P == False" by (unfold atomize_eq) iprover

text {* \medskip let rules for simproc *}

lemma Let_folded: "f x \<equiv> g x \<Longrightarrow> Let x f \<equiv> Let x g"
  by (unfold Let_def)

lemma Let_unfold: "f x \<equiv> g \<Longrightarrow> Let x f \<equiv> g"
  by (unfold Let_def)

text {*
  The following copy of the implication operator is useful for
  fine-tuning congruence rules.  It instructs the simplifier to simplify
  its premise.
*}

constdefs
  simp_implies :: "[prop, prop] => prop"  (infixr "=simp=>" 1)
  [code del]: "simp_implies \<equiv> op ==>"

lemma simp_impliesI:
  assumes PQ: "(PROP P \<Longrightarrow> PROP Q)"
  shows "PROP P =simp=> PROP Q"
  apply (unfold simp_implies_def)
  apply (rule PQ)
  apply assumption
  done

lemma simp_impliesE:
  assumes PQ: "PROP P =simp=> PROP Q"
  and P: "PROP P"
  and QR: "PROP Q \<Longrightarrow> PROP R"
  shows "PROP R"
  apply (rule QR)
  apply (rule PQ [unfolded simp_implies_def])
  apply (rule P)
  done

lemma simp_implies_cong:
  assumes PP' :"PROP P == PROP P'"
  and P'QQ': "PROP P' ==> (PROP Q == PROP Q')"
  shows "(PROP P =simp=> PROP Q) == (PROP P' =simp=> PROP Q')"
proof (unfold simp_implies_def, rule equal_intr_rule)
  assume PQ: "PROP P \<Longrightarrow> PROP Q"
  and P': "PROP P'"
  from PP' [symmetric] and P' have "PROP P"
    by (rule equal_elim_rule1)
  then have "PROP Q" by (rule PQ)
  with P'QQ' [OF P'] show "PROP Q'" by (rule equal_elim_rule1)
next
  assume P'Q': "PROP P' \<Longrightarrow> PROP Q'"
  and P: "PROP P"
  from PP' and P have P': "PROP P'" by (rule equal_elim_rule1)
  then have "PROP Q'" by (rule P'Q')
  with P'QQ' [OF P', symmetric] show "PROP Q"
    by (rule equal_elim_rule1)
qed

lemma uncurry:
  assumes "P \<longrightarrow> Q \<longrightarrow> R"
  shows "P \<and> Q \<longrightarrow> R"
  using assms by blast

lemma iff_allI:
  assumes "\<And>x. P x = Q x"
  shows "(\<forall>x. P x) = (\<forall>x. Q x)"
  using assms by blast

lemma iff_exI:
  assumes "\<And>x. P x = Q x"
  shows "(\<exists>x. P x) = (\<exists>x. Q x)"
  using assms by blast

lemma all_comm:
  "(\<forall>x y. P x y) = (\<forall>y x. P x y)"
  by blast

lemma ex_comm:
  "(\<exists>x y. P x y) = (\<exists>y x. P x y)"
  by blast

use "simpdata.ML"
ML {* open Simpdata *}

setup {*
  Simplifier.method_setup Splitter.split_modifiers
  #> Simplifier.map_simpset (K Simpdata.simpset_simprocs)
  #> Splitter.setup
  #> clasimp_setup
  #> EqSubst.setup
*}

text {* Simproc for proving @{text "(y = x) == False"} from premise @{text "~(x = y)"}: *}

simproc_setup neq ("x = y") = {* fn _ =>
let
  val neq_to_EQ_False = @{thm not_sym} RS @{thm Eq_FalseI};
  fun is_neq eq lhs rhs thm =
    (case Thm.prop_of thm of
      _ $ (Not $ (eq' $ l' $ r')) =>
        Not = HOLogic.Not andalso eq' = eq andalso
        r' aconv lhs andalso l' aconv rhs
    | _ => false);
  fun proc ss ct =
    (case Thm.term_of ct of
      eq $ lhs $ rhs =>
        (case find_first (is_neq eq lhs rhs) (Simplifier.prems_of_ss ss) of
          SOME thm => SOME (thm RS neq_to_EQ_False)
        | NONE => NONE)
    | _ => NONE);
in proc end;
*}

simproc_setup let_simp ("Let x f") = {*
let
  val (f_Let_unfold, x_Let_unfold) =
    let val [(_ $ (f $ x) $ _)] = prems_of @{thm Let_unfold}
    in (cterm_of @{theory} f, cterm_of @{theory} x) end
  val (f_Let_folded, x_Let_folded) =
    let val [(_ $ (f $ x) $ _)] = prems_of @{thm Let_folded}
    in (cterm_of @{theory} f, cterm_of @{theory} x) end;
  val g_Let_folded =
    let val [(_ $ _ $ (g $ _))] = prems_of @{thm Let_folded} in cterm_of @{theory} g end;

  fun proc _ ss ct =
    let
      val ctxt = Simplifier.the_context ss;
      val thy = ProofContext.theory_of ctxt;
      val t = Thm.term_of ct;
      val ([t'], ctxt') = Variable.import_terms false [t] ctxt;
    in Option.map (hd o Variable.export ctxt' ctxt o single)
      (case t' of Const ("Let",_) $ x $ f => (* x and f are already in normal form *)
        if is_Free x orelse is_Bound x orelse is_Const x
        then SOME @{thm Let_def}
        else
          let
            val n = case f of (Abs (x,_,_)) => x | _ => "x";
            val cx = cterm_of thy x;
            val {T=xT,...} = rep_cterm cx;
            val cf = cterm_of thy f;
            val fx_g = Simplifier.rewrite ss (Thm.capply cf cx);
            val (_ $ _ $ g) = prop_of fx_g;
            val g' = abstract_over (x,g);
          in (if (g aconv g')
             then
               let
                 val rl =
                   cterm_instantiate [(f_Let_unfold,cf),(x_Let_unfold,cx)] @{thm Let_unfold};
               in SOME (rl OF [fx_g]) end
             else if Term.betapply (f,x) aconv g then NONE (*avoid identity conversion*)
             else let
                 val abs_g' = Abs (n,xT,g');
                 val g'x = abs_g' $ x;
                 val g_g'x = symmetric (beta_conversion false (cterm_of thy g'x));
                 val rl = cterm_instantiate
                   [(f_Let_folded,cterm_of thy f),(x_Let_folded,cx),
                    (g_Let_folded,cterm_of thy abs_g')]
                   @{thm Let_folded};
               in SOME (rl OF [transitive fx_g g_g'x])
               end)
          end
      | _ => NONE)
    end
in proc end *}


lemma True_implies_equals: "(True \<Longrightarrow> PROP P) \<equiv> PROP P"
proof
  assume "True \<Longrightarrow> PROP P"
  from this [OF TrueI] show "PROP P" .
next
  assume "PROP P"
  then show "PROP P" .
qed

lemma ex_simps:
  "!!P Q. (EX x. P x & Q)   = ((EX x. P x) & Q)"
  "!!P Q. (EX x. P & Q x)   = (P & (EX x. Q x))"
  "!!P Q. (EX x. P x | Q)   = ((EX x. P x) | Q)"
  "!!P Q. (EX x. P | Q x)   = (P | (EX x. Q x))"
  "!!P Q. (EX x. P x --> Q) = ((ALL x. P x) --> Q)"
  "!!P Q. (EX x. P --> Q x) = (P --> (EX x. Q x))"
  -- {* Miniscoping: pushing in existential quantifiers. *}
  by (iprover | blast)+

lemma all_simps:
  "!!P Q. (ALL x. P x & Q)   = ((ALL x. P x) & Q)"
  "!!P Q. (ALL x. P & Q x)   = (P & (ALL x. Q x))"
  "!!P Q. (ALL x. P x | Q)   = ((ALL x. P x) | Q)"
  "!!P Q. (ALL x. P | Q x)   = (P | (ALL x. Q x))"
  "!!P Q. (ALL x. P x --> Q) = ((EX x. P x) --> Q)"
  "!!P Q. (ALL x. P --> Q x) = (P --> (ALL x. Q x))"
  -- {* Miniscoping: pushing in universal quantifiers. *}
  by (iprover | blast)+

lemmas [simp] =
  triv_forall_equality (*prunes params*)
  True_implies_equals  (*prune asms `True'*)
  if_True
  if_False
  if_cancel
  if_eq_cancel
  imp_disjL
  (*In general it seems wrong to add distributive laws by default: they
    might cause exponential blow-up.  But imp_disjL has been in for a while
    and cannot be removed without affecting existing proofs.  Moreover,
    rewriting by "(P|Q --> R) = ((P-->R)&(Q-->R))" might be justified on the
    grounds that it allows simplification of R in the two cases.*)
  conj_assoc
  disj_assoc
  de_Morgan_conj
  de_Morgan_disj
  imp_disj1
  imp_disj2
  not_imp
  disj_not1
  not_all
  not_ex
  cases_simp
  the_eq_trivial
  the_sym_eq_trivial
  ex_simps
  all_simps
  simp_thms

lemmas [cong] = imp_cong simp_implies_cong
lemmas [split] = split_if

ML {* val HOL_ss = @{simpset} *}

text {* Simplifies x assuming c and y assuming ~c *}
lemma if_cong:
  assumes "b = c"
      and "c \<Longrightarrow> x = u"
      and "\<not> c \<Longrightarrow> y = v"
  shows "(if b then x else y) = (if c then u else v)"
  unfolding if_def using assms by simp

text {* Prevents simplification of x and y:
  faster and allows the execution of functional programs. *}
lemma if_weak_cong [cong]:
  assumes "b = c"
  shows "(if b then x else y) = (if c then x else y)"
  using assms by (rule arg_cong)

text {* Prevents simplification of t: much faster *}
lemma let_weak_cong:
  assumes "a = b"
  shows "(let x = a in t x) = (let x = b in t x)"
  using assms by (rule arg_cong)

text {* To tidy up the result of a simproc.  Only the RHS will be simplified. *}
lemma eq_cong2:
  assumes "u = u'"
  shows "(t \<equiv> u) \<equiv> (t \<equiv> u')"
  using assms by simp

lemma if_distrib:
  "f (if c then x else y) = (if c then f x else f y)"
  by simp

text {* This lemma restricts the effect of the rewrite rule u=v to the left-hand
  side of an equality.  Used in @{text "{Integ,Real}/simproc.ML"} *}
lemma restrict_to_left:
  assumes "x = y"
  shows "(x = z) = (y = z)"
  using assms by simp


subsubsection {* Generic cases and induction *}

text {* Rule projections: *}

ML {*
structure ProjectRule = ProjectRuleFun
(
  val conjunct1 = @{thm conjunct1}
  val conjunct2 = @{thm conjunct2}
  val mp = @{thm mp}
)
*}

constdefs
  induct_forall where "induct_forall P == \<forall>x. P x"
  induct_implies where "induct_implies A B == A \<longrightarrow> B"
  induct_equal where "induct_equal x y == x = y"
  induct_conj where "induct_conj A B == A \<and> B"

lemma induct_forall_eq: "(!!x. P x) == Trueprop (induct_forall (\<lambda>x. P x))"
  by (unfold atomize_all induct_forall_def)

lemma induct_implies_eq: "(A ==> B) == Trueprop (induct_implies A B)"
  by (unfold atomize_imp induct_implies_def)

lemma induct_equal_eq: "(x == y) == Trueprop (induct_equal x y)"
  by (unfold atomize_eq induct_equal_def)

lemma induct_conj_eq:
  fixes meta_conjunction :: "prop => prop => prop"  (infixr "&&" 2)
  shows "(A && B) == Trueprop (induct_conj A B)"
  by (unfold atomize_conj induct_conj_def)

lemmas induct_atomize = induct_forall_eq induct_implies_eq induct_equal_eq induct_conj_eq
lemmas induct_rulify [symmetric, standard] = induct_atomize
lemmas induct_rulify_fallback =
  induct_forall_def induct_implies_def induct_equal_def induct_conj_def


lemma induct_forall_conj: "induct_forall (\<lambda>x. induct_conj (A x) (B x)) =
    induct_conj (induct_forall A) (induct_forall B)"
  by (unfold induct_forall_def induct_conj_def) iprover

lemma induct_implies_conj: "induct_implies C (induct_conj A B) =
    induct_conj (induct_implies C A) (induct_implies C B)"
  by (unfold induct_implies_def induct_conj_def) iprover

lemma induct_conj_curry: "(induct_conj A B ==> PROP C) == (A ==> B ==> PROP C)"
proof
  assume r: "induct_conj A B ==> PROP C" and A B
  show "PROP C" by (rule r) (simp add: induct_conj_def `A` `B`)
next
  assume r: "A ==> B ==> PROP C" and "induct_conj A B"
  show "PROP C" by (rule r) (simp_all add: `induct_conj A B` [unfolded induct_conj_def])
qed

lemmas induct_conj = induct_forall_conj induct_implies_conj induct_conj_curry

hide const induct_forall induct_implies induct_equal induct_conj

text {* Method setup. *}

ML {*
structure Induct = InductFun
(
  val cases_default = @{thm case_split}
  val atomize = @{thms induct_atomize}
  val rulify = @{thms induct_rulify}
  val rulify_fallback = @{thms induct_rulify_fallback}
)
*}

setup Induct.setup

use "~~/src/Tools/induct_tacs.ML"
setup InductTacs.setup


subsubsection {* Coherent logic *}

ML {*
structure Coherent = CoherentFun
(
  val atomize_elimL = @{thm atomize_elimL}
  val atomize_exL = @{thm atomize_exL}
  val atomize_conjL = @{thm atomize_conjL}
  val atomize_disjL = @{thm atomize_disjL}
  val operator_names =
    [@{const_name "op |"}, @{const_name "op &"}, @{const_name "Ex"}]
);
*}

setup Coherent.setup


subsection {* Other simple lemmas and lemma duplicates *}

lemma Let_0 [simp]: "Let 0 f = f 0"
  unfolding Let_def ..

lemma Let_1 [simp]: "Let 1 f = f 1"
  unfolding Let_def ..

lemma ex1_eq [iff]: "EX! x. x = t" "EX! x. t = x"
  by blast+

lemma choice_eq: "(ALL x. EX! y. P x y) = (EX! f. ALL x. P x (f x))"
  apply (rule iffI)
  apply (rule_tac a = "%x. THE y.
P x y\" in ex1I)\n```\n``` 1587 apply (fast dest!: theI')\n```\n``` 1588 apply (fast intro: ext the1_equality [symmetric])\n```\n``` 1589 apply (erule ex1E)\n```\n``` 1590 apply (rule allI)\n```\n``` 1591 apply (rule ex1I)\n```\n``` 1592 apply (erule spec)\n```\n``` 1593 apply (erule_tac x = \"%z. if z = x then y else f z\" in allE)\n```\n``` 1594 apply (erule impE)\n```\n``` 1595 apply (rule allI)\n```\n``` 1596 apply (case_tac \"xa = x\")\n```\n``` 1597 apply (drule_tac x = x in fun_cong, simp_all)\n```\n``` 1598 done\n```\n``` 1599\n```\n``` 1600 lemma mk_left_commute:\n```\n``` 1601 fixes f (infix \"\\<otimes>\" 60)\n```\n``` 1602 assumes a: \"\\<And>x y z. (x \\<otimes> y) \\<otimes> z = x \\<otimes> (y \\<otimes> z)\" and\n```\n``` 1603 c: \"\\<And>x y. x \\<otimes> y = y \\<otimes> x\"\n```\n``` 1604 shows \"x \\<otimes> (y \\<otimes> z) = y \\<otimes> (x \\<otimes> z)\"\n```\n``` 1605 by (rule trans [OF trans [OF c a] arg_cong [OF c, of \"f y\"]])\n```\n``` 1606\n```\n``` 1607 lemmas eq_sym_conv = eq_commute\n```\n``` 1608\n```\n``` 1609 lemma nnf_simps:\n```\n``` 1610 \"(\\<not>(P \\<and> Q)) = (\\<not> P \\<or> \\<not> Q)\" \"(\\<not> (P \\<or> Q)) = (\\<not> P \\<and> \\<not>Q)\" \"(P \\<longrightarrow> Q) = (\\<not>P \\<or> Q)\"\n```\n``` 1611 \"(P = Q) = ((P \\<and> Q) \\<or> (\\<not>P \\<and> \\<not> Q))\" \"(\\<not>(P = Q)) = ((P \\<and> \\<not> Q) \\<or> (\\<not>P \\<and> Q))\"\n```\n``` 1612 \"(\\<not> \\<not>(P)) = P\"\n```\n``` 1613 by blast+\n```\n``` 1614\n```\n``` 1615\n```\n``` 1616 subsection {* Basic ML bindings *}\n```\n``` 1617\n```\n``` 1618 ML {*\n```\n``` 1619 val FalseE = @{thm FalseE}\n```\n``` 1620 val Let_def = @{thm Let_def}\n```\n``` 1621 val TrueI = @{thm TrueI}\n```\n``` 1622 val allE = @{thm allE}\n```\n``` 1623 val allI = @{thm allI}\n```\n``` 1624 val all_dupE = @{thm all_dupE}\n```\n``` 1625 val arg_cong = @{thm arg_cong}\n```\n``` 1626 val box_equals = @{thm box_equals}\n```\n``` 1627 val ccontr = @{thm ccontr}\n```\n``` 
1628 val classical = @{thm classical}\n```\n``` 1629 val conjE = @{thm conjE}\n```\n``` 1630 val conjI = @{thm conjI}\n```\n``` 1631 val conjunct1 = @{thm conjunct1}\n```\n``` 1632 val conjunct2 = @{thm conjunct2}\n```\n``` 1633 val disjCI = @{thm disjCI}\n```\n``` 1634 val disjE = @{thm disjE}\n```\n``` 1635 val disjI1 = @{thm disjI1}\n```\n``` 1636 val disjI2 = @{thm disjI2}\n```\n``` 1637 val eq_reflection = @{thm eq_reflection}\n```\n``` 1638 val ex1E = @{thm ex1E}\n```\n``` 1639 val ex1I = @{thm ex1I}\n```\n``` 1640 val ex1_implies_ex = @{thm ex1_implies_ex}\n```\n``` 1641 val exE = @{thm exE}\n```\n``` 1642 val exI = @{thm exI}\n```\n``` 1643 val excluded_middle = @{thm excluded_middle}\n```\n``` 1644 val ext = @{thm ext}\n```\n``` 1645 val fun_cong = @{thm fun_cong}\n```\n``` 1646 val iffD1 = @{thm iffD1}\n```\n``` 1647 val iffD2 = @{thm iffD2}\n```\n``` 1648 val iffI = @{thm iffI}\n```\n``` 1649 val impE = @{thm impE}\n```\n``` 1650 val impI = @{thm impI}\n```\n``` 1651 val meta_eq_to_obj_eq = @{thm meta_eq_to_obj_eq}\n```\n``` 1652 val mp = @{thm mp}\n```\n``` 1653 val notE = @{thm notE}\n```\n``` 1654 val notI = @{thm notI}\n```\n``` 1655 val not_all = @{thm not_all}\n```\n``` 1656 val not_ex = @{thm not_ex}\n```\n``` 1657 val not_iff = @{thm not_iff}\n```\n``` 1658 val not_not = @{thm not_not}\n```\n``` 1659 val not_sym = @{thm not_sym}\n```\n``` 1660 val refl = @{thm refl}\n```\n``` 1661 val rev_mp = @{thm rev_mp}\n```\n``` 1662 val spec = @{thm spec}\n```\n``` 1663 val ssubst = @{thm ssubst}\n```\n``` 1664 val subst = @{thm subst}\n```\n``` 1665 val sym = @{thm sym}\n```\n``` 1666 val trans = @{thm trans}\n```\n``` 1667 *}\n```\n``` 1668\n```\n``` 1669\n```\n``` 1670 subsection {* Code generator basics -- see further theory @{text \"Code_Setup\"} *}\n```\n``` 1671\n```\n``` 1672 text {* Equality *}\n```\n``` 1673\n```\n``` 1674 class eq = type +\n```\n``` 1675 fixes eq :: \"'a \\<Rightarrow> 'a \\<Rightarrow> bool\"\n```\n``` 1676 assumes eq_equals: 
\"eq x y \\<longleftrightarrow> x = y\"\n```\n``` 1677 begin\n```\n``` 1678\n```\n``` 1679 lemma eq: \"eq = (op =)\"\n```\n``` 1680 by (rule ext eq_equals)+\n```\n``` 1681\n```\n``` 1682 lemma eq_refl: \"eq x x \\<longleftrightarrow> True\"\n```\n``` 1683 unfolding eq by rule+\n```\n``` 1684\n```\n``` 1685 end\n```\n``` 1686\n```\n``` 1687 text {* Module setup *}\n```\n``` 1688\n```\n``` 1689 use \"~~/src/HOL/Tools/recfun_codegen.ML\"\n```\n``` 1690\n```\n``` 1691 setup {*\n```\n``` 1692 Code_ML.setup\n```\n``` 1693 #> Code_Haskell.setup\n```\n``` 1694 #> Nbe.setup\n```\n``` 1695 #> Codegen.setup\n```\n``` 1696 #> RecfunCodegen.setup\n```\n``` 1697 *}\n```\n``` 1698\n```\n``` 1699\n```\n``` 1700 subsection {* Legacy tactics and ML bindings *}\n```\n``` 1701\n```\n``` 1702 ML {*\n```\n``` 1703 fun strip_tac i = REPEAT (resolve_tac [impI, allI] i);\n```\n``` 1704\n```\n``` 1705 (* combination of (spec RS spec RS ...(j times) ... spec RS mp) *)\n```\n``` 1706 local\n```\n``` 1707 fun wrong_prem (Const (\"All\", _) \\$ (Abs (_, _, t))) = wrong_prem t\n```\n``` 1708 | wrong_prem (Bound _) = true\n```\n``` 1709 | wrong_prem _ = false;\n```\n``` 1710 val filter_right = filter (not o wrong_prem o HOLogic.dest_Trueprop o hd o Thm.prems_of);\n```\n``` 1711 in\n```\n``` 1712 fun smp i = funpow i (fn m => filter_right ([spec] RL m)) ([mp]);\n```\n``` 1713 fun smp_tac j = EVERY'[dresolve_tac (smp j), atac];\n```\n``` 1714 end;\n```\n``` 1715\n```\n``` 1716 val all_conj_distrib = thm \"all_conj_distrib\";\n```\n``` 1717 val all_simps = thms \"all_simps\";\n```\n``` 1718 val atomize_not = thm \"atomize_not\";\n```\n``` 1719 val case_split = thm \"case_split\";\n```\n``` 1720 val cases_simp = thm \"cases_simp\";\n```\n``` 1721 val choice_eq = thm \"choice_eq\"\n```\n``` 1722 val cong = thm \"cong\"\n```\n``` 1723 val conj_comms = thms \"conj_comms\";\n```\n``` 1724 val conj_cong = thm \"conj_cong\";\n```\n``` 1725 val de_Morgan_conj = thm \"de_Morgan_conj\";\n```\n``` 1726 val 
de_Morgan_disj = thm \"de_Morgan_disj\";\n```\n``` 1727 val disj_assoc = thm \"disj_assoc\";\n```\n``` 1728 val disj_comms = thms \"disj_comms\";\n```\n``` 1729 val disj_cong = thm \"disj_cong\";\n```\n``` 1730 val eq_ac = thms \"eq_ac\";\n```\n``` 1731 val eq_cong2 = thm \"eq_cong2\"\n```\n``` 1732 val Eq_FalseI = thm \"Eq_FalseI\";\n```\n``` 1733 val Eq_TrueI = thm \"Eq_TrueI\";\n```\n``` 1734 val Ex1_def = thm \"Ex1_def\"\n```\n``` 1735 val ex_disj_distrib = thm \"ex_disj_distrib\";\n```\n``` 1736 val ex_simps = thms \"ex_simps\";\n```\n``` 1737 val if_cancel = thm \"if_cancel\";\n```\n``` 1738 val if_eq_cancel = thm \"if_eq_cancel\";\n```\n``` 1739 val if_False = thm \"if_False\";\n```\n``` 1740 val iff_conv_conj_imp = thm \"iff_conv_conj_imp\";\n```\n``` 1741 val iff = thm \"iff\"\n```\n``` 1742 val if_splits = thms \"if_splits\";\n```\n``` 1743 val if_True = thm \"if_True\";\n```\n``` 1744 val if_weak_cong = thm \"if_weak_cong\"\n```\n``` 1745 val imp_all = thm \"imp_all\";\n```\n``` 1746 val imp_cong = thm \"imp_cong\";\n```\n``` 1747 val imp_conjL = thm \"imp_conjL\";\n```\n``` 1748 val imp_conjR = thm \"imp_conjR\";\n```\n``` 1749 val imp_conv_disj = thm \"imp_conv_disj\";\n```\n``` 1750 val simp_implies_def = thm \"simp_implies_def\";\n```\n``` 1751 val simp_thms = thms \"simp_thms\";\n```\n``` 1752 val split_if = thm \"split_if\";\n```\n``` 1753 val the1_equality = thm \"the1_equality\"\n```\n``` 1754 val theI = thm \"theI\"\n```\n``` 1755 val theI' = thm \"theI'\"\n```\n``` 1756 val True_implies_equals = thm \"True_implies_equals\";\n```\n``` 1757 val nnf_conv = Simplifier.rewrite (HOL_basic_ss addsimps simp_thms @ @{thms \"nnf_simps\"})\n```\n``` 1758\n```\n``` 1759 *}\n```\n``` 1760\n```\n``` 1761 end\n```" ]
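The `nnf_simps` lemmas near the end of the theory assert ordinary propositional equivalences (De Morgan laws, implication elimination, double negation). As an illustrative cross-check outside Isabelle (not part of the theory itself; the helper names below are mine), a brute-force truth table in Python confirms each equivalence:

```python
from itertools import product

# imp(a, b) encodes HOL's --> on booleans.
imp = lambda a, b: (not a) or b

# Each pair states one nnf_simps equivalence: (left-hand side, right-hand side).
nnf_simps = [
    (lambda p, q: not (p and q),  lambda p, q: (not p) or (not q)),
    (lambda p, q: not (p or q),   lambda p, q: (not p) and (not q)),
    (lambda p, q: imp(p, q),      lambda p, q: (not p) or q),
    (lambda p, q: p == q,         lambda p, q: (p and q) or ((not p) and (not q))),
    (lambda p, q: not (p == q),   lambda p, q: (p and (not q)) or ((not p) and q)),
    (lambda p, q: not (not p),    lambda p, q: p),
]

# Exhaustively compare both sides on all four valuations of (P, Q).
for lhs, rhs in nnf_simps:
    for p, q in product([False, True], repeat=2):
        assert bool(lhs(p, q)) == bool(rhs(p, q))
print("all nnf_simps equivalences hold")
```

The exhaustive check over four valuations is exactly what `by blast+` certifies inside the theory, just restated operationally.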
https://www.calculatorbit.com/en/length/4-picometer-to-nautical-mile
# 4 Picometer to Nautical Mile Calculator

Result:

4 Picometer = 2.159827213822894e-15 Nautical Mile (NM)

Rounded: (nearest 4 digits)

4 Picometer is 2.159827213822894e-15 Nautical Mile (NM)

4 Picometer is 4e-9 mm

## How to Convert Picometer to Nautical Mile (Explanation)

- 1 picometer = 5.399568034557235e-16 NM (nearest 4 digits)
- 1 nautical-mile = 1852000000000000.5 pm (nearest 4 digits)

There are 5.399568034557235e-16 Nautical Mile in 1 Picometer. To convert Picometer to Nautical Mile, all you need to do is multiply the Picometer value by 5.399568034557235e-16.

In formulas, distance is denoted by d.

The distance d in Nautical Mile (NM) is equal to 5.399568034557235e-16 times the distance in picometer (pm):

### Equation

d (NM) = d (pm) × 5.399568034557235e-16

Formula for 4 Picometer (pm) to Nautical Mile (NM) conversion:

d (NM) = 4 pm × 5.399568034557235e-16 => 2.159827213822894e-15 NM

## How many Nautical Mile in a Picometer

One Picometer is equal to 5.399568034557235e-16 Nautical Mile:

1 pm = 1 pm × 5.399568034557235e-16 => 5.399568034557235e-16 NM

## How many Picometer in a Nautical Mile

One Nautical Mile is equal to 1852000000000000.5 Picometer:

1 NM = 1 NM / 5.399568034557235e-16 => 1852000000000000.5 pm

## picometer:

The picometer (symbol: pm) is a unit of length in the International System of Units (SI), equal to 0.000000000001 meter, i.e. 1×10^-12 meter or 1/1000000000000 meter. The picometer is equal to 1000 femtometres or 1/1000 nanometer. The picometer's length is so small that its application is almost entirely confined to particle physics, quantum physics, acoustics, and chemistry.

## nautical-mile:

The nautical mile (symbol: M, NM, nmi) is a non-SI unit of length accepted for use with the International System of Units (SI), equal to 1852 meters. The nautical mile is used in air, marine, and space navigation and for the definition of territorial waters. Historically, it was defined as the meridian arc length corresponding to one minute (1/60 of a degree) of latitude.

## Picometer to Nautical Mile Calculations Table

By following the formulas above we can prepare a Picometer to Nautical Mile chart.

| Picometer (pm) | Nautical Mile (NM) |
| --- | --- |
| 1 | 5.399568034557235e-16 |
| 2 | 1.079913606911447e-15 |
| 3 | 1.6198704103671705e-15 |
| 4 | 2.159827213822894e-15 |
| 5 | 2.6997840172786176e-15 |
| 6 | 3.239740820734341e-15 |
| 7 | 3.7796976241900646e-15 |
| 8 | 4.319654427645788e-15 |
| 9 | 4.8596112311015116e-15 |

Nearest 4 digits

## Convert from Picometer to other units

Here are some quick links to convert 4 Picometer to other length units.

## Convert to Picometer from other units

Here are some quick links to convert other length units to Picometer.

## FAQs About Picometer and Nautical Mile

Converting from Picometer to Nautical Mile or Nautical Mile to Picometer sometimes gets confusing.

### Is 5.399568034557235e-16 Nautical Mile in 1 Picometer?

Yes, 1 Picometer has 5.399568034557235e-16 (nearest 4 digits) Nautical Mile.

### What is the symbol for Picometer and Nautical Mile?

The symbol for Picometer is pm and the symbol for Nautical Mile is NM.

### How many Picometer makes 1 Nautical Mile?

1852000000000000.5 Picometer is equal to 1 Nautical Mile.

### How many Nautical Mile in 4 Picometer?

4 Picometer has 2.159827213822894e-15 Nautical Mile.

### How many Nautical Mile in a Picometer?

1 Picometer has 5.399568034557235e-16 (nearest 4 digits) Nautical Mile.
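The conversion above is a single multiplication. A minimal sketch in Python (the function names are mine, not from this site), derived from the exact definitions 1 NM = 1852 m and 1 pm = 1e-12 m:

```python
PM_PER_METER = 1e12      # 1 m = 1e12 pm (definition of the picometer)
METERS_PER_NM = 1852.0   # definition of the international nautical mile

def pm_to_nm(d_pm: float) -> float:
    """Convert a distance in picometers to nautical miles."""
    return d_pm / PM_PER_METER / METERS_PER_NM

def nm_to_pm(d_nm: float) -> float:
    """Convert a distance in nautical miles to picometers."""
    return d_nm * METERS_PER_NM * PM_PER_METER

print(pm_to_nm(4))   # ~2.1598e-15 NM, matching the result above
print(nm_to_pm(1))   # ~1.852e+15 pm per nautical mile
```

Note that the exact factor is 1 NM = 1.852e15 pm; the trailing ".5" in the site's 1852000000000000.5 is floating-point round-off from dividing by the NM-per-pm factor.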
https://www.piping-designer.com/index.php/properties/dimensionless-numbers/172-schmidt-number
# Schmidt Number

Posted in Dimensionless Numbers

The Schmidt number, abbreviated as Sc, is a dimensionless number used in fluid mechanics and heat transfer to characterize the ratio of momentum diffusivity (kinematic viscosity) to mass diffusivity. The Schmidt number is particularly important in problems involving mass transfer, such as the diffusion of a solute in a solvent.

### Schmidt number categorizes fluids into different regimes

- Low Sc < 1  -  Mass diffusivity exceeds momentum diffusivity. The fluid is characterized by rapid diffusion of mass relative to momentum.
- High Sc > 1  -  Momentum diffusivity exceeds mass diffusivity. The fluid is characterized by slower diffusion of mass compared to momentum.

In many cases, for common fluids and solutes, the Schmidt number can be approximated as a constant, simplifying calculations involving mass transfer. For heat transfer in a fluid, the Prandtl number plays the analogous role, characterizing the ratio of momentum diffusivity to thermal diffusivity; the Schmidt number does the same for mass transfer.

## Schmidt Number formula

$$\large{ Sc = \frac{ \nu }{ D_m } }$$     (Schmidt Number)

$$\large{ \nu = Sc \; D_m }$$

$$\large{ D_m = \frac{ \nu }{ Sc } }$$

| Symbol | English | Metric |
| --- | --- | --- |
| $$\large{ Sc }$$ = Schmidt number | $$\large{dimensionless}$$ | $$\large{dimensionless}$$ |
| $$\large{ \nu }$$ (Greek symbol nu) = kinematic viscosity | $$\large{\frac{ft^2}{sec}}$$ | $$\large{\frac{m^2}{s}}$$ |
| $$\large{ D_m }$$ = mass diffusivity | $$\large{\frac{ft^2}{sec}}$$ | $$\large{\frac{m^2}{s}}$$ |

## Schmidt Number formula

$$\large{ Sc = \frac{ \mu }{ \rho \; D_m } }$$     (Schmidt Number)

$$\large{ \mu = Sc \; \rho \; D_m }$$

$$\large{ \rho = \frac{ \mu }{ Sc \; D_m } }$$

$$\large{ D_m = \frac{ \mu }{ Sc \; \rho } }$$

| Symbol | English | Metric |
| --- | --- | --- |
| $$\large{ Sc }$$ = Schmidt number | $$\large{dimensionless}$$ | $$\large{dimensionless}$$ |
| $$\large{ \mu }$$ (Greek symbol mu) = dynamic viscosity | $$\large{\frac{lbf-sec}{ft^2}}$$ | $$\large{ Pa-s }$$ |
| $$\large{ \rho }$$ (Greek symbol rho) = density | $$\large{\frac{lbm}{ft^3}}$$ | $$\large{\frac{kg}{m^3}}$$ |
| $$\large{ D_m }$$ = mass diffusivity | $$\large{\frac{ft^2}{sec}}$$ | $$\large{\frac{m^2}{s}}$$ |

Tags: Flow
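Both formulas above are the same computation, since kinematic viscosity is dynamic viscosity over density (ν = μ/ρ). A short sketch in Python (the function names and the round-number example values are my own illustrations, not data from this page):

```python
def schmidt_kinematic(nu: float, d_m: float) -> float:
    """Sc from kinematic viscosity nu and mass diffusivity D_m (same units, e.g. m^2/s)."""
    return nu / d_m

def schmidt_dynamic(mu: float, rho: float, d_m: float) -> float:
    """Sc from dynamic viscosity mu [Pa-s], density rho [kg/m^3], and D_m [m^2/s]."""
    return mu / (rho * d_m)

# Illustrative round numbers (order of magnitude for a small solute in water):
mu, rho, d_m = 1.0e-3, 1.0e3, 1.0e-9
nu = mu / rho  # kinematic viscosity = dynamic viscosity / density

sc_a = schmidt_kinematic(nu, d_m)
sc_b = schmidt_dynamic(mu, rho, d_m)
print(sc_a, sc_b)  # both ~1000: Sc >> 1, so mass diffuses far more slowly than momentum
```

The two results agree by construction, which is a quick sanity check that a given data set's μ, ρ, and ν values are consistent before computing Sc.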
http://www.rwgrayprojects.com/synergetics/s11/p0600.html
[ "", null, "1106.00", null, "Inside-Outing of Tetrahedron in Transformational Projection Model", null, "1106.10", null, "Complementary Negative Tetrahedron: The rod ends can be increased beyond the phase that induced the 180-degree triangle, and the vertexes of the steel-spring surface triangle can go on to be increased beyond 180 degrees each, and thus form a negative triangle. This is to say that the original tetrahedron formed between the three vertexes of the spherical triangle on the sphere's surface__with the center of the sphere as the fourth point__will have flattened to one plane when the vertexes are at 180 degrees; at that moment the tetrahedron is a hemisphere. By lengthening the radii again and increasing the triangle's original \"interior\" angles, the tetrahedron will turn itself inside out. In effect, what seems to be a \"small,\" i.e., an only apparently ``plane\" equilateral triangle must always be a small equilateral spherical triangle of a very big sphere, and it is always complemented by the negative triangle completing the balance of the surface of the inherent sphere respective to the three lines and three vertexes of the triangle.", null, "1106.11", null, "No triangular surface is conceivable occurring independently of its inherent sphere, as there is no experimentally demonstrable flat surface plane in Universe reaching outward laterally in all directions to infinity; although this has been illusionarily accepted as \"obvious\" by historical humanity, it is contradictory to experience. The surface of any system must return to itself in all directions and is most economically successful in doing so as an approximate true sphere that contains the most volume with the least surface. Nature always seeks the most economical solutions__ergo, the sphere is normal to all systems experience and to all experiential, i.e., operational consideration and formulation. 
The construction of a triangle involves a surface, and a curved surface is most economical and experimentally satisfactory. A sphere is a closed surface, a unitary finite surface. Planes are never finite. Once a triangle is constructed on the surface of a sphere__because a triangle is a boundary line closed upon itself__the finitely closed boundary lines of the triangle automatically divide the unit surface of the sphere into two separate surface areas. Both are bounded by the same three great-circle arcs and their three vertexial links: this is the description of a triangle. Therefore, both areas are true triangles, yet with common edge boundaries. It is impossible to construct one triangle alone. In fact, four triangles are inherent to the oversimplified concept of the construction of \"one\" triangle. In addition to the two complementary convex surface triangles already noted, there must of necessity be two complementary concave triangles appropriate to them and occupying the reverse, or inside, of the spherical surface. Inasmuch as convex and concave are opposites, one reflectively concentrating radiant energy and the other reflectively diffusing such incident radiation; therefore they cannot be the same. Therefore, a minimum of four triangles is always induced when any one triangle is constructed, and which one is the initiator or inducer of the others is irrelevant. The triangle initiator is an inadvertent but inherent tetrahedron producer; it might be on the inside constructing its triangle on some cosmic sphere, or vice versa.", null, "1106.12", null, "It might be argued that inside and outside are the same, but this is not so. While there is an interminable progression of insides within insides in Experience Universe, there is only one outside comprehensive to all insides. So they are not the same, and the mathematical fact remains that four is the minimum of realizable triangles that may be constructed if any are constructed. 
But that is not all, for it is also experimentally disclosed that not only does the construction of one triangle on the surface of the sphere divide the total surface into two finite areas each of which is bound by three edges and three angles__ergo, by two triangles__but these triangles are on the surface of a system whose unity of volume was thereby divided into two centrally angled tetrahedra, because the shortest lines on sphere surfaces are great circles, and great circles are always formed on the surface of a sphere by planes going through the center of the sphere, which planes of the three-greatcircle-arc-edged triangle drawn on the surface automatically divide the whole sphere internally into two spherical tetrahedra, each of which has its four triangles__ergo, inscribing one triangle \"gets you eight,\" like it or not. And each of those eight triangles has its inside and outside, wherefore inscribing one triangle, which is the minimum polygon, like \"Open Sesame,\" inadvertently gets you 16 triangles. And that is not all: the sphere on which you scribed is a system and not the whole Universe, and your scribing a triangle on it to stake out your ``little area on Earth\" not only became 16 terrestrial triangles but also induced the remainder of Universe outside the system and inside the system to manifest their invisible or nonunitarily conceptual ``minimum inventorying'' of ``the rest of Universe other than Earth,'' each of which micro and macro otherness system integrity has induced an external tetrahedron and an internal tetrahedron, each with 16 triangles for a cosmic total of 64 (see Sec. 401.01).", null, "1106.20", null, "Inside-Outing: Inside-outing means that any one of the four vertexes of the originally considered tetrahedron formed on the transformational projection model's triangle, with its spherical center, has passed through its opposite face. 
The minima and the maxima of the spherical equiside and -angle triangle formed by the steel springs is seen to be in negative triangular complement to the smallest 60-degree+ triangle. The vertexes of even the maxima or minima are something greater than 60 degrees each__ because no sphere is large enough to be flat__or something less than 300 degrees each.", null, "1106.21", null, "The sphere is at its smallest when the two angles of complement are each degrees on either side of the three-arc boundary, and the minima-maxima of the triangles are halfway out of phase with the occurrence of the minima and maxima of the sphere phases.", null, "1106.22", null, "No sphere large enough for a flat surface to occur is imaginable. This is verified by modern physics' experimentally induced abandonment of the Greeks' definition of a sphere, which absolutely divided Universe into all Universe outside and all Universe inside the sphere, with an absolute surface closure permitting no traffic between the two and making inside self-perpetuating to infinity complex__ergo, the first locally perpetual- motion machine, completely contradicting entropy. Since physics has found no solids or impervious continuums or surfaces, and has found only finitely separate energy quanta, we are compelled operationally to redefine the spheric experience as an aggregate of events approximately equidistant in a high-frequency aggregate in almost all directions from one only approximate event (see Sec. 224.07). Since nature always interrelates in the most economical manner, and since great circles are the shortest distances between points on spheres, and since chords are shorter distances than arcs, then nature must interrelate the spheric aggregated events by the chords, and chords always emerge to converge; ergo, converge convexly around each spheric system vertex; ergo, the sums of the angles around the vertexes of spheric systems never add to 360 degrees. 
Spheres are high-frequency, geodesic polyhedra (see Sec. 1022.10).", null, "1106.23", null, "Because (a) all radiation has a terminal speed, ergo an inherent limit reach; because (b) the minimum structural system is a tetrahedron; because (c) the unit of energy is the tetrahedron with its six-degrees-of-minimum-freedoms vector edges; because (d) the minimum radiant energy package is one photon; because (e) the minimum polar triangle__ and its tetrahedron's contraction__is limited by the maximum reach of its three interior radii edges of its spherical tetrahedron; and because (f) physics discovered experimentally that the photon is the minimum radiation package; therefore we identify the minimum tetrahedron photon as that with radius = c, which is the speed of light: the tetrahedron edge of the photon becomes unit radius = frequency limit. (See Sec. 541.30.)", null, "1106.24", null, "The transformational projection model coupled with the spheric experience data prove that a finite minima and a finite maxima do exist, because a flat is exclusively unique to the area confined within a triangle's three points. The almost flat occurs at the inflection points between spheric systems' inside-outings and vice versa, as has already been seen at the sphere's minima size; and that at its maxima, the moment of flatness goes beyond approximate flatness as the minima phase satisfies the four-triangle minima momentum of transformation, thus inherently eliminating the paradox of static equilibrium concept of all Universe subdivided into two parts: that inside of a sphere and that outside of it, the first being finite and the latter infinite. 
The continual transforming from inside out to outside in, finitely, is consistent with dynamic experience.", null, "1106.25", null, "Every great circle plane is inherently two spherical segment tetrahedra of zero altitude, base-to-base.", null, "1106.30", null, "Inside-Outing of Spheres: When our model is in its original condition of having its springs all flat (a dynamic approximation) and in one plane, in which condition all the rods are perpendicular to that plane, the rods may be gathered to a point on the opposite side of the spring-steel strap to that of the first gathering, and thus we see the original sphere turned inside out. This occurs as a sphere of second center, which, if time were involved, could be the progressive point of the observer and therefore no \"different\" point.", null, "1106.31", null, "Considering Universe at minimum unity of two, two spheres could then seem to be inherent in our model. The half-out-of-phaseness of the sphere maxima and minima, with the maxima-minima of the surface triangles, find the second sphere's phase of maxima in coincidence with the first's minima. As the two overlap, the flat phase of the degree triangles of the one sphere's minima phase is the flat phase of the other sphere's maxima. The maxima sphere and the minima sphere, both inside-outing, tend to shuttle on the same polar axis, one of whose smaller polar triangles may become involutional while the other becomes evolutional as the common radii of the two polar tetrahedra refuse convergence at the central sphere. We have learned elsewhere (see Sec. 517) that two or more lines cannot go through the same point at the same time; thus the common radii of the two polar tetrahedra must twistingly avert central convergence, thus accomplishing central core involutional-evolutional, outside-inside-outside, cyclically transformative travel such as is manifest in electromagnetic fields. All of this is implicit in the projection model's transformational phases. 
There is also disclosed here the possible intertransformative mechanism of the interpulsating binary stars.

1107.00 Transformational Projection Model with Rubber-Band Grid

1107.10 Construction: Again returning the model to the condition of approximate dynamic inflection at maxima-minima of the triangle -- i.e., to their approximately flat phase of "one" most-obvious triangle of flat spring-steel strips -- in which condition the rods are all perpendicular to the surface plane of the triangle and are parallel to one another in three vertical planes of rod rows in respect to the triangle's plane. At this phase, we apply a rubber-band grid of three-way crossings. We may consider the rubber bands of ideal uniformity of cross section and chemical composition, in such a manner as to stretch them mildly in leading them across the triangle surface between the points uniformly spaced in rows, along the spring-steel strap's midsurface line through each of which the rods were perpendicularly inserted. The rubber bands are stretched in such a manner that each rubber band leads from a point distant from its respective primary vertex of the triangle to a point on the nearest adjacent edge, that is, the edge diverging from the same nearest vertex, this second point being double the distance along its edge from the vertex that the first taken point is along its first considered edge. Assuming no catenary sag or drift, the "ideal" rubber bands of no weight then become the shortest distances between the edge points so described.
Every such possible connection is established, and all the tensed, straight rubber bands will lie in one plane because, at the time, the springs are flat -- and that one plane is the surface of the main spring-steel triangle of the model.

1107.11 The rubber bands will be strung in such a way that every point along the steel triangle mid-edge line penetrated by the rods shall act as an origin, and every second point shall become also the recipient for such a linking as was described above, because each side feeds to the other sides. The "feeds" must be shared at a rate of one goes into two. Each recipient point receives two lines and also originates one; therefore, along each edge, every point is originating or feeding one vertical connector, while every other, or every second point receives two obliquely impinging connector lines in addition to originating one approximately vertically fed line of connection.

1107.12 (Fig. 1107.12) The edge pattern, then, is one of uniform module divisions separated by points established by alternating convergences with it: first, the convergence of one connector line; then, the convergence of three connector lines; and repeat.

1107.13 This linking of the three sides will provide a rubber-band grid of three-way crossings of equi-side and -angle triangular interstices, except along the edges of the main equiangle triangle formed by the spring-steel pieces, where half-equilateral triangles will occur, as the outer steel triangle edges run concurrently through vertexes to and through midpoints of opposite sides, and thence through the next opposite vertexes again of each of the triangular interstices of the rubber-band grid interacting with the steel edges of the main triangle.

1107.14 The rubber-band, three-way, triangular subgridding of the equimodule spring-steel straps can also be accomplished by bands stretched approximately parallel to the steel-strap
triangle's edges, connecting the respective modular subdivisions of the main steel triangle. In this case, the rubber-band crossings internal to the steel-band triangle may be treated as is described in respect to the main triangle subtriangular gridding by rubber bands perpendicular to the sides.

1107.20 Transformation: Aggregation of Additional Rods: More steel rods (in addition to those inserted perpendicularly through the steel-band edges of the basic triangle model) may now be inserted -- also perpendicularly -- through a set of steel grommets attached at (and centrally piercing through) each of the points of the three-way crossings of the rubber bands (internal to the big triangle of steel) in such a manner that the additional rods thus inserted through the points of three-way crossings are each perpendicular to the now flat-plane phase of the big basic articulatable steel triangle, and therefore perpendicular to, and coincident with, each of the lines crossing within the big steel triangle face. The whole aggregate of rods, both at edges and at internal intersections, will now be parallel to one another in the three unique sets of parallel planes that intersect each other at 60 degrees of convergence. The lines of the intersecting planes coincide with the axes of the rods; i.e., the planes are perpendicular to the plane of the basic steel triangle and the lines of their mutual intersections are all perpendicular to the basic plane and each corresponds to the axis of one of the rods. The whole forms a pattern of triangularly bundled, equiangular, equilateral-sectioned, parallel-prism-shaped tube spaces.

1107.21 (Fig. 1107.21) Let us now gather together all the equally down-extending lengths of rod ends to one point. The Greeks defined a sphere as a surface equidistant in all directions from one point.
All the points where the rods penetrate the steel triangle edges or the three-way-intersecting elastic rubber-band grid will be equidistant from one common central point to which the rod ends are gathered -- and thus they all occur in a spherical, triangular portion of the surface of a common sphere -- specifically, within the lesser surfaced of the two spherical triangles upon that sphere described by the steel arcs. Throughout the transformation, all the rods continue their respective perpendicularities to their respective interactions of the three-way crossings of the flexible grid lines of the basic steel triangle's inherently completable surface.

1107.22 If the frequency of uniform spacing of the perpendicularly -- and equidistantly -- penetrating and extending rods is exquisitely multiplied, and the uniform intervals are thus exquisitely shortened, then when the rod ends are gathered to a common point opposite either end of the basic articulatable steel-band triangle, the gathered ends will be closer together than their previous supposedly infinitely close parallel positioning had permitted, and the opposite ends will be reciprocally thinned out beyond their previous supposedly infinite disposition. Both ends of the rods are in finite condition -- beyond infinite -- and the parallel phase (often thought of as infinite) is seen to be an inflection phase between two phases of the gathering of the ends, alternately, to one or the other of the two spherical centers. The two spherical centers are opposite either the inflection or flat phase of the articulating triangle faces of the basic articulatable triangle of our geodesics transformational projection model.
[Fig. 1107.12: http://www.rwgrayprojects.com/synergetics/s11/figs/t0712.gif]
[Fig. 1107.21: http://www.rwgrayprojects.com/synergetics/s11/figs/t0721.gif]
https://man.freebsd.org/cgi/man.cgi?query=remquo&sektion=3&format=html
# FreeBSD Manual Pages

```
REMAINDER(3)           FreeBSD Library Functions Manual         REMAINDER(3)

NAME
     remainder, remainderf, remainderl, remquo, remquof, remquol -- minimal
     residue functions

LIBRARY
     Math Library (libm, -lm)

SYNOPSIS
     #include <math.h>

     double
     remainder(double x, double y);

     float
     remainderf(float x, float y);

     long double
     remainderl(long double x, long double y);

     double
     remquo(double x, double y, int *quo);

     float
     remquof(float x, float y, int *quo);

     long double
     remquol(long double x, long double y, int *quo);

DESCRIPTION
     remainder(), remainderf(), remainderl(), remquo(), remquof(), and
     remquol() return the remainder r := x - n*y where n is the integer
     nearest the exact value of x/y; moreover if |n - x/y| = 1/2 then n is
     even.  Consequently the remainder is computed exactly and |r| <= |y|/2.
     But attempting to take the remainder when y is 0 or x is +-infinity is
     an invalid operation that produces a NaN.

     The remquo(), remquof(), and remquol() functions also store the last k
     bits of n in the location pointed to by quo, provided that n exists.
     The number of bits k is platform-specific, but is guaranteed to be at
     least 3.

SEE ALSO
     fmod(3), ieee(3), math(3)

STANDARDS
     The remainder(), remainderf(), remainderl(), remquo(), remquof(), and
     remquol() routines conform to ISO/IEC 9899:1999 ("ISO C99").  The
     remainder is as defined in IEEE Std 754-1985.

HISTORY
     The remainder() and remainderf() functions appeared in 4.3BSD and
     FreeBSD 2.0, respectively.  The remquo() and remquof() functions were
```
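The semantics above (r = x - n*y with n the integer nearest x/y, ties to even) can be checked from Python, whose `math.remainder` implements the same IEEE 754 remainder operation. C's quotient output has no direct Python equivalent, so this sketch recovers n with `round()`, which also rounds ties to even; unlike C, which only guarantees the low bits of n, it returns n in full:

```python
import math

def remquo(x: float, y: float) -> tuple[float, int]:
    """Python sketch of C's remquo(): returns (r, n) where r = x - n*y
    and n is the integer nearest x/y (ties rounded to even)."""
    n = round(x / y)              # nearest integer, ties to even
    r = math.remainder(x, y)      # IEEE 754 remainder, |r| <= |y|/2
    return r, n

r, n = remquo(7.0, 3.0)   # 7 = 2*3 + 1, so r = 1.0, n = 2
```

As with the C functions, y = 0 or an infinite x is invalid; `math.remainder` signals this by raising `ValueError` rather than returning a NaN.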
https://www.meritnation.com/karnataka-class-7/math/rd-sharma-2019-2020/data-handling-iii-construction-of-bar-graphs/textbook-solutions/97_1_3522_10331_24.5_44985
Rd Sharma 2019 2020 Solutions for Class 7 Math Chapter 24 Data Handling III Construction Of Bar Graphs are provided here with simple step-by-step explanations. These solutions are extremely popular among Class 7 students and come in handy for quickly completing homework and preparing for exams. All questions and answers from the Rd Sharma 2019 2020 Book of Class 7 Math Chapter 24 are provided here for you for free. You will also love the ad-free experience on Meritnation's Rd Sharma 2019 2020 Solutions. All Rd Sharma 2019 2020 Solutions for Class 7 Math are prepared by experts and are 100% accurate.

#### Question 1:

Two hundred students of class VI and VII were asked to name their favourite colours so as to decide upon what should be the colour of their school house. The results are shown in the following table.

| Colour: | Red | Green | Blue | Yellow | Orange |
|---|---|---|---|---|---|
| Number of students: | 43 | 19 | 55 | 49 | 34 |

Represent the given data on a bar graph.
(i) Which is the most preferred colour and which is the least?
(ii) How many colours are there in all?

Mark the horizontal axis OX as "Name of the Colour" and the vertical axis OY as "Number of Students".

1. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

2. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 small division to represent 10 students.

(i) The most preferred colour is blue and the least preferred is green.
(ii) In all, there are 5 colours.

#### Question 2:

Following data gives total marks (out of 600) obtained by six children of a particular class.

| Student: | Ajay | Bali | Dipti | Faiyaz | Gotika | Hari |
|---|---|---|---|---|---|---|
| Marks obtained: | 450 | 500 | 300 | 360 | 400 | 540 |

Represent the data by a bar graph.

1.
Mark the horizontal axis OX as "Name of the Students" and the vertical axis OY as "Marks Obtained".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 small division to represent 100 marks.

#### Question 3:

Number of children in six different classes are given below. Represent the data on a bar graph.

| Class: | V | VI | VII | VIII | IX | X |
|---|---|---|---|---|---|---|
| Number of children: | 135 | 120 | 95 | 100 | 90 | 80 |

(i) How do you choose the scale?
(ii) Which class has the maximum number of children?
(iii) Which class has the minimum number of children?

1. Mark the horizontal axis OX as "Class" and the vertical axis OY as "Number of Children".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 40 children.

(i) We choose 1 big division to represent 40 children.
(ii) The maximum number of students are in class V.
(iii) The minimum number of students are in class X.

#### Question 4:

The performance of students in 1st term and 2nd term is as given below. Draw a double bar graph choosing an appropriate scale and answer the following:

| Subject: | English | Hindi | Maths | Science | S.Science |
|---|---|---|---|---|---|
| 1st term: | 67 | 72 | 88 | 81 | 73 |
| 2nd term: | 70 | 65 | 95 | 85 | 75 |

(i) In which subject have the children improved their performance the most?
(ii) Has the performance gone down in any subject?

We choose 1 small division to represent 1 mark in the graph.

1. Mark the horizontal axis OX as "Subject" and the vertical axis OY as "Marks".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3.
Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 10 marks.

(i) In Maths, the students showed their greatest improvement.
(ii) Yes; the performance has gone down in Hindi.

#### Question 5:

Consider the following data gathered from a survey of a colony:

| Favourite Sport: | Cricket | Basket-Ball | Swimming | Hockey | Athletics |
|---|---|---|---|---|---|
| Watching | 1240 | 470 | 510 | 423 | 250 |
| Participating | 620 | 320 | 320 | 250 | 105 |

Draw a double bar graph choosing an appropriate scale. What do you infer from the bar graph?
(i) Which sport is most popular?
(ii) What is more preferred, watching or participating in sports?

1. Mark the horizontal axis OX as "Favourite Sports" and the vertical axis OY as "Number of People".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 2 big divisions to represent 400 people.

(i) Cricket is the most popular sport.
(ii) Watching is preferred over participating.

#### Question 6:

The production of saleable steel in some of the steel plants of our country during 1999 is given below:

| Plant | Bhilai | Durgapur | Rourkela | Bokaro |
|---|---|---|---|---|
| Production (in thousand tonnes) | 160 | 80 | 200 | 150 |

Construct a bar graph to represent the above data on a graph paper by using the scale 1 big division = 20 thousand tonnes.

1. Mark the horizontal axis OX as "Name of the Steel Plant" and the vertical axis OY as "Production (in thousand tonnes)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3.
Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 20 thousand tonnes.

#### Question 7:

The following data gives the number (in thousands) of applicants registered with an Employment Exchange during 1995-2000:

| Year | 1995 | 1996 | 1997 | 1998 | 1999 | 2000 |
|---|---|---|---|---|---|---|
| Number of applicants registered (in thousands) | 18 | 20 | 24 | 28 | 30 | 34 |

Construct a bar graph to represent the above data.

1. Mark the horizontal axis OX as "Years" and the vertical axis OY as "Number of Applicants Registered (in thousands)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 4 thousand applicants.

#### Question 8:

The following table gives the route length (in thousand kilometres) of the Indian Railways in some of the years:

| Year | 1960-61 | 1970-71 | 1980-81 | 1990-91 | 2000-2001 |
|---|---|---|---|---|---|
| Route length (in thousand kilometres) | 56 | 60 | 61 | 74 | 98 |

Represent the above data with the help of a bar graph.

1. Mark the horizontal axis OX as "Years" and the vertical axis OY as "Route Length (in thousand kilometres)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 1000 km.

#### Question 9:

The following data gives the amount of loans (in crores of rupees) disbursed by a bank during some years:

| Year | 1992 | 1993 | 1994 | 1995 | 1996 |
|---|---|---|---|---|---|
| Loan (in crores of rupees) | 28 | 33 | 55 | 55 | 80 |

(i) Represent the above data with the help of a bar graph.
(ii) With the help of the bar graph, indicate the year in which the amount of loan is not increased over that of the preceding year.

1.
Mark the horizontal axis OX as "Years" and the vertical axis OY as "Loan (in crores of rupees)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 10 crores of rupees.

In 1995, the loan amount was not increased over that of the preceding year.

#### Question 10:

The following table shows the interest paid by a company (in lakhs):

| Year | 1995-96 | 1996-97 | 1997-98 | 1998-99 | 1999-2000 |
|---|---|---|---|---|---|
| Interest (in lakhs of rupees) | 20 | 25 | 15 | 18 | 30 |

Draw the bar graph to represent the above information.

1. Mark the horizontal axis OX as "Years" and the vertical axis OY as "Interest (in lakhs of rupees)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 5 lakh rupees.

#### Question 11:

The following data shows the average age of men in various countries in a certain year:

| Country | India | Nepal | China | Pakistan | U.K. | U.S.A. |
|---|---|---|---|---|---|---|
| Average age (in years) | 55 | 52 | 60 | 50 | 70 | 75 |

Represent the above information by a bar graph.

1. Mark the horizontal axis OX as "Countries" and the vertical axis OY as "Average Age of Men (in years)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph.
Here, we choose 1 big division to represent 10 years.

#### Question 12:

The following data gives the production of foodgrains (in thousand tonnes) for some years:

| Year | 1995 | 1996 | 1997 | 1998 | 1999 | 2000 |
|---|---|---|---|---|---|---|
| Production (in thousand tonnes) | 120 | 150 | 140 | 180 | 170 | 190 |

Represent the above data with the help of a bar graph.

1. Mark the horizontal axis OX as "Years" and the vertical axis OY as "Production of foodgrains (in thousand tonnes)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 20 thousand tonnes.

#### Question 13:

The following data gives the amount of manure (in thousand tonnes) manufactured by a company during some years:

| Year | 1992 | 1993 | 1994 | 1995 | 1996 | 1997 |
|---|---|---|---|---|---|---|
| Manure (in thousand tonnes) | 15 | 35 | 45 | 30 | 40 | 20 |

(i) Represent the above data with the help of a bar graph.
(ii) Indicate with the help of the bar graph the year in which the amount of manure manufactured by the company was maximum.
(iii) Choose the correct alternative:
The consecutive years during which there was maximum decrease in manure production are:
(a) 1994 and 1995
(b) 1992 and 1993
(c) 1996 and 1997
(d) 1995 and 1996

1. Mark the horizontal axis OX as "Years" and the vertical axis OY as "Manure (in thousand tonnes)".

2. Along the horizontal axis OX, choose bars of uniform (equal) width, with a uniform gap between them.

3. Choose a suitable scale to determine the heights of the bars, according to the space available for the graph. Here, we choose 1 big division to represent 5 thousand tonnes.

(ii) In the year 1994, the amount of manure manufactured by the company was maximum.
(iii) (c) 1996 and 1997.

View NCERT Solutions for all chapters of Class 7
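Answers read off a bar graph, such as those asked in Question 1, can be cross-checked directly from the data table; a minimal Python check using the Question 1 survey figures:

```python
# Question 1: favourite-colour survey of 200 students of classes VI and VII
votes = {"Red": 43, "Green": 19, "Blue": 55, "Yellow": 49, "Orange": 34}

most_preferred = max(votes, key=votes.get)    # colour with the tallest bar
least_preferred = min(votes, key=votes.get)   # colour with the shortest bar
num_colours = len(votes)                      # number of bars in the graph
total_students = sum(votes.values())          # sanity check: should be 200
```

This confirms the stated answers: blue is the most preferred colour, green the least, and there are 5 colours in all.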
https://www.teachoo.com/8986/2862/Question-16-(Or-1st)/category/CBSE-Class-10-Sample-Paper-for-2019-Boards/
Question 16 (OR 1st question)

The points A(1, –2), B(2, 3), C(k, 2) and D(–4, –3) are the vertices of a parallelogram. Find the value of k.

CBSE Class 10 Sample Paper for 2019 Boards

Transcript

Question 16 (OR 1st question) The points A(1, –2), B(2, 3), C(k, 2) and D(–4, –3) are the vertices of a parallelogram. Find the value of k.

Let's first draw the figure. We know that the diagonals of a parallelogram bisect each other. So, O is the mid-point of AC and O is the mid-point of BD. (1 mark)

Finding mid-point of AC: We have to find the x-coordinate of O.
x-coordinate of O = (x1 + x2)/2, where x1 = 1, x2 = k.
Putting values: x-coordinate of O = (1 + k)/2 = (k + 1)/2 …(1) (1/2 mark)

Finding mid-point of BD: We have to find the x-coordinate of O.
x-coordinate of O = (x1 + x2)/2, where x1 = 2, x2 = –4.
Putting values: x-coordinate of O = (2 + (–4))/2 = (–2)/2 = –1 …(2) (1/2 mark)

Comparing (1) & (2):
(k + 1)/2 = –1
k + 1 = –1 × 2
k + 1 = –2
k = –2 – 1
k = –3

Hence, k = –3. (1 mark)
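The midpoint comparison can be verified numerically. A small Python sketch using the given vertices, where the bisecting-diagonals property rearranges to C = B + D − A:

```python
# Vertices A, B, D of parallelogram ABCD from the question; C = (k, 2)
A, B, D = (1, -2), (2, 3), (-4, -3)

# Diagonals of a parallelogram bisect each other:
# midpoint(AC) = midpoint(BD)  =>  (A + C)/2 = (B + D)/2  =>  C = B + D - A
k = B[0] + D[0] - A[0]          # x-coordinate: 2 + (-4) - 1 = -3
C = (k, B[1] + D[1] - A[1])     # y-coordinate: 3 + (-3) - (-2) = 2, matching C(k, 2)
```

The recovered y-coordinate coming out as 2 confirms the vertices are consistent with a parallelogram.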
[ null, "https://d1avenlh0i1xmr.cloudfront.net/9500e080-a65b-4029-844f-1a65d09e4f57/slide50.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/a26cca33-ee99-433d-acbb-88d34b13abbb/slide51.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/76312b09-1d22-4461-b443-49986fe28de3/slide52.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/7ab509fc-8984-4065-b06a-538bb5652051/slide53.jpg", null, "https://delan5sxrj8jj.cloudfront.net/misc/Davneet+Singh.jpg", null ]
https://www.meritnation.com/ask-answer/question/a-bag-contains-rupee-50-paise-and-25-paise-coins-in-the-rati/ratio-and-proportion/3827838
[ "# a bag contains rupee, 50 paise and 25 paise coins in the ratio of 5:6:8. if the total amount is Rs.420, find the number of coins.\n\nLet the numbers of coins be 5x, 6x and 8x.\n\nValue in rupees = 5x × 1 + 6x × 0.50 + 8x × 0.25 = 5x + 3x + 2x = 10x\n\n10x = 420, so x = 42\n\nwhich is\n\n210 coins of 1 rupee\n\n252 coins of 50 paise\n\n336 coins of 25 paise\n\nTOTAL of 798 coins\n\n• 1\nWhat are you looking for?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7345788,"math_prob":0.9224944,"size":272,"snap":"2021-43-2021-49","text_gpt3_token_len":101,"char_repetition_ratio":0.20895523,"word_repetition_ratio":0.0,"special_character_ratio":0.43382353,"punctuation_ratio":0.097222224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9873565,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T14:54:25Z\",\"WARC-Record-ID\":\"<urn:uuid:55c18ccc-f0d1-42e0-a74d-a65d1a9ab87b>\",\"Content-Length\":\"25203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2637fd8d-df20-4ea4-bd28-191f8b7a81e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:54505347-94ec-4b31-833d-4e7d44f42c50>\",\"WARC-IP-Address\":\"13.32.208.104\",\"WARC-Target-URI\":\"https://www.meritnation.com/ask-answer/question/a-bag-contains-rupee-50-paise-and-25-paise-coins-in-the-rati/ratio-and-proportion/3827838\",\"WARC-Payload-Digest\":\"sha1:V2VO77WF72RZKGXJPP5MO3PD3FHCU2NV\",\"WARC-Block-Digest\":\"sha1:UMJ757XZED2U6POOXHAXIHTETZV2DMOE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358189.36_warc_CC-MAIN-20211127133237-20211127163237-00612.warc.gz\"}"}
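Under the standard textbook reading of the thread above — the *numbers* of coins are in the ratio 5:6:8 — the amount equation can be sketched as follows. The variable names are illustrative, and `Fraction` is used only to keep the paise values exact.

```python
from fractions import Fraction

ratio = (5, 6, 8)  # counts of Re 1, 50 paise and 25 paise coins
values = (Fraction(1), Fraction(1, 2), Fraction(1, 4))  # rupees per coin
total_amount = 420  # rupees

# Rupees contributed per unit of the ratio: 5*1 + 6*0.5 + 8*0.25 = 10.
per_unit = sum(r * v for r, v in zip(ratio, values))
x = Fraction(total_amount) / per_unit  # scale factor: 420 / 10 = 42

counts = [int(r * x) for r in ratio]
print(counts, sum(counts))  # [210, 252, 336] 798
```

These counts do satisfy the stated total: 210 × 1 + 252 × 0.50 + 336 × 0.25 = 420 rupees.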
http://emelugozo.ilubameb.ru/2019-12-12_calculate-bmi-men-kilograms.aspx
[ "# Calculate bmi men kilograms

## The BMI Formula - How To Calculate

This calculator provides bmi and the corresponding bmiforage percentile based on the cdc growth charts for children and teens (ages 2 through 19 years).   bmi percentile calculator for child and teen.This free body mass index calculator gives out the bmi value and categorizes bmi based on provided information. It includes reference charts and tables, from the world health organization as well as ce.This bmi calculator calculates your bmi (body mass index) by entering your height and weight.   note that this index should only be used to give you a general idea of where body weight should be, so mak.Use this calculator to check your body mass index (bmi) and find out if youre a healthy weight. Or you can use it to please enter weight. Switch to kg 94cm (37ins) or more for men; 80cm (31.Use the metric units (cm, kg) tab if you want to calculate your bmi in international metric units, or the us it is used for both men and women, age 18 or older.Calculate your bmi with the bmi calculator for women and men. With guidance on your body mass index (bmi) is determined by your body weight, size, and sex (male or female). To effectively lose weight i.

## BMI Calculator « BMI calculator BMI calculator

## BMI Calculator – OnCalc Calculator – Simple Calculator https://www.oncalc.com › bmi-calculator

Body mass index calculator calculate your body mass index to take care about the ages 12 and 16 have a higher bmi than males by 1. 0 kgm2 on average.To calculate bmi, body mass in kilograms is divided by the square of the body height. According to who, one has excessive weight if  diet websites for people living healthily and sportsmen provide the.Body mass index (bmi) is a guide to help people estimate their total body fat as a proportion of their total body weight.
Bmi can indicate risk for developing certain medical conditions including diabe.Body mass index (bmi) is a number calculated from a persons weight and height. Bmi is calculated by dividing your weight in kilograms by your height in.Bmi means body mass index. It is a measure of the proportion of body fat to it is equal to your weight in kilograms divided by your height in metres squared. Found that the lowest mortality risk f.Jun 16, 2019 your body mass index bmi is calculated by dividing your weight in kilograms by your height squared in metres. But no need to worry about that.

## BMI Calculator online - Body Mass Index calculator | bmi ... https://www.bmi-online.org

## Body Mass Index (BMI) Calculator - Imperial & Metric https://www.stonestokilograms.com › bmi-calculator

## Calculate your BMI, correctly rated according to age and sex https://www.smartbmicalculator.com

## BMI calculation for men, women and children, online and...

## BMI Calculator
Bmi: calculate bmi using meters & kilograms.Calculate your bmi and visualize your 3d body model webgl. Unit measurement: metric. Height: ft cm. Your bmi: body mass index scale. Underweight: le." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9041429,"math_prob":0.98186517,"size":25976,"snap":"2020-34-2020-40","text_gpt3_token_len":5876,"char_repetition_ratio":0.26262897,"word_repetition_ratio":0.30293798,"special_character_ratio":0.22289807,"punctuation_ratio":0.10442073,"nsfw_num_words":6,"has_unicode_error":false,"math_prob_llama3":0.9552001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T10:04:59Z\",\"WARC-Record-ID\":\"<urn:uuid:6b9cfa41-e61e-47f7-a352-33f56a3e0dec>\",\"Content-Length\":\"30854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:874c594e-6602-4d23-b11d-fa6906e0638e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5b1d4ac-2845-4b41-8a51-3f17660428e1>\",\"WARC-IP-Address\":\"37.46.132.131\",\"WARC-Target-URI\":\"http://emelugozo.ilubameb.ru/2019-12-12_calculate-bmi-men-kilograms.aspx\",\"WARC-Payload-Digest\":\"sha1:X4GNFFSDNUCXIR36UYMCRFYYJG3NKFIT\",\"WARC-Block-Digest\":\"sha1:7IGY3NTWPE3ULOTCLF7WPO2FVCQHQI5P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400191160.14_warc_CC-MAIN-20200919075646-20200919105646-00448.warc.gz\"}"}
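The page above repeats the same formula many times: BMI is weight in kilograms divided by height in metres squared. A minimal sketch follows; the category cut-offs are the widely used WHO adult bands, stated here as an assumption since the scraped page never lists them explicitly.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b):
    # Standard WHO adult bands (an assumption; not taken from this page).
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(70, 1.75)  # a 70 kg person, 1.75 m tall
print(round(b, 1), category(b))  # 22.9 normal weight
```

The same formula works with pounds and inches after converting (1 lb ≈ 0.4536 kg, 1 in = 0.0254 m), which is what the "imperial" tabs on these calculators do.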
https://amsi.org.au/ESA_Senior_Years/SeniorTopic4/4h/4h_2content_5.html
[ "## Content\n\n### Sampling from symmetric distributions\n\nWe found the mean and variance of the sample mean $$\\bar{X}$$ in the previous section. What more can be said about the distribution of $$\\bar{X}$$? We now consider the shape of the distribution of the sample mean.\n\nIt is instructive to consider random samples from a variety of parent distributions.\n\n#### Sampling from the Normal distribution\n\nWhen exploring the concept of a sample mean as a random variable, we used the example of sampling from a Normal random variable. Specifically, we considered taking a random sample of size $$n=10$$ from the $$\\mathrm{N}(30,7^2)$$ distribution. figure 5 illustrated an approximation to the distribution of $$\\bar{X}$$ in this case, by showing a histogram of 100 sample means from random samples of size $$n=10$$.\n\nTo approximate the distribution better, we take a lot more samples than 100. figure 6 shows a histogram of 100 000 sample means based on 100 000 random samples each of size $$n=10$$. Each of the random samples is taken from the Normal distribution $$\\mathrm{N}(30,7^2)$$.", null, "Detailed description\n\nFigure 6: Histogram of means from 100 000 random samples of size $$n=10$$ from $$\\mathrm{N}(30,7^2)$$.\n\nAlthough 100 000 is a lot of samples, it is still not quite an 'endless' repetition! 
If we took more and more samples of size 10, each time obtaining the sample mean and adding it to the histogram, then the shape of the histogram would become smoother and smoother and more and more bell-shaped, until eventually it would become indistinguishable from the shape of the Normal curve shown in figure 7.\n\nfigure 7 shows the true distribution of sample means for samples of size $$n=10$$ from $$\\mathrm{N}(30,7^2)$$, which is only approximated in figures 5 and 6.", null, "Detailed description\n\nFigure 7: The distribution of the sample mean $$\\bar{X}$$ based on random samples of size $$n=10$$ from $$\\mathrm{N}(30,7^2)$$, with $$\\bar{X} \\stackrel{\\mathrm{d}}{=} \\mathrm{N}(30,\\dfrac{7^2}{10})$$.\n\nIf we are sampling from a Normal distribution, then the distribution of $$\\bar{X}$$ is also Normal, a result which we assert without proof. This result is true for all values of $$n$$.\n\n###### Theorem (Sampling from a Normal distribution)\n\nIf we have a random sample of size $$n$$ from the Normal distribution with mean $$\\mu$$ and variance $$\\sigma^2$$, then the distribution of the sample mean $$\\bar{X}$$ is Normal, with mean $$\\mu$$ and variance $$\\dfrac{\\sigma^2}{n}$$. In other words, for a random sample of size $$n$$ on $$X \\stackrel{\\mathrm{d}}{=} \\mathrm{N}(\\mu,\\sigma^2)$$, the distribution of the sample mean is itself Normal: specifically, $$\\bar{X} \\stackrel{\\mathrm{d}}{=} \\mathrm{N}(\\mu,\\tfrac{\\sigma^2}{n})$$.\n\nWe have observed that the spread of the distribution of $$\\bar{X}$$ is less than that of the distribution of the parent variable $$X$$. This reflects the intuitive idea that we get more precise estimates from averages than from a single observation. 
Further, since the sample size $$n$$ is in the denominator of the variance of $$\\bar{X}$$, the spread of sample means in a long-run sequence based on samples of size $$n=1000$$ each time (for example) will be smaller than the spread of sample means in a long-run sequence based on samples of size $$n=50$$.\n\nAgain consider taking repeated samples of study scores from the Normal distribution $$\\mathrm{N}(30,7^2)$$. Four different scenarios are shown in figure 8, each based on different sample sizes of study scores: $$n = 1$$, $$n = 4$$, $$n = 9$$ and $$n = 25$$. In the top panel are histograms based on 100 000 sample means, and in the bottom panel are the true distributions of the sample means. The distributions of sample means based on larger sample sizes are narrower, and more concentrated around the mean $$\\mu$$, than those based on smaller samples. The distribution of sample means based on one study score is, of course, identical to the original population distribution of study scores.", null, "Detailed description\n\nFigure 8: Histograms and true distributions of means of random samples of varying size from $$\\mathrm{N}(30,7^2)$$.\n\n##### Exercise 1\n1. Estimate the standard deviation of each of the distributions in the bottom panel of figure 8.\n2. Calculate the standard deviation of each of the distributions in the bottom panel of figure 8, and compare your estimates with the calculated values.\n\nIn summary: For a random sample of size $$n$$ on $$X \\stackrel{\\mathrm{d}}{=} \\mathrm{N}(\\mu,\\sigma^2)$$, the distribution of $$\\bar{X}$$ is itself Normal; specifically,\n\n$\\bar{X} \\stackrel{\\mathrm{d}}{=} \\mathrm{N}(\\mu,\\tfrac{\\sigma^2}{n}).$\n\nIt is very important to understand that sampling from a Normal distribution is a special case. 
It is not true, for other parent distributions, that the distribution of $$\\bar{X}$$ is Normal for any value of $$n$$.\n\nWe now consider the distribution of sample means based on populations that do not have Normal distributions.\n\n#### Sampling from the uniform distribution\n\nRecall that the uniform distribution is one of the continuous distributions, with the corresponding random variable equally likely to take any value within the possible interval. If $$X \\stackrel{\\mathrm{d}}{=} \\mathrm{U}(0,1)$$, then $$X$$ is equally likely to take any value between 0 and 1.\n\nfigure 9 shows the first of several random samples of size $$n=10$$ from the uniform distribution $$\\mathrm{U}(0,1)$$, as seen in the module Random sampling . The sample has been projected down to the $$x$$-axis in the lower part of figure 9 to give a dotplot of the data, and now the sample mean is added as a black triangle under the dots. The data in this case are referred to as 'random numbers', since a common application of the $$\\mathrm{U}(0,1)$$ distribution is to generate random numbers between 0 and 1.\n\nfigure 10 shows ten samples, each of 10 observations from the same uniform distribution $$\\mathrm{U}(0,1)$$. The top panel shows the population distribution. The middle panel shows each of the ten samples, with dots for the observations and a triangle for the sample mean. The bottom panel shows the ten sample means plotted on a dotplot.", null, "Detailed description\n\nFigure 9: First random sample of size $$n=10$$ from $$\\mathrm{U}(0,1)$$, with the sample mean shown as a triangle.", null, "Detailed description\n\nFigure 10: Ten random samples of size $$n=10$$ from $$\\mathrm{U}(0,1)$$, with the sample means shown as triangles in the middle panel and as dots in the dotplot in the bottom panel.\n\nfigure 11 shows a histogram with one million sample means from the same population distribution. There are several features to note in figure 11. 
As in the case of the distribution of sample means taken from a Normal population, the spread in the histogram of sample means is less than the spread in the parent distribution from which the samples are taken. However, in contrast to the case of sampling from a Normal distribution, the shape of the histogram is unlike the population distribution; rather, it is like a Normal distribution.", null, "Detailed description\n\nFigure 11: Histogram of means from one million random samples of size $$n=10$$ from $$\\mathrm{U}(0,1)$$.\n\nfigure 12 shows four different histograms of means of samples of random numbers taken from the uniform distribution $$\\mathrm{U}(0,1)$$. From left to right, they are based on sample size $$n = 1$$, $$n = 4$$, $$n = 16$$ and $$n = 25$$. Of course, when the sample mean is based on a single random number ($$n = 1$$), the shape of the histogram looks like the original parent distribution. The other histograms are not uniform; they tend to be bell-shaped. It is rather remarkable that we see this is so even for a sample size as small as $$n=4$$.", null, "Figure 12: Histograms of means of random samples of varying size from $$\\mathrm{U}(0,1)$$.\n\nHow does this tendency to a bell-shaped curve arise?\n\nConsider the means of samples of size $$n=4$$, for example. figure 13 shows ten different random samples taken from the uniform distribution $$\\mathrm{U}(0,1)$$, each with four observations. The observations are shown as dots, and the means of the samples of four observations are shown as triangles. The darker vertical line at $$x = 0.5$$ shows the true mean for the population from which the samples were taken.\n\nConsider the values of the observations sampled in relation to the population mean. The first sample in figure 13 has two values below $$0.5$$, and two above; the mean of these four values is close to 0.5. The second sample is similar, with two values below the true mean, and two values above. 
Samples 3, 6 and 7 have three values below the mean, and only one above. The means of these three samples are below the true mean, and they tend to be further from 0.5 than samples 1 and 2. All four observations in sample 8 are above $$0.5$$, and all four observations in sample 10 are below $$0.5$$; the means of these two samples are farthest from the true population mean.\n\nAs the population from which the observations are sampled is uniform, samples with two of the four observations above the mean of 0.5 will arise more often than samples with one or three observations above $$0.5$$; samples with zero or four observations above 0.5 will arise least often. Hence, we see the tendency for the histogram in the second panel in figure 12 to be concentrated and centred around 0.5.", null, "Figure 13: Ten random samples of size $$n=4$$ from $$\\mathrm{U}(0,1)$$, with the sample means shown as triangles.\n\nNext page - Content - Sampling from asymmetric distributions" ]
[ null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_6.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_7.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_8.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_9.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_10.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_11.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_12.png", null, "https://amsi.org.au/ESA_Senior_Years/imageSenior/4h_13.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91333234,"math_prob":0.99989915,"size":9455,"snap":"2022-40-2023-06","text_gpt3_token_len":2302,"char_repetition_ratio":0.21637923,"word_repetition_ratio":0.10707204,"special_character_ratio":0.26610258,"punctuation_ratio":0.09994539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994934,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T06:56:46Z\",\"WARC-Record-ID\":\"<urn:uuid:b2944e9e-2ad3-4e29-9321-2d4230cedfe4>\",\"Content-Length\":\"14281\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5860cec7-2a90-4bec-b092-2f4bcb63874c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fee6898-c81a-4559-be72-df073040c8ab>\",\"WARC-IP-Address\":\"101.0.91.74\",\"WARC-Target-URI\":\"https://amsi.org.au/ESA_Senior_Years/SeniorTopic4/4h/4h_2content_5.html\",\"WARC-Payload-Digest\":\"sha1:XGBMDNLUVH6VGJEOQY353TUIUTNY63KP\",\"WARC-Block-Digest\":\"sha1:X2G6CS55LSI2WDSN2YKIR4KLGRFNZKRX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335124.77_warc_CC-MAIN-20220928051515-20220928081515-00057.warc.gz\"}"}
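The AMSI record above makes two checkable claims: means of samples from U(0,1) look bell-shaped even for small n, and the standard deviation of the sample mean is σ/√n (for U(0,1), σ² = 1/12). A small simulation illustrates both; the repetition count and seed here are illustrative, not the settings behind the article's figures.

```python
import random
import statistics

random.seed(1)

def sample_means(n, reps=20000):
    """Means of `reps` random samples of size n from U(0,1)."""
    return [statistics.fmean(random.random() for _ in range(n))
            for _ in range(reps)]

# For U(0,1): mu = 1/2 and sigma^2 = 1/12, so sd(X-bar) = sqrt(1/(12n)).
# The simulated sd shrinks by a factor of sqrt(n), matching the theory
# and the narrowing histograms in figures 8 and 12.
for n in (1, 4, 16, 25):
    means = sample_means(n)
    expected_sd = (1 / (12 * n)) ** 0.5
    print(n, round(statistics.stdev(means), 3), round(expected_sd, 3))
```

Plotting a histogram of `sample_means(4)` reproduces the bell shape the article notes is already visible at n = 4.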
https://gitlab.science.ru.nl/benoit/tweetnacl/-/commit/af8b4e4daca6cc1e3145d39792c7b1e20eb9106e
[ "### Mainly a pass through the conclusions; minor edits in intro and Section 4\n\nparent c1a563e8\n ... ... @@ -25,7 +25,7 @@ but Bernstein suggested to rename the protocol to X25519 and to use the name Curve25519 for the underlying elliptic curve~\\cite{Ber14}. We use this updated terminology in this paper. \\subheading{Contribution of this paper} \\subheading{Contribution of this paper.} We provide a mechanized formal proof of the correctness of the X25519 implementation in TweetNaCl. This proof is done in three steps: ... ...\n ... ... @@ -30,14 +30,17 @@ triple before proving its correctness with VST (\\ref{subsec:with-VST}). We provide an example of equivalence of operations over different number representations (\\ref{subsec:num-repr-rfc}). Then, we describe efficient techniques used to in some of our more complex proofs (\\ref{subsec:inversions-reflections}). %XXX-Peter: \"used to\" what? Incomplete sentence %XXX-Peter: Does this subsection really belong here? My understanding is that it describes %the full picture (Sections 4 and 5) and not just what is happening in this section. \\subsection{Structure of our proof} \\label{subsec:proof-structure} % XXX-Peter: This whole paragraph can go away; we already said this before. In order to prove the correctness of X25519 in TweetNaCl code \\TNaCle{crypto_scalarmult}, we use VST to prove that the code matches our functional Coq specification of \\Coqe{RFC}. Then, we prove that our specification of the scalar multiplication matches the mathematical definition ... ... @@ -48,7 +51,7 @@ subsequently called: \\TNaCle{unpack25519}; \\TNaCle{A}; \\TNaCle{Z}; \\TNaCle{M}; \\TNaCle{S}; \\TNaCle{car25519}; \\TNaCle{inv25519}; \\TNaCle{set25519}; \\TNaCle{sel25519}; \\TNaCle{pack25519}. We prove the implementation of X25519 is \\textbf{sound}, \\ie: We prove that the implementation of X25519 is \\textbf{sound}, \\ie: \\begin{itemize} \\item absence of access out-of-bounds of arrays (memory safety). 
\\item absence of overflows/underflow in the arithmetic. ... ... @@ -59,19 +62,19 @@ We also prove that TweetNaCl's code is \\textbf{correct}: \\item Operations on \\TNaCle{gf} (\\TNaCle{A}, \\TNaCle{Z}, \\TNaCle{M}, \\TNaCle{S}) are equivalent to operations ($+,-,\\times,x^2$) in $\\Zfield$. \\item The Montgomery ladder computes the multiple of a point. % \\item The Montgomery ladder computes a scalar multiplication between a natural % number and a point. %XXX-Peter: We don't prove this last statement in this section \\end{itemize} In order to prove the soundness and correctness of \\TNaCle{crypto_scalarmult}, we reuse the generic Montgomery ladder defined in \\sref{sec:Coq-RFC}. We define a High-level specification by instantiating the ladder with a generic We define a high-level specification by instantiating the ladder with a generic field $\\K$, this allows us to prove the correctness of the ladder with respect to the theory of elliptic curves. to the theory of elliptic curves. This high-level specification does not rely on the parameters of Curve25519. We later specialize $\\K$ with $\\Ffield$, and the parameters of Curve25519 ($a = 486662, b = 1$), to derive the correctness of \\coqe{RFC} (\\sref{sec:maths}). %XXX-Peter: not in this section, correct? We define a mid-level specification by instantiating the ladder over $\\Zfield$. Additionally we also provide a low-level specification close to the \\texttt{C} code ... ... @@ -157,7 +160,7 @@ SEP (sh [{ v_q }] <<(uch32)-- mVI (RFC n p); Ews [{ c121665 }] <<(lg16)-- mVI64 c_121665 \\end{lstlisting} In this specification we state as preconditions: In this specification we state the following preconditions: \\begin{itemize} \\item[] \\VSTe{PRE}: \\VSTe{_p OF (tptr tuchar)}\\\\ The function \\TNaCle{crypto_scalarmult} takes as input three pointers to ... ... @@ -177,7 +180,7 @@ In this specification we state as preconditions: complete representation of \\TNaCle{u8}. 
\\end{itemize} As Post-condition we have: As postcondition we have: \\begin{itemize} \\item[] \\VSTe{POST}: \\VSTe{tint}\\\\ The function \\TNaCle{crypto_scalarmult} returns an integer. ... ... @@ -229,8 +232,9 @@ The correctness of this specification is formally proven in Coq with % For the sake of completeness we proved all intermediate functions. \\subheading{Memory aliasing.} In the VST, a simple specification of \\texttt{M(o,a,b)} will assume three distinct memory share. When called with three memory shares (\\texttt{o, a, b}), In the VST, a simple specification of \\texttt{M(o,a,b)} will assume that the pointer arguments point to non-overlapping space in memory. % XXX-Peter: \"memory share\" is unclear; please fix in the next sentence. When called with three memory shares (\\texttt{o, a, b}), the three of them will be consumed. However assuming this naive specification when \\texttt{M(o,a,a)} is called (squaring), the first two memory shares (\\texttt{o, a}) are consumed and VST will expect a third memory share (\\texttt{a}) which does not \\emph{exist} anymore. ... ... @@ -257,11 +261,13 @@ In the proof of our specification, we do a case analysis over $k$ when needed. This solution does not cover all cases (e.g. all arguments are aliased) but it is enough for our needs. %XXX-Peter: shouldn't verifying fixed-length for loops be rather standard? % Can we shorten the next paragraph? \\subheading{Verifying \\texttt{for} loops.} Final states of \\texttt{for} loops are usually computed by simple recursive functions. However we must define invariants which are true for each iteration step. Assume we want to prove a decreasing loop where indexes go from 3 to 0. Assume that we want to prove a decreasing loop where indexes go from 3 to 0. Define a function $g : \\N \\rightarrow State \\rightarrow State$ which takes as input an integer for the index and a state, then returns a state. It simulates the body of the \\texttt{for} loop. ... ... 
@@ -455,7 +461,8 @@ apply the unrolling and exponentiation formulas 255 times. This could be automat in Coq with tacticals such as \\Coqe{repeat}, but it generates a proof object which will take a long time to verify. \\subheading{Reflections.} In order to speed up the verification we use a \\subheading{Reflections.} In order to speed up the verification we use a technique called Reflection''. It provides us with flexibility, \\eg we don't need to know the number of times nor the order in which the lemmas needs to be applied (chapter 15 in \\cite{CpdtJFR}). ... ... @@ -473,7 +480,7 @@ With this technique we prove the functional correctness of the inversion over \\Z \\label{cor:inv_comput_field} \\Coqe{Inv25519} computes an inverse in \\Zfield. \\end{lemma} Which is formalized as: This statement is formalized as \\begin{lstlisting}[language=Coq] Corollary Inv25519_Zpow_GF : forall (g:list Z), ... ...\n ... ... @@ -13,7 +13,7 @@ In our case we rely on: \\begin{itemize} \\item \\textbf{Calculus of Inductive Constructions}. The intuitionistic logic used by Coq must be consistent in order to trust the proofs. As an axiom, we assumed that the functional extensionality, which is also consistent with that logic. we assume that the functional extensionality is also consistent with that logic. $$\\forall x, f(x) = g(x) \\implies f = g$$ \\begin{lstlisting}[language=Coq] Lemma f_ext: forall (A B:Type), ... ... @@ -40,13 +40,13 @@ aux1 = a[i]; aux2 = b[i]; o[i] = aux1 + aux2; \\end{lstlisting} The trust of the proof relied on the trust of a correct translation from the The trust of the proof relies on the trust of a correct translation from the initial version of \\emph{TweetNaCl} to \\emph{TweetNaclVerifiableC}. \\texttt{clightgen} comes with \\texttt{-normalize} flag which factors out function calls and assignments from inside subexpressions. The changes required for a C-code to make it Verifiable are now minimal. 
The changes required for C code to make it verifiable are now minimal. \\item Last but not the least, we must trust: the \\textbf{Coq kernel} and its \\item Last but not the least, we must trust the \\textbf{Coq kernel} and its associated libraries; the \\textbf{Ocaml compiler} on which we compiled Coq; the \\textbf{Ocaml Runtime} and the \\textbf{CPU}. Those are common to all proofs done with this architecture \\cite{2015-Appel,coq-faq}. ... ... @@ -59,14 +59,14 @@ Indeed indexes 17 to 79 of the \\TNaCle{i64 x} intermediate variable of we removed them. Peter Wu and Jason A. Donenfeld brought to our attention that the original \\TNaCle{car25519} function presented risk of Undefined Behavior if \\texttt{c} \\TNaCle{car25519} function presented risk of undefined behavior if \\texttt{c} is a negative number. \\begin{lstlisting}[language=Ctweetnacl] c=o[i]>>16; o[i]-=c<<16; // c < 0 = UB ! \\end{lstlisting} By replacing this statement by a logical \\texttt{and} (and proving the correctness) we solved this problem. We replaced this statement with a logical \\texttt{and}, proved correctness, and thus solved this problem. \\begin{lstlisting}[language=Ctweetnacl] o[i]&=0xffff; \\end{lstlisting} ... ... @@ -75,7 +75,6 @@ We believe that the type change of the loop index (\\TNaCle{int} instead of \\TNaC does not impact the trust of our proof. \\subheading{A complete proof.} We provide a mechanized formal proof of the correctness of the X25519 implementation in TweetNaCl. We first formalized X25519 from RFC~7748~\\cite{rfc7748} in Coq. Then we proved ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7573432,"math_prob":0.9556225,"size":9514,"snap":"2022-27-2022-33","text_gpt3_token_len":2710,"char_repetition_ratio":0.13375394,"word_repetition_ratio":0.09527273,"special_character_ratio":0.29188564,"punctuation_ratio":0.17037863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9891444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T19:50:12Z\",\"WARC-Record-ID\":\"<urn:uuid:234eab32-a0a1-4110-b067-dc07db8b276b>\",\"Content-Length\":\"358365\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94fc894a-6360-4747-94b1-92120d0acae7>\",\"WARC-Concurrent-To\":\"<urn:uuid:97a37aa4-6c35-45fd-8134-9b91cf1e86e5>\",\"WARC-IP-Address\":\"131.174.16.187\",\"WARC-Target-URI\":\"https://gitlab.science.ru.nl/benoit/tweetnacl/-/commit/af8b4e4daca6cc1e3145d39792c7b1e20eb9106e\",\"WARC-Payload-Digest\":\"sha1:XJCUFNXD7TI437YP5R56X7FQCJIOBKAE\",\"WARC-Block-Digest\":\"sha1:VI6IF6IOBUPHTXXJVVMR7EARNHBVZXFJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103573995.30_warc_CC-MAIN-20220628173131-20220628203131-00433.warc.gz\"}"}
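The diff above replaces TweetNaCl's carry step `c=o[i]>>16; o[i]-=c<<16;` with `o[i]&=0xffff;` because, in C, shifting a negative value risks undefined or implementation-defined behaviour. The arithmetic identity behind the fix — keeping the low 16 bits equals subtracting off the shifted-out carry — can be illustrated in Python, where shifts on negative integers are well defined (arithmetic, floor-based). The helper names are illustrative, not TweetNaCl's.

```python
def carry_old(o):
    # Mirrors: c = o >> 16; o -= c << 16;
    # Well defined in Python for any integer o, but the C original is
    # UB when o (and hence c) is negative.
    c = o >> 16
    return o - (c << 16), c

def carry_new(o):
    # Mirrors: o &= 0xffff;  the carry is still o >> 16.
    return o & 0xffff, o >> 16

# The two forms agree for positive, zero and negative limbs, which is
# the mathematical fact the paper's correctness proof relies on.
for o in (0, 1, 0x12345, 70000, -1, -0x10000):
    assert carry_old(o) == carry_new(o)
print("low 16 bits and carry agree for all test values")
```

In C the masked form avoids the problem entirely: `o[i] & 0xffff` involves no shift of a possibly negative operand.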
https://math.hawaii.edu/wordpress/calendar/action~agenda/exact_date~1543312800/
[ "# Calendar\n\nNov\n29\nThu\nMaster defense Greg Dziadurski @ Keller Hall 403\nNov 29 @ 9:00 am – 10:30 am\n\nTitle: TBA\n\nDec\n3\nMon\nDavid Webb: Inescapable dimension\nDec 3 @ 2:30 pm – 3:30 pm\nDec\n6\nThu\nMasters defense: Nathaniel Warner @ Keller 401\nDec 6 @ 4:00 pm – 5:30 pm\n\nTitle: Computing the Witten-Reshetikhin-Turaev Invariant of 3-Manifolds\n\nDec\n7\nFri\nColloquium: Pamela Harris (Williams)\nDec 7 @ 3:30 pm – 4:30 pm\nJan\n4\nFri\nColloquium: Pamela Harris (Williams)\nJan 4 @ 3:30 pm – 4:30 pm\nJan\n17\nThu\nUndergrad Seminar: Gideon Zamba @ 402\nJan 17 @ 3:00 pm – 4:00 pm\n\nApplied Mathematics in Action through Biostatistics\n\nGideon K. D. Zamba, PhD.\nProfessor of Biostatistics\n\nProfessor of Radiology and Nuclear Medicine\n\nThe University of Iowa\n\nApplied mathematics is a field of constant adaptability to the world’s contingencies. Such\nadaptability requires a solid training and a keen understanding of theoretical and pure\nmathematical thinking—as the activity of applied thinking is vitally connected to research\nin pure mathematics. One such applied mathematical field is the field of statistics. As the\nworld continues to rely more on data for inference and decision making, statistics and\nassociated data-driven fields have gained an increased recognition. The purpose of this talk\nis to educate the audience about the field of statistics, about statistical involvements, and\nprovide examples of settings where statistical theory finds an application and where real-\nworld applications call for new statistical developments. The presentation further provides\nsome general guidance on the mathematical and computational skills needed for a\nsuccessful graduate work in Statistics or Biostatistics.\n\nJan\n18\nFri\nColloquium: Ian Marquette (U. 
of Queensland)
Jan 18 @ 3:30 pm – 4:30 pm

Title: Higher order superintegrability, Painlevé transcendents and representations of polynomial algebras

Abstract: I will review results on the classification of quantum superintegrable systems on two-dimensional Euclidean space allowing separation of variables in Cartesian coordinates and possessing an extra integral of third or fourth order. The exotic quantum potentials satisfy a nonlinear ODE and have been shown to exhibit the Painlevé property. I will also present different constructions of higher order superintegrable Hamiltonians involving Painlevé transcendents using four types of building blocks which consist of 1D Hamiltonians allowing operators of the type Abelian, Heisenberg, Conformal or Ladder. Their integrals generate finitely generated polynomial algebras and representations can be exploited to calculate the energy spectrum. I will point out that for certain cases associated with exceptional orthogonal polynomials, these algebraic structures do not allow one to calculate the full spectrum and degeneracies. I will describe how other sets of integrals can be built and used to provide a complete solution.

Jan
24
Thu
Kameryn Williams: Logic seminar @ Keller 313
Jan 24 @ 2:30 pm – 3:20 pm

Title: Amalgamating generic reals, a surgical approach
Location: Keller Hall 313
Speaker: Kameryn Williams, UHM

The material in this talk is an adaptation of joint work with Miha Habič, Joel David Hamkins, Lukas Daniel Klausner, and Jonathan Verner, transforming set theoretic results into a computability theoretic context.

Let $\mathcal D$ be the collection of dense subsets of the full binary tree coming from a fixed countable Turing ideal. In this talk we are interested in properties of $\mathcal D$-generic reals, those reals $x$ so that every $D \in \mathcal D$ is met by an initial segment of $x$. To be more specific, the main question is the following.
Fix a real $z$ which cannot be computed by any $\\mathcal D$-generic. Can we craft a family of $\\mathcal D$-generic reals so that we have precise control over which subfamilies of generic reals together compute $z$?\n\nI will illustrate a specific instance of this phenomenon as a warm-up. I will show that given any $\\mathcal D$-generic $x$ there is another $\\mathcal D$-generic $y$ so that $x \\oplus y$ can compute $z$. That is, neither $x$ nor $y$ can compute $z$ on their own, but together they can.\n\nThe main result for the talk then gives a uniform affirmative answer for finite families. Namely, I will show that for any finite set $I = \\{0, \\ldots, n-1\\}$ there are mutual $\\mathcal D$-generic reals $x_0, \\ldots, x_{n-1}$ which can be surgically modified to witness any desired pattern for computing $z$. More formally, there is a real $y$ so that given any $\\mathcal A \\subseteq \\mathcal P(I)$ which is closed under superset and contains no singletons, there is a single real $w_\\mathcal{A}$ so that the family of grafts $x_k \\wr_y w_\\mathcal{A}$ for $k \\in A \\subseteq I$ can compute $z$ if and only if $A \\in \\mathcal A$. Here, $x \\wr_y w$ is a surgical modification of $x$, using $y$ to guide where to replace bits from $x$ with those from $w$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8604308,"math_prob":0.9642638,"size":4247,"snap":"2022-40-2023-06","text_gpt3_token_len":979,"char_repetition_ratio":0.10747113,"word_repetition_ratio":0.0,"special_character_ratio":0.20037673,"punctuation_ratio":0.08288044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97567415,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T05:54:26Z\",\"WARC-Record-ID\":\"<urn:uuid:6358a385-ede2-4204-9a51-7569bed59c94>\",\"Content-Length\":\"47117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5efbc367-3849-4ded-af3c-5605a5c084f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:249e59b1-9fc4-450c-9d79-765f575e6a6a>\",\"WARC-IP-Address\":\"128.171.50.254\",\"WARC-Target-URI\":\"https://math.hawaii.edu/wordpress/calendar/action~agenda/exact_date~1543312800/\",\"WARC-Payload-Digest\":\"sha1:G5VIWBJOFYBVGC46FG7H3JDSHTVL45AH\",\"WARC-Block-Digest\":\"sha1:KJYE7JQ7PUSRNIF5FFGGOOSLXUDADPQF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500384.17_warc_CC-MAIN-20230207035749-20230207065749-00032.warc.gz\"}"}
https://helpdesk.castoredc.com/section/5-introduction-to-calculations-the-basics
[ "# Introduction to calculations: the basics\n\n• ## Introduction to calculation fields and syntax\n\nCalculation fields in Castor allow you to solve different mathematical problems by using the variables from your eCRF. Common applications of calculations Calculations can be used for simple mathemati...\n\n• ## Using the \"if-else\" logic\n\nThis tutorial will cover what an if/else statement is, when to use it, and how to use it. What is an if/else statement The if/else statement is a logical, conditional expression that evaluates a condi...\n\n• ## Forcing Castor to calculate with empty fields\n\nThis tutorial covers the functions you can use to calculate with multiple fields, even if one or more of the values are not set.  The tag '##allowempty##' tells Castor to allow for empty fields to be ...\n\n• ## Using the \"for loop\" in calculations\n\nIn calculations, it is sometimes necessary to repeat a certain action several times. For example, you want to calculate an average of several variables, while some of them are allowed to be empty. In ...\n\n• ## Comparing variables with calculation fields\n\nComparing two variables In calculations, you sometimes need to compare two variables against each other. Check out the article Using the if-else logic to see how this works. For example, you want to c...\n\n• ## Compare several numerical variables against a specific variable\n\nCheck out the article Using the for loop in calculations to see an explanation about how this calculation works. This template allows you to compare three or more fields (variables) with one specific ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.873076,"math_prob":0.97591656,"size":1222,"snap":"2020-34-2020-40","text_gpt3_token_len":248,"char_repetition_ratio":0.13300492,"word_repetition_ratio":0.03,"special_character_ratio":0.20458265,"punctuation_ratio":0.1446281,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.987121,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T20:45:05Z\",\"WARC-Record-ID\":\"<urn:uuid:075f2644-aa38-45d6-9a22-5599bc17e528>\",\"Content-Length\":\"27791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c2f8d40-7ebf-478b-b672-869c166ca7de>\",\"WARC-Concurrent-To\":\"<urn:uuid:6cc0f4d7-6497-4a70-9d52-0efcd1d0c942>\",\"WARC-IP-Address\":\"52.203.48.25\",\"WARC-Target-URI\":\"https://helpdesk.castoredc.com/section/5-introduction-to-calculations-the-basics\",\"WARC-Payload-Digest\":\"sha1:Z7HB4O7PLSGDDCTN3NUATCO53SJ4W5XY\",\"WARC-Block-Digest\":\"sha1:VZPBSRYBLBWDXJVNRUTP3J3YX3DF2IDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402131986.91_warc_CC-MAIN-20201001174918-20201001204918-00389.warc.gz\"}"}
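The Castor help record above describes if/else statements and the `##allowempty##` tag for calculation fields. Castor's own calculation syntax is its own language, so the following is only a generic Python sketch of the two ideas (conditional branching plus tolerating empty inputs); the field names and thresholds are made up for illustration:

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI with if/else branches, tolerating empty (None) inputs."""
    # Mimic the "allow empty fields" behavior: return a blank result
    # instead of failing when an input field is not set.
    if weight_kg is None or height_m is None:
        return ""
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal"
    else:
        return "overweight"

print(bmi_category(70, 1.75))   # 70 / 1.75^2 is about 22.9 -> "normal"
print(bmi_category(None, 1.75)) # empty field -> blank result, no error
```

The same shape (guard for empty values first, then a chain of conditions) is what the if/else and `##allowempty##` articles combine.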
https://www.mathbootcamps.com/general-solution-system-equations/
[ " The general solution to a system of equations - MathBootCamps\n\n# The general solution to a system of equations\n\nIn your algebra classes, if a system of equations had infinitely many solutions, you would simply write “infinitely many solutions” and move on to the next problem. However, there is a lot more going on when we say “infinitely many solutions”. In this article, we will explore this idea with general solutions.\n\n## Writing out a general solution\n\nFirst, let’s review just how to write out a general solution to a given system of equations. To do this, we will look at an example.\n\n### Example\n\nFind the general solution to the system of equations:\n\n$$\\begin{array}{c} x_1 + 2x_2 + 8x_3 + 18x_4 = 11\\\\ x_1 + x_2 + 5x_3 +11x_4 = 10\\\\ \\end{array}$$\n\nAs with any system of equations, we will use an augmented matrix and row reduce.\n\n$$\\left[ \\begin{array}{cccc|c} 1 & 2 & 8 & 18 & 11\\\\ 1 & 1 & 5 & 11 & 10\\\\ \\end{array} \\right] \\sim \\left[ \\begin{array}{cccc|c} 1 & 0 & 2 & 4 & 9\\\\ 0 & 1 & 3 & 7 & 1\\\\ \\end{array} \\right]$$\n\nNow, write out the equations from this reduced matrix.\n\n$$\\begin{array}{c} x_1 + 2x_3 + 4x_4 = 9\\\\ x_2 + 3x_3 + 7x_4 = 1\\\\ \\end{array}$$\n\nNotice in the matrix that the leading ones (the first nonzero entry in each row) are in the columns for $$x_1$$ and $$x_2$$.\n\nSolve for these variables.\n\n$$\\begin{array}{c} x_1 = 9 - 2x_3 - 4x_4\\\\ x_2 = 1 - 3x_3 - 7x_4\\\\ \\end{array}$$\n\nThe remaining variables are free variables, meaning that they can take on any value. The values of $$x_1$$ and $$x_2$$ are based on the value of these two variables. 
In the general solution, you want to note this.\n\nGeneral solution:\n\n$$\\boxed{ \\begin{array}{l} x_1 = 9 - 2x_3 - 4x_4\\\\ x_2 = 1 - 3x_3 - 7x_4\\\\ x_3 \\text{ is free}\\\\ x_4 \\text{ is free}\\\\ \\end{array} }$$\n\nThere are infinitely many solutions to this system of equations, all using different values of the two free variables.\n\n## Finding specific solutions\n\nSuppose that you wanted to give an example of a specific solution to the system of equations above. There are infinitely many, so you have a lot of choices! You just need to consider possible values of the free variables.\n\n### Example solution\n\nLet:\n\n$$\\begin{array}{l} x_3 = 0\\\\ x_4 = 1\\\\ \\end{array}$$\n\nThere was no special reason to pick 0 and 1. Again, this would work for ANY value you pick for these two variables.\n\nUsing these values, a solution is:\n\n$$\\begin{array}{l} x_1 = 9 - 2x_3 - 4x_4 = 9 - 2(0) - 4(1)\\\\ x_2 = 1 - 3x_3 - 7x_4 = 1 - 3(0) - 7(1)\\\\ x_3 = 0\\\\ x_4 = 1\\\\ \\end{array} \\rightarrow \\boxed{ \\begin{array}{l} x_1 = 5\\\\ x_2 = -6\\\\ x_3 = 0\\\\ x_4 = 1\\\\ \\end{array} }$$\n\nYou can check these values in the original system of equations to be sure:\n\n$$\\begin{array}{l} x_1 + 2x_2 + 8x_3 + 18x_4 = 11\\\\ x_1 + x_2 + 5x_3 +11x_4 = 10\\\\ \\end{array} \\rightarrow \\begin{array}{l} (5) + 2(-6) + 8(0) + 18(1) = 11 \\text{ (true)}\\\\ (5) + (-6) + 5(0) +11(1) = 10 \\text{ (true)}\\\\ \\end{array}$$\n\nSince both equations are true for these values, we know that we have found one of the many, many solutions. If we wanted to find more solutions, we could just pick different values for the two free variables $$x_3$$ and $$x_4$$.\n\n## Summary of the steps\n\nGiven a system of equations, the steps for writing out the general solution are:\n\n1. Row reduce the augmented matrix for the system.\n2. Write out the equations from the row-reduced matrix.\n3. Solve for the variables that have a leading one in their column.\n4. 
Label the remaining variables as free variables.", null, "## Subscribe to our Newsletter!\n\nWe are always posting new free lessons and adding more study guides, calculator guides, and problem packs.\n\nSign up to get occasional emails (once every couple or three weeks) letting you know what's new!" ]
[ null, "https://i1.wp.com/www.mathbootcamps.com/wp-content/plugins/thrive-leads/editor-templates/_form_css/images/set_23_icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85362995,"math_prob":0.9999337,"size":3704,"snap":"2022-40-2023-06","text_gpt3_token_len":1242,"char_repetition_ratio":0.15432432,"word_repetition_ratio":0.095168374,"special_character_ratio":0.35664147,"punctuation_ratio":0.08092485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999583,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T12:36:33Z\",\"WARC-Record-ID\":\"<urn:uuid:68979cd0-d6fe-4309-8aa9-fe37acee1dc5>\",\"Content-Length\":\"36909\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7240d3f-26aa-4816-83d9-538ff0d0c695>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b7fb626-a093-40f6-aa8d-748c6dd110e5>\",\"WARC-IP-Address\":\"162.159.134.42\",\"WARC-Target-URI\":\"https://www.mathbootcamps.com/general-solution-system-equations/\",\"WARC-Payload-Digest\":\"sha1:AC7Y64YBHJBDWOZE3NPCVDDPUY2L7OGQ\",\"WARC-Block-Digest\":\"sha1:J5IGAJA57A4XX6MEMGO2VDYJPNOSQUZ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335254.72_warc_CC-MAIN-20220928113848-20220928143848-00642.warc.gz\"}"}
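The general solution in the article above is easy to sanity-check mechanically: substitute any values for the free variables $x_3$ and $x_4$ and confirm that both original equations hold. A small pure-Python check (the function names are mine, not from the article):

```python
def particular_solution(x3, x4):
    """Instantiate the article's general solution for chosen free variables."""
    x1 = 9 - 2 * x3 - 4 * x4
    x2 = 1 - 3 * x3 - 7 * x4
    return x1, x2, x3, x4

def satisfies_system(x1, x2, x3, x4):
    """Check the original system of equations from the article."""
    return (x1 + 2 * x2 + 8 * x3 + 18 * x4 == 11
            and x1 + x2 + 5 * x3 + 11 * x4 == 10)

# The article's example choice x3 = 0, x4 = 1 gives (5, -6, 0, 1).
print(particular_solution(0, 1))

# Any choice of the free variables yields a valid solution.
for x3 in range(-3, 4):
    for x4 in range(-3, 4):
        assert satisfies_system(*particular_solution(x3, x4))
```

This is exactly the "infinitely many solutions" statement made concrete: every point of the grid above is a distinct solution.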
http://neusite.info/like-and-unlike-fractions-worksheets/like-and-unlike-fractions-worksheets-fractions-worksheets-grade-4-word-problems/
[ "# Like And Unlike Fractions Worksheets Fractions Worksheets Grade 4 Word Problems", null, "like and unlike fractions worksheets fractions worksheets grade 4 word problems. adding unlike fractions worksheets for all download and share other grade 7 doc 4 word problems 5 common core,multiplication of fractions worksheets grade 6 pdf adding and subtracting fraction with like denominators ordering 2 word problems,fractions worksheets grade 4 icse pdf 1 like,fractions worksheets grade 6 word problems adding ordering 2 doc,multiplication and division of fractions worksheets grade 6 pdf ncert printable for teachers,adding fractions worksheets grade 1 4 word problems pdf fraction math addition of free printable,comparing fractions worksheets grade 2 with unlike denominators 4 pdf 5,fractions worksheets grade 5 pdf multiplication and division of 6 subtracting unlike,addition of fractions worksheets grade 6 cbse adding 1,fractions worksheet 7th grade pdf worksheets 11 unlike denominators 4.", null, "like and unlike fractions worksheets fractions worksheets grade 4 word problems." ]
[ null, "http://neusite.info/wp-content/uploads/2019/11/like-and-unlike-fractions-worksheets-fractions-worksheets-grade-4-word-problems.jpg", null, "http://neusite.info/wp-content/uploads/2019/11/like-and-unlike-fractions-worksheets-fractions-worksheets-grade-4-word-problems.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7732218,"math_prob":0.78697985,"size":904,"snap":"2019-51-2020-05","text_gpt3_token_len":187,"char_repetition_ratio":0.2688889,"word_repetition_ratio":0.040650405,"special_character_ratio":0.17809735,"punctuation_ratio":0.07482993,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.976719,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T17:33:02Z\",\"WARC-Record-ID\":\"<urn:uuid:68dfb6d1-630f-4f74-bc40-5c663e6be8e3>\",\"Content-Length\":\"68112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e05e14e6-0513-4aac-9ed7-81b23b4ad13d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f24dbf53-d324-41a9-be81-206514b6c51b>\",\"WARC-IP-Address\":\"104.24.114.154\",\"WARC-Target-URI\":\"http://neusite.info/like-and-unlike-fractions-worksheets/like-and-unlike-fractions-worksheets-fractions-worksheets-grade-4-word-problems/\",\"WARC-Payload-Digest\":\"sha1:XQVL3KQFXYSS3BERTFT43AH2P6EWSSAD\",\"WARC-Block-Digest\":\"sha1:U642AZVH4O2ET4LMIWHWGGPMNMF2EGQY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540544696.93_warc_CC-MAIN-20191212153724-20191212181724-00510.warc.gz\"}"}
https://normgoldblatt.com/how-to-calculate-the-length-of-a-triangle-side-10
[ "# How to calculate the length of a triangle side\n\nMath can be difficult to understand, but it's important to learn How to calculate the length of a triangle side.", null, "Decide mathematic question\nExplain math questions\n• Have more time for your recreation\n\nLooking for a little help with your homework? Check out our solutions for all your homework help needs!\n\nIf you need help, our customer service team is available 24/7.\n\nIf you have a question, we have an answer! We are here to help you with whatever you need.\n\n## How to find the length of the side of a right triangle\n\nThis calculator calculates for the length of one side of a right triangle given the length of the other two sides. A right triangle has two sides perpendicular to each other. Sides a and b are", null, "Solve mathematic question\n\nTo solve a math equation, you need to find the value of the variable that makes the equation true.", null, "Track Way\n\nTrack Way is a great place to go for a run.", null, "Track Progress", null, "Fast Professional Tutoring\n\nWe provide professional tutoring services that help students improve their grades and performance in school.\n\n## How To Find the Length of a Triangle\n\nThere are many ways to find the side length of a right triangle. We are going to focus on two specific cases. Case I When we know 2 sides of the right triangle, use the Pythagorean theorem . Case II We know 1 side and 1 angle of the right\n\n## Right Triangle Calculator\n\nThis trigonometry video tutorial explains how to calculate the missing side length of a triangle. Examples include the use of the pythagorean theorem, trigonometry ratios such\n\n• 732+\n\nTutors\n\n• 8\n\nYears of experience\n\n## Sides of Triangle\n\nThe perimeter of a triangle is defined as the sum of its sides. Let’s say there is an equilateral triangle with unknown side length a. Then its perimeter (P) is, a + a + a = 3a. 
3a = P", null, "How to Find the Length of the Side of a Triangle If You Know\n\nGiven the length of two sides and the angle between them, the following formula can be used to determine the area of the triangle. Note that the variables used are in reference to the triangle shown in the calculator above. Given a = 9, b = 7" ]
[ null, "https://normgoldblatt.com/images/pic-boy.webp", null, "https://normgoldblatt.com/images/step-1.svg", null, "https://normgoldblatt.com/images/step-2.svg", null, "https://normgoldblatt.com/images/step-3.svg", null, "https://normgoldblatt.com/images/step-4.svg", null, "https://normgoldblatt.com/images/img-7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9409714,"math_prob":0.98550487,"size":2921,"snap":"2022-40-2023-06","text_gpt3_token_len":635,"char_repetition_ratio":0.11004457,"word_repetition_ratio":0.056710776,"special_character_ratio":0.21259843,"punctuation_ratio":0.0957265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99874836,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T00:01:35Z\",\"WARC-Record-ID\":\"<urn:uuid:fc7441c6-bcda-4153-ab19-67b14a8a7a9d>\",\"Content-Length\":\"48426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f79e8cf2-be21-4df6-953c-c7c12b26c375>\",\"WARC-Concurrent-To\":\"<urn:uuid:fba9eae3-3e71-4a9c-9b40-2c698d171bf0>\",\"WARC-IP-Address\":\"170.178.164.166\",\"WARC-Target-URI\":\"https://normgoldblatt.com/how-to-calculate-the-length-of-a-triangle-side-10\",\"WARC-Payload-Digest\":\"sha1:H6BDYOGM4YA767DZTDR4QESGCWGRGDZV\",\"WARC-Block-Digest\":\"sha1:77YTWBOS7JLX3JNC5ZQGB47TSUI4YVZN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500983.76_warc_CC-MAIN-20230208222635-20230209012635-00434.warc.gz\"}"}
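The page above describes two cases for a right triangle (Case I: two sides known, use the Pythagorean theorem; Case II: one side and one angle known, use trig ratios) plus an area formula from two sides and the included angle. A short Python sketch of those computations (function names are mine, not from the page):

```python
import math

def hypotenuse(a, b):
    """Case I: both legs of a right triangle are known."""
    return math.hypot(a, b)

def missing_leg(c, a):
    """Case I variant: hypotenuse c and one leg a are known."""
    return math.sqrt(c * c - a * a)

def opposite_from_angle(adjacent, angle_deg):
    """Case II: one side and one acute angle of a right triangle are known."""
    return adjacent * math.tan(math.radians(angle_deg))

def area_sas(a, b, angle_deg):
    """Area from two sides and the included angle (as in the a = 9, b = 7 example)."""
    return 0.5 * a * b * math.sin(math.radians(angle_deg))

print(hypotenuse(3, 4))    # 5.0
print(missing_leg(5, 3))   # 4.0
print(area_sas(9, 7, 90))  # 31.5 when the included angle is 90 degrees
```

For non-right triangles with two sides and the included angle, the law of cosines gives the third side; the page only shows the right-triangle cases.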
https://www.bernardosulzbach.com/
[ "# Maximum Profit in Job Scheduling\n\nThis is my solution for the LeetCode problem number 1235, Maximum Profit in Job Scheduling.\n\nThis problem is quite similar to that of determining the maximum number of non-overlapping ranges. The only difference is that in this problem the ranges have a profit associated with them, rather than a linear function over the size of the range.\n\nThe key idea for my solution, which runs in $O(n \\lg n)$, is to sort all jobs by their starting time and, going backwards, store the result of the maximum profit when starting at time $t$. For every job that starts at $t_i$, the maximum when starting at $t_i$ has to be set to either the maximum found for any $t_j > t_i$ or the maximum for any $t_k > e_i$, where $e_i$ is the ending time of the job $i$, plus the profit of job $i$, whichever is bigger. This is the optimization of the choice between scheduling job $i$ or skipping it.\n\nstruct Job {\nint startTime;\nint endTime;\nint profit;\n\nJob(int startTime, int endTime, int profit) : startTime(startTime), endTime(endTime), profit(profit) {}\n\nbool operator<(const Job& rhs) const {\nif (startTime < rhs.startTime) return true;\nif (rhs.startTime < startTime) return false;\nif (endTime < rhs.endTime) return true;\nif (rhs.endTime < endTime) return false;\nreturn profit < rhs.profit;\n}\n};\n\nint firstEqualOrGreater(const map<int, int>& bestStartingFrom, int key) {\nreturn bestStartingFrom.upper_bound(key - 1)->second;\n}\n\nint jobScheduling(vector<int>& startTime, vector<int>& endTime, vector<int>& profit) {\nvector<Job> jobs;\nfor (size_t i = 0; i < startTime.size(); i++) {\njobs.emplace_back(startTime[i], endTime[i], profit[i]);\n}\n// Sorting this vector is O(n lg n).\nsort(begin(jobs), end(jobs));\nmap<int, int> bestStartingFrom;\nbestStartingFrom[numeric_limits<int>::max()] = 0;\n// Querying and inserting n times here is O(n lg n).\nfor (auto it = rbegin(jobs); it != rend(jobs); it++) {\nconst auto a = 
firstEqualOrGreater(bestStartingFrom, it->startTime);\nconst auto b = it->profit + firstEqualOrGreater(bestStartingFrom, it->endTime);\nbestStartingFrom[it->startTime] = max(a, b);\n}\nreturn firstEqualOrGreater(bestStartingFrom, 0);\n}\n\n\n\n# Solution to Smallest Sufficient Team\n\nThis is my solution for the LeetCode problem number 1125, Smallest Sufficient Team.\n\nThis problem is a version of a well-known NP-complete problem in computer science known as the set cover problem. In order to make the solution fast enough to pass the time limit, redundant people are removed from the set of candidates. If $s_i$ denotes the set of skills of person $i$ and $s_i \\cup s_j = s_i$, person $j$ is redundant, as it does not do anything more than $i$ can and therefore is not required to find an optimal solution.\n\nAnother possible optimization would be to determine if there are any required people. Where a required person would be one that possesses a skill no other person does. This person simply has to be part of the solution and can be removed from the search and added to the solution unconditionally.\n\nThe required skills and each person’s skills are represented by 16-bit unsigned integers, because computing their union is much faster than computing the union of vectors of strings. After this, it basically becomes a matter of considering each person and exhaustively testing all of the possible solutions. Sorting the vector of people from “most skilled” to “least skilled” may also speed up the search considerably. Lastly, the search can stop whenever the current solution being built gets to be as large as the best one found so far.\n\n# Solution to Best Time to Buy and Sell Stock with Cooldown\n\nThis is my solution for the LeetCode problem number 309, Best Time to Buy and Sell Stock with Cooldown.\n\nThis is a quite simple problem which can be addressed in O(1) space and O(n) time using dynamic programming. However, the O(n) space solution seems easier to arrive at. 
The subproblem explored through dynamic programming is that starting on day $i$ with no stock and after the cooldown has ended is equivalent to solving the original problem with only days $i$ through $n$.\n\nIn this solution, 0 is used to indicate that no stock is owned, 1 is used to indicate that stock is owned and 2 is used to indicate cooldown. The algorithm keeps three values for each day: the maximum profit by starting without stock, the maximum profit by starting with stock and the maximum profit by starting under cooldown. After the bottom-up solving is done, the answer to the problem question is what is the maximum profit for starting at the first day without stock.\n\nint maxProfit(const vector<int> &prices) {\nvector<array<int, 3>> m(prices.size() + 1);\nfor (int i = prices.size() - 1; i >= 0; i--) {\nm[i][0] = max(m[i + 1][0], m[i + 1][1] - prices[i]);\nm[i][1] = max(m[i + 1][1], m[i + 1][2] + prices[i]);\nm[i][2] = max(m[i + 1][2], m[i + 1][0]);\n}\nreturn m[0][0];\n}\n\n\n\nNow, because evaluating day $i$ only requires information about day $i + 1$, the vector of arrays can become just two arrays, as follows.\n\nint maxProfit(const vector<int> &prices) {\narray<int, 3> d1{};\narray<int, 3> d2{};\nfor (int i = prices.size() - 1; i >= 0; i--) {\nd2 = d1;\nd1[0] = max(d2[0], d2[1] - prices[i]);\nd1[1] = max(d2[1], d2[2] + prices[i]);\nd1[2] = max(d2[2], d2[0]);\n}\nreturn d1[0];\n}\n\n\n\n# Path of Exile calculators\n\nI recently developed a web-based Path of Exile calculator so that it is more practical to estimate the number of divine orbs required to get at least a certain set of rolls.\n\nIt can be found on its own page.\n\nThe JavaScript source code is not obfuscated so it is easier to reuse it if you want to.\n\n# The Future of WebAssembly\n\nIt is a really good thing to see what came after asm.js going so far. If we are transpiling to JS anyway, might as well compile to something which runs faster." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8557515,"math_prob":0.98720795,"size":5742,"snap":"2020-34-2020-40","text_gpt3_token_len":1407,"char_repetition_ratio":0.120250955,"word_repetition_ratio":0.04025424,"special_character_ratio":0.26471612,"punctuation_ratio":0.12631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.990029,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T20:32:18Z\",\"WARC-Record-ID\":\"<urn:uuid:1fa150a0-5d18-49d6-91f0-97b351508d4f>\",\"Content-Length\":\"31718\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd6bd87f-4f49-4e9e-87c2-907002081468>\",\"WARC-Concurrent-To\":\"<urn:uuid:8657e58c-f628-474c-a59e-d4b3041717e5>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://www.bernardosulzbach.com/\",\"WARC-Payload-Digest\":\"sha1:6NWX6RKU2MR5YZFMFJIT73CQL55XNCJZ\",\"WARC-Block-Digest\":\"sha1:R4J5JACSIBUHGZS4LBADNXLVAYL6LVYX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439741154.98_warc_CC-MAIN-20200815184756-20200815214756-00558.warc.gz\"}"}
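The cooldown recurrence in the post above (states 0 = no stock, 1 = holding, 2 = cooldown, evaluated backwards) ports directly to other languages. Here is my Python rendition of the O(1)-space version, a translation rather than the author's code:

```python
def max_profit_with_cooldown(prices):
    """Max profit with one-day cooldown after each sell.

    States: 0 = no stock, 1 = holding stock, 2 = in cooldown.
    nxt[s] is the best profit from day i + 1 onward starting in state s.
    """
    nxt = [0, 0, 0]
    for price in reversed(prices):
        cur = [0, 0, 0]
        cur[0] = max(nxt[0], nxt[1] - price)  # do nothing, or buy today
        cur[1] = max(nxt[1], nxt[2] + price)  # keep holding, or sell into cooldown
        cur[2] = nxt[0]                       # cooldown expires after one day
        nxt = cur
    return nxt[0]

print(max_profit_with_cooldown([1, 2, 3, 0, 2]))  # classic example: 3
```

On `[1, 2, 3, 0, 2]` the optimum is buy at 1, sell at 2, cooldown, buy at 0, sell at 2, for a profit of 3, which matches the C++ solution's result.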
https://docs.splunk.com/Documentation/MLApp/5.2.0/User/Customvisualizations
[ "", null, "# Custom visualizations in the Machine Learning Toolkit\n\nThe Splunk Machine Learning Toolkit includes several reusable custom visualizations that you can use in your own dashboards. Each visualization expects data in a certain format with certain fields, which you can see in the syntax portion of the visualization descriptions.\n\n## Custom visualization workflow\n\n1. Run a search from the Search page in the Splunk Machine Learning Toolkit or the default Search & Reporting app on the Splunk platform.\n\n2. Click the Visualization tab, then click the menu at the top left to display available visualizations.", null, "3. Select a visualization.\n\nYou can use these custom visualizations on any Splunk platform instance on which the Splunk Machine Learning Toolkit is installed.\n\nMany of these visualizations also display within the Machine Learning Toolkit Assistants. For more information on step-by-step Assistant options, see MLTK guided workflows.\n\n## 3D Scatter Plot\n\nUse the 3D Scatter Plot to see patterns in your data. Look for clusters of similar data points, or drill down to identify singular data points.\n\nUsers upgrading to version 4.4.0 of the MLTK where a custom theme is in place for the 3D Scatter Plot must change the 3D Scatter Plot background color format setting to the new option of Auto for the visualization to adhere to your global light/dark Splunk dashboard theme.\n\nSearch fragment\n\n```search_fragment = | table clusterId x y z [clusterColor]\n```\n\nSyntax\n\n```| eval clusterColor = case(clusterId=0, \"teal\", clusterId=2, \"#09B1DF\")\n| table clusterId x y z clusterColor\n```\n\nThe `clusterColor` parameter is optional. The `clusterColor` parameter supports written color names or any hex color code. To review the list of supported color names, see the GitHub bahamas10 css color names. 
If no `clusterColor` parameter is provided the scatter plot uses default css colors supported in all modern web browsers.\n\nThe `| table clusterId x y z` line must be provided for the visualization to render properly.\n\nExample\n\nThe following example uses 3D Scatter Plot on a test set.\n\n```| inputlookup firewall_traffic.csv\n| table clusterId x y z clusterColor\n```\n\nExample output\n\nThe following example shows 3D Scatter Plot on a test set.\n\n## Boxplot Chart\n\nUse the Boxplot Chart to show the minimum, lower quartile, median, upper quartile, and maximum of each field.\n\nBoxplot requires the input of the macro `| `boxplot`` in order to render. Failing to include the macro displays an error.\n\nSearch fragment\n\n```search_fragment = | boxplot ...\n```\n\nThe box plot chart visualization expects five rows corresponding to min, max, median, lower quartile and upper quartile, in any order.\n\n• `exactperc25` is the lower quartile\n• `exactperc75` is the upper quartile\n\nExample\n\nThe following example uses Boxplot Chart on a test set.\n\n` | inputlookup app_usage.csv | `boxplot``\n\nExample output\n\nThe following image shows Boxplot Chart on a test set.\n\n## Distribution Plot\n\nUse the Distribution Plot to show the output of the DensityFunction algorithm. This visualization can be called with either the `fit` or `apply` commands.\n\nThis visualization requires the use of `fit DensityFunction` or `apply` in combination with `show_density=True show_options=\"feature_variables, split_by, params\"`.\n\nSearch fragment\n\n```search_fragment = | fit DensityFunction <field> [by \"<fields>\"]\nshow_density=True\nshow_options=\"feature_variables, split_by, params\"\n```\n\nExample\n\nThe following example uses Distribution Plot on a test set.\n\n`... 
| fit DensityFunction \"quantity\" by \"shop_id\" dist=auto threshold=0.01 show_density=True show_options=\"feature_variables,split_by,params\"...`\n\nExample output\n\nThe following example shows Distribution Plot on a test set.\n\n## Downsampled Line Chart\n\nUse the Downsampled Line Chart to show values and trends over time, implementing downsampling to show large numbers of points.\n\nSearch fragment\n\n```search_fragment = | table <x_axis> <y_axis_1> <y_axis_2> ...\n```\n\nExample\n\nThe following example uses Downsampled Line Chart on a test set.\n\n`... | table _time, \"median_house_value\", \"predicted(median_house_value)\" ...`\n\nExample output\n\nThe following image shows the Actual vs. Predicted Line Chart and the Residuals Line Chart that are also available when using the Predict Numeric Fields Assistant.\n\n## Forecast Chart\n\nUse the Forecast Chart to show the forecasted value for data. This visualization is available in the Forecast Time Series Assistant and Smart Forecasting Assistant, which use different macros to produce the output:\n\n• The Forecast Time Series Assistant uses the `fit` or `predict` commands with the ARIMA algorithm.\n• The Smart Forecasting Assistant uses the `fit` command with the StateSpaceForecast algorithm.\n\nSearch fragment\n\n`search_fragment = | timechart count [by comparison_category] | modvizpredict (<field>, <algorithm>, <future_timespan>, <holdback>, <confidence_interval>)`\n\nSyntax\n\n``` | fit ARIMA [_time] <field_to_forecast> order=<int>-<int>-<int> [forecast_k=<int>] [conf_interval=<int>] [holdback=<int>] | `forecastviz(<forecast_k>, <holdback>, <field_to_forecast>, <conf_interval>)`\n```\n``` | fit StateSpaceForecast variable_name1 [variable_name2] [variable_name3] [variable_name4] [variable_name5] output_metadata=true [conf_interval=<int>] | `smartforecastviz(<variable_name1> [,<variable_name2>] [, <variable_name3>] [, <variable_name4>] [, <variable_name5>])`\n```\n\nExamples\n\nThe following examples use 
Forecast Chart on a test set.\n\n```| inputlookup exchange.csv | fit ARIMA _time rate holdback=5 conf_interval=95 order=1-0-1 forecast_k=10 as prediction | `forecastviz(10, 5, \"rate\", 95)`\n```\n```| inputlookup app_usage.csv | fields CRM ERP Expenses | fit StateSpaceForecast CRM ERP output_metadata=true holdback=0 forecast_k=50 conf_interval=50 into app_usage_model | `smartforecastviz(CRM, ERP)`\n```\n\nExample output\n\nThe following image shows the Forecast Chart on test data.\n\n## Heatmap Plot\n\nUse the Heatmap Plot to show data values as colors in a table matrix.\n\nSearch fragment\n\n```search_fragment = | confusionmatrix (<x_axis>, <y_axis>)\n```\n\nExample\n\nThe following example uses Heatmap Plot on a test set.\n\n`| inputlookup firewall_traffic.csv | head 50000 | fit AutoPrediction \"has_known_vulnerability\" from \"bytes_received\" \"packets_received\" \"packets_sent\" \"bytes_sent\" \"used_by_malware\" test_split_ratio=0.3 into \"default_model_name\" | eval \"_split\"=case('_split'=\"Test\", \"Testing\", '_split'=\"Training\", \"Training\") | where '_split'=\"Testing\" | `confusionmatrix(\"has_known_vulnerability\", \"predicted(has_known_vulnerability)\")`.`\n\nExample output\n\nThe following example shows Heatmap Plot on a test set.\n\n## Histogram Chart\n\nUse the Histogram Chart to show continuous data as bucketed by the `bin` command.\n\nSearch fragment\n\n```search_fragment = | histogram (<field, bins>)\n```\n\nExample\n\nThe following example uses Histogram Chart on a test set.\n\n`... | bin residual bins=100 ...`\n\nExample output\n\nThe following image shows the Residuals Histogram on a test set.\n\n## Outliers Chart\n\nUse the Outliers Chart to show the acceptable range for a value and to highlight the points that are outside of this range.\n\nSearch fragment\n\n```search_fragment = | table _time, <outlier_variable>, <lower_bound>, <upper_bound>\n```\n\nExample\n\nThe following example uses Outliers Chart on a test set.\n\n`... 
| table _time, quantity, lowerBound, upperBound, isOutlier ...`\n\nExample output\n\nThe following image shows the Outliers Chart on a test set.\n\n## Scatter Line Chart\n\nUse the Scatter Line Chart to show the relationships between discrete values in two dimensions, as well as an additional identity (x=y) line.\n\nSearch fragment\n\n```search_fragment = | table <x_axis> <y_axis>\n```\n\nExample\n\nThe following example uses Scatter Line Chart on a test set.\n\n`... | table \"median_house_value\" \"predicted(median_house_value)\" ...`\n\nExample output\n\nThe following image shows the Scatter Line Chart on a test set.\n\n## Scatterplot Matrix\n\nUse the Scatterplot Matrix to show the relationships between discrete values in multiple dimensions.\n\nAll field values must be numeric in order to render the Scatterplot Matrix.\n\nSearch fragment\n\n```search_fragment = | table <name_category>, <dimension_1>, <dimension_2>, <dimension_3> ...\n```\n\nExample\n\nThe following example uses Scatterplot Matrix on a test set.\n\n`... | table cluster, \"avg_rooms_per_dwelling\", \"business_acres\", \"median_house_value\" ...`\n\nExample output\n\nThe following example shows the Scatterplot Matrix on a test set." ]
https://courses.gameinstitute.com/visitor_catalog_class/show/8666/
[ "# C++ Programming for Game Development\n\nCourse\nAccess code required\n\n## Modules\n\nHere is the course outline:\n\n## 1. Environment & Core Language Features\n\nGoals: Create, compile, link and execute C++ programs. Find out how C++ code is transformed into machine code. Learn some of the basic C++ features necessary for every C++ program. Discover how to output and input text information to and from the user. Understand the concept of variables. Perform simple arithmetic operations in C++.\n\n## 2. Conditionals, Loops, & Arrays\n\nGoals: Understand and evaluate logical expressions. Form and apply conditional, if...then, statements. Discover how to execute a block of code repeatedly using various kinds of loops. Learn how to create containers of variables and how to manipulate the individual elements in those containers.\n\n## 3. Functions\n\nGoals: Understand and construct logical code groupings/tasks as functions. Understand the various definitions for scope as it pertains to variable declarations. Understand how to use code libraries for common tasks in mathematics and for random number generation. Understand function parameter overloading and the concept of the default parameter.\n\n## 4. References & Pointers\n\nGoals: Become familiar with reference and pointer syntax. Understand how C++ passes array arguments into functions. Discover how to return multiple return values from a function. Learn how to create and destroy memory at runtime (i.e., while the program is running).\n\n## 5. Object Oriented Programming (OOP)\n\nGoals: Understand the problems object oriented programming attempts to solve. Define a class and instantiate members of that class. Learn some basic class design strategies.\n\n## 6. Strings & Miscellaneous\n\nGoals: Understand how C++ natively describes strings. Learn some important standard library string functions. Review std::string and become familiar with some of its methods. Become familiar with the this pointer. 
Learn about the friend and static keywords. Discover how to create your own namespaces. Understand what enumerated types are, how they are defined in C++, and when they would be used.\n\n## 7. Operator Overloading\n\nGoals: Learn how to overload the arithmetic operators. Discover how to overload the relational operators. Overload the conversion operators. Understand the difference between deep copies and shallow copies. Find out how to overload the assignment operator and copy constructor to perform deep copies.\n\n## 8. File Input/Output\n\nGoals: Learn how to load and save text files. Learn how to load and save binary files.\n\n## 9. Inheritance & Polymorphism\n\nGoals: Understand what inheritance means in C++ and why it is a useful code construct. Understand the syntax of polymorphism, how it works, and why it is useful. Learn how to create general abstract types and interfaces.\n\n## 10. Templates\n\nGoals: Learn how to design and implement generic classes. Learn how to define generic functions.\n\n## 11. Exception Handling\n\nGoals: Understand the method of catching errors via function return codes, and the shortcomings of this method. Become familiar with the concepts of exception handling, its syntax, and its benefits. Learn how to write assumption verification code using asserts.\n\n## 12. Number Systems\n\nGoals: Learn how to represent numbers with the binary and hexadecimal numbering systems, how to perform basic arithmetic in these numbering systems, and how to convert between these numbering systems as well as the base ten numbering system. Gain an understanding of how the computer describes intrinsic C++ types internally. Become proficient with the various binary operations. Become familiar with the way in which floating-point numbers are represented internally.\n\n## 13. The Standard Template Library\n\nGoals: Discover how lists, stacks, queues, deques, and maps work internally, and in which situations they should be used. 
Become familiar with a handful of the generic algorithms the standard library provides and how to apply these algorithms on a variety of data structures. Learn how to create objects that act like functions, called functors, and learn how to create and use predicates with the standard library.\n\nRuntime: 35m 26s\n\n## 15. Transitioning from C++ to C# (Optional)\n\nGoals: Use your C++ knowledge to learn C# in about 90 minutes. An excellent supplemental tutorial from Derek Banas on YouTube. This is an optional module, but it is strongly recommended for students who intend to pursue our Unity training, where C# is the primary language used in those videos." ]
https://www.sidefx.com/docs/houdini/vex/functions/volumeindexorigin.html
[ "# volumeindexorigin VEX function\n\nGets the index of the bottom left of a volume primitive.\n\n``` vector  volumeindexorigin(<geometry>geometry, int primnum) ```\n\n``` vector  volumeindexorigin(<geometry>geometry, string volumename) ```\n\nReturns\n\nThe index of the bottom left of a volume primitive. For Volume primitives, this is always zero. However, for VDB primitives, this represents the bottom left of their active bounding box of voxels.\n\nReturns 0 if `primnum` is out of range, the geometry is invalid, or the given primitive is not a volume primitive.\n\n volume\n\n# VEX Functions\n\n## Arrays\n\n• Adds an item to an array or string.\n\n• Returns the indices of a sorted version of an array.\n\n• Efficiently creates an array from its arguments.\n\n• Loops over the items in an array, with optional enumeration.\n\n• Inserts an item, array, or string into an array or string.\n\n• Checks if the index given is valid for the array or string given.\n\n• Returns the length of an array.\n\n• Removes the last element of an array and returns it.\n\n• Adds an item to an array.\n\n• Removes an item at the given index from an array.\n\n• Removes an item from an array.\n\n• Reorders items in an array or string.\n\n• Sets the length of an array.\n\n• Returns an array or string in reverse order.\n\n• Slices a sub-string or sub-array of a string or array.\n\n• Returns the array sorted in increasing order.\n\n• Adds a uniform item to an array.\n\n## Attributes and Intrinsics\n\n• Adds an attribute to a geometry.\n\n• Adds a detail attribute to a geometry.\n\n• Adds a point attribute to a geometry.\n\n• Adds a primitive attribute to a geometry.\n\n• Adds a vertex attribute to a geometry.\n\n• Appends to a geometry’s visualizer detail attribute.\n\n• Reads the value of an attribute from geometry.\n\n• Returns the class of a geometry attribute.\n\n• Returns the data id of a geometry attribute.\n\n• Returns the size of a geometry attribute.\n\n• Returns the type of a geometry 
attribute.\n\n• Returns the transformation metadata of a geometry attribute.\n\n• Reads the value of a detail attribute value from a geometry.\n\n• Reads a detail attribute value from a geometry.\n\n• Returns the size of a geometry detail attribute.\n\n• Returns the type of a geometry detail attribute.\n\n• Returns the type info of a geometry attribute.\n\n• Reads the value of a detail intrinsic from a geometry.\n\n• Finds a primitive/point/vertex that has a certain attribute value.\n\n• Returns number of elements where an integer or string attribute has a certain value.\n\n• Reads an attribute value from geometry, with validity check.\n\n• Copies the value of a geometry attribute into a variable and returns a success flag.\n\n• Checks whether a geometry attribute exists.\n\n• Returns if a geometry detail attribute exists.\n\n• Returns if a geometry point attribute exists.\n\n• Returns if a geometry prim attribute exists.\n\n• Returns if a geometry vertex attribute exists.\n\n• Finds a point by its id attribute.\n\n• Finds a primitive by its id attribute.\n\n• Finds a point by its name attribute.\n\n• Finds a primitive by its name attribute.\n\n• Returns the number of unique values from an integer or string attribute.\n\n• Reads a point attribute value from a geometry.\n\n• Reads a point attribute value from a geometry and outputs a success/fail flag.\n\n• Returns the size of a geometry point attribute.\n\n• Returns the type of a geometry point attribute.\n\n• Returns the type info of a geometry attribute.\n\n• Returns an array of point localtransforms from an array of point indices.\n\n• Returns a point transform from a point index.\n\n• Returns a rigid point transform from a point index.\n\n• Returns an array of point transforms from an array of point indices.\n\n• Returns an array of rigid point transforms from an array of point indices.\n\n• Reads a primitive attribute value from a geometry.\n\n• Interpolates the value of an attribute at a certain parametric 
(u, v) position and copies it into a variable.\n\n• Evaluates the length of an arc on a primitive defined by an array of points using parametric uv coordinates.\n\n• Evaluates the length of an arc on a primitive using parametric uv coordinates.\n\n• Reads a primitive attribute value from a geometry, outputting a success flag.\n\n• Returns the size of a geometry prim attribute.\n\n• Returns the type of a geometry prim attribute.\n\n• Returns the type info of a geometry attribute.\n\n• Returns position derivative on a primitive at a certain parametric (u, v) position.\n\n• Reads a primitive intrinsic from a geometry.\n\n• Interpolates the value of an attribute at a certain parametric (uvw) position.\n\n• Convert parametric UV locations on curve primitives between different spaces.\n\n• Writes an attribute value to geometry.\n\n• Sets the meaning of an attribute in geometry.\n\n• Sets a detail attribute in a geometry.\n\n• Sets the value of a writeable detail intrinsic attribute.\n\n• Sets a point attribute in a geometry.\n\n• Sets an array of point local transforms at the given point indices.\n\n• Sets the world space transform of a given point\n\n• Sets an array of point transforms at the given point indices.\n\n• Sets a primitive attribute in a geometry.\n\n• Sets the value of a writeable primitive intrinsic attribute.\n\n• Sets a vertex attribute in a geometry.\n\n• Returns one of the set of unique values across all values for an int or string attribute.\n\n• Returns the set of unique values across all values for an int or string attribute.\n\n• Interpolates the value of an attribute at certain UV coordinates using a UV attribute.\n\n• Reads a vertex attribute value from a geometry.\n\n• Reads a vertex attribute value from a geometry.\n\n• Returns the size of a geometry vertex attribute.\n\n• Returns the type of a geometry vertex attribute.\n\n• Returns the type info of a geometry attribute.\n\n## BSDFs\n\n• Returns the albedo (percentage of reflected light) for a 
bsdf given the outgoing light direction.\n\n• Returns a specular BSDF using the Ashikhmin shading model.\n\n• Returns a Blinn BSDF or computes Blinn shading.\n\n• Returns a cone reflection BSDF.\n\n• Creates a bsdf object from two CVEX shader strings.\n\n• Returns a diffuse BSDF or computes diffuse shading.\n\n• Evaluates a bsdf given two vectors.\n\n• Returns a ggx BSDF.\n\n• Returns a BSDF for shading hair.\n\n• Returns an anisotropic volumetric BSDF, which can scatter light forward or backward.\n\n• Returns an isotropic BSDF, which scatters light equally in all directions.\n\n• Returns a new BSDF that only includes the components specified by the mask.\n\n• Returns the normal for the diffuse component of a BSDF.\n\n• Returns a Phong BSDF or computes Phong shading.\n\n• Samples a BSDF.\n\n• Computes the solid angle (in steradians) a BSDF function subtends.\n\n• Splits a bsdf into its component lobes.\n\n• Creates an approximate SSS BSDF.\n\n• Returns a specular BSDF or computes specular shading.\n\n## CHOP\n\n• Adds new channels to a CHOP node.\n\n• Reads from a CHOP attribute.\n\n• Reads CHOP attribute names of a given attribute class from a CHOP input.\n\n• Returns the sample number of the last sample in a given CHOP input.\n\n• Returns the frame corresponding to the last sample of the input specified.\n\n• Returns the time corresponding to the last sample of the input specified.\n\n• Returns the channel index from an input given a channel name.\n\n• Returns the value of a channel at the specified sample.\n\n• Computes the minimum and maximum value of samples in an input channel.\n\n• Returns all the CHOP channel names of a given CHOP input.\n\n• Returns the number of channels in the input specified.\n\n• Returns the value of a CHOP channel at the specified sample.\n\n• Returns the value of a CHOP local transform channel at the specified sample.\n\n• Returns the value of a CHOP local transform channel at the specified sample and evaluation time.\n\n• 
Returns the value of a CHOP channel at the specified sample and evaluation time.\n\n• Returns the sample rate of the input specified.\n\n• Returns the value of CHOP context temporary buffer at the specified index.\n\n• Removes channels from a CHOP node.\n\n• Removes a CHOP attribute.\n\n• Renames a CHOP channel.\n\n• Resize the CHOP context temporary buffer\n\n• Sets the value of a CHOP attribute.\n\n• Sets the length of the CHOP channel data.\n\n• Sets the sampling rate of the CHOP channel data.\n\n• Sets the CHOP start sample in the channel data.\n\n• Returns the start sample of the input specified.\n\n• Returns the frame corresponding to the first sample of the input specified.\n\n• Returns the time corresponding to the first sample of the input specified.\n\n• Writes a value of CHOP context temporary buffer at the specified index.\n\n• Returns 1 if the Vex CHOP’s Unit Menu is currently set to 'frames', 0 otherwise.\n\n• Returns 1 if the Vex CHOP’s Unit Menu is currently set to 'samples', 0 otherwise.\n\n• Returns 1 if the Vex CHOP’s Unit Menu is currently set to 'seconds', 0 otherwise.\n\n• Returns the number of inputs.\n\n## color\n\n• Compute the color value of an incandescent black body.\n\n• Transforms between color spaces.\n\n• Compute the luminance of the RGB color specified by the parameters.\n\n## Conversion\n\n• Converts a string to a float.\n\n• Converts a string to an integer.\n\n• Depending on the value of c, returns the translate (c=0), rotate (c=1), scale (c=2), or shears (c=3) component of the transform (xform).\n\n• Converts the argument from radians into degrees.\n\n• Creates a vector4 representing a quaternion from euler angles.\n\n• Convert HSV color space into RGB color space.\n\n• Converts a quaternion represented by a vector4 to a matrix3 representation.\n\n• Creates a euler angle representing a quaternion.\n\n• Converts the argument from degrees into radians.\n\n• Convert RGB color space to HSV color space.\n\n• Convert a linear sRGB 
triplet to CIE XYZ tristimulus values.\n\n• Flattens an array of vector or matrix types into an array of floats.\n\n• Turns a flat array of floats into an array of vectors or matrices.\n\n• Convert CIE XYZ tristimulus values to a linear sRGB triplet.\n\n## Crowds\n\n• Add a clip into an agent’s definition.\n\n• Returns the names of the channels in an agent primitive’s rig.\n\n• Returns the current value of an agent primitive’s channel.\n\n• Returns the current values of an agent primitive’s channels.\n\n• Returns all of the animation clips that have been loaded for an agent primitive.\n\n• Finds the index of a channel in an agent’s animation clip.\n\n• Returns the names of the channels in an agent’s animation clip.\n\n• Returns the length (in seconds) of an agent’s animation clip.\n\n• Returns an agent primitive’s current animation clips.\n\n• Samples a channel of an agent’s clip at a specific time.\n\n• Samples an agent’s animation clip at a specific time.\n\n• Returns the sample rate of an agent’s animation clip.\n\n• Samples an agent’s animation clip at a specific time.\n\n• Returns the start time (in seconds) of an agent’s animation clip.\n\n• Returns the current times for an agent primitive’s animation clips.\n\n• Returns the transform groups for an agent primitive’s current animation clips.\n\n• Returns the blend weights for an agent primitive’s animation clips.\n\n• Returns the name of the collision layer of an agent primitive.\n\n• Returns the name of the current layer of an agent primitive.\n\n• Finds the index of a transform group in an agent’s definition.\n\n• Returns the transform that each shape in an agent’s layer is bound to.\n\n• Returns all of the layers that have been loaded for an agent primitive.\n\n• Returns the names of the shapes referenced by an agent primitive’s layer.\n\n• Returns the current local space transform of an agent primitive’s bone.\n\n• Returns the current local space transforms of an agent primitive.\n\n• Returns the agent 
definition’s metadata dictionary.\n\n• Returns the child transforms of a transform in an agent primitive’s rig.\n\n• Finds the index of a transform in an agent primitive’s rig.\n\n• Finds the index of a channel in an agent primitive’s rig.\n\n• Returns the parent transform of a transform in an agent primitive’s rig.\n\n• Applies a full-body inverse kinematics algorithm to an agent’s skeleton.\n\n• Returns the number of transforms in an agent primitive’s rig.\n\n• Returns whether a transform is a member of the specified transform group.\n\n• Returns whether a channel is a member of the specified transform group.\n\n• Returns the names of the transform groups in an agent’s definition.\n\n• Returns the weight of a member of the specified transform group.\n\n• Returns the name of each transform in an agent primitive’s rig.\n\n• Converts transforms from world space to local space for an agent primitive.\n\n• Converts transforms from local space to world space for an agent primitive.\n\n• Returns the current world space transform of an agent primitive’s bone.\n\n• Returns the current world space transforms of an agent primitive.\n\n• Overrides the value of an agent primitive’s channel.\n\n• Overrides the values of an agent primitive’s channels.\n\n• Sets the current animation clips for an agent primitive.\n\n• Sets the animation clips that an agent should use to compute its transforms.\n\n• Sets the current times for an agent primitive’s animation clips.\n\n• Sets the blend weights for an agent primitive’s animation clips.\n\n• Sets the collision layer of an agent primitive.\n\n• Sets the current layer of an agent primitive.\n\n• Overrides the local space transform of an agent primitive’s bone.\n\n• Overrides the local space transforms of an agent primitive.\n\n• Overrides the world space transform of an agent primitive’s bone.\n\n• Overrides the world space transforms of an agent primitive.\n\n## dict\n\n• Converts a VEX dictionary into a JSON string.\n\n• Converts a 
JSON string into a VEX dictionary.\n\n• Returns all the keys in a dictionary.\n\n## displace\n\n• Reads a variable from the displacement shader for the surface.\n\n## File I/O\n\n• Returns file system status for a given file.\n\n## filter\n\n• Computes an importance sample based on the given filter type and input uv.\n\n## Geometry\n\n• Adds a point to the geometry.\n\n• Adds a primitive to the geometry.\n\n• Adds a vertex to a primitive in a geometry.\n\n• Clip the line segment between p0 and p1.\n\n• Returns a handle to the current geometry.\n\n• Returns an oppath: string to unwrap the geometry in-place.\n\n• Returns 1 if the edge specified by the point pair is in the group specified by the string.\n\n• This function computes the first intersection of a ray with geometry.\n\n• Computes all intersections of the specified ray with geometry.\n\n• Finds the closest position on the surface of a geometry.\n\n• Finds the closest point in a geometry.\n\n• Finds the all the closest point in a geometry.\n\n• Returns the number of edges in the group.\n\n• Returns the point number of the next point connected to a given point.\n\n• Returns the number of points that are connected to the specified point.\n\n• Returns an array of the point numbers of the neighbours of a point.\n\n• Returns the number of points in the input or geometry file.\n\n• Returns the number of primitives in the input or geometry file.\n\n• Returns the number of vertices in the input or geometry file.\n\n• Returns the number of vertices in the group.\n\n• Returns the list of primitives containing a point.\n\n• Returns a linear vertex number of a point in a geometry.\n\n• Returns the list of vertices connected to a point.\n\n• Returns an array of the primitive numbers of the edge-neighbours of a polygon.\n\n• Returns a list of primitives potentially intersecting a given bounding box.\n\n• Converts a primitive/vertex pair into a point number.\n\n• Returns the list of points on a primitive.\n\n• Converts a 
primitive/vertex pair into a linear vertex.\n\n• Returns number of vertices in a primitive in a geometry.\n\n• Returns the list of vertices on a primitive.\n\n• Removes a point from the geometry.\n\n• Removes a primitive from the geometry.\n\n• Removes a vertex from the geometry.\n\n• Sets edge group membership in a geometry.\n\n• Rewires a vertex in the geometry to a different point.\n\n• Rewires a vertex in the geometry to a different point.\n\n• This function computes the intersection of the specified ray with the geometry in uv space.\n\n• Converts a primitive/vertex pair into a linear vertex.\n\n• Returns the linear vertex number of the next vertex sharing a point with a given vertex.\n\n• Returns the point number of linear vertex in a geometry.\n\n• Returns the linear vertex number of the previous vertex sharing a point with a given vertex.\n\n• Returns the number of the primitive containing a given vertex.\n\n• Converts a linear vertex index into a primitive vertex number.\n\n## groups\n\n• Returns an array of point numbers corresponding to a group string.\n\n• Returns an array of prim numbers corresponding to a group string.\n\n• Returns an array of linear vertex numbers corresponding to a group string.\n\n• Returns 1 if the point specified by the point number is in the group specified by the string.\n\n• Returns 1 if the primitive specified by the primitive number is in the group specified by the string.\n\n• Returns 1 if the vertex specified by the vertex number is in the group specified by the string.\n\n• Returns the number of points in the group.\n\n• Returns the number of primitives in the group.\n\n• Adds or removes a point to/from a group in a geometry.\n\n• Adds or removes a primitive to/from a group in a geometry.\n\n• Adds or removes a vertex to/from a group in a geometry.\n\n## Half-edges\n\n• Returns the destination point of a half-edge.\n\n• Returns the destination vertex of a half-edge.\n\n• Returns the number of half-edges equivalent to a 
given half-edge.\n\n• Determines whether two half-edges are equivalent (represent the same edge).\n\n• Determines whether a half-edge number corresponds to a primary half-edge.\n\n• Determines whether a half-edge number corresponds to a valid half-edge.\n\n• Returns the half-edge that follows a given half-edge in its polygon.\n\n• Returns the next half-edges equivalent to a given half-edge.\n\n• Returns the point into which the vertex following the destination vertex of a half-edge in its primitive is wired.\n\n• Returns the vertex following the destination vertex of a half-edge in its primitive.\n\n• Returns the point into which the vertex that precedes the source vertex of a half-edge in its primitive is wired.\n\n• Returns the vertex that precedes the source vertex of a half-edge in its primitive.\n\n• Returns the half-edge that precedes a given half-edge in its polygon.\n\n• Returns the primitive that contains a half-edge.\n\n• Returns the primary half-edge equivalent to a given half-edge.\n\n• Returns the source point of a half-edge.\n\n• Returns the source vertex of a half-edge.\n\n• Finds and returns a half-edge with the given endpoints.\n\n• Finds and returns a half-edge with a given source point or with given source and destination points.\n\n• Returns the next half-edge with the same source as a given half-edge.\n\n• Returns one of the half-edges contained in a primitive.\n\n• Returns the half-edge which has a vertex as source.\n\n## Image Processing\n\n• Tells the COP manager that you need access to the given frame.\n\n• Returns the default name of the alpha plane (as it appears in the compositor preferences).\n\n• Samples a 2×2 pixel block around the given UV position, and bilinearly interpolates these pixels.\n\n• Returns the default name of the bump plane (as it appears in the compositor preferences).\n\n• Returns the name of a numbered channel.\n\n• Samples the exact (unfiltered) pixel color at the given coordinates.\n\n• Returns the default name 
of the color plane (as it appears in the compositor preferences).\n\n• Returns the default name of the depth plane (as it appears in the compositor preferences).\n\n• Reads the z-records stored in a pixel of a deep shadow map or deep camera map.\n\n• Returns fully filtered pixel input.\n\n• Queries if metadata exists on a composite operator.\n\n• Returns 1 if the plane specified by the parameter exists in this COP.\n\n• Returns the aspect ratio of the specified input.\n\n• Returns the channel name of the indexed plane of the given input.\n\n• Returns the last frame of the specified input.\n\n• Returns the end time of the specified input.\n\n• Returns 1 if the specified input has a plane named planename.\n\n• Returns the number of planes in the given input.\n\n• Returns the index of the plane named 'planename' in the specified input.\n\n• Returns the name of the plane specified by the planeindex of the given input.\n\n• Returns the number of components in the plane named planename in the specified input.\n\n• Returns the frame rate of the specified input.\n\n• Returns the starting frame of the specified input.\n\n• Returns the start time of the specified input.\n\n• Returns the X resolution of the specified input.\n\n• Returns the Y resolution of the specified input.\n\n• Returns the default name of the luminance plane (as it appears in the compositor preferences).\n\n• Returns the default name of the mask plane (as it appears in the compositor preferences).\n\n• Returns a metadata value from a composite operator.\n\n• Reads a component from a pixel and its eight neighbors.\n\n• Returns the default name of the normal plane (as it appears in the compositor preferences).\n\n• Returns the index of the plane specified by the parameter, starting at zero.\n\n• Returns the name of the plane specified by the index (e.\n\n• Returns the number of components in the plane (1 for scalar planes and up to 4 for vector planes).\n\n• Returns the default name of the point plane (as 
it appears in the compositor preferences).\n\n• Returns the default name of the velocity plane (as it appears in the compositor preferences).\n\n## Interpolation\n\n• Samples a Catmull-Rom (Cardinal) spline defined by position/value keys.\n\n• Returns value clamped between min and max.\n\n• Samples a Catmull-Rom (Cardinal) spline defined by uniformly spaced keys.\n\n• Takes the value in one range and shifts it to the corresponding value in a new range.\n\n• Takes the value in one range and shifts it to the corresponding value in a new range.\n\n• Takes the value in the range (0, 1) and shifts it to the corresponding value in a new range.\n\n• Takes the value in the range (1, 0) and shifts it to the corresponding value in a new range.\n\n• Takes the value in the range (-1, 1) and shifts it to the corresponding value in a new range.\n\n• Inverses a linear interpolation between the values.\n\n• Performs linear interpolation between the values.\n\n• Samples a polyline between the key points.\n\n• Samples a polyline defined by linearly spaced values.\n\n• Quaternion blend between q1 and q2 based on the bias.\n\n• Computes ease in/out interpolation between values.\n\n## light\n\n• Returns the color of ambient light in the scene.\n\n• Computes attenuated falloff.\n\n• Sends a ray from the position P along the direction specified by the direction D.\n\n• Sends a ray from the position P along direction D.\n\n## Math\n\n• Returns the derivative of the given value with respect to U.\n\n• Returns the derivative of the given value with respect to V.\n\n• Returns the derivative of the given value with respect to the 3rd axis (for volume rendering).\n\n• Returns the absolute value of the argument.\n\n• Returns the inverse cosine of the argument.\n\n• Returns the inverse sine of the argument.\n\n• Returns the inverse tangent of the argument.\n\n• Returns the inverse tangent of y/x.\n\n• Returns the average value of the input(s)\n\n• Returns the cube root of the argument.\n\n• 
Returns the smallest integer greater than or equal to the argument.\n\n• Combines Local and Parent Transforms with Scale Inheritance.\n\n• Returns the cosine of the argument.\n\n• Returns the hyperbolic cosine of the argument.\n\n• Returns the cross product between the two vectors.\n\n• Computes the determinant of the matrix.\n\n• Diagonalizes Symmetric Matrices.\n\n• Returns the dot product between the arguments.\n\n• Computes the eigenvalues of a 3×3 matrix.\n\n• Gauss error function.\n\n• Inverse Gauss error function.\n\n• Gauss error function’s complement.\n\n• Returns the exponential function of the argument.\n\n• Extracts Local Transform from a World Transform with Scale Inheritance.\n\n• Returns the largest integer less than or equal to the argument.\n\n• Returns the fractional component of the floating point number.\n\n• Returns an identity matrix.\n\n• Inverts a matrix.\n\n• Checks whether a value is a normal finite number.\n\n• Checks whether a value is not a number.\n\n• Returns an interpolated value along a curve defined by a basis and key/position pairs.\n\n• Returns the magnitude of a vector.\n\n• Returns the squared distance of the vector or vector4.\n\n• Returns the natural logarithm of the argument.\n\n• Returns the logarithm (base 10) of the argument.\n\n• Creates an orthonormal basis given a z-axis vector.\n\n• Returns a normalized vector.\n\n• Returns the outer product between the arguments.\n\n• Computes the intersection of a 3D sphere and an infinite 3D plane.\n\n• Raises the first argument to the power of the second argument.\n\n• Determines if a point is inside or outside a triangle circumcircle.\n\n• Determines if a point is inside or outside a tetrahedron circumsphere.\n\n• Determines the orientation of a point with respect to a line.\n\n• Determines the orientation of a point with respect to a plane.\n\n• Pre multiply matrices.\n\n• Returns the product of a list of numbers.\n\n• This function returns the closest distance between the point 
Q and a finite line segment between points P0 and P1.\n\n• Finds distance between two quaternions.\n\n• Inverts a quaternion rotation.\n\n• Multiplies two quaternions and returns the result.\n\n• Rotates a vector by a quaternion.\n\n• Creates a vector4 representing a quaternion.\n\n• Rounds the number to the closest whole number.\n\n• Bit-shifts an integer left.\n\n• Bit-shifts an integer right.\n\n• Bit-shifts an integer right.\n\n• Returns -1, 0, or 1 depending on the sign of the argument.\n\n• Returns the sine of the argument.\n\n• Returns the hyperbolic sine of the argument.\n\n• Finds the normal component of frame slid along a curve.\n\n• Solves a cubic function returning the number of real roots.\n\n• Finds the real roots of a polynomial.\n\n• Solves a quadratic function returning the number of real roots.\n\n• Finds the angles of a triangle from its sides.\n\n• Samples a value along a polyline or spline curve.\n\n• Generate a cumulative distribution function (CDF) by sampling a spline curve.\n\n• Returns the square root of the argument.\n\n• Returns the sum of a list of numbers.\n\n• Computes the singular value decomposition of a 3×3 matrix.\n\n• Returns the trigonometric tangent of the argument\n\n• Returns the hyperbolic tangent of the argument\n\n• Transposes the given matrix.\n\n• Removes the fractional part of a floating point number.\n\n## measure\n\n• Returns the distance between two points.\n\n• Returns the squared distance between the two points.\n\n• Sets two vectors to the minimum and maximum corners of the bounding box for the geometry.\n\n• Returns the center of the bounding box for the geometry.\n\n• Returns the maximum of the bounding box for the geometry.\n\n• Returns the minimum of the bounding box for the geometry.\n\n• Returns the size of the bounding box for the geometry.\n\n• Returns the bounding box of the geometry specified by the filename.\n\n• Sets two vectors to the minimum and maximum corners of the bounding box for the 
geometry.\n\n• Returns the center of the bounding box for the geometry.\n\n• Returns the maximum of the bounding box for the geometry.\n\n• Returns the minimum of the bounding box for the geometry.\n\n• Returns the size of the bounding box for the geometry.\n\n• Computes the distance and closest point of a point to an infinite plane.\n\n• Returns the relative position of the point given with respect to the bounding box of the geometry.\n\n• Returns the relative position of the point given with respect to the bounding box of the geometry.\n\n• Finds the distance of a point to a group of points along the surface of a geometry.\n\n• Finds the distance of a uv coordinate to a geometry in uv space.\n\n• Finds the distance of a point to a geometry.\n\n## metaball\n\n• Once you get a handle to a metaball using metastart and metanext, you can query attributes of the metaball with metaimport.\n\n• Takes the ray defined by p0 and p1 and partitions it into zero or more sub-intervals where each interval intersects a cluster of metaballs from filename.\n\n• Iterates to the next metaball in the list of metaballs returned by the metastart() function.\n\n• Opens a geometry file and returns a \"handle\" for the metaballs of interest, at the position p.\n\n• Returns the metaweight of the geometry at position p.\n\n## Nodes\n\n• Adds a mapping for an attribute to a local variable.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a key-value dictionary parameter and returns its value.\n\n• Evaluates a channel with a new segment expression.\n\n• Evaluates a channel with a new segment expression at a given frame.\n\n• Evaluates a channel with a new segment expression at a given time.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a channel (or
parameter) and returns its value.\n\n• Resolves a channel string (or parameter) and returns op_id, parm_index and vector_index.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates a ramp parameter and returns its value.\n\n• Evaluates the derivative of a parm parameter with respect to position.\n\n• Evaluates a channel (or parameter) and returns its value.\n\n• Evaluates an operator path parameter and returns the path to the operator.\n\n• Returns the raw string channel (or parameter).\n\n• Evaluates a channel or parameter, and returns its value.\n\n• Evaluates a channel or parameter, and returns its value.\n\n• Returns the capture transform associated with a Capture Region SOP.\n\n• Returns the deform transform associated with a Capture Region SOP.\n\n• Returns the capture or deform transform associated with a Capture Region SOP based on the global capture override flag.\n\n• Returns 1 if input_number is connected, or 0 if the input is not connected.\n\n• Returns the full path for the given relative path.\n\n• Resolves an operator path string and returns its op_id.\n\n• Returns the parent bone transform associated with an OP.\n\n• Returns the parent transform associated with an OP.\n\n• Returns the parm transform associated with an OP.\n\n• Returns the preconstraint transform associated with an OP.\n\n• Returns the pre and parm transform associated with an OP.\n\n• Returns the pre and raw parm transform associated with an OP.\n\n• Returns the pretransform associated with an OP.\n\n• Returns the raw parm transform associated with an OP.\n\n• Returns the transform associated with an OP.\n\n## Noise and Randomness\n\n• Generates \"alligator\" noise.\n\n• Computes divergence free noise based on Perlin noise.\n\n• Computes 2d divergence free noise based on Perlin noise.\n\n• Computes divergence free noise based on Simplex noise.\n\n• Computes 2d divergence free noise based on simplex noise.\n\n• Generates Worley (cellular) noise using a Chebyshev
distance metric.\n\n• Generates 1D and 3D Perlin Flow Noise from 3D and 4D data.\n\n• There are two forms of Perlin-style noise: a non-periodic noise which changes randomly throughout N-dimensional space, and a periodic form which repeats over a given range of space.\n\n• Generates noise matching the output of the HScript noise() expression function.\n\n• Produces the exact same results as the Houdini expression function of the same name.\n\n• Generates turbulence matching the output of the HScript turb() expression function.\n\n• Generates Worley (cellular) noise using a Manhattan distance metric.\n\n• There are two forms of Perlin-style noise: a non-periodic noise which changes randomly throughout N-dimensional space, and a periodic form which repeats over a given range of space.\n\n• Derivatives of Perlin Noise.\n\n• Non-deterministic random number generation function.\n\n• These functions are similar to wnoise and vnoise.\n\n• There are two forms of Perlin-style noise: a non-periodic noise which changes randomly throughout N-dimensional space, and a periodic form which repeats over a given range of space.\n\n• Periodic derivatives of Simplex Noise.\n\n• Creates a random number between 0 and 1 from a seed.\n\n• Generates a random number based on the position in 1-4D space.\n\n• Generates a uniformly distributed random number.\n\n• Hashes floating point numbers to integers.\n\n• Hashes integer numbers to integers.\n\n• Generates a random Poisson variable given the mean of the distribution and a seed.\n\n• Hashes a string to an integer.\n\n• Generates a uniformly distributed random number.\n\n• These functions are similar to wnoise.\n\n• Generates Voronoi (cellular) noise.\n\n• Generates Worley (cellular) noise.\n\n• Simplex noise is very close to Perlin noise, except with the samples on a simplex mesh rather than a grid. This results in fewer grid artifacts. It also uses a higher order bspline to provide better derivatives.
This is the periodic simplex noise.\n\n• Simplex noise is very close to Perlin noise, except with the samples on a simplex mesh rather than a grid. This results in fewer grid artifacts. It also uses a higher order bspline to provide better derivatives.\n\n• Derivatives of Simplex Noise.\n\n## normals\n\n• In shading contexts, computes a normal. In SOP contexts, sets how/whether to recompute normals.\n\n• Returns the normal of the primitive (prim_number) at parametric location u, v.\n\n## Open Color IO\n\n• Returns the names of active displays supported in Open Color IO.\n\n• Returns the names of active views supported in Open Color IO.\n\n• Imports attributes from OpenColorIO spaces.\n\n• Returns the names of roles supported in Open Color IO.\n\n• Returns the names of color spaces supported in Open Color IO.\n\n• Parses the color space from a string.\n\n• Transforms colors using Open Color IO.\n\n## particles\n\n• Samples the velocity field defined by a set of vortex filaments.\n\n## Point Clouds and 3D Images\n\n• Returns the value of the point attribute for the metaballs if metaball geometry is specified to i3dgen.\n\n• Returns the density of the metaball field if metaball geometry is specified to i3dgen.\n\n• Transforms the position specified into the \"local\" space of the metaball.\n\n• This function closes the handle associated with a pcopen function.\n\n• Returns a list of closest points from a file within a specified cone.\n\n• Returns a list of closest points from a file in a cone, taking into account their radii.\n\n• Writes data to a point cloud inside a pciterate or a pcunshaded loop.\n\n• Returns the distance to the farthest point found in the search performed by pcopen.\n\n• Filters points found by pcopen using a simple reconstruction filter.\n\n• Returns a list of closest points from a file.\n\n• Returns a list of closest points from a file taking into account their radii.\n\n• Generates a point cloud.\n\n• Imports channel data from a point cloud inside a
pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• Imports channel data from a point cloud outside a pciterate or a pcunshaded loop.\n\n• This function can be used to iterate over all the points which were found in the pcopen query.\n\n• Returns a list of closest points to an infinite line from a specified file\n\n• Returns a list of closest points to an infinite line from a specified file\n\n• This node returns the number of points found by pcopen.\n\n• Returns a handle to a point cloud file.\n\n• Returns a handle to a point cloud file.\n\n• Changes the current iteration point to a leaf descendant of the current aggregate point.\n\n• Returns a list of closest points to a line segment from a specified file\n\n• Returns a list of closest points to a line segment from a specified file\n\n• Iterate over all of the points of a read-write channel which haven’t had any data written to the channel yet.\n\n• Writes data to a point cloud file.\n\n• Returns a list of closest points from a file.\n\n• Samples a color from a photon map.\n\n• Returns the value of the 3d image at the position specified by P.\n\n• This function queries the 3D texture map specified and returns the bounding box information of the file.\n\n## Sampling\n\n• Creates a cumulative distribution function (CDF) from an array of probability density function (PDF) values.\n\n• Creates a probability density function from an array of input values.\n\n• Limits a unit value in a way that maintains uniformity and 
in-range consistency.\n\n• Initializes a sampling sequence for the nextsample function.\n\n• Samples the Cauchy (Lorentz) distribution.\n\n• Samples a cumulative distribution function (CDF).\n\n• Generates a uniform unit vector2, within maxangle of center, given a uniform number between 0 and 1.\n\n• Generates a uniform unit vector2, given a uniform number between 0 and 1.\n\n• Generates a uniform vector2 with alpha < length < 1, where 0 < alpha < 1, given a vector2 of uniform numbers between 0 and 1.\n\n• Generates a uniform vector2 with length < 1, within maxangle of center, given a vector2 of uniform numbers between 0 and 1.\n\n• Generates a uniform vector2 with length < 1, given a vector2 of uniform numbers between 0 and 1.\n\n• Generates a uniform unit vector, within maxangle of center, given a vector2 of uniform numbers between 0 and 1.\n\n• Generates a uniform unit vector, given a vector2 of uniform numbers between 0 and 1.\n\n• Returns an integer, either uniform or weighted, given a uniform number between 0 and 1.\n\n• Samples the exponential distribution.\n\n• Samples geometry in the scene and returns information from the shaders of surfaces that were sampled.\n\n• Generates a unit vector, optionally biased, within a hemisphere, given a vector2 of uniform numbers between 0 and 1.\n\n• Generates a uniform vector4 with length < 1, within maxangle of center, given a vector4 of uniform numbers between 0 and 1.\n\n• Generates a uniform vector4 with length < 1, given a vector4 of uniform numbers between 0 and 1.\n\n• Samples a 3D position on a light source and runs the light shader at that point.\n\n• Samples the log-normal distribution based on parameters of the underlying normal distribution.\n\n• Samples the log-normal distribution based on median and standard deviation.\n\n• Samples the normal (Gaussian) distribution.\n\n• Generates a uniform unit vector4, within maxangle of center, given a vector of uniform numbers between 0 and 1.\n\n• Generates a uniform 
unit vector4, given a vector of uniform numbers between 0 and 1.\n\n• Samples a 3D position on a light source and runs the light shader at that point.\n\n• Generates a uniform vector with length < 1, within maxangle of center, given a vector of uniform numbers between 0 and 1.\n\n• Generates a uniform vector with alpha < length < 1, where 0 < alpha < 1, given a vector of uniform numbers between 0 and 1.\n\n• Generates a uniform vector with length < 1, given a vector of uniform numbers between 0 and 1.\n\n• Warps uniform random samples to a disk.\n\n• Computes the mean value and variance for a value.\n\n## Sensor Input\n\n• Sensor function to render GL scene and query the result.\n\n• Sensor function query a rendered GL scene.\n\n• Sensor function to query average values from rendered GL scene.\n\n• Sensor function query a rendered GL scene.\n\n• Sensor function to save a rendered GL scene.\n\n## Shading and Rendering\n\n• Returns the area of the micropolygon containing a variable such as P.\n\n• Returns the anti-aliased weight of the step function.\n\n• Computes the fresnel reflection/refraction contributions given an incoming vector, surface normal (both normalized), and an index of refraction (eta).\n\n• If dot(I, Nref) is less than zero, N will be negated.\n\n• Sends rays into the scene and returns information from the shaders of surfaces hit by the rays.\n\n• Returns the blurred point position (P) vector at a fractional time within the motion blur exposure.\n\n• Evaluates surface derivatives of an attribute.\n\n• Returns the name of the current object whose shader is being run.\n\n• Returns the depth of the ray tree for computing global illumination.\n\n• Returns group id containing current primitive.\n\n• Returns a light struct for the specified light identifier.\n\n• Returns the light id for a named light (or -1 for an invalid name).\n\n• Returns the name of the current light when called from within an illuminance loop, or converts an integer light ID into 
the light’s name.\n\n• Returns an array of light identifiers for the currently shaded surface.\n\n• Returns a selection of lights that illuminate a given material.\n\n• Evaluates local curvature of primitive grid, using the same curvature evaluation method as Measure SOPs.\n\n• Returns a material struct for the current surface.\n\n• Returns material id of shaded primitive.\n\n• Returns the object id for the current shading context.\n\n• Returns the name of the current object whose shader is being run.\n\n• Returns the integer ID of the light being used for photon shading.\n\n• Returns the number of the current primitive.\n\n• Returns the ptexture face id for the current primitive.\n\n• Returns the depth of the ray tree for the current shading.\n\n• Returns an approximation to the contribution of the ray to the final pixel color.\n\n• Looks up sample data in a channel, referenced by a point.\n\n• Returns a selection of objects visible to rays for a given material.\n\n• Returns modified surface position based on a smoothing function.\n\n• Evaluates UV tangents at a point on an arbitrary object.\n\n• Returns the gradient of a field.\n\n• Returns whether a light illuminates the given material.\n\n• Loops through all light sources in the scene, calling the light shader for each light source to set the Cl and L global variables.\n\n• Interpolates a value across the currently shaded micropolygon.\n\n• Finds the nearest intersection of a ray with any of a list of (area) lights and runs the light shader at the intersection point.\n\n• Computes irradiance (global illumination) at the point P with the normal N.\n\n• Returns 1 if the shader is being called to evaluate illumination for fog objects, or 0 if the light or shadow shader is being called to evaluate surface illumination.\n\n• Returns 1 if Light Path Expressions are enabled. 
0 Otherwise.\n\n• Indicates whether a shader is being executed for ray tracing.\n\n• Detects the orientation of default shading space.\n\n• Returns 1 if the shader is being called to evaluate opacity for shadow rays, or 0 if the shader is being called to evaluate for surface color.\n\n• Indicates whether the shader is being evaluated while doing UV rendering (e.g. texture unwrapping)\n\n• Returns the bounce mask for a light struct.\n\n• Returns the light id for a light struct.\n\n• Queries the renderer for a named property.\n\n• Imports a variable from the light shader for the surface.\n\n• Returns a BSDF that matches the output of the traditional VEX blinn function.\n\n• Returns a BSDF that matches the output of the traditional VEX specular function.\n\n• Queries the renderer for a named property.\n\n• Computes ambient occlusion.\n\n• Computes global illumination using PBR for secondary bounces.\n\n• Sends a ray from the position P along the direction D.\n\n• Imports a value sent by a shader in a gather loop.\n\n• Returns the vector representing the reflection of the direction against the normal.\n\n• Computes the amount of reflected light which hits the surface.\n\n• Returns the refraction ray given an incoming direction, the normalized normal and an index of refraction.\n\n• Computes the illumination of surfaces refracted by the current surface.\n\n• Queries the renderer for a named property.\n\n• Returns the background color for rays that exit the scene.\n\n• Evaluates a scattering event through the domain of a geometric object.\n\n• Sets the current light\n\n• Stores sample data in a channel, referenced by a point.\n\n• Calls shadow shaders for the current light source.\n\n• Executes the shadow shader for a given light and returns the amount of shadowing as a multiplier of the shaded color.\n\n• Imports a variable from the shadow shader for the surface.\n\n• Imports a variable sent by a surface shader in an illuminance loop.\n\n• Returns the computed BRDFs for 
the different lighting models used in VEX shading.\n\n• Stores exported data for a light.\n\n• Uses a different bsdf for direct or indirect lighting.\n\n• Sends a ray from P along the normalized vector D.\n\n• Returns a Lambertian translucence BSDF.\n\n• Computes the position and normal at given (u, v) coordinates, for use in a lens shader.\n\n• Writes color information to a pixel in the output image.\n\n## Strings\n\n• Returns the full path of a file.\n\n• Converts a Unicode codepoint to a UTF8 string.\n\n• Concatenates all the strings specified to form a single string.\n\n• Decodes a variable name that was previously encoded.\n\n• Decodes a geometry attribute name that was previously encoded.\n\n• Decodes a node parameter name that was previously encoded.\n\n• Encodes any string into a valid variable name.\n\n• Encodes any string into a valid geometry attribute name.\n\n• Encodes any string into a valid node parameter name.\n\n• Indicates whether the string ends with the specified string.\n\n• Finds an item in an array or string.\n\n• Returns 1 if all the characters in the string are alphabetic.\n\n• Returns 1 if all the characters in the string are numeric.\n\n• Converts an integer to a string.\n\n• Concatenates all the strings of an array, inserting a common spacer.\n\n• Strips leading whitespace from a string.\n\n• This function returns 1 if the subject matches the pattern specified, or 0 if the subject doesn’t match.\n\n• Returns the integer value of the last sequence of digits of a string.\n\n• Converts a UTF8 string into a codepoint.\n\n• Converts an English noun to its plural.\n\n• Matches a regular expression in a string.\n\n• Finds all instances of the given regular expression in the string.\n\n• Returns 1 if the entire input string matches the expression.\n\n• Replaces instances of regex_find with regex_replace.\n\n• Splits the given string based on regex match.\n\n• Computes the relative path for two full paths.\n\n• Returns the relative path to a file.\n\n• Strips
trailing whitespace from a string.\n\n• Splits a string into tokens.\n\n• Splits a file path into the directory and name parts.\n\n• Formats a string like printf but returns the result as a string instead of printing it.\n\n• Returns 1 if the string starts with the specified string.\n\n• Strips leading and trailing whitespace from a string.\n\n• Returns the length of the string.\n\n• Returns a string that is the titlecase version of the input string.\n\n• Converts all characters in string to lower case\n\n• Converts all characters in string to upper case\n\n## Subdivision Surfaces\n\n• Evaluates a point attribute at the subdivision limit surface using Open Subdiv.\n\n• Evaluates a vertex attribute at the subdivision limit surface using Open Subdiv.\n\n• Outputs the Houdini face and UV coordinates corresponding to the given coordinates on an OSD patch.\n\n• Outputs the OSD patch and UV coordinates corresponding to the given coordinates on a Houdini polygon face.\n\n• Returns a list of patch IDs for the patches in a subdivision hull.\n\n## Tetrahedrons\n\n• Returns primitive number of an adjacent tetrahedron.\n\n• Returns vertex indices of each face of a tetrahedron.\n\n## Texturing\n\n• Looks up a (filtered) color from a texture file.\n\n• The depthmap functions work on an image which was rendered as a z-depth image from mantra.\n\n• Returns the color of the environment texture.\n\n• Perform UDIM or UVTILE texture filename expansion.\n\n• Test string for UDIM or UVTILE patterns.\n\n• Remaps a texture coordinate to another coordinate in the map to optimize sampling of brighter areas.\n\n• Evaluates an ocean spectrum and samples the result at a given time and location.\n\n• Computes a filtered sample from a ptex texture map. 
Use texture instead.\n\n• Looks up an unfiltered color from a texture file.\n\n• The shadowmap function will treat the shadow map as if the image were rendered from a light source.\n\n• Imports attributes from texture files.\n\n• Similar to sprintf, but performs UDIM or UVTILE texture filename expansion.\n\n• Computes a filtered sample of the texture map specified.\n\n## Transforms and Space\n\n• Computes the rotation matrix or quaternion which rotates the vector a onto the vector b.\n\n• Transforms a position from normal device coordinates to the coordinates in the appropriate space.\n\n• Gets the transform of a packed primitive.\n\n• Returns a transform from one space to another.\n\n• Creates an instance transform matrix.\n\n• Computes a rotation matrix or angles to orient the negative z-axis along the vector (to-from) under the transformation.\n\n• Builds a 3×3 or 4×4 transform matrix.\n\n• Returns the camera space z-depth of the NDC z-depth value.\n\n• Transforms a normal vector.\n\n• Creates an orthographic projection matrix.\n\n• Transforms a normal vector from Object to World space.\n\n• Transforms a position value from Object to World space.\n\n• Transforms a direction vector from Object to World space.\n\n• Transforms a packed primitive.\n\n• Creates a perspective projection matrix.\n\n• Computes the polar decomposition of a matrix.\n\n• Applies a pre-rotation to the given matrix.\n\n• Prescales the given matrix in three directions simultaneously (X, Y, Z - given by the components of the scale_vector).\n\n• Pretranslates a matrix by a vector.\n\n• Transforms a vector from one space to another.\n\n• Applies a rotation to the given matrix.\n\n• Rotates a vector by a rotation that would bring the x-axis to a given direction.\n\n• Scales the given matrix in three directions simultaneously (X, Y, Z - given by the components of the scale_vector).\n\n• Sets the transform of a packed primitive.\n\n• Returns the closest equivalent Euler rotations to a
reference rotation.\n\n• Applies an inverse kinematics algorithm to a skeleton.\n\n• Applies a curve inverse kinematics algorithm to a skeleton.\n\n• Applies a full-body inverse kinematics algorithm to a skeleton.\n\n• Applies an inverse kinematics algorithm to a skeleton.\n\n• Applies a full-body inverse kinematics algorithm to a skeleton, with optional control over the center of mass.\n\n• Transforms a position into normal device coordinates.\n\n• Translates a matrix by a vector.\n\n• Transforms a normal vector from Texture to World space.\n\n• Transforms a position value from Texture to World space.\n\n• Transforms a direction vector from Texture to World space.\n\n• Transforms a directional vector.\n\n• Transforms a normal vector from World to Object space.\n\n• Transforms a position value from World to Object space.\n\n• Transforms a direction vector from World to Object space.\n\n• Transforms a normal vector from World to Texture space.\n\n• Transforms a position value from World to Texture space.\n\n• Transforms a direction vector from World to Texture space.\n\n## usd\n\n• Creates an attribute of a given type on a primitive.\n\n• Excludes an object from the collection\n\n• Includes an object in the collection\n\n• Appends an inversed transform operation to the primitive’s transform order\n\n• Applies a quaternion orientation to the primitive\n\n• Creates a primitive of a given type.\n\n• Creates a primvar of a given type on a primitive.\n\n• Adds a target to the primitive’s relationship\n\n• Applies a rotation to the primitive\n\n• Applies a scale to the primitive\n\n• Appends a transform operation to the primitive’s transform order\n\n• Applies a transformation to the primitive\n\n• Applies a translation to the primitive\n\n• Reads the value of an attribute from the USD primitive.\n\n• Reads the value of an element from an array attribute.\n\n• Returns the length of the array attribute.\n\n• Returns the names of the attributes available on the 
primitive.\n\n• Returns the tuple size of the attribute.\n\n• Returns the time codes at which the attribute values are authored.\n\n• Returns the name of the attribute type.\n\n• Blocks the attribute.\n\n• Blocks the primvar.\n\n• Blocks the primvar.\n\n• Blocks the primitive’s relationship.\n\n• Returns the material path bound to a given primitive.\n\n• Clears the value of the metadata.\n\n• Clears the primitive’s transform order.\n\n• Obtains the list of all objects that belong to the collection.\n\n• Checks if an object path belongs to the collection.\n\n• Obtains the object paths that are in the collection’s exclude list.\n\n• Obtains the collection’s expansion rule.\n\n• Obtains the object paths that are in the collection’s include list.\n\n• Returns the primitive’s draw mode.\n\n• Returns the primitive’s transform operation full name for the given transform operation suffix.\n\n• Reads the value of a flattened primvar from the USD primitive.\n\n• Reads an element value of a flattened array primvar.\n\n• Sets two vectors to the minimum and maximum corners of the bounding box for the primitive.\n\n• Returns the center of the bounding box for the primitive.\n\n• Returns the maximum of the bounding box for the primitive.\n\n• Returns the minimum of the bounding box for the primitive.\n\n• Returns the size of the bounding box for the primitive.\n\n• Obtains the primitive’s bounds.\n\n• Obtains the primitive’s bounds.\n\n• Checks if the primitive adheres to the given API.\n\n• Checks if the primitive adheres to the given API.\n\n• Checks if the primitive is active.\n\n• Checks if the attribute is an array.\n\n• Checks if the given metadata is an array.\n\n• Checks if the primvar is an array.\n\n• Checks if the primitive has an attribute by the given name.\n\n• Checks if the collection exists.\n\n• Checks if the path is a valid collection path.\n\n• Checks if the primvar is indexed.\n\n• Checks if the primitive is an instance.\n\n• Checks if the primitive is of a given
kind.\n\n• Checks if the primitive has metadata by the given name.\n\n• Checks if the path refers to a valid primitive.\n\n• Checks if the primitive has a primvar of the given name.\n\n• Checks if the primitive has a relationship by the given name.\n\n• Checks if the stage is valid.\n\n• Checks if the primitive transform is reset.\n\n• Checks if the primitive is of a given type.\n\n• Checks if the primitive is visible.\n\n• Returns the primitive’s kind.\n\n• Obtains the primitive’s local transform.\n\n• Constructs an attribute path from a primitive path and an attribute name.\n\n• Constructs a collection path from a primitive path and a collection name.\n\n• Constructs a property path from a primitive path and a property name.\n\n• Constructs a relationship path from a primitive path and a relationship name.\n\n• Reads the value of metadata from the USD object.\n\n• Reads the value of an element from the array metadata.\n\n• Returns the length of the array metadata.\n\n• Returns the names of the metadata available on the object.\n\n• Returns the name of the primitive.\n\n• Returns the path of the primitive’s parent.\n\n• Sets two vectors to the minimum and maximum corners of the bounding box for the given instance inside a point instancer.\n\n• Returns the center of the bounding box for the instance inside a point instancer primitive.\n\n• Returns the maximum position of the bounding box for the instance inside a point instancer primitive.\n\n• Returns the minimum position of the bounding box for the instance inside a point instancer primitive.\n\n• Returns the size of the bounding box for the instance inside a point instancer primitive.\n\n• Returns the relative position of the point given with respect to the bounding box of the geometry.\n\n• Obtains the transform for the given point instance.\n\n• Reads the value of a primvar from the USD primitive.\n\n• Returns the namespaced attribute name for the given primvar.\n\n• Reads the value of an element from the array
primvar.\n\n• Returns the element size of the primvar.\n\n• Returns the index array of an indexed primvar.\n\n• Returns the element size of the primvar.\n\n• Returns the length of the array primvar.\n\n• Returns the names of the primvars available on the primitive.\n\n• Returns the tuple size of the primvar.\n\n• Returns the time codes at which the primvar values are authored.\n\n• Returns the name of the primvar type.\n\n• Returns the primitive’s purpose.\n\n• Obtains the relationship forwarded targets.\n\n• Returns the names of the relationships available on the primitive.\n\n• Obtains the relationship targets.\n\n• Returns the relative position of the point given with respect to the bounding box of the geometry.\n\n• Remove a target from the primitive’s relationship\n\n• Sets the primitive active state.\n\n• Sets the value of an attribute.\n\n• Sets the value of an element in an array attribute.\n\n• Sets the excludes list on the collection\n\n• Sets the expansion rule on the collection\n\n• Sets the includes list on the collection\n\n• Sets the primitive’s draw mode.\n\n• Sets the primitive’s kind.\n\n• Sets the value of an metadata.\n\n• Sets the value of an element in an array metadata.\n\n• Sets the value of a primvar.\n\n• Sets the value of an element in an array primvar.\n\n• Sets the element size of a primvar.\n\n• Sets the indices for the given primvar.\n\n• Sets the interpolation of a primvar.\n\n• Sets the primitive’s purpose.\n\n• Sets the targets in the primitive’s relationship\n\n• Sets the primitive’s transform order\n\n• Sets/clears the primitive’s transform reset flag\n\n• Sets the selected variant in the given variant set.\n\n• Sets the primitive visibility.\n\n• Constructs a full name of a transform operation\n\n• Obtains the primitive’s transform order\n\n• Extracts the transform operation suffix from the full name\n\n• Infers the transform operation type from the full name\n\n• Returns the name of the primitive’s type.\n\n• Constructs a 
unique full name of a transform operation\n\n• Returns the variants belonging to the given variant set on a primitive.\n\n• Returns the currently selected variant in a given variant set.\n\n• Returns the variant sets available on a primitive.\n\n• Obtains the primitive’s world transform\n\n## Utility\n\n• Returns 1 if the VEX assertions are enabled (see HOUDINI_VEX_ASSERT) or 0 if assertions are disabled. Used the implement the assert macro.\n\n• An efficient way of extracting the components of a vector or matrix into float variables.\n\n• Reports a custom runtime VEX error.\n\n• Extracts a single component of a vector type, matrix type, or array.\n\n• Parameters in VEX can be overridden by geometry attributes (if the attributes exist on the surface being rendered).\n\n• Check whether a VEX variable is varying or uniform.\n\n• Ends a long operation.\n\n• Start a long operation.\n\n• Reversibly packs an integer into a finite, non-denormal float.\n\n• Prints a message only once, even in a loop.\n\n• Prints values to the console which started the VEX program.\n\n• Evaluates a Houdini-style ramp at a specific location.\n\n• Unpacks a string-encoded ramp into a set of arrays.\n\n• Returns one of two parameters based on a conditional.\n\n• Creates a new value based on its arguments, such as creating a vector from its components.\n\n• Sets a single component of a vector or matrix type, or an item in an array.\n\n• Yields processing for a certain number of milliseconds.\n\n• Rearranges the components of a vector.\n\n• Reverses the packing of pack_inttosafefloat to get back the original integer.\n\n• Reports a custom runtime VEX warning.\n\n## volume\n\n• Returns the volume of the microvoxel containing a variable such as P.\n\n• Calculates the volume primitive’s gradient.\n\n• Gets the value of a specific voxel.\n\n• Gets the active setting of a specific voxel.\n\n• Gets the index of the bottom left of a volume primitive.\n\n• Converts a volume voxel index into a 
position.\n\n• Gets the vector value of a specific voxel.\n\n• Converts a position into a volume voxel index.\n\n• Gets the resolution of a volume primitive.\n\n• Samples the volume primitive’s value.\n\n• Samples the volume primitive’s vector value.\n\n• Computes the approximate diameter of a voxel." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6431071,"math_prob":0.9305134,"size":447,"snap":"2021-04-2021-17","text_gpt3_token_len":98,"char_repetition_ratio":0.18735892,"word_repetition_ratio":0.0,"special_character_ratio":0.18791947,"punctuation_ratio":0.1392405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810994,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T16:02:54Z\",\"WARC-Record-ID\":\"<urn:uuid:1c247fbe-cd73-407e-afd0-6a66beda20ff>\",\"Content-Length\":\"459292\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:876362fc-52d9-4802-88a0-fb356c009757>\",\"WARC-Concurrent-To\":\"<urn:uuid:3268e74e-116d-4ec7-ab20-19178c8c15b4>\",\"WARC-IP-Address\":\"206.223.178.168\",\"WARC-Target-URI\":\"https://www.sidefx.com/docs/houdini/vex/functions/volumeindexorigin.html\",\"WARC-Payload-Digest\":\"sha1:7R27U4SHWBW7IKIVGUZ7LTFDEMMDUN72\",\"WARC-Block-Digest\":\"sha1:C6IH4LK37LP37RDUZJULHBKRFA7JQJAU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038887646.69_warc_CC-MAIN-20210419142428-20210419172428-00254.warc.gz\"}"}
https://quant.stackexchange.com/questions/38961/european-call-price-for-an-asset-with-mean-reverting-vasicek-model-dynamics
[ "# European Call price for an asset with mean reverting (Vasicek model) dynamics\n\nLet's look at a stock with mean-reverting price dynamics: $$dS_t = a(S-S_0)dt + \\sigma dW_t$$\n\nIf we let $\\sigma=0.25$ and $a=-0.5$ then the variance of this process is: $$Var(S_t) = 0.199\\sim0.2$$ See the Wikipedia article on this kind of process: https://en.wikipedia.org/wiki/Vasicek_model\n\nHow do I derive the arbitrage-free pricing function for a call option with strike K whose underlying is the mean-reverting stock described above?\n\nThe whole point of no-arbitrage pricing in a complete market is that a general underlying model of the form\n\n$$d S_t = \\mu(S_t,t)\\, dt + \\sigma(S_t,t) \\, dW_t$$\n\ncan be replaced with the risk-neutral process\n\n$$d S_t = r S_t\\, dt + \\sigma(S_t,t) \\, dW_t$$\n\nfor the purpose of finding the theoretical fair option price. This, of course, follows from the possibility of continuous hedging and, mathematically, through a change of measure.\n\nYou introduce two twists in that the drift imposes mean reversion and you set $\\sigma(S_t,t) = \\sigma = \\text{constant}$. Had you chosen $\\sigma(S_t,t) = \\sigma S_t$, this would revert to the Black-Scholes model as far as the option price is concerned. The form of the drift is irrelevant.\n\nAssuming $\\sigma(S_t,t) = \\sigma$ will then give the closed-form option price for arithmetic Brownian motion.\n\nThere is, however, one issue that needs to be addressed -- the estimation of $\\sigma$. Without mean reversion or autocorrelation of returns, the volatility can be estimated using price data observed at discrete time intervals, and independence would imply $\\sqrt{t}$ scaling. 
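The closed-form price for a call under arithmetic Brownian motion referred to above is the Bachelier formula. A minimal Python sketch (an illustration added here, not part of the original answer; a zero interest rate is assumed and the function names are illustrative):

```python
from math import erf, exp, pi, sqrt

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x: float) -> float:
    # Standard normal density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bachelier_call(s0: float, k: float, sigma: float, t: float) -> float:
    """Call price when dS_t = mu dt + sigma dW_t (zero rates assumed)."""
    vol = sigma * sqrt(t)            # note the sqrt(t) scaling of the normal vol
    d = (s0 - k) / vol
    return (s0 - k) * norm_cdf(d) + vol * norm_pdf(d)

# At the money the price reduces to sigma * sqrt(t / (2*pi))
print(round(bachelier_call(100.0, 100.0, 0.25, 1.0), 6))
```

Note that the volatility enters only through $\sigma\sqrt{t}$, which is exactly the scaling being discussed here.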
The parameter $\\sigma$ used in the option pricing formula would, for example, be obtained by estimating volatility $\\hat{\\sigma}$ over intervals of length $\\delta t$ and assigning $\\sigma = \\hat{\\sigma}/\\sqrt{\\delta t}$.\n\nThis would not be the case if the real price dynamics were mean reverting.\n\nSee the paper by Lo and Wang.\n\n• Great answer. A question that has puzzled me for a long time too (in the general context of hedging derivs under model misspecification). You’re so right, nobody actually cares about the mean reversion in the drift, the problem is the vol estimation ! Nice. – Ivan Mar 24 '18 at 20:05\n• @Ivan: Thank you. I think also that persistent mean reversion or trending behavior could give different results and be exploited in a realistic delta-hedging strategy where rebalancing is not continuous (but over discrete intervals) and with transaction costs. – RRL Mar 24 '18 at 20:25\n• @RRL thanks! Quite interesting. The way I see it, the pricing function would look just like as if we were using a Bachelier model instead of mean reverting model, right? – Lisa Mar 25 '18 at 0:28\n• @Lisa: You're welcome. You are correct about Bachelier model. – RRL Mar 25 '18 at 1:25" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8767339,"math_prob":0.99352056,"size":1447,"snap":"2019-51-2020-05","text_gpt3_token_len":355,"char_repetition_ratio":0.12820514,"word_repetition_ratio":0.008928572,"special_character_ratio":0.25431928,"punctuation_ratio":0.121323526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996816,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T02:36:06Z\",\"WARC-Record-ID\":\"<urn:uuid:edc4f5e5-ee54-4632-b089-e9b355b70001>\",\"Content-Length\":\"135925\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71f061a5-09a2-456c-8f14-26eb064aa265>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdb6dc23-f4ad-477e-9364-a5d07df3a3ec>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/38961/european-call-price-for-an-asset-with-mean-reverting-vasicek-model-dynamics\",\"WARC-Payload-Digest\":\"sha1:ZGZQW6WQMKOKME3HOZBEYJHUGIXPW5D7\",\"WARC-Block-Digest\":\"sha1:3BZ4TAQUZDAGWGZN73GXZZSVVW5TZD3Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251783621.89_warc_CC-MAIN-20200129010251-20200129040251-00323.warc.gz\"}"}
https://books.google.gr/books?id=seYGAAAAYAAJ&pg=PA148&focus=viewport&vq=%22shall+be+less+than+the+other+two+sides+of+the+triangle,+but+shall+contain+a+greater+angle.%22&dq=editions:UOMDLPabq7928_0001_001&lr=&hl=el&output=html_text
[ "", null, "CASE II. —When the angle BCA is a right angle. BC is now the same as BD, and the square on BC is the rectangle BC, BD. By I. 47 the square on AC is less than the sum of the squares on BC, BA by twice the square on BC. Q. E. D. Prop. 12 may be expressed thus: (1) AB² = BC² + CA² + 2BC.CD; or (2) c² = a² + b² + 2ad, where a, b, c denote the sides of the triangle ABC, and d denotes CD; or (3) c² = a² + b² - 2ab cos C, for CD = -b cos C trigonometrically. Prop. 13 may be expressed thus: (1) CA² = BC² + BA² - 2BC.BD; or (2) b² = a² + c² - 2ad, where a, b, c denote the sides of the triangle ABC and d denotes BD; or (3) b² = a² + c² - 2ac cos B, for BD = c cos B trigonometrically. Since BD is the projection of BA on BC (see p. 100), Prop. 13 may be enunciated as follows:-In every triangle the square on the side subtending an acute angle is less than the sum of the squares on the sides containing that angle, by twice the rectangle contained by either of these sides and the projection of the other side upon it. EXERCISES ON PROPOSITIONS 9 AND 10. 1. In the figure of Prop. 10 produce BA to R, making RA equal to BQ, and then deduce Prop. 10 from Prop. 9. 2. In AB the diameter of a circle, any two points C and D are taken equally distant from the centre, and any point P is taken on the circumference; show that the sum of the squares on PC, PD will be equal to the sum of the squares on AC, AD. 3. If the straight line AB be divided at C so that the square on AC is double the rectangle AB, CB, prove that the sum of the squares on AB, CB is double the square on AC. 4. Show that Props. 9 and 10 are both included in the theorem:-The sum of the squares on the sum and the difference of two straight lines is equal to twice the sum of the squares on the lines. EXERCISES ON PROPOSITIONS 11, 12, 13. 1. In the figure of Prop. 11, prove that (i) CF is divided in medial section at A. (ii) CH produced and FB intersect at right angles. (iii) the ratio of AH to HB is that of √5 - 1 to 3 - √5. (iv) the lines AK, FD and GB are parallel. (v) if CH and EB meet in N, AN is perpendicular to CH. (vi) the square on EF is equal to five times the square on CE. (vii) if BG is bisected at M, MH is equal to MB. (viii) the sum of the squares on AB, HB is equal to three times the square on AH. (ix) that AH is greater than HB, but less than twice HB. 
(x) the difference of the squares on AB, AH is equal to the rectangle AB, AH, (xi) the square on a line equal to the sum of AB and HB is equal to five times the square on AH. 2. Enunciate Prop. 12, using the projection of one side on another. 3. Taking the three figures in Prop. 13, write down the values of AB?, of BC, and of CA2 in each, and express the results algebraically and trigonometrically in each case. 4. Two right-angled triangles ACB, ADB stand on the same hypotenuse AB and on the same side of it; if AD and BC intersect in 0, show that the rectangle AO, OD is equal to the rectangle BO, OC. 5. The sides of a triangle are four, five, and seven inches respectively; find the nature of each of its angles and also the length of the projection of the side four inches long upon the side five inches long. 6. State and prove the converse of Prop. 13. EUC. L PROP. 14.- Problem. To describe a square that shall be equal to a given rectilineal figure. Let A be the given rectilineal figure; it is required to describe a square equal to A.", null, "Construction.—Describe the parallelogram BCDE equal to the rectilineal figure A and having the angle CBE a right angle. I. 45 Then if BC is equal to BE, BD is a square, and what was required is done. But if BC is not equal to BE, produce BE to F, making EF equal to ED; I. 3 bisect BF at G, and with centre G and radius GF describe the semi-circle BHF; produce DE to meet the circumference in H. Then the square on EH shall be equal to the given figure A. Join GH. Proof.-Because BF is divided equally at G and unequally at E, therefore by Prop. 5 the rectangle BE, EF, together with the square on GE, is equal to the square on GF; but the square on GF is equal to the square on GH, and the square on GH is equal to the sum of the squares on GE, EH, for the angle GEH is a right angle; therefore the rectangle BE, EF, together with the square on GE, is equal to the sum of the squares on GE, EH; 1. 
47 take away the common square on GE, therefore the rectangle BE, EF is equal to the square on EH. But the rectangle BE, EF is equal to BD, for ED is equal to EF; and BD is equal to A; therefore the square on EH is equal to the given figure A. Q. E. F. NOTES ON THE PROPOSITIONS IN BOOK II. The propositions in the second book of Euclid are often found to be the hardest and the most uninteresting in all Euclid. This applies more especially to the first seven propositions in the book. The Method of Proof used by Euclid in all these seven propositions depends on the fact that the area of any rectilineal figure is equal to the sum of the areas of all the separate parts of the figure. In other words we look at a certain figure and our eyes tell us that it is equal to certain other figures. The difficulty arises when we come to express this equality in Euclidean language. The chief difficulty, therefore, in Book II. lies in remembering and distinguishing the enunciations of the propositions, and for this reason the student, when repeating an enunciation in this book, should also state the proposition in letters, as given after each proposition, e. g.– AB2 AC2 + CB2 + 2 AC.CB for Prop. 4. Proposition 1. —This is the only proposition in Book II. dealing with two lines. From it Props. 2 and 3 can be deduced, and from these all the propositions up to Prop. 10 can successively be deduced. Prop. 1 may, therefore, be considered the fundamental proposition of Book II. The proof of Prop. 1 depends on the evident fact that the rectangle AG is equal to the sum of the rectangles AH, CK, and DG. Then each of these rectangles is taken in order and changed into an equal rectangle. Proposition 2.—This proposition is a special case of Prop. 1 obtained by making the line X equal to AB. The proof depends on the fact that AE is equal to the sum of AF and CE, and then each of these figures is changeil in order into an equal square or rectangle. 
Proposition 3.—This proposition is a special case of Prop. 1 obtained by making the line X equal to AC. The proof depends on AF being equal to its parts AE and CF, and these are changed into an equal rectangle, square, and rectangle respectively. Propositions 4, 5, 6, 7.-The proofs given first for these propositions are Euclid's proofs; the alternative proofs are obtained from the previous propositions in Book II. Euclid's proofs of these four propositions are all based on the same method, and, like the proofs of Props. 1, 2, 3, tliey all depend on a certain figure being equal to the sum of its parts and on these parts being equal to certain other rectangles or squares. The following points about Euclid's proofs of Props. 4, 5, 6, 7 may help the student: (1) All the proofs begin with “Because the complement, etc.\" (2) All the proofs aim first at getting the required rectangle, 2 AC.CB, or AQ.QB, or 2 AB.CB. (3) (a) In Prop. 4 nothing is added to the complements. (b) In Prop. 5 QL and PF are added. (c) In Prop. 6 PL is added. (d) In Prop. 7 CK is added at once to both complements. (4) Props. 5 and 6 are two cases of one proposition, the point Q being either internal or external to AB; the rectangle is the same in both. (See Ex. 8, p. 139.) (5) In Prop. 7 the letters AB.CB are repeated for squares and for the rectangle. The Alternative proofs of Props. 4, 5, 6, 7 have the advantage of being much shorter than Euclid's proofs ; and, being based on former propositions of Euclid, they are perfectly satisfactory. The student need not hesitate therefore to adopt these proofs ; but Euclid's proofs are useful as illustrations of a different method of proof, and to the mathematician the method of proof employed is often of more interest than the actual result obtained. The following points about these Alternative proofs may be noticed :(1) All begin with getting the first side of the required equation ; e.g. Props. 4 and 7 begin with the square on AB, and Props. 
5 and 6 with the rectangle AQ.QB. (2) The proofs of 5 and 6 are really the same, allowance being made for the different positions of Q, either internal or external to AB. (3) All the proofs end with an application of Prop. 3. N.B.—In Prop. 6 the rectangle is given as AQ, QB because it occurs in the same form in Prop. 5, and it is of great importance that the student should from the first recognize the similarity of Props. 5 and 6. The same collocation of letters, AQ, QB, also occurs in Props. 9 and 10, and this will help the beginner to remember the propositions. The student can afterwards easily alter the rectangle AQ, QB into AQ, BQ in Prop. 6, if he wishes to be strictly accurate. The latter form is the more correct, because, as already explained, QB in Prop. 6 is to be considered negative and BQ positive. (See p. 118.)" ]
[ null, "https://books.google.gr/books/content", null, "https://books.google.gr/books/content", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91526246,"math_prob":0.9864793,"size":8774,"snap":"2022-05-2022-21","text_gpt3_token_len":2311,"char_repetition_ratio":0.1977195,"word_repetition_ratio":0.08859294,"special_character_ratio":0.2588329,"punctuation_ratio":0.14834894,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99682856,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T18:06:01Z\",\"WARC-Record-ID\":\"<urn:uuid:1d1a5207-4a41-4d37-b59a-723663b16a14>\",\"Content-Length\":\"46977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b2f9f20-6780-4701-878a-ed3d118c3931>\",\"WARC-Concurrent-To\":\"<urn:uuid:f82cb44e-5023-44ad-9f44-6c7d055f7b0e>\",\"WARC-IP-Address\":\"172.217.1.206\",\"WARC-Target-URI\":\"https://books.google.gr/books?id=seYGAAAAYAAJ&pg=PA148&focus=viewport&vq=%22shall+be+less+than+the+other+two+sides+of+the+triangle,+but+shall+contain+a+greater+angle.%22&dq=editions:UOMDLPabq7928_0001_001&lr=&hl=el&output=html_text\",\"WARC-Payload-Digest\":\"sha1:C3DYB2IYI4HWQWR7NWXR2A4WKPI344HW\",\"WARC-Block-Digest\":\"sha1:2TZHGSIMBE5BII6OB4CCMV72J2VB3342\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303868.98_warc_CC-MAIN-20220122164421-20220122194421-00611.warc.gz\"}"}
https://support.gams.com/gams:get_the_attribute_of_an_equation_that_returns_the_feasibitity_status_of_an_equation
[ "#", null, "GAMS Support Wiki\n\n### Site Tools\n\ngams:get_the_attribute_of_an_equation_that_returns_the_feasibitity_status_of_an_equation\n\n# Ho Do I get the attribute of an equation, that returns the feasibility status of an equation?\n\nQ: Is there any equation attribute, that gives back the feasibility status of an equation?\n\nWhat I want to do is to relax some bounds, if am equation was infeasible. I tried with equation.m, but marginals may exist, even if the equation is not highlighted with “infes”.\n\nWith recent versions of GAMS, we compute the constraint record `EQU.infeas` from `EQU.`l and `EQU.lo/up`. This feature is used in the model `feasopt1` in the GAMS model library.", null, "" ]
[ null, "https://support.gams.com/lib/tpl/gams/images/logo.png", null, "https://support.gams.com/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9302704,"math_prob":0.93677765,"size":440,"snap":"2019-51-2020-05","text_gpt3_token_len":109,"char_repetition_ratio":0.13302752,"word_repetition_ratio":0.0,"special_character_ratio":0.2159091,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9845546,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T17:18:08Z\",\"WARC-Record-ID\":\"<urn:uuid:7a37b71a-f4b8-4ac8-94b3-1482eda5ac4f>\",\"Content-Length\":\"12113\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b94d64d-504c-4fe1-8a8f-4ac41f7d57c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:52067e3b-1c11-4fa7-84d9-bf1c808e2eac>\",\"WARC-IP-Address\":\"78.46.254.22\",\"WARC-Target-URI\":\"https://support.gams.com/gams:get_the_attribute_of_an_equation_that_returns_the_feasibitity_status_of_an_equation\",\"WARC-Payload-Digest\":\"sha1:DWRC7QAOBUTC5Q5BJUTMEYR4QVRN5XSE\",\"WARC-Block-Digest\":\"sha1:JXQISE4QXPE4RM7NBIIBT263K5P4YIKR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250604849.31_warc_CC-MAIN-20200121162615-20200121191615-00557.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/182125/how-to-generate-random-directed-connected-planar-graphs
[ "# How to generate random Directed Connected (Planar) graphs?\n\nI need to create all possible connected and directed graphs with N vertices. The graphs are Planar and labelled with vertices 1 to N. Although the graphs are unweighted, the graphs are non simple since they are directed. Isomorphic graphs must be discarded. We must consider the following characteristics:\n\n1. Vertex 1 has one connected edge (degree=1), and direction: from vertex1 to vertex2\n2. Vertex N has one connected edge (degree=1), and direction: from vertex(N-1) to vertexN\n3. All other vertices must have degree=3 (three conected edges), with at least one edge entering and one edge going out of the vertex.\n4. Vertex1 always connect to vertex2. Direction: vertex1-->vertex2\n5. Vertex(N-1) always connect to vertexN. Direction: vertex(N-1)-->vertexN\n\nHow can I do it efficiently?\n\nAnd How can I save each graph separately for individual use afterwards?\n\n• Welcome to Mathematica.SE! Is the question about Mathematica, the software or should we move the question over to Mathematics?\n– Johu\nSep 18 '18 at 20:58\n• What have you tried? Sep 18 '18 at 21:40\n• Please clarify the question. The title says random graphs, but there is no mention of this in the question. The title mentions planar, the body doesn't. Are the graphs labelled or not? Does vertex 1 always connect to vertex 2 (and not vertex 3), or do you just mean that it's out-degree is 1? Voting to close the question until it is made to be clear and unambiguous. Sep 19 '18 at 7:55\n• @Szabolcs I think this is an interesting question to tackle, but it definitely requires more stringent definition of the problem at hand. Are non-simple graphs allowed? Is uniform sampling required, and how about graph isomorphisms in that case? Is enumerating all graphs a good alternative as a solution? (Well, that wouldn't probably work for more than the smallest graphs...) 
Sep 19 '18 at 15:05\n• I vote for leaving it closed, as the questions raised by Szabolcs are not addressed. It makes a huge difference, if it is required, that the graph is planar.\n– Johu\nSep 19 '18 at 15:29\n\nFor the easier case of generating a random undirected planar graph with given degree distribution, you can generate a random graph with the desired degree distribution using DegreeGraphDistribution until you get a PlanarGraph:\n\nrpg[n_] := Module[{g}, Quiet[While[Not @ PlanarGraphQ[\ng = RandomGraph[DegreeGraphDistribution[Flatten[{1, ConstantArray[3, n - 2], 1}],\nSelfLoops -> False]]]];\nPlanarGraph[Range @ n, EdgeList @ g]]]\n\n\nExamples:\n\nn = 6;\nGrid[Partition[Table[With[{g = rpg[n] }, SetProperty[g, {ImageSize -> 300,\nGraphStyle -> \"VintageDiagram\", EdgeShapeFunction -> \"CurvedArc\",\nPlotLabel -> VertexDegree[g]} ]], 6], 3]]", null, "n = 8;", null, "n = 10;", null, "n = 12;", null, "TODO: Process the EdgeList or the AdjacencyMatrix of rpg[n] to get the desired configuration of edge directions.\n\nIt won't necessarily have all the properties you desire, but this will get you a directed connected planar graph:\n\npts = RandomReal[{-1, 1}, {25, 2}];\nmyR = DelaunayMesh[pts];\n\nmycells = MeshCells[myR, 1] /. Line[n_] -> n\n\nGet connection rules:\n\ntherules = Rule[#[[1]], #[[2]]] & /@ mycells\n\nCreate the graph:\n\nGraph[therules]", null, "Confirm:\n\nPlanarGraphQ[%]\n\n(* True *)" ]
[ null, "https://i.stack.imgur.com/Hxd6E.png", null, "https://i.stack.imgur.com/LJbpt.png", null, "https://i.stack.imgur.com/fLP7r.png", null, "https://i.stack.imgur.com/MmZus.png", null, "https://i.stack.imgur.com/GbAx2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92586434,"math_prob":0.822778,"size":854,"snap":"2021-43-2021-49","text_gpt3_token_len":197,"char_repetition_ratio":0.15882353,"word_repetition_ratio":0.061068702,"special_character_ratio":0.22833724,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9886867,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T01:29:50Z\",\"WARC-Record-ID\":\"<urn:uuid:18657912-6c83-4e2d-ae53-a47e5784b795>\",\"Content-Length\":\"183828\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bf291cb-9976-42e0-986e-0c1323ac9e11>\",\"WARC-Concurrent-To\":\"<urn:uuid:387c0504-33c8-4871-8383-cef311f9aeba>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/182125/how-to-generate-random-directed-connected-planar-graphs\",\"WARC-Payload-Digest\":\"sha1:5XJLE7NG4SUFLKYB3GG5UIJSCZK6YRZB\",\"WARC-Block-Digest\":\"sha1:BCIZCDRSKLP3DPFUEUC7V7X4KHSOYPLP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585537.28_warc_CC-MAIN-20211023002852-20211023032852-00061.warc.gz\"}"}
https://www.iasexpress.net/ie-pedia/what-is-integral-calculus-how-to-evaluate-it-manually-and-using-calculator/
[ "# What is Integral Calculus & How to Evaluate It Manually and Using a Calculator?\n\nThe term calculus refers to one of the fundamental branches of mathematics that deals with the properties and calculation of change and motion. This branch is subdivided into two main components, known as the derivative and the integral.\n\nThe derivative in calculus deals with the rate of change of a function with respect to the independent variable. The slope of the tangent line can be evaluated with the help of this branch of calculus.\n\nThe other type of calculus, integral calculus, inverts the result of differentiation and finds the area under the curve. In this article, we will explore the integral in calculus along with its types and the process of evaluating it manually and with a calculator.\n\n## What is an integral in Calculus?\n\nAn integral in calculus is a fundamental concept that helps to find the area under a curve over a given interval of the function. This branch of calculus is also used to find a new function whose derivative is the original function.\n\nIn calculus, the integral of a function f(z) is represented as ∫ f(z) dz, where the integrating variable "dz" is an infinitesimal element of the variable "z". There are two further types of integral in calculus: the definite integral and the indefinite integral.\n\n### Definite Integral\n\nIn calculus, the definite integral is a basic concept that deals with the numerical value of a given function over a given interval. Finding the numeric value of the function over the given interval is referred to as finding the area under the curve.\n\nThe fundamental theorem of calculus plays a vital role in the calculation of areas under curves. 
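Before the formula is stated, the idea behind the fundamental theorem can be checked numerically: the area obtained by summing thin slices under the curve should match the difference of the antiderivative at the endpoints. A minimal Python sketch (Python and the midpoint rule are illustrative assumptions added here, not part of the article):

```python
def definite_integral(f, p, q, n=100_000):
    """Approximate the definite integral of f over [p, q] with the midpoint rule."""
    h = (q - p) / n
    return sum(f(p + (i + 0.5) * h) for i in range(n)) * h

# f(z) = 3z^2 has antiderivative F(z) = z^3, so the area over [0, 2]
# should be F(2) - F(0) = 8 by the fundamental theorem of calculus.
approx = definite_integral(lambda z: 3 * z**2, 0, 2)
print(round(approx, 6))  # 8.0
```

The slice sum and the endpoint difference F(q) - F(p) agree, which is exactly what the theorem below formalizes.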
This theorem is helpful for applying the upper and lower limit values to the integrated function to evaluate the numeric value.\n\nThe formula used to solve the problems of this kind of integral is:\n\npq f(z) dz = |F(z)|pq = F(q) – F(p)\n\nwhere, p & q are the interval points of the given function, “z” is the independent variable of the function, f(z) is the given differential function, F(z) is the integrated function, & F(q) – F(p) denotes the fundamental theorem of calculus.\n\n### Indefinite Integral\n\nIn calculus, the indefinite integral is used to find the new function whose genuine function is differential. This branch of calculus is also known as primitive function & antiderivative. The name antiderivative shows that this branch of integral works as the inverse of differential. copyright©iasexpress.net\n\nThere is no interval involved in this type of integral. The constant of integration is written along with the integrated function. The formula used to solve the problems of this kind of integral is:\n\n∫ f(z) dz = F(z) + C\n\nwhere, “z” is the independent variable of the function, f(z) is the given differential function, F(z) is the integrated function, & C is the constant of integration.\n\n## Rules of Integral in Calculus\n\nThere are several rules of integral calculus for dealing with various kind of problems.\n\n• Sum Rule: ʃ [f(z) + h(z)] dz = ʃ [f(z)] dz + ʃ [h(z)] dz + C\n• Difference Rule: ʃ [f(z) – h(z)] dz = ʃ [f(z)] dz – ʃ [h(z)] dz + C\n• Constant Rule: ʃ K dz = K * z + C\n• Constant function Rule: ʃ [K h(z)] dz = K ʃ [h(z)] dz\n• Power Rule: ʃ f(z)m dz = f(z)m + 1/ (m + 1) + C\n\n## Prelims Sureshots – Most Probable Topics for UPSC Prelims\n\nA Compilation of the Most Probable Topics for UPSC Prelims, including Schemes, Freedom Fighters, Judgments, Acts, National Parks, Government Agencies, Space Missions, and more. 
## How to evaluate the integral of a function manually?\n\nThere are various methods for solving integrals manually.\n\n### Direct Integration Method\n\nIn calculus, the direct integration method is the general way to integrate a given function with the help of the rules of integration. This method is applicable to simple functions that can be evaluated with the sum, difference, power, and constant multiple rules.\n\nExample\n\nFind the antiderivative of the given function with respect to “z”.\n\nh(z) = 2z^3 – 15z^2 + 6z^5 – 5cos(z) + 15z^2\n\nSolution\n\nStep 1: Write the given function in the mathematical notation of the antiderivative.\n\n∫ h(z) dz = ∫ [2z^3 – 15z^2 + 6z^5 – 5cos(z) + 15z^2] dz\n\nStep 2: Use the sum and difference rules to split the integral.\n\n= ∫ 2z^3 dz – ∫ 15z^2 dz + ∫ 6z^5 dz – ∫ 5cos(z) dz + ∫ 15z^2 dz\n\nStep 3: Take the constant factors outside the integral sign.\n\n= 2 ∫ z^3 dz – 15 ∫ z^2 dz + 6 ∫ z^5 dz – 5 ∫ cos(z) dz + 15 ∫ z^2 dz\n\nStep 4: Apply the power rule (and ∫ cos(z) dz = sin(z)) to find the antiderivative.\n\n= 2 [z^4 / 4] – 15 [z^3 / 3] + 6 [z^6 / 6] – 5 sin(z) + 15 [z^3 / 3] + C\n\n= z^4/2 – 5z^3 + z^6 – 5sin(z) + 5z^3 + C\n\n= z^4/2 + z^6 – 5sin(z) + C\n\n### Integration by Substitution\n\nIn calculus, integration by substitution is another method for finding the integral of a given function. 
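The antiderivative found in the direct-integration example can be sanity-checked numerically: differentiating z^4/2 + z^6 – 5sin(z) with a central finite difference should reproduce the original integrand. The helper names below are illustrative:

```python
import math

# Sanity check of the direct-integration example:
# d/dz [ z^4/2 + z^6 - 5sin(z) ] should equal 2z^3 - 15z^2 + 6z^5 - 5cos(z) + 15z^2.
def integrand(z):
    return 2*z**3 - 15*z**2 + 6*z**5 - 5*math.cos(z) + 15*z**2

def antiderivative(z):
    return z**4 / 2 + z**6 - 5*math.sin(z)

def derivative(g, z, h=1e-6):
    # Central finite-difference approximation of g'(z)
    return (g(z + h) - g(z - h)) / (2 * h)

ok = all(abs(derivative(antiderivative, z) - integrand(z)) < 1e-3
         for z in (0.5, 1.0, 2.0))
print(ok)  # True
```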
This method is also known as u-substitution, because a variable “u” is substituted into the given function to replace part of it and make the function simpler.\n\nThis method makes the process of integration easier for composite functions, as u & du are substituted in place of the complicated parts.\n\nExample\n\nFind the antiderivative of the given function with respect to “z”.\n\nh(z) = z^4 * e^(z^5)\n\nSolution\n\nStep 1: Write the given function in the mathematical notation of the antiderivative.\n\n∫ h(z) dz = ∫ z^4 * e^(z^5) dz\n\nStep 2: Find the term to substitute.\n\nWe substitute u = z^5, because the derivative of z^5 gives 5z^4, which matches a factor of the given function.\n\nStep 3: Find the integrating variable “du”.\n\nAs u = z^5,\n\ndu/dz = 5z^4\n\ndu = 5z^4 dz\n\ndu/5 = z^4 dz\n\nStep 4: Substitute the values of u & du into the integral.\n\n∫ z^4 * e^(z^5) dz = ∫ e^u / 5 du\n\nStep 5: Integrate the exponential function (the integral of e^u is e^u itself).\n\n∫ e^u / 5 du = 1/5 ∫ e^u du = 1/5 e^u + C\n\nStep 6: Substitute the value of u back into the expression.\n\n∫ z^4 * e^(z^5) dz = 1/5 e^(z^5) + C\n\n### Integration by Parts\n\nIn calculus, integration by parts is a basic method for finding the integral of a product of two functions. It is derived from the product rule for derivatives. Two terms are involved in this method, u & dv. 
The general form of this method is:\n\n∫ u * v dz = u ∫ v dz – ∫ [du/dz * (∫ v dz)] dz\n\nThis method is applicable to products of two functions, including exponential and logarithmic functions.\n\nExample\n\nFind the antiderivative of the given function with respect to “z”.\n\nh(z) = z ln(z)\n\nSolution\n\nStep 1: Write the given function in the mathematical notation of the antiderivative.\n\n∫ h(z) dz = ∫ z ln(z) dz\n\nStep 2: Identify u & v.\n\nu = ln(z)\n\nv = z\n\nStep 3: Take the formula for integration by parts.\n\n∫ u * v dz = u ∫ v dz – ∫ [du/dz * (∫ v dz)] dz\n\nStep 4: Substitute the values into the formula and simplify.\n\n∫ z ln(z) dz = ln(z) * [z^2 / 2] – ∫ (1/z) [z^2 / 2] dz\n\n= ln(z) * [z^2 / 2] – ∫ [z / 2] dz\n\n= ln(z) * [z^2 / 2] – 1/2 [z^2 / 2]\n\n= z^2 ln(z)/2 – z^2/4\n\nStep 5: Add the constant of integration.\n\n∫ z ln(z) dz = z^2 ln(z)/2 – z^2/4 + C\n\n## How to Evaluate an Integral Using a Calculator?\n\nThere are hundreds of online calculators available for problems in integral calculus. Many tools provide only the result for the given function, which is not sufficient for understanding the evaluation process.\n\nSome of them, however, provide step-by-step solutions, such as the integral calculator by MeraCalculator. 
This calculator allows you to solve both definite and indefinite integral problems with steps.\n\nExample\n\nFind the antiderivative of the given function with respect to “z”.\n\nh(z) = z ln(z)\n\nSolution\n\nStep 1: Write the function into the input box.\n\nStep 2: Select the variable “z”.\n\nStep 3: Hit the submit button.\n\nStep 4: Read off the result.\n\n## Wrap Up\n\nThis post covered the basics of evaluating integrals manually and using a calculator. We discussed the definition, types, rules, and methods of evaluating integrals with the help of solved examples." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83028775,"math_prob":0.99950373,"size":8871,"snap":"2023-40-2023-50","text_gpt3_token_len":2728,"char_repetition_ratio":0.1792038,"word_repetition_ratio":0.21837088,"special_character_ratio":0.30853343,"punctuation_ratio":0.07159091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993432,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T16:39:52Z\",\"WARC-Record-ID\":\"<urn:uuid:7f512d4e-bee8-4ddf-8337-c28aed70b3a6>\",\"Content-Length\":\"1027913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15f0d5fe-c112-4808-853b-5f13d723f96c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c45accf-cfc9-4e5e-b68f-018da74f92da>\",\"WARC-IP-Address\":\"172.67.160.55\",\"WARC-Target-URI\":\"https://www.iasexpress.net/ie-pedia/what-is-integral-calculus-how-to-evaluate-it-manually-and-using-calculator/\",\"WARC-Payload-Digest\":\"sha1:LYSQLCMU6TMW6KY6G6XSYMH7IWMQM4PM\",\"WARC-Block-Digest\":\"sha1:FEPPWZKNQOF3A4FMRTZYW6XEZ6OZQVPE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510520.98_warc_CC-MAIN-20230929154432-20230929184432-00398.warc.gz\"}"}
http://wiki.weizmann.ac.il/bp/index.php?title=Dynamic_b-threads&diff=90&oldid=70&printable=yes
[ "# Symbolic and dynamic b-threads\n\n## A. Symbolic b-threads\n\nIt is sometimes useful to have multiple b-threads as instances of the same class, differing from each other only by their parameters. For example, in one implementation of the game of Tic-Tac-Toe, the class AddThirdO has 48 instances. Each b-thread in the class is assigned a pair of O move events in a given line; it waits for the two to occur, and then requests the third O move event for the same line. The 48 instances cover the 6 lines (3 rows, 3 columns, and 2 diagonals) and all 6 permutations of the three events in each line.\n\nTo activate such symbolic b-threads, the programmer should provide a constructor for the b-thread class (which inherits from BThread), call the constructor with different parameters, and then add and start all the returned instances. This approach was applied systematically to multiple classes in the Tic-Tac-Toe example, as follows:\n\n```\npackage tictactoe.bThreads.tactics;\n\nimport static bp.BProgram.bp;\nimport static bp.BProgram.labelNextVerificationState;\nimport static bp.eventSets.EventSetConstants.none;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport tictactoe.events.O;\nimport bp.BThread;\nimport bp.exceptions.BPJException;\n\n/**\n * A scenario that tries to complete a row/column/diagonal of Os\n */\n@SuppressWarnings(\"serial\")\npublic class AddThirdO extends BThread {\n\n    private O firstSquare;\n    private O secondSquare;\n    private O triggeredEvent;\n\n    @Override\n    public void runBThread() throws BPJException {\n        // interruptingEvents = new EventSet(gameOver);\n        labelNextVerificationState(\"0\");\n        // Wait for the first O\n        bp.bSync(none, firstSquare, none);\n        labelNextVerificationState(\"1\");\n        // Wait for the second O\n        bp.bSync(none, secondSquare, none);\n        labelNextVerificationState(\"2\");\n        // Request the third O\n        bp.bSync(triggeredEvent, none, none);\n        labelNextVerificationState(\"3\");\n        bp.bSync(none, none, none);\n    }\n\n    /**\n     * @param firstSquare\n     * @param secondSquare\n     * @param triggeredEvent\n     */\n    public AddThirdO(O firstSquare, O secondSquare, O triggeredEvent) {\n        super();\n        this.firstSquare = firstSquare;\n        this.secondSquare = secondSquare;\n        this.triggeredEvent = triggeredEvent;\n        this.setName(\"AddThirdO(\" + firstSquare + \",\"\n                + secondSquare + \",\" + triggeredEvent + \")\");\n    }\n\n    /**\n     * Construct all instances\n     */\n    static public Set<BThread> constructInstances() {\n        Set<BThread> set = new HashSet<BThread>();\n        // All 6 permutations of three elements\n        int[][] permutations = new int[][] { new int[] { 0, 1, 2 },\n                new int[] { 0, 2, 1 }, new int[] { 1, 0, 2 },\n                new int[] { 1, 2, 0 }, new int[] { 2, 0, 1 },\n                new int[] { 2, 1, 0 } };\n        for (int[] p : permutations) {\n            // Add copies for each row\n            for (int row = 0; row < 3; row++) {\n                set.add(new AddThirdO(new O(row, p[0]), new O(row, p[1]), new O(row, p[2])));\n            }\n            // Add copies for each column\n            for (int col = 0; col < 3; col++) {\n                set.add(new AddThirdO(new O(p[0], col), new O(p[1], col), new O(p[2], col)));\n            }\n            // Add copies for the main diagonal\n            set.add(new AddThirdO(new O(p[0], p[0]), new O(p[1], p[1]), new O(p[2], p[2])));\n            // Add copies for the inverse diagonal\n            set.add(new AddThirdO(new O(p[0], 2 - p[0]), new O(p[1], 2 - p[1]), new O(p[2], 2 - p[2])));\n        }\n        return set;\n    }\n}\n```\n\n## B. Dynamic b-threads\n\n• B-threads are normally instantiated and added to the b-program in the runBApplication method.\n• However, b-threads can also be created dynamically in two ways:\n• Any thread or b-thread can create a new instance of the desired b-thread class, call the bp.add method to add the b-thread to the b-program, and call the bthread.start method to start it. The b-thread will join the running b-threads at the next synchronization point.\n• A running Java thread which is not a b-thread may register itself as a b-thread and subsequently deregister, using the method calls XXXXXX and YYYYYY.\n• Examples:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7493085,"math_prob":0.80251944,"size":8861,"snap":"2022-40-2023-06","text_gpt3_token_len":2431,"char_repetition_ratio":0.13480863,"word_repetition_ratio":0.546875,"special_character_ratio":0.30177182,"punctuation_ratio":0.18599176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9529326,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T04:44:04Z\",\"WARC-Record-ID\":\"<urn:uuid:79f0bda1-8cfd-4c4e-ac77-e4966eda7405>\",\"Content-Length\":\"43145\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d403f3f4-51e3-445b-981a-6f3d0c0d86fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3439840c-f07c-4225-b286-6ff79c4d4c66>\",\"WARC-IP-Address\":\"132.76.150.139\",\"WARC-Target-URI\":\"http://wiki.weizmann.ac.il/bp/index.php?title=Dynamic_b-threads&diff=90&oldid=70&printable=yes\",\"WARC-Payload-Digest\":\"sha1:LLU5FUSPSA4TH73GTJXIJRUGKS6I6Q47\",\"WARC-Block-Digest\":\"sha1:KL6NOZNPKBKWUV2336QTAAJGHNT37VVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335304.71_warc_CC-MAIN-20220929034214-20220929064214-00665.warc.gz\"}"}
https://waseda.pure.elsevier.com/en/publications/on-the-wobbling-in-cone-analysis-of-fluorescence-anisotropy-decay
[ "# On the wobbling-in-cone analysis of fluorescence anisotropy decay.\n\nK. Kinosita, A. Ikegami, S. Kawato\n\nResearch output: Contribution to journal › Article\n\n131 Citations (Scopus)\n\n### Abstract\n\nInterpretation of fluorescence anisotropy decay for the case of restricted rotational diffusion often requires a model. To investigate the extent of model dependence, two models are compared: a strict cone model, in which a fluorescent probe wobbles uniformly within a cone, and a Gaussian model, where the stationary distribution of the probe orientation is of a Gaussian type. For the same experimental anisotropy decay, analysis by the Gaussian model predicts a smaller value for the rate of wobbling motion than the strict cone analysis, but the difference is 35% at most; the cone angle obtained by the strict cone analysis agrees closely with the effective width of the Gaussian distribution. The results suggest that, when only two parameters (the rate and the angular range) are extracted from an experiment, the choice of a model is not crucial as long as the model contains the essential feature, e.g., the more-or-less conical restriction, of the motion under study. Model-independent analyses are also discussed.\n\nOriginal language: English\nPages: 461-464\nNumber of pages: 4\nJournal: Biophysical Journal\nVolume: 37\nIssue: 2\nPublication date: 1982 Feb\nISSN: 0006-3495\nPublisher: Biophysical Society\n\n### Fingerprint\n\nFluorescence Polarization\nNormal Distribution\nAnisotropy\nFluorescent Dyes\n\n• Biophysics\n\n### Cite this\n\nKinosita, K., Ikegami, A., & Kawato, S. (1982). On the wobbling-in-cone analysis of fluorescence anisotropy decay. Biophysical Journal, 37(2), 461-464." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85618967,"math_prob":0.798802,"size":4018,"snap":"2019-51-2020-05","text_gpt3_token_len":940,"char_repetition_ratio":0.12406577,"word_repetition_ratio":0.7758064,"special_character_ratio":0.2269786,"punctuation_ratio":0.13243243,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95023555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T08:54:56Z\",\"WARC-Record-ID\":\"<urn:uuid:70418fa3-9060-4ccf-864a-4e04ef893ee1>\",\"Content-Length\":\"31594\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a40cbbb7-855e-433d-86b7-593067f33301>\",\"WARC-Concurrent-To\":\"<urn:uuid:23c78bdb-fb7f-4d3e-85b9-db24e839b4a7>\",\"WARC-IP-Address\":\"52.220.215.79\",\"WARC-Target-URI\":\"https://waseda.pure.elsevier.com/en/publications/on-the-wobbling-in-cone-analysis-of-fluorescence-anisotropy-decay\",\"WARC-Payload-Digest\":\"sha1:KUFSYKPFXMBVWLF6BNKUX72NB4AEANLS\",\"WARC-Block-Digest\":\"sha1:2MWPLYT6SCGCB2674CIUAJSJ6HWSBEPQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601628.36_warc_CC-MAIN-20200121074002-20200121103002-00062.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-8-polynomials-and-factoring-chapter-review-page-538/59
[ "## Algebra 1: Common Core (15th Edition)\n\n$(s-10)^{2}$\n$s^{2}-20s+100=$ ...write last term as a square. $=s^{2}-20s+10^{2}$ ...does middle term equal $2ab$? $-20s=-2(s)(10)$ yes, $=s^{2}-2(s)(10)+10^{2}$ ...write as the square of a binomial. $=(s-10)^{2}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7433765,"math_prob":1.000009,"size":238,"snap":"2020-10-2020-16","text_gpt3_token_len":106,"char_repetition_ratio":0.14102565,"word_repetition_ratio":0.0,"special_character_ratio":0.5672269,"punctuation_ratio":0.20634921,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99970317,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T18:36:05Z\",\"WARC-Record-ID\":\"<urn:uuid:4b514ae1-36c9-4285-871f-32634be78978>\",\"Content-Length\":\"91282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78029400-7308-4611-8d46-beca0b84f08f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f3f1c7b-32c2-4cde-b70f-a039adfeb333>\",\"WARC-IP-Address\":\"54.162.109.179\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-8-polynomials-and-factoring-chapter-review-page-538/59\",\"WARC-Payload-Digest\":\"sha1:TQRANFSAEYCXYXSD62SHA5DKCJYTYGLP\",\"WARC-Block-Digest\":\"sha1:VXJFCGOEWJB7VHTOHXFJSY3DOT5JSPMM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143805.13_warc_CC-MAIN-20200218180919-20200218210919-00137.warc.gz\"}"}
https://www.geeksforgeeks.org/seaborn-regression-plots/?ref=rp
[ "# Seaborn | Regression Plots\n\nThe regression plots in seaborn are primarily intended to add a visual guide that helps to emphasize patterns in a dataset during exploratory data analysis. Regression plots, as the name suggests, create a regression line between 2 parameters and help to visualize their linear relationship. This article deals with those kinds of plots in seaborn and shows ways to change the size, aspect ratio, etc. of such plots.\n\nSeaborn is not only a visualization library but also a provider of built-in datasets. Here, we will be working with one such dataset in seaborn named ‘tips’. The tips dataset contains information about people who had food at a restaurant and whether or not they left a tip. It also provides information about the gender of the people, whether they smoke, day, time and so on.\n\nLet us have a look at the dataset first before we start with the regression plots.\n\n# import the library\nimport seaborn as sns\n\n# load the dataset\ndataset = sns.load_dataset('tips')\n\n# the first five entries of the dataset\ndataset.head()\n\nOutput", null, "Now let us begin with the regression plots in seaborn.\nRegression plots in seaborn can be easily implemented with the help of the lmplot() function. lmplot() can be understood as a function that basically creates a linear model plot. lmplot() makes a very simple linear regression plot. It creates a scatter plot with a linear fit on top of it.\nSimple linear plot\n\nsns.set_style('whitegrid')\nsns.lmplot(x='total_bill', y='tip', data=dataset)\n\nOutput", null, "Explanation\nThe x and y parameters are specified to provide values for the x and y axes. sns.set_style() is used to have a grid in the background instead of the default white background. 
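The straight line that lmplot() overlays on the scatter is an ordinary least-squares fit. As a rough sketch of what that fit computes, here is the closed-form slope/intercept calculation with the standard library only (toy numbers, not the actual tips data):

```python
# Ordinary least squares for y = slope*x + intercept, the same kind of
# straight-line fit that lmplot overlays on the scatter plot.
def ols_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy points lying exactly on y = 0.15*x + 1 (a plausible tip-vs-bill trend)
bills = [10.0, 20.0, 30.0, 40.0]
tips = [0.15 * b + 1 for b in bills]
slope, intercept = ols_fit(bills, tips)
print(round(slope, 2), round(intercept, 2))  # 0.15 1.0
```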
The data parameter is used to specify the source of information for drawing the plots.\n\nsns.set_style('whitegrid')\nsns.lmplot(x='total_bill', y='tip', data=dataset, hue='sex', markers=['o', 'v'])\n\nOutput", null, "Explanation\nIn order to have better analysis capability using these plots, we can specify hue to have a categorical separation in our plot, as well as use markers that come from the matplotlib marker symbols. Since we have two separate categories, we need to pass in a list of symbols while specifying the marker.\nSetting the size and color of the plot\n\nsns.set_style('whitegrid')\nsns.lmplot(x='total_bill', y='tip', data=dataset, hue='sex', markers=['o', 'v'], scatter_kws={'s': 100}, palette='plasma')\n\nOutput", null, "Explanation\nIn this example, what seaborn is doing is calling the matplotlib parameters indirectly to affect the scatter plots. We specify a parameter called scatter_kws. We must note that the scatter_kws parameter changes the size of only the scatter points and not the regression lines; the regression lines remain untouched. We also use the palette parameter to change the color of the plot. The rest remains the same as explained in the first example.\nDisplaying multiple plots\n\nsns.lmplot(x='total_bill', y='tip', data=dataset, col='sex', row='time', hue='smoker')\n\nOutput", null, "Explanation\nIn the above code, we draw multiple plots by specifying a separation with the help of rows and columns. Each row contains the plots of tips vs the total bill for the different times specified in the dataset. Each column contains the plots of tips vs the total bill for the different genders. 
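Facet plots like the one above amount to partitioning the dataset by the col, row and hue categories and drawing one fitted scatter per group. A minimal stdlib sketch of that partitioning, using toy records whose field names mirror the tips columns:

```python
# lmplot's col=/row=/hue= faceting boils down to grouping rows of the
# dataset by category before fitting/plotting each subset.
from collections import defaultdict

records = [
    {"sex": "Male", "time": "Lunch", "smoker": "No", "total_bill": 10.0, "tip": 2.0},
    {"sex": "Male", "time": "Dinner", "smoker": "Yes", "total_bill": 25.0, "tip": 4.0},
    {"sex": "Female", "time": "Dinner", "smoker": "No", "total_bill": 30.0, "tip": 5.0},
]

facets = defaultdict(list)
for r in records:
    # one panel per (row, col) pair, one fit per hue level inside it
    facets[(r["time"], r["sex"], r["smoker"])].append((r["total_bill"], r["tip"]))

print(len(facets))  # 3
```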
A further separation is done by specifying the hue parameter on the basis of whether the person smokes.\nSize and aspect ratio of the plots\n\nsns.lmplot(x='total_bill', y='tip', data=dataset, col='sex', row='time', hue='smoker', aspect=0.6, size=4, palette='coolwarm')\n\nOutput", null, "Explanation\nIf we have a large number of plots in the output, we need to set the size and aspect in order to visualize them better.\naspect : scalar, optional. Specifies the aspect ratio of each facet, so that “aspect * height” gives the width of each facet in inches." ]
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827095613/gee27.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827103126/gee30.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827101125/gee28.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827102235/gee29.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827105400/gee31.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190827111550/gee32.png", null, "https://media.geeksforgeeks.org/auth/avatar.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7533314,"math_prob":0.87622875,"size":5634,"snap":"2020-45-2020-50","text_gpt3_token_len":1332,"char_repetition_ratio":0.13587922,"word_repetition_ratio":0.1215526,"special_character_ratio":0.22559461,"punctuation_ratio":0.0996016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913297,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T16:00:51Z\",\"WARC-Record-ID\":\"<urn:uuid:6c27a950-f2fd-4e54-aa13-11154808138e>\",\"Content-Length\":\"117711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7665ac34-5b5e-4d70-ae53-05bf61044f45>\",\"WARC-Concurrent-To\":\"<urn:uuid:f470f07b-65ec-46c8-a2fd-3c0e22224605>\",\"WARC-IP-Address\":\"23.11.231.64\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/seaborn-regression-plots/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:ZGVCTT2RP4DFOUPLYOBFYNI47AHUEFL5\",\"WARC-Block-Digest\":\"sha1:YVI5DKQUB3MO3KRJBRKJKBSPLDX6OVFG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141747887.95_warc_CC-MAIN-20201205135106-20201205165106-00078.warc.gz\"}"}
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/13/lesson/13.3.3/problem/13-128
[ "", null, "", null, "### Home > PC3 > Chapter 13 > Lesson 13.3.3 > Problem 13-128\n\n13-128.\n\nSteady Freddy always drives his car at a constant rate of $30$ miles per hour.\n\nThe velocity is the slope (or rate of change) of the position function.\nPosition can be determined by calculating the area under the curve of a velocity function.\n\n1. Write an equation for Freddy’s velocity as a function of time. Sketch a graph of this function.\n\n2. How far has Freddy traveled after $1$ hour? After $2$ hours? After $3$ hours?\n\n3. Using your answers to part (b), what is Freddy’s position as a function of time?\n\n4. How is Freddy’s position function related to the graph of Freddy’s velocity function?\n\n5. How is Freddy’s velocity function related to his position function?" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null ]
ODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1eoWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl
148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0
AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88967836,"math_prob":0.99214363,"size":682,"snap":"2022-05-2022-21","text_gpt3_token_len":155,"char_repetition_ratio":0.20058997,"word_repetition_ratio":0.0,"special_character_ratio":0.23900293,"punctuation_ratio":0.10218978,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98862493,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T06:44:55Z\",\"WARC-Record-ID\":\"<urn:uuid:28cf711d-2c7d-43ca-b1c1-14731c8c1e66>\",\"Content-Length\":\"36188\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8494c12a-bd56-4895-9c11-9b30afa99c6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d06069d-5b2d-4c11-bd4b-6666a5538ed7>\",\"WARC-IP-Address\":\"104.26.7.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/13/lesson/13.3.3/problem/13-128\",\"WARC-Payload-Digest\":\"sha1:WOBM4QELZNYIYD47SEAZRD7CFQNCP57R\",\"WARC-Block-Digest\":\"sha1:R6MI4XKBKBP3WFFQBXIXY3LSLD4PKR4W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521152.22_warc_CC-MAIN-20220518052503-20220518082503-00425.warc.gz\"}"}
https://metanumbers.com/107487
# 107487 (number)

107,487 (one hundred seven thousand four hundred eighty-seven) is an odd six-digit composite number following 107486 and preceding 107488. In scientific notation, it is written as 1.07487 × 10⁵. The sum of its digits is 27. It has a total of 5 prime factors and 10 positive divisors. There are 71,604 positive integers (up to 107487) that are relatively prime to 107487.

## Basic properties

- Is Prime? No
- Number parity: Odd
- Number length: 6
- Sum of Digits: 27
- Digital Root: 9

## Name

- Short name: 107 thousand 487
- Full name: one hundred seven thousand four hundred eighty-seven

## Notation

| Scientific notation | Engineering notation |
| --- | --- |
| 1.07487 × 10⁵ | 107.487 × 10³ |

## Prime Factorization of 107487

Prime factorization: 3⁴ × 1327

| Function | Value | Description |
| --- | --- | --- |
| ω(n) | 2 | Total number of distinct prime factors |
| Ω(n) | 5 | Total number of prime factors |
| rad(n) | 3981 | Product of the distinct prime numbers |
| λ(n) | −1 | Returns the parity of Ω(n), such that λ(n) = (−1)^Ω(n) |
| μ(n) | 0 | Returns 1 if n has an even number of prime factors (and is square-free), −1 if it has an odd number (and is square-free), and 0 if n has a squared prime factor |
| Λ(n) | 0 | Returns log(p) if n is a power pᵏ of a prime p (for any k ≥ 1), else returns 0 |

The prime factorization of 107,487 is 3⁴ × 1327.
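The factorization above is easy to verify programmatically. The sketch below is an illustrative Python snippet (not part of the original page) using plain trial division, which is more than fast enough for a six-digit number:

```python
def factorize(n):
    """Trial-division prime factorization; returns {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:  # whatever remains after trial division is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(107487))  # {3: 4, 1327: 1}, i.e. 3^4 × 1327
```

From the result, ω(n) is the number of distinct keys (2) and Ω(n) is the sum of the exponents (4 + 1 = 5), matching the table above.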
Since it has a total of 5 prime factors, 107,487 is a composite number.

## Divisors of 107487

107,487 has 10 divisors.

| Even divisors | Odd divisors | 4k+1 divisors | 4k+3 divisors |
| --- | --- | --- | --- |
| 0 | 10 | 5 | 5 |

| Function | Value | Description |
| --- | --- | --- |
| τ(n) | 10 | Total number of the positive divisors of n |
| σ(n) | 160688 | Sum of all the positive divisors of n |
| s(n) | 53201 | Sum of the proper positive divisors of n |
| A(n) | 16068.8 | Arithmetic mean of divisors: σ(n) divided by τ(n) |
| G(n) | 327.852 | Geometric mean of divisors: the τ(n)-th root of the product of the divisors |
| H(n) | 6.68917 | Harmonic mean of divisors: τ(n) divided by the sum of the reciprocals of the divisors |

The number 107,487 can be divided by 10 positive divisors (out of which 0 are even, and 10 are odd). The sum of these divisors (counting 107,487) is 160,688; the average is 16,068.8.

## Other Arithmetic Functions (n = 107487)

| Function | Value | Description |
| --- | --- | --- |
| φ(n) | 71604 | Euler totient: total number of positive integers not greater than n that are coprime to n |
| λ(n) | 11934 | Carmichael lambda: smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n |
| π(n) | ≈ 10209 | Total number of primes less than or equal to n |
| r₂(n) | 0 | The number of ways n can be represented as the sum of 2 squares |

There are 71,604 positive integers (less than 107,487) that are coprime with 107,487.
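The divisor and totient values above all follow from the factorization 3⁴ × 1327 via standard multiplicative formulas. A minimal sketch (the helper names are illustrative, and the factorization is assumed as input):

```python
from math import prod

def divisor_stats(factors):
    """tau(n) and sigma(n) from a {prime: exponent} factorization."""
    tau = prod(e + 1 for e in factors.values())
    # sigma(n) is the product over primes of (1 + p + ... + p^e)
    sigma = prod(sum(p**k for k in range(e + 1)) for p, e in factors.items())
    return tau, sigma

def totient(factors):
    """Euler phi: product of p^(e-1) * (p - 1) over the prime powers."""
    return prod(p**(e - 1) * (p - 1) for p, e in factors.items())

f = {3: 4, 1327: 1}                   # factorization of 107487
n = prod(p**e for p, e in f.items())
tau, sigma = divisor_stats(f)
print(tau, sigma, sigma - n, totient(f))  # 10 160688 53201 71604
```

Here `sigma - n` is the aliquot sum s(n); since 53,201 < 107,487, the number is deficient.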
And there are approximately 10,209 prime numbers less than or equal to 107,487.

## Divisibility of 107487

| m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| n mod m | 1 | 0 | 3 | 2 | 3 | 2 | 7 | 0 |

The number 107,487 is divisible by 3 and 9.

## Classification of 107487

- Deficient
- Frugal

### Expressible via specific sums

- Polite
- Non-hypotenuse

## Base conversion (107487)

| Base | System | Value |
| --- | --- | --- |
| 2 | Binary | 11010001111011111 |
| 3 | Ternary | 12110110000 |
| 4 | Quaternary | 122033133 |
| 5 | Quinary | 11414422 |
| 6 | Senary | 2145343 |
| 8 | Octal | 321737 |
| 10 | Decimal | 107487 |
| 12 | Duodecimal | 52253 |
| 20 | Vigesimal | d8e7 |
| 36 | Base36 | 2axr |

## Basic calculations (n = 107487)

### Multiplication

| n×2 | n×3 | n×4 | n×5 |
| --- | --- | --- | --- |
| 214974 | 322461 | 429948 | 537435 |

### Division

| n÷2 | n÷3 | n÷4 | n÷5 |
| --- | --- | --- | --- |
| 53743.5 | 35829 | 26871.8 | 21497.4 |

### Exponentiation

| n² | n³ | n⁴ | n⁵ |
| --- | --- | --- | --- |
| 11553455169 | 1241846235750303 | 133482326342092818561 | 14347614811532530788666207 |

### Nth Root

| ²√n | ³√n | ⁴√n | ⁵√n |
| --- | --- | --- | --- |
| 327.852 | 47.5465 | 18.1067 | 10.1454 |

## 107487 as geometric shapes

### Circle

| Diameter | Circumference | Area |
| --- | --- | --- |
| 214974 | 675361 | 3.62962e+10 |

### Sphere

| Volume | Surface area | Circumference |
| --- | --- | --- |
| 5.20183e+15 | 1.45185e+11 | 675361 |

### Square

Length = n

| Perimeter | Area | Diagonal |
| --- | --- | --- |
| 429948 | 1.15535e+10 | 152010 |

### Cube

Length = n

| Surface area | Volume | Space diagonal |
| --- | --- | --- |
| 6.93207e+10 | 1.24185e+15 | 186173 |

### Equilateral Triangle

Length = n

| Perimeter | Area | Height |
| --- | --- | --- |
| 322461 | 5.00279e+09 | 93086.5 |

### Triangular Pyramid

Length = n

| Surface area | Volume | Height |
| --- | --- | --- |
| 2.00112e+10 | 1.46353e+14 | 87762.8 |
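The base conversion table can be reproduced with a short repeated-division helper. This is an illustrative sketch (not the site's code); digits beyond 9 are rendered as lowercase letters, matching the values on this page:

```python
import string

DIGITS = string.digits + string.ascii_lowercase  # '0'..'9' then 'a'..'z'

def to_base(n, base):
    """Render a non-negative integer n as a string in the given base (2..36)."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)   # peel off the least-significant digit
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(107487, 2))   # 11010001111011111
print(to_base(107487, 20))  # d8e7
print(to_base(107487, 36))  # 2axr
```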