url (string) | text (string) | date (timestamp[s]) | meta (dict)
---|---|---|---
http://gqyt.caretom.pw/standard-normal-distribution-pdf.html | # Standard Normal Distribution Pdf
If Z ~ N(0, 1), then Z is said to follow a standard normal distribution. The chart below shows the binomial distribution of 20 trials with a 50% likelihood repeated, vs the normal distribution using the same mean and standard deviation. +1 standard deviation from the mean 2. where $$\phi$$ is the cumulative distribution function of the standard normal distribution and Φ is the probability density function of the standard normal distribution. 063 Summer 2003 55 Standardized Normal DistributionStandardized Normal Distribution Value x from RV X N(PP ,VV ): z Score transformation: z Score transformation: computed by the Z Formula. Equivalently, X=eY where Y is normally distributed with mean μ and standard deviation σ. Since the normal distribution is continuous, the value of normalpdf( doesn't represent an actual probability - in fact, one of the only uses for this command is to draw a graph of the normal curve. The parameters denote the mean and the standard deviation of the population of interest. The distribution is the cdf. dnorm gives the density, pnorm gives the distribution function, qnorm gives the quantile function, and rnorm generates random deviates. The Normal Distribution is a *shape*, and the standard deviation is a *number. 1 Exercises 1. Linear combinations of Xand Y (such as Z= 2X+4Y) follow a normal distribution. However, we can see. The lecture entitled Normal distribution values provides a proof of this formula and discusses it in detail. Application. Remarks: 1. The main aim of this topic is to study and observe the difference between the normal distribution and lognormal distribution using R commands. Then the 95th percentile for the normal distribution with mean 2. 96 for 95% con dence intervals for proportions. The equation for the normal density function (cumulative = FALSE) is: When cumulative = TRUE, the formula is the integral from negative infinity to x of the given formula. Equivalently, X=eY where Y is normally distributed with mean μ and standard deviation σ. Find the following areas under a normal distribution curve with µ = 20 and s = 4. @S24601LesMis Best wishes to all and thanks for using the site. This area is shown in Figure A-1. What is nice about the normal distribution is that it is very intuitive: Roughly two thirds of the time, returns are within one standard deviation away from the mean (average) return; more than 95% of the time, returns are within. These numbers follow what is called the Empirical Rule and is the same for each distribution. Fitting distributions with R 6 [Fig. The data lies equally distributed on each side of the center. 50 and lower. Value of specific percentile (P. Normal distribution with parameters μ and σ is denoted as (,). The total area under a standard normal distribution curve is 100% (that's "1" as a decimal). 645(112) X = 527 + 184. The skew normal still has a normal-like tail in the direction of the skew, with a shorter tail in the other direction; that is, its density is asymptotically proportional to −. In such a regression, the intercept of the fitted linear model serves as an unbiased estimate of the mean of the distribution from which the data came, and the slope of the fitted linear model serves as an unbiased estimate of the standard deviation. Because of its unique bell shape, probabilities for the normal distribution follow the Empirical Rule, which says the following: About 68 percent of its values lie within one standard deviation of the mean. Zogheib1 and M. 
de December 15, 2015 Abstract Conventional wisdom assumes that the indefinite integral of the probability den-sity function for the standard normal distribution cannot be expressed in finite elementary terms. the normal distribution: the standard normal distribution I The standard normal distribution is the distribution of a normal variable with expected value equal to zero and variance equal to 1. How to use the Standard Normal Distribution Table 10:20. standard normal distribution: The normal distribution with a mean of zero and a standard deviation of one. compare the distribution of the sample to a normal distribution. F Distribution Tables The F distribution is a right-skewed distribution used most commonly in Analysis of Variance. To calculate the sample median, first rank the values from lowest to highest: 6. The normal distribution or "bell curve" looks like this when plotted in the IPython workbook interface: The plotted function, $f(x) = e^{-\frac{x^2}{2}}$, describes the distribution of certain naturally occurring events. The normal assumption is justifled by the Central Limit Theorem when the demand comes from many difierent independent or weakly dependent customers. Standardizing the distribution like this makes it much easier to calculate probabilities. It is expressed by the variable Z: Z ˘N(0;1) I The pdf of the standard normal looks identical to the pdf of the normal variable, except that it has. (2 marks) 4 The White Hot Peppers is a traditional jazz band. As z-value increases, the normal table value also increases. Determine the area of the normal distribution curve with. As the degrees of freedom ν goes to infinity, the t distribution approaches the standard normal distribution. The distribution will be initialized with the default random number generator. is a poor mathematician May 14 '12 at 18:07 $\begingroup$ Definite integrals of that function are found by numerical methods rather than by finding a closed-form antiderivative. We can integrate or use tables. Find the area under the standard normal curve between z 0 and z 1. However, the standard normal distribution is a special case of the normal distribution where the mean is zero and the standard deviation is 1. 98 hours per day. It contains the. Find each value, given its distance from the mean. $\begingroup$ There should be tables for the CDF of the (standard) normal distribution in the usual statistics textbooks $\endgroup$ – J. The bivariate normal PDF has severaluseful and elegant. To test formally for normality we use either an Anderson-Darling or a Shapiro-Wilk test. The normal distribution is commonly associated with the 68-95-99. For the standard normal distribution, this is usually denoted by F (z). 1) says that as nincreases, the standard normal density will do an increasingly better job of approximating the height-corrected spike graphs corresponding to a Bernoulli trials process with nsummands. Figure 1 - Probability density function for IQ. A vertical line drawn through a normal distribution at a z-score location divides the distri- bution into two sections. Normal distribution calculator Enter mean, standard deviation and cutoff points and this calculator will find the area under normal distribution curve. Table 1: Table of the Standard Normal Cumulative Distribution Function '(z)z 0. The lecture entitled Normal distribution values provides a proof of this formula and discusses it in detail. We want to find P(X > 475) so. PDF and CDF of Standard normal distribution. 
As z-value increases, the normal table value also increases. distributed with a mean of $100 and a standard deviation of$12. Column A represents this z score, Column B represents the distance between the mean of the standard normal distribution (O) and the z score, and Column C represents the. Add Shading to a Figure. If a component is chosen at random a) what is the probability that the length of this component is between 4. x! , x = 0,1,,∞ where λ is the average. 2% of values within 1 standard deviation of the mean. Since the normal distribution is continuous, the value of normalpdf( doesn't represent an actual probability - in fact, one of the only uses for this command is to draw a graph of the normal curve. If X has a log-normal distribution, then log(X) has a normal distribution. Z scores are especially informative when the distribution to which they refer is normal. Hypothesis Testing with the t Statistic. 0 Code: % MathType!MTEF!2!1!+-. P(80 < x< 115) Normal Distribution P(-1. Often only summary statistics such as mean and standard deviation or median and range are given. Word Problem #1 (Normal Distribution) Suppose that the distribution of diastolic blood pressure in a population of hypertensive women is modeled well by a normal probability distribution with mean 100 mm Hg and standard deviation 14 mm Hg. Normal Distribution In an MVO, we use the normal distribution when forming asset-class assumptions. See figure 1 below. Note that the t-distribution approaches the normal distribution with large sample size, because the critical value of t for an infinite sample size is 1. Column D identifies the proportion between the mean and the a-score. The length, in minutes, of each piece of music played by the band may be modelled by a normal distribution with mean 5 and standard deviation 1. Let Z represent a variable following a standard normal distribution. Find the percentage of viewers who watch television for more than 6 hours per day. compare the distribution of the sample to a normal distribution. The first parameter, µ, is the mean. a) Calculate the cumulative probability areas between each of the following pairs of x-values, (i. The "Normal" Probability Distribution and the Central Limit Theorem We now return to investigate the connection between the standard deviation and the "width" we defined earlier. Distribution of height in a sample of pregnant women, with the corresponding Normal distribution curve Spotting skewness Histograms are fairly unusual in published papers. Normal Standard Normal Distribution Density 11 / 33 Benchmarks The area between 1 and 1 under a standard normal curve is approximately 68%. Several different sources of data are normally distributed. Figure 1: The standard normal PDF Because the standard normal distribution is symmetric about the origin, it is immediately obvious that mean(˚(0;1;)) = 0. dnorm gives the density, pnorm gives the distribution function, qnorm gives the quantile function, and rnorm generates random deviates. Standard Normal (Z) Table Area between 0 and z Like the Student's t-Distribution, the Chi-square distribtuion's shape is determined by its degrees of freedom. follows a normal distribution with a mean of 81. The weight, X grams, of soup in a carton may be modelled by a normal random variable with mean 406 and standard deviation 4. P(Z < z) is known as the cumulative distribution function of the random variable Z. 
The distribution is parametrized by a real number μ and a positive real number σ, where μ is the mean of the distribution, σ is known as the standard deviation, and σ 2 is known as the variance. The visual way to understand it would be the following image (taken from here): The four curves are Normal d. For example, we can shade a normal distribution above 1. normal, since it is a linear function of independent normal random variables. Generally X = number of events, distributed independently in time, occurring in a fixed time interval. While a discrete PDF (such as that shown above for dice) will give you the odds of obtaining a particular outcome, probabilities with continuous PDFs are. The Standard Normal Distribution is a specific instance of the Normal Distribution that has a mean of ‘0’ and a standard deviation of ‘1’. Standard Normal Distribution. We want to find P(X > 475) so. A random variable which has a normal distribution with a mean m=0 and a standard deviation σ=1 is referred to as Standard Normal Distribution. Assume the standard deviation is 2 pounds. The following is the plot of the normal hazard function. Standard deviation and normal distribution Standard deviation is a widely used measurement of variability or diversity used in statistics and probability theory. A fourth section gives practice in the use of the Poisson Distribution as an approximation to the Binomial Distribution. Normal distribution with a mean of 100 and standard deviation of 20. That means that it corresponds to probability. The Standard Normal Distribution in R. Standard normal distribution table is used to find the area under the f(z) function in order to find the probability of a specified range of distribution. A standard normal distribution is a normal distribution with zero mean (mu=0) and unit variance (sigma^2=1), given by the probability density function and distribution function P(x) = 1/(sqrt(2pi))e^(-x^2/2) (1) D(x) = 1/2[erf(x/(sqrt(2)))+1] (2) over the domain x in (-infty,infty). pdf format 1, 2, 3 and 4 cycle papers are in the same *. provides another reason for the importance of the normal distribution. Distribution of height in a sample of pregnant women, with the corresponding Normal distribution curve Spotting skewness Histograms are fairly unusual in published papers. Integrating the PDF, gives you the cumulative distribution function (CDF) which is a function that maps values to their percentile rank in a distribution. • 68% of all data will fall within 1 standard deviation of the mean. The heights of a group of athletes are modelled by a normal distribution with mean 180 cm and standard deviation 5. “Standard Normal Distribution”. The normal random variable of a standard normal distribution is called a standard score or a z score. Cumulative Distribution Function (CDF) Calculator for the Standard Normal Distribution. To use this table with a non-standard normal distribution (either the location parameter is not 0 or the scale parameter is not 1), standardize your value by subtracting the mean and dividing the result by the standard deviation. I create a sequence of values from -4 to 4, and then calculate both the standard normal PDF and the CDF of each of those values. NormalCDF: The normalcdf function will give the probability will fall between two user defined limits on either the standard normal curve, or on any arbitrary normal curve. 
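A minimal Python sketch of the normalcdf-style calculation described above (this example is mine, not part of the original page; it assumes SciPy is available and reuses the mean-100, standard-deviation-12 figures quoted earlier on the page):
```python
# Illustrative sketch, not from the original page: area between two limits
# under a normal curve, in the spirit of the normalcdf( description above.
from scipy.stats import norm

# Standard normal: P(-1 < Z < 1), the ~68% figure of the Empirical Rule.
p_standard = norm.cdf(1) - norm.cdf(-1)

# Arbitrary normal curve with mean 100 and standard deviation 12, as in the
# text: P(80 < X < 115).
p_general = norm.cdf(115, loc=100, scale=12) - norm.cdf(80, loc=100, scale=12)

print(f"P(-1 < Z < 1)   ~ {p_standard:.4f}")   # ~0.6827
print(f"P(80 < X < 115) ~ {p_general:.4f}")
```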
edu 24 April 2008 1/36 A Review and Some Connections The Normal Distribution The Central Limit Theorem Estimates of means and proportions: uses and properties Confidence intervals and Hypothesis tests 2/36 The Normal Distribution A probability distribution. Consult the Normal Distribution Table to find an area of 0. That is b(k;n;p) ˇ P Z < k+0p:5 np npq P Z < k p0:5 np npq. The most important distribution for working with statistics is called the normal distribution. 2 The Standard Normal Distribution Chapter 7 The Normal Probability Distribution 7. ! 26 Learning. STATISTICAL TABLES 1 TABLE A. Assume that these times are Normally distributed with a standard deviation of 3. The sum of n independent X 2 variables (where X has a standard normal distribution) has a chi-square distribution with n degrees of freedom. If a household is selected at random, find the probability of its generating: a) Between 27 and 31 pounds per month. 96 MATHEMATICS MAGAZINE The Evolution of the Normal Distribution SAUL STAHL Department of Mathematics University of Kansas Lawrence, KS 66045, USA stahl@math. Solutions to Normal Distribution Problems 1. The normal distribution has the familiar bell shape, whose symmetry makes it an appealing choice for many popular models. The PROBNORM function returns the probability that an observation from the standard normal distribution is less than or equal to x. The standard normal curve N(0,1) has a mean=0 and s. The normal distribution is symmetrical about its mean: The Standard Normal Distribution. One of the main reasons for that is the Central Limit Theorem (CLT) that we will discuss later in the book. 2 weeks ago. Cumulative Distribution Function The formula for the cumulative. conversion that allows us to standardize any normal distribution so that the methods of the previous lesson can be used. Normal distribution is the continuous probability distribution defined by the probability density function,. The normal distributions shown in Figures 1 and 2 are specific examples of the general rule that 68% of the area of any normal distribution is within one standard deviation of the mean. Standard Normal (Z) Table Area between 0 and z Like the Student's t-Distribution, the Chi-square distribtuion's shape is determined by its degrees of freedom. Could be called a "normalized frequency distribution function", where area under the graph equals to 1. If the empirical data come from the population with the choosen distribution, the points should fall approximately along this reference line. 2 Normal Demand Distribution An important special case arises when the distribution D is normal. Now, the standardized version of X is: ~ has a standard normal distribution This means, whatever µ is, we have: All About Student’s t-test Page 3 of 17. 3 Normal Distribution The normal distribution has several advantages over the other distributions. Mean and variances of the normal distribution are given, and a probability is to be calculated for a specific scenario (by far the easiest): Eg: The mass of sugar in a 1kg bag may be assumed to have a normal distribution with mean 1005g and standard deviation 2g. In statistics, such data sets are said to have a normal distribution. The given negative z score chart is used to look up standard normal probabilities. When to Use the T-Distribution vs. x! , x = 0,1,,∞ where λ is the average. Like any other normal curve, it is bilaterally symmetrical, and has a bell shape. This is the "bell-shaped" curve of the Standard Normal Distribution. 
3 Normal Distribution The normal distribution has several advantages over the other distributions. , 90 is an F). 0 and standard deviation 1. 7% are within 3 standard deviations. assumption that the measurement errors have a normal probability distribution. 1 Exercises 1. Acceptance-rejection techniques: If you simulate normal variates and throw away the negative values, the remaining data follow a truncated normal distribution. 7% are within 3 standard deviations from the mean. 5 = + + + + + + years. dnorm gives the density, pnorm gives the distribution function, qnorm gives the quantile function, and rnorm generates random deviates. Sep 23, 2017 · I am looking to create a standard normal distribution (mean=0, Std Deviation=1) curve in python and then shade area to the left, right and the middle of z-score(s). The probability that X lies between a and b is written as:. The greater the departure from. 7% are within three standard deviations. While a discrete PDF (such as that shown above for dice) will give you the odds of obtaining a particular outcome, probabilities with continuous PDFs are matters of range, not discrete points. The meanm X and standard deviation s X are the two parameters to determine a normal distribution. The Normal Distribution. $\begingroup$ @SLD if you need the pdf, you need to modify the question to ask for the density rather than the distribution. As a result of this fact, our knowledge about the standard normal distribution can be used in a number of applications. The normal distribution has density f(x) = 1/(√(2 π) σ) e^-((x - μ)^2/(2 σ^2)) where μ is the mean of the distribution and σ the standard deviation. 2 The Standardized Normal Distribution The standardized normal distribution is a particular normal distribution, in that it has a mean of 0 and a standard deviation of 1. click on below to reveal answers for above question. I then plot these next to each other. The standard normal distribution has zero mean and unit standard deviation. We begin with a brief reminder of basic concepts in probability for random variables that are scalars and then generalize them for random variables that are vec-tors. However, we can see. We know that the central chi-square distribution with p degrees of freedom is the distribution of the sum of the squares of pindependent standard normal random variables, i. x! , x = 0,1,,∞ where λ is the average. Standard normal distribution table is used to find the area under the f(z) function in order to find the probability of a specified range of distribution. Note: The normal distribution table, found in the appendix of most statistics texts, is based on the standard normal distribution, which has a mean of 0 and a standard deviation of 1. The sum of n independent X 2 variables (where X has a standard normal distribution) has a chi-square distribution with n degrees of freedom. 3413, is the same as stating that the _____ of randomly selecting a standard normally distributed variable z with a value between 0 and 1. To find the probability associated with a normal random variable, use a graphing calculator, an online normal distribution calculator, or a normal distribution table. The standard normal distribution is centered at zero and the degree to which a given measurement deviates from the mean is given by the standard deviation. If the rv X is normally distributed with expectation μ and standard deviation σ, one denotes: ∼ (,). This setup sets the parameters k,8 approximately to their prior esti-mate, 0. 
Use the sliders to change the mean and standard deviation of the distribution. To compute probabilities from the normal distribution, How to use the normal distribution Example. ! 26 Learning. To use this table with a non-standard normal distribution (either the location parameter is not 0 or the scale parameter is not 1), standardize your value by subtracting the mean and dividing the result by the standard deviation. (a) Find the proportion that is less than z=2. Multivariate Normal Distribution In this lesson we discuss the multivariate normal distribution. Student’s t-test, in statistics, a method of testing hypotheses about the mean of a small sample drawn from a normally distributed population when the population standard deviation is unknown. Normal( , , x, ) If Cumulative is true, creates cumulative distribution function of normal distribution with mean μ and standard deviation σ, otherwise creates pdf of normal distribution. People's heights, weights and IQ scores are all roughly bell-shaped and symmetrical around a mean. The absolute values of the system’s response peaks, however, will have a Rayleigh distribution. The area under any normal probability density function within k of is the same for any normal distribution, regardless of the mean and variance. Equivalently, X=eY where Y is normally distributed with mean μ and standard deviation σ. 4 represents the area under the standard normal curve in the normal distribution graph. This is the left-tailed normal table. Standard Normal Distribution. t Table cum. Use the following information to answer the next two exercises: The patient recovery time from a particular surgical procedure is normally distributed with a mean of 5. The Standard Normal Distribution in R. STANDARD NORMAL DISTRIBUTION: Table Values Represent AREA to the LEFT of the Z score. Sampling and Normal Distribution Student Worksheet Statistics and Math Revised October 2017 www. 3% of the data is within ±1S (therefore 31. The differential entropy of the normal distribution can be found without difficulty. pˆ pˆ pˆ pˆ 25 (1 ). Moments of the Standard Normal Probability Density Function Sahand Rabbani We seek a closed-form expression for the mth moment of the zero-mean unit-variance normal distribution. That’s what we do. The raw scores must first be transformed into a z score. The multivariate normal distribution has two or more random variables — so the bivariate normal distribution is actually a special case of the multivariate normal distribution. 3 Normal (Gaussian) Distribution The normal distribution is by far the most important probability distribution. provides another reason for the importance of the normal distribution. Explore the normal distribution: a histogram built from samples and the PDF (probability density function). Properties of the Gamma Function: (i) γ(x+1) = xΓ(x) (ii) γ(n+1) = n! (iii) γ(1/2) = √ π. Compute the pdf for a standard normal distribution. When referencing the F distribution, the numerator degrees of freedom are always given first , as switching the order of degrees of freedom changes the distribution (e. with a standard deviation of 3:5 miles per hour. The shape of the logistic distribution and the normal distribution are very similar, as discussed in Meeker and Escobar. If you convert normally distributed xdata into z-scores, you will have a standard normal dis-tribution. The visual way to understand it would be the following image (taken from here): The four curves are Normal d. 
Find the following areas under a normal distribution curve with µ = 20 and s = 4. When the distribution is called the standard normal distribution. It has also applications in modeling life data. Value of specific percentile (P. ! Whatproportion!of!the!scores!are!below!12. Similar to our discussion on normal random variables, we start by introducing the standard bivariate normal distribution and then obtain the general case from the standard. Properties of a normal distribution Continuous and symmetrical, with both tails extending to infinity; arithmetic mean, mode, and median are identical. 20 pounds and standard deviation 0. This distribution produces random numbers around the distribution mean (μ) with a specific standard deviation (σ). (North-Holland). The conditional distribution of Xgiven Y is a normal distribution. Econ: MATHEMATICAL STATISTICS, 1996 The Moment Generating Function of the Normal Distribution Recall that the probability density function of a normally distributed random. Because of its unique bell shape, probabilities for the normal distribution follow the Empirical Rule, which says the following: About 68 percent of its values lie within one standard deviation of the mean. 1 Approximations of the Standard Normal Distribution B. Proof Let X1 and X2 be independent standard normal random. Application. The mean of a Normal distribution is the center of the symmetric Normal curve. The area between 2 and 2 under a standard normal curve is approximately 95%. The standard normal curve N(0,1) has a mean=0 and s. A vertical line drawn through a normal distribution at a z-score location divides the distri- bution into two sections. When referencing the F distribution, the numerator degrees of freedom are always given first , as switching the order of degrees of freedom changes the distribution (e. The only change you make to the four norm functions is to not specify a mean and a standard deviation — the defaults are 0 and 1. 3413, is the same as stating that the _____ of randomly selecting a standard normally distributed variable z with a value between 0 and 1. Like any other normal curve, it is bilaterally symmetrical, and has a bell shape. The only change you make to the four norm functions is to not specify a mean and a standard deviation — the defaults are 0 and 1. [ 482 ] THE DISTRIBUTION OF THE RATIO, IN A SINGLE NORMAL SAMPLE, OF RANGE TO STANDARD DEVIATION BY H. 5 > qnorm (c (. Press 2nd then, VARS keys to access the DISTR (distributions) menu. The distribution is defined by the mean, μ , and standard deviation, σ. The normal random variable of a standard normal distribution is called a standard score or a z score. While this is true, there is an expression for this anti-derivative. standardized normal random variable Z and were able to get our answers by going directly to the normal distribution table. In statistics, such data sets are said to have a normal distribution. The normal approximation to the binomial distribution holds for values of x within some number of standard deviations of the average value np, where this number is of O(1) as n → ∞, which corresponds to the central part of the bell curve. (This is where the CLT comes in, because it tells the cond itions under which the sampling distribution of X is approximately normal. If you input the mean, μ, as 0 and standard deviation, σ, as 1, the z-score will be equal to X. Free Probability Density Function and Standard Normal Distribution calculation online. Standard Normal Distribution Pdf. 
| 2020-04-02T16:03:52 | {
"domain": "caretom.pw",
"url": "http://gqyt.caretom.pw/standard-normal-distribution-pdf.html",
"openwebmath_score": 0.866352915763855,
"openwebmath_perplexity": 315.23136305313596,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9896718477853188,
"lm_q2_score": 0.8479677583778258,
"lm_q1q2_score": 0.8392098182961576
} |
https://math.stackexchange.com/questions/2185760/does-mathop-lim-limits-x-to-infty-fx-infty-leftrightarrow | # Does $\lim\limits_{x \to +\infty } f'(x) = + \infty \Leftrightarrow \lim\limits_{x \to +\infty } \frac{f(x)}{x} = + \infty$?
Let $f:\Bbb R \to \Bbb R$ be a differentiable function. If $\lim\limits_{x \to + \infty } \frac{f(x)}{x} = + \infty$, is it always true that $\lim\limits_{x \to + \infty } f'(x) = + \infty$? How about the converse?
For example, $\lim\limits_{x \to + \infty } \frac{\ln x}{x} = 0$ is finite, and we can see that $\lim\limits_{x \to + \infty } (\ln x)' = 0$ is finite as well. $\lim\limits_{x \to + \infty } \frac{x^2}{x} = + \infty$ and $\lim\limits_{x \to + \infty } (x^2)' = \lim\limits_{x \to + \infty } 2x = + \infty$. So the claim seems good to me, but I don't know how to actually prove it. Writing $\lim\limits_{x \to + \infty } f'(x) = \lim\limits_{x \to \infty } \lim\limits_{h \to 0} \frac{f(x + h) - f(x)}{h}$, I don't know how to deal with this mixed limit. Also, since the limits in the proposition diverge, it looks like a mean value theorem argument cannot be applied directly here.
• Can someone explain to me why we have two completely opposed answers on this post? And both have the highest upvotes – Guy Fsone Jan 29 '18 at 16:28
• @GuyFsone Could you explain how they're opposed? – MathematicsStudent1122 Jan 30 '18 at 19:03
• The first answer proves a theorem but the second has a counterexample – Guy Fsone Jan 30 '18 at 19:43
• @GuyFsone The second answer isn't a counterexample to the theorem I have in the yellow box. $\lim \sup$ is different from $\lim$. I actually make explicitly clear in my answer that $$\frac{f(x)}{x} \to \infty$$ doesn't imply $f' \to \infty$. – MathematicsStudent1122 Jan 31 '18 at 19:58
• @MathematicsStudent1122 ok thanks you are right I did not pay enough attention – Guy Fsone Jan 31 '18 at 20:00
The "left to right" of the biconditional is true. As noted in another answer, we can use L'Hôpital's rule, but I will utilize a direct approach. We need to show that for arbitrarily large $M$, we have for sufficiently large $x$ the inequality $\frac{f(x)}{x} > M$.
By assumption, for any arbitrarily large $M$ there is an $x_0$ such that $f'(x) > 2M$ when $x>x_0$. By the mean value theorem, this means $f(x) \geq f(x_0) + 2M(x-x_0)$ for $x > x_0$. Note also that there is an $x_1$ such that for all $x > x_1$, $f(x_0) + 2M(x-x_0) > Mx$. Hence, we can see that for $x > \max\{x_1, x_0\}$ we have $$\frac{f(x)}{x} \geq \frac{f(x_0) + 2M(x-x_0)}{x} > \frac{Mx}{x} = M$$
The "right to left" of the biconditional is false. Consider $f(x) = x^2(\sin x + 2)$. This is positive and bounded below by $x^2$, hence $\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty$ but $f'$ oscillates as $x \to +\infty$.
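A quick numerical illustration of this counterexample (my own sketch, not part of the original answer): for $f(x) = x^2(\sin x + 2)$ the ratio $f(x)/x$ keeps growing, while $f'(x)$ is hugely positive at some points and hugely negative at others.
```python
# Sketch, not from the original answer: sample f(x)/x and f'(x) for
# f(x) = x^2 (sin x + 2) at points where cos x = +1 and cos x = -1.
import math

def f(x):
    return x**2 * (math.sin(x) + 2)

def fprime(x):
    # product rule: f'(x) = 2x(sin x + 2) + x^2 cos x
    return 2 * x * (math.sin(x) + 2) + x**2 * math.cos(x)

for k in (10, 100, 1000):
    x_pos = 2 * k * math.pi        # cos x = 1:  f'(x) = 4x + x^2, large positive
    x_neg = (2 * k + 1) * math.pi  # cos x = -1: f'(x) = 4x - x^2, large negative
    print(f"f(x)/x = {f(x_pos)/x_pos:>12.1f}, f' = {fprime(x_pos):>14.1f}  (x = 2*{k}*pi)")
    print(f"f(x)/x = {f(x_neg)/x_neg:>12.1f}, f' = {fprime(x_neg):>14.1f}  (x = (2*{k}+1)*pi)")
```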
We can say something weaker, however, namely the following
Theorem: Let $f \in C^1(\mathbb{R})$ such that $$\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty$$ Then we have $$\limsup_{x \to +\infty} \ f'(x) = +\infty$$
To prove this, first note that for $f$, we can assume $f(0) = 0$ without any loss of generality. Indeed, define $g(x) = f(x) - f(0)$ and note $\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty \Longleftrightarrow \lim_{x \to +\infty} \frac{g(x)}{x} = +\infty$ and also $f' = g'$.
We can prove by contradiction. Suppose the $\lim \sup$ is finite or $-\infty$. This means $f'$ is bounded above in $[M, +\infty)$ for some $M>0$. Since $f'$ is continuous, by the extreme value theorem it is bounded above in $[0,M]$, and hence it is bounded above in $[0, +\infty)$. By the mean value theorem, we have that $\frac{f(x)}{x} = f'(\alpha)$ for some $\alpha$ in $[0, x]$. Letting $x \to +\infty$ we can see that $f'(\alpha)$ takes on arbitrarily large positive values, which contradicts the fact that $f'$ is bounded above in $[0, +\infty)$.
This can probably be modified so that the $C^1$ condition can be relaxed (e.g., to allow for cases where $f'$ is discontinuous), but I'm not sure how to do that.
• Thanks a lot! This is very helpful. Will the converse the theorem hold? That is $\limsup_{x \to +\infty} \ f'(x) = +\infty$ implies $\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty$? Thanks! – Tuyet Mar 14 '17 at 4:13
• @Tuyet No, $f(x) = x\sin x$. – MathematicsStudent1122 Mar 14 '17 at 4:15
• I am so dumb :( I cannot figure out why $f'(x) = \mathop {\lim }\limits_{x \to {x_0}} \frac{{f(x) - f({x_0})}}{{x - {x_0}}} > 2M$ for $x>x_0$ implies $\frac{{f(x) - f({x_0})}}{{x - {x_0}}} > 2M$ for $x>x_0$. I can see here for every $x_1$ that is sufficiently near each $x > x_0$, then we have $\frac{{f({x_1}) - f({x_0})}}{{{x_1} - {x_0}}}>2M$ by limit's property. – Tuyet Mar 14 '17 at 4:47
• how can we say from here that $x>x_0$ implies $\frac{{f(x) - f({x_0})}}{{x - {x_0}}} > 2M$? – Tuyet Mar 14 '17 at 4:53
• You can indeed get rid of the $C^1$ assumption. If $\lim_{x\to +\infty}\frac{f(x)}{x}=+\infty$ then $\lim_{x\to +\infty}f(x)=+\infty$. So for your $M$, $\lim_{x\to +\infty} \frac{f(x)}{f(x)-f(M)}=1$ and $\lim_{x\to +\infty} \frac{x}{x-M}=1$. This means that you could use the mean value theorem on $[M, x]$ to get that $f'(\alpha)=\frac{f(x)-f(M)}{x-M}\to +\infty$. – Peradventure Mar 14 '17 at 6:06
1. It is true that if $\lim_{x\to +\infty} f'(x)=+\infty$ then $\lim_{x\to +\infty} \frac{f(x)}{x} = + \infty$. This can be proved by using the methods given by Dr.MV's answer to your question.
2. It is in general false that if $\lim_{x\to +\infty} \frac{f(x)}{x}=+\infty$ then $\lim_{x\to +\infty} f'(x)=+\infty$.
Counter-example: Let $f(x)=x^2 + \sin x^3$. Then $\lim_{x\to +\infty} \frac{f(x)}{x} =+\infty$. But $f'(x) = 2x+3x^2\cos x^3$. Note that $f'(x)$ is continuous for all $x\in \mathbf{R}$, but since the sign of $\cos x^3$ could vary as $x$ goes to $+\infty$, $\lim_{x\to +\infty}f'(x)$ doesn't exist and is not $+\infty$.
If $\lim_{x\to \infty}f'(x)$ exists, then from L'Hospital's Rule we have
$$\lim_{x\to \infty}\frac{f(x)}{x}=\lim_{x\to \infty}f'(x)$$
regardless of whether $\lim_{x\to \infty}f(x)$ exists or not (See the note that follows Case 2 of THIS ARTICLE).
Hence, if $\lim_{x\to \infty}f'(x)=\infty$, then $\lim_{x\to \infty}\frac{f(x)}{x}=\infty$ also.
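As a quick symbolic check of this relation (my own sketch, not part of the answer; SymPy and the sample functions are arbitrary choices), one can compare the two limits directly:
```python
# Sketch, not from the answer: verify lim f(x)/x = lim f'(x) for two sample
# functions, one with an infinite limit and one with a finite limit.
import sympy as sp

x = sp.symbols('x', positive=True)
for f in (x**2 + 3*x, 3*x + sp.log(x)):
    print(sp.limit(sp.diff(f, x), x, sp.oo), sp.limit(f / x, x, sp.oo))
# prints: oo oo   and   3 3
```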
• Thanks so much for your help! For mean value theorem proof, how do we guarantee $\xi \to \infty$ as $x \to \infty$? Many thanks! – Tuyet Mar 14 '17 at 3:44 | 2019-10-15T03:53:27 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2185760/does-mathop-lim-limits-x-to-infty-fx-infty-leftrightarrow",
"openwebmath_score": 0.9641189575195312,
"openwebmath_perplexity": 130.18769488591886,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718477853187,
"lm_q2_score": 0.8479677506936878,
"lm_q1q2_score": 0.8392098106913825
} |
https://electronics.stackexchange.com/questions/413208/root-locus-and-imaginary-axis | # Root Locus and Imaginary Axis
I’m having some serious issues finding the point at which the root loci cross the imaginary axis for the following Open Loop Transfer Function (OLTF):
We have been taught to substitute s = jw and multiply by the complex conjugate in order to have Real and Imaginary segments separately. Using 1 + u + jv = 0 where u = -1 and v = 0 to solve for w and k.
Below is an image of my working up until the complex conjugate; after this point, the mathematics gives me the impression that I have either made a mistake or that there is a simpler method to solve this problem.
It may be useful to know the location at which the loci cross the Imaginary axis is 1.799 (from MatLab).
Any help is greatly welcome and thank you for any time you spend on this!
Have a Good Christmas!
Appended :
k has been introduced into the OLTF in order to find its maximum value for stability. Using u = - 1 and subbing the value for w (found from the Imaginary segment) into the Real segment.
Below is my working for finding an OLTF with Real and Imaginary segments separated:
There are several methods to find the $j\omega$-crossing points. The Routh table is one of them. First construct the closed-loop transfer function, hence $$\frac{K(1+s)}{s^4 + 4s^3 + 6s^2 + (K+4)s + K}$$
The Routh table is
$$\begin{matrix} s^4 &&&& 1 &&&& 6 &&&& K \\ s^3 &&&& 4 &&&& (K+4) &&&& 0 \\ s^2 &&&& \frac{24-(K+4)}{4} &&&& K &&&& 0 \\ s^1 &&&& \frac{-K^2+80}{-K+20} &&&& 0 &&&& 0 \\ s^0 &&&& K &&&& 0 &&&& 0 \end{matrix}$$
The $s^1$ row is the only row that can yield a row of zeros. From the preceding row, we obtain
\begin{align} \frac{-K^2+80}{-K+20} = 0 \implies K = \pm \sqrt{80}\\ \end{align}
Now we take a look at the row above $s^1$ and construct the following polynomial (i.e. an auxiliary polynomial), hence
\begin{align} \left(\frac{24-(K+4)}{4}\right) s^2 + K &= 0 \\ 2.7639 s^2 + \sqrt{80} &= 0 \\ s_{1,2} &= \pm j 1.7989 \\ \end{align}
The root locus crosses the imaginary axis at the frequency $\pm j1.7989$, at the gain $K=\sqrt{80}$.
The second approach is to consider $$1+ \frac{K(s+1)}{s^4+4s^3 +6s^2 + 4s} = 0$$ Let $s=j\omega$ and simplify the above expression, hence: $$(\omega^4-6\omega^2+K) + j (K\omega + 4\omega - 4 \omega^3) = 0$$ The left side is a single complex number, and in order for this complex number to equal zero, we need \begin{align} (\omega^4-6\omega^2+K) &= 0 \implies K = 6\omega^2-\omega^4\\ ((6\omega^2-\omega^4)\omega + 4\omega - 4 \omega^3) &= 0 \implies -\omega^5 + 2 \omega^3 + 4\omega = 0 \\ \omega_{1,2,3,4,5} &= 0,\pm 1.7989,\pm j1.1118 \\ \end{align} Discarding the zero and $\pm j1.1118$, we end up with the frequency $\pm 1.7989$ at which the root locus intersects the imaginary axis. We can compute the gain K as well, hence:
\begin{align} K &= 6\omega^2-\omega^4\\ &= 6(1.7989)^2 - (1.7989)^4 \\ &= 8.9443 \end{align}
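As a numerical cross-check of both approaches (my own sketch, not part of the original answer; it only assumes NumPy), one can confirm that at $K=\sqrt{80}$ the closed-loop polynomial has a pair of purely imaginary roots near $\pm j1.799$:
```python
# Sketch, not from the answer: roots of s^4 + 4s^3 + 6s^2 + (K+4)s + K
# at the critical gain K = sqrt(80) ~ 8.944.
import numpy as np

K = np.sqrt(80)
coeffs = [1, 4, 6, K + 4, K]   # coefficients in descending powers of s
print(np.roots(coeffs))        # two of the roots come out at ~ +/- 1.799j
```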
• Thank you! This was extremely helpful! I attempted to upvote this answer, but sadly it can’t be seen until I achieve a reputation of >15. Many thanks and hope you have an exceptional Christmas! – JakeNorms Dec 23 '18 at 15:33
• @JakeNorms, glad I could help. – CroCo Dec 25 '18 at 14:54
I don't know why "k" has been introduced halfway through your algebra but, ignoring it until told otherwise, you are nearly there. Your final formula reduces to a real term in the denominator (because that is what multiplying top and bottom by the complex conjugate does) so just concentrate on the numerator.
Expand that out and then quite simply equate the sum of all the real terms in the numerator to zero. Ignore everything else because all the real terms equating to zero marks the position on the root locus where it crosses the imaginary axis.
Then drill down to find $\omega$. And a happy xmas to you.
• Hello Andy! Thanks for your reply! I’ve added an Appended section with some clarification as to why I have included k in the OLTF and further workings for finding a final expression in separate terms of Imaginary and Real. The answer I have found is incorrect in comparison to the answer on MatLab (1.799). Any ideas where I may have gone wrong? Thanks again! – JakeNorms Dec 20 '18 at 19:09
• Double check your algebra. This isn't a math site so just go through it again line by line. For instance, in your additions I really don't think you have done it right because a real term would include $+k\omega^4$ but I rushed through it. – Andy aka Dec 20 '18 at 19:19
• K must exist in the formula since the root locus is constructed based on varying this parameter. – CroCo Dec 23 '18 at 9:15
"domain": "stackexchange.com",
"url": "https://electronics.stackexchange.com/questions/413208/root-locus-and-imaginary-axis",
"openwebmath_score": 0.9985009431838989,
"openwebmath_perplexity": 508.96540794921304,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992960608886,
"lm_q2_score": 0.8670357563664174,
"lm_q1q2_score": 0.8392032982466755
} |
https://math.stackexchange.com/questions/2880668/die-is-rolled-seven-times-which-is-the-most-likely-outcome | # Die is rolled seven times. Which is the most likely outcome?
A six sided unbiased die with four green faces and two red faces is rolled seven times. Which of the following combinations is the most likely outcome of the experiment?
(A) Three green faces and four red faces.
(B) Four green faces and three red faces.
(C) Five green faces and two red faces.
(D) Six green faces and one red face.
My answer: Six green faces and one red face.
Why I think so :
1. The probability of getting a green face (4/6) is more than the probability of getting a red face (2/6).
2. The number of throws doesn't matter because the second throw (or consecutive throws) is (are) independent of the previous throws.
3. More green faces means a more likely outcome (hence option D).
But the answer is option C, and the explanation given is: Considering uniformly distributed outcomes, we get 4 greens and 2 reds in six throws. Then in one more throw, green is more likely. So, option (C) is correct.
Why are red faces being counted in when it's a matter of most likely outcome?
• I would advise using the binomial distribution rather than intuitive handwaving. – Lord Shark the Unknown Aug 12 '18 at 20:08
• +Lord Shark the Unknown, the question asks for a 'most likely outcome' which is why I think the binomial distribution is unnecessary. – Ryu Aug 12 '18 at 20:10
• Why was this downvoted? – Shaun Aug 12 '18 at 20:13
While at first glance your reasoning seems to make sense ("green faces are more likely .. so the more green faces, the more likely that is"), it can easily be shown that there must be something fishy with it.
Take it to its logical conclusion: suppose you throw that die $1000$ times .. which is the most likely outcome? You can get $0$ green faces, .. or $1$, or $2$ ... all the way up to $1000$. According to your reasoning, out of these $1001$ possible outcomes, getting all $1000$ green faces is the most likely.
OK, but think about it ... not getting any red face?!? That's bordering on absolutely incredible! Getting a red face is not that much more unlikely ... indeed, what if you had a coin that was slightly biased (say, $50.1$%) towards heads, and you flip it $1000$ times ... don't you think it would be crazy to think that the most likely outcome is to get all heads?!
Now, in your case, we have $\frac{2}{3}$ vs $\frac{1}{3}$ ... so sure, there is a bias towards getting green faces ... but again not all that great of a bias ... and indeed getting all $1000$ green faces in $1000$ rolls should be really, really unlikely.
Indeed, just using your common sense, the probability of getting $1000$ green faces is certainly way smaller than $1$ in $1001$, and given that there are $1001$ possible outcomes, it should be clear that the most likely outcome is in fact not the outcome of getting a green face on every one of the $1000$ rolls of that die.
OK, so your reasoning does not work. But why not? Where does your reasoning go wrong?
Well, you forget about the fact that there is only and exactly $1$ way to get all green faces: you need to get a green face every time! However, there are $1000$ ways to get $999$ green faces, and $1$ red face, as the red face can be the first throw, or the second, or the third .... So, already you can see that getting $999$ green faces and $1$ red face would be far more likely than getting $1000$ green faces.
So far the intuitions. Mathematically, what is going on is this. Look at the binomial formula:
$$P(X=N)={M \choose N} \cdot p^N \cdot (1-p)^{M-N}$$
OK, so sure, getting a green face is more likely than getting a red face, i.e. $p > 1-p$, and so if you just look at the part:
$$p^N \cdot (1-p)^{M-N}$$
then that indeed will be higher, the higher $N$ is, and indeed will be highest when $N=M$.
However, this does not mean that you are most likely to get 1000 green faces, because you forget about the other part
$${M \choose N}$$
which is the number of ways to get the outcome. And again, for $N=M$, there is only one way ... but for smaller $N$'s there can be many more ways.
So, you get an interesting interplay: yes, the higher $N$ is, the higher the right part of the formula, but if $N$ gets too high, then the left part will shrink.
Now, if you do the math, it turns out that the most likely outcome is an outcome that reflects the probabilities $p$ and $1-p$, i.e. about two thirds green and one third red. So, options A and D are a bit 'out of whack' in their respective proportions, and it would have to be between B and C ... which one? The explanation provided gives the answer
• +Bram, If I change the question and and bring it down to a singe throw, one green face would be a 'more likely outcome' because p(green)>p(red) at any moment. How I am looking at this is I'm taking the 'most likely outcomes' at single throws and combining the result in case of seven throws. – Ryu Aug 12 '18 at 20:23
• @Ryu Yes, I see that ... but if you look at my example: there is only 1 way to get 600 green faces .. there are many more ways to get 400 green faces and 200 red faces ... that's the difference. – Bram28 Aug 12 '18 at 20:25
• +Bram, you mean 600 green faces? – Ryu Aug 12 '18 at 20:38
• @Ryu Sorry, I changed the numbers ...please read my updated post ... I am trying to explain, conceptually/intuitively, as well as mathematically, where your reasoning goes wrong. – Bram28 Aug 12 '18 at 20:45
Think about what your argument would mean if the die was rolled $1\,000\,000$ times and you had $500\,001$ options from $499\,999$ to $999\,999$ green faces.
In general we can say that the most likely outcome of a series of events is not the series of the most likely outcome of the individual attempts.
In this case: As a third of the die's faces are red, we would expect about a third of the rolls to come out "red".
• +Henrik, if I asked which one is more likely and gave you two options : 1. 999999 green faces and one red face 2. 999990 green faces and 10 red faces Which one would be a more likely outcome? – Ryu Aug 12 '18 at 20:16
• Both options are very unlikely, but the second is slightly less unlikely. (and if you couldn't see that yourself, I suggest you paint a die and start rolling) – Henrik Aug 12 '18 at 20:36
Strictly speaking you have a binomial distribution $\mathcal{B}in(n, p)$, where $n=7$ and $p = \frac 2 3$. The options you need to compare are:
(A) $$C_7^3 \left(\frac 2 3\right)^3 \left(\frac 1 3\right)^4$$ (B) $$C_7^4 \left(\frac 2 3\right)^4 \left(\frac 1 3\right)^3$$ (C) $$C_7^5 \left(\frac 2 3\right)^5 \left(\frac 1 3\right)^2$$ (D) $$C_7^6 \left(\frac 2 3\right)^6 \left(\frac 1 3\right)^1$$
Obviously, you don't need to compute all these numbers: First of all, you can simply ignore $3^7$ in the denominators. Comparing options (A) and (B) using the property $C_n^k = C_n^{n - k}$, we get that the answer in (B) is twice as much as in (A), so (A) is not the largest. Further, we need to compare $C_7^4 = 35$ and $2C_7^5 = 2 \cdot 21 =42$. So, (B) isn't the largest either. Finally, $2C_7^6 = 14 < C_7^5 = 21$, yielding (C) as the correct answer.
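A direct numerical check (my own sketch, not part of the answer; it assumes SciPy) confirms the comparison:
```python
# Sketch, not from the answer: evaluate the four binomial probabilities and
# confirm that option (C), five greens and two reds, is the most likely.
from scipy.stats import binom

n, p = 7, 2 / 3   # seven rolls, P(green) = 2/3
for option, k in zip("ABCD", (3, 4, 5, 6)):
    print(option, k, "greens:", round(binom.pmf(k, n, p), 4))
# (C) comes out largest, at about 0.307
```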
• To find the largest, it suffices to compute the ratios between consecutive terms. That's less work than computing the terms individually. – Lord Shark the Unknown Aug 12 '18 at 20:19
• yep, i was editing my answer. – pointguard0 Aug 12 '18 at 20:23
The expectation is $7\cdot\dfrac 46=4.666\cdots$ greens, which is closest to $5$ (the distribution is unimodal).
A six-sided unbiased die with four green faces and two red faces is rolled seven times. Which of the following combinations is the most likely outcome of the experiment?
The probability of rolling a green is $p=\frac{4}{6} =\frac{2}{3}$, and the probability of rolling a red is $\frac{2}{6}=\frac{1}{3}$. They are complements of each other.
Let $X\sim Bin(n,p)$ be a binomial distributed random variable. We let $k$ be the number of successes of rolling greens. Then $X \sim Bin(7,\frac{2}{3})$
the mass function is given by
$$f(k,7,\frac{2}{3}) =Pr(X=k) = \binom{n}{k} p^{k}(1-p)^{n-k}$$
$$\binom{n}{k} = \frac{n!}{k!(n-k)!}$$
going through these
$$Pr(X=3) = \binom{7}{3} (\frac{2}{3})^{3} (\frac{1}{3})^{4}$$ $$Pr(X=4) = \binom{7}{4} (\frac{2}{3})^{4} (\frac{1}{3})^{3}$$ $$Pr(X=5) = \binom{7}{5} (\frac{2}{3})^{5} (\frac{1}{3})^{2}$$ $$Pr(X=6) = \binom{7}{6} (\frac{2}{3})^{6} (\frac{1}{3})^{1}$$
Simplifying them all: $$Pr(X=3) =35 \frac{8}{27} \frac{1}{81}$$ $$Pr(X=4) =35 \frac{16}{81} \frac{1}{27}$$ $$Pr(X=5) = 21 \frac{32}{243} \frac{1}{9}$$ $$Pr(X=6) = 7 \frac{64}{729} \frac{1}{3}$$ Working them out,
$$Pr(X=3) =35 \frac{8}{27} \frac{1}{81} \approx .12$$ $$Pr(X=4) =35 \frac{16}{81} \frac{1}{27} \approx .25$$
$$Pr(X=5) = 21 \frac{32}{243} \frac{1}{9} \approx .30$$ $$Pr(X=6) = 7 \frac{64}{729} \frac{1}{3} \approx .20$$
Offhand $Pr(X=5)$ makes the most sense: $\frac{1}{3}$ appears only to a low power, and even though its binomial coefficient is smaller, that outweighs it.
• $7-5$ does not equal $5$ (see your $Pr(X=5)$). – Lord Shark the Unknown Aug 12 '18 at 20:23
• ahh hah oops will edit – Shogun Aug 12 '18 at 20:26 | 2019-06-20T21:17:11 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2880668/die-is-rolled-seven-times-which-is-the-most-likely-outcome",
"openwebmath_score": 0.7695024013519287,
"openwebmath_perplexity": 466.6542683306946,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.967899289579129,
"lm_q2_score": 0.8670357580842941,
"lm_q1q2_score": 0.8392032942894898
} |
https://math.stackexchange.com/questions/2251350/find-a-conjecture-for-f-1f-2-f-n | # Find a conjecture for $F_1+F_2+…+F_n$
Given that $F_n=F_{n-1}+F_{n-2}$, with initial conditions $F_1=1$ and $F_2=3$. Provide a formula, without solving for the recurrence, for $F_1+F_2+...+F_n$.
Any ideas how I should do this question without solving the recurrence? I know that if I solve the recurrence for a closed formula, then using the sum of a geometric progression I am able to get an answer. However, how would you do that without solving the recurrence?
This is what I have obtained so far:
$F_3=F_2+F_1\\F_4=F_3+F_2\\F_5=F_4+F_3\\...$
My sum is this:
$F_1+F_2=F_1+F_2\\F_1+F_2+F_3=2F_2+2F_1\\F_1+F_2+F_3+F_4=4F_2+3F_1\\F_1+F_2+F_3+F_4+F_5=7F_2+5F_1\\F_1+F_2+F_3+F_4+F_5+F_6=12F_2+8F_1$
I think I'm beginning to see a pattern here: the sum of the two coefficients (of $F_1$ and $F_2$) in the previous row gives the current coefficient of $F_2$, and the current coefficient of $F_1$ is the sum of the previous two coefficients of $F_1$.
However, how am I suppose to find a formula linking $F_1+F_2+F_3+...+F_n$?
There is an additional information in the question, but I'm not sure if the hint is useful.
The hint is: Consider $F_1^2+F_2^2+F_3^2+...+F_n^2=F_nF_{n+1}-2$
Is there anything I am missing out?
• Also, combine this question with this one. – Dietrich Burde Apr 25 '17 at 12:43
• Thanks I will look into it! The initial conditions changes, so I guess there will be some changes to the formulas as well – Icycarus Apr 25 '17 at 12:46
• Then it also would change the formula from the above hint, $F_1^2+F_2^2+F_3^2+...+F_n^2=F_nF_{n+1}-2$. – Dietrich Burde Apr 25 '17 at 13:05
• – lhf Apr 25 '17 at 13:35
• The conjecture that follows from your observations is $F_1+F_2+F_3+\cdots+F_n = (f_{n+1}-1)F_2+ f_n F_1$ where $f_n$ is the $n$-th Fibonacci number. – lhf Apr 25 '17 at 14:03
The recurrence $F_{n+2}=F_{n+1}+F_n$ implies that the characteristic polynomial of the given sequence is $x^2-x-1$, hence $$F_n = A \sigma^n + B\bar{\sigma}^n \tag{1}$$ with $A,B$ being constants depending on the initial conditions and $\sigma=\frac{1+\sqrt{5}}{2},\bar{\sigma}=\frac{1-\sqrt{5}}{2}$ being the roots of the characteristic polynomial. In particular, even without computing $A$ and $B$ we have
$$F_1+F_2+\ldots+F_N = A \sum_{n=1}^{N}\sigma^n+B\sum_{n=1}^{N}\bar{\sigma}^n = A\frac{\sigma^{N+1}-1}{\sigma-1}+B\frac{\bar{\sigma}^{N+1}-1}{\bar{\sigma}-1}\tag{2}$$ and the LHS of $(2)$ can be expressed as $C F_{N+1}+ D F_{N+2}+E$ for some constants $C,D,E$ that we may compute by interpolation. Since $\{F_1,F_2,F_3,F_4,F_5\}=\{1,3,4,7,11\}$ we have $$\left\{\begin{array}{rcl} 3C+4D+E &=& 1 \\ 4C+7D+E &=& 4 \\ 7C+11D+E&=& 8\end{array}\right.\tag{3}$$ so $\{C,D,E\}=\left\{ 0,1,-3 \right\}$ and we are done: $$F_1+F_2+\ldots+F_N = \color{red}{F_{N+2}-3}.\tag{4}$$ On the other hand, once $(4)$ is established as a conjecture it is straightforward to prove by induction.
• Oh!!! I see it now! Thank you very much!! – Icycarus Apr 25 '17 at 13:37
• @Icycarus: you're welcome. – Jack D'Aurizio Apr 25 '17 at 13:37
Every sequence satisfying the Fibonacci recurrence can be written as $$F_n = f_{n-2} F_1 + f_{n-1} F_2$$ where $f_n$ is the $n$-th Fibonacci number. This follows immediately by induction.
For your sequence, we have $$F_n = f_{n-2} +3 f_{n-1} = f_{n-2} + f_{n-1} + 2f_{n-1} = f_n + f_{n-1} + f_{n-1} = f_{n+1} + f_{n-1} =f_{n+2}-f_{n-2}$$
(Incidentally, your $F_n$ is the $n$-th Lucas number.)
Therefore, \begin{align} \sum_{i=1}^n F_i &= F_1 + \sum_{i=2}^n f_{i+2} - \sum_{i=2}^n f_{i-2} \\&= F_1 +f_{n-1}+f_{n}+f_{n+1}+f_{n+2}-f_0-f_1-f_2-f_3 \\&= f_{n-1}+f_{n}+f_{n+1}+f_{n+2}-3 \\&= f_{n+1}+f_{n+3}-3 \\&= F_{n+2}-3 \end{align}
If you happened to know that $\sum_{i=0}^{n} f_{i}=f_{n+2}-1$, then \begin{align} \sum_{i=1}^n F_i &= F_1 + \sum_{i=2}^n f_{i-2} F_1 + \sum_{i=2}^n f_{i-1} F_2 \\&= F_1 + (f_n-1)F_1 + (f_{n+1}-1)F_2 \\&= f_nF_1 + f_{n+1}F_2 - F_2 \\&= F_{n+2}-3 \end{align}
We can use finite differences.
Let $\phi(n) = \sum_{i=1}^nF_i$
$\Delta \phi(n) = \sum_{i=1}^{n+1}F_i - \sum_{i=1}^nF_i = F_{n+1}$
We know $\Delta F_n = F_{n+1} - F_n = F_{n-1}$,
So $\Delta \phi(n) = F_{n+1} = \Delta F_{n+2} \implies \sum_{i=1}^nF_i = F_{n+2} + C$ where C is a constant of summation.
For $n = 1$, we get $F_1 = F_3 + C \implies C = F_1 - F_3 = -F_2$
So $\sum_{i=1}^nF_i = F_{n+2} - F_2$ | 2020-01-20T06:54:52 | {
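As a brute-force check of the conjectured formula $F_1+F_2+\cdots+F_n = F_{n+2}-3$ derived in the answers above (my own sketch, not part of any answer):
```python
# Sketch, not from the answers: verify F_1 + ... + F_n == F_{n+2} - 3 for the
# sequence with F_1 = 1, F_2 = 3 and F_n = F_{n-1} + F_{n-2}.
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    if n == 1:
        return 1
    if n == 2:
        return 3
    return F(n - 1) + F(n - 2)

for n in range(1, 21):
    assert sum(F(i) for i in range(1, n + 1)) == F(n + 2) - 3
print("F_1 + ... + F_n == F_{n+2} - 3 holds for n = 1..20")
```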
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2251350/find-a-conjecture-for-f-1f-2-f-n",
"openwebmath_score": 0.9854790568351746,
"openwebmath_perplexity": 258.95017695462116,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992969868542,
"lm_q2_score": 0.8670357512127872,
"lm_q1q2_score": 0.8392032940613258
} |
https://math.stackexchange.com/questions/1680105/what-is-the-purpose-of-finding-a-model-to-best-fit-data | # What is the purpose of finding a model to best fit data?
So given the following question:
What is the purpose of finding a model to best fit data?
Someone answers: The purpose of fitting data to a curve is so that you can give an exact statement about what will happen in a future situation like it for which you do not have the data
Is this answer valid? If not, explain and what would you add?
I personally believe it is a valid answer, does anyone have an opinion or objection?
In addition to what was already revealed in other answers, it is important to know that the fitting results, especially in real life, are not unique. That is, you don't get one and only one relationship between your x and y. The relationship you obtain depends on the method you choose to fit the data. For example, you may use the method of Linear Least Squares or a Non-Linear Least Squares method. If the phenomenon is not linear, you will get different relations even though the input pairs are the same. For this reason and others, you can't always depend on the relationship obtained 100%, neither for the future nor even for the current set of data. The relationship obtained in many cases represents a good formula with some compromises.
Another value of representing a data set as a concrete mathematical expression is being able to describe a phenomenon concisely so that further study can be applied effectively, such as finding probabilities or the average rate of change at any given point. Again, the accuracy of the result depends on several factors.
Wikipedia Least Squares - Some information about Least Squares mentioned here.
You can use it in at least two ways. One is for interpolation/extrapolation. You have data at certain values of the independent variable (say certain times) and want an approximation (I wouldn't call it an exact statement) for the value at other values/times. A fitted model can give you that. Another use is to guide the making of a theory. If you collect some data and fit it, the functional form might guide your theory. For example, if you measure drag on an object as a function of air speed, you find it is quadratic. You can then think about what physics would cause it to be so.
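As a small illustration of this interpolation use (my own sketch, not part of the answer; the data are synthetic and the quadratic drag-vs-speed setup simply mirrors the example in the paragraph above):
```python
# Sketch, not from the answer: least-squares fit of a quadratic model to noisy
# "drag vs. air speed" data, then interpolation at a speed that was not measured.
import numpy as np

rng = np.random.default_rng(0)
speed = np.linspace(1.0, 10.0, 10)                        # measured speeds
drag = 0.5 * speed**2 + rng.normal(0.0, 0.3, speed.size)  # synthetic noisy data

coeffs = np.polyfit(speed, drag, deg=2)   # least-squares quadratic fit
model = np.poly1d(coeffs)

print("fitted coefficients:", np.round(coeffs, 3))            # close to [0.5, 0, 0]
print("interpolated drag at speed 5.5:", round(model(5.5), 3))
```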
Someone answers: The purpose of fitting data to a curve is so that you can give an exact statement about what will happen in a future situation like it for which you do not have the data
Is this answer valid? If not, explain and what would you add?
As an instructor, I would probably give half credit for this answer. Here's why: Yes, a primary purpose of the regressed model is to make future predictions. I wouldn't add anything to that -- if anything, the quoted response is already unnecessarily wordy to the point that it seems suspect. (Nonetheless, I'd be charitable and give half credit for that part.)
The real problem is the phrase "give an exact statement about what will happen in a future". That's incorrect and simply impossible. Any predictions we make are necessarily estimates (possibly expressed with a likely range of error). It should be obvious that no one is able to make "exact" predictions about the future, whether one thinks of examples like weather forecasts, financial markets, sporting events, etc.
From an engineering standpoint, it's useful to have a model fit to your data so that you can interpolate between points. If I'm measuring some variable $x$ over the course of, let's say, an airplane runway, I don't have the time, money, or patience to measure it finely. I might measure the variable every yard.
If I know what a good model for the variable is, like $y = ax^2$, I can use it to calculate what the variable would have been at some location I didn't measure. And now I can do fancy things like integrate the variable analytically, when before all I had was a set of discrete points. | 2019-06-26T13:53:59 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1680105/what-is-the-purpose-of-finding-a-model-to-best-fit-data",
"openwebmath_score": 0.4745427668094635,
"openwebmath_perplexity": 241.8709185695312,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992969868542,
"lm_q2_score": 0.8670357494949105,
"lm_q1q2_score": 0.839203292398594
} |
https://tex.stackexchange.com/questions/359759/aligning-multiple-inequality-symbols | # Aligning multiple inequality symbols
I have a question about how to align multiple symbols for a set of inequalities to make it look clean. This is what I have at the moment:
\begin{align*}
|f(x)-f(a)| &< f(a) \\
-f(a) < f(x) - f(a) &< f(a) \\
-f(a) + f(a) < f(x) &< f(a) + f(a) \\
0 < f(x) &< 2f(a) \\
\implies f(x) &> 0.
\end{align*}
Which only aligns the right inequality signs with each other:
What I would like to accomplish is similar to the result one might get when using the alignat environment. Ideally, I would like to essentially create 3 columns with the inequality signs acting as separators - but I would like the middle column to be center justified.
Also note that the top row only has one inequality sign - I would like that to be aligned with all the signs on the right. How might I go about accomplishing this?
• Related/duplicate: Align two inequalities – Werner Mar 22 '17 at 10:23
• Is there a reason why you don't just use alignat? – Skillmon Mar 22 '17 at 10:26
• It doesn't center the middle column... Was wondering if there is a clean way to do that. – CoffeeDonut Mar 22 '17 at 10:27
Like this?
With the use of array it is simple:
\documentclass{article}
\usepackage{mathtools}
\begin{document}
$\setlength\arraycolsep{1pt}
\begin{array}{rcccl}
             & ~ & |f(x)-f(a)|   & < & f(a) \\
-f(a)        & < & f(x) - f(a)   & < & f(a) \\
-f(a) + f(a) & < & f(x)          & < & f(a) + f(a) \\
0            & < & f(x)          & < & 2f(a) \\
             & ~ & \implies f(x) & > & 0
\end{array}$
\end{document}
• Well, it is certainly prettier than what I have. Thanks for the help! – CoffeeDonut Mar 22 '17 at 10:31
A solution with alignat and the eqparbox package. I took the opportunity do define an \eqmathbox command: its optional argument is a tag (M by default), and its mandatory argument is in mathmode, display style. All \eqmathboxes sharing the same tag will have their contents centred in a box of width the largest contents width. I also defined an \abs command, which adds an implicit pair of \left\lvert … \right\rvert around its argument in its starred version.
\documentclass{article}
\usepackage{mathtools}
\DeclarePairedDelimiter\abs\lvert\rvert
\usepackage{eqparbox}
\newcommand\eqmathbox[2][M]{\eqmakebox[#1]{$\displaystyle#2$}}
\begin{document}
\begin{alignat*}{2}
& \phantom{{}<{}} & \eqmathbox{\abs{ f(x)-f(a)}} & < f(a) \\
-f(a) & < & \eqmathbox{f(x)-f(a)} & < f(a) \\%
-f(a) + f(a) & < & \eqmathbox{f(x)} & < f(a) + f(a) \\
0 & < & \eqmathbox{f(x)} & < 2f(a) \\
& & \implies f(x) & > 0
\end{alignat*}
\end{document} | 2020-11-29T10:51:29 | {
"domain": "stackexchange.com",
"url": "https://tex.stackexchange.com/questions/359759/aligning-multiple-inequality-symbols",
"openwebmath_score": 0.7926329970359802,
"openwebmath_perplexity": 1896.9235838223512,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992942089574,
"lm_q2_score": 0.8670357512127872,
"lm_q1q2_score": 0.8392032916527898
} |
https://gmatclub.com/forum/there-are-three-blue-marbles-three-red-marbles-and-three-yellow-marb-233647.html | # There are three blue marbles, three red marbles, and three yellow marb
Manager
Joined: 02 Jun 2015
Posts: 192
Location: Ghana
08 Feb 2017, 05:27
There are three blue marbles, three red marbles, and three yellow marbles in a bowl. What is the probability of selecting exactly one marble of each color from the bowl after three successive marbles are withdrawn from the bowl?
A) 1/27
B) 3/56
C) 3/28
D) 9/56
E) 9/28
Intern
Joined: 25 Dec 2016
Posts: 17
Location: United States (GA)
Concentration: Healthcare, Entrepreneurship
GMAT 1: 770 Q51 V42
GPA: 3.64
WE: Medicine and Health (Health Care)
08 Feb 2017, 06:30
There are 9 marbles in the bowl, 3 of each color. The first one that you pick will be the first of that color, regardless of whether it is red, yellow, or blue. Let's say that you select a red marble. On your next selection, you must take a blue or a yellow marble. There are 3 blue marbles and 3 yellow marbles remaining, but there are only 2 red marbles remaining because you already selected one. This means you have a 6/8 chance of drawing a blue or a yellow. Let's say that you draw a blue. This means that your last marble must be a yellow. There are three yellow marbles remaining but only two red marbles and two blue marbles. This means that you have a 3/7 chance of selecting a yellow marble on your final draw.
This gives us a probability of $$1*\frac{3}{4}*\frac{3}{7}=\frac{9}{28}$$. Answer is E
SVP
Joined: 11 Sep 2015
Posts: 2049
08 Feb 2017, 06:45
duahsolo wrote:
There are three blue marbles, three red marbles, and three yellow marbles in a bowl. What is the probability of selecting exactly one marble of each color from the bowl after three successive marbles are withdrawn from the bowl?
A) 1/27
B) 3/56
C) 3/28
D) 9/56
E) 9/28
P(3 different colors) = P(1st draw is ANY color AND 2nd draw does not match 1st draw AND 3rd draw does not match 1st and 2nd draws)
= P(1st draw is ANY color) x P(2nd draw does not match 1st draw) x P(3rd draw does not match 1st and 2nd draws)
= 1 x 6/8 x 3/7
= 18/56
= 9/28
Cheers,
Brent
_________________
Brent Hanneson – Founder of gmatprepnow.com
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 1975
13 Feb 2017, 07:36
duahsolo wrote:
There are three blue marbles, three red marbles, and three yellow marbles in a bowl. What is the probability of selecting exactly one marble of each color from the bowl after three successive marbles are withdrawn from the bowl?
A) 1/27
B) 3/56
C) 3/28
D) 9/56
E) 9/28
We are given that there are three blue marbles, three red marbles, and three yellow marbles in a bowl. We need to determine the probability of selecting exactly one marble of each color from the bowl after three successive marbles are withdrawn from the bowl.
We note that one marble of each color is possible in six ways:
BRY
BYR
YBR
YRB
RBY
RYB
Each of the above scenarios has an equal chance of happening; therefore, we will find the probability that one of them (BRY) will happen and multiply the result by 6.
To draw a blue, red, and yellow marble in this specific order, we first need to draw one of the three blue marbles out of nine marbles; therefore the probability of this event is 3/9 = 1/3. Next, we need to draw one of the three red marbles out of eight remaining marbles (since a blue marble has already been drawn), which has a chance of 3/8. Finally, we need to draw one of the three yellow marbles out of seven remaining marbles (since a blue and a red marble have already been drawn), and this event has a probability of 3/7. Combining the three events, we find that drawing BRY has a probability of 1/3 x 3/8 x 3/7 = 3/56.
Since each of the remaining five events has an equal probability to the event BRY, the probability of drawing a marble of each color is 3/56 x 6 = 9/28.
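A quick Monte Carlo check of the 9/28 result (a sketch in Python, not part of the original solutions):

import random

def one_of_each():
    bowl = ["B"] * 3 + ["R"] * 3 + ["Y"] * 3
    draw = random.sample(bowl, 3)          # three successive draws without replacement
    return len(set(draw)) == 3             # exactly one marble of each color

trials = 200_000
hits = sum(one_of_each() for _ in range(trials))
print(hits / trials, "vs", 9 / 28)         # the two numbers should agree to about 2 decimal places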
_________________
Jeffery Miller
GMAT Quant Self-Study Course
500+ lessons 3000+ practice problems 800+ HD solutions
| 2018-02-18T10:16:24 | {
"domain": "gmatclub.com",
"url": "https://gmatclub.com/forum/there-are-three-blue-marbles-three-red-marbles-and-three-yellow-marb-233647.html",
"openwebmath_score": 0.7714638113975525,
"openwebmath_perplexity": 1218.1010190638872,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9678992932829918,
"lm_q2_score": 0.8670357494949105,
"lm_q1q2_score": 0.839203289187213
} |
https://www.jiskha.com/questions/1315217/integrate-sinx-cosx-2dx-using-the-substitution-u-sinx-i-know-how-to-do-this-using-u | # Calculus
Integrate sinx(cosx)^2dx using the substitution u=sinx. I know how to do this using u =cosx, but not sinx. The next problem on the homework was the same question except it asked to use u=cosx, so there couldn't have been a mistake.
1. u = sinx
du = cosx dx
so, dx = du/cosx = du/√(1-u^2)
sinx(cosx)^2 dx = u(1-u^2)/√(1-u^2) du
= u√(1-u^2) du
Now, let v = √(1-u^2)
dv = -u/√(1-u^2) du, so u du = -v dv
and you have
-v^2 dv
integrate that to get
-1/3 v^3 = -1/3 (1-u^2)^(3/2) = -1/3 (1-sin^2(x))^(3/2) = -1/3 cos^3(x) + C
That is exactly what you get letting u=cosx, so the two substitutions agree.
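A quick symbolic cross-check of that antiderivative (a sketch using SymPy):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(x) * sp.cos(x)**2, x))   # prints -cos(x)**3/3
print(sp.diff(-sp.cos(x)**3 / 3, x))               # prints sin(x)*cos(x)**2, the original integrand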
2. ok, I got it, it worked out for me to be the same thing as when I used u=cosx
| 2021-03-06T14:24:33 | {
"domain": "jiskha.com",
"url": "https://www.jiskha.com/questions/1315217/integrate-sinx-cosx-2dx-using-the-substitution-u-sinx-i-know-how-to-do-this-using-u",
"openwebmath_score": 0.8311849236488342,
"openwebmath_perplexity": 8854.757764207503,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9678992914310605,
"lm_q2_score": 0.8670357477770336,
"lm_q1q2_score": 0.8392032859187905
} |
https://stats.stackexchange.com/questions/235055/why-do-we-trust-the-p-value-when-fitting-a-regression-on-a-single-sample/235083 | # Why do we trust the p-value when fitting a regression on a single sample?
I have code below that builds a linear model for a set of data:
x = rnorm(100,5,1)
b = 0.5
e = rnorm(100,0,3)
beta_0= 2.5
beta_1= 0.5
y = beta_0 + beta_1*x + e
plot(x,y)
m1 = lm(y~x)
abline(m1)
summary(m1)
When I run this block of code multiple times, the p-value can vary from 0.05 to around ~0.7. So my question is: why do we trust that a coefficient is statistically significant based only on one sample when it can vary when fitting on a different sample?
• The phrase "trust the p-value" seems strange to me (what are we trusting that it should do?). You're aware that (i) the p-value is a random variable? (i) that under the null it has a uniform distribution? (iii) that under the alternative there's not some "population p-value" that you're estimating? (i.e. as sample sizes go up, it doesn't converge on some particular value, but just tends to get typically smaller, while still having some - albeit decreasing - chance of large values) – Glen_b -Reinstate Monica Sep 15 '16 at 5:16
• Practically, when analyzing a data set, we report the p-value of a coefficient and indicate if it is statistically significant at some alpha level. But we are analyzing only one data set (some sample from the population). If you sample 100 subjects again, fit the regression, again and report a p-value higher than the alpha level, then the coefficient is not statistically significant. I am just confused how this works in practice (i.e. medical studies when you are analyzing only one sample typically) and you interpret this difference. – zorny Sep 15 '16 at 5:50
I assume that you talk about the p-value on the estimated coefficient $\hat{\beta}_1$. (but the reasoning would be similar for $\hat{\beta}_0$).
The theory on linear regression tells us that, if the necessary conditions are fulfilled, then we know the distribution of that estimator: namely, it is normal, it has mean equal to the ''true'' (but unknown) $\beta_1$, and we can estimate the variance $\sigma_{\hat{\beta}_1}$. I.e. $\hat{\beta}_1 \sim N(\beta_1, \sigma_{\hat{\beta}_1})$
If you want to ''demonstrate'' (see What follows if we fail to reject the null hypothesis? for more detail) that the true $\beta_1$ is non-zero, then you assume the opposite is true, i.e. $H_0: \beta_1=0$.
Then by the above, you know that, if $H_0$ is true that $\hat{\beta}_1 \sim N(\beta_1=0, \sigma_{\hat{\beta}_1})$.
In your regression result you observe a value for $\hat{\beta_1}$ and you can compute its p-value. If that p-value is smaller than the significance level that you decide (e.g. 5%) then you reject $H_0$ and consider $H_1$ as ''proven''.
In your case the ''true'' $\beta_1$ is $\beta_1=0.5$, so obviously $H_0$ is false, so you expect p-values to be below 0.05.
However, if you look at the theory on hyptothesis testing, then they define ''type-II'' errors, i.e. accepting $H_0$ when it is false. So in some cases you may accept $H_0$ even though it is false, so you may have p-values above 0.05 even though $H_0$ is false.
Therefore, even if in your true model $\beta_1=0.5$ it can be that you accept the $H_0: \beta_1=0$, or that you make a type-II error.
Of course you want to minimize the probability of making such type-II errors where you accept that $H_0: \beta_1=0$ holds while in reality it holds that $\beta_1=0.5$.
The size of the type-II error is linked to the power of your test. Minimizing the type-II error means maximising the power of the test.
You can simulate the type-II error as in the R-code below:
Note that:
• if you take $\beta_1$ further from the value under $H_0$ (zero) then the type II error decreases (execute the R-code with e.g. beta_1=2) which means that the power increases.
• If you put beta_1 equal to the value under $H_0$ then you find $1-\alpha$.
R-code:
x = rnorm(100,5,1)
b = 0.5
beta_0= 2.5
beta_1= 0.5
nIter<-10000
alpha<-0.05
accept.h0<-0
for ( i in 1:nIter) {
e = rnorm(100,0,3)
y = beta_0 + beta_1*x + e
m1 = lm(y~x)
p.value<-summary(m1)$coefficients["x",4]
if ( p.value > alpha) accept.h0<- accept.h0+1
}
cat(paste("type II error probability: ", accept.h0/nIter))
"Trusting" the p-value may very well mean misunderstanding it. You make up a model with considerable error and sometimes the regression will detect the linear relation, some times not. The risk is determined by choosing the p-value-threshold alpha.
In the case you have proposed, each p-value under 0.05 is "right", and each above 0.05 lacks observations. Try larger samples than n=100, and with increasing numbers you will find a decreasing occurrence of p-values above 0.05. So your question is essentially about the power of the test.
To find a significant correlation between x and y with a power of 90% there has to be a correlation of at least r=0.31
> library(pwr)
> pwr.r.test(n=100, sig.level = 0.05, power=0.9)
approximate correlation power calculation (arctangh transformation)
n = 100
r = 0.3164205
sig.level = 0.05
power = 0.9
alternative = two.sided
The correlation of your data is somewhere around 0.16. So the problem is not the trust in p-values but that your "study" is massively underpowered.
Find a sample of n=500 to see "wrong" p-values about one in twenty:
> pwr.r.test(r=0.16, power=.95)
approximate correlation power calculation (arctangh transformation)
n = 501.0081
r = 0.16
sig.level = 0.05
power = 0.95
alternative = two.sided
Lesson learned: Never trust a not-significant p-value without a sound power analysis. | 2019-11-21T16:47:17 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/235055/why-do-we-trust-the-p-value-when-fitting-a-regression-on-a-single-sample/235083",
"openwebmath_score": 0.8587334156036377,
"openwebmath_perplexity": 1036.4373051193713,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9658995752693051,
"lm_q2_score": 0.8688267864276108,
"lm_q1q2_score": 0.8391994239930246
} |
https://www.physicsforums.com/threads/constant-velocity-question.389482/ | # Constant Velocity Question
I was recently asked this question:
If there is no net force on a system which is moving at a constant velocity, which of the following is also constant?
a) Acceleration
b) Momentum
c) Impulse
d) All of the above
My solution:
a) Acceleration must be constant via Newton's Second law. Since F=ma, a=F/m and with a force of 0, the acceleration must be constantly 0. By the definition of acceleration (change in velocity over time), there is no change in velocity, so the acceleration is 0.
b) Momentum is the measure of inertia that an object has due to its motion, so with no change in the motion, momentum is constant (p=mv).
c) Impulse is a change in momentum, so since momentum is constant, impulse must remain at 0. Also, Impulse= Force * time so with no net force, impulse is zero.
Therefore, my answer is (d)... all of the above.
My instructor disagrees and seems to have a problem with acceleration being constantly zero. Apparently "nobody refers to acceleration as being constantly 0." His choice was just momentum, (b).
Can anybody support my answer or explain the issue more clearly?
Last edited: | 2021-04-21T20:23:51 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/constant-velocity-question.389482/",
"openwebmath_score": 0.8105039596557617,
"openwebmath_perplexity": 686.4017295371715,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9658995713428385,
"lm_q2_score": 0.8688267694452331,
"lm_q1q2_score": 0.8391994041783338
} |
https://math.stackexchange.com/questions/2978661/find-lim-x-to0-frac-ln2x1-ln1-3xx-using-the-definition-of-deriva | # Find $\lim_{x\to0} \frac{\ln(2x+1)-\ln(1-3x)}{x}$ using the definition of derivative
Use the definition of derivative and find the following limit:
$$\lim_{x\to0} \dfrac{\ln(2x+1)-\ln(1-3x)}{x}$$
I do not understand what this question is asking me to do.
What does it mean to get the limit at 0 and how does that relate to the derivative using this example?
Are not the limit and the derivative at 0 going to be different?
I am really confused as to how I need to approach this question, do I take the derivative of the limit at 0?
• Please learn to use MathJax, as stated on the ask-a-question page. – user21820 Oct 31 '18 at 7:51
• Thank you, I will next time! – Josh Teal Oct 31 '18 at 14:45
Notice that $$\ln (2x + 1 ) - \ln (1-3x) = \ln \left( \frac{2x+1}{1-3x} \right )$$. Let $$f(x) = \ln \left( \frac{2x+1}{1-3x} \right )$$ and $$f(0) = \ln 1 = 0$$. Now, your limit reads as
\begin{align*} \lim_{x \to 0} \dfrac{\ln (2x + 1 ) - \ln (1-3x)}{x} &= \lim_{x \to 0} \frac{ \ln \left( \frac{2x+1}{1-3x} \right ) }{x} \\ &=\lim_{x \to 0} \frac{ f(x) - f(0) }{x-0} \\ &= f'(0) \end{align*}
Can you finish it??
• For clarification - we are manipulating our expression to look like a definition of a derivative but still solving the limit at 0, right? Also, other than the purpose of being asked, why would we want to do this? – Josh Teal Oct 31 '18 at 5:56
• Also, since you manipulative the expression to fit the def of a derivative does this mean the limit is the same as the derivative at x=0? – Josh Teal Oct 31 '18 at 6:11
Hint:
Use the following property, if $$f$$ is differentiable,
$$\lim_{h \to 0 } \frac{f(y+mh) - f(y-nh)}{(m+n)h}=f'(y)$$
Edit:
If $$f$$ is differentiable,
$$\lim_{h \to 0} \frac{f(y+h)-f(y)}{h}=f'(y) = \lim_{h \to 0}\frac{f(y)-f(y-h)}{h}$$
$$\lim_{h \to 0} \frac{f(y+mh)-f(y)}{mh}=f'(y) = \lim_{h \to 0}\frac{f(y)-f(y-nh)}{nh}$$
\begin{align}\lim_{h \to 0} \frac{f(y+mh) -f(y-nh)}{(m+n)h} &=\lim_{h \to 0} \frac{f(y+mh)-f(y)+f(y) -f(y-nh)}{(m+n)h}\\ &=\lim_{h \to 0} \frac{mh}{(m+n)h}\frac{f(y+mh)-f(y)}{mh}+\lim_{h \to 0} \frac{nh}{(m+n)h}\frac{f(y)-f(y-nh)}{nh}\\ &=\frac{m}{(m+n)}\lim_{h \to 0} \frac{f(y+mh)-f(y)}{mh}+\frac{n}{(m+n)}\lim_{h \to 0} \frac{f(y)-f(y-nh)}{nh}\\ &= \frac{m}{m+n}f'(y) + \frac{n}{m+n}f'(y)\\ &= f'(y)\end{align}
• better to use the law of logs and use the usual definition of the derivative. See my answer below. (+1) – James Oct 31 '18 at 4:15
• nice approach. =) – Siong Thye Goh Oct 31 '18 at 4:18
• Can I ask you a question, since I know you are the Linear Programming/optimization guru around here, do you have any book recommendation or webpage with problems and solutions about LP? or just a problem book. – James Oct 31 '18 at 4:20
• hmmm.... not really. this page has some recommendation. I browsed through the first few chapters of the book Introduction to Linear Optimization to prepare for my exam a few years ago. – Siong Thye Goh Oct 31 '18 at 4:31
• I do not understand where that came from - are we suppose to know this property? – Josh Teal Oct 31 '18 at 5:57
Straightforward:
$$F(x)=\ln (2x+1)- \ln (1-3x).$$
$$F(0)= 0.$$
$$\lim_{ x \rightarrow 0} \dfrac{F(x)-F(0)}{x-0}=F'(0)=$$
$$2 + 3= 5.$$
Appended:
$$F'(x) =$$
$$(\log (2x+1))' - (\log (1-3x))'=$$
$$\dfrac{1}{2x+1} \cdot (2) - \dfrac{1}{1-3x} \cdot (-3)$$.
$$F'(0)= 2-(-3)=5.$$
(Chain rule)
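A quick numerical sanity check of the value 5 (a sketch in Python that simply evaluates the original difference quotient at small x):

import math

def q(x):
    # The original quotient (ln(2x+1) - ln(1-3x)) / x.
    return (math.log(2*x + 1) - math.log(1 - 3*x)) / x

for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, q(x))   # the printed values approach 5 as x -> 0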
• Can you please add more clarification as to how you found this limit(without using l'Hopital's rule)? Where did 2 +3 come from? – Josh Teal Oct 31 '18 at 14:47
• Josh.Of course.I.put it in the answer.Give me a little time. – Peter Szilas Oct 31 '18 at 17:23
• Josh. Used ( log x)' =1/x , and chain rule. First you differentiate with respect to the argument, 1st term is 1/(2x+1) and then multiply by d/dx (2x+1)=2.Your thoughts? – Peter Szilas Oct 31 '18 at 17:35
• Oh, since you manipulated it to look like a derivative equation you can just use the derivative to calculate the limit at 0? But, saying this we can also simplify the expression and solve it like a limit at 0 right? – Josh Teal Oct 31 '18 at 21:14
• Josh.Did not manipulate much, this is the definition of the derivative of F(x) at 0, since F(0)=0.This is one way.If you do not want to use the derivative , one can try other options to find the limit to zero, as I understood you wanted the derivative, which is straight forward here. Your thoughts? – Peter Szilas Oct 31 '18 at 21:27
We have
$$\lim_{x\to0} \frac{\ln(2x+1)-\ln(1-3x)}{x}=\lim_{x\to0} \frac{\ln(2x+1)-\ln 1}{x-0}-\lim_{x\to0} \frac{\ln(1-3x)-\ln 1}{x-0}$$$$=f'(0)-g'(0)=\left(\frac2{2x+1}\right)_{(x=0)}-\left(\frac{-3}{1-3x}\right)_{(x=0)}=2-(-3)=5$$ | 2019-06-24T21:32:01 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2978661/find-lim-x-to0-frac-ln2x1-ln1-3xx-using-the-definition-of-deriva",
"openwebmath_score": 0.9966297149658203,
"openwebmath_perplexity": 679.3499544789372,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426488302759,
"lm_q2_score": 0.861538211208597,
"lm_q1q2_score": 0.8391749613141195
} |
https://math.stackexchange.com/questions/3993323/number-of-pairs-of-subsets-that-have-no-elements-in-common | # Number of pairs of subsets that have no elements in common
A set $$M$$ consists of $$n$$ elements. Determine the number of pairs of subsets of $$M$$ which have no elements in common (don't forget to account for the empty set).
If we choose a subset of one element, then there are $$(n-1)+1$$ corresponding different subsets of the same size, hence the total is $${n\choose1}+\frac12{{n-1}\choose 1}{n\choose1}$$ when we shuffle through each subset and repeat the same operation.
Then, if we choose one with two elements, then there are $$1+{{n-2}\choose1}+{{n-2}\choose2}$$ corresponding different subsets of the same size and of size = 2-1, which gives us a total of $${n\choose2}+{{n-2}\choose1}{{n}\choose2}+\frac12{{n-2}\choose2}{n\choose2}$$.
With three, we get for the one selection $${n\choose3}+{{n-3}\choose1}{n\choose3}+{{n-3}\choose2}{n\choose3}+\frac12{{n-3}\choose3}{n\choose3}$$ for the same size, size-1 and size-2.
If I am to do this for a subset with $$k$$ elements, then the total is $$s_k={n\choose k}+\frac12{{n-k}\choose k}{n\choose k}+\sum_{i=1}^{k-1}{{n-k}\choose i}{n\choose k}$$.
The big total is when I shuffle through all possible values of $$k$$, so $$\sum_{k=1}^{n-1}s_k$$.
Is there any flaw in my reasoning? Thank you for your time!
• Ordered pairs or unordered pairs? – Thomas Andrews Jan 20 at 20:45
Note: Like you, I take pairs to mean unordered pairs. If ordered pairs are intended, the calculation is a bit simpler, both via your approach and via mine.
The explanation could be a good bit clearer, but it appears to be right. However, it results in a very complicated expression that can be greatly simplified. It’s easier, however, to adopt a different approach from the start.
We can choose disjoint subsets of $$M$$ by first choosing a set $$C\subseteq M$$ and then partitioning $$C$$ into two sets. There are $$\binom{n}k$$ ways to choose a $$C\subseteq M$$ of cardinality $$k$$. $$C$$ has $$2^k$$ subsets, and if $$k>0$$, these subsets come in $$2^{k-1}$$ complementary pairs. Thus, $$C$$ can be split into two disjoint subsets in $$2^{k-1}$$ ways if $$k>0$$. If $$k=0$$, $$C=\varnothing$$, which is the union of two disjoint subsets in only one way: $$\varnothing=\varnothing\cup\varnothing$$. Altogether, then there are
\begin{align*} 1+\sum_{k=1}^n\binom{n}k2^{k-1}&=1+\frac12\sum_{k=1}^n\binom{n}k2^k\\ &=1+\frac12\left(\sum_{k=0}^n\binom{n}k2^k-\binom{n}02^0\right)\\ &\overset{*}=1+\frac12\left(3^n-1\right)\\ &=\frac12\left(3^n+1\right)\,, \end{align*}
where the starred step uses the binomial theorem applied to $$(2+1)^n$$.
• Hi professor, could I ask your assistance here, please? – Antonio Maria Di Mauro Jan 20 at 21:50
• @AntonioMariaDiMauro: I just now got there, and I think that between them Hagen and Danny have pretty well covered the ground. – Brian M. Scott Jan 20 at 21:58
• Yes, I saw. I agree with you. Thanks anyway. – Antonio Maria Di Mauro Jan 20 at 22:12
• Hello @BrianM.Scott, and thank you for your answer! I didn't find the same number of complementary subsets as you.. If I take a subset of size $i$, then I can pair it with subsets that are made of $k-i$ elements, with the special case of $\{\phi\}\cup(C-\{\phi\})$. So the total is $1+\frac12\sum_{i=1}^{k-1}{k\choose i}{{k-i}\choose{k-i}}$. Where have I mistaken? – Luyw Jan 21 at 5:20
• Never mind, it is correct. Sorry! – Luyw Jan 21 at 5:32
Number of ordered pairs: since each element is either in subset $$A$$, or subset $$B$$, or neither, it's $$3^n$$.
Number of unordered pairs: if $$A \neq B$$, the pair is counted twice in the ordered case. Only when they are both empty are they the same, which was counted once. Therefore the total number of unordered pairs is $$1+(3^n-1)/2=(3^n+1)/2$$.
The question asks for pairs of subsets that share no elements. When I read "pairs", I typically assume that means an ordered pair of sets. This would mean that the question is asking for the number of pairs of subsets $$(A, B)$$ where $$A\cap B=\varnothing$$. Your answer works if the question wants you to count the number of sets $$\{A, B\}$$ where $$A\cap B=\varnothing$$ (though you could consider the case that $$A=B=\varnothing$$, in which case you're off by $$1$$). I'll give an answer for the ordered case, which is what I assume is intended.
One approach is to revise your argument to treat the subsets as being distinct. We let $$t_k$$ count the number of pairs of subsets $$(A, B)$$ such that $$A$$ and $$B$$ have no elements in common, and $$|A|=k$$. Then $$t_k={n\choose k}\sum_{i=0}^k {n-k\choose i}.$$ That is, you first choose a subset $$A$$ of $$k$$ elements, then you choose a subset $$B$$ from the remaining $$n-k$$ elements of arbitrary size. Hence, the number of such pairs can be counted by $$\sum_{k=0}^n t_k.$$
However, the nice thing about the ordered case is that there's a much easier way of counting such pairs. Each element $$x$$ of $$M$$ has three choices. Either
1. $$x\in A$$ and $$x\notin B$$,
2. $$x\notin A$$ and $$x\in B$$, or
3. $$x\notin A$$ and $$x\notin B$$.
For each of these $$n$$ elements, you choose one of these three options, so there are $$3^n$$ possible pairs.
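A brute-force check of both counts for small $n$ (a sketch in Python that enumerates ordered and unordered pairs of disjoint subsets):

from itertools import combinations

def all_subsets(M):
    # Every subset of M, as Python sets.
    return [set(c) for r in range(len(M) + 1) for c in combinations(M, r)]

for n in range(6):
    subs = all_subsets(range(n))
    ordered = sum(1 for A in subs for B in subs if not (A & B))
    unordered = sum(1 for i, A in enumerate(subs) for B in subs[i:] if not (A & B))
    # ordered should equal 3**n, unordered should equal (3**n + 1) // 2
    print(n, ordered, 3**n, unordered, (3**n + 1) // 2)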
• This reflects nicely with the fact that there are $2^n$ subsets of $\{1,\cdots,n\}$ and $(2^n)^2 = (2^{2n}) = (2^2)^n=4^n$ pairs of subsets of $\{1,\cdots,n\}$! – Patrick Da Silva Jan 20 at 20:53
• In my experience the default interpretation of pairs is unordered pairs. – Brian M. Scott Jan 20 at 20:54 | 2021-03-05T13:25:20 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3993323/number-of-pairs-of-subsets-that-have-no-elements-in-common",
"openwebmath_score": 0.9166311025619507,
"openwebmath_perplexity": 201.95155218792166,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9740426465697488,
"lm_q2_score": 0.8615382094310357,
"lm_q1q2_score": 0.8391749576351685
} |
http://farside.ph.utexas.edu/teaching/329/lectures/node50.html | ## Validation of numerical solutions
Before proceeding with our investigation, we must first convince ourselves that our numerical solutions are valid. Now, the usual method of validating a numerical solution is to look for some special limits of the input parameters for which analytic solutions are available, and then to test the numerical solution in one of these limits against the associated analytic solution.
One special limit of Eqs. (81) and (82) occurs when there is no viscous damping (i.e., ) and no external driving (i.e., ). In this case, we expect the normalized energy of the pendulum
(94)
to be a constant of the motion. Note that is defined such that the energy is zero when the pendulum is in its stable equilibrium state (i.e., at rest, pointing vertically downwards). Figure 26 shows versus time, calculated numerically for an undamped, undriven, pendulum. Curves are plotted for various values of the parameter Nacc, which, in this special case, measures the number of time-steps taken by the integrator per (low amplitude) natural period of oscillation of the pendulum. It can be seen that for there is a strong spurious loss of energy, due to truncation error in the numerical integration scheme, which eventually drains all energy from the pendulum after about 2000 oscillations. For , the spurious energy loss is less severe, but, nevertheless, still causes a more than 50% reduction in pendulum energy after 10,000 oscillations. For , the reduction in energy after 10,000 oscillations is only about 1%. Finally, for , the reduction in energy after 10,000 oscillation is completely negligible. This test seems to indicate that when our numerical solution describes the pendulum's motion to a high degree of precision for at least 10,000 oscillations.
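As an illustration of this kind of validation (a sketch in Python, not the code used for the figures; it assumes the standard undamped, undriven pendulum equations theta' = v, v' = -sin(theta) and the normalized energy E = v^2/2 + 1 - cos(theta)), one can integrate with a fixed-step scheme and monitor the relative energy drift:

import math

def rk4_step(theta, v, dt):
    # One RK4 step for theta' = v, v' = -sin(theta) (undamped, undriven pendulum).
    f = lambda th, vv: (vv, -math.sin(th))
    k1 = f(theta, v)
    k2 = f(theta + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = f(theta + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = f(theta + dt * k3[0], v + dt * k3[1])
    theta += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return theta, v

def energy(theta, v):
    # Normalized energy, zero at the stable equilibrium (at rest, hanging down).
    return 0.5 * v * v + 1.0 - math.cos(theta)

theta, v = 1.0, 0.0                # start from rest at theta = 1 rad
E0 = energy(theta, v)
steps_per_period = 100             # plays the role of Nacc in the discussion above
dt = 2.0 * math.pi / steps_per_period
for _ in range(1000 * steps_per_period):
    theta, v = rk4_step(theta, v, dt)
print("relative energy drift after 1000 periods:", abs(energy(theta, v) - E0) / E0)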
Another special limit of Eqs. (81) and (82) occurs when these equations are linearized to give Eqs. (84) and (85). In this case, we expect
(95)
to be a constant of the motion, after all transients have died away (see Sect. 4.2). Figure 27 shows versus time, calculated numerically, for a linearized, damped, periodically driven, pendulum. Curves are plotted for various values of the parameter Nacc, which measures the number of time-steps taken by the integrator per period of oscillation of the external drive. As Nacc increases, it can be seen that the amplitude of the spurious oscillations in , which are due to truncation error in the numerical integration scheme, decreases rapidly. Indeed, for these oscillations become effectively undetectable. According to the analysis in Sect. 4.2, the parameter should take the value
(96)
Thus, for the case in hand (i.e., ), we expect . It can be seen that this prediction is borne out very accurately in Fig. 27. The above test essentially confirms our previous conclusion that when our numerical solution matches pendulum's actual motion to a high degree of accuracy for many thousands of oscillation periods.
Richard Fitzpatrick 2006-03-29 | 2021-04-18T02:26:51 | {
"domain": "utexas.edu",
"url": "http://farside.ph.utexas.edu/teaching/329/lectures/node50.html",
"openwebmath_score": 0.8608166575431824,
"openwebmath_perplexity": 649.6636441810219,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426435557122,
"lm_q2_score": 0.861538211208597,
"lm_q1q2_score": 0.8391749567698813
} |
https://www.physicsforums.com/threads/amperes-law-in-differential-form.812252/ | # Homework Help: Ampere's law in differential form
1. May 5, 2015
### roam
1. The problem statement, all variables and given/known data
A long cylindrical wire of radius R0 lies in the z-axis and carries a current density given by:
$j(r)= j_0 \left( \frac{r}{R_0} \right)^2 \ \hat{z} \ for \ r< R_0$
$j(r) = 0 \ elsewhere$
Use the differential form of Ampere's law to calculate the magnetic field B inside and outside the wire.
2. Relevant equations
Differential form of Ampere's law: $\nabla \times B = \mu_0 J$
Curl in cylindrical coordinates:
$\nabla \times B = [\frac{1}{r}\frac{\partial B_z}{\partial \phi} - \frac{\partial B_\phi}{\partial z}] \hat{r} + [\frac{\partial B_r}{\partial z} - \frac{\partial B_z}{\partial r}] \hat{\phi} + \frac{1}{r} [\frac{\partial}{\partial r} (r B_\phi)-\frac{\partial B_r}{\partial \phi}] \hat{z}$
3. The attempt at a solution
Could anyone please explain, in the equation above for curl in cylindrical coordinates, which derivative can be non-zero in this case?
If I take this to be the one involving differentiating φ-component with respect to r, then the answer I get for Bin seems to be correct, but Bout is wrong:
$\frac{1}{r} \frac{\partial}{\partial r} (r B_{\phi}) = \mu_0 j_0 (\frac{r}{R_0})^2 \implies B_{in}= \frac{\mu_0 j_0 r^3}{4 R_0^2}$
$\frac{1}{r} \frac{\partial}{\partial r} (r B_{\phi}) = \mu_0 (0) \implies B_{out} = \frac{C}{r}$
What should I do here?
P.S. I am checking my answers by comparing them to the ones I've obtained using the integral form of Ampere's law:
$I_{enc, in} = \int^r_0 \frac{j_0 r^2}{R_0^2} . 2 \pi r dr =\frac{j_0 2 \pi r^4}{4 R_0^2}, \ I_{enc, out} = \frac{j_0 2 \pi R_0^2}{4}$
$\therefore \oint B_{in} \cdot dl =B_{in} 2\pi r= \frac{\mu_0 j_0 2 \pi r^4}{4 R_0^2} \implies B_{in}=\frac{\mu_0 j_0 r^3}{4 R_0^2}, \ B_{out}=\frac{\mu_0 j_0 R_0^2}{4 r}$
Any help would be appreciated.
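A quick numerical cross-check of the enclosed-current integral above (a sketch in Python; it evaluates the integral of j0*(s/R0)^2 * 2*pi*s from 0 to r with a simple midpoint rule, using arbitrary values for j0 and R0, and compares against the closed form):

import math

j0, R0 = 1.0, 1.0   # arbitrary units, just for the check

def I_enc(r, n=10_000):
    # Midpoint rule for the enclosed current inside radius r.
    h = r / n
    return sum(j0 * ((k + 0.5) * h / R0) ** 2 * 2.0 * math.pi * (k + 0.5) * h * h for k in range(n))

for r in (0.3, 0.7, 1.0):
    closed_form = j0 * 2.0 * math.pi * r**4 / (4.0 * R0**2)
    print(r, I_enc(r), closed_form)   # numerical and closed-form values should agree closely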
2. May 5, 2015
### ELB27
You don't have a mistake, you just didn't finish your derivation. Namely, the constant $C$ in your expression for $B_{out}$ is yet to be determined. To determine it, you need some boundary condition to apply to $B_{out}$. Do you know what it is?
As for the comparison with the integral form, one simply has to note that $\frac{\mu_0 j_0R_0^2}{4} = const. = C$ where the last equality needs to be proven by the above method.
3. May 6, 2015
### roam
That makes sense now. But what sort of boundary conditions do I need to apply to $B_{out}$ in order to get the right $C$? I really have no idea how to solve for $C$. Any explanation would be helpful.
4. May 7, 2015
### ELB27
Well, there are two boundaries in the outer region - the surface of the cylinder (which is the boundary between $\vec{B_{in}}$ and $\vec{B_{out}}$) and $r=\infty$. Concerning the second boundary, one would expect the magnetic field to go to zero at infinity since these points are infinitely far from any magnetic field source (currents). This condition ($\vec{B_{out}}→0$ as $r→\infty$) is already taken care of by the inverse-$r$ relationship that you derived ($\vec{B_{out}} = C/r$). Now, concerning the first boundary: how would you expect the fields $\vec{B_{in}}$ and $\vec{B_{out}}$ to be related in their mutual boundary (in the absence of surface current which is not present in this problem)? i.e., in the place where both $\vec{B_{in}}$ and $\vec{B_{out}}$ exist, how are they related to each other?
You have to remember that both $\vec{B_{in}}$ and $\vec{B_{out}}$ are part of the same function $\vec{B}$. Can the magnetic field change abruptly from one value to another at such a boundary?
Last edited: May 7, 2015
5. May 7, 2015
### roam
Thank you so much for the explanation. It makes perfect sense now. I got the right constant:
$B_{out} (R_0) = \frac{C}{R_0} = \frac{\mu_0 j_0 R_0^3}{4 R_0^2} = B_{in} (R_0) \implies C= \frac{\mu_0 j_0 R_0^2}{4}$
So, the other question I was struggling with was how do we decide which of the derivatives in the expression for $\nabla \times B$ must be non-zero.
So in cylindrical polar coordinates the curl was:
$\nabla \times B = [\frac{1}{r}\frac{\partial B_z}{\partial \phi} - \frac{\partial B_\phi}{\partial z}] \hat{r} + [\frac{\partial B_r}{\partial z} - \frac{\partial B_z}{\partial r}] \hat{\phi} + \frac{1}{r} [\frac{\partial}{\partial r} (r B_\phi)-\frac{\partial B_r}{\partial \phi}] \hat{z}$
How do we know that only the $\partial B_{\phi} / \partial r$ term must be non-zero? Any explanations or links is greatly appreciated.
6. May 7, 2015
### ELB27
First of all, the $\hat{r}$ and $\hat{\phi}$ components of the curl are zero because $\nabla\times\vec{B} = \mu_0\vec{J}$ and $J_{r}=J_{\phi}=0$. Concerning the $\hat{z}$ component, there are multiple ways to see that the second term vanishes. First, a nice "trick" with cylinders with currents parallel to their axis is that you can consider them to be straight wires, and then remember that the magnetic field is swirling around them by the right hand rule. A more rigorous way is via Biot-Savart law: $$\vec{B} = \frac{\mu_0}{4\pi}\int \frac{\vec{J}\times(\vec{r}-\vec{r}')}{|\vec{r}-\vec{r}'|^3}d\tau'$$ where $\vec{r}$ is the position vector of the point at which the field is being calculated, $\vec{r}'$ is the position vector of a current point and $d\tau'=dx'dy'dz'=r'dr'd\phi'dz'$ is an infinitesimal volume element of current. If you focus on the $\vec{J}\times(\vec{r}-\vec{r}')$ part, you will find that if you integrate it $d\phi$ (i.e. sum it around the cylinder circumferentially), the $r$ component will cancel. Finally, you don't even have to know that $B_r=0$, you can consider symmetry - your configuration is completely symmetrical in $\phi$ (nothing depends on it - your current density is wholly in $z$ and the same goes for the cylinder). Thus, the field cannot possibly depend on it either and any derivative with respect to it must vanish.
Hope the above explanation makes sense!
7. May 8, 2015
### roam
Yes, it's absolutely clear to me now. Thank you so much for the wonderful explanation. I do appreciate your time and expertise. | 2018-07-23T02:40:34 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/amperes-law-in-differential-form.812252/",
"openwebmath_score": 0.8304073214530945,
"openwebmath_perplexity": 214.5702055881079,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9740426450627306,
"lm_q2_score": 0.8615382094310357,
"lm_q1q2_score": 0.8391749563368147
} |
https://tex.stackexchange.com/questions/417331/mathmode-spacing-shorter-than-quad/417338 | # mathmode spacing shorter than \quad? [duplicate]
Is there a way to get a spacing which is half or quarter the length of \quad in mathmode? Maybe there's an easy way to define a shortcut for such command?
• A quad is 1em; just use \hspace{0.5em} or \hspace{0.25em}. Indeed \quad means the same as \hspace{1em}. – egreg Feb 26 '18 at 10:02
• @egreg Thanks. You can post it as an answer, then I'l delete mien, if u want.. – user1611107 Feb 26 '18 at 10:09
• \enspace is a half quad, and it works in math mode. Another short spacing command is \,= \mspace{3 mu}=\hspace{1/6 em}. – Bernard Feb 26 '18 at 10:50
A \quad corresponds to a length of 1em. In math mode, 1em=18mu. Use \mkern<n>mu, where <n> can be either a positive or a negative number, to exert very fine control over spacing. Note: no curly braces around <n>mu.
To space ahead by half a quad while in math mode, simply write \mkern9mu.
Two macros that provide standard abbreviations for math-mode spacing directives are
\, -- \mkern3mu ("thinspace")
\! -- \mkern-3mu ("negative thinspace")
Is there a meaningful difference between a\hspace{0.5em}b and a\mkern9mu b? It usually will not make a difference for display-math material. However, it could make a difference for inline-math material. This is because TeX never discards explicit kerns (and \mkern is a kern); in contrast, \hspace could get discarded at the start and end of lines. Thus, if your document happens to have a longish inline math equation that's allowed to break across lines, using \mkern or \hspace inside the formula could make a difference. (If you wanted to allow potential line breaks while using mu-based spacing directives, don't use \mkern; use \mskip instead.)
Citing the comment from @egreg:
" A quad is 1em; just use \hspace{0.5em} or \hspace{0.25em}. Indeed \quad means the same as \hspace{1em}. – egreg "
Thanks!
Therefore, it might or might not be helpful for some (for me it is convenient):
\newcommand{\Hquad}{\hspace{0.5em}}
• Description of spacing commands: \quad gives a space equal to the current font size (= 18 mu); \, is 3/18 of \quad (= 3 mu); \: is 4/18 of \quad (= 4 mu); \; is 5/18 of \quad (= 5 mu); \! is -3/18 of \quad (= -3 mu); \ (space after backslash!) is the equivalent of a space in normal text; \qquad is twice \quad (= 36 mu). – Sebastiano Feb 26 '18 at 10:09
• @Sebastiano hmm, can you explain in simple words please? – user1611107 Feb 26 '18 at 10:12 | 2021-01-26T18:03:20 | {
"domain": "stackexchange.com",
"url": "https://tex.stackexchange.com/questions/417331/mathmode-spacing-shorter-than-quad/417338",
"openwebmath_score": 0.9934991598129272,
"openwebmath_perplexity": 5068.784477431144,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9441768612215683,
"lm_q2_score": 0.8887587949656841,
"lm_q1q2_score": 0.839145489413763
} |
http://gxcf.aavt.pw/taylor-series-approximation-calculator.html | # Taylor Series Approximation Calculator
Taylor Series Approximation Example:More terms used implies better approximation f(x) = 0. 10 of Calculus II, we will study Taylor series, which give much better, higher-order approximations to f(x). But how many terms are enough? How close will the result be to the true answer? That is the motivation for this module. Then has the characteristic property that its derivatives agree with those of the function , when both are evaluated at , up to and including the -th derivative. The series converges to sin(0. Stokes; Taylor Polynomials Approximated by Interpolations Sungkon Chang; Polynomials and Derivatives Ed Zaborowski. The taylor series expansion of f(x) with respect to xo is given by: Generalization to multivariable function: (5) Using similar method as described above, using partial derivatives this time, (Note: the procedure above does not guarantee that the infinite series converges. But with the Taylor series expansion, we have extended that result to non-linear functions of Xand Y. the linear approximation to Find the first three nonzero terms and the general term of the Taylor series. Taylor Series Approximation to Cosine. The focus of the present work is the application of the random phase approximation (RPA), derived for inhomogeneous fluids [Frydel and Ma, Phys. In this tutorial we shall derive the series expansion of $$\sqrt {1 + x}$$ by using Maclaurin's series expansion function. the Taylor expansion of 1 1−x) • the Taylor expansions of the functions ex,sinx,cosx,ln(1 + x) and range of va-lidity. Taylor series look almost identical to Maclaurin series: Note:. The one I started with was the series for arctan(x) because it was the only one I have that can get pi as an. Free practice questions for AP Calculus BC - Taylor Polynomial Approximation. After working through linear approximations in detail, you may want to pose to students the problem of approximating a function at a point with a polynomial whose value, &rst. We now take a particular case of Taylor Series, in the region near x = 0. Taylor Series Linear approximation: Linear approximation is to approximate a general function using a lin-ear function. Taylor is given credit for conceiving the concept of the calculus of finite differences, the tool of integration by parts, and of course the Taylor series representation of functions. Taylor Series. I was wondering if anybody could help me with a general rule for finding M in a Taylor's Inequality problem. Calculus with Power Series; 10. For small x the factorials in the denominator will. Sample Quizzes with Answers Search by content rather than week number. 342) = 206 ft. Remark (2): This method can be derived directly by the Taylor expansion f(x) in the neighbourhood of the root of. Using a simple example at first, we then move on to a more complicated integral where the Taylor series. We then look at Stirling's Formula, which provides an. To find the Maclaurin Series simply set your Point to zero (0). Series and integral comparison ) approximation of a or cubic, or e If the an 's do not go to 0, then E an does not converge. A Taylor series is a clever way to approximate any function as a polynomial with an infinite number of terms. Note that the Taylor Series Expansion goes on as n n n → \intfy, but in practicality we cannot go to infinity. However, if you show that an —Y 0 this doesn't tell you anything about convergence!!!!! Series comparison: if you want to show that I an converges, then come up with Ibnl's such that Ibnl > Ian I and Ibnl converges. 
Presents a way to examine in depth the polynomial approximation of a transcendental function by using graphing calculators. Calculus Definitions > Taylor Series / Maclaurin Series. I then tried putting when there is a How much RAM? The AMD cards calculator light downward and reduces screen glare to work. Fourier series and square wave approximation Fourier series is one of the most intriguing series I have met so far in mathematics. Calculates and graphs Taylor approximations. This method has application in many engineering fields. Read: Orloff class notes on this topic, TB: 2. b) Write the third-degree Taylor polynomial for f about x = 3. Drive the two Taylor Series mentioned above from the Taylor’s Theorem. Approximations of the Standard Normal Distribution B. A series expansion is a representation of a mathematical expression in terms of one of the variables, often using the derivative of the expression to compute successive terms in the series. Drek intends to pollute into my fifties my irony is sarcastic and to thin more and. For students who wish. ##e^x = \sum_{n=0}^\infty\frac{x^n}{n!} ## is the Taylor series for the. But how many terms are enough? How close will the result be to the true answer? That is the motivation for this module. However, in practice a method called the Taylor series expansion can be used for this purpose. Thread Safety The taylor command is thread-safe as of Maple 15. Taylor series are used quite often for solving ordinary differential equations, see for instance [15. 5 Two useful tricks to obtain power series expansions 2. E 93, 062112 (2016)], to penetrable-spheres. The repository shows Calculator's surprisingly long history. The key idea is to use a series of increasing powers to express complicated yet well-behaved (infinitely differentiable and continuous) functions. And that polynomial evaluated at a should also check it out have spent a lot of time in this chapter calculating Taylor polynomials and Taylor Series. 10 of Calculus II, we will study Taylor series, which give much better, higher-order approximations to f(x). Module 26 - Activities for Calculus Using the TI-89 Lesson 26. We flrst consider Taylor series expansion. • devise finite difference approximations meeting specifica tions on order of accuracy Relevant self-assessment exercises:1-5 47 Finite Difference Approximations Recall from Chapters 1 - 4 how the multi-step methods we developed for ODEs are based on a truncated Tay-lor series approximation for ∂U ∂t. One way to improve it is to use. 001 , x = 0. The approximation of the exponential function by polynomial using Taylor's or Maclaurin's formula Properties of the power series expansion of the exponential function: Taylor's theorem (Taylor's formula) - The extended mean value theorem. All it does is make the Taylor Polynomials more accurate close to a. In that sense, we are just working with a better version of linear approximation - we could call this polynomial approximation! The Taylor and Maclaurin polynomials are "cooked up" so that their value and the value of their derivatives equals the value of the related function at. Part 10 Taylor Series Extension of the Second Derivative Test. (a)Find the Taylor Series directly (using the formula for Taylor Series) for f(x) = ln(x+1), centered at a= 0. Math 133 Taylor Series Stewart x11. Taylor Polynomials Preview. Stirling in 1730 who gives the asymptotic formula after some work in collaboration with De Moivre, then Euler in 1751 and finally C. the approximation heads to negative. 
Computing Taylor Series Lecture Notes As we have seen, many different functions can be expressed as power series. F(t0 + ∆t) ≈ F(t0) +F′(t0)∆t. TI-83/84 PLUS BASIC MATH PROGRAMS (CALCULUS) AP Calculus Series: Root Approximation Taylor series (for Single and Multivariable Functions: Single up to 5. As a simple example, you can create the number 10 from smaller numbers: 1 + 2 + 3 + 4. The proposed two-step method, which is to some extent like the secant method, is accompanied with some numerical examples. ABSTRACT Content definition, proof of Taylor’s Theorem, nth derivative test for stationary points, Maclaurin series, basic Maclaurin series In this Note, we look at a Theorem which plays a key role in mathematical analysis and in many other areas such as numerical analysis. 3 The binomial expansion 2. I calculate dV with the formula for the Delta-Gamma approximation. 11 ) and find an. Instructions Any. Observation• A Taylor series converges rapidly near the point of expansion and slowly (or not at all) at more remote points. Download Presentation Infinite Sequences and Series An Image/Link below is provided (as is) to download presentation. with Taylor series. An approximation of a function using terms from the function's Taylor series. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. Taylor and Laurent Series We think in generalities, but we live in details. This method has application in many engineering fields. The goal of a Taylor expansion is to approximate function values. Sample Quizzes with Answers Search by content rather than week number. Evaluating Infinite Series It is possible to use Taylor series to find the sums of many different infinite series. From Wikibooks, open books for an open world < Calculus. Taylor Series Linear approximation: Linear approximation is to approximate a general function using a lin-ear function. Chapter 10 introduces L’Hopital’s Rule, improper fractions and partial fractions; Taylor polynomials and the approximation of functions using power series are the main topics in Chapter 11; Chapter 12 treats parametric equations, vector and polar coordinates with the support of technology. Derivatives Derivative Applications Limits Integrals Integral Applications Series ODE Laplace Transform Taylor/Maclaurin Series Step-by-Step Calculator. Inhomogeneous fluid of penetrable-spheres: Application of the random phase approximation. A Taylor series approximates the chosen function. Reaching wide audiences through her talk at the Technology, Entertainment, Design (TED) conference and her appearance on Oprah's online Soul Series, Taylor provides a valuable recovery guide for those touched by brain injury and an inspiring testimony that inner peace is accessible to anyone. Taylor polynomials are incredibly powerful for approximations, and Taylor series can give new ways to express functions. KEYWORDS: Course Materials, Course Notes, Labs, In class demonstrations: How Archimedes found the area of a circle, Finding areas of simple shapes, How the area changes, Lower and Upper Sums, The Fundamental Theorem of Calculus, Average value of a function, Volumes, Arc Length, Change of variables, The Trapezoidal Rule, Simpson's Rule. One way is to use the formula for the Taylor's theorem remainder and its bounds to calculate the number of terms. An investigation with the table feature of a graphing calculator, however, suggests that this is true for n ≥ 3. 
Torre Again, you can check this approximation on your calculator. Maclaurin Series. For instance, in Example 4 in Section 9. Also, references to the text are not references to the current text. Ken Bube of the University of Washington Department of Mathematics in the Spring, 2005. Graphing-calculator technology can be used to bridge this gap between the concept of an interval of convergence for a series and polynomial approximations. The variable approx stores the Taylor series approximation. This calculus 2 video tutorial explains how to find the Taylor series and the Maclaurin series of a function using a simple formula. Wolfram|Alpha can compute Taylor, Maclaurin, Laurent, Puiseux and other series expansions. In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. Finite difference equations enable you to take derivatives of any order at any point using any given sufficiently-large selection of points. And if we keep doing this-- and we're using the exact same logic that we used when we did it around 0, when we did the Maclaurin expansion-- you get the general Taylor expansion for the approximation of f of x around c to be the polynomial. falling back on Taylor series expansion when an approximation to an irrational number is required. Since p 2(x) = b 0 +b 1x+b 2x2 we impose three conditions on p. Optimized pow() approximation for Java, C / C++, and C# Posted on October 4, 2007 I have already written about approximations of e^x, log(x) and pow(a, b) in my post Optimized Exponential Functions for Java. If we let V be option value, S be stock price, and S0 be initial stock price, then the Taylor series expansion around S0 yields the following. CALCULUS 2019 BC #6 (no calculator) Title: Microsoft Word - AP2019_BC6. This variable is first initialized to 0. The free tool below will allow you to calculate the summation of an expression. with Taylor series. Animation of Taylor series convergence. 77, SN: MVT. Taylor Series approximation and non-differentiability. You can specify the order of the Taylor polynomial. If we show the graphs on the same time scale as the "All poles" approximation, you can see that the Padé works much better (less overshoot, faster convergence), though they have larger excursions from the exact response at small times. The may be used to “expand” a function into terms that are individual monomial expressions (i. Taylor series as limits of Taylor polynomials. The variable approx stores the Taylor series approximation. Wolfram|Alpha can compute Taylor, Maclaurin, Laurent, Puiseux and other series expansions. 1 Introduction This chapter has several important and challenging goals. Multivariable Taylor polynomial example by Duane Q. Series approximation graphics. Let’s answer the second question first. But how many terms are enough? How close will the result be to the true answer? That is the motivation for this module. Find the Taylor series expansion for e x when x is zero, and determine its radius of convergence. Taylor Approximation and the Delta Method is based on using a Taylor series approxi- back as beginning calculus, the major theorem from Taylor is that the. Taylor Polynomials Preview. Maclaurin/Taylor Series: Approximate a Definite Integral to a Desired Accuracy. Part 02 Area: Approximation. In this tutorial we shall derive the series expansion of $$\sqrt {1 + x}$$ by using Maclaurin's series expansion function. 
1 shows these points connected by line segments (the lower curve) compared to a solution obtained by a much better approximation technique. Taylor series are extremely powerful tools for approximating functions that can be difficult to compute otherwise, as well as evaluating infinite sums and integrals by recognizing Taylor series. I then tried putting when there is a How much RAM? The AMD cards calculator light downward and reduces screen glare to work. To find the Maclaurin Series simply set your Point to zero (0). 2 and apply the small angle approximation for sin(x). I undertook to illustrate it in GeoGebra. Partial sums. Compare logarithmic, linear, quadratic, and exponential functions. But with the Taylor series expansion, we have extended that result to non-linear functions of Xand Y. In the cases where series cannot be reduced to a closed form expression an approximate answer could be obtained using definite integral calculator. ” Indeed, it plays a very important part in calculus as well as in computation, statistics, and econometrics. In the last section, we learned about Taylor Series, where we found an approximating polynomial for a particular function in the region near some value x = a. Fourier Series Calculator is a Fourier Series on line utility, simply enter your function if piecewise, introduces each of the parts and calculates the Fourier coefficients may also represent up to 20 coefficients. Either way, the approximation will be more accurate along a certain interval of convergence. Take the center aclose to x, giving small (x a) and tiny (x a)n. Socratic Meta Featured Answers How do you find the third degree Taylor polynomial for #f(x)= ln x#, centered at a=2? centered at a=2? Calculus Power Series. Author: Ying Lin. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. Finally, a basic result on the completeness of polynomial approximation is stated. Series Calculator computes sum of a series over the given interval. An Easy Way to Remember the Taylor Series Expansion. In some cases, such as heat transfer, differential analysis results in an equation that fits the form of a Taylor series. Add to it whatever you like -- a navigation section, a link to your favorite web sites, or anything else. One way is to use the formula for the Taylor's theorem remainder and its bounds to calculate the number of terms. Taylor Series Text. Precise and straightforward analytic approximations for the Bessel function J 1 (x) have been found. For instance Jacobian for first order, Hessian for second order partial derivatives. We substitute this value of in the above MacLaurin series: We can also get the MacLaurin series of by replacing to :. Using Taylor polynomials to approximate functions. The Taylor Polynomials gradually converge to the Taylor Series which is a representation of the original function in some interval of convergence. around the points x = x0, y = y0 etc. Series and integral comparison ) approximation of a or cubic, or e If the an 's do not go to 0, then E an does not converge. The proposed two-step method, which is to some extent like the secant method, is accompanied with some numerical examples. 5 Taylor Polynomials and Taylor Series Motivating Questions. Taylor Polynomial Calculator. Taylor_series_expansion online. Taylor and MacLaurin Series 4. We can pick any a we like and then approximate a function f for values of x near that a. 
Ken Bube of the University of Washington Department of Mathematics in the Spring, 2005. We now take a particular case of Taylor Series, in the region near x = 0. Taylor Series. Taylor's series is an essential theoretical tool in computational science and approximation. Select a Web Site. It's a worse approximation than, say, the 2nd- or 3rd-order approximation, but it's easier to work with if accuracy isn't that important. Linear approximation is one of the simplest approximations to transcendental functions that cannot be expressed algebraically. Includes full solutions and score reporting. This is an alternating series that converges by the alternating series test. The rule number (e. In this paper, we are interested in the discretisation of PDE's by the method of Taylor series. ) The MATLAB command for a Taylor polynomial is taylor(f,n+1,a), where f is the. Taylor series are polynomials that approximate functions. Approximations in AP Calculus Taylor series (BC only) would the midpoint approximation be too small or too large? vt(). Rather than stop at a linear function as an approximation, we let the degree of our approximation increase (provided the necessary derivatives exist), until we have an approximation of the form. 1 - Activity 1 - Infinite Series - Fractals Lesson 26. Given a differentiable scalar func-tion f(x) of the real variable x, the linear ap-proximation of the function at point a, as shown in the Figure below, is obtained by f(x) ≈ f(a) + f′(a)(x− a) where f′(a) = df(x) dx. For instance, we know that sin0 = 0, but what is sin0. Clearly, this is close to p 1 = 1, but we want better. 2017-05-01. 1 Taylor Polynomials Taylor Polynomials Taylor Polynomials The nth Taylor polynomial at 0 for a function f is P n(x) = f(0)+f0(0)x+ f00(0) 2! x2 +···+ f(n)(0) n! xn; P n is the polynomial that has the same value as f at 0 and the same first n. Following this, you will be able to set up and solve the matrix equation Ax = b where A is a square nonsingular matrix. Thus,jxjisbiggest when x is as far from 0. The properties of Taylor series make them especially useful when doing calculus. Taylor and Laurent Series We think in generalities, but we live in details. 10 Taylor and Maclaurin Series 677 If you know the pattern for the coefficients of the Taylor polynomials for a function, you can extend the pattern easily to form the corresponding Taylor series. Plots of the first terms of the Taylor series of along the real axis. 17 The Method of Iteration for System of Non-Linear Equations 111. Read: Orloff class notes on this topic, TB: 2. Maclaurin Series. Reaching wide audiences through her talk at the Technology, Entertainment, Design (TED) conference and her appearance on Oprah's online Soul Series, Taylor provides a valuable recovery guide for those touched by brain injury and an inspiring testimony that inner peace is accessible to anyone. Following this, you will be able to set up and solve the matrix equation Ax = b where A is a square nonsingular matrix. Series and integral comparison ) approximation of a or cubic, or e If the an 's do not go to 0, then E an does not converge. Taylor Series & Polynomials MC Review (Calculator Permitted) The Taylor series for ln x, What is the approximation of the value of sin1 obtained by using the. 2 as an approximation it’s the exact value that should be there. As increases, the curves vary from red to violet. 
Hence, cos(q)=1 2sin2 q 2 ˇ1 2 q 2 2 =1 q2 2: More formally, the trigonometric functions can be expressed using their Taylor Series approxi-mations (Taylor Series are part of the Further Mathematics A-Level course). Taylor series approximations do not distribute the approximation. We define a module Coeff which declares the order n of the approximation; the maximal order allowed n_max=10 and a vector b of coefficients for the Taylor polynomial. A Taylor series approximates the chosen function. A Taylor series is a numerical method of representing a given function. Sample Questions with Answers The curriculum changes over the years, so the following old sample quizzes and exams may differ in content and sequence. the linear approximation to Find the first three nonzero terms and the general term of the Taylor series. 10 of Calculus II, we will study Taylor series, which give much better, higher-order approximations to f(x). Linear approximation is one of the simplest approximations to transcendental functions that cannot be expressed algebraically. With modern calculators and computing software it may not appear necessary to use linear approximations. However, it is often limited by its interval of convergence, whereas actual values of the function may lie outside that interval, so it is important to evaluate a function with a series of power within the interval of convergence. Presents a way to examine in depth the polynomial approximation of a transcendental function by using graphing calculators. While I appreciate the elegance of your solution and the intellectual curiosity of such an endeavor, given that PI to the 57th decimal place can ascribe a circle around the entire known universe with an inaccuracy of less than a millionth of an inch, what practical purpose is served by calculating PI to a 1000 or more decimal places?. Every Taylor series provides the exact value of a function for all […]. • Linear approximation in one variable: Take the constant and linear terms from the Taylor series. The Taylor series of a particular function is an approximation of the function about a point ( a ) represented by a series expansion composed of. It is a series that is used to create an estimate (guess) of what a function looks like. For analytic functions the Taylor polynomials at a given point are finite-order truncations of its Taylor series , which completely determines the function in some neighborhood of the point. Taylor Series; 11. List of Maclaurin Series of Some Common Functions / Stevens Institute of Technology / MA 123: Calculus IIA / List of Maclaurin Series of Some Common Functions / 9 | Sequences and Series. Note that the shape is approximately correct even though the end points are quite far apart. whose graph is the tangent line (Calculus I x2. In doing this, the Derivative Calculator has to respect the order of operations. home > topics > c / c++ > questions > help with sine taylor series obtained from the Taylor series with n terms. Review the logic needed to understand calculus theorems and definitions. Byju's Linear Approximation Calculator is a tool which makes calculations very simple and interesting. You have seen that a good strategy for working with infinite sums is to use a partial sum as an approximation, and to try to get a bound on the size of the remainder. Calculus Homework # 6 0 5e-05 0. Thus, The Remainder Term is z is a number between x and 3. In fact, the two cornerstone theorems of this section are that any power series represents a holomorphic. 
Taylor Approximation of a Multivariate Function Description Calculate the Taylor approximation of a specified degree for a multivariate function. Taylor's Theorem Suppose we're working with a function $f(x)$ that is continuous and has $n+1$ continuous derivatives on an interval about $x=0$. 2 Taylor Series A Taylor series is a power series that allows us to approximate a function that has certain properties. Find the first 4 terms of the Taylor series for the following functions: (a) ln x centered at a=1, (b) 1 x centered at a=1, (c) sinx centered at a =. The calculator will find the Taylor (or power) series expansion of the given function around the given point, with steps shown. 10 Iteration Method—(Successive Approximation Method) 94 3. around the points x = x0, y = y0 etc. Approximations in AP Calculus Taylor series (BC only) would the midpoint approximation be too small or too large? vt(). The properties of Taylor series make them especially useful when doing calculus. In other words, you're creating a function with lots of other smaller functions. As you increase the degree of the Taylor polynomial of a function, the approximation of the function by its Taylor polynomial becomes more and more accurate. 1 we approximated the derivative of ve-. Taylor Polynomials. 13 Convergence of Iteration Method 96 3. The big idea of this module is that the Taylor series can be thought of as an operator (a machine) which turns a function into a series. Taylor Polynomials Motivation Derivation Examples Taylor Series Definition Power Series and the Convergence Issue Famous Taylor Series New Taylor Series from Old 21. 2 Approximation by series 2. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4. Description : The online taylor series calculator helps determine the Taylor expansion of a function at a point. 2 - Activity 2 - Piecewise Functions, Continuity, and Differentiability. So let us determine the interval of convergence for the Maclaurin series representation. In this case an. 2 are satisfied on the open rectangle R defined by −∞ t ∞, −∞ y. If you are familiar with and got another card, or high CPU. Taylor Series centered at x = a Let f be a function with derivatives of all orders on an interval containing x = a. The Taylor series for a function is often useful in physical situations to approximate the value of the function near the expansion point x 0. For instance Jacobian for first order, Hessian for second order partial derivatives. 2d taylor series. It's a worse approximation than, say, the 2nd- or 3rd-order approximation, but it's easier to work with if accuracy isn't that important. 2 Approximation by series 2. You can then use this formula to make predictions, and also to find repeating patterns within your data. Find the Taylor series expansion for e x when x is zero, and determine its radius of convergence. This is an alternating series that converges by the alternating series test. Video created by University of Pennsylvania for the course "Calculus: Single Variable Part 1 - Functions". See how to approximate a definite integral to a desired accuracy using Maclaurin/Taylor series and the alternating series estimation theorem with this free video calculus lesson. The brute force method, which is very fast by the way, (up to 600 terms) checked favorably with your computations. Taylor Polynomials. Why Taylor series matter. Also, references to the text are not references to the current text. Start with 1/(1+w) = 1 - w. 
The repository shows Calculator's surprisingly long history. TAYLOR AND MACLAURIN SERIES 3 Note that cos(x) is an even function in the sense that cos( x) = cos(x) and this is re ected in its power series expansion that involves only even powers of x. Download Presentation Infinite Sequences and Series An Image/Link below is provided (as is) to download presentation. Differential Calculus cuts something into small pieces to find how it changes. Taylor Series Expansions A Taylor series expansion of a continuous function is a polynomial approximation of. Thus,jxjisbiggest when x is as far from 0. 1 Introduction This chapter has several important and challenging goals. You can think of a power series as a polynomial with infinitely many terms (Taylor polynomial). Since the Taylor approximation becomes more accurate as more terms are The Maclaurin series is just a Taylor series centered at a = 0. So how many terms should I use in getting a certain pre-determined accuracy in a Taylor series. Maclaurin & Taylor polynomials & series 1. We can derive Taylor Polynomials and Taylor Series for one function from another in a variety of ways. This script lets you input (almost) any function, provided that it can be represented using Sympy and output the Taylor series of that function up to the nth term centred at x0. Instructions: 1. Taylor Series Generalize Tangent Lines as Approximation. So, I'm trying to create a program that calculates cos(x) by using a Taylor approximation. Jason Starr. As an numerical illustration, we. Taylor series and Polynomials. 1 Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Introduction to Taylor's theorem for multivariable functions by Duane Q. The red line is cos(x), the blue is the approximation (try plotting it yourself) :. 10 Taylor and Maclaurin Series 677 If you know the pattern for the coefficients of the Taylor polynomials for a function, you can extend the pattern easily to form the corresponding Taylor series. Work the following on notebook paper. Processing units which calculate mathematical functions, such as trigonometric functions, are used for various computers, including supercomputers. In this tutorial we shall derive the series expansion of $$\sqrt {1 + x}$$ by using Maclaurin's series expansion function. Approximation Single, Dual and color consistency from happens at random times. Don't worry, though. 7, you found the fourth Taylor polynomial for centered at 1, to be. 10 of Calculus II, we will study Taylor series, which give much better, higher-order approximations to f(x). DA: 38 PA: 80 MOZ Rank: 94. The crudest approximation was just a constant. Stokes; Taylor Polynomials Approximated by Interpolations Sungkon Chang; Polynomials and Derivatives Ed Zaborowski. The first-order Taylor series approximation of the change in the value of an option is given by ∆C ≈ δ · ∆X. This module gets at the heart of the entire course: the Taylor series, which provides an approximation to a function as a series, or "long. 1)Function = Life 2)Function. For students who wish. edu:1275 comp. 13 Convergence of Iteration Method 96 3. Find the first 4 terms of the Taylor series for the following functions: (a) ln x centered at a=1, (b) 1 x centered at a=1, (c) sinx centered at a =. TAYLOR POLYNOMIALS AND TAYLOR SERIES The following notes are based in part on material developed by Dr. Math 133 Taylor Series Stewart x11. 
Our aim is to find a polynomial that gives us a good approximation to some function. Either way, the approximation will be more accurate along a certain interval of convergence. • Linear approximation in one variable: Take the constant and linear terms from the Taylor series. (a)Find the Taylor Series directly (using the formula for Taylor Series) for f(x) = ln(x+1), centered at a= 0. Jason Starr. Hence, cos(q)=1 2sin2 q 2 ˇ1 2 q 2 2 =1 q2 2: More formally, the trigonometric functions can be expressed using their Taylor Series approxi-mations (Taylor Series are part of the Further Mathematics A-Level course). In particular, this is true in areas where the classical definitions of functions break down. The taylor series calculator allows to calculate the Taylor expansion of a function. Remember, a Taylor series for a function f, with center c, is: Taylor series are wonderful tools. Finite difference equations enable you to take derivatives of any order at any point using any given sufficiently-large selection of points. 9: Approximation of Functions by. The Derivative Calculator has to detect these cases and insert the multiplication sign. This chapter is principally about two things: Taylor polynomials and Taylor series. | 2019-11-19T11:07:28 | {
"domain": "aavt.pw",
"url": "http://gxcf.aavt.pw/taylor-series-approximation-calculator.html",
"openwebmath_score": 0.8676291704177856,
"openwebmath_perplexity": 507.36004379028805,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9943580923359432,
"lm_q2_score": 0.8438951104066293,
"lm_q1q2_score": 0.8391339321155661
} |
https://flyingcoloursmaths.co.uk/two-timing/ | The square root of 2 is 1.41421356237… Multiply this successively by 1, by 2, by 3, and so on, writing down each result without its fractional part: 1 2 4 5 7 8 9 11 12... Beneath this, make a list of the numbers that are missing from the first sequence: 3 6 10 13 17 20 23 27 30...
The difference between the upper and lower numbers in these pairs is 2, 4, 6, 8, …
• From Roland Sprague, Recreations in Mathematics, 1963. (via Futility Closet)
At first glance, this looks like witchcraft - but the proof is one of the nicest mathematical things I’ve seen recently, so I thought I’d share it.
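Before the proof, here's a quick Python sketch (an editorial addition, not part of the original post) that reproduces the construction exactly - `isqrt(2*k*k)` is exactly $\lfloor k\sqrt{2}\rfloor$, so no floating-point fuzz is involved:

```python
from math import isqrt

N = 60
first = []
k = 1
while True:
    v = isqrt(2 * k * k)        # exactly floor(k * sqrt(2))
    if v > N:
        break
    first.append(v)
    k += 1

missing = [n for n in range(1, N + 1) if n not in first]

print(first[:9])                                      # [1, 2, 4, 5, 7, 8, 9, 11, 12]
print(missing[:9])                                    # [3, 6, 10, 13, 17, 20, 23, 27, 30]
print([m - f for f, m in zip(first, missing)][:8])    # [2, 4, 6, 8, 10, 12, 14, 16]
```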
### Preamble
Before we go anywhere, let’s just be clear about what we’re working with, and what we’re trying to prove. With a little sleight of hand, we’re going to show that every positive integer is in exactly one of the sequences shown.
The first sequence is $\lfloor k \sqrt{2} \rfloor$, for $k = 1, 2, 3, …$.
The second is $\lfloor m \left( 2 + \sqrt{2}\right)\rfloor$, for $m = 1, 2, 3, …$.
To show that every positive integer is in exactly one of the sequences, we need to show two things: that no number is in both sequences, and that every number is in at least one sequence. The plan is to do that by contradiction: we suppose the opposite in each case, and show that it doesn’t work.
### To show: no number is in both sequences
To show no number is in both sequences, we assume there is a number - call it $j$ - that is in both sequences, and show that something goes wrong.
We have that $j = \lfloor k \sqrt{2} \rfloor$ for some integer $k$, and $j = \lfloor m \left( 2 + \sqrt{2}\right)\rfloor$ for some integer $m$.
Equivalently, we have $j \lt k\sqrt{2} \lt j+1$ and $j \lt m\left(2 + \sqrt{2}\right) \lt j+1$. ((Note that the inequalities are strict, because both $\sqrt{2}$ and $2+\sqrt{2}$ are irrational.))
Divide both inequalities by the multiplier in the middle:
$\frac{j}{\sqrt{2}} \lt k \lt \frac{j+1}{\sqrt{2}}$ and $\frac{j}{2+\sqrt{2}} \lt m \lt \frac{j+1}{2+\sqrt{2}}$.
Add these together, and we have:
$j \left(\frac{1}{\sqrt{2}} + \frac{1}{2+\sqrt{2}}\right) \lt k + m \lt \left(j+1\right)\left(\frac{1}{\sqrt{2}} + \frac{1}{2+\sqrt{2}}\right)$.
No, wait up, it’s ok! $\frac{1}{\sqrt{2}} + \frac{1}{2+\sqrt{2}} = \frac{(2+\sqrt{2})+\sqrt{2}}{2+2\sqrt{2}} = 1$.
So, $j \lt k+m \lt j+1$ - which can't possibly be true, because there's no integer between $j$ and $j+1$. We've hit a contradiction, so our assumption - that there's a number in both lists - is wrong; therefore, no number is in both sequences.
### Aside
Oh yeah, that thing about $\frac{1}{\sqrt{2}} + \frac{1}{2+\sqrt{2}} = 1$? That's the nugget of the proof, and what makes the whole thing work. In fact, if $a$ is irrational and $\frac{1}{a} + \frac{1}{b} = 1$, then $\lfloor ka \rfloor$ for $k=1,2,3,…$ and $\lfloor mb \rfloor$ for $m = 1,2,3,…$ together contain every positive integer exactly once - these are Beatty sequences, and can be used to win Wythoff's game.
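And a numerical spot-check of that general statement (editorial addition; the golden ratio $\varphi$ and the cut-off $N$ are my arbitrary choices) - this particular pair is exactly the one behind Wythoff's game:

```python
from math import floor, sqrt

phi = (1 + sqrt(5)) / 2          # 1/phi + 1/phi**2 = 1
N = 2000
A = {floor(k * phi) for k in range(1, N + 1)} & set(range(1, N + 1))
B = {floor(m * phi * phi) for m in range(1, N + 1)} & set(range(1, N + 1))
assert not (A & B) and (A | B) == set(range(1, N + 1))
print("floor(k*phi) and floor(m*phi^2) partition 1 ..", N)
```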
### To show: every number is in at least one sequence
Back to the proof! The next bit is similar in flavour to the last.
To show every number is in at least one sequence, we assume there is a number - call it $j$ - that is in neither sequence, and show that something goes wrong.
For $j$ to be in neither sequence, there would need to be integers $k$ and $m$ such that:
1. $k\sqrt{2} \lt j$
2. $j + 1 \lt (k+1)\sqrt{2}$
3. $m \left(2 + \sqrt{2}\right) \lt j$
4. $j+1 \lt \left(m+1\right)\left(2+\sqrt{2}\right)$.
Similarly to before, divide each inequality by the multiplier:
1. $k \lt \frac{j}{\sqrt{2}}$
2. $\frac{j+1}{\sqrt{2}} \lt k+1$
3. $m \lt \frac{j}{2+\sqrt{2}}$
4. $\frac{j+1}{2+\sqrt{2}} \lt m+1$
Adding the first and third gives $k + m \lt j$; adding the second and fourth gives $j+1 \lt k+m+2$, or $j \lt k+m+1$. Similarly to before, there’s no integer between $k+m$ and $k+m+1$, so we have a contradiction. Therefore, every number is in at least one of the sequences.
### Conclusion
Every integer is in at least one sequence; no integer is in both sequences; therefore, each number is in exactly one sequence. $\blacksquare$. | 2021-03-06T02:45:21 | {
"domain": "co.uk",
"url": "https://flyingcoloursmaths.co.uk/two-timing/",
"openwebmath_score": 0.7463638186454773,
"openwebmath_perplexity": 228.74750011025208,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9919380099017024,
"lm_q2_score": 0.8459424411924673,
"lm_q1q2_score": 0.839122461607844
} |
https://www.physicsforums.com/threads/converge-pointwise-with-full-fourier-series.798549/ | # Converge pointwise with full Fourier series
Tags:
1. Feb 18, 2015
### A.Magnus
I am working on a simple PDE problem on full Fourier series like this:
Given this piecewise function,
$f(x) = \begin{cases} e^x, &-1 \leq x \leq 0 \\ mx + b, &0 \leq x \leq 1.\\ \end{cases}$
Without computing any Fourier coefficients, find any values of $m$ and $b$, if there is any, that will make $f(x)$ converge pointwise on $-1 < x < 1$ with its full Fourier series.
I know for sure that if $f(x)$ is to converge pointwise with its full Fourier series, then $f(x)$ has to be piecewise smooth, meaning that each piece of $f(x)$ has to be differentiable.
(a) Is this the right way to go?
(b) If it is, how do you prove $e^x$ and $mx + b$ differentiable? By proving $f'(c) = \lim_{x \to c}\frac{f(x) - f(c)}{x - c}$ exists?
2. Feb 18, 2015
### Svein
1. f(x) has to be continuous at x = 0.
2. f'(x) has to be continuous at x = 0.
3. Feb 18, 2015
### A.Magnus
I think I am confused with the word "piecewise smooth." I had always thought it means "smooth piece by piece," meaning that $f(x) = e^x$ is smooth individually and then the next $f(x) = mx +b$ is smooth individually also. But your response implies that both parts of $f(x)$ have to be smooth as one big piece. So I am wrong on this? Let me know and thank you!
4. Feb 18, 2015
### Dick
No, you are right. That means there are no conditions on m and b. Notice there is a difference between saying "the series converges pointwise" and "the series converges pointwise to f(x)". If it's the latter you have a condition.
5. Feb 18, 2015
### A.Magnus
What do you mean by "there are no conditions on $m$ and $b$"? Thanks. [Nice to see you again! See, I had to tend one course after another! :-) ]
6. Feb 18, 2015
### Dick
I mean that it's piecewise smooth no matter what m and b are. Nice to see you!
7. Feb 18, 2015
### A.Magnus
Thanks! I think it means $m, b$ are good for any real numbers. You are always omniscient from A to Z, omnipresent, and omni-helpful, if that is the right word.
8. Feb 18, 2015
### Ray Vickson
Your statement " .... $f(x)$ has to be piecewise smooth..." is false: it does not have to be piecewise smooth. It just has to obey the Dirichlet conditions; see, eg.,
http://en.wikipedia.org/wiki/Dirichlet_conditions . These do not involve smoothness or differentiability.
So, with no restrictions on $m,b$ your function's Fourier series will converge pointwise on $-1 \leq x \leq 1$, and will converge to $f(x)$ for $-1 < x < 1, x \neq 0$. For some $m,b$ it will also converge to $f(0)$ when $x = 0$, but for some other choices of $m,b$ it will converge to something else at $x = 0$ (but still converge).
9. Feb 18, 2015
### LCKurtz
@A.Magnus: I would almost bet that the original problem wants you to find m and b such that the FS converges pointwise to f(x). Otherwise there isn't much point to the problem. That would require specific values of m and b.
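As a numerical companion to the last two posts (an editorial addition, not from the thread or from Davis' text): at every $x$ in $(-1,1)$ other than $0$ the series converges to $f(x)$ regardless of $m$ and $b$, and at $x=0$ it converges to the average of the one-sided limits, $(e^0+b)/2 = (1+b)/2$. So if the problem is read as "converges pointwise to $f(x)$", the condition is presumably $b=1$, with $m$ unrestricted. The sketch below (NumPy; the grid size and number of terms are arbitrary choices) computes partial sums at $x=0$:

```python
import numpy as np

M = 40000
x = np.linspace(-1.0, 1.0, M, endpoint=False) + 1.0 / M   # midpoints of M cells, L = 1
dx = 2.0 / M

def f(x, m, b):
    return np.where(x < 0, np.exp(x), m * x + b)

def partial_sum_at_zero(m, b, N=400):
    """S_N(0) for the full Fourier series of f on [-1, 1] (midpoint quadrature)."""
    fx = f(x, m, b)
    s = np.sum(fx) * dx / 2.0                     # the a_0 / 2 term
    for n in range(1, N + 1):
        a_n = np.sum(fx * np.cos(n * np.pi * x)) * dx
        s += a_n                                  # cos(n*pi*0) = 1; sine terms vanish at 0
    return s

for m, b in [(2.0, 0.0), (-3.0, 0.5), (7.0, 1.0)]:
    print(f"m={m:5.1f}  b={b:4.1f}  S_N(0) ~ {partial_sum_at_zero(m, b):.4f}"
          f"   (1+b)/2 = {(1.0 + b) / 2.0:.4f}")
# the partial sums approach (1+b)/2, which equals f(0^-) = 1 exactly when b = 1
```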
10. Feb 18, 2015
### A.Magnus
I have uploaded the page that has the original problem 9, see the attached file. The text is "Introduction to Applied PDE" by John Davis, let me know if I got it very wrong in the first place, I will happily stand to be corrected. Also do let me know how should I go ahead if I was wrong. Thank you!
PS:The text is extremely cut and dry, on top of that this is an online class, we get only reading assignments and homework, no lectures. Never complaining, so I take this site as crowd-teaching forum!
#### Attached Files:
• J.Davis-PDE_Exercise9.pdf
11. Feb 18, 2015
### Ray Vickson
The pdf displays upside-down on my screen, and I cannot rotate it (and so cannot read it). Anyway, have you read post #8?
12. Feb 18, 2015
### A.Magnus
Yes, I did see #8, I am about to response. For the file, I will attached another one, give me just a second. Thanks, Ray!
13. Feb 18, 2015
### A.Magnus
Ray, here is the corrected file. Feel free to crowd-teach me. Thanks.
14. Feb 18, 2015
### A.Magnus
Ray, here is what I copied down verbatim from John Davis' text, page 88:
Let me know what I got wrong. Thanks again and again. | 2017-08-24T05:17:48 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/converge-pointwise-with-full-fourier-series.798549/",
"openwebmath_score": 0.819584846496582,
"openwebmath_perplexity": 883.4595366286945,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9425067147399244,
"lm_q2_score": 0.8902942275774319,
"lm_q1q2_score": 0.839108287585924
} |
http://mathhelpforum.com/algebra/156697-multiplying-fingers-also-work-bases-other-than-ten-why-print.html | # multiplying on fingers also work with bases other than ten, why
• Sep 19th 2010, 08:01 AM
misterchinnery
multiplying on fingers also work with bases other than ten, why
At school I was shown how to multiply two integers, both > 5 and < 11, using the fingers on both hands.
Each hand used to represent one of the two numbers.
It works like this e.g 7x8= 56.
On the left hand count from 6 to 7 by folding down two fingers, so that two are bent and three are not.
Then on the right hand count from 6 to 8, so that three are bent and two are not.
Multiply the number of straight fingers on the two hands: 3 x 2 = 6. This represents 6 units.
Count the number of bent fingers on both hands; this represents the number of tens: five tens, or fifty.
So the answer is fifty plus six
The question is:
How come this works if we didn't have ten fingers and we counted, for example, in base 8 and had 8 fingers? If we counted in base 8 we would have 4 fingers on each hand and could multiply numbers from 5 to 8 using this method.
• Sep 19th 2010, 08:31 AM
emakarov
Quote:
How come this works if we didnt have ten fingers and we counted for example in base 8 and had 8 fingers?
Why does it work for ten fingers? It is probably easy to generalize to other bases.
• Sep 19th 2010, 08:35 AM
Soroban
Hello, misterchinnery!
Wow! . . . a fascinating trick!
We have two sets of five fingers, each set numbered from 6 to 10.
. . $\begin{array}{ccccccccccccccccccc}
| & | & | & | & | &&& |&|&|&|&| \\
6&7&8&9&10 &&& 6&7&8&9&10 \end{array}$
To multiply $7 \times 9$:
On the left hand, bend fingers 6 and 7.
On the right hand, bend fingers 6, 7, 8 and 9.
. . $\begin{array}{ccccccccccccccccccc}
\times & \times & | & | & | &&& \times & \times & \times & \times &| \\
6&7&8&9&10 &&& 6&7&8&9&10 \end{array}$
. . . $\begin{array}{c}\text{Left hand} \\ \hline
\text{bent: 2} \\ \text{straight: 3} \end{array} \qquad\quad \begin{array}{c}\text{Right hand} \\ \hline \text{bent: 4}\\ \text{straight: 1}\end{array}$
$\begin{array}{ccccccc}\text{Ten's digit:} & \text{(bent)} + \text{(bent)} &=& 2 + 4 &=& 6 \\
\text{Unit's digit:} & \text{(straight)} \times \text{(straight)} &=& 3 \times 1 &=&3 \end{array}$
. . Therefore: . $7 \times 9 \;=\;63$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
I tried a general proof (for base-ten).
Let $\,a$ and $\,b$ be the two numbers: . $6 \,\le\, a,b\,\le\,10$
. . . $\begin{array}{c}\text{Left hand} \\ \hline
\text{bent: }a-5 \\ \text{straight: }10-a \end{array} \qquad\quad \begin{array}{c}\text{Right hand} \\ \hline \text{bent: }b-5\\ \text{straight: }10-b\end{array}$
$\begin{array}{cccccccccc}\text{Ten's digit:} & \text{(bent)} + \text{(bent)} &=&(a-5) +(b-5) &=& a + b - 10 & (T)\\
\text{Unit's digit:} & \text{(straight)} \times \text{(straight)} &=& (10-a)(10-b)&=& 100 - 10a - 10b + ab & (U) \end{array}$
The product is: . $10T + U \;=\;10(a+b-10) + (100-10a - 10b + ab)$
. . . . . . . . . . . . . . . . . . $=\; 10a + 10b - 100 + 100 - 10a - 10b + ab$
. . . . . . . . . . . . . . . . . . $=\quad ab$
Hey, it works!
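Here is a small computational companion to Soroban's algebra (an editorial addition): redo the derivation with an even base $B$ in place of ten and $B/2$ in place of five, and check it by brute force for several bases, including base 8 as in the original question.

```python
# With h = B/2 fingers per hand and h < a, b <= B:
#   "bent" = a - h, "straight" = B - a on each hand, and
#   B*(bent_L + bent_R) + straight_L * straight_R
#     = B*(a + b - B) + (B - a)*(B - b) = a*b.
# (If straight_L*straight_R >= B you carry, just like 6 x 7 in base ten:
#  bent 1 + 2 = 3 tens plus straight 4 * 3 = 12 units, i.e. 30 + 12 = 42.)

def finger_product(a, b, base):
    h = base // 2
    assert base % 2 == 0 and h < a <= base and h < b <= base
    return base * ((a - h) + (b - h)) + (base - a) * (base - b)

for base in (8, 10, 12, 16):
    h = base // 2
    assert all(finger_product(a, b, base) == a * b
               for a in range(h + 1, base + 1)
               for b in range(h + 1, base + 1))
print("identity holds in bases 8, 10, 12, 16")
```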
• Sep 19th 2010, 09:46 AM
misterchinnery
I don't know how to edit your solution so that the 10s and the 5s you use will represent the base used. Obviously I can do this by hand but I don't know the notation or type of maths which can do this.
Did you see my other post involving triangles? | 2016-10-21T16:55:42 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/algebra/156697-multiplying-fingers-also-work-bases-other-than-ten-why-print.html",
"openwebmath_score": 0.9310731291770935,
"openwebmath_perplexity": 489.59279130406327,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.969785415552379,
"lm_q2_score": 0.8652240964782011,
"lm_q1q2_score": 0.8390817099490439
} |
https://math.stackexchange.com/questions/1569392/solving-a-log-equation-for-two-variables | # Solving a log equation for two variables
Goal is to find both $\beta$ and $\omega$. I already have the answer here, but I'm confused as to how to get it.
$\log_6 250 - \log_\beta 2 = 3 \log_\beta \omega$
This is what I did:
$\log_6 250 = \log_\beta \omega^3 + \log_\beta 2$
$\log_6 250 = \log_\beta 2 \omega^3$
$\frac{\log 250}{\log 6} = \frac{\log 2\omega^3}{\log \beta}$
And I am stuck here. The answer states that $\omega = 5$ and $\beta = 6$, which after entering it, is correct, but I don't know how it got to that point.
Looking directly at the last part I ended up with, it seems like you're supposed to equate the tops to one another and bottoms to one another, which would get you the answer, but with another example, that clearly does not work:
$\frac {\log 64}{\log 4} = \frac{\log 27}{\log 3}$
$64 \ne 27, 4 \ne 3$
How would one solve this problem, and is the path I went correct?
From:
$\log_{\beta} (2w^{3}) = \log_{6}(250)$, we can set $\beta = 6$ since we simply want to find a solution and not all solutions.
Equating $2w^3=250$ lets us solve for a unique $w$.
• Oh. So simple in hindsight, and was just overthinking it. – user154989 Dec 10 '15 at 15:59
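A quick numerical sanity check of that solution (an editorial addition):

```python
from math import log, isclose

beta, omega = 6, 5
lhs = log(250, 6) - log(2, beta)
rhs = 3 * log(omega, beta)
print(lhs, rhs)              # both are log_6(125) = 3*log_6(5) ~ 2.6947
assert isclose(lhs, rhs)
```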
HINT:
$$\log_6(250)-\log_\beta(2)=3\log_\beta(\omega)\Longleftrightarrow$$ $$\frac{\ln(250)}{\ln(6)}-\frac{\ln(2)}{\ln(\beta)}=\frac{3\ln(\omega)}{\ln(\beta)}\Longleftrightarrow$$ $$\frac{\ln(\beta)\ln(250)}{\ln(\beta)\ln(6)}-\frac{\ln(6)\ln(2)}{\ln(6)\ln(\beta)}=\frac{3\ln(\omega)}{\ln(\beta)}\Longleftrightarrow$$ $$\frac{\ln(\beta)\ln(250)-\ln(6)\ln(2)}{\ln(\beta)\ln(6)}=\frac{3\ln(\omega)}{\ln(\beta)}\Longleftrightarrow$$ $$\ln(\beta)\left(\ln(\beta)\ln(250)-\ln(6)\ln(2)\right)=3\ln(\beta)\ln(6)\ln(\omega)\Longleftrightarrow$$ $$3\ln(6)\ln(\beta)\ln(\omega)=\ln(250)\ln^2(\beta)-\ln(2)\ln(6)\ln(\beta)$$
The change-of-base formula is $\log_b x=\frac{\log_a x}{\log_a b}$. From this we get $\log_{\beta}\left((2w^3)^{\log_{\beta} 6}\right)=\log_{\beta}250$ which implies (log is 1-to-1) $$(2w^3)^{\log_{\beta} 6}=250=2\cdot5^3$$ which admits the solution $\log_{\beta} 6=1$ with $\beta =6$ and $\omega=5$ | 2020-07-16T14:42:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1569392/solving-a-log-equation-for-two-variables",
"openwebmath_score": 0.9240158200263977,
"openwebmath_perplexity": 160.3959624878332,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854146791214,
"lm_q2_score": 0.8652240825770432,
"lm_q1q2_score": 0.8390816957123401
} |
http://math.stackexchange.com/questions/46289/proof-of-a-lemma-in-probability | # Proof of a lemma in probability
I've encountered this lemma in Chung's book as an exercise:
If $\mathbb{E}|X|<\infty$ and $\lim_{n \to \infty} \mathbb{P}\{\Lambda_{n}\} = 0$, then, $$\lim_{n \to \infty} \int_{\Lambda_{n}} X\,\mathrm{d}\mathbb{P} = 0 \>.$$
Could anyone provide a detailed proof?
I'm wondering since $\mathbb{E}|X|<\infty$, can I use the fact $|X|<\infty \;\mathrm{a.e.}$ then $\exists M \in \mathbb{R}^{+} \,\mathrm{s.t.}\, |X|<M \;\mathrm{a.e.}$ Then $\int_{\Lambda_{n}} X\,\mathrm{d}\mathbb{P} \leq M \,\mathbb{P}\{\Lambda_{n}\}\rightarrow 0$.
And, can I use this lemma to prove that every $X \in L^{1}$ is uniformly integrable, using Thm 4.5.3 in Chung's book 'A course in probability theory'?
Hence, every finite set $\{X_{n}\} \subset L^{1}$ is uniformly integrable. However, why may an infinite set (possibly countably infinite) not be uniformly integrable?
Sorry to entangle these two questions together.
-
The fact that $|X| \lt \infty$ a.e. does not imply the existence of an $M$ with $|X| \lt M$ a.e. Consider $1/x^{1/2}$, on $[0,1]$ for example. – t.b. Jun 19 '11 at 13:57
Since you insist on uniform integrabiliby: isn't it obvious that $f_n = 1_{\Lambda_n} X$ is uniformly integrable, as $|f_n| \leq |X|$? – t.b. Jun 19 '11 at 14:14
@Theo Buehler: hmm, it seems I had a mistake at the beginning. Could you give me any hints on this? Thank you. – newbie Jun 19 '11 at 14:15
I think uniform integrability is settled. Clearly $f_n \to 0$ in measure. – t.b. Jun 19 '11 at 14:18
Yes, it is true. But can the lemma be proved without introducing uniform integrability? – newbie Jun 19 '11 at 14:20
Hint: Write $\int_{\Lambda_n} X d\mathbb{P}$ as $\int X 1_{\Lambda_n} d\mathbb{P}$, and note that $|X 1_{\Lambda_n}| \le |X|$. Then a certain familiar theorem applies...
-
yes, dominated convergence theorem can be appied here. I know it sounds stupid, but I couldn't give a logical reasoning for $X1_{\Lambda_{n}} \rightarrow 0$ in measure from $\mathbb{P}\{\Lambda_{n}\} \rightarrow 0$ in measure. – newbie Jun 19 '11 at 15:00
@newbie: $\{|X1_{\Lambda_n}| \geq \varepsilon\} \subset \Lambda_n$. – t.b. Jun 19 '11 at 15:03
Let $X_n :=|X|\mathbb{1}_{\left\{|X|\leq n\right\}}$. Since each $X_n$ is integrable and $X$ is finite almost everywhere we have from the Lebesgue monotone convergence theorem that $\displaystyle \lim_{n\to +\infty} \int_{\Omega}X_nd\mathbb{P} = \int_{\Omega} |X|d\mathbb{P}$. Let $\varepsilon >0$. We can find $n_0$ such that $\int_{\Omega}|X|\mathbb{1}_{\left\{|X|\geq n_0\right\}}d\mathbb{P}<\frac{\varepsilon}2$. We have $$\left|\int_{\Lambda_n}Xd\mathbb{P}\right|\leq \left|\int_{\Lambda_n}X\cdot \mathbb{1}_{\left\{|X|\geq n_0\right\}}d\mathbb{P}\right| +\left|\int_{\Lambda_n}X\cdot \mathbb{1}_{\left\{|X|< n_0\right\}}d\mathbb{P}\right|\leq \int_{\Omega} |X|\mathbb{1}_{\left\{|X|\geq n_0\right\}}d\mathbb{P} +n_0P(\Lambda_n)$$ and we can conclude by taking $N$ such that if $n\geq N$ then $P(\Lambda_n)\leq \frac{\varepsilon}{2n_0}$.
-
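To see the lemma "in action" on a concrete example (an editorial addition; the choice $X(u)=u^{-1/3}$ and the sets $\Lambda_n$ are mine, picked so that everything can also be computed in closed form):

```python
# Probability space ([0,1], Lebesgue), X(u) = u**(-1/3).  Then E|X| = 3/2 < oo
# although X is unbounded (so no bound |X| < M a.e. exists, as noted in the
# comments), and for Lambda_n = {u < 1/n} we have P(Lambda_n) = 1/n -> 0 and
#     int_{Lambda_n} X dP = int_0^{1/n} u**(-1/3) du = (3/2) * n**(-2/3) -> 0.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(2_000_000)
X = u ** (-1.0 / 3.0)

print("Monte Carlo E|X| ~", X.mean())                  # ~ 1.5
for n in (10, 100, 1000, 10000):
    mc = (X * (u < 1.0 / n)).mean()                    # estimate of the integral
    exact = 1.5 * n ** (-2.0 / 3.0)
    print(f"n={n:6d}  P={1/n:<8.4g} MC~{mc:.5f}  exact={exact:.5f}")
```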
See Proposition 4.16 and its proof here. The result you are interested in is an immediate corollary.
EDIT: Adapting that proposition to our setting, it can be stated as follows: Suppose that $X: \Omega \to \mathbb{R}$ is an integrable random variable, meaning that $\int {|X|{\rm d\mathbb P} } < \infty$. Then, given any $\varepsilon > 0$, there exists $\delta > 0$ such that $$0 \le \int_A {|X|{\rm d\mathbb P} } < \varepsilon$$ whenever $A$ is a measurable set with ${\mathbb P}(A)< \delta$. (Hence if ${\mathbb P}(\Lambda _n) \to 0$, we conclude that $\lim _{n \to \infty } \int_{\Lambda _n } {X{\rm d\mathbb P}} = 0$.)
-
I have five words for you: absolute continuity of the integral. :)
- | 2014-03-12T14:49:33 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/46289/proof-of-a-lemma-in-probability",
"openwebmath_score": 0.9851906299591064,
"openwebmath_perplexity": 290.2853325709898,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854164256365,
"lm_q2_score": 0.8652240791017536,
"lm_q1q2_score": 0.839081693853182
} |
https://gateoverflow.in/1077/gate2004-83-isro2015-40 | # GATE2004-83, ISRO2015-40
6.3k views
The time complexity of the following C function is (assume $n > 0$)
int recursive (int n) {
if(n == 1)
return (1);
else
return (recursive (n-1) + recursive (n-1));
}
1. $O(n)$
2. $O(n \log n)$
3. $O(n^2)$
4. $O(2^n)$
Answer is D) $O(2^n)$
int recursive (int n) {
if(n == 1) // takes constant time say 'A' time
return (1); // takes constant time say 'A' time
else
// takes T(n-1) + T(n-1) time
return (recursive (n-1) + recursive (n-1));
}
$T(n) = 2T(n - 1) + a$ is the recurrence equation found from the pseudo code. Note: $a$ is a constant $O(1)$ cost that the non-recursive part of the function takes.
Solving the recurrence by Back Substitution:
\begin{align} T(n) &= 2T(n - 1) + a \\[1em] T(n - 1) &= 2T(n - 2) + a \\[1em] T(n - 2) &= 2T(n - 3) + a \\[1em] &\vdots \end{align}
Thus, we can re-write the equation for $T(n)$ as follows
\begin{align*} T(n) &= 2 \Bigl [ 2T(n - 2) + a \Bigr ] + a &= 4T(n - 2) + 2a + a \\[1em] &= 4 \Bigl [ 2T(n - 3) + a \Bigr ] + 3a &= 8T(n - 3) + 4a + 2a + a \\[1em] &\vdots \\[1em] &= 2^k T(n - k) + (2^k - 1) a \end{align*}
On Substituting Limiting Condition
$$T(1) = 1 \\ \implies n - k = 1 \\ \implies k = n - 1$$
Therefore, our solution becomes
$$2^{n - 1} + \Bigl ( 2^{n - 1} - 1 \Bigr ) a \\ = O(2^n)$$
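For readers who like an empirical cross-check (an editorial addition; this is a Python mirror of the C function above): counting the actual calls confirms the bound, since the call count $C(n)$ satisfies $C(1)=1$, $C(n)=2C(n-1)+1$, i.e. $C(n)=2^n-1$.

```python
calls = 0

def recursive(n):
    global calls
    calls += 1
    if n == 1:
        return 1
    return recursive(n - 1) + recursive(n - 1)

for n in range(1, 16):
    calls = 0
    recursive(n)
    assert calls == 2 ** n - 1
print("number of calls is 2**n - 1 for n = 1..15")
```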
2
You explain well - all points included :)
HTML spacing for alignment is bad- even if we align spaces properly when the screen resolution changes it becomes bad. Latex is better :)
1
Thank you sir :) . I will try to improve next time.
0
It will be (2^k)-1
Its similar to tower of hanoi problem
$$T(n)=2T(n-1) + 1$$
T(1)=1
T(2)=2.1 + 1 =3
T(3)=2.3 +1 =7
T(4)=2.7 +1 =15 .... .....
T(n)=2.T(n-1)+ 1
we can see that a pattern is forming, namely $T(n)=2^n-1$, so the answer is (d), $O(2^n)$
0
@Bhagirathi: why this 1 after 2T(n-1)
1
@Gate Mm:
T(n) = 2T(n-1)+1. Here +1 is for the base condition if(n==1) return 1. It has the constant time complexity of 1.
1
No it's not for that. 1 in the recursive equation signifies that some work has been done to divide a problem into two subproblems.
Base condition if(n==1) return 1 is used at the end.
0
@amitatgateoverflow is correct.
1 or some constant term is there to denote the amount of work to be done to divide the original problem and then combine the solutions to those subproblems to get the final answer.
The recurrence relation from the code is :
T(n) = 2T(n-1) + 1
The above recurrence can be solved easily with the help of Subtract and conquer master's theorem.
Here a=2, b=1 d=0
Since a>1, the third case applies
$O(n^0 \cdot 2^{n/1}) = O(2^n)$. Option (d)
0
how could we know 'd ' value.
0
Read the theorem statement again and you would know.
$F(n)$ is in $O(n^d)$
0
Sir can we solve it by finding homogenous and particular solution?
Answer by that method :: (2^n)-1. Is that correct?
0
Yes, you can do it that way too, but it would be a lengthy way to solve it. A single-term recurrence relation like this is better solved by substitution or the recursion-tree method.
Another way to visualize this problem.
0
F(2) = 2 ??? how ??
0
@Puja Mishra You can find that easily by putting $n = 2$ in the code. The $return$ part in the $else$ block will return $recursive(1) + recursive(1)$.
1 vote
$T(n)=2T(n-1)+1$
$a_{n}=2a_{n-1}+1$
First solve this
$a_{n}=2a_{n-1}$
$r^n=2r^{n-1}$
$\dfrac{r^n}{r^{n-1}}=2$
$r=2$
$a_{n}^{(h)}=d(2)^n...........(1)$
Now solve this
$a_{n}^{(p)}=p_{0}$
$a_{n}=2a_{n-1}+1$
$a_{n}-2a_{n-1}=1$
$p_{0}-2(p_{0})=1$
$p_{0}=-1$
$a_{n}^{(p)}=-1...........(2)$
$Add\ (1)+(2)$
$a_{n}=d(2^n)-1............(3)$
$Given\ a_{1}=1$
$Substitute\ in\ (3)$
$d=1$
$a_{n}=1.(2^n)-1$
$T(n)=O(2^n)$
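The same recurrence can also be handed to a CAS; a small sketch (an editorial addition, assuming SymPy is installed):

```python
import sympy as sp

n = sp.Symbol('n', integer=True)
T = sp.Function('T')
# T(n+1) = 2*T(n) + 1 with T(1) = 1 is the same recurrence, written with a forward shift
sol = sp.rsolve(T(n + 1) - 2 * T(n) - 1, T(n), {T(1): 1})
print(sol)        # 2**n - 1
```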
| 2020-08-08T00:39:38 | {
"domain": "gateoverflow.in",
"url": "https://gateoverflow.in/1077/gate2004-83-isro2015-40",
"openwebmath_score": 0.9701326489448547,
"openwebmath_perplexity": 1609.4984816655615,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854146791213,
"lm_q2_score": 0.8652240791017535,
"lm_q1q2_score": 0.8390816923420548
} |
http://math.stackexchange.com/questions/1710786/why-does-lhopitals-rule-fail-in-this-case | # Why does L'Hopital's rule fail in this case?
$$\lim_{x \to \infty} \frac{x}{x+\sin(x)}$$
This is of the indeterminate form of type $\frac{\infty}{\infty}$, so we can apply l'Hopital's rule:
$$\lim_{x\to\infty}\frac{x}{x+\sin(x)}=\lim_{x\to\infty}\frac{(x)'}{(x+\sin(x))'}=\lim_{x\to\infty}\frac{1}{1+\cos(x)}$$
This limit doesn't exist, but the initial limit clearly approaches $1$. Where am I wrong?
-
If the limit of $f'/g'$ exists, then it is also the limit of $f/g$. Not the other way around. – user251257 Mar 23 at 21:11
One often forgets there are hypotheses to check before applying L'Hospital. One of these is that the ratio of the derivatives must exist (or still be indeterminate). – Bernard Mar 23 at 21:13
A condition on the use of L'Hôpital in this context is that the derivative of the denominator must be non-zero on $(N, \infty)$ for some $N$. – Brian Tung Mar 23 at 21:15
This post might give you something to think about. – Hirshy Mar 23 at 21:23
An excellent example for a Calculus Course. – dwarandae Mar 24 at 4:46
Your only error -- and it's a common one -- is in a subtle misreading of L'Hopital's rule. What the rules says is IF the limit of $f'$ over $g'$ exists then the limit of $f$ over $g$ also exists and the two limits are the same. It doesn't say anything if the limit of $f'$ over $g'$ doesn't exist.
-
Not just a common error, a VERY common error. And I am from now on going to use the OP's example in my next calc batch! – imranfat Mar 23 at 21:18
What's even funnier is examples where you apply l'Hopital twice and get back where you started :-) – gnasher729 Mar 25 at 21:11
L'Hopital's rule only tells you that if the modified limit exists and has value $L$, then the original limit also exists and has value $L$. It doesn't tell you that the converse holds.
So, the fact that the modified limit doesn't exist gives you no information about the original limit. So, you need a different method.
Consider something more direct: can you compute $$\lim_{x\to\infty}\frac{x}{x+\sin x}=\lim_{x\to\infty}\frac{1}{1+\frac{\sin x}{x}}?$$
-
Or 1 - sin x / (x + sin x). – gnasher729 Mar 24 at 22:16
or squeeze theorem using -1 <= sin x <= 1. – djechlin Mar 25 at 18:57
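A tiny numerical illustration of the two previous answers (an editorial addition): the original ratio settles down to 1 while the ratio of derivatives $1/(1+\cos x)$ keeps oscillating (and blows up near $\cos x = -1$), so the hypothesis "the limit of $f'/g'$ exists" fails even though the original limit is fine.

```python
import math

for x in (10.0, 100.0, 1000.0, 10000.0):
    print(f"x={x:8.0f}   x/(x+sin x) = {x / (x + math.sin(x)):.6f}"
          f"   1/(1+cos x) = {1.0 / (1.0 + math.cos(x)):.3f}")
```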
De L'Hôpital's rule states that: if $f$ and $g$ are functions that are differentiable on an open interval $I$ (except possibly at a point $x_0$ contained in $I$), if $$\lim_{x\to x_0}f(x)=\lim_{x\to x_0}g(x)=0 \;\mathrm{ or }\; \pm\infty,$$ if $g'(x)\ne 0$ for all $x$ in $I$ with $x \ne x_0$, and $\lim_{x\to x_0}\frac{f'(x)}{g'(x)}$ exists, then:
$$\lim_{x\to x_0}\frac{f(x)}{g(x)} = \lim_{x\to x_0}\frac{f'(x)}{g'(x)}\,.$$
The most classical "counter-example" is when functions are constant: $f(x)=c$ and $g(x)=1$. The derivative of $g(x)$ vanishes on any open interval, while $f/g = c$.
The factorization proposed by @Nick Peterson typically avoids resorting to the rule when it is not necessary (especially when the indeterminacy can be lifted easily). It looks like magic, and as with all magic, it should be used with parsimony (unless it unleashes terrible powers).
-
Others already said that l'Hopital requires existence of the limit of the ratio of the derivatives. However, with a solid understanding of the limit definition it is still possible to prove the result by applying De l'Hopital, not to that function itself but to bounding functions; think about this:
$$\lim_{x \to +\infty} \frac{x}{x+1} \leq \lim_{x \to +\infty} \frac{x}{x+\sin(x)} \leq \lim_{x \to +\infty} \frac{x}{x-1}$$ condensed considering also $-\infty$ with $$\lim_{x \to \infty} \frac{x}{x+sig(x)} \leq \lim_{x \to \infty} \frac{x}{x+\sin(x)} \leq \lim_{x \to \infty} \frac{x}{x-sig(x)}$$ where $$sig(x)=\left\{ \begin{matrix} 0 & x=0\\ \frac{|x|}x & x\ne 0 \end{matrix} \right.$$
prove the above while apply l'Hopital to
$$\lim_{x \to \infty} \frac{x}{x\pm 1}$$
the squeezing inequalities are true after a certain G, formally $\exists G / \forall x\in\Re,|x|>G : \frac{x}{x+sig(x)} \leq \frac{x}{x+\sin(x)} \leq \frac{x}{x-sig(x)}$
applying the limit definition to $x \over x+sin(x)$ the starting point M selecting all x>M has to be greater or equal than G (simply require $M\geq G$), in this case M=G is great enough to say that the limit is the same 1.
More formally (I actually didn't find an online pointable suitable formal definition of $\lim_{x\to\infty}$, so I'm making it up)
$$\lim_{x \to \infty} f(x) = r\in \{\Re, -\infty, +\infty, NaN\} / \\ \exists r \in \Re : \forall \epsilon \in \Re, \epsilon>0: \exists M \in \Re : \forall x \in \Re, |x| > M : |f(x)-r|<\epsilon \\ \lor r=\infty, omissis \\ \lor r=+\infty, omissis \\ \lor r=-\infty, omissis \\ \lor r=NaN, omissis.$$ (r as abbreviation of response, NaN (not a number) is when the limit doesn't exists and $\lor$ is in this case a shortcut or).
think of names
$f(x)=\frac{x}{x+\sin(x)}$
$g(x)=\frac{x}{x \pm 1}$, and when the definition of limit is used with g(x) the lower bound M is called G
from the evident property $\exists G' \in \Re^+ | \forall x \in \Re, |x|>G' : x-1 \leq x+\sin(x) \leq x+1$
$\Rightarrow \exists G \in \Re^+ | \forall x \in \Re, |x|>G : \frac{x}{x+sig(x)} \leq \frac{x}{x+\sin(x)} \leq \frac{x}{x-sig(x)}$
$$\lim_{x \to \infty} \frac{x}{x\pm 1} \underleftarrow{=(?H)= \lim_{x \to +\infty} \frac{\frac{d}{dx} x}{\frac{d}{dx}(x \pm 1)} = \lim_{x \to +\infty} \frac{1}{1 \pm 0}=1}$$ the existence of these limits (there are two, due to $\pm$) ensures that
$\forall \epsilon \in \Re, \epsilon>0: \exists G \in \Re : \forall x \in \Re, |x| > G : |g(x)-r|<\epsilon$
Choosing $M \geq G$ ($M$ is the lower bound in the definition of limit for $f(x)$) $$\Rightarrow \lim_{x \to \infty} f(x)=1$$
-
There is another useful rule, which I don't seem to have seen written down explicitly:
Let f, g, r, and s be functions where g -> inf and r and s are bounded. Then taking the limit of f / g or the limit of (f + r) / (g + s) gives the same result.
Applied here, since sin (x) is bounded, the limit is the same as the limit of x / x.
- | 2016-06-01T02:22:54 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/1710786/why-does-lhopitals-rule-fail-in-this-case",
"openwebmath_score": 0.936578631401062,
"openwebmath_perplexity": 410.0998606179616,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854103128327,
"lm_q2_score": 0.8652240808393984,
"lm_q1q2_score": 0.8390816902493795
} |
https://cs.stackexchange.com/questions/2789/solving-or-approximating-recurrence-relations-for-sequences-of-numbers/24082 | # Solving or approximating recurrence relations for sequences of numbers
In computer science, we often have to solve recurrence relations, that is, find a closed form for a recursively defined sequence of numbers. When considering runtimes, we are often interested mainly in the sequence's asymptotic growth.
Examples are
1. The runtime of a tail-recursive function stepping downwards to $0$ from $n$ whose body takes time $f(n)$:
\qquad \begin{align} T(0) &= 0 \\ T(n+1) &= T(n) + f(n) \end{align}
2. The Fibonacci numbers:
\qquad \begin{align} F_0 &= 0 \\ F_1 &= 1 \\ F_{n+2} &= F_n + F_{n+1} \end{align}
3. The number of Dyck words with $n$ parenthesis pairs:
\qquad\begin{align} C_0 &= 1 \\ C_{n+1}&=\sum_{i=0}^{n}C_i\,C_{n-i} \end{align}
4. The mergesort runtime recurrence on lists of length $n$:
\qquad \begin{align} T(1) &= T(0) = 0 \\ T(n) &= T(\lfloor n/2\rfloor) + T(\lceil n/2\rceil) + n-1 \end{align}
What are methods to solve recurrence relations? We are looking for
• general methods and
• methods for a significant subclass
as well as
• methods that yield precise solutions and
• methods that provide (bounds on) asymptotic growth.
This is supposed to become a reference question. Please post one answer per method and provide a general description as well as an illustrative example.
• These notes may be helpful. (But no, I will not transcribe them into answers.) – JeffE Jul 17 '12 at 20:17
## Converting Full History to Limited History
This is a first step in solving recurrences where the value at any integer depends on the values at all smaller integers. Consider, for example, the recurrence $$T(n) = n + \frac{1}{n}\sum_{k=1}^n \big(T(k-1) + T(n-k)\big)$$ which arises in the analysis of randomized quicksort. (Here, $k$ is the rank of the randomly chosen pivot.) For any integer $n$, the value of $T(n)$ depends on all $T(k)$ with $k<n$. Recurrences of this form are called full history recurrences.
To solve this recurrence, we can transform it into a limited history recurrence, where $T(n)$ depends on only a constant number of previous values. But first, it helps to simplify the recurrence a bit, to collect common terms and eliminate pesky fractions. \begin{align*} n T(n) &= n^2 + 2\sum_{k=1}^{n-1} T(k) \end{align*} Now to convert to a limited-history recurrence, we write down the recurrence for $T(n-1)$, subtract, and regather terms: \begin{align*} (n-1) T(n-1) &= (n-1)^2 + 2\sum_{k=1}^{n-2} T(k) \\ \implies nT(n) - (n-1)T(n-1) &= (2n-1) + 2T(n-1) \\[1ex] \implies n T(n) &= (2n-1) + (n+1) T(n-1) \\[1ex] \implies \frac{T(n)}{n+1} &= \frac{2n-1}{n(n+1)} + \frac{T(n-1)}{n} \end{align*}
Now if we define $t(n) = T(n)/(n+1)$ and replace the fraction $\frac{2n-1}{n(n+1)}$ with the simpler asymptotic form $\Theta(1/n)$, we obtain the much simpler recurrence $$t(n) = \Theta(1/n) + t(n-1).$$ Expanding this recurrence into a summation immediately gives us $t(n) = \Theta(H_n) = \Theta(\log n)$, where $H_n$ is the $n$th harmonic number. We conclude that $\boldsymbol{T(n) = \Theta(n\log n)}$.
• If you want the precise solution for $T$, that's also not hard (here), if a bit tedious; we get $T(n) = 2(n+1)H_n + (T(0) - 3)n + T(0)$. Actually, $\sum_{i=1}^n \Theta(1/i) = \Theta(H_n)$ confuses me so I prefer the precise variant. Pesky sums of Landau terms. – Raphael Jul 17 '12 at 21:48
• Actually, it suffices to observe (inductively) that $T(n)/(n+1) = \Theta(t^*(n))$, where $t^*(n) = 1/n + t^*(n-1)$. In fact, I already used that trick at the very start, when I replaced the $\Theta(n)$ time to partition an array with the simpler $n$. This is an utterly standard abuse of notation. – JeffE Jul 19 '12 at 15:59
## Generating Functions $\newcommand{\nats}{\mathbb{N}}$
Every series of numbers corresponds to a generating function. It can often be comfortably obtained from a recurrence to have its coefficients -- the series' elements -- plucked.
This answer includes the general ansatz with a complete example, a shortcut for a special case and some notes about using this method to obtain asymptotics (even if the precise result can not be obtained).
### The Method
Let $(a_n)_{n\in\nats}$ be a sequence of numbers. Then, the formal power series
$\qquad \displaystyle A(z) = \sum_{n=0}^\infty a_nz^n$
is the ordinary generating function¹ of $(a_n)_{n\in\nats}$. The coefficients in the series expansion of $A(z)$ equal the sequence, i.e. $[z^n]A(z) = a_n$. For example, the ordinary generating function of the famous Catalan numbers $C_n$ is
$\qquad \displaystyle C(z) = \frac{1 - \sqrt{1 - 4z}}{2z}$.
The definition of $A$ is also our ansatz for solving a recurrence. This works best for linear recurrences, so assume for the sake of simplicity a recurrence of the form
\qquad \begin{align} a_0 &= c_0 \\ &\vdots \\ a_{k-1} &= c_{k-1} \\ a_n &= f(n) + \sum_{i=1}^k b_i a_{n-i} \qquad , n \geq k \end{align}
for some fixed $b_1, \dots, b_k \in \mathbb{R}$ and $f(n) : \nats \to \nats$ a function independent of all $a_i$. Now we simply insert both anchors and recursive part into the ansatz, that is
\qquad \begin{align} A(z) &= \sum_{n=0}^\infty a_nz^n \\ &= c_0z^0 + c_1z^1 + \dots + c_{k-1}z^{k-1} + \sum_{n=k}^\infty \left[ f(n) + \left(\sum_{i=1}^k b_i a_{n-i}\right)\right] z^n \end{align}
Using mechanics of sum manipulation, properties of formal power series and known identities², the last right-hand side has to be brought into closed forms, typically in terms of $A(z)$. The resulting equation can (often) be solved for $A(z)$. The series expansion of the result (which may be easily obtained, known or otherwise approachable) is essentially the solution.
Good introductions can be found in Wilf's book [3] and in GKP [4]. Advanced material has been collected by Flajolet and Sedgewick [5].
### Example
Consider
\qquad \begin{align} a_0 &= 1 \\ a_1 &= 2 \\ a_n &= 5n + 3a_{n-1} - 2a_{n-2} \qquad , n > 1 \end{align}
We calculate:
\qquad \begin{align} A(z) &= \sum_{n=0}^\infty a_n z^n \\ &= 1 + 2z + \sum_{n=2}^\infty \left[ 3a_{n-1} - 2a_{n-2} + 5n\right]z^n \\ &= 1 + 2z + 3\sum_{n=2}^\infty a_{n-1}z^n - 2\sum_{n=2}^\infty a_{n-2}z^n + 5\sum_{n=2}^\infty n z^n \\ &= 1 + 2z + 3z\sum_{n=1}^\infty a_nz^n - 2z^2\sum_{n=0}^\infty a_n z^n + 5\sum_{n=2}^\infty n z^n \\ &= 1 + 2z + 3z(A(z) - a_0) - 2z^2A(z) + 5 \left( \frac{z}{(1-z)^2} - z\right) \\ &= 1 - 6z + (3z - 2z^2)A(z) + \frac{5z}{(1-z)^2} \end{align}
This solves to
\qquad \begin{align} A(z) &= \frac{1 - 3z + 13z^2 - 6z^3}{(1-2z)(1-z)^3} \\ &= \frac{16}{1-2z} - \frac{5}{1-z} - \frac{5}{(1-z)^2} - \frac{5}{(1-z)^3} \\ &= 16\sum_{n=0}^\infty 2^n z^n - 5\sum_{n=0}^\infty z^n - 5 \sum_{n=0}^\infty (n+1) z^n - 5\sum_{n=0}^\infty \frac{(n+1)(n+2)}{2} z^n \end{align}
Now we can finally read off
\qquad \begin{align} a_n &= 16 \cdot 2^n - 5 - 5(n+1) - \frac{5}{2}(n+1)(n+2) \\ &= 2^{n+4} - \frac{5}{2}n^2 - \frac{25}{2}n - 15 \end{align}
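As a sanity check, here is a short Python sketch (not part of the original answer) that compares the closed form just read off against the values produced by the recurrence itself, using exact rational arithmetic.

```python
# Compare a_n = 2^(n+4) - (5/2) n^2 - (25/2) n - 15 with the recurrence
# a_0 = 1, a_1 = 2, a_n = 5n + 3 a_{n-1} - 2 a_{n-2}.
from fractions import Fraction

def a_recurrence(n_max):
    a = [Fraction(1), Fraction(2)]
    for n in range(2, n_max + 1):
        a.append(5 * n + 3 * a[n - 1] - 2 * a[n - 2])
    return a

def a_closed(n):
    return Fraction(2) ** (n + 4) - Fraction(5, 2) * n ** 2 - Fraction(25, 2) * n - 15

a = a_recurrence(30)
assert all(a[n] == a_closed(n) for n in range(31)), "closed form disagrees"
print([int(x) for x in a[:6]])   # [1, 2, 14, 53, 151, 372]
```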
Once you get used to it, you notice that this is all quite mechanical. In fact, computer algebra can do all this stuff for you in many cases. The good news is that it remains (more or less) this mechanical even if the recurrence is more complex. See here for a more involved, less mechanical example.
Also note that the general techniques also work if the objects sought are complex numbers, or even polynomials.
### A Shortcut
For linear and homogeneous recurrences, i.e. such of the form
\qquad \begin{align} a_0 &= c_0 \\ &\vdots \\ a_{k-1} &= c_{k-1} \\ a_n &= \sum_{i=1}^k b_i a_{n-i} \qquad , n \geq k \end{align}
the above goes through in exactly the same way, every time. By performing above calculation symbolically, we find the following lemma. Let
$\qquad \displaystyle z^k - b_1 z^{k-1} - b_2 z^{k-2} - \dots - b_k$
be the characteristic polynomial (of the recurrence). Let furthermore $\lambda_1, \dots, \lambda_l$ be the (pairwise distinct) zeros of said polynomial with multiplicities $r_i$, respectively. Then, the desired coefficient is given by
$\qquad \displaystyle a_n = \sum_{i=1}^l \sum_{j=1}^{r_i} b_{i,j} \cdot n^{j-1} \cdot \lambda_i^n$
with unknown $b_{i,j}$. As the characteristic polynomial has degree $k$, there are exactly $k$ (complex) zeros, i.e. the $r_i$ sum to $k$. Therefore, the missing coefficients can be determined by solving the linear equation system with $k$ equations obtained by equating above formula with any $k$ of the $a_n$ (e.g. the anchors).
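The following Python sketch (not part of the original answer) illustrates the shortcut on the Fibonacci recurrence from the question; it assumes all zeros of the characteristic polynomial are simple, so the inner sum over $j$ disappears.

```python
# Fibonacci: F_n = F_{n-1} + F_{n-2}, F_0 = 0, F_1 = 1.  Characteristic polynomial
# z^2 - z - 1; fit the unknown coefficients from the anchors (simple roots assumed).
import numpy as np

b = [1, 1]                 # recurrence coefficients b_1, b_2
anchors = [0, 1]           # F_0 = 0, F_1 = 1
k = len(b)

lams = np.roots([1] + [-c for c in b])        # zeros of z^k - b_1 z^{k-1} - ... - b_k
M = np.array([[lam ** n for lam in lams] for n in range(k)])
coeffs = np.linalg.solve(M, np.array(anchors, dtype=float))
print(coeffs)                                  # approximately [ 1/sqrt(5), -1/sqrt(5) ]

def closed(n):
    return float((coeffs * lams ** n).sum().real)

fib = [0, 1]
for n in range(2, 15):
    fib.append(fib[-1] + fib[-2])
print([round(closed(n)) for n in range(15)] == fib)   # True
```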
### Asymptotics
Getting to a closed form for $A(z)$ is usually the easy part. Expressing it in terms of generating functions whose coefficients we know (as we did in the example) quickly becomes hard, though. Examples are $C(z)$ from above and the one for the number of Dyck words mentioned in the question.
One can employ complex analysis machinery, specifically singularity analysis, in order to obtain asymptotics for the coefficients; buzzwords include Darboux's method and saddle-point method. These are based on the residue theorem and Cauchy's integral formula. See [6] for details.
1. You can do similar things with exponential, Dirichlet and some other generating functions. Which works best depends on the sequence at hand and in particular whether you find the necessary closed forms.
2. For example from the TCS Cheat Sheet or [3].
4. Concrete Mathematics by R.L. Graham, D.E. Knuth and O. Patashnik (1994, 2nd ed.)
## Master Theorem
The Master theorem gives asymptotics for the solutions of so-called divide & conquer recurrences, that is, those that divide their parameter into proportionally sized chunks (instead of cutting away constants). They typically occur when analysing (recursive) divide & conquer algorithms, hence the name. The theorem is popular because it is often incredibly easy to apply. On the other hand, it can only be applied to recurrences of the following form:
$\qquad \displaystyle T(n) = a \cdot T\left(\frac{n}{b}\right) + f(n)$
with $a \geq 1, b > 1$. There are three cases
1. $\quad \displaystyle f \in O\left( n^{\log_b (a) - \varepsilon} \right)$
for some $\varepsilon > 0$;
2. $\quad \displaystyle f \in \Theta\left( n^{\log_b a} \log^{k} n \right)$,
for some $k \geq 0$;
3. $\quad \displaystyle f \in \Omega\left( n^{\log_b (a) + \varepsilon} \right)$
for some $\varepsilon > 0$ and
$\quad \displaystyle a f\left( \frac{n}{b} \right) \le c f(n)$
for some $c < 1$ and all sufficiently large $n$.
which imply the asymptotics
1. $T \in \Theta\left( n^{\log_b a} \right)$,
2. $T \in \Theta\left( n^{\log_b a} \log^{k+1} n \right)$ and
3. $T \in \Theta \left(f \right)$,
respectively. Note that the base cases are not stated or used here; that makes sense, considering we are only investigating asymptotic behaviour. We silently assume that they are some constants (what else could they be?). Which constants they are is irrelevant; they all vanish in the $\Theta$.
### Examples
1. Consider the recurrence
$\qquad \displaystyle T(n) = 4T\left(\frac{n}{3}\right) + n$.
With $f(n) = n$, $a=4$ and $b=3$ -- note that $\log_b a \approx 1.26$ -- we see that case one applies with $\varepsilon = 0.25$. Therefore, $T \in \Theta(n^{\log_3 4}) = \Theta(n^{1.261\dots})$.
2. Consider the recurrence
$\qquad \displaystyle T(n) = 2T(n/2) + n$.
With $f(n) = n$, $a=2$ and $b=2$ -- note that $\log_b a = 1$ -- we see that case two applies with $k=0$. Therefore, $T \in \Theta(n \log n)$.
3. Consider the recurrence
$\qquad \displaystyle T(n) = 3T\left(\frac{n}{4}\right) + n$.
With $f(n) = n$, $a=3$ and $b=4$ -- note that $\log_b a \approx 0.79$ -- we see that case three applies with $\varepsilon = 0.2$ and $c=3/4$ (since $3 f(n/4) = \frac{3}{4} n$). Therefore, $T \in \Theta(n)$.
4. Consider the recurrence
$\qquad \displaystyle T(n) = 16T\left(\frac{n}{4}\right) + n!$
Here we have $a = 16$, $b=4$ and $f(n) = n!$ - many standard examples will have polynomial $f$, but this is not a rule. We have $\log_b a = 2$, and case three applies again: we can choose any $\varepsilon$ since $n! \in \Omega(n^{k})$ for all $k$, and the regularity condition $16 f(n/4) \le c f(n)$ holds for any fixed $c > 0$ once $n$ is large enough. Hence $T \in \Theta(n!)$. (A small numerical check of these examples follows below.)
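The following Python sketch (not part of the original answer) is the numerical check promised above: it evaluates examples 1 and 2 at exact powers of $b$, with an assumed base case $T(1)=1$, and watches the ratio against the asymptotic form predicted by the theorem.

```python
# Example 1: T(n) = 4 T(n/3) + n   -> Theta(n^{log_3 4})   (case 1)
# Example 2: T(n) = 2 T(n/2) + n   -> Theta(n log n)       (case 2)
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T1(n):
    return 1 if n <= 1 else 4 * T1(n // 3) + n

@lru_cache(maxsize=None)
def T2(n):
    return 1 if n <= 1 else 2 * T2(n // 2) + n

for j in (5, 10, 15):
    n = 3 ** j
    print(n, T1(n) / n ** math.log(4, 3))   # ratio settles near a constant
for j in (5, 10, 20):
    n = 2 ** j
    print(n, T2(n) / (n * math.log2(n)))    # ratio settles near a constant
```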
• It is well possible that none of the Master theorem's cases applies. For example, the subproblems may not have equal size or have a more complex form. There are some extensions to the Master theorem, for instance Akra-Bazzi [1] or Roura [2]. There is even a version that works for discrete recurrences (i.e. floors and ceils are used on the recursive parameters) and provides sharper results [3].
• Usually, you have to massage the actual recurrence relation you have into shape before you can apply the Master theorem. Common transformations that preserve asymptotics include dropping floors and ceils as well as assuming $n=b^k$. Take care not to break things here; refer to [4] section 4.6 and this question for details.
1. On the Solution of Linear Recurrence Equations by M. Akra and L. Bazzi (1998)
2. An improved master theorem for divide-and-conquer recurrences by S. Roura (1997)
Refers to other improved master theorems.
3. A master theorem for discrete divide and conquer recurrences by M. Drmota and W. Szpankowski (2011)
4. Introduction to Algorithms by Cormen et al. (2009, 3rd edition)
• This might be the stupid question but I often fail to hold the mental model when a is not equal to b, I don't know why but by intuition I always feel that both must be same always, like in mergesort we divide the problem in two equal(almost) halves and with n/2 instances each. Further if we divide the algorithm in three equal parts then the inputs should also be divided in three equal parts which again makes a and b equal. How can I break this wrong intuition? – CodeYogi May 2 '16 at 12:12
• @CodeYogi, the typical case is as you state. But it does happen that you get more (e.g. Karatsuba multiplication, 3 multiplications of 1/2 size; Strassen multiplication of matrices, 7 multiplications of 1/2 size) or less (e.g. binary search, 1 search of 1/2 size) recursive calls. – vonbrand Feb 9 at 22:15
## Guess & Prove
Or, as I like to call it, the "$\dots$ technique". It can be applied to all kinds of identities. The idea is simple:
Guess the solution and prove its correctness.
This is a popular method, arguably because it usually requires some creativity and/or experience (good for showing off) but few mechanics (looks elegant). The art here is to make good, educated guesses; the proof is (in our case) usually a more or less simple induction.
When applied to recurrences, "guessing" is typically done by
• expanding the recurrence a couple of times,
• figuring out the anchor and
• guessing the pattern for the intermediate (the $\dots$).
### Simple Example
\qquad \begin{align} s_0 &= s_1 = s_2 = 1 \\ s_n &= 5s_{n-3} + 6 \qquad\qquad n \geq 3 \end{align}
Let us expand the definition of $s_n$ a few times:
\qquad \begin{align} s_n &= 5s_{n-3} + 6 \\ &= 5(5s_{n-6} + 6) + 6 \\ &= 5(5(5s_{n-9} + 6) + 6) + 6 \\ &\ \vdots \\ &= \underbrace{5(5(5( \dots 5\cdot 1}_{n \div 3 \text{ times}} + \underbrace{6 \dots ) + 6) + 6) + 6}_{n \div 3 \text{ times}} \end{align}
Here, the pattern is easy to spot and it leads us to the claim:
\qquad \begin{align} s_n &= 5^{\left\lfloor\frac{n}{3}\right\rfloor} + 6\cdot \sum_{i=0}^{\left\lfloor\frac{n}{3}\right\rfloor - 1} 5^i \\ &= \frac{5}{2}\cdot 5^{\left\lfloor\frac{n}{3}\right\rfloor} - \frac{6}{4} \end{align}
Now we prove the identity by induction. For $n \in \{0,1,2\}$, we can establish correctness by plugging in the respective value. Assuming the identity holds for all $n' \leq n$ for an arbitrary but fixed $n \geq 0$, we calculate
\qquad \displaystyle \begin{align} s_{n+3} &= 5s_n + 6 \\ &= 5\cdot \left( \frac{5}{2}\cdot 5^{\left\lfloor\frac{n}{3}\right\rfloor} - \frac{6}{4} \right) + 6 \\ &= \frac{5}{2}\cdot 5^{\left\lfloor\frac{n}{3}\right\rfloor + 1} - \frac{6}{4} \\ &= \frac{5}{2}\cdot 5^{\left\lfloor\frac{n+3}{3}\right\rfloor} - \frac{6}{4} \end{align}
which proves the identity by the power of induction.
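A short Python check (not part of the original answer) of the guessed closed form against the recurrence, using exact rationals:

```python
# Compare s_n = (5/2) * 5^(floor(n/3)) - 6/4 with s_n = 5 s_{n-3} + 6, s_0 = s_1 = s_2 = 1.
from fractions import Fraction

s = [Fraction(1)] * 3
for n in range(3, 40):
    s.append(5 * s[n - 3] + 6)

closed = lambda n: Fraction(5, 2) * 5 ** (n // 3) - Fraction(6, 4)
print(all(s[n] == closed(n) for n in range(40)))   # True
```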
If you try to use this on more involved recurrences, you quickly encounter the prime disadvantage of this method: it can be hard to see the pattern, or condense it to a nice closed form.
### Asymptotics
It is possible to use this method for asymptotics, too. Be aware, though, that you have to guess the constants for the Landau symbols as there has to be one constant that establishes the bound for all $n$, i.e. the constant factor can not change during the induction.
Consider, for example, the Mergesort runtime recurrence, simplified for the case of $n=2^k$¹:
\qquad \begin{align} T(1) &= T(0) = 0 \\ T(n) &= 2T(n/2) + n-1 \qquad n \geq 2 \end{align}
We guess that $T(n) \in O(n\log n)$ with constant $c=1$, that is $T(n) \leq n\log n$. We prove this by induction over $k$; the inductive step looks like this:
\qquad \begin{align} T(n) &= 2T(n/2) + n-1 \\ &\leq 2\frac{n}{2}\log \frac{n}{2} + n - 1 \\ &= n\log n - n\log 2 + n - 1 \\ &\lt n \log n \end{align}
1. For non-decreasing sequences of naturals, every infinite subsequence has the same asymptotic growth as the original sequence.
# The Akra-Bazzi method
The Akra-Bazzi method gives asymptotics for recurrences of the form: $$T(x) = \sum_{1 \le i \le k} a_i T(b_i x + h_i(x)) + g(x) \quad \text{for } x \ge x_0$$ This covers the usual divide-and-conquer recurrences, but also cases in which the division is unequal. The "fudge terms" $h_i(x)$ can cater for divisions that don't come out exact, for example. The conditions for applicability are:
• There are enough base cases to get the recurrence going
• The $a_i$ and $b_i$ are all constants
• For all $i$, $a_i > 0$
• For all $i$, $0 < b_i < 1$
• $\lvert g(x) \rvert = O(x^c)$ for some constant $c$ as $x \rightarrow \infty$
• For all $i$, $\lvert h_i(x) \rvert = O(x / (\log x)^2)$
• $x_0$ is a constant
Note that $\lfloor b_i x \rfloor = b_i x - \{b_i x\}$, and as the sawtooth function $\{ u \} = u - \lfloor u \rfloor$ is always between 0 and 1, replacing $\lfloor b_i x \rfloor$ (or $\lceil b_i x \rceil$ as appropriate) satisfies the conditions on the $h_i$.
Find $p$ such that: $$\sum_{1 \le i \le k} a_i b_i^p = 1$$ Then the asymptotic behaviour of $T(x)$ as $x \rightarrow \infty$ is given by: $$T(x) = \Theta \left( x^p \left( 1 + \int_{x_1}^x \frac{g(u)}{u^{p + 1}} du \right) \right)$$ with $x_1$ "large enough", i.e. there is $k_1>0$ so that $$g(x/2) \geq k_1g(x) \tag{2}$$ for all $x>x_1$.
## Example A
As an example, take the recursion for $n \ge 5$, where $T(0) = T(1) = T(2) = T(3) = T(4) = 17$: $$T(n) = 9 T(\lfloor n / 5 \rfloor) + T(\lceil 4 n / 5 \rceil) + 3 n \log n$$ The conditions are satisfied, we need $p$: $$9 \left( \frac{1}{5} \right)^p + \left( \frac{4}{5} \right)^p = 1$$ As luck would have it, $p = 2$. Thus we have: $$T(n) = \Theta \left( n^2 \left(1 + \int_3^n \frac{3 u \log u}{u^3} du \right) \right) = \Theta(n^2)$$
since with $k_1 \leq \frac{1}{2}\left(1 - \frac{\log 2}{\log 3}\right)$ we fulfill $(2)$ for all $x\geq 3$. Note that because the integral converges even if we use other constants, such as $1$, as lower bound, it is legal to use those as well; the difference vanishes in $\Theta$.
## Example B
Another example is the following for $n \ge 2$: $$T(n) = 4 T(n / 2) + n^2 / \lg n$$ We have $g(n) = n^2 / \ln n = O(n^2)$, check. We have that there is a single $a_1 = 4$, $b_1 = 1 / 2$, which checks out. Assuming that the $n / 2$ is really $\lfloor n / 2 \rfloor$ and/or $\lceil n / 2 \rceil$, the implied $h_i(n)$ also check out. So we need: $$a_1 b_1^p = 4 \cdot (1 / 2)^p = 1$$ Thus $p = 2$, and: $$T(n) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{u^2 du}{u^3 \ln u} \right) \right) = \Theta\left(n^2 \left( 1 + \int_2^n \frac{du}{u \ln u} \right) \right) = \Theta(n^2 \ln \ln n)$$ We apply a similar trick as above to the lower bound of the integral, only that we use $2$ because the integral does not converge for $1$.
(The help of maxima with the algebra is gratefully acknowledged)
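The following Python sketch (not part of the original answer) cross-checks the exponent $p$ for both examples by simple bisection, using the fact that $\sum_i a_i b_i^p$ is strictly decreasing in $p$ when $0 < b_i < 1$.

```python
# Solve sum_i a_i * b_i**p = 1 for p by bisection; both examples should give p = 2.
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, iters=200):
    # terms = [(a_i, b_i)]; f(p) = sum a_i * b_i**p - 1 is strictly decreasing in p
    f = lambda p: sum(a * b ** p for a, b in terms) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(akra_bazzi_p([(9, 1 / 5), (1, 4 / 5)]))   # Example A: ~2.0
print(akra_bazzi_p([(4, 1 / 2)]))               # Example B: ~2.0
```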
• I checked the original paper. They have a technical restriction on the lower bound of the integral; your version (citing the survey by Mehlhorn?) explicitly requires that the integral converges. Since I think the original condition is easier to check, I changed the statement and the examples accordingly, please check. – Raphael Apr 3 '13 at 11:00
• Furthermore, the original paper does not give the version with the $h_i$; is this taken from Leighton's manuscript? Do you have a peer-reviewed reference for that? Should we move to the version given in the 1998 paper by Akra & Bazzi? – Raphael Apr 3 '13 at 11:02
• I have stumbled across what seems to be an inconsistency in the theorem. Maybe you know the answer? – Raphael Nov 11 '13 at 15:22
# Summations
Often one encounters a recurrence of the form $$T(n) = T(n-1) + f(n),$$ where $f(n)$ is monotone. In this case, we can expand $$T(n) = T(c) + \sum_{m=c+1}^n f(m),$$ and so given a starting value $T(c)$, in order to estimate $T(n)$ we need to estimate the sum $f(c+1) + \cdots + f(m)$.
# Non-decreasing $f(n)$
When $f(n)$ is monotone non-decreasing, we have the obvious bounds $$f(n) \leq \sum_{m=c+1}^n f(m) \leq (n-c) f(n).$$ These bounds are best-possible in the sense that they are tight for some functions: the upper bound for constant functions, and the lower bound for step functions ($f(m) = 1$ for $m \geq n$ and $f(m) = 0$ for $m < n$). However, in many cases these estimates are not very helpful. For example, when $f(m) = m$, the lower bound is $n$ and the upper bound is $(n-c)n$, so they are quite far apart.
## Integration
A better estimate is given by integration: $$\int_c^n f(x) dx \leq \sum_{m=c+1}^n f(m) \leq \int_{c+1}^{n+1} f(x) dx.$$ For $f(m) = m$, this gives the correct value of the sum up to lower order terms: $$\frac{1}{2} n^2 - \frac{1}{2} c^2 \leq \sum_{m=c+1}^n m \leq \frac{1}{2} (n+1)^2 - \frac{1}{2} (c+1)^2.$$ When $f(m) = m$ we can calculate the sum explicitly, but in many cases explicit computation is hard. For example, when $f(m) = m\log m$ the antiderivative of $f$ is $(1/2) x^2\log x - (1/4) x^2$, and so $$\sum_{m=c+1}^n m\log m = \frac{1}{2} n^2 \log n \pm \Theta(n^2).$$
The Euler–Maclaurin formula gives better estimates. This formula can be used, for example, to prove strong forms of Stirling's formula, by estimating the sum $\log n! = \sum_{m=1}^n \log m$.
# Non-increasing $f(n)$
In some cases, $f(n)$ is monotone non-increasing. The trivial estimates become $$f(1) \leq \sum_{m=c+1}^n f(m) \leq (n-c) f(1),$$ and the integral estimates $$\int_{c+1}^{n+1} f(x) dx \leq \sum_{m=c+1}^n f(m) \leq \int_c^n f(x) dx.$$ As an example, for $f(m) = 1/m$, using $\int f(m) = \log m$ we obtain $$\sum_{m=c+1}^n \frac{1}{m} = \log n \pm \Theta(1).$$
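A small numerical illustration (not part of the original answer) of the integral estimates for the two examples used above, with the arbitrary choice $c = 1$:

```python
import math

n, c = 10_000, 1

# Non-decreasing f(m) = m log m, antiderivative F(x) = (1/2) x^2 log x - (1/4) x^2:
F = lambda x: 0.5 * x * x * math.log(x) - 0.25 * x * x
s = sum(m * math.log(m) for m in range(c + 1, n + 1))
print(F(n) - F(c) <= s <= F(n + 1) - F(c + 1))          # True
print(s / (0.5 * n * n * math.log(n)))                   # close to 1, as claimed above

# Non-increasing f(m) = 1/m: the integral bounds flip direction.
h = sum(1.0 / m for m in range(c + 1, n + 1))
print(math.log(n + 1) - math.log(c + 1) <= h <= math.log(n) - math.log(c))   # True
print(h - math.log(n))                                   # stays O(1), about gamma - 1
```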
• This answer deals less with solving recurrences but rather with estimating sums (which may be useful solving recurrences); the technique is the dual of Riemann sums. It should also work with other forms such as $T(n-d)$ for constant $d$? – Raphael Apr 25 '14 at 8:54
• Right, $T(n) = cT(n-d) + f(n)$ can also be solved this way. – Yuval Filmus Apr 25 '14 at 12:47
Sedgewick and Flajolet have done extensive work in analytic combinatorics, which allows recurrences to be solved asymptotically using a combination of generating functions and complex analysis. Their work allows many recurrences to be solved automatically, and has been implemented in some computer algebra systems.
This textbook on the subject was written by Flajolet and Sedgewick and is an excellent reference. A somewhat simpler exposition, geared towards applications to algorithm analysis, is this text by Sedgewick and Flajolet.
Hope this helps!
• This is a nice reference, but we want to collect methods in an accessible way. Can you present one particular method in detail? – Raphael Jul 17 '12 at 18:39
There may be times when you come across a strange recurrence like this: $$T(n) = \begin{cases} c & n < 7\\ 2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + cn & n\geq 7 \end{cases}$$ If you're like me, you'll realize you can't use the Master Theorem and then you may think, "hmmm... maybe a recurrence tree analysis could work." Then you'd realize that the tree starts to get gross really fast. After some searching on the internet you see the Akra-Bazzi method will work! Then you actually start to look into it and realize you don't really want to do all the math. If you've been like me up till this point, you'll be excited to know there's an easier way.
# The Uneven Split Theorem Part 1
Let $$c$$ and $$k$$ be positive constants.
Then let $$\{a_1, a_2, \ldots, a_k\}$$ be positive constants such that $$\sum_1^k a_i < 1$$.
We also must have a recurrence of the form (like our example above):
\begin{align} T(n) & \leq c & 0 < n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}\\ T(n) & \leq cn + T(a_1 n) + T(a_2 n) + \dots + T(a_k n) & n \geq \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \end{align}
## Claim
Then I claim $$T(n) \leq bn$$ where $$b$$ is a constant (i.e. $T$ is asymptotically at most linear) and:
$$b = \frac{c}{1 - \left(\sum_1^k a_i\right)}$$
## Proof by Induction
Basis: $$n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \implies T(n) \leq c < b \leq bn$$
Induction: Assume true for any $$n' < n$$, we then have
\begin{align} T(n) & \leq cn + T(\lfloor a_1 n \rfloor) + T(\lfloor a_2 n \rfloor) + \dots + T(\lfloor a_k n \rfloor)\\ & \leq cn + b \lfloor a_1 n \rfloor + b \lfloor a_2 n \rfloor + \dots + b \lfloor a_k n \rfloor\\ & \leq cn + b a_1 n + b a_2 n + \dots + b a_k n\\ & = cn + bn \sum_1^k a_i\\[0.5em] & = \frac{cn - cn \sum_1^k a_i }{1 - \left(\sum_1^k a_i\right)} + \frac{cn \sum_1^k a_i}{1 - \left(\sum_1^k a_i\right)}\\[0.5em] & = \frac{cn}{1 - \left(\sum_1^k a_i\right)}\\ & = bn & \square \end{align}
Then we have $$T(n) \leq bn \implies T(n) = O(n)$$.
## Example
$$T(n) = \begin{cases} c & n < 7\\ 2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + cn & n\geq 7 \end{cases}$$ We first verify the coefficients inside the recursive calls sum to less than one: \begin{align} 1 & > \sum_1^k a_i \\ & = \frac{1}{5} + \frac{1}{5} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7}\\[0.5em] & = \frac{2}{5} + \frac{4}{7}\\[0.5em] & = \frac{34}{35} \end{align}
We next verify that the base case is less than the max of the inverses of the coefficients: \begin{align} n & < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}\\ & = \max\{5, 5, 7, 7, 7, 7\}\\ & = 7 \end{align}
With these conditions met, we know $$T(n) \leq bn$$ where $$b$$ is a constant equal to: \begin{align} b &= \frac{c}{1 - \left(\sum_1^k a_i\right)}\\[0.5em] &= \frac{c}{1 - \frac{34}{35}}\\[0.5em] &= 35c \end{align} Therefore we have: \begin{align} T(n) & \leq 35cn\\ \land\; T(n) & \geq cn\\ \therefore T(n) & = \Theta(n) \end{align}
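Here is a quick Python sanity check (not part of the original post) of Part 1 on the example recurrence, evaluated with floors and an assumed constant $c = 1$:

```python
# T(n) = 2 T(n//5) + 4 T(n//7) + c*n for n >= 7, T(n) = c otherwise; check c*n <= T(n) <= 35*c*n.
from functools import lru_cache

c = 1

@lru_cache(maxsize=None)
def T(n):
    if n < 7:
        return c
    return 2 * T(n // 5) + 4 * T(n // 7) + c * n

for n in (10, 10**3, 10**6, 10**9):
    print(n, T(n), c * n <= T(n) <= 35 * c * n)   # True for each n tried
```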
# The Uneven Split Theorem Part 2
Similarly we can prove a bound for when $$\sum_1^k a_i = 1$$. The proof will follow much of the same format:
Let $$c$$ and $$k$$ be positive constants such that $$k > 1$$.
Then let $$\{a_1, a_2, \ldots, a_k\}$$ be positive constants such that $$\sum_1^k a_i = 1$$.
We also must have a recurrence of the form (like our example above):
\begin{align} T(n) & \leq c & 0 < n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}\\ T(n) & \leq cn + T(a_1 n) + T(a_2 n) + \dots + T(a_k n) & n \geq \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \end{align}
## Claim
Then I claim $$T(n) \leq \alpha n \log_k n + \beta n$$ (we choose $$\log$$ base $$k$$ because $$k$$ will be the branching factor of the recursion tree) where $$\alpha$$ and $$\beta$$ are constants (i.e. $T$ is asymptotically at most linearithmic) such that:
$$\beta = c$$ and $$\alpha = \frac{c}{\sum_1^k a_i \log_k a_i^{-1}}$$
## Proof by Induction
Basis: $$n < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\} \implies T(n) \leq c = \beta \leq \alpha n \log_k n + \beta n$$
Induction: Assume true for any $$n' < n$$, we then have
\begin{align} T(n) & \leq cn + T(\lfloor a_1 n \rfloor) + T(\lfloor a_2 n \rfloor) + \dots + T(\lfloor a_k n \rfloor)\\ & \leq cn + \sum_1^k (\alpha a_i n \log_k a_i n + \beta a_i n)\\ & = cn + \alpha n\sum_1^k (a_i \log_k a_i n) + \beta n\sum_1^k a_i\\ & = cn + \alpha n\sum_1^k \left(a_i \log_k \frac{n}{a_i^{-1}}\right) + \beta n\\ & = cn + \alpha n\sum_1^k (a_i (\log_k n - \log_k a_i^{-1})) + \beta n\\ & = cn + \alpha n\sum_1^k a_i \log_k n - \alpha n\sum_1^k a_i \log_k a_i^{-1} + \beta n\\ & = \alpha n\sum_1^k a_i \log_k n + \beta n\\ & = \alpha n \log_k n + \beta n & \square \end{align}
Then we have $$T(n) \leq \alpha n \log_k n + \beta n \implies T(n) = O(n \log n)$$.
## Example
Let's modify that previous example we used just a tiny bit: $$T(n) = \begin{cases} c & n < 35\\ 2T\left(\frac{n}{5}\right) + 4T\left(\frac{n}{7}\right) + T\left(\frac{n}{35}\right)+ cn & n \geq 35 \end{cases}$$
We first verify the coefficients inside the recursive calls sum to one: \begin{align} 1 & = \sum_1^k a_i \\ & = \frac{1}{5} + \frac{1}{5} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{7} + \frac{1}{35}\\[0.5em] & = \frac{2}{5} + \frac{4}{7} + \frac{1}{35}\\[0.5em] & = \frac{35}{35} \end{align}
We next verify that the base case is less than the max of the inverses of the coefficients: \begin{align} n & < \max\{a_1^{-1}, a_2^{-1}, \ldots, a_k^{-1}\}\\ & = \max\{5, 5, 7, 7, 7, 7, 35\}\\ & = 35 \end{align}
With these conditions met, we know $$T(n)\leq \alpha n \log n + \beta n$$ where $$\beta = c$$ and $$\alpha$$ is a constant equal to: \begin{align} \alpha &= \frac{c}{\sum_1^k a_i \log_k a_i^{-1}}\\[0.5em] &= \frac{c}{\frac{2 \log_7 5}{5} + \frac{4 \log_7 7}{7} + \frac{\log_7 35}{35}}\\[0.5em] &\approx 1.048c \end{align} Therefore we have: \begin{align} T(n) & \leq 1.048cn\log_7 n + cn\\ \therefore T(n) & = O(n \log n) \end{align}
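And an analogous Python check (not part of the original post) for Part 2's example, again with an assumed constant $c = 1$:

```python
# T(n) = 2 T(n//5) + 4 T(n//7) + T(n//35) + c*n for n >= 35; check T(n) <= 1.048*c*n*log_7(n) + c*n.
import math
from functools import lru_cache

c = 1

@lru_cache(maxsize=None)
def T(n):
    if n < 35:
        return c
    return 2 * T(n // 5) + 4 * T(n // 7) + T(n // 35) + c * n

for n in (100, 10**4, 10**7, 10**9):
    bound = 1.048 * c * n * math.log(n, 7) + c * n
    print(n, T(n), T(n) <= bound)   # True for each n tried
```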
After checking this post again, I'm surprised this isn't on here yet.
# Domain Transformation / Change of Variables
When dealing with recurrences it's sometimes useful to be able to change your domain if it's unclear how deep the recursion stack will go.
For instance, take the following recurrence:
$$T(n) = T(2^{2^{\sqrt{\log \log n}}}) + \log \log \log n$$
How could we ever solve this? We could expand out the series, but I promise this will get gross really fast. Instead, let's consider how our input changes with each call.
We first have:
1. $$n$$, then
2. $$2^{2^{\sqrt{\log \log n}}}$$, then
3. $$2^{2^{\sqrt{\log \log (2^{2^{\sqrt{\log \log n}}})}}}$$, and so on.
The goal of a domain transformation will now be to change our recurrence into an equivalent $$S(k)$$ such that instead of the above transitions, we simply have $$k, k-1, k-2, \ldots$$.
For example, if we let $$n = 2^{2^{2^{2^k}}}$$, then this is what we get for our above recurrence: \begin{align*} T(2^{2^{2^{2^k}}}) & = T(2^{2^{\sqrt{\log \log 2^{2^{2^{2^{k}}}}}}}) + \log \log \log (2^{2^{2^{2^{k}}}})\\ & = T(2^{2^{2^{2^{k-1}}}}) + 2^k \end{align*} Then we can simply rewrite it as: $$T(k) = T(k-1) + 2^k = \sum_{i = 0}^{k} 2^i = 2^{k+1} - 1$$ Then all you have to do is convert $$k$$ back to $$n$$ to get: $$T(n) = 2^{(\log \log \log \log n) + 1} - 1 = O(\log \log \log n)$$
With this example, we can now see our goal.
Assume $$T(n) = \begin{cases} h(1) & n = 1\\ a\cdot T(f(n)) + h(n) & \mathrm{otherwise} \end{cases}$$ For some constant $$a$$ and functions $$f(n)$$ and $$h(n)$$.
We are now trying to find some function $$g(k) = n$$ and $$f(g(k)) = g(k-1)$$ such that \begin{align*} T(g(k)) &= aT(f(g(k))) + h(g(k))\\ & = a\cdot T(g(k-1)) + h(g(k)) \end{align*}
More generally, we want $$f^{(i)}(n) = g(k - i)$$ where $$f^{(i)}(n)$$ is the repeated application of $$f$$ on $$n$$, $$i$$ times. (e.g. $$f^{(2)}(n) = f(f(n))$$). This will allow $$g(k)$$ to act as the "iterating" function. Where, at depth $$i$$ of recursion, the work done is simply $$h(g(k-i))$$.
Then we can easily convert this to $$S(k) = T(g(k))$$ so that $$S(k) = a\cdot S(k-1) + h(g(k))$$ Then we only have to worry about summing up $$h(g(k))$$ for all $$k$$ up to a given base case. That is, $$S(k) = \sum_{i = g^{-1}(1)}^{k} a^{k - i} h(g(i))$$
If we can determine $$S(k) = \gamma(k)$$ for some closed form $$\gamma$$ function, then we can determine $$T(n)$$ as $$T(n) = \gamma( g^{-1}(n))$$
Then we use this to get a bound on $$T(n)$$ via one of the other methods above. You could obviously modify this method a little bit to your specification, but in general you're trying to find an iterating function $$g(k)$$ to turn $$T(n)$$ into a simple recursion.
I don't know of an exact way to determine $$g(k)$$ at this point, but I will keep thinking about it and update if it becomes clearer (or if any commenter has some tips!). I have mostly found my $$g(k)$$ functions through trial and error in the past (see here, here, here, and here for examples).
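To make the first example concrete without manipulating astronomically large $n$, here is a rough Python check (not part of the original post) that works with $x = \lg\lg n$ instead of $n$: the map $n \mapsto 2^{2^{\sqrt{\lg \lg n}}}$ becomes $x \mapsto \sqrt{x}$, the work added per call is $\lg\lg\lg n = \lg x$, and the stopping point $x \le 2$ is an assumption.

```python
# Track x = lg lg n; the recurrence adds lg(x) work per level and replaces x by sqrt(x).
# The claim T(n) = O(lg lg lg n) says the total is O(lg x0); the ratio should approach 2.
import math

def total_work(x0):
    total, x = 0.0, x0
    while x > 2:                  # assumed stopping point
        total += math.log2(x)     # cost lg lg lg n at this level
        x = math.sqrt(x)          # argument of the next recursive call
    return total

for K in (10, 20, 40, 80):
    x0 = 2.0 ** K                 # corresponds to lg lg lg n = K
    print(K, total_work(x0), total_work(x0) / K)   # ratio approaches 2
```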
• Are there any restrictions on $f$, $g$, and/or $h$? I'm asking specifically because similar folklore substitution tricks sometimes fail when Landau notation is involved, which makes me concerned if $\gamma \circ g^{-1}$ it truly always the correct answer. – Raphael Mar 20 '19 at 16:40
• @Raphael, this is the part I am not entirely sure about. There are a few things I think we need to ensure to establish equivalence. 1) Depth of recursion is the same, this can be ensured by $f(g(k)) = g(k-1)$ and $g(k) = n$. 2) work done at each level of recursion is the same, which I believe is enforced by $g(k) = n$ and then $h(g(k)) = h(n)$. The basic idea of this is to simply turn $T(n)$ into a sum, namely $\sum_{i = c}^k h(g(i))$. The conversion from $\gamma(k)$ to $\gamma (g^{-1} (n))$ I am also not 100% sure of (I don't have a proof), but I can't see why it would be incorrect. Thoughts? – ryan Mar 20 '19 at 17:19
• @Raphael you could also consider the case, where $S(k) = \gamma(k)$ instead of $\Theta$, then converting to $T(n) = \gamma(g^{-1}(n))$ should be more straight forward. Easy to prove I think if you just show equivalence in the summation. You probably would run into some funny trouble with Landau notation here, but if you left Landau out of it and only stuck with precise equality, I think that should be fine. – ryan Mar 20 '19 at 17:23
• @Raphael I edited it to only use equality, so landau notation should not mess this up. Also generalized a bit more. Which you could even generalize a bit more to use a function $\beta(n)$ instead of the constant $a$. Then instead of $a^{k-i}$ in the sum, just have an application of $\beta(g(i))$. – ryan Mar 22 '19 at 17:48
There's one more approach that works for simple recurrence relations: ask Wolfram Alpha to solve the recurrence for you.
For instance, try typing f(0)=0, f(1)=1, f(n)=f(n-1)+f(n-2) into Wolfram Alpha. You'll get a solution, with a link to the Fibonacci numbers. Or try f(1)=1, f(n)=f(n-1)+n or f(1)=1, f(n)=2*f(n-1)+3*n or f(n)=f(n-1) + 2 f(n-2), f(1)=1, f(2)=3 for other examples. However, be warned: Wolfram Alpha can solve some very simple recurrences, but it falls apart for more complex ones.
This approach avoids the need for any thinking, which can be viewed as either a bug or a feature.
• I do think that the purpose of this site would be to explain how computer algebra does things like this, not to advocate its blind use. But the tools are useful, so useful in fact that one should probably always try them before "wasting" time (in "practice"). – Raphael Jul 11 '13 at 10:02
• From my own experience, trying to use computer algebra without any sense of what is "hard" or "easy" does not get you very far. Especially in algorithm analysis, some massaging can be needed. I don't know how you do that without knowing how to solve recurrences yourself. (As for the purpose of this site, there are multiple point of views. Fact: so far, "this is useful for somebody" was not sufficient to justify a post.) – Raphael Jul 13 '13 at 9:42
Case 2 of the master theorem, as usually stated, handles only recurrences of the form $T(n) = aT(n/b) + f(n)$ in which $f(n) = \Theta(n^{\log_b a}\log^k n)$ for $k \geq 0$. The following theorem, taken from a handout of Jeffrey Leon, gives the answer for negative $k$:
Consider the recurrence $T(n) = a T(n/b) + f(n)$ with an appropriate base case.
1. If $f(n) = O(n^{\log_b a} \log^{c-1} n)$ for $c < 0$ then $T(n) = \Theta(n^{\log_b a})$.
2. If $f(n) = \Theta(n^{\log_b a} \log^{c-1} n)$ for $c = 0$ then $T(n) = \Theta(n^{\log_b a} \log\log n)$.
3. If $f(n) = \Theta(n^{\log_b a} \log^{c-1} n)$ for $c > 0$ then $T(n) = \Theta(n^{\log_b a} \log^c n)$.
The proof uses the method of repeated substitution, as we now sketch. Suppose that $f(n) = n^{\log_b a} \log_b^{c-1} n$ and $T(1) = 0$. Then for $n$ a power of $b$, $$T(n) = \sum_{i=0}^{\log_b n-1} a^i (nb^{-i})^{\log_b a} \log_b^{c-1} (nb^{-i}) = \\ \sum_{i=0}^{\log_b n-1} n^{\log_b a} (\log_b n - i)^{c-1} = n^{\log_b a} \sum_{j=1}^{\log_b n} j^{c-1}.$$ Now let us consider the cases one by one. When $c < 0$, the series $\sum_{j=1}^\infty j^{c-1}$ converges, and so $T(n) = \Theta(n^{\log_b a})$. When $c = 0$, the sum is the harmonic sum $H_{\log_b n} = \log(\log_b n) + O(1)$, and so $T(n) = \Theta(n^{\log_b a} \log \log n)$. When $c > 0$, we can approximate the sum using an integral: $$\sum_{j=1}^{\log_b n} j^{c-1} \approx \int_0^{\log_b n} x^{c-1} \, dx = \left. \frac{x^c}{c} \right|_0^{\log_b n} = \frac{\log_b^c n}{c},$$ and so $T(n) = \Theta(n^{\log_b a} \log^c n)$. | 2020-03-28T22:01:19 | {
"domain": "stackexchange.com",
"url": "https://cs.stackexchange.com/questions/2789/solving-or-approximating-recurrence-relations-for-sequences-of-numbers/24082",
"openwebmath_score": 0.9995046854019165,
"openwebmath_perplexity": 568.4012933743602,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854103128328,
"lm_q2_score": 0.865224072151174,
"lm_q1q2_score": 0.8390816818236664
} |
http://math.stackexchange.com/questions/369958/i-need-to-find-the-value-of-a-b-in-mathbb-r-such-that-the-given-limit-is-tru | # I need to find the value of $a,b \in \mathbb R$ such that the given limit is true
I am given that $\lim_{x \to \infty} \sqrt[3]{8x^3+ax^2}-bx=1$ and need to find the value of $a,b \in \mathbb R$ such that the given limit is true. I was able to work the whole thing out, but I have a question about one step in my work. There is a lot of rough work because I simplify by using the rule of the difference of cubes, so here is a condensed part of my work: \begin{align} \lim_{x \to \infty} \sqrt[3]{8x^3+ax^2}-bx &=\lim_{x \to \infty} \frac{8x^3+ax^2-b^3x^3}{(\sqrt[3]{8x^3+ax^2})^2+bx\sqrt[3]{8x^3+ax^2}+b^2x^2} \\&= \lim_{x \to \infty} \frac{8+a\frac{1}{x}-b^3}{\frac{1}{x^3}(\sqrt[3]{8x^3+ax^2})^2+b\frac{1}{x^2}\sqrt[3]{8x^3+ax^2}+b^2\frac{1}{x}} \\&=\frac{\lim_{x \to \infty}8+\lim_{x \to \infty}a\frac{1}{x}-\lim_{x \to \infty}b^3}{\lim_{x \to \infty}\frac{1}{x^3}(\sqrt[3]{8x^3+ax^2})^2+\lim_{x \to \infty}b\frac{1}{x^2}\sqrt[3]{8x^3+ax^2}+\lim_{x \to \infty}b^2\frac{1}{x}} \\&= \frac{8-b^3}{0+0+0} \\&= \frac{8-b^3}{0}\end{align} Thus $8-b^3$ must also equal $0$ which implies that $b=2$. (This is the part I am unsure about. Is what I said true? If $b=2$ then this would give me an indeterminate form, but other than that I'm not sure if what I said holds, and if it does hold why does it hold?) Regardless of my uncertainty, I went on and using this assumption I found that $a=12$ in a similar manner, and when I check $\lim_{x \to \infty} \sqrt[3]{8x^3+12x^2}-2x$ it does equal $1$.
Any help as to why/why not my assumption is correct? Thanks in advance!
(If anyone wants me to post the method as to how i got 12 for $a$, let me know and then I'll type it up).
-
It looks good...until you write division by zero. I think I understand what you mean and I think you're right, yet try to avoid explicitly writing that: we mathematicians usually begin to pant and some of us get the rabbies when we see that thing... – DonAntonio Apr 23 '13 at 3:08
@DonAntonio lol. That is where my confusion came about, I don't really know how to avoid it. – user66807 Apr 23 '13 at 3:12
Stop your stuff two lines before you did, and argue that since the first limit exists and the denominator of the expression you reached in the last step is zero, then in the last step it must be that also the denominator has limit zero... – DonAntonio Apr 23 '13 at 3:14
@DonAntonio you mean "also the numerator has limit zero," right? – user66807 Apr 23 '13 at 3:18
Yes, of course. Thanks. – DonAntonio Apr 23 '13 at 3:47
Your assumption is correct; if $b\neq 2$, then $8-b^3\neq 0$, and hence the limit would either not exist or be infinite. But you know the limit is $1$.
A shorter way to do the first part is: $\sqrt[3]{8x^3+ax^2}-bx=x(\sqrt[3]{8+\frac{a}{x}}-b)$. The cube root approaches 2 as $x\rightarrow \infty$, so if $b\neq 2$, the product approaches $\pm\infty$ (not $1$ as in the hypothesis).
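A quick numerical check (not part of the answer) that $a = 12$, $b = 2$ indeed give the limit $1$:

```python
# Evaluate cbrt(8x^3 + 12x^2) - 2x for increasingly large x; the values approach 1.
for x in (1e2, 1e4, 1e6):
    print(x, (8 * x**3 + 12 * x**2) ** (1 / 3) - 2 * x)
```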
Wow...your method really is so much shorter and simpler. I'm amazed. But I have a question now: If $8-b^3\neq 0$ then how can we be sure that the limit would either not exist or be infinite. I think I'm missing the concept. – user66807 Apr 23 '13 at 3:08
$8-b^3$ is a nonzero constant in this case, while the denominator approaches 0. This cannot approach 1. If the denominator is always of the same sign, then the fraction will always be of the same sign, and hence approach either $+\infty$ or $-\infty$. However if the denominator changes signs, the limit doesn't exist. – vadim123 Apr 23 '13 at 3:14
Okay, now I have another question. In your method the cube root approaches 2 as $x \to \infty$ but if $b=2$ then the "stuff" in the parenthesis will tend to $0$ and then we will have $\infty * 0$ won't we? – user66807 Apr 23 '13 at 3:22
Correct. The $\infty \cdot 0$ form is indeterminate, so it might equal 1. Any other value for $b$ cannot give a limit of 1. Hence $b$ must be $2$. – vadim123 Apr 23 '13 at 3:23 | 2016-05-29T04:30:31 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/369958/i-need-to-find-the-value-of-a-b-in-mathbb-r-such-that-the-given-limit-is-tru",
"openwebmath_score": 0.9884944558143616,
"openwebmath_perplexity": 188.6267199063797,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787830929849,
"lm_q2_score": 0.8499711737573762,
"lm_q1q2_score": 0.8390735089739226
} |
https://math.stackexchange.com/questions/2703123/3-cards-are-dealt-from-a-well-shuffled-deck/2703197 | # 3 cards are dealt from a well shuffled deck.
1.Find the chance that none of the cards are hearts.
The answer is $\frac{39}{52}$ $\cdot$ $\frac{38}{51}$ $\cdot$ $\frac{37}{50}$
However, why can't we use complement rule here:
1- p(chance that all the cards are hearts)= 1- ($\frac{13}{52}$ $\cdot$ $\frac{12}{51}$ $\cdot$ $\frac{11}{50}$)
2.Find the chance that the cards are not all hearts.
The answer given is: 1- ($\frac{13}{52}$ $\cdot$ $\frac{12}{51}$ $\cdot$ $\frac{11}{50}$)
here I'm little confused as to why the complement rule was used.
for chance that the cards are not all hearts why can't we use:
P(1 heart) + P(2 heart) + P(No hearts)= ($\frac{13}{52}$ $\cdot$ $\frac{39}{51}$ $\cdot$ $\frac{38}{50}$) + ($\frac{13}{52}$ $\cdot$ $\frac{12}{51}$ $\cdot$ $\frac{39}{50}$) + ($\frac{39}{52}$ $\cdot$ $\frac{38}{51}$ $\cdot$ $\frac{37}{50}$)
• The complement of "none of the cards are hearts" is NOT "all of the cards are hearts." – lulu Mar 22 '18 at 9:50
Find the probability that if three cards are drawn from a well-shuffled deck that none of the cards are hearts.
You asked why we could not use the complement rule here. We can. However, the complement of the set of outcomes in which none of the cards are hearts is not the set of outcomes in which all the cards are hearts. For instance, the hand $5\color{red}{\heartsuit}7\color{red}{\diamondsuit}J\clubsuit$ contains a heart, but not all of the cards in the hand are hearts. The complement of the set of outcomes in which none of the outcomes are hearts is the set of outcomes in which at least one of the cards is a heart.
Since the order of selection does not matter, this problem is best handled with combinations. The number of ways of selecting a subset of $k$ elements from a set with $n$ elements is $$\binom{n}{k} = \frac{n!}{k!(n - k)!}$$
Since $13$ of the $52$ cards in the deck are hearts, $52 - 13 = 39$ are not. Hence, a favorable hand consists of drawing $3$ of the $39$ cards that are not hearts when selecting $3$ of the $52$ cards in the deck. Hence, the desired probability is $$\Pr(\text{none of the cards is a heart}) = \frac{\dbinom{39}{3}}{\dbinom{52}{3}} = \frac{\dfrac{39!}{3!36!}}{\dfrac{52!}{3!49!}} = \frac{39!}{3!36!} \cdot \frac{3!49!}{52!} = \frac{39 \cdot 38 \cdot 37}{52 \cdot 51 \cdot 50}$$ We can also compute the probability by subtracting the probability of the complementary event from $1$. As stated above, the complementary event is the event that at least one of the selected cards is a heart. The number of ways of selecting $k$ of the $13$ hearts and $3 - k$ of the other $39$ cards in the deck is $$\binom{13}{k}\binom{39}{3 - k}$$ Hence, the probability that at least one of the three selected cards is a heart is $$\Pr(\text{at least one the cards is a heart}) = \frac{\dbinom{13}{1}\dbinom{39}{2} + \dbinom{13}{2}\dbinom{39}{1} + \dbinom{13}{3}\dbinom{39}{0}}{\dbinom{52}{3}}$$
Therefore, the probability that none of the selected cards is a heart is $$\Pr(\text{none of the cards is a heart}) = 1 - \frac{\dbinom{13}{1}\dbinom{39}{2} + \dbinom{13}{2}\dbinom{39}{1} + \dbinom{13}{3}\dbinom{39}{0}}{\dbinom{52}{3}}$$
Find the probability that the cards are not all hearts.
This probability is found by subtracting the probability that all of the cards are hearts from $1$, which is $$\Pr(\text{not all of the cards are hearts}) = 1 - \frac{\dbinom{13}{3}}{\dbinom{52}{3}}$$
The complementary event is the probability that at least one of the cards is a heart, which we computed above.
You asked why this probability is not $$\frac{13}{52} \cdot \frac{39}{51} \cdot \frac{38}{50} + \frac{13}{52} \cdot \frac{12}{51} \cdot \frac{39}{50} + \frac{13}{52} \cdot \frac{12}{51} \cdot \frac{11}{50}$$ You correctly calculated the probability of selecting three hearts in three draws. However, when you calculated the probabilities of selecting exactly one heart in three draws or exactly two hearts in three draws, you did not account for the order in which the cards are drawn.
For instance, when you calculated the probability of selecting one heart in three draws, you calculated the probability of first selecting a heart and then selecting two non-hearts. However, the heart also could be drawn on the second or third draw, so the probability of selecting exactly one heart in three draws is actually $$\frac{\dbinom{13}{1}\dbinom{39}{2}}{\dbinom{52}{3}} = \frac{13}{52} \cdot \frac{39}{51} \cdot \frac{38}{50} + \frac{39}{52} \cdot \frac{13}{51} \cdot \frac{38}{50} + \frac{39}{52} \cdot \frac{38}{51} \cdot \frac{13}{50}$$ Similarly, the probability of selecting exactly two hearts in three draws is $$\frac{\dbinom{13}{2}\dbinom{39}{1}}{\dbinom{52}{3}} = \frac{13}{52} \cdot \frac{12}{51} \cdot \frac{39}{50} + \frac{13}{52} \cdot \frac{39}{51} \cdot \frac{12}{50} + \frac{39}{52} \cdot \frac{13}{51} \cdot \frac{12}{50}$$
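For completeness, a small Python check (not part of the original answer) that the direct count and the complement computation agree, using exact arithmetic:

```python
from fractions import Fraction
from math import comb

total = comb(52, 3)
p_no_hearts = Fraction(comb(39, 3), total)
p_at_least_one = Fraction(comb(13, 1) * comb(39, 2) + comb(13, 2) * comb(39, 1) + comb(13, 3), total)
print(p_no_hearts + p_at_least_one == 1)                                    # True
print(p_no_hearts == Fraction(39, 52) * Fraction(38, 51) * Fraction(37, 50))  # True
p_not_all_hearts = 1 - Fraction(comb(13, 3), total)
print(float(p_no_hearts), float(p_not_all_hearts))                          # ~0.4135 and ~0.9871
```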
• Thank you for the explanations, I think I'm getting it now. However I'm still little bit confused about the second part of question. you said " you did not account for the order in which the cards are drawn" why does order matter in this case? Your hand is still the same whether you get 2 cards first or last right? – jasminvvian Mar 23 '18 at 6:52
• The order in which the cards are drawn does not matter, which is why I used combinations. However, you considered sequences of outcomes. In particular, for the case of exactly one heart, there are three possible sequences: $HH^CH^C$, $H^CHH^C$, $H^CH^CH$. You calculated the probability of drawing a heart ($\frac{13}{52}$), then a non-heart ($\frac{39}{51}$), then another non-heart ($\frac{38}{50}$). Therefore, you have only accounted for one of the three sequences, which is why your answer is $1/3$ of the correct one. – N. F. Taussig Mar 23 '18 at 9:52
At first, I am solving only 1
As @lulu said, the complement of "none of the cards are hearts" is not "all of the cards are hearts", but "at least one card is a heart".
So, let us find the probability of at least one card being a heart.
Clearly it is $$\frac{13}{52}\cdot\frac{39}{51}\cdot\frac{38}{50}\cdot\binom{3}{1}+\frac{13}{52}\cdot\frac{12}{51}\cdot\frac{39}{50}\cdot\binom{3}{2}+\frac{13}{52}\cdot\frac{12}{51}\cdot\frac{11}{50}\cdot\binom{3}{3}$$ Subtract this from $1$ and you will get your answer.
Now think about 2 (tell me if you cannot work it out yourself).
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2703123/3-cards-are-dealt-from-a-well-shuffled-deck/2703197",
"openwebmath_score": 0.841546893119812,
"openwebmath_perplexity": 144.98015623387795,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787876194204,
"lm_q2_score": 0.84997116805678,
"lm_q1q2_score": 0.8390735071937546
} |
https://math.stackexchange.com/questions/1955892/polar-form-of-complex-number | # Polar form of complex number
Is the polar form of $-5j$ equal to
$5(\cos\frac{3}{2}\pi+j\sin\frac{3}{2}\pi)$
or
$5(\cos-\frac{\pi}{2}+j\sin-\frac{\pi}{2})$
I'm confused as to which way to go when calculating the argument.
• What's the difference between the angles $3\pi/2$ and $-\pi/2$? – Neal Oct 6 '16 at 1:33
• Both forms are correct. The second form uses the principal value of the argument. – dxiv Oct 6 '16 at 1:36
This is like asking "is $2$ equal to $4/2$ or $6/3$?" You have come across two equivalent ways of phrasing the same number. They're the same because $\cos(3\pi/2) = \cos(-\pi/2)$ and $\sin(3\pi/2) = \sin(-\pi/2)$.
Think of this as two sets of directions to the same address ($-5j$). The representation $-5j = 5(\cos(3\pi/2) + j\sin(3\pi/2))$ says "start at $1$ and walk counter-clockwise three quarters of the way around the unit circle. Then face away from the origin and walk five steps." The representation $-5j = 5(\cos(-\pi/2) + j\sin(-\pi/2))$ says "start at $1$ and walk clockwise one quarter of the way around the unit circle. Then face away from the origin and walk five steps."
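A quick check with Python's cmath module (not part of the original answer) showing that both angle choices describe the same number:

```python
import cmath, math

r, theta = cmath.polar(-5j)
print(r, theta)                                   # 5.0, -pi/2  (the principal argument)
for t in (3 * math.pi / 2, -math.pi / 2):
    print(5 * (math.cos(t) + 1j * math.sin(t)))   # both are (approximately) -5j
```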
$$\frac{3}{2}\pi - \left(- \frac{1}{2}\pi\right) = 2\pi$$
and the trigonometric functions have a period of $2 \pi$.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1955892/polar-form-of-complex-number",
"openwebmath_score": 0.7436389327049255,
"openwebmath_perplexity": 275.2980099902592,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787868650146,
"lm_q2_score": 0.84997116805678,
"lm_q1q2_score": 0.8390735065525315
} |
https://www.cs.utexas.edu/users/flame/laff/alaff/chapter10-QR1-simple-QR.html | ## Unit10.2.1A simple QR algorithm
We now morph the subspace iteration discussed in the last unit into a simple incarnation of an algorithm known as the QR algorithm. We will relate this algorithm to performing subspace iteration with an $m \times m$ (square) matrix so that the method finds all eigenvectors simultaneously (under mild conditions). Rather than starting with a random matrix $V \text{,}$ we now start with the identity matrix. This yields the algorithm on the left in Figure 10.2.1.1. We contrast it with the algorithm on the right.
The magic lies in the fact that the matrices computed by the QR algorithm are identical to those computed by the subspace iteration: Upon completion $\widehat V = V$ and the matrix $\widehat A$ on the left equals the (updated) matrix $A$ on the right. To be able to prove this, we annotate the algorithm so we can reason about the contents of the matrices for each iteration.
\begin{equation*} \begin{array}{l} \widehat A^{(0)} := A \\ \widehat V^{(0)} := I \\ \widehat R^{(0)} := I \\ {\bf for~} k:=0, \ldots \\ ~~~( \widehat V^{(k+1)}, \widehat R^{(k+1)} ) := {\rm QR}( A \widehat V^{(k)} ) \\ ~~~ \widehat A^{(k+1)} := { \widehat V^{(k+1)}~}^H A \widehat V^{(k+1)} \\ {\bf endfor} \end{array} \end{equation*}
\begin{equation*} \begin{array}{l} A^{(0)} := A \phantom{\widehat A^{(0)}}\\ V^{(0)} := I \phantom{\widehat V^{(0)}}\\ R^{(0)} := I \phantom{\widehat R^{(0)}}\\ {\bf for~} k:=0, \ldots \\ ~~~( Q^{(k+1)}, R^{(k+1)} ) := {\rm QR}( A^{(k)} ) \\ ~~~ A^{(k+1)} := R^{(k+1)} Q^{(k+1)} \\ ~~~ V^{(k+1)} := V^{(k)} Q^{(k+1)} \\ {\bf endfor} \end{array} \end{equation*}
Let's start by showing how the QR algorithm applies unitary equivalence transformations to the matrices $A^{(k)} \text{.}$
###### Homework 10.2.1.1.
Show that for the algorithm on the right $A^{(k+1)} = {Q^{(k+1)}~}^H A^{(k)} Q^{(k+1)} \text{.}$
Solution
The algorithm computes the QR factorization of $A^{(k)}$
\begin{equation*} A^{(k)} = Q^{(k+1)} R^{(k+1)} \end{equation*}
after which
\begin{equation*} A^{(k+1)} := R^{(k+1)} Q^{(k+1)} \end{equation*}
Hence
\begin{equation*} A^{(k+1)} = R^{(k+1)} Q^{(k+1)} = {Q^{(k+1)}~}^H A^{(k)} Q^{(k+1)}. \end{equation*}
This last homework shows that $A^{(k+1)}$ is derived from $A^{(k)}$ via a unitary similarity transformation and hence has the same eigenvalues as does $A^{(k)} \text{.}$ This means it also is derived from $A$ via a (sequence of) unitary similarity transformation and hence has the same eigenvalues as does $A \text{.}$
We now prove these algorithms mathematically equivalent.
###### Homework 10.2.1.2.
In the above algorithms, for all $k \text{,}$
• $\widehat A^{(k)} = A^{(k)}\text{.}$
• $\widehat R^{(k)} = R^{(k)}\text{.}$
• $\widehat V^{(k)} = V^{(k)} \text{.}$
Hint
The QR factorization is unique, provided the diagonal elements of $R$ are taken to be positive.
Solution
We will employ a proof by induction.
• Base case: $k = 0$
This is trivially true:
• $\widehat A^{(0)} = A = A^{(0)} \text{.}$
• $\widehat R^{(0)} = I = R^{(0)} \text{.}$
• $\widehat V^{(0)} = I = V^{(0)} \text{.}$
• Inductive step: Assume that $\widehat A^{(k)} = A^{(k)}\text{,}$ $\widehat R^{(k)} = R^{(k)}\text{,}$ and $\widehat V^{(k)} = V^{(k)} \text{.}$ Show that $\widehat A^{(k+1)} = A^{(k+1)} \text{,}$ $\widehat R^{(k+1)} = R^{(k+1)}\text{,}$ and $\widehat V^{(k+1)} = V^{(k+1)}\text{.}$
From the algorithm on the left, we know that
\begin{equation*} A \widehat V^{(k)} = \widehat V^{(k+1)} \widehat R^{(k+1)}. \end{equation*}
and
$$\begin{array}{l} A^{(k)} \\ ~~~=~~~~ \lt (I.H.) \gt \\ \widehat A^{(k)} \\ ~~~=~~~~ \lt \mbox{ algorithm on left }\gt \\ \widehat V^{(k)\,H} A \widehat V^{(k)} \\ ~~~=~~~~ \lt \mbox{ algorithm on left } \gt \\ \widehat V^{(k)\,H} \widehat V^{(k+1)}\widehat R^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ I.H. } \gt \\ V^{(k)\,H} \widehat V^{(k+1)}\widehat R^{(k+1)}. \end{array}\label{chapter10-qr-eqn-1}\tag{10.2.1}$$
But from the algorithm on the right, we know that
$$A^{(k)} = Q^{(k+1)} R^{(k+1)}.\label{chapter10-qr-eqn-2}\tag{10.2.2}$$
Both (10.2.1) and (10.2.2) are QR factorizations of $A^{(k)}$ and hence, by the uniqueness of the QR factorization,
\begin{equation*} \widehat R^{(k+1)} = R^{(k+1)} \end{equation*}
and
\begin{equation*} Q^{(k+1)} = V^{(k)\,H} \widehat V^{(k+1)} \end{equation*}
or, equivalently and from the algorithm on the right,
\begin{equation*} \begin{array}[t]{c} \underbrace{ V^{(k)} Q^{(k+1)}} \\ V^{(k+1)} \end{array} = \widehat V^{(k+1)}. \end{equation*}
This shows that
• $\widehat R^{(k+1)} = R^{(k+1)}$ and
• $\widehat V^{(k+1)} = V^{(k+1)} \text{.}$
Also,
\begin{equation*} \begin{array}{l} \widehat A^{(k+1)}\\ ~~~=~~~~ \lt \mbox{ algorithm on left } \gt \\ \widehat V^{(k+1)\,H} A \widehat V^{(k+1)} \\ ~~~=~~~~ \lt \widehat V^{(k+1)} = V^{(k+1)} \gt \\ V^{(k+1)\,H} A V^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ algorithm on right } \gt \\ Q^{(k+1)\,H} V^{(k)\,H} A V^{(k)} Q^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ I.H. } \gt \\ Q^{(k+1)\,H} \widehat V^{(k)\,H} A \widehat V^{(k)} Q^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ algorithm on left } \gt \\ Q^{(k+1)\,H} \widehat A^{(k)} Q^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ I.H. } \gt \\ Q^{(k+1)\,H} A^{(k)} Q^{(k+1)} \\ ~~~=~~~~ \lt \mbox{ last homework} \gt \\ A^{(k+1)}. \end{array} \end{equation*}
• By the Principle of Mathematical Induction, the result holds.
###### Homework 10.2.1.3.
In the above algorithms, show that for all $k$
• $V^{(k)} = Q^{(0)} Q^{(1)} \cdots Q^{(k)}\text{.}$
• $A^{k} = V^{(k)} R^{(k)} \cdots R^{(1)} R^{(0)} \text{.}$ (Note: $A^{k}$ here denotes $A$ raised to the $k$th power.)
Assume that $Q^{(0)} = I \text{.}$
Solution
We will employ a proof by induction.
• Base case: $k = 0$
$\begin{array}{c} \underbrace{A^0}\\ I \end{array} = \begin{array}{c} \underbrace{V^{(0)}}\\ I \end{array} \begin{array}{c} \underbrace{ R^{(0)} } \\ I \end{array} \text{.}$
• Inductive step: Assume that $V^{(k)} = Q^{(0)} \cdots Q^{(k)}$ and $A^{k} = V^{(k)} R^{(k)} \cdots R^{(0)} \text{.}$ Show that $V^{(k+1)} = Q^{(0)} \cdots Q^{(k+1)}$ and $A^{k+1} = V^{(k+1)} R^{(k+1)} \cdots R^{(0)} \text{.}$
\begin{equation*} V^{(k+1)} = V^{(k)} Q^{(k+1)} = Q^{(0)} \cdots Q^{(k)} Q^{(k+1)}. \end{equation*}
by the inductive hypothesis.
Also,
\begin{equation*} \begin{array}{l} A^{k+1} \\ ~~~=~~~~ \lt \mbox{ definition } \gt \\ A A^{k} \\ ~~~=~~~~ \lt \mbox{ inductive hypothesis } \gt \\ A V^{(k)} R^{(k)} \cdots R^{(0)} \\ ~~~=~~~~ \lt \mbox{ inductive hypothesis } \gt \\ A \widehat V^{(k)} R^{(k)} \cdots R^{(0)} \\ ~~~=~~~~ \lt \mbox{ left algorithm } \gt \\ \widehat V^{(k+1)} \widehat R^{(k+1)} R^{(k)}\cdots R^{(0)} \\ ~~~=~~~~ \lt V^{(k+1)} = \widehat V^{(k+1)}; R^{(k+1)} = \widehat R^{(k+1)} \gt \\ V^{(k+1)} R^{(k+1)} R^{(k)}\cdots R^{(0)} . \end{array} \end{equation*}
• By the Principle of Mathematical Induction, the result holds for all $k \text{.}$
This last exercise shows that
\begin{equation*} A^{k} = \begin{array}[t]{c} \underbrace{ Q^{(0)} Q^{(1)} \cdots Q^{(k)} } \\ \mbox{unitary } V^{(k)} \end{array} \begin{array}[t]{c} \underbrace{ R^{(k)} \cdots R^{(1)} R^{(0)} } \\ \mbox{upper triangular } \widetilde R^{(k)} \end{array} \end{equation*}
which exposes a QR factorization of $A^{k} \text{.}$ Partitioning $V^{(k)}$ by columns
\begin{equation*} V^{(k)} = \left( \begin{array}{c | c | c} v_0^{(k)} \amp \cdots \amp v_{m-1}^{(k)} \end{array} \right) \end{equation*}
we notice that applying $k$ iterations of the Power Method to vector $e_0$ yields
\begin{equation*} A^k e_0 = V^{(k)} \widetilde R^{(k)} e_0 = V^{(k)} \widetilde \rho_{0,0}^{(k)} e_0 = \widetilde \rho_{0,0}^{(k)} V^{(k)} e_0 = \widetilde \rho_{0,0}^{(k)} v_0^{(k)} , \end{equation*}
where $\widetilde \rho_{0,0}^{(k)}$ is the $(0,0)$ entry in matrix $\widetilde R^{(k)} \text{.}$ Thus, the first column of $V^{(k)}$ equals a vector that would result from $k$ iterations of the Power Method. Similarly, the second column of $V^{(k)}$ equals a vector that would result from $k$ iterations of the Power Method, but orthogonal to $v_0^{(k)} \text{.}$
We make some final observations:
• $A^{(k+1)} = Q^{(k)\,H} A^{(k)} Q^{(k)} \text{.}$ This means we can think of $A^{(k+1)}$ as the matrix $A^{(k)}$ but viewed in a new basis (namely the basis that consists of the column of $Q^{(k)}$).
• $A^{(k+1)} = ( Q^{(0)} \cdots Q^{(k)})^H A Q^{(0)} \cdots Q^{(k)} = V^{(k)\,H} A V^{(k)} \text{.}$ This means we can think of $A^{(k+1)}$ as the matrix $A$ but viewed in a new basis (namely the basis that consists of the column of $V^{(k)}$).
• In each step, we compute
\begin{equation*} ( Q^{(k+1)}, R^{(k+1)} ) = QR( A^{(k)} ) \end{equation*}
which we can think of as
\begin{equation*} ( Q^{(k+1)}, R^{(k+1)} )= QR( A^{(k)} \times I ) . \end{equation*}
This suggests that in each iteration we perform one step of subspace iteration, but with matrix $A^{(k)}$ and $V = I \text{:}$
\begin{equation*} ( Q^{(k+1)}, R^{(k+1)} ) = QR( A^{(k)} V ) . \end{equation*}
• The insight is that the QR algorithm is identical to subspace iteration, except that at each step we reorient the problem (express it in a new basis) and we restart it with $V = I \text{.}$
###### Homework 10.2.1.5.
Copy Assignments/Week10/matlab/SubspaceIterationAllVectors.m into SimpleQRAlg.m and modify it to implement the algorithm in Figure 10.2.1.1 (right) as
function [ Ak, V ] = SimpleQRAlg( A, maxits, illustrate, delay )
Modify the appropriate line in Assignments/Week10/matlab/test_simple_QR_algorithms.m, changing (0) to (1), and use it to examine the convergence of the method.
What do you observe?
Solution
Discuss what you observe online with others! | 2021-05-09T03:54:08 | {
"domain": "utexas.edu",
"url": "https://www.cs.utexas.edu/users/flame/laff/alaff/chapter10-QR1-simple-QR.html",
"openwebmath_score": 1.0000077486038208,
"openwebmath_perplexity": 4045.6758066991733,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787838473909,
"lm_q2_score": 0.8499711699569787,
"lm_q1q2_score": 0.8390735058634743
} |
https://math.stackexchange.com/questions/4220311/color-ball-drawer-probability-question | # Color ball drawer probability question
There is a ball drawer.
Seven color balls will be drawn with the same probability ($$1/7$$).
(black, blue, green, yellow, white, pink, orange)
If Anson attempts $$9$$ times,
what is the probability that he gets all $$7$$ different color balls?
My work:
I separate the answer to $$3$$ ways.
1. $$7$$ attempts -> done (get $$7$$ colors)
2. $$8$$ attempts -> done (get $$7$$ colors)
3. $$9$$ attempts -> done (get $$7$$ colors)
Therefore, my answer is $$\frac{9C7 + 8C7 + 7C7}{7^7 \cdot (7!)}$$ However, I don't know whether it is correct or not.
• Welcome to MSE. Your question is phrased as an isolated problem, without any further information or context. This does not match many users' quality standards, so it may attract downvotes, or closed. To prevent that, please edit the question. This will help you recognise and resolve the issues. Concretely: please provide context, and include your work and thoughts on the problem. These changes can help in formulating more appropriate answers. Aug 9 at 8:39
• Since nine balls are drawn, either one ball is selected three times and each of the others is drawn once or two balls are drawn twice each and each of the others is drawn once. Aug 9 at 9:37
• @Cycle, can you further explain your intuition behind taking those 3 cases? IMO, there will be two cases: (7 different balls + 2 balls of different colors among the seven) or (7 different balls + 2 balls of same colors among the seven). Aug 9 at 9:43
• @n. f. taussig, i know what u mean, but i don't know how to calculate my thoughts, e.g. 1 - P(3 same color selected) Aug 9 at 10:04
• Are we to assume that the probability of drawing any color is always $1/7$ whatever number of draws we make ? Aug 9 at 10:27
Since there are seven choices for each of the nine balls Anson selects, there are $$7^9$$ possible sequences of colors.
Method 1: If each color appears among the nine balls, there are two possibilities:
1. One color is selected three times and each of the other colors is selected once.
2. Two colors are each selected twice and each of the other colors is selected once.
One color is selected three times and each of the other colors is selected once: There are seven ways to select the color which appears three times, $$\binom{9}{3}$$ ways to select the three positions occupied by that color, and $$6!$$ ways to arrange the remaining six colors in the remaining six positions. There are $$\binom{7}{1}\binom{9}{3}6!$$ such cases.
Two colors are each selected twice and each of the other colors is selected once: There are $$\binom{7}{2}$$ ways to select the two colors which each appear twice, $$\binom{9}{2}$$ ways to select two positions for the selected color which appears first in an alphabetical list, $$\binom{7}{2}$$ ways to select two positions for the other selected color, and $$5!$$ ways to arrange the remaining five colors in the remaining five positions. There are $$\binom{7}{2}\binom{9}{2}\binom{7}{2}5!$$ such cases.
Therefore, the number of favorable cases is $$\binom{7}{1}\binom{9}{3}6! + \binom{7}{2}\binom{9}{2}\binom{7}{2}5!$$ so the probability that all seven colors are selected is $$\Pr(\text{all seven colors selected}) = \frac{\dbinom{7}{1}\dbinom{9}{3}6! + \dbinom{7}{2}\dbinom{9}{2}\dbinom{7}{2}5!}{7^9}$$
Method 2: We use the Inclusion-Exclusion Principle.
There are $$7^9$$ possible sequences of colors. From these, we must exclude those sequences in which one or more colors is missing.
There are $$\binom{7}{k}$$ ways to select which $$k$$ colors are missing and $$(7 - k)^9$$ sequences of colors which can be formed with the remaining colors. Thus, by the Inclusion-Exclusion Principle, the number of favorable cases is \begin{align*} & \sum_{k = 0}^{7} (-1)^k\binom{7}{k}(7 - k)^9\\ & \qquad = 7^9 - \binom{7}{1}6^9 + \binom{7}{2}5^9 - \binom{7}{3}4^9 + \binom{7}{4}3^9 - \binom{7}{5}2^9 + \binom{7}{6}1^9 - \binom{7}{7}0^9 \end{align*} Hence, the probability that each color appear is \begin{align*} & \Pr(\text{all seven colors selected})\\ & \qquad = \frac{7^9 - \dbinom{7}{1}6^9 + \dbinom{7}{2}5^9 - \dbinom{7}{3}4^9 + \dbinom{7}{4}3^9 - \dbinom{7}{5}2^9 + \dbinom{7}{6}1^9 - \dbinom{7}{7}0^9}{7^9} \end{align*}
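As a quick sanity check (my addition, not part of the answer), both counts can be evaluated directly in Python and agree:

```python
from math import comb, factorial

total = 7 ** 9   # all color sequences of length 9

# Method 1: one color three times, or two colors twice each
count1 = comb(7, 1) * comb(9, 3) * factorial(6) \
       + comb(7, 2) * comb(9, 2) * comb(7, 2) * factorial(5)

# Method 2: inclusion-exclusion over the missing colors
count2 = sum((-1) ** k * comb(7, k) * (7 - k) ** 9 for k in range(8))

print(count1, count2)        # 2328480 2328480
print(count1 / total)        # ≈ 0.0577, the probability all seven colors appear
```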
• thanks for answering me and i am a newbie in MSE. I wonder whether the above solution is secondary level or university? Aug 9 at 14:08
• While the first solution could be done by a good secondary school student, this is a university level problem. Aug 9 at 18:57
You can also count using generating functions.
Each of the $$7$$ colors can be used once, twice, or thrice so the generating function for each color is $$\left(x+\frac{x^2}{2!} +\frac{x^3}{3!}\right)$$
and to fill $$9$$ slots, we need to find the coefficient of $$x^9$$ in $$9!\left(x+\frac{x^2}{2!} +\frac{x^3}{3!}\right)^7$$, which is $$2328480$$
so $$Pr = \dfrac{2328480}{7^9}$$
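If SymPy is available, the coefficient extraction can be checked directly (again my addition); it reproduces the same count as the counting methods above.

```python
import sympy as sp

x = sp.symbols('x')
egf = (x + x**2 / sp.factorial(2) + x**3 / sp.factorial(3)) ** 7
count = sp.factorial(9) * sp.expand(egf).coeff(x, 9)
print(count)                      # 2328480
print(sp.Rational(count, 7**9))   # the probability as an exact (reduced) fraction
```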
• +1 for using generating functions :)) Aug 9 at 12:40
• @Bulbasaur: Thanks, I was expecting you to jump in ! :)) They really cut out a lot of verbiage ! Aug 9 at 12:51 | 2021-11-28T15:31:13 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4220311/color-ball-drawer-probability-question",
"openwebmath_score": 0.5848356485366821,
"openwebmath_perplexity": 489.87289812105536,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9895109105959842,
"lm_q2_score": 0.8479677622198947,
"lm_q1q2_score": 0.839073352550247
} |
https://tutorial.math.lamar.edu/Classes/CalcII/RootTest.aspx | Paul's Online Notes
### Section 4-11 : Root Test
This is the last test for series convergence that we’re going to be looking at. As with the Ratio Test this test will also tell whether a series is absolutely convergent or not rather than simple convergence.
#### Root Test
Suppose that we have the series $$\sum {{a_n}}$$. Define,
$L = \mathop {\lim }\limits_{n \to \infty } \sqrt[n]{{\left| {{a_n}} \right|}} = \mathop {\lim }\limits_{n \to \infty } {\left| {{a_n}} \right|^{\frac{1}{n}}}$
Then,
1. if $$L < 1$$ the series is absolutely convergent (and hence convergent).
2. if $$L > 1$$ the series is divergent.
3. if $$L = 1$$ the series may be divergent, conditionally convergent, or absolutely convergent.
A proof of this test is at the end of the section.
As with the ratio test, if we get $$L = 1$$ the root test will tell us nothing and we’ll need to use another test to determine the convergence of the series. Also note that, generally for the series we’ll be dealing with in this class, if $$L = 1$$ in the Ratio Test then the Root Test will also give $$L = 1$$.
We will also need the following fact in some of these problems.
#### Fact
$\mathop {\lim }\limits_{n \to \infty } {n^{\frac{1}{n}}} = 1$
Let’s take a look at a couple of examples.
Example 1 Determine if the following series is convergent or divergent. $\sum\limits_{n = 1}^\infty {\frac{{{n^n}}}{{{3^{1 + 2n}}}}}$
Show Solution
There really isn’t much to these problems other than computing the limit and then using the root test. Here is the limit for this problem.
$L = \mathop {\lim }\limits_{n \to \infty } {\left| {\frac{{{n^n}}}{{{3^{1 + 2n}}}}} \right|^{\frac{1}{n}}} = \mathop {\lim }\limits_{n \to \infty } \frac{n}{{{3^{\frac{1}{n} + 2}}}} = \frac{\infty }{{{3^2}}} = \infty > 1$
So, by the Root Test this series is divergent.
Example 2 Determine if the following series is convergent or divergent. $\sum\limits_{n = 0}^\infty {{{\left( {\frac{{5n - 3{n^3}}}{{7{n^3} + 2}}} \right)}^n}}$
Show Solution
Again, there isn’t too much to this series.
$L = \mathop {\lim }\limits_{n \to \infty } {\left| {{{\left( {\frac{{5n - 3{n^3}}}{{7{n^3} + 2}}} \right)}^n}} \right|^{\frac{1}{n}}} = \mathop {\lim }\limits_{n \to \infty } \left| {\frac{{5n - 3{n^3}}}{{7{n^3} + 2}}} \right| = \left| {\frac{{ - 3}}{7}} \right| = \frac{3}{7} < 1$
Therefore, by the Root Test this series converges absolutely and hence converges.
Note that we had to keep the absolute value bars on the fraction until we’d taken the limit to get the sign correct.
Example 3 Determine if the following series is convergent or divergent. $\sum\limits_{n = 3}^\infty {\frac{{{{\left( { - 12} \right)}^n}}}{n}}$
Show Solution
Here’s the limit for this series.
$L = \mathop {\lim }\limits_{n \to \infty } {\left| {\frac{{{{\left( { - 12} \right)}^n}}}{n}} \right|^{\frac{1}{n}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{12}}{{{n^{\frac{1}{n}}}}} = \frac{{12}}{1} = 12 > 1$
After using the fact from above we can see that the Root Test tells us that this series is divergent.
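Before the proof, a quick numerical look at $L$ for the three examples (my addition, using the simplified expressions for $${\left| {{a_n}} \right|^{\frac{1}{n}}}$$ derived above):

```python
ns = [10, 100, 1000, 10000]

# Example 1: |a_n|^(1/n) = n / 3^(1/n + 2)  -> infinity, so divergent
print([n / 3 ** (1 / n + 2) for n in ns])

# Example 2: |a_n|^(1/n) = |5n - 3n^3| / (7n^3 + 2)  -> 3/7 < 1, absolutely convergent
print([abs(5 * n - 3 * n ** 3) / (7 * n ** 3 + 2) for n in ns])

# Example 3: |a_n|^(1/n) = 12 / n^(1/n)  -> 12 > 1, divergent
print([12 / n ** (1 / n) for n in ns])
```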
#### Proof of Root Test
First note that we can assume without loss of generality that the series will start at $$n = 1$$ as we’ve done for all our series test proofs. Also note that this proof is very similar to the proof of the Ratio Test.
Let’s start off the proof here by assuming that $$L < 1$$ and we’ll need to show that $$\sum {{a_n}}$$ is absolutely convergent. To do this let’s first note that because $$L < 1$$ there is some number $$r$$ such that $$L < r < 1$$.
Now, recall that,
$L = \mathop {\lim }\limits_{n \to \infty } \sqrt[n]{{\left| {{a_n}} \right|}} = \mathop {\lim }\limits_{n \to \infty } {\left| {{a_n}} \right|^{\frac{1}{n}}}$
and because we also have chosen $$r$$ such that $$L < r$$ there is some $$N$$ such that if $$n \ge N$$ we will have,
${\left| {{a_n}} \right|^{\frac{1}{n}}} < r\hspace{0.5in} \Rightarrow \hspace{0.5in}\left| {{a_n}} \right| < {r^n}$
Now the series
$\sum\limits_{n = 0}^\infty {{r^n}}$
is a geometric series and because $$0 < r < 1$$ we in fact know that it is a convergent series. Also, because $$\left| {{a_n}} \right| < {r^n}$$ for $$n \ge N$$, by the Comparison Test the series
$\sum\limits_{n = N}^\infty {\left| {{a_n}} \right|}$
is convergent. However since,
$\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|} = \sum\limits_{n = 1}^{N - 1} {\left| {{a_n}} \right|} + \sum\limits_{n = N}^\infty {\left| {{a_n}} \right|}$
we know that $$\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|}$$ is also convergent since the first term on the right is a finite sum of finite terms and hence finite. Therefore $$\sum\limits_{n = 1}^\infty {{a_n}}$$ is absolutely convergent.
Next, we need to assume that $$L > 1$$ and we’ll need to show that $$\sum {{a_n}}$$ is divergent. Recalling that,
$L = \mathop {\lim }\limits_{n \to \infty } \sqrt[n]{{\left| {{a_n}} \right|}} = \mathop {\lim }\limits_{n \to \infty } {\left| {{a_n}} \right|^{\frac{1}{n}}}$
and because $$L > 1$$ we know that there must be some $$N$$ such that if $$n \ge N$$ we will have,
${\left| {{a_n}} \right|^{\frac{1}{n}}} > 1\hspace{0.5in} \Rightarrow \hspace{0.5in}\left| {{a_n}} \right| > {1^n} = 1$
However, if $$\left| {{a_n}} \right| > 1$$ for all $$n \ge N$$ then we know that,
$\mathop {\lim }\limits_{n \to \infty } \left| {{a_n}} \right| \ne 0$
This in turn means that,
$\mathop {\lim }\limits_{n \to \infty } {a_n} \ne 0$
Therefore, by the Divergence Test $$\sum {{a_n}}$$ is divergent.
Finally, we need to assume that $$L = 1$$ and show that we could get a series that has any of the three possibilities. To do this we just need a series for each case. We’ll leave the details of checking to you but all three of the following series have $$L = 1$$ and each one exhibits one of the possibilities.
\begin{align*} & \sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}} & \hspace{0.5in} & {\mbox{absolutely convergent}}\\ & \sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^n}}}{n}} & \hspace{0.5in} & {\mbox{conditionally convergent}}\\ & \sum\limits_{n = 1}^\infty {\frac{1}{n}} & \hspace{0.5in} & {\mbox{divergent}}\end{align*} | 2021-03-06T12:19:03 | {
"domain": "lamar.edu",
"url": "https://tutorial.math.lamar.edu/Classes/CalcII/RootTest.aspx",
"openwebmath_score": 0.9078403115272522,
"openwebmath_perplexity": 210.84258556482436,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9895109078121007,
"lm_q2_score": 0.847967764140929,
"lm_q1q2_score": 0.839073352090488
} |
https://byjus.com/question-answer/if-z-1-z-2-z-3-are-imaginary-numbers-such-that-z-1-z-1/ | Question
# If $$z_1, z_2, z_3$$ are imaginary numbers such that $$|z_1| = |z_2| = |z_3| = \begin{vmatrix}\dfrac{1}{z_1}+\dfrac{1}{z_2}+\dfrac{1}{z_3}\end{vmatrix} = 1$$ then $$|z_1+z_2+z_3|$$ is
A. Equal to 1
B. Less than 1
C. Greater than 1
D. Equal to 3
Solution
## The correct option is A: Equal to $$1$$
Using $$z\overline z = |z|^2$$:
$$z_1\overline{z_1}=|z_1|^2=1\Rightarrow \dfrac{1}{z_1}=\overline{z_1}$$
$$z_2\overline{z_2}=|z_2|^2=1\Rightarrow \dfrac{1}{z_2}=\overline{z_2}$$
and $$z_3\overline{z_3}=|z_3|^2=1\Rightarrow \dfrac{1}{z_3}=\overline{z_3}$$
Now using $$\bigg |\dfrac{1}{z_1}+\dfrac{1}{z_2}+\dfrac{1}{z_3}\bigg |=1$$
$$\Rightarrow |\overline z_1+\overline z_2+\overline z_3|=1 \Rightarrow |\overline{z_1+z_2+z_3}|=1$$
$$\therefore |z_1+z_2+z_3|=1$$, since $$|\overline z|=|z|$$
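A quick numerical spot-check of the result (my addition): the triple $$z_1=1$$, $$z_2=i$$, $$z_3=-1$$ satisfies the hypotheses, and the conclusion holds.

```python
import cmath

z1, z2, z3 = 1 + 0j, 1j, -1 + 0j         # all of modulus 1

print(abs(1 / z1 + 1 / z2 + 1 / z3))     # 1.0, so the hypothesis on the reciprocals holds
print(abs(z1 + z2 + z3))                 # 1.0, the claimed conclusion

w = cmath.exp(0.7j)                      # rotating all three by a unit number preserves both
print(abs(w * z1 + w * z2 + w * z3))     # still 1.0
```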
| 2022-01-27T12:22:17 | {
"domain": "byjus.com",
"url": "https://byjus.com/question-answer/if-z-1-z-2-z-3-are-imaginary-numbers-such-that-z-1-z-1/",
"openwebmath_score": 0.6770324110984802,
"openwebmath_perplexity": 11426.568403051417,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9895109075027803,
"lm_q2_score": 0.847967764140929,
"lm_q1q2_score": 0.8390733518281942
} |
https://math.stackexchange.com/questions/3106842/eigenvectors-of-invertible-matrices-over-the-complex-numbers | # eigenvectors of invertible matrices over the complex numbers
Suppose we have a matrix $$A\in GL_n(\mathbb{C})$$. Does $$A$$ always have at least one eigenvector? Specifically for 2x2 matrices, The rotation matrix has no real eigenvectors but it has complex eigenvectors. The matrix $$\begin{pmatrix} 1&&0\\1&&1\\ \end{pmatrix}$$ only has $$\begin{pmatrix} 0\\1\\ \end{pmatrix}$$ as an eigenvector, so clearly $$A$$ does not need to have $$n$$ eigenvectors.
I get that any invertible matrix will have a nonzero determinant, so if you write out the characteristic equation you will get at least one nonzero eigenvalue, but does this eigenvalue have to correspond to an eigenvector?
• Any square matrix has at least one eigenvector because it has at least one eigenvalue. (Note that if $v$ is an eigenvector then so is $t v$ for $t \neq 0$, so any matrix has lots of eigenvectors. I presume you are considering the entire ray to be the same eigenvector as such.) – copper.hat Feb 9 at 23:13
• Are you asking if any matrix has a real eigenvector? This is not true. Take $\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. – copper.hat Feb 9 at 23:14
• If $\lambda$ is an eigenvalue then $A - \lambda I$ is not invertible and so has a kernel of dimension at least one. However, this null space need not have a direction with all real (as opposed to complex) components. – copper.hat Feb 9 at 23:16
• It depends on the field: In $\mathbb{C}$ any polyomial (hence also any characteristic function) has a root, and thus an eigenvector, but in $\mathbb{R}$ there are polynomials which do not have a root - thus also some characteristic function will not have an eigenvalue, and consequently no eigenvector – Maksim Feb 9 at 23:18
• "but does this eigenvalue have to correspond to an eigenvector?" Eigenvalues always correspond to eigenvectors. $\lambda$ is an eigenvalue for $A$ if and only if $Av = \lambda v$ has a non-zero solution for $v$. Similarly, a vector $v$ is an eigenvector if and only if it solves the above for some $\lambda$. The definition mentions nothing about characteristic polynomials; that's a theorem! An eigenvalue without an eigenvector is not an eigenvalue at all. – Theo Bendit Feb 9 at 23:51
A matrix
$$A \in GL(n, \Bbb C) \tag 1$$
always has at least one eigenvector, seen as follows: the linear "eigen-equation" is
$$A \vec v = \lambda \vec v, \; \lambda \in \Bbb C,\; 0 \ne \vec v \in \Bbb C^n; \tag 2$$
we write this as
$$(A - \lambda I)\vec v = 0, \tag 3$$
which has a non-zero solution $$\vec v$$ precisely when
$$\chi_A(\lambda) = \det(A - \lambda I) = 0; \tag 4$$
so for any $$\lambda$$ satisfying (4), of which there are at most $$n$$, we obtain at least one eigenvector $$\vec v \in \Bbb C^n$$. It is well-known that eigenvectors associated with distinct eigenvalues are linearly independent; thus, there are at least as many independent eigenvectors as there are distinct eigenvalues; if (4) has $$n$$ distinct zeroes, then $$A$$ has $$n$$ linearly independent eigenvectors.
The real rotation matrices such as
$$R(\theta) = \begin{bmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{bmatrix} \tag 5$$
generally have complex eigenvalues, for
$$\det(R(\theta) - \lambda I) = \det \left ( \begin{bmatrix} \cos \theta - \lambda & \sin \theta \\ -\sin \theta & \cos \theta - \lambda \end{bmatrix} \right )$$
$$= (\cos \theta - \lambda)^2 + \sin^2 \theta = \lambda^2 - (2\cos \theta) \lambda + 1 = 0 \tag 6$$
typically has complex roots, given as they are by the quadratic formula
$$\lambda =\dfrac{2\cos \theta \pm \sqrt{4\cos^2 \theta - 4}}{2} = \dfrac{2\cos \theta \pm 2\sqrt{\cos^2 \theta - 1}}{2}$$
$$=\cos \theta \pm \sqrt{-\sin^2 \theta} = \cos \theta \pm i\sin \theta = e^{\pm i\theta}; \tag 7$$
since the eigenvalues are in general complex, so are the eigenvectors, the exceptions being where $$\sin \theta = 0$$, that is, when $$\theta = n \pi$$, $$n \in \Bbb Z,$$ when they take the form $$(1, 0)^T$$, $$(0, 1)^T$$ as $$R(n\pi) = \pm \, I$$.
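As a numerical illustration of the eigenpairs of $$R(\theta)$$ (my addition), NumPy returns the complex eigenvalues $$e^{\pm i\theta}$$ together with complex eigenvectors:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

w, V = np.linalg.eig(R)
print(w)                                  # ≈ exp(+i*theta), exp(-i*theta) in some order
for lam, v in zip(w, V.T):                # columns of V are the eigenvectors
    print(np.allclose(R @ v, lam * v))    # True: each pair satisfies R v = lambda v
```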
It doesn't matter whether an eigenvalue is $$0$$ or not; it will always have at least one eigenvector provided we are working over an algebraically closed field such as $$\Bbb C$$. Indeed, the $$0$$-eigenspace of a matrix $$A$$ is simply $$\ker A$$.
The characteristic polynomial of $$A\in GL_n(\mathbb C)$$ is of degree $$n$$ and therefore has at least one root (if of course $$n > 0$$). Suppose $$\lambda_0$$ is this root. Then $$0 = \det (A - \lambda_0I_n)$$ and consequently the matrix $$A - \lambda_0I_n$$ is singular, which means that the dimension of the column space of $$A - \lambda_0I_n$$ is greater than $$0$$, so it has at least one nonzero vector. Such a vector is exactly an eigenvector of $$A$$, because $$(A - \lambda_0I_n)x = 0 \Leftrightarrow Ax = \lambda_0x.$$
Moreover, if $$A\in GL_{2n+1}(\mathbb R)$$, the theorem remains true! Every polynomial with real coefficients of odd degree has a real root. | 2019-08-19T05:26:56 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3106842/eigenvectors-of-invertible-matrices-over-the-complex-numbers",
"openwebmath_score": 0.9578732252120972,
"openwebmath_perplexity": 152.3725366447201,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9895109084307415,
"lm_q2_score": 0.8479677583778257,
"lm_q1q2_score": 0.8390733469124219
} |
https://math.stackexchange.com/questions/1100850/probability-question-marbles-in-a-jar | # Probability question: marbles in a jar
A jar contains $3$ yellow marbles, $4$ red marbles, $10$ green marbles. and $4$ blue marbles. What is the probability that the first marble picked at random is blue and that the second marble is green and that the third marble picked is yellow, assuming that the marbles are put back into the jar after every time they are picked?
My attempt:
Probability the first marble is blue: $\frac{4}{21}$.
Probability the second marble is green: $\frac{10}{20} = \frac{1}{2}$
Probability the third marble is yellow: $\frac{3}{19}$
I don't think this is right though. Can someone help me please? Thank you.
• I think one of the key points of the question is "marbles are put back after every time they are picked"... – abiessu Jan 12 '15 at 3:01
• All the denominators are to be $21$ and then multiply the fractions together to get the resulting probability since each draw is independent. – user60887 Jan 12 '15 at 3:12
• Oh so it would be $(4/21) * (10/21) * (3/21)$? – NewtoProb Jan 12 '15 at 3:14
• @NewtoProb Yes. – turkeyhundt Jan 12 '15 at 3:14
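A quick check of that product by simulation (my addition; the exact value is $\frac{4}{21}\cdot\frac{10}{21}\cdot\frac{3}{21}=\frac{120}{9261}\approx 0.0130$):

```python
import random

print((4 / 21) * (10 / 21) * (3 / 21))         # ≈ 0.012957

marbles = ["yellow"] * 3 + ["red"] * 4 + ["green"] * 10 + ["blue"] * 4
trials = 200_000
hits = sum(
    random.choice(marbles) == "blue"           # draws are with replacement,
    and random.choice(marbles) == "green"      # so each pick is independent
    and random.choice(marbles) == "yellow"
    for _ in range(trials)
)
print(hits / trials)                           # close to 0.013
```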
From independence (due to replacement of drawn marbles) we obtain the answer as $$\frac{4}{21}\times \frac{10}{21}\times \frac{3}{21} = \frac{120}{9261}.$$ | 2019-08-23T05:30:07 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1100850/probability-question-marbles-in-a-jar",
"openwebmath_score": 0.9253225922584534,
"openwebmath_perplexity": 416.45908909832326,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.989510908740062,
"lm_q2_score": 0.8479677526147223,
"lm_q1q2_score": 0.839073341472062
} |
https://math.stackexchange.com/questions/2568628/rational-canonical-form-of-diagonal-matrix | # Rational canonical form of diagonal matrix
I'm trying to determine the rational canonical form of a diagonal matrix $$A=\begin{pmatrix} a_1 & 0 & \cdots & 0\\ 0 & a_2 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 &0 & \cdots & a_n \end{pmatrix}$$ where the $a_i$'s are all different. If my intuition is correct, since the characteristic polynomial (in this case also the minimal polynomial) of $A$ is just the product $(x-a_1)\cdots(x-a_n)$ and all the factors are different we have that the $(x-a_i)$'s are the invariant factors of $A$. Then the rational canonical form of $A$ is again $A$.
Is this correct? Is there a more formal way to work this problem? I'd appreciate any suggestions. Thanks in advance.
• Yes, $A$ in this case is the rational canonical form of $A$. You might consult (or refer to) the definition. – hardmath Dec 16 '17 at 1:00
Yes, the rational canonical form is just $A$. Your intuition is good, but can be expounded futher:
Let $V$ be a vector space where the matrix of some linear operator $T$ is represented by $A$ in some basis. Since $A$ is diagonal, $T$ is a diagonalizable operator, so $V$ has a basis where each vector is an eigenvector for $T$. The Cyclic Decomposition Theorem lets us decompose $V$ into a direct sum of $T$-cyclic subspaces for some vectors $\alpha_1,...,\alpha_k$. Namely, we can decompose $V$ as:
$$V = Z(\alpha_1; T) \oplus ... \oplus Z(\alpha_n; T)$$
Where $Z(\alpha_i; T) = \{ v \in V: v = g(T) \alpha_i \ \text{ for some polynomial } \ g(x) \}$, and $\alpha_1,...,\alpha_n$ are the distinct eigenvectors of $A$.
These subspaces are invariant under $T$, so let $T_i$ be the operator induced by $T$ on $Z(\alpha_i; T)$. Then $T_i$ has minimal polynomial $p_i(x) = x - a_i$, where $a_i$ is the eigenvalue corresponding to the eigenvector $\alpha_i$. Since $\dim(Z(\alpha_i; T)) = 1$ and $T_i$ has a cyclic vector, we get that the rational form of $T_i$ is just its $1 \times 1$ companion matrix:
$A_i = \begin{bmatrix} a_i \end{bmatrix}$.
Thus $$A = A_1 \oplus ... \oplus A_n = \begin{bmatrix} a_1 & 0 &... & 0 \\ 0 & a_2 & ... & 0 \\ . \\ . \\ 0 & 0 & ... & a_n \end{bmatrix}$$
is the rational form of $A$.
I totally disagree with the answers given previously. The authors confuse the Frobenius normal form with the primary rational canonical form.
The Frobenius decomposition has the following form
$F:=diag(C_{p_1},\cdots,C_{p_k})$ where the $C_{p_i}$ are the companion matrices of the polynomials $p_i$, and, for each $i$, $p_i$ is a divisor of $p_{i+1}$. In particular, $p_k$ is the minimal polynomial and $p_1\cdots p_k$ is the characteristic polynomial of $A$.
When the eigenvalues of $A$ are distinct, the vector $(1,\cdots,1)$ is a cyclic vector for the whole vector space $K^n$.
Then the Frobenius form of $A$ is $F=C_p$ where $p$ is the characteristic polynomial of $A$.
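A small NumPy check of this claim (my illustration, not from either answer): for $A=\operatorname{diag}(1,2,3)$, changing basis to the Krylov basis $v, Av, A^2v$ with $v=(1,1,1)$ turns $A$ into the companion matrix of its characteristic polynomial $x^3-6x^2+11x-6$.

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])
v = np.ones(3)

K = np.column_stack([v, A @ v, A @ A @ v])   # invertible because the eigenvalues are distinct
F = np.linalg.inv(K) @ A @ K
print(np.round(F, 10))
# [[  0.   0.   6.]
#  [  1.   0. -11.]
#  [  0.   1.   6.]]   <- companion matrix of x^3 - 6x^2 + 11x - 6
```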
To convince oneself, it suffices to test in Maple:
"FrobeniusForm(DiagonalMatrix([1,2,3]));" | 2019-06-24T09:12:59 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2568628/rational-canonical-form-of-diagonal-matrix",
"openwebmath_score": 0.9864376187324524,
"openwebmath_perplexity": 74.36732522349197,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9473810466522862,
"lm_q2_score": 0.8856314828740728,
"lm_q1q2_score": 0.8390304811934554
} |
https://math.stackexchange.com/questions/2891915/explicit-homeomorphism-between-mathbbs2-and-mathbbp1c | # Explicit homeomorphism between $\mathbb{S^2}$ and $\mathbb{P^1(C)}$
I know that $\mathbb{P^1(C)} \cong \mathbb{C} \cup \{N\}$, where $N$ corresponds to the north-pole of the sphere, is homeomorphic to the sphere $S^2$ thanks to the stereographic projection, but I am not sure if the explicit projection could be the following: $$f: S^2 \longrightarrow \mathbb{P^1(C)} \\(x,y,z) \mapsto \left( \frac{x+iy}{1-z},\frac{x-iy}{1+z} \right) .$$
Is it the right approach?
• This is not defined at $(0,0,1)$. Both $S^2$ and $\mathbb CP^1$ can be obtained by gluing $\mathbb C$ along two copies of $\mathbb C^\times$ identified by inversion. This should help you defined a homeomorphism using these covers. – Pedro Tamaroff Aug 23 '18 at 11:20
• @PedroTamaroff What if I keep $f$ but I map (0,0,1) to the infinite point of $\mathbb{C}P^1$? – Phi_24 Aug 23 '18 at 13:16
• What you want is to modify one denominator to be $1+z$ instead of $1-z$, I think. – Pedro Tamaroff Aug 23 '18 at 13:18
• @PedroTamaroff you're right, I edited because I wrote wrongly the first time, but still in this case if I put (0,0,1) the first coordinate is not defined, isn't it? – Phi_24 Aug 23 '18 at 13:24
• Why don't you use homogeneous coordinates in $\mathbb{C}P^1$, e.g. $[\frac{x}{y}: z ] = [x :yz]$ ? – Max Aug 23 '18 at 15:54
The map you provide will not work. Even restricted to the sphere without the north pole, it is not even injective. Indeed, $(1,0,0)$ and $(-1,0,0)$ go to the same point, since $$f(1,0,0)=[1:1]=[-1:-1]=f(-1,0,0).$$
To answer the question, let's try going step by step:
The stereographic projection $\mathrm{Steo}:S^2\backslash\{N\} \to \mathbb{R}^2$ is given by $$(x,y,z) \mapsto \left(\frac{x}{1-z},\frac{y}{1-z} \right).$$ When you identify $\mathbb{R}^2 \simeq \mathbb{C}$, you have a formula which appears in your attempt (I'll still call the stereographic projection by the same name): \begin{align*} \mathrm{Steo}:S^2\backslash\{N\} &\to \mathbb{C} \\ (x,y,z) &\mapsto \frac{x+iy}{1-z}. \end{align*} Now, $\mathbb{C}$ embeds naturally in $\mathbb{C}P^1$ via \begin{align*} g:\mathbb{C} &\to \mathbb{C}P^1\\ z &\mapsto [z:1]. \end{align*} This strategy is alluded to in the comments by Max. Now, we have that this $g$ misses a single point: $[1:0]$. This is due to the fact that if $b \neq 0$, then $[a:b]=[ab^{-1}:1]$ (and if $b=0$, $[a:0]=[aa^{-1}:0]=[1:0]$, where we recall that $a$ can't be zero if $b$ is zero).
So we have the explicit map \begin{align*} g \circ \mathrm{Steo}:S^2 \backslash\{N\} &\to \mathbb{C}P^1 \backslash \{[1:0]\}\\ (x,y,z) &\mapsto \left[\frac{x+iy}{1-z}:1\right], \end{align*} which is an homeomorphism. There is the problematic missing north pole, and missing $[1:0]$. However, they are not a problem at all. Indeed, we can extend $g$ to have domain $S^2$ and codomain $\mathbb{C}P^1$ by sending $(0,0,1) \mapsto [1:0]$, which is essentially the uniqueness of the one-point compactification.
So, the final mapping becomes: \begin{align*} g \circ \mathrm{Steo}:S^2 &\to \mathbb{C}P^1\\ (x,y,z) &\mapsto \left[\frac{x+iy}{1-z}:1\right], \quad z \neq 1 \\ (0,0,1) &\mapsto [1:0], \quad z=1. \end{align*} Since you only want an homeomorphism, this escaping via general topology is enough. If you want to check for differentiability etc, you will need to follow through Pedro's suggestion in the comments, which is essentially considering the other map analogous to $g$ which helps covering the $[1:0]$ problematic case via a chart.
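A numerical sanity check of these formulas (my addition; the inverse map written out below is the standard inverse of the stereographic projection, which is not spelled out in the answer):

```python
import numpy as np

def steo(p):
    """Stereographic projection of a point of S^2 (other than N) to C."""
    x, y, z = p
    return (x + 1j * y) / (1 - z)

def steo_inv(w):
    """Standard inverse stereographic projection from C to S^2 minus the north pole."""
    r2 = abs(w) ** 2
    return np.array([2 * w.real, 2 * w.imag, r2 - 1]) / (r2 + 1)

rng = np.random.default_rng(1)
p = rng.standard_normal(3)
p /= np.linalg.norm(p)                      # a random point of the sphere (almost surely not N)

w = steo(p)
print(np.allclose(steo_inv(w), p))          # True: the maps are mutually inverse
x, y, z = p
print(np.isclose(w * (1 - z) - (x + 1j * y), 0))   # [w : 1] = [x+iy : 1-z] in CP^1: the 2x2 determinant vanishes
```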
• Any particular reason you don't avoid the case distinction using $$(x,y,z)\mapsto\left[x+iy:1-z\right]\;?$$ – MvG Aug 23 '18 at 21:03
• @MvG I was thinking the same, but if $z=1$ then $x=y=0$ and so this would map to $[0:0]$. Which suggests to me that something is wrong, but I haven't read thoroughly yet. – Servaes Aug 23 '18 at 21:05
• @Servaes: Ah yes, you are right. Since we don't have enough direction information to pick a representative for the pole, I see that this case distinction makes sense here. Thanks for pointing this out. – MvG Aug 23 '18 at 21:11
• I found out that using $\alpha[x+iy:1-z]+\beta[1+z:x-iy]$ one can put the problematic point (the one which maps to $[0:0]$) anywhere on the sphere, but can't get rid of it altogether. Interesting. – MvG Aug 23 '18 at 21:54 | 2019-05-25T23:28:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2891915/explicit-homeomorphism-between-mathbbs2-and-mathbbp1c",
"openwebmath_score": 0.9886857271194458,
"openwebmath_perplexity": 408.38379381881975,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575152637946,
"lm_q2_score": 0.8539127585282745,
"lm_q1q2_score": 0.8390183982715941
} |
https://www.physicsforums.com/threads/probability-question-getting-x-heads-from-n-coin-tosses.801056/ | # Probability Question - Getting x heads from n coin tosses
1. Mar 3, 2015
### rede96
Hi I was hoping someone could help me with a simple probability question. I wanted to know how I could work out using just one coin the probability of getting x number of heads from n number of coin tosses.
Thanks
2. Mar 3, 2015
### Mentallic
What you're looking for is called a binomial distribution.
To get exactly x successes (we consider heads as being successes and tails as failures in this case) in n trials, the probability is
$$P={n \choose x}p^x(1-p)^{n-x}$$
where p is the probability of success which in this case is 1/2, giving us
$$P={n \choose x}\left(\frac{1}{2}\right)^x\left(\frac{1}{2}\right)^{n-x}={n \choose x}\left(\frac{1}{2}\right)^n$$
And if you'll notice, 1/2^n is the probability of getting all heads (or no heads) so what P represents is the probability of getting all heads, multiplied by the number of ways that you can choose x from n.
3. Mar 3, 2015
### rede96
Thank you very much for your help, but just because I am not too great with the notation you have written, can you show me an example.
Say the probability of getting 90 heads from 100 coin tosses?
From your formula is this (100 / 90) x 0.5^90?
4. Mar 3, 2015
### Mentallic
No no, that first part isn't a division.
$${n \choose x}=\frac{n!}{x!(n-x)!}$$
Where the exclamation marks mean factorial. e.g. $5!=1\times 2\times 3\times 4\times 5 = 120$ so
$${100 \choose 90} = \frac{100!}{90!(100-90)!} = \frac{100!}{90!\times10!}$$
and since
90! = 1*2*3*4*...*88*89*90
100! = 1*2*...*89*90*91*...*99*100
(the common factors are the ones from 1 up through 90)
Then in 100! / 90! we can cancel the first 90 factors, leaving us with 100! / 90! = 91*92*...*99*100
So finally,
$$\frac{100!}{90!10!} = \frac{91*92*...*99*100}{1*2*3*...*9*10}$$
So to find the chance of 90 heads out of 100 coin tosses, we have
$$\frac{91*92*...*99*100}{1*2*3*...*9*10}*\frac{1}{2^{100}}\approx 1.36*10^{-17}$$
or in other words, very, very unlikely. You're more likely to win the next two jackpot lotteries than to have this event occur.
Also keep in mind that what we've calculated is the chance to get exactly 90 heads. Not more, or less. Even getting exactly 50 heads is an unlikely scenario. You're very likely to get between 40 and 60 heads in 100 trials though.
If you want to calculate these results more easily, use a calculator:
http://www.wolframalpha.com/input/?i=(n+choose+x)+/+2^n,+n=100,+x=90
Just change the value of n and x in the calculation prompt to whatever you wish, and then choose approximate value in the substitution result.
5. Mar 3, 2015
### rede96
Ah ok. Thank you! Sorry Math was never my strong subject.
So using the link (thanks!): (binomial(n, x))/2^n ≈ 1.36554×10^-17, i.e. 17310309456440 / 1267650600228229401496703205376, which is approx 1.36 × 10^-17
So just out of curiosity, what if I just wanted to ask: what is the probability that I will get 90 or over from a 100 flips? Which would probably be more applicable in my situation.
6. Mar 3, 2015
### Mentallic
Well, since we already know how to calculate the probability of getting exactly x heads in 100 tosses (for any x) then the probability of getting 90 or more heads is going to be the sum of all of the singular probabilities.
Chance to get 90 or more heads = chance for 90 heads + chance for 91 heads + ... + chance for 100 heads
In statistics, we denote the probability P to have an event X occur as P(X) and particular events where, say, we want 90 heads are denoted by P(X=90), and events such as 90 or more heads is denoted by $P(90\leq X \leq 100)$.
Anyway, with that math lesson aside, I'll give you a link for wolfram to calculate those events.
For some reason it wouldn't give an approximate solution, so you'll have to plug the values in for yourself.
http://www.wolframalpha.com/input/?i=(sum(i=90+to+i=100)+(100+choose+i))+/+2^100
Where it should be
(sum(i=x to i=y) (n choose i)) / 2^n, n=100, x=90, y=100
which basically says, in n trials, count the probability of x up until y heads occurs. So in your case, you wanted to calculate 90 or more heads, so it's x=90 to y=100 heads.
7. Mar 5, 2015
### rede96
Thanks very much for your help and I don't mind the Math lesson. Wish I had more time to go back to school!
So I assume that if I wanted to find the probability of say it landing heads in the range of 40 to 60 times, I would just substitute i = 40 to i = 60 in the above formula?
This seems to make sense and I get a high probability of .999 when I work it out. So am guessing that is correct?
8. Mar 5, 2015
### PeroK
Yes, although .999 is too high. If you use Excel, I found the BINOMDIST function recently, which you might find useful. That gives P(40-60) = 0.965.
=BINOMDIST(60,100, 0.5, TRUE)-BINOMDIST(39,100, 0.5, TRUE)
TRUE means it's cumulative.
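For reference, the same numbers in Python (my addition; this assumes SciPy is installed for the cumulative part, while the exact counts only need math.comb):

```python
from math import comb
from scipy.stats import binom

n = 100
print(comb(n, 90) / 2 ** n)                                # ≈ 1.37e-17, exactly 90 heads
print(sum(comb(n, k) for k in range(90, n + 1)) / 2 ** n)  # ≈ 1.5e-17, 90 or more heads
print(binom.cdf(60, n, 0.5) - binom.cdf(39, n, 0.5))       # ≈ 0.965, between 40 and 60 heads
```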
9. Mar 8, 2015
### rede96
Yes, I think I did the 0.999 between 30 and 70 and not 40 and 60. But the formula you posted was a great help. Thanks. | 2018-01-23T16:32:49 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/probability-question-getting-x-heads-from-n-coin-tosses.801056/",
"openwebmath_score": 0.7843559384346008,
"openwebmath_perplexity": 662.3602570539105,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575152637946,
"lm_q2_score": 0.8539127566694177,
"lm_q1q2_score": 0.8390183964451603
} |
http://math.stackexchange.com/questions/352389/probability-that-at-least-1-out-of-10-random-numbers-between-1-ldots-1000 | Probability that at least $1$ out of $10$ random numbers between $1 \ldots 1000$ is not divisible by $7$
$10$ different numbers are randomly chosen from the numbers $1, 2, \ldots, 1000$. What is the probability that at least one of the chosen numbers is not divisible by $7$?
If there are $x$ numbers to choose from, there are $x/7$ numbers that are divisible by $7$. Probability of a number being divisible by $7$ is always $(x/7)/x = 1/7$.
• $A$ - at least one number is not divisible by $7$
• $\bar{A}$ - all numbers are divisible by $7$
These numbers were chosen independently, so $P(\bar{A}) = (\frac{1}{7})^{10}$ and so $P(A)=1-P(\bar{A})=1-(\frac{1}{7})^{10}$
Is this correct?
-
1 Answer
In the set $\{1, 2, \ldots, n\}$, there are precisely $\lfloor \frac{n}{7} \rfloor$ numbers that are divisible by $7$, not $\frac{n}{7}$.
Under the assumption that the same number can be chosen more than once, the rest seems correct.
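To make the correction concrete (my addition): there are $\lfloor 1000/7\rfloor = 142$ multiples of $7$ in $\{1,\dots,1000\}$, and since the $10$ numbers are distinct, the probability that all of them are divisible by $7$ is $\binom{142}{10}/\binom{1000}{10}$.

```python
from math import comb

multiples = sum(1 for k in range(1, 1001) if k % 7 == 0)
print(multiples)                                  # 142, i.e. floor(1000/7)

p_all_divisible = comb(142, 10) / comb(1000, 10)  # 10 distinct numbers, all multiples of 7
print(p_all_divisible)                            # tiny, on the order of 1e-9
print(1 - p_all_divisible)                        # probability at least one is not divisible by 7
```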
-
No, 10 different numbers are chosen. But since the probability is always $1/7$ no matter how many numbers I can choose from, the answer is the same, no? – mak Apr 5 '13 at 18:27
@mak: If I choose a random number between $2$ and $5$, then the probability that it is divisible by $7$ is also not $1/7$. Similarly, since $1000$ is not a multiple of $7$, here the probability is not exactly $1/7$. – TMM Apr 5 '13 at 18:32
@TMM: I choose the first number: 1000 numbers to choose from, probability of that number being divisble by 7 is $(1000/7)/1000=1/7$. I choose the second number: 999 numbers left to choose from, probability of that number being divisble by 7 is also $(999/7)/999=1/7$, and so on. My answer is still correct, no? (I know about rounding down the fraction) – mak Apr 5 '13 at 18:54
@mak It's not quite correct. Consider the set $\{0, 1, \ldots, 10\}$. Surely, the probability of choosing a number divisible by $7$ from that set is not $1/7$, right? – Sam Apr 5 '13 at 18:59
@mak Sorry, I meant the set $\{1, 2, \ldots, 10\}$. The point I'm trying to make is that the probability of choosing a number divisible by $7$ from that set is not $1/7$ because only one of the numbers (not $10/7$ of them) are divisible by $7$. Similarly, the probability of choosing a number divisible by $7$ from the set $\{1, 2, \ldots, 1000\}$ is not exactly $1/7$; it is $142/1000$. – Sam Apr 5 '13 at 19:28 | 2016-05-02T02:03:22 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/352389/probability-that-at-least-1-out-of-10-random-numbers-between-1-ldots-1000",
"openwebmath_score": 0.9428604245185852,
"openwebmath_perplexity": 111.88306790401724,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575183283514,
"lm_q2_score": 0.8539127529517043,
"lm_q1q2_score": 0.8390183954091572
} |
http://math.stackexchange.com/questions/118779/rigorous-definition-of-a-limit/118782 | # Rigorous definition of a limit
Suppose that $L$ is a real number and $f$ is a real-valued function defined on some interval $(b, \infty)$. We say that $\displaystyle{\lim_{x \to \infty} f(x) =L}$ if for every positive real number $\epsilon$, there is a real number $M$ such that if $x>M$ then $|f(x) -L| < \epsilon$.
Is this statement correct, or should it be amended to imply that a limit can exist at L (i.e. it is possible for a limit to exist at L), but does not have to be the limit of the function? For example, we can prove from this definition that $\displaystyle{\lim_{x \to \infty} \frac{4}{x^2}=0}$, but can't one also prove that $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.001$, $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.0001$ and other false claims by application of this definition?
-
What do you mean by a limit can exist at $L$? Also, you need to write out how you go about "proving" $\displaystyle \lim_{x \rightarrow \infty} \frac4{x^2} = -0.0001$ for us to show where you are making the mistake. – user17762 Mar 11 '12 at 4:32
Remember, the statement requires that *for every $\epsilon>0$* you can find such an $M$. – Alex Becker Mar 11 '12 at 4:32
The statement is correct.
(Note also that we usually talk about a limit existing at $a$ to refer to the point that the variable $x$ is approaching, rather than what the values of the function are approaching; to refer to what the function is approaching, we talk about the limit being $L$, or equaling $L$).
In your example, you cannot prove that $\lim\limits_{x\to\infty}\frac{4}{x^2} = -.0001$: given any $L\gt 0$, let $\epsilon = \frac{L}{2}$. Then for any $N\gt 0$, pick $x\gt\max\{N, \sqrt{\frac{8}{L}}\}$. Then $$\frac{8}{L}\lt x^2,\text{ therefore }\frac{4}{x^2}\lt\frac{L}{2}.$$ And therefore, we have that $$\left|L-\frac{4}{x^2}\right| = L - \frac{4}{x^2} \gt L-\frac{L}{2} = \epsilon.$$ We have therefore proven that if $L\gt 0$, then:
For every $N\gt 0$ there exists $x\gt N$ such that $|L-f(x)|\gt\frac{L}{2}$.
This proves that the limit definition cannot be satisfied, since the condition fails for at least one $\epsilon$.
If $L\lt 0$, pick $\epsilon=\frac{-L}{2}$ and a similar computation shows that you can always find $x$ greater than any given $N$ that will show the property is not satisfied.
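A small numerical illustration of this argument (my addition): for the true limit $L=0$ the choice $M=2/\sqrt\epsilon$ works, while for $L=-0.001$ the single choice $\epsilon=0.0005$ already rules out every $M$.

```python
import math

def f(x):
    return 4 / x ** 2

# L = 0: given eps, take M = 2 / sqrt(eps); then x > M forces 4/x^2 < eps
for eps in (1e-1, 1e-3, 1e-6):
    M = 2 / math.sqrt(eps)
    print(all(abs(f(x) - 0) < eps for x in (1.01 * M, 10 * M, 1000 * M)))   # True

# L = -0.001 with eps = 0.0005: |f(x) - L| = f(x) + 0.001 >= 0.001 > eps for every x,
# so no choice of M can make the definition hold
eps = 5e-4
print(any(abs(f(x) - (-0.001)) < eps for x in (10, 1e3, 1e6, 1e9)))         # False
```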
-
Think of it this way. A function gets "trapped" in a neighborhood $(L+\epsilon, L-\epsilon)$ of $L$ if eventually $f(x)$ is always in that neighborhood past some marker $M$ on the real line. To say the limit is $L$ is to say it gets trapped in arbitrarily small neighborhoods of $L$; if this were possible for two different points then we could just select neighborhoods of each small enough that they are disjoint - and then where will the points be!? Contradiction. Graphically,
[Figure: two candidate limits $L_1$ and $L_2$, each with a small $\epsilon$-bubble around it, chosen small enough that the bubbles are disjoint.]
Once the bubbles around $L_1$ and $L_2$ are small enough to be disjoint, you can't have points inside both of them simultaneously. This is why the limit must be unique.
-
VERY good explaination!!! – Mathemagician1234 Sep 25 '13 at 4:19
The limit is unique. Choose $\epsilon=0.000001$ and then try to find $M$ satysfying the required property for $L=-0.001$.
-
Is it possible to prove that a limit claim is true algebraically, using the definition and without trial and error? – j_z Mar 11 '12 at 4:34
@Jaydon: Yes; you can invoke theorems that establish the desired result. Or you can do things without guessing because you understand what is going on... – Arturo Magidin Mar 11 '12 at 4:40
If you prove, based on the definition, that the limit is $L$ for some number $L$, it is not possible that the limit is another number $H$. You can take as epsilon, for example, one third of the distance between $L$ and $H$; if the values of the function lie in that neighborhood of $L$ for all $x>M$, then for every $N>0$ there will exist values $x>N$ that are not in the neighborhood of $H$ with radius epsilon.
-
As a matter of fact, you cannot actually prove your other two results. The key is that my $\epsilon$ can be any positive real number, no matter how small, and my limit must satisfy the inequality for all such $\epsilon$. With sufficient playing around and reasoning, you will be able to pick an $\epsilon$ so that the assertions $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.001$ or $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.0001$ fail, as azarel describes in his answer.
More generally, one can show that if our pointwise limit exists for a real-valued function (such as our $\frac{4}{x^2}$), it is unique. We even have a much stronger statement: for any sequence in a Hausdorff space, there is at most one limit.
If the concept of a Hausdorff space (or a general topological space) is unrelated to your current knowledge and interest, the takeaway can be that when I have a sufficiently nice space (of which $\mathbb{R}$ is one), we will not be able to prove false claims such as the ones you describe. Of course, you don't have to take this unwillingly on faith from me! Try taking a case where a limit exists at a point, claiming it equals two different real numbers, and arriving at a contradiction from your definition of a limit.
- | 2015-04-21T01:32:42 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/118779/rigorous-definition-of-a-limit/118782",
"openwebmath_score": 0.9413112998008728,
"openwebmath_perplexity": 151.20016980308716,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575142422756,
"lm_q2_score": 0.8539127548105611,
"lm_q1q2_score": 0.8390183937464386
} |
https://math.stackexchange.com/questions/1971788/given-the-partial-sum-of-a-series-how-do-i-find-a-n | Given the partial sum of a series, how do I find $a_n$?
If the $n$th partial sum of a series $$\sum_{n=1}^\infty a_n$$ is $s_n=8-n6^{-n}$, find $a_1$, $a_n$, and $$\sum_{n=1}^\infty a_n$$
What I did:
$$a_1=8-\frac { 1 }{ 6 } =\frac { 47 }{ 6 }$$
$$\sum _{ n=1 }^{ \infty }{a_n} =\lim _{ n\rightarrow \infty }{ 8-\frac {n}{6^n}}=8$$
Now, I read in Stewart's Calculus that $a_n=s_n-s_{n-1}$, so I did tried to find it by doing:
$$\lim _{ n\rightarrow \infty }{ 8-\frac {n-1}{6^{n-1}}}=\lim _{ n\rightarrow \infty }{ 8-\frac {n-1}{6^n\cdot\frac { 1 }{ 6 }}}=8$$
Then $s_n-s_{n-1}=0$? However, this doesn't seem to be correct. What am I doing wrong?
I have tried my textbook, Khan Academy, and even a few questions on this site such as this one, but I still do not understand what needs to be done. Any help/guidance would be appreciated.
• Why the down-vote? I followed the guidelines of this website in posting this question. I posted a legitimate question. I stated what I already tried, what I don't understand, and what resources I have utilized to attempt to figure it out on my own. – Cherry_Developer Oct 16 '16 at 23:37
• I have the impression that there are some users who downvote questions that in their opinion shouldn't have needed to be asked, or that contain errors that they think are so obviously wrong that they shouldn't have been made. It is, as you say, a perfectly good question, and I've upvoted it. – Brian M. Scott Oct 16 '16 at 23:47
The partial sum is $$s_n=\sum_{k=1}^na_k\tag{1}$$ From $(1)$, we get that $a_1=s_1$ and for $n\gt1$, $$a_n=s_n-s_{n-1}\tag{2}$$ If we know that $s_n=8-n6^{-n}$, then $a_1=\frac{47}6$ and for $n\ge2$, \begin{align} a_n &=(n-1)6^{1-n}-n6^{-n}\\ &=(5n-6)\,6^{-n}\tag{3} \end{align} Furthermore, \begin{align} \sum_{k=1}^\infty a_k &=\lim_{n\to\infty}\sum_{k=1}^n a_k\\ &=\lim_{n\to\infty}s_n\\[3pt] &=\lim_{n\to\infty}\left(8-n6^{-n}\right)\\[3pt] &=8\tag{4} \end{align}
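A quick exact check of these formulas (my addition), using rational arithmetic:

```python
from fractions import Fraction

def s(n):                        # the given partial sums s_n = 8 - n * 6^(-n)
    return Fraction(8) - Fraction(n, 6 ** n)

def a(n):                        # a_1 = 47/6, and a_n = (5n - 6) / 6^n for n >= 2
    return Fraction(47, 6) if n == 1 else Fraction(5 * n - 6, 6 ** n)

print(all(s(n) == sum(a(k) for k in range(1, n + 1)) for n in range(1, 15)))  # True
print(float(s(30)))              # ≈ 8.0, consistent with the sum of the series being 8
```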
• Thank you! For some reason, it completely slipped my mind that $s_n - s_{n-1}$ just meant to simply subtract without involving take their limits first. – Cherry_Developer Oct 16 '16 at 23:31
You took $\lim_{n\rightarrow\infty}s_{n-1}$ and subtracted that from $\lim_{n\rightarrow\infty}s_n$. What you should do instead is just take $s_n-s_{n-1}$. That is, $a_n=\left(8-\frac{n}{6^n}\right)-\left(8-\frac{n-1}{6^{n-1}}\right)=\frac{5n-6}{6^n}$.
It is not an equality of $\;s_n-s_{n-1}=0\;$ but of their limit! Why not correct? The series converges$\;\iff \lim\limits_{n\to\infty}s_n=S\;$ is finite (and exists, of course), and thus
$$\lim_{n\to\infty} a_n=\lim_{n\to\infty}(s_n-s_{n-1})=S-S=0$$
and it is a well known necessary, though not sufficient, condition for a series to converge that its general sequence's limit is zero.
• So, am I correct in saying that $a_n=0$? – Cherry_Developer Oct 16 '16 at 23:12
• @Cherry_Developer No. What is true is that $\;\lim\limits_{n\to\infty} a_n=0\;$ . It's not the same ... – DonAntonio Oct 16 '16 at 23:13
• I apoligize, but now I am very confused. How would I use what I am given and have already figured out to find $a_n$? – Cherry_Developer Oct 16 '16 at 23:17
• @Cherry_Developer You already found $\;a_1\;$ , you can easily find out $\;a_n\;$ doing $\;s_n-s_{n-1}\;$ and you've already found out what the sum of the (comvergent, of course) series is. What else do you want? – DonAntonio Oct 16 '16 at 23:20 | 2019-08-21T03:43:37 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1971788/given-the-partial-sum-of-a-series-how-do-i-find-a-n",
"openwebmath_score": 0.8844784498214722,
"openwebmath_perplexity": 287.35727372700256,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575137315161,
"lm_q2_score": 0.8539127510928476,
"lm_q1q2_score": 0.8390183896574273
} |
http://math.stackexchange.com/questions/55223/how-many-solutions-are-there-to-x2-equiv-1-pmod2a-when-a-geq-3 | # How many solutions are there to $x^2\equiv 1\pmod{2^a}$ when $a\geq 3$?
I know there is a result that says $x^2\equiv 1\pmod{p}$ has only $\pm 1$ as solutions for $p$ an odd prime. Experimenting with $p=2$ shows that this is no longer the case. I ran a few tests on WolframAlpha, and noticed a pattern that there seem to be $4$ solutions to $x^2\equiv 1\pmod{2^a}$ when $a\geq 3$, and they are $\pm 1$ and $2^{a-1}\pm 1$. This works fine for the first several cases, but I'm wondering how you would actually prove that these are the only 4 solutions?
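A brute-force check of the observed pattern (my addition, in place of the WolframAlpha experiments):

```python
for a in range(3, 11):
    m = 2 ** a
    sols = sorted(x for x in range(m) if (x * x) % m == 1)
    expected = sorted({1, m - 1, m // 2 - 1, m // 2 + 1})   # ±1 and 2^(a-1) ± 1 (mod 2^a)
    print(a, sols == expected, sols)
# prints True for every a, with exactly the four solutions 1, 2^(a-1)-1, 2^(a-1)+1, 2^a-1
```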
-
+1 for showing some thought. A well-posed question. – Ross Millikan Aug 3 '11 at 0:11
The other direction is easy but enlightening. $(2^{a-1}\pm 1)^2=2^a\pm 2\cdot2^{a-1}+1\equiv 1 \pmod{2^a}$ and it shows the other factor of $2$ comes from the cross term in the square, which is why you don't get any more as $a$ increases. – Ross Millikan Aug 3 '11 at 1:10
In my opinion, Hensel's Lemma is a bit of overkill here. Anyway, when I teach undergraduate number theory I emphasize the connections to undergraduate algebra. Here you are trying to find the elements of order $2$ in the finite abelian group $U(2^a) = (\mathbb{Z}/2^a \mathbb{Z})^{\times}$, so it would be very helpful to know how this group decomposes as a product of cyclic groups.
This group structure is usually computed around the same time one shows that $U(p^a)$ is cyclic for all odd $p$. The answer is that for all $a \geq 3$, $U(2^a) \cong Z_2 \times Z_{2^{a-2}}$, i.e., it is isomorphic to the product of a cyclic group of order $2$ and a cyclic group of order $2^{a-2}$. See e.g. Theorem 1 here for a proof.
Can you see how to use this result to prove your conjecture?
-
I just fixed a small typo in your answer (adding a period). – Akhil Mathew Aug 3 '11 at 0:15
Thanks Pete L. Clark. I'm not too familiar with this, but here's what I think I gathered. I want to count all elements of order $2$ in $Z_2\times Z_{2^{a-2}}$? This is the same as the number of elements $(1,b)$ where $b$ has order $2$ in $Z_{2^{a-1}}$? But isn't $b=x^{2^{a-3}}$, where $x$ is the generator of $Z_{2^{a-2}}$ the only such element? Shouldn't I be counting 3 elements of order 2? – Joe Swanson Aug 3 '11 at 0:22
@Joe: close. You want all the elements of order at most $2$ in the product (of course the only element of order $1$ is the identity). It turns out that an element $(x,y)$ in a direct product has order at most $2$ iff both $x$ and $y$ have order at most $2$. – Pete L. Clark Aug 3 '11 at 0:51
Oops, I was just looking at elements of order $2$, not at most $2$. The 4 elements would be $(1,\pm 1)$ and $(1, \pm x^{2^{a-3}})$, so there are exactly 4 elements of order at most 2 in $U(2^a)$. Thanks, I really like this view of the problem. – Joe Swanson Aug 3 '11 at 0:59
@Joe: Not quite. When we switched to the notation $\mathbf{Z}_2\times\mathbf{Z}_{2^{a-2}}$ the group operation became componentwise addition. So the four elements of order at most 2 are $(0,0)$, $(1,0)$, $(0,2^{a-3})$ and $(1,2^{a-3})$. The isomorphism maps these back to the multiplicative group. You are correct in that in a cyclic group of even order there is only a single element of order two. But here the group is a direct sum/product of two cyclic subgroups, and you have the option to vary both components. – Jyrki Lahtonen Aug 3 '11 at 7:00
HINT $\rm\$ It's easy. $\rm\ d\ |\ x-1,\:x+1\ \Rightarrow\ d\ |\ x+1-(x-1) = 2\:.\:$ Thus if $\rm\: 2^{\:a}\ |\ (x-1)\:(x+1)\:$ there are only a few ways to distribute the factors of $\:2\:$ such that $\rm\:gcd(x-1,\:x+1)\:$ is at most $2\:.$
-
Thanks Bill Dubuque, that's pretty straightforward. – Joe Swanson Aug 3 '11 at 0:41 | 2016-05-05T20:51:20 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/55223/how-many-solutions-are-there-to-x2-equiv-1-pmod2a-when-a-geq-3",
"openwebmath_score": 0.837744414806366,
"openwebmath_perplexity": 109.62300000304118,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575129653769,
"lm_q2_score": 0.8539127510928476,
"lm_q1q2_score": 0.8390183890032112
} |
https://www.physicsforums.com/threads/why-is-empty-set-open.296626/ | Why is Empty set open?
1. Mar 2, 2009
soopo
1. The problem statement, all variables and given/known data
If $$\emptyset$$ has no elements, then $$\emptyset$$ is open.
3. The attempt at a solution
If $$\emptyset$$ is closed, then $$\emptyset$$ has at least an element.
This is a contradiction, so $$\emptyset$$ must be open.
I am not sure about the validity of my attempt.
2. Mar 2, 2009
CompuChip
It is also closed.
There is nothing to prove, really. It's open by definition: the axioms of a topology require that (1) the empty set and the whole space are open, (2) arbitrary unions of open sets are open, and (3) finite intersections of open sets are open.
Actually it also follows from 2 (take an empty union) or 3 (take two disjoint open sets, if possible).
Or, you can take the "analysis" definition: S is open if for all x in S there is a neighborhood of x contained in S, which is vacuously true for the empty set.
Note that open and closed are not mutually exclusive: a set can be open, closed, neither or both. Also note that being closed does not imply being non-empty.
3. Mar 2, 2009
soopo
Do you mean that an empty union is open?
4. Mar 2, 2009
lanedance
a union of 2 disjoint sets, an empty union, is the empty set... so both closed & open by the definition given - clopen
5. Mar 3, 2009
CompuChip
By empty union, I mean the union of no sets at all. But the argument relies on "arbitrarily many elements" being interpreted as including "no elements at all".
The intersection of two disjoint sets is empty. The union of 2 disjoint sets is ... the union of two disjoint sets
6. Mar 3, 2009
lanedance
woops yeah good catch - wandered off there, cheers
7. Mar 3, 2009
HallsofIvy
Staff Emeritus
WHY the empty set is open depends on what your definition of "open" is. As CompuChip said, the most general definition of a topological space defines a "topology" for a set as being a collection of subsets satisfying certain conditions - among those conditions is that it include the empty set - and any set in that "topology" is open.
But you may be thinking in terms of a "metric space" where we are given a "metric function", d(x,y), and use that to define the "neighborhood of p of radius $\delta$", $N_\delta(p)= \{q \mid d(p, q)< \delta\}$, and define an "interior point", p, of set A to be a point in A such that for some $\delta$, $N_\delta(p)$ is a subset of A.
Even then there are two ways to define open set. Most common is "a set, A, is open if every member of A is an interior point of A" which can be expressed more formally as "if p is in A, then p is an interior point of A". If A is empty then the "hypothesis", "if p is in A", is false and so, logically, the statement is true: A is an open set.
Another way to define "open set" is to define p to be an "exterior point" of set A if it is an interior point of the complement of A and define p to be a "boundary point" of set A if and only if it is neither an interior point nor an exterior point of A. Now we can define a set A to be open if it contains NONE of its boundary points. (Here we could also define a set to be "closed" if it contains ALL of its boundary points. Remember how in Pre-Calculus, we say that intervals are "open" or "closed" depending upon whether they include their endpoints?)
It is easy to see that every point in the space is an exterior point of the empty set, so it has NO boundary points. That given, the statement "it contains none of its boundary points" is (vacuously) true and so the empty set is open. Because it has no boundary points it is also true that the empty set contains all (= none) of its boundary points, and so the empty set is both closed and open.
8. Mar 3, 2009
soopo
I will try to summarize the different ways to define an open set
1. By interior point and the neighborhood of p of radius $\delta$ (Calculus): If p is in A, p is an interior point of A.
2. A set is open if every member of A is an interior point of A: if p is in A, then p is an interior point of A. If A is empty and if p is not in A, then A is an open set.
3. A set A is open if it contains NONE of its boundary points.
9. Mar 9, 2009
Focus
If you make a statement like $$\forall x \in \emptyset$$ then it is true (trivially). So when you say $$\forall x \in \emptyset \quad x<x$$ is true, just as $$\forall x \in \emptyset \exists B_{\epsilon} \subset \emptyset \text{ s.t. }x \in B_\epsilon$$. The empty set possess more properties than most people realise ;)
10. Mar 10, 2009
HallsofIvy
Staff Emeritus
This is assuming a metric topology.
11. Mar 10, 2009
Focus
It would be a bit circular to try and justify why the empty set is open from topology :shy:
12. Mar 10, 2009
HallsofIvy
Staff Emeritus
No, it wouldn't: the empty set is open (in a general topological space) because the definition of a topology requires that it include the empty set. That was what CompuChip said. Nothing circular about that. | 2017-10-17T17:17:28 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/why-is-empty-set-open.296626/",
"openwebmath_score": 0.6999772787094116,
"openwebmath_perplexity": 362.75880027054114,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98255751934987,
"lm_q2_score": 0.8539127455162773,
"lm_q1q2_score": 0.8390183889757102
} |
https://www.physicsforums.com/threads/rotation-matrix-about-an-axis-from-the-origin-to-1-1-1.462461/ | # Rotation matrix about an axis from the origin to (1,1,1)
## Homework Statement
Find the transformation matrix R that describes a rotation by 120° about an axis from the origin through the point (1,1,1). The rotation is clockwise as you look down the axis toward the origin.
## Homework Equations
Rotations about the z-axis are given by
$$R_{z}(\alpha) = \left( \begin{array}{ccc} cos(\alpha) & sin(\alpha) & 0 \\ -sin(\alpha) & cos(\alpha) & 0 \\ 0 & 0 & 1 \end{array} \right)$$
whereas rotations about the x-axis are given by
$$R_{x}(x) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & cos(x) & sin(x) \\ 0 & -sin(x) & cos(x) \end{array} \right)$$.
## The Attempt at a Solution
My strategy in solving this problem was to rotate the coordinate system in such a way as to align the z-axis along the axis extending from the origin to (1,1,1). Once this was done, I was to rotate the system as a regular rotation in a two-dimensional x-y system.
The first rotation should be such that the x-axis is aligned perpendicular to the x-y projection of $$\hat{x} + \hat{y} + \hat{z}$$, or perpendicular to $$\hat{x} + \hat{y}$$. This was done with a rotation about the z-axis, more specifically $$R_{z}(\frac{3 \pi}{4})$$.
I intended the second rotation to be about the x-axis to orient the z-axis as desired. Working now with primed coordinates after the previous rotation, the desired axis lay in the y'-z plane. The coordinates of the original vector <1, 1, 1> in the primed system were $$\sqrt{2} \hat{y} + \hat{z}$$. Therefore, I wanted to rotate about the x-axis clockwise by an angle of $$Cos^{-1}(\frac{1}{\sqrt{3}})$$. However, the way my matrices in section 2 were set up should have all rotations going counterclockwise, so I wanted my rotation matrix to be $$R_{x}(2 \pi - Cos^{-1}(\frac{1}{\sqrt{3}}))$$.
Now that the z-axis was properly aligned, I could rotate about it, so my final rotation matrix should be $$R_{z}(\frac{2 \pi}{3})$$.
If my logic is correct then the final rotation should be
$$R = R_{z}(\frac{2 \pi}{3}) * R_{x}(2 \pi - Cos^{-1}(\frac{1}{\sqrt{3}})) * R_{z}(\frac{3 \pi}{4})$$.
That said, I know my answer should be
$$R = \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right)$$
however this is not what I am getting. I am getting something very messy. Where have I gone wrong?
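One quick sanity check on this target matrix: it cyclically permutes the coordinate axes and leaves the rotation axis fixed,
$$R\hat{x} = \hat{y}, \qquad R\hat{y} = \hat{z}, \qquad R\hat{z} = \hat{x}, \qquad R\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$
which is exactly what a 120° rotation about the (1,1,1) direction should do.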
## Answers and Replies
vela
Staff Emeritus
Homework Helper
I didn't closely check the logic of your other rotations, but I didn't see anything obviously wrong when I skimmed over your post. After the 120-degree rotation about the z'' axis, I think you still need to undo the first two rotations to get back to the original coordinates.
I'm not sure I understand why I need to undo the first rotations. If I'm looking for the rotation matrix which represents a series of rotations shouldn't I just multiply the individual matrices?
vela
Staff Emeritus
Homework Helper
Think about this. The point (1,1,1) lies on the axis of rotation, so it should map to itself. Your scheme, however, would map it to the z-axis.
(I tried calculating your matrices in Mathematica, and it doesn't seem to map (1,1,1) correctly. It seems your middle matrix rotates in the wrong direction.)
Last edited:
I see that my second rotation did go the wrong way. After correction, however, my answer is still bogus. Forgive me if I'm wrong, it's very possible that I've completely misunderstood/forgotten the concept, but shouldn't the ultimate rotation matrix be frame independent? Shouldn't the axis of rotation be z" since in the double-primed frame the goal was to have z" be the axis of rotation?
vela
Staff Emeritus
Homework Helper
The rotation itself is coordinate-independent, but the particular matrix which represents the rotation depends on the basis/coordinates you've chosen.
Let's take the vector (1,1,1). The rotation should leave it unchanged. This is what you get when you apply the first two rotations:
$$R_{x'}(\theta_2)R_z(\theta_1)\begin{pmatrix} 1 \\ 1 \\ 1\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ \sqrt{3}\end{pmatrix}$$
So that's what you wanted. It's lined up with the z''-axis. Now you apply the final rotation
$$R_{z''}(\theta_3)\begin{pmatrix}0 \\ 0 \\ \sqrt{3}\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ \sqrt{3}\end{pmatrix}$$
As expected, the vector is unchanged since it lies along the axis of rotation. The thing is, you want the final answer to be (1,1,1), right? You need to transform the vector's coordinates back to be in terms of the original set of axes.
Last edited:
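In other words, once everything is expressed back in the original (unprimed) coordinates, the matrix you are after is the conjugation
$$R = \bigl(R_{x'}(\theta_2)\,R_{z}(\theta_1)\bigr)^{-1}\,R_{z''}(\theta_3)\,R_{x'}(\theta_2)\,R_{z}(\theta_1).$$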
Thank you very much, I appreciate your help. I feel as though I am very close to solving this, however the answer seems to be eluding me for some reason. On Mathematica, I have defined the following
Code:
z[a_] := {{Cos[a], Sin[a], 0}, {-Sin[a], Cos[a], 0}, {0, 0, 1}}
which is the rotation matrix function about the z-axis, and
Code:
x[a_] := {{1, 0, 0}, {0, Cos[a], Sin[a]}, {0, -Sin[a], Cos[a]}}
which is the rotation matrix function about the x-axis.
I have defined a third matrix as the dot product x.z with the desired angles plugged into the matrix functions for convenience:
Code:
a = x[ArcCos[1/Sqrt[3]]].z[3*Pi/4]
The angle of rotation about the x-axis has been corrected from above. Multiplying this with the vector (1,1,1) we arrive at the expected result:
Code:
In[16]:= a.{1, 1, 1}
Out[16]= {0,0,Sqrt[3]}
The final rotation about the z" axis also acts as it's supposed to:
Code:
In[18]:= z[2*Pi/3].(a.{1, 1, 1})
Out[18]= {0,0,Sqrt[3]}
I assumed, therefore, my answer was correct. However, after attempting to revert back to my original coordinate system the rotation matrix was once again bogus. The line
Code:
answer = Inverse[a].z[2*Pi/3].a
produced the following matrix:
$$\left(
\begin{array}{ccc}
-\frac{\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}}{\sqrt{6}}-\frac{1}{6} & \frac{5}{6}-\frac{\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}}{\sqrt{6}} & \sqrt{\frac{2}{3}} \left(\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}\right)+\frac{1}{3} \\
\frac{1}{3}-\frac{\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}}{\sqrt{6}} & \frac{1}{3}-\frac{\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}}{\sqrt{6}} & \sqrt{\frac{2}{3}} \left(\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}\right)+\frac{1}{3} \\
1 & 0 & 0
\end{array}
\right)$$
Attempting to undo the coordinate change by flipping the signs of the angles of the initial rotations and applying them in reverse order gave me the same matrix:
Code:
z[-3*Pi/4].x[-ArcCos[1/Sqrt[3]]].z[2*Pi/3].a
$$\left(
\begin{array}{ccc}
-\frac{\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}}{\sqrt{6}}-\frac{1}{6} & \frac{5}{6}-\frac{\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}}{\sqrt{6}} & \sqrt{\frac{2}{3}} \left(\frac{1}{2 \sqrt{6}}-\frac{\sqrt{\frac{3}{2}}}{2}\right)+\frac{1}{3} \\
\frac{1}{3}-\frac{\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}}{\sqrt{6}} & \frac{1}{3}-\frac{\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}}{\sqrt{6}} & \sqrt{\frac{2}{3}} \left(\frac{\sqrt{\frac{3}{2}}}{2}+\frac{1}{2 \sqrt{6}}\right)+\frac{1}{3} \\
1 & 0 & 0
\end{array}
\right)$$
Even stranger, this matrix rotated (1,1,1) as it should:
Code:
In[29]:= answer.{1, 1, 1}
Out[29]= {1, 1, 1}
I'm not sure how to proceed from here. Thanks again for your help.
Last edited:
For some reason in the previous post the image of the matrices is not rendering. I do not know how to fix this. Sorry.
The following is a test:
$$\left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 2 \\ 1 & 1 & 1 \end{array} \right)$$
Hmm... Mathematica seems to generate compatible LaTeX; I don't know what went wrong up there. Perhaps the LaTeX code was simply too complex.
vela
Staff Emeritus
Homework Helper
You got the right answer. Try
Code:
Simplify[answer]
in Mathematica.
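One further cross-check (assuming the built-in RotationMatrix function, which uses the counterclockwise, right-hand-rule convention, is available in your Mathematica version):
Code:
RotationMatrix[2 Pi/3, {1, 1, 1}] // Simplify
(* expected output: {{0, 0, 1}, {1, 0, 0}, {0, 1, 0}}, the matrix quoted in the first post *)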
Wow, that's a powerful little statement there. Thank you very much for your help.
I have found a way to rotate the coordinate system about any axis through the origin (given its direction cosines or a point on it) by any angle. I had to make five rotations (had to multiply five matrices using wxMaxima) in order to get that final matrix. All five rotations were about either the x, y or z axis. Checked and worked for the example mentioned above.
To find the transformation matrix for rotation by an angle 'theta' around an axis 'I' passing through the origin and a point (a, b, c), do the following steps (you can use the final result directly of course!).
Denote the coordinate column vector by X and the rotational matrix by R.
1st Rotation: Rotate about z-axis so that x, I and z are coplanar. X1 = R1 . X
2nd Rotation: Rotate about the y1-axis so that z1 coincides with I. X2 = R2 . X1
3rd Rotation: Rotate about the z2-axis by an angle theta. X3 = R3 . X2
4th Rotation: Reverse step 2. X4 = R4 . X3
5th Rotation: Reverse step 1. X5 = R5 . X4
The desired NEW coordinate system X' is itself X5. The desired total rotational matrix is R.
X' = R . X
R = R5 . R4 . R3 . R2 . R1
The matrices are attached to this comment. Make sure you extract the zip file first then click the html file.
The derivation of the final total rotational matrix is kind of complicated. I feel that there could be a more elegant way of finding it.
NOTE: I have found this result on my own without consulting any reference. I checked it and it worked. Any comment is welcomed.
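A minimal sketch of the same recipe in Mathematica (the helper names Rz, Ry and axisRotation are illustrative, standard counterclockwise rotations about the z- and y-axes are assumed, and the azimuth angle is undefined if the axis is the z-axis itself):
Code:
(* align the axis with z (two rotations), rotate about z, then undo the
   two alignment rotations; this is the five-step composition described above *)
Rz[a_] := {{Cos[a], -Sin[a], 0}, {Sin[a], Cos[a], 0}, {0, 0, 1}};
Ry[a_] := {{Cos[a], 0, Sin[a]}, {0, 1, 0}, {-Sin[a], 0, Cos[a]}};
axisRotation[{ax_, ay_, az_}, gamma_] :=
 Module[{phi = ArcTan[ax, ay], theta = ArcCos[az/Sqrt[ax^2 + ay^2 + az^2]]},
  Simplify[Rz[phi].Ry[theta].Rz[gamma].Ry[-theta].Rz[-phi]]]

axisRotation[{1, 1, 1}, 2 Pi/3]
(* {{0, 0, 1}, {1, 0, 0}, {0, 1, 0}}; matches the 120-degree example above *)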
#### Attachments
• Rotational Matrix.zip
99 KB · Views: 320 | 2022-05-22T11:54:56 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/rotation-matrix-about-an-axis-from-the-origin-to-1-1-1.462461/",
"openwebmath_score": 0.7241307497024536,
"openwebmath_perplexity": 807.1589088161771,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982557512709997,
"lm_q2_score": 0.8539127510928476,
"lm_q1q2_score": 0.8390183887851391
} |
https://math.stackexchange.com/questions/1849613/evaluate-lim-x-to-0-left-frac1x-frac1x-right-possible?noredirect=1 | # Evaluate $\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right)$ (possible textbook mistake - James Stewart 7th)
I was working on a few problems from James Stewart's Calculus book (seventh edition) and I found the following:
Find
$$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right)$$
Since the limit involves $|x|$, and $|x| = -x$ for any value of $x$ less than zero, we have
$$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right) = \lim_{x \to 0^-} \frac{2}{x}$$
So far so good. Continuing,
$$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right) = \lim_{x \to 0^-} \frac{2}{x} = - \infty$$
since the denominator approaches $0$ through negative values while the numerator stays at $2$. When checking the textbook's answer I found that it says the limit does not exist because the denominator approaches $0$ while the numerator does not.
Am I missing something or should the limit really be $- \infty$ ?
• Saying the limit is $-\infty$ is (roughly) the same thing as saying the limit doesn't exist. – Zain Patel Jul 5 '16 at 10:48
• You may want to read this: math.stackexchange.com/questions/1782077/… – Zain Patel Jul 5 '16 at 10:49
• @ZainPatel got ya, thank you, best regards! – bru1987 Jul 5 '16 at 10:52
• Saying the limit doesn't exist isn't the same as saying the limit is $\;-\infty\;$ . Perhaps one could say "the limit doesn't exist finitely". – user351910 Jul 5 '16 at 11:13
• Both answers are correct, but your answer is better (since it gives information about why the limit does not exist). – user84413 Jul 5 '16 at 17:13
Saying that a limit is equal to $\infty$ is a mathematical shorthand (amongst some mathematicians, at least). For instance, $\lim_{x \to 0} \frac{1}{x^2} = \infty$ is shorthand for:
Given any real number $M$, there is a real $\delta > 0$ (depending on $M$) such that $\frac{1}{x^2} > M$ for all $x$ satisfying $0 < |x| < \delta$.
It is usually advised that beginners avoid using $\infty$ since it leads to careless or wrong manipulations of the symbol all too often.
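For the limit in the question, the corresponding statement (with $-\infty$ and a one-sided approach) would be
$$\lim_{x \to 0^-} \frac{2}{x} = -\infty \iff \text{for every } M < 0 \text{ there is a } \delta > 0 \text{ such that } \frac{2}{x} < M \text{ whenever } -\delta < x < 0,$$
which holds here with, e.g., $\delta = -2/M$.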
bru, I think that the problem is just about terminology. Your derivation is correct, but it is likely that what Stewart is claiming is (I guess) that a limit that goes to $-\infty$ or to $+\infty$ on only one side (as in this example, where the limit is only from the left), is "non-existing". Otherwise, the statement "the limit does not exist because the denominator approaches $0$ while the numerator does not" would be simply nonsense. | 2019-12-15T10:56:09 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1849613/evaluate-lim-x-to-0-left-frac1x-frac1x-right-possible?noredirect=1",
"openwebmath_score": 0.9322503209114075,
"openwebmath_perplexity": 447.3698472303104,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575132207567,
"lm_q2_score": 0.8539127492339909,
"lm_q1q2_score": 0.8390183873948497
} |
https://math.stackexchange.com/questions/2404157/how-many-subsets-of-size-k-exist-such-that-no-pair-of-elements-is-r | # How many subsets of size k exist such that no pair of elements is $R$
I saw this problem today:
In a round table there are 12 knights. Each pair of contiguous knights are enemies. How many different ways can 5 knights be chosen such that no pair of knights are enemies?
So I tried to count how many ways there are to pick 5 knights out of 12 and subtract the number of ways 5 knights can be picked such that there is at least one pair of enemies. I think I did the second count wrong because I got a negative number. I figured maybe some abstraction would help, and I arrived at a generalization that is much more interesting:
Given a finite set $A$ of size $n$ and a symmetric relation $R$ in $A$, how many different subsets $B$ of size $k<n$ are there such that no pair of elements in $B$ are in $R$?
It is equivalent to counting how many subsets $B$ there are such that every pair of elements in $B$ is in $R^c$. In this case, the relation would be Enemies (it is symmetric) and $A$ the set of the 12 knights. I'd like some help on tackling the problem, and on working on the generalization. Thanks.
In a round table, there are $12$ knights. Each pair of contiguous knights are enemies. How many ways can $5$ knights be chosen so that no pair of knights are enemies?
To rephrase, how many ways can we select five of the twelve knights at the table so that no two of them are seated in adjacent seats?
We first solve the problem for a line, then subtract those cases in which people at both ends of the line are selected to ensure that no two adjacent knights are selected when the ends of the line are joined to form a circle.
We arrange seven blue and five green balls so that no two green balls are adjacent. Place seven blue balls in a row. This creates eight spaces, six between successive blue balls and two at the ends of the row. $$\square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square$$ To ensure that no two green balls are adjacent, we must choose five of these eight spaces for the green balls, which can be done in $$\binom{8}{5}$$ ways.
However, we must exclude those arrangements in which both ends of the line are occupied by green balls since joining the ends of the lines together would form a circle in which two of the selected balls are adjacent. If both ends of the row are filled with green balls, we are left with six spaces in which we place a green ball. $$\color{green}{\bullet} \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{green}{\bullet}$$ To ensure the green balls are separated, we must choose three of these spaces, which can be done in $$\binom{6}{3}$$ ways.
Hence, the number of ways five knights can be selected from the twelve knights at the round table so that no two of them are adjacent is $$\binom{8}{5} - \binom{6}{3} = 56 - 20 = 36$$
In how many ways can $k$ objects be selected from $n$ objects arranged in a circle if no two of the $k$ objects are adjacent.
We begin by arranging $n - k$ blue and $k$ green balls in a row so that no two green balls are adjacent, then subtract those cases in which green balls occupy both ends of the row so that the green balls do not become adjacent when we join the ends of the row to form a circle.
Reasoning as before, placing $n - k$ blue balls in a row creates $n - k + 1$ spaces, $n - k - 1$ between the $n - k$ successive blue balls and two at the ends of the row. To ensure that no two green balls are adjacent, we must select $k$ of these $n - k + 1$ spaces, which can be done in $$\binom{n - k + 1}{k}$$ ways. Notice that this is zero when $k > n - k + 1$.
From these, we must exclude those cases in which green balls occupy both ends of the row. If green balls occupy both ends of the row, we are left with $n - k - 1$ spaces. To ensure that no two green balls are adjacent, we must choose $k - 2$ of these spaces for the remaining green balls, which can be done in $$\binom{n - k - 1}{k - 2}$$ ways.
Hence, the number of ways that $k$ objects can be selected from $n$ objects arranged in a circle so that no two of the $k$ objects are adjacent is $$\binom{n - k + 1}{k} - \binom{n - k - 1}{k - 2}$$
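As a quick sanity check of this closed form, here is a brute-force count for the original $n = 12$, $k = 5$ case (a sketch in Mathematica; adjacentQ is just an illustrative helper name):

    adjacentQ[s_, n_] := AnyTrue[Subsets[s, {2}], #[[2]] - #[[1]] == 1 || #[[2]] - #[[1]] == n - 1 &]
    Count[Subsets[Range[12], {5}], s_ /; ! adjacentQ[s, 12]]  (* brute-force count over all 5-subsets of a 12-cycle *)
    Binomial[8, 5] - Binomial[6, 3]                           (* closed form *)

Both should give $56 - 20 = 36$.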
Think of your problem as a problem on a graph $G=(A,R)$: you are simply counting the number of independent sets in $G$ having size $k$. This is indeed an interesting problem, one that has a well established body of research around it. If you are not familiar with graph theory, I would highly recommend checking it out (happy to explain more).
• Please do, I'm a bit familiar with graph theory only because i've done competitive programming but nothing more. – Hyperbolic Marraquetoid Aug 24 '17 at 2:44
• No problem. For our purposes, a graph consists of a set of vertices (in this case the set $A$) and a set of edges, which are distinct two-element subsets of $A$. Think of drawing $|A|$ dots on a piece of paper, giving each of them a label corresponding to an element in A, and drawing a line between two dots if their corresponding labels form a pair in $R$. Your original Knight example gives you a "cycle" [link] en.m.wikipedia.org/wiki/Cycle_graph on 12 vertices. An independent set is just a set of dots (vertices) such that no two dots have a line between them. – mm8511 Aug 24 '17 at 2:53
• so we have an undirected cycle graph. How can I attack this problem? I'd also like to read more about the independent set problem, where can I read about it? – Hyperbolic Marraquetoid Aug 24 '17 at 2:56
• label the vertices of the 12-cycle in clockwise order from 1 to 12. It shouldn't be hard to convince yourself that the vertices with odd labels form an independent set of size 6. Similarly, the vertices with even labels form an independent set of size 6. In each of these sets there are 6 choose five ways to choose a subset of 5 vertices. This gives us 12 sets total. There are other ways to make independent sets, like (1,3,5,7,10), (2,4,6,8,11) etc... I don't know a simple closed formula for this – mm8511 Aug 24 '17 at 3:06
• You may try proving it by induction on the number of knights. For example, with $2k+2$ knights and choosing $k$ Non-enemies. For $k=1$ the answer is clearly 4, for $k=2$ you have 9 ways. Maybe $n^2$ is the pattern? I don't know. – mm8511 Aug 24 '17 at 3:15 | 2019-10-15T16:11:10 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2404157/how-many-subsets-of-size-k-exist-such-that-no-pair-of-elements-is-r",
"openwebmath_score": 0.8368520140647888,
"openwebmath_perplexity": 99.12349608349777,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9918120893722416,
"lm_q2_score": 0.8459424353665382,
"lm_q1q2_score": 0.8390159343095288
} |
http://incommunity.it/xyhy/a-and-b-are-two-independent-events.html | # A and B Are Two Independent Events

Two events A and B are independent if the occurrence of one does not affect the probability of the other. Formally, A and B are independent if P(A | B) = P(A), or equivalently P(A ∩ B) = P(A) P(B). This product formula is the multiplication rule for independent events; for arbitrary events the general multiplication rule is P(A ∩ B) = P(A) P(B | A), where P(B | A) denotes the conditional probability of B given A, so independence is exactly the condition P(B | A) = P(B).

Typical independent events: successive tosses of a coin (for two tosses the sample space is S = {HH, HT, TH, TT}, and the outcome of the first flip does not change the probabilities for the second), successive rolls of a fair die, and drawing a card, replacing it, and drawing again. For example, the probability of drawing a king on each of two draws with replacement is (1/13)(1/13) = 1/169. Rolling a die and tossing a coin are also independent: if A = "the die shows a multiple of three" and B = "the coin shows heads", then P(A ∩ B) = 2/12 = 1/6 = P(A) P(B).

Events are dependent when the outcome of one affects the probability of the other, as when two colored balls are drawn from a bag without replacement: the probabilities on the second draw are affected by which ball was removed on the first draw.

If A and B are independent, then so are A and B', A' and B, and A' and B'. For example, P(A ∩ B') = P(A) - P(A ∩ B) = P(A) - P(A) P(B) = P(A)[1 - P(B)] = P(A) P(B').

Independent is not the same as mutually exclusive. A and B are mutually exclusive if they cannot occur simultaneously, so that P(A ∩ B) = 0 and P(A ∪ B) = P(A) + P(B); for events that are not mutually exclusive, the addition rule is P(A ∪ B) = P(A) + P(B) - P(A ∩ B). Two events that both have nonzero probability cannot be independent and mutually exclusive at the same time, because independence would force P(A ∩ B) = P(A) P(B) > 0.

For a collection of events, pairwise independence (every pair is independent) is weaker than mutual independence (every subcollection satisfies the product rule). In particular, P(A ∩ B ∩ C) = P(A) P(B) P(C) holds when A, B and C are mutually independent.

Standard exercises: (i) A and B are independent with P(A' ∩ B) = 2/15 and P(A ∩ B') = 1/6; find P(A) and P(B). (ii) A and B are independent, the probability that both occur is 1/6, and the probability that neither occurs is 1/3; find P(A) and P(B). (A common variant of (ii) uses 1/8 and 3/8.) The answer to (ii) is P(A) = 1/2 and P(B) = 1/3.
Events A and B are independent events if the probability of Event B occurring is the same whether or not Event A occurs. May 04,2020 - A and B are two independent events. Density independent factors include environmental stresses, weather, sudden climate changes, environmental pollutants and nutrition limitations. so these events should be independent. And each toss of a coin is a perfect isolated thing. Date: 01/03/2007 at 15:43:39 From: Doctor Pete Subject: Re: What is difference between independent and exclusive events Hi TR, If two events A and B are independent, then Pr[A and B] = Pr[A]Pr[B]; that is, the probability that both A and B occur is equal to the probability that A occurs times the probability that B occurs. Which of the Venn diagrams has shaded the event that the contractor wins. The probability of choosing a jack on the second pick given that a queen was chosen on the first pick is called a conditional probability. always mutually exclusive b. AU - Micallef, Ivana N. The lecture on 2/8/2011 mainly focused on independence. If A and B are independent events, such that = 0. Answer choices: independent; not independent. Prove that if events A and B are independent, then the complement events of A and B are also independent. A and B are two independent events such that P(A)=(1)/(2) and P(B)=(1)/(3). gl/9WZjCW If A, B are two independent events, show that bar A and bar B are also independent. P(A 1 A 2 A 3)=P(A 1)P(A 2)P(A 3) Are the events A 1, A 2, and A 3 pairwise independent? Toss two different standard dice, white and black. Find the probability occurrence of A?a)1. Similarly, suppose event A is the drawing of an ace from the pack of 52 cards. 3 and P(B Get solutions. independent events: Two events are independent if knowing the outcome of one provides no useful information about the outcome of the other. True False: The general addition rule may be used to find the union between two events whether or not they are mutually exclusive. 70, what is the value of P(A | B)?. Now we will discuss independent events and conditional probability. If A and B are independent events, the probability of both events occurring is the product of the probabilities of the individual events. (R) Events A and B are independent. Determining the independence of events is important because it informs whether to apply the rule of product to calculate probabilities. This argument shows that if two events are independent, then each event is independent of the complement of the other. (justify using probability) Recall that when two events, M and B, are independent, the probability of both occurring is: P(M and B) = P(M) * P(B) For this problem we know that: P(M) = 0. on the probability of event B happening. If A and B are dependent events, then the probability of. Independent/Dependent Events Two events are independent if the result of the second event is not affected by the result of the first event. For 3 independent events A, B and C is $\\mathrm{P(A \\cap B \\cap C) = P(A)P(B)P(C)}$? Just like for two independent events $\\mathrm{P(A\\cap B) = P(A)P(B)}$. The events are not independent because the. How to Identify Independent Events. When running independent experiments, the usage of the product formula P(A∩B) = P(A) P(B) is justified on combinatorial grounds. Who Is My Legislator. Which of the following is an example of a dependent event? (1 point) A. P( A and B) = 1/6 and the probability that neither of them occur is 1/3. 6, respectively. 
There are three patterns one may use to link simple sentences into a compound sentence. Dear Friend, During the COVID-19 pandemic, independent news is more important than ever. The neither probability of A nor B is?. (a) Suppose that A and B are independent events with P(A) = 0. Given two spinners (this sort of thing) that each have the numbers 1, 2, and 3 (in place of the colors), we spin two numbers. which exactly means that B is independent of A. From (1) and (2), P(A∩B') = P(A) P(B'), so A and B' are independent. Problem 98SE from Chapter 3: Two events, A and B, are independent, with P(A) =. Here are a few examples:. Google has many special features to help you find exactly what you're looking for. 6, then P(A U B)=? We know the following formula for the probability of 2 events: P(A U B) = P(A) + P(B) - P(A intersection B) We're told A and B are independent, which makes P(A intersection B) = 0. P(A ∩ B) = P(B). The probability of choosing a jack on the second pick given that a queen was chosen on the first pick is called a conditional probability. An experiment consists of two independent trials. , if P(A & B) = P(A)P(B). Determine which of the following outcomes describe mutually exclusive events. Dependent Events. Statistics for Business and Economics (13th Edition) Edit edition. Pairwise vs. Then the value of P(A^(c ) cap B^(c )) is -. Independent Documentary Films. In many cases, you will see the term, "With replacement". Two events A and B are independent iff that condition holds. Independent events. A student in statistics argues that the concepts of mutually exclusive events and independent events are really the same, and that. (a) Determine A cup B , given that A and B are mutually exclusive. We can extend this concept to conditionally independent events. If two events E and F are independent, it is possible that there exists another event G such that EjG is no longer independent of FjG. Suppose that we flip two independent, fair coins. Find P(A I ). a) What formula is used to compute P(A and B)? Is P(A and B ) not equal to 0 Explain. Experiment 1 involved two compound, dependent events. In theory, this would win Israel security and allow it to retain a Jewish. 2:Understand that if two events A and B are independent, the probability of A and B occurring together is the product of their probabilities, and that if the probability of two events A and B occurring together is the product of their probabilities, the two events are independent. Find P(A I ). If A and B are two independent events and P(A)=(3)/(6) and P(AcapB)=(4)/(9) then the value of P(B) will be. That is, if P(A|B) = P(A), and vice versa. A A∩B B B A. A and B are two events. I had a student bring this up today, I know 0 cannot be divided by zero. Given two events A and B, from the sigma-field of a probability space, with the unconditional probability of B (that is, of the event B occurring) being greater than zero, P(B) > 0, the conditional probability of A given B is defined as the quotient of the probability of the joint of events A and B, and the probability of B:. Events may or may not be independent; according to the definition, two events, A and B, are independent iff P(A∩B) = P(A) P(B). 30 and P(B) =. Drawing out a red ball with dots therefore represents a complementary event relative to the combination of Events A and B. 4 and P(B) = 0. When events A, B are independent, the probability of both happening can be computed by saying the event A happen first with P(A) and the event B happens afterwards with P(B). 
Intuitively I think it is because the. If A and B are independent events, then the events A and B’ are also independent. May 04,2020 - A and B are two independent events. In the case where events A and B are independent (where event A has no effect on the probability of event B), the conditional probability of event B given event A is simply the probability of event B, that is P(B). The intersection of those would then be equal to P(A), wouldn't it? This means that the formula would be like this: P(A)=P(A)*P(B). Given two spinners (this sort of thing) that each have the numbers 1, 2, and 3 (in place of the colors), we spin two numbers. called B cell co-receptor complex. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Instead, the probability of the intersection must equal the product of the probabilities for any finite subcollection of events, e. What is P(A \u0004 B)? c. 835 P(B) = 0. Probability of Dependent Events P (A, then B) = P(A) P(B after A) For two dependent events A and B, the probability of both events occurring is the. If whether or not one event occurs does affect the probability that the other event will occur, then the two events are said to be dependent. Intuitively I think it is because the. The multiplication theorem is applicable only if the events are independent. Assuming equal probability outcomes, given two outcomes in the overlapping area and six outcomes in B, the probability that Event A occurred would be 2/6. A and B are two independent events such that P(A)=(1)/(2) and P(B)=(1)/(3). 2: Understand that two events A and B are independent if the probability of A and B occurring together is the product of their probabilities, and use this characterization to determine if they are independent. (b) Let A be the event an ace is drawn on the first and let B be. LyondellBasell is one of the largest plastics, chemicals and refining companies in the world. Independency (Ind. Construct and interpret two-way frequency tables of data when two categories are associated with each object being classified. If the equation is violated, the two events are not independent. Given a data set, students will be able to determine if two events are independent. 4 and P(B)=. Then the value of P(A^(c ) cap B^(c )) is -. P( A and B) = 1/6 and the probability that neither of them occur is 1/3. (a) Determine A cup B , given that A and B are mutually exclusive. Don't Memorise brings learning to life through its. The multiplication theorem is applicable only if the events are independent. …Event one, heads of flip one, two out of the four scenarios…provide that result. A fair price to pay for playing this game is a. Give an example of 3 events A,B,C which are pairwise independent but not independent. Independent Events video tutorial 00:52:26 Solution If a and B Are Two Events Such that P(A) = 1/4, P(B) = 1/2 and And P(A ∩ B) = 1/8 , Find P (Not a and Not B) Concept: Independent Events. Describe two ways to fi nd P(A and B). If both events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A n B). Find - Answered by a verified Math Tutor or Teacher. and B) = P(A) x P(B) X EX A die is rolled and a coin is tossed. That is C = {(3,6), (6,3), (4,5), (5,4)}. (b) Find the value of p for which A and B are independent. 
Answer to (10pts) Suppose A and B are two independent random events (means or , if and , find and. Cannot find it because P(B and A) is not known. Events can be " Independent ", meaning each event is not affected by any other events. RD Sharma - Volume 2. P(A) + P(B) = P(AÈB) Independent Events. Note: don't find symbol of intersection,so write instead. Stay well & keep listening to the blues to get you through this time!.
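To see the product rule in action numerically, here is a small simulation sketch of mine (not part of the source page); the events $A$ and $B$ below are chosen only for illustration, on a roll of two fair dice.

```python
import random

random.seed(0)
N = 200_000
count_A = count_B = count_AB = 0

for _ in range(N):
    die1 = random.randint(1, 6)      # first roll
    die2 = random.randint(1, 6)      # second roll
    A = (die1 == 1)                  # illustrative event A: first die shows 1
    B = (die2 % 2 == 0)              # illustrative event B: second die is even
    count_A += A
    count_B += B
    count_AB += (A and B)

pA, pB, pAB = count_A / N, count_B / N, count_AB / N
print(f"P(A) ~ {pA:.4f}, P(B) ~ {pB:.4f}")
print(f"P(A and B) ~ {pAB:.4f}  vs  P(A)*P(B) ~ {pA * pB:.4f}")  # nearly equal => independent
```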
| 2020-06-06T04:12:07 | {
"domain": "incommunity.it",
"url": "http://incommunity.it/xyhy/a-and-b-are-two-independent-events.html",
"openwebmath_score": 0.6637188196182251,
"openwebmath_perplexity": 342.42604301853675,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9848109520836026,
"lm_q2_score": 0.8519528076067262,
"lm_q1q2_score": 0.8390124555894783
} |
https://math.stackexchange.com/questions/2802915/cancellation-law-for-forward-and-backward-images | # Cancellation law for forward and backward images
I am working on the exercise from Terence Tao's Analysis I and I wanted to verify the following proof. I have proven one side of the biimplication and am now stuck on the second part (a hint would be sufficient). Since I am self-studying mathematics I would also be happy to hear your comment on the overall structure and style of the proof.
Exercise 3.4.5 Let $X,Y$ be sets, let $f:X\to Y$. Show that $\forall S\subseteq X: f^{-1} \left(f(S)\right) = S$ if and only if $f$ is injective.
\begin{proof} We first show that $f$ is injective implies $f^{-1} \left(f(S)\right) = S$.
We have already proven the cancellation law for bijective functions: $$\forall x: g^{-1} \left(g(x)\right) = x \text{ and } g \left(g^{-1}(x)\right) = x$$
We use this fact to prove this proposition. We restrict $f:X \to Y$ to the mapping $f_S$ with the domain $S$ and range $f(S)$ $$f_S: S \to f(S), \quad f_S(x) := f(x)$$ Then $f_S$ is a function and in particular $f_S$ is bijection (Proof: Argue by contradiction. Suppose $f_S$ is not a bijection. Then by negating bijection property $\left(\forall y \in f(S) \quad \exists_{1} x \in S: f_S(x)=y \right)$ we obtain one of the following mutually exclusive cases by trichotomy of order on natural numbers:
1. $\exists y \in f(S) \quad \exists_{0} x \in S: f_S(x)=y$, a contradiction to the definition of the forward image $f(S):=\{f(x): x\in S\}$
2. $\exists y \in f(S) \quad \exists_{>1} x \in S: f_S(x)=y$, a contradiction to the injectivity assumption of $f$, since $\exists x,x' \in S: x\neq x'$ and $f_S(x)=f_S(x')$ implies $\exists x,x' \in X: x\neq x'$ and $f(x)=f(x')$.
as desired.) Since $f_S: S \to f(S)$ is bijection, we must have $$f_S^{-1} \left(f_S(S)\right) = S$$ since by Exercise 3.6.3 we have $\forall x \in S: f_S^{-1} \left(f_S(x)\right) = x$ hence $\{ f_S^{-1} \left(f_S(x)\right): x \in S\} = \{x \in S: x \in S\} = S$. It is then left to show that $$f_S^{-1} \left(f_S(S)\right) = f^{-1} \left(f(S)\right)$$ The fact that $f_S(S) = f(S)$ is true since $\forall x \in S: f_S(x) = f(x)$ and injectivity of $f$ together imply $\{f(x) \in Y: x \in S\} = \{f_S(x) \in f(S): x \in S\}$. $$f_S^{-1} \left(f(S)\right) = S$$ The fact that $f_S^{-1} \left(f(S)\right) = f^{-1} \left(f(S)\right)$ is true since $\forall y \in f(S): f_S^{-1}(y) = f^{-1}(y)$ and injectivity of $f$ imply $\{x \in X: f(x) \in f(S)\} = \{x \in S: f_S(x) \in f(S)\}$ $$f^{-1} \left(f(S)\right) = S$$ as desired.
Now we show that $f^{-1} \left(f(S)\right) = S$ implies that $f$ is injective.
proof \end{proof}
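Before the answer sketch that follows, here is a quick finite-set illustration (my own addition, not part of the exercise) of the statement being proved: for an injective map the preimage of the image recovers $S$, while a non-injective map can pull extra points back in. The helper names `image` and `preimage` are mine, and functions are represented as Python dicts.

```python
def image(f, S):
    """Forward image f(S) = {f(x) : x in S}."""
    return {f[x] for x in S}

def preimage(f, T):
    """Inverse image f^{-1}(T) = {x in dom(f) : f(x) in T}; f is a dict."""
    return {x for x in f if f[x] in T}

S = {1, 2}
injective     = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}   # injective on {1, 2, 3, 4}
not_injective = {1: 'a', 2: 'b', 3: 'a', 4: 'd'}   # f(1) = f(3), so not injective

print(preimage(injective, image(injective, S)))          # {1, 2} == S
print(preimage(not_injective, image(not_injective, S)))  # {1, 2, 3}, strictly bigger than S
```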
Assume $f$ is injective.
The inclusion $A \subseteq f^{-1}(f(A))$ is quick to show directly. For the reverse inclusion, assume $x \in f^{-1}(f(A))$.
Then $f(x) \in f(A)$, so there exists $a \in A$ with $f(x) = f(a)$.
By injectivity, conclude $x = a \in A$. This gives the reverse inclusion. | 2019-10-22T19:57:52 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2802915/cancellation-law-for-forward-and-backward-images",
"openwebmath_score": 0.9779222011566162,
"openwebmath_perplexity": 119.09004177151678,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810952529396,
"lm_q2_score": 0.8519528038477825,
"lm_q1q2_score": 0.8390124522674244
} |
http://math.stackexchange.com/questions/89926/are-three-matrices-linearly-independent-and-form-a-basis-of-m-2-mathbb-r | # Are three matrices linearly independent and form a basis of $M_2(\mathbb R)$?
I know how to prove whether or not vectors are linearly independent, but can't apply the same thing to matrices it seems. Given three 2x2 matrices, for example:
$$A = \begin {bmatrix} -1&1 \\\\ -1&1 \ \end{bmatrix}$$ $$B = \begin {bmatrix} 1&1 \\\\ -1&-1 \ \end{bmatrix}$$ $$C = \begin {bmatrix} -1&1 \\\\ 1&-1 \ \end{bmatrix}$$
I want to test whether or not these are linearly dependent. So with vectors I would do something like:
$$c_1A + c_2B + c_3C = 0$$
Where the cs are some scalar constants, and prove that the only solution of that is when $$c_1 + c_2 + c_3 = 0$$
So how do I go about solving this:
$$c_1 \begin {bmatrix} -1&1 \\\\ -1&1 \ \end{bmatrix} + c_2 \begin {bmatrix} 1&1 \\\\ -1&-1 \ \end{bmatrix} + c_3 \begin {bmatrix} -1&1 \\\\ 1&-1 \ \end{bmatrix} = 0$$
Or I am going about this completely the wrong way?
Any help would be hugely appreciated.
-
You're going about it exactly the right way. EDIT: As David Mitra points out, you have to prove that $c_1=c_2=c_3=0$, not just that $c_1+c_2+c_3=0$.
In fact, you can just think of the matrices as being vectors of length 4: $$\begin{pmatrix}a & b \\ c& d\end{pmatrix}\mapsto (a,b,c,d)$$ and use your knowledge about the linear independence of vectors.
-
So I can rewrite the matrices like this: $$A = (-1, 1, -1, 1)$$ and so on for the others. How does that simplify the problem? I will get: $$c_1(-1, 1, -1, 1) + c_2(1, 1, -1, -1) + c_3(-1, 1, 1, -1) = 0$$ Will that give me four equations? – MadScone Dec 9 '11 at 16:52
Well, it doesn't simplify the problem, but you said you know how to check whether vectors were linearly independent, so I was explaining how to think of it in terms of what you're already familiar with. So yes, you get the four equations \begin{align*}-c_1+c_2-c_3&=0\\ c_1+c_2+c_3&=0\\ -c_1-c_2+c_3&=0\\ c_1-c_2-c_3&=0\end{align*} – Zev Chonoles Dec 9 '11 at 17:56
Ok thanks, that's all I really needed to know I suppose. I solved the question like this: $$\operatorname{rref}\left[\begin{array}{ccc|c} -1&1&-1&0 \\ 1&1&1&0 \\ -1&-1&1&0 \\ 1&-1&-1&0 \end{array}\right] = \left[\begin{array}{ccc|c} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&0 \end{array}\right]$$ so the matrices are linearly independent. Assuming that's right, my question has been answered. – MadScone Dec 9 '11 at 19:41
@MadScone: I've fixed up the formatting on your comment above. And that looks right to me :) – Zev Chonoles Dec 9 '11 at 19:58
It's perfectly fine (except that, if you want to prove independence, you need to show $c_1=c_2=c_3=0$, not that their sum is 0). Next, do the matrix arithmetic on the left hand side:
$$c_1 \begin {bmatrix} -1&1 \\\\ -1&1 \ \end{bmatrix} + c_2 \begin {bmatrix} 1&1 \\\\ -1&-1 \ \end{bmatrix} + c_3 \begin {bmatrix} -1&1 \\\\ 1&-1 \ \end{bmatrix} = \begin {bmatrix} -c_1+c_2-c_3&c_1+c_2+c_3\\\\ -c_1-c_2+c_3 & c_1-c_2-c_3 \ \end{bmatrix} ={\bf 0}.$$
Since a matrix is the zero matrix if and only if each of its components is 0, you get the system of equations \eqalign{ -c_1+c_2-c_3&=0\cr c_1+c_2+c_3&=0 \cr -c_1-c_2+c_3&=0 \cr c_1-c_2-c_3&=0 }
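As a numerical cross-check of the same conclusion, here is a short NumPy sketch of mine (not part of the answers): flatten the three matrices into length-4 vectors as suggested above and verify that the resulting $4\times 3$ matrix has rank $3$, so the only solution of the homogeneous system is $c_1=c_2=c_3=0$.

```python
import numpy as np

A = np.array([[-1,  1], [-1,  1]])
B = np.array([[ 1,  1], [-1, -1]])
C = np.array([[-1,  1], [ 1, -1]])

# Flatten each 2x2 matrix into a length-4 vector and stack the vectors as columns.
M = np.column_stack([A.ravel(), B.ravel(), C.ravel()])

print(np.linalg.matrix_rank(M))  # 3  => the three matrices are linearly independent
```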
- | 2015-01-28T04:17:50 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/89926/are-three-matrices-linearly-independent-and-form-a-basis-of-m-2-mathbb-r",
"openwebmath_score": 0.8796635866165161,
"openwebmath_perplexity": 234.13300125398595,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984810954312569,
"lm_q2_score": 0.8519527963298946,
"lm_q1q2_score": 0.8390124463829053
} |
https://math.stackexchange.com/questions/1146082/how-to-find-inverse-of-2-modulo-7-by-inspection | # How to find inverse of 2 modulo 7 by inspection?
This is from Discrete Mathematics and its Applications
1. By inspection, find an inverse of 2 modulo 7
To do this, I first used Euclid's algorithm to make sure that the greatest common divisor between 2 and 7 is 1. Here is my work for that
7 = 2(3) + 1
2 = 1(2) + 0
Because 1 is the last remainder before the remainder goes to zero, it is the greatest common divisor. Because 1 is gcd(2, 7) and m = 7 > 1, by this theorem
the inverse of a modulo m exists. The inverse of a modulo m is in the form of
a'a $\equiv$ 1 mod(m)
in this case it be
a'*2 $\equiv$ 1 mod(7)
Where a' is the inverse
So from the steps of Euclid's algorithm
1 = (1)(7) + (-3)(2)
(-3)(2) - 1 = (-1)(7)
meaning
(-3)(2) $\equiv$ 1 mod (7)
and -3 would be an inverse of 2 modulo 7. How would you find an inverse without going through the steps and just looking at it(by inspection)?
• Look at $\frac{17+1}{2}$. You find the inverse of $2$ modulo $m$ in this way for any odd $m$. – André Nicolas Feb 13 '15 at 4:28
• To me "by inspection" means you don't need to check obvious things like the fact that the gcd of $2$ and $7$ is $1$ ($7$ is prime, after all). The author likely meant for you to think "what number of the form $7n+1$ is a multiple of $2$?" From which you should have quickly seen $n=1$ and with $4$ as the inverse. – Hayden Feb 13 '15 at 4:29
• @Hayden Where did 7n + 1 come from? – committedandroider Feb 13 '15 at 4:30
• @committedandroider If $a\equiv 1 \pmod 7$, then $a=7n+1$ for some $n$. – apnorton Feb 13 '15 at 4:31
• By inspection here just means the numbers are small enough that you can either just "see it" or you can try the small number of possibilities. For example, $2\cdot 4 = 8$, which is $1$ modulo $7$. – aes Feb 13 '15 at 4:35
If you want the multiplicative inverse of $2$ mod $7$, then you want to find an integer $n$ such that $2n = 7k + 1$, where $k$ is a nonnegative integer. Try $k = 1$, because that's the easiest thing to do. Then $2n= 8$, and $n = 4$.
• Although the title mentions 17, the entirety of the question itself uses 7. (Granted, the methodology is pretty much identical) – Hayden Feb 13 '15 at 4:30
• I changed the title, my bad. – committedandroider Feb 13 '15 at 4:31
• Fixed the numbers. +1 – Zubin Mukerjee Feb 13 '15 at 4:36
• So n=4, (4)(2) $\equiv$ 1(mod 7), meaning 8 - 1 is a multiple of 7. How do you guys interpret inverse? What number multiplied by a will with a difference of 1 be a multiple of m? Is that a good way to interpret inverse? I feel like there has to be a better way – committedandroider Feb 13 '15 at 5:11
• @committedandroider A multiplicative inverse of $a$ is a number that you multiply with $a$ to give you the multiplicative identity (which is 1). When you're working with the real numbers, the inverse is $1/a$. With modulo arithmetic, you have to look for numbers congruent to $1$ and find one divisible by $a$. – NoName Feb 13 '15 at 5:30
You could use Euclid's algorithm to compute that gcd(2,7)=1, and from that obtain a solution to $2x+7y=1$, which in turn gives an inverse of $2$ mod $7$.
In this case, Euclid's algorithm terminates very quickly:
$7=2*3+1$
Taking this equation mod $7$ gives:
$2*3+1 \equiv 0 \pmod{7}$
$(-3)*2 \equiv 1 \pmod{7}$
So the inverse of $2$ is $-3$ which is the same as $4$.
• How do you interpret inverse? Like if f(x) = y, f'(y) = x. That makes sense to me. Is there a way to interpret inverse in this case? – committedandroider Feb 13 '15 at 5:12
• Inverse is in the multiplicative sense. That is, the inverse of $a$ is the element $b$ such that $ab=1$. – Dylan Yott Feb 13 '15 at 7:36
• In that sense, -3 would be an inverse of 2 mod 7. 2 mod 7 is actually 2. -3 * 2 gives you -6, not 1. – committedandroider Feb 13 '15 at 23:26
• $-3$ is the inverse of $2$. $4$ is also the inverse of $2$. This is okay because $-3=4$ in the ring $\mathbb Z / 7 \mathbb Z$ – Dylan Yott Feb 14 '15 at 18:13
If the modulus $\,m\equiv \pm1 \pmod{a},\,$ then we can easily invert $\,a\pmod m\,$ as follows
$(1)\qquad\quad {\rm mod}\,\ m = na\!-\!1\!:\ \ \ na\, \equiv 1\ \Rightarrow\ a^{-1}\equiv\,\ n \,=\, \color{#c00}{(1\!+\!m)/a}$
$(2)\qquad\quad {\rm mod}\,\ m = na\!+\!1\!:\, -na\equiv 1\:\Rightarrow\ a^{-1}\equiv -n = \color{#0a0}{(1\!-\!m)/a}$
E.g. your $\,m = 7\equiv \pm1\pmod{2},\,$ hence by $\,(2),\ \ 2^{-1} \equiv \color{#0a0}{(1\!-\!7)/2} \equiv -3$
Alternatively we can apply the case $(1)$ obtaining $\,2^{-1} \equiv \color{#c00}{(1\!+\!7)/2}\equiv\ 4$
This can be viewed as an optimization of the Extended Euclidean algorithm in the case that it terminates in a single step (or ditto for Gauss's method for modular inversion).
• What would n be in the case of inverse of 2 modulo 7? can you show a test run of that? – committedandroider Feb 13 '15 at 5:20
• @committedandroider Observe that $4 \cdot 2 \equiv 8 \equiv 1 \pmod{7}$, so $2^{-1} \equiv 4 \pmod{7}$. Also, observe that $3 \cdot 2 \equiv 6 \equiv -1 \pmod{7}$, so $2^{-1} \equiv -3 \equiv 4 \pmod{7}$. – N. F. Taussig Feb 13 '15 at 12:44
• @com I explained this in an edit to the answer. – Bill Dubuque Feb 13 '15 at 19:32
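For readers who want to double-check such inverses by machine, here is a small sketch of mine (not from any of the answers). The helper `ext_gcd` is a generic extended Euclidean routine, and the `pow(a, -1, m)` form requires Python 3.8 or later.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, _ = ext_gcd(2, 7)
print(g, x % 7)        # gcd is 1, and x mod 7 = 4 is the inverse of 2 mod 7 (x itself is -3)
print(pow(2, -1, 7))   # 4, the built-in modular inverse
print(pow(5, -1, 7))   # 3, matching the 5^{-1} example below
```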
1 = 8 (mod 7) and 8 is a multiple of 2.
2 x 4 = 8 = 1 mod 7
So $2^{-1} \equiv 4 \pmod 7$
Something harder. Find $5^{-1} \pmod 7$
1 = 8 = 15 mod 7 and 15 is a multiple of 5
$5 \cdot 3 = 15 = 1\pmod 7$
So $5^{-1} = 3 \pmod 7$ | 2019-12-09T15:49:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1146082/how-to-find-inverse-of-2-modulo-7-by-inspection",
"openwebmath_score": 0.8610188961029053,
"openwebmath_perplexity": 301.8806796496506,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464527024445,
"lm_q2_score": 0.8596637559030338,
"lm_q1q2_score": 0.838985793090426
} |
https://math.stackexchange.com/questions/2269519/why-is-a-first-degree-polynomial-for-sinx-a-good-approximation-for-small-x-w | Why, is a first degree polynomial for sin(x) a good approximation for small x, while cos(x), a second degree polynomial is necessary?
We know that the small-angle approximation says that:
$$\sin(x) \approx x$$ $$\cos(x) \approx 1-\frac{x^2}{2}$$
I'm trying to understand, using the Taylor series and Lagrange error, why the first term of the $\sin(x)$ Maclaurin is so much better an approximation than $\cos(x)$ Maclaurin. It must have something to do with the fact that the first nonzero term for $\cos(x)$ is zero-degree, while the first nonzero term for $\sin(x)$ is first-degree. But the issue is that the next term in both cases has a derivative of $0$ at $x=0$, so I thought they'd be similarly good approximations. Why is this the case? And could the difference be shown clearly using Lagrange error bound? This is not homework, I'm just curious.
• Very loosely speaking, the relation between $\sin$ and $\cos$ (which likely prompted your question to begin with) is quadratic $\sin^2 x + \cos^2 x = 1\,$. If you look at those Taylor series, $\sin^2 x \sim x^2$ and $\cos^2 x \sim 1 - x^2\,$ are of the same order. – dxiv May 7 '17 at 7:33
• That's so interesting! Thank you! – rb612 May 7 '17 at 8:10
The following diagram depicts the absolute difference between $\cos(x)$ and $1$ (pink), $\sin(x)$ and $x$ (blue), and $\cos(x)$ and $1-\frac12x^2$ (yellow).
It is apparent that $1$ is quite a bad approximation for $\cos(x)$, and $1-\frac12x^2$ is a better approximation for $\cos(x)$ than $x$ is for $\sin(x)$. However, this is only a relative quality issue...
Further to this qualitative analysis, one has to consult the theory of the remainder term to the Taylor approximations.
• Thank you for your answer. I would like to see how the theory of the remainder term relates exactly to this problem as stated in my question. – rb612 May 7 '17 at 7:57
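To make the comparison concrete, here is a short numerical sketch of mine (not part of either answer) that tabulates the three absolute errors for a few small values of $x$; the $\tfrac{x^2}{2}$-sized error of $\cos(x)\approx 1$ against the $\tfrac{x^3}{6}$- and $\tfrac{x^4}{24}$-sized errors of the other two approximations is exactly what the remainder term predicts.

```python
import math

for x in [0.5, 0.2, 0.1, 0.05]:
    err_sin  = abs(math.sin(x) - x)                 # ~ x^3 / 6
    err_cos1 = abs(math.cos(x) - 1)                 # ~ x^2 / 2
    err_cos2 = abs(math.cos(x) - (1 - x**2 / 2))    # ~ x^4 / 24
    print(f"x={x:>5}: |sin x - x|={err_sin:.2e}  "
          f"|cos x - 1|={err_cos1:.2e}  |cos x - (1 - x^2/2)|={err_cos2:.2e}")
```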
I don't know what would constitute an actual answer to this question. Merely restating various points about Taylor series doesn't seem like it says anything since the question seems to be why the cosine approximation, term for term, is worse than the sine approximation (near 0). This is not intuitive, since $\sin$ and $\cos$ are just shifted versions of each other—shouldn't their approximations be as good?
For comparison, consider one of the popular interpolation methods: cubic splines. We like cubic splines because 1) they hit every point 2) the slopes match at the points 3) the curvature matches at the points.
What would it mean to constitute a good approximation? We often like to approximate with low-order polynomials (e.g. "everything is approximately a line if you zoom in enough", Simpson's rule uses a parabola to estimate, etc.). Since the derivative of a line is a constant, this kind of approximation should be alright if the derivative isn't changing very much (the derivative of a line doesn't change at all). This is by analogy to the point (3) of cubic splines. Since the rate of change of the derivative of $\sin(x)$ is (ignoring the sign) just $\sin(x)$ and at 0 the sine is 0, this means our small approximation should be alright: it's not a line, but close to 0 it sure acts like one . If you compare this to cosine the situation is reversed and we are at the maximum rate of change of the derivative, so our small approximation should have more error. It's not a line, and it isn't acting like one, either.
• This definitely gets at what I'm thinking about, especially the last few sentences. When you say, "If you compare this to cosine the situation is reversed and we are at the maximum rate of change of the derivative, so our small approximation should have more error." -- this is exactly what I'm trying to explore in terms of bounding the error using Lagrange error. With Lagrange error, in this case, I believe you'd be looking at the max value of the derivative on the interval in question, but really, it should be the max second derivative from what you're saying. – rb612 May 7 '17 at 8:03
• @rb612 It's not so much that you should look to the second derivative, you should look at the way your approximating function behaves compared to the function being approximated. We can approximate anything with a line, but our linear approximations should be best when the second derivative is zero, since the second derivative of a line is 0. If we are approximating with a quadratic the story is not the same. – law-of-fives May 9 '17 at 14:14 | 2019-07-22T20:45:47 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2269519/why-is-a-first-degree-polynomial-for-sinx-a-good-approximation-for-small-x-w",
"openwebmath_score": 0.8050264716148376,
"openwebmath_perplexity": 181.55471778670363,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464513032269,
"lm_q2_score": 0.8596637523076225,
"lm_q1q2_score": 0.8389857883786404
} |
https://forum.math.toronto.edu/index.php?PHPSESSID=2h9fj04alq45skvg6snedf70k1&action=printpage;topic=2586.0 | # Toronto Math Forum
## MAT334--2020F => MAT334--Tests and Quizzes => Test 4 => Topic started by: Xuefen luo on December 09, 2020, 02:09:42 PM
Title: 2020F-Test4-MAIN-A-Q2
Post by: Xuefen luo on December 09, 2020, 02:09:42 PM
Problem 2. Calculate an improper integral $I=\int_{0}^{\infty} \frac{\sqrt{x}dx}{(x^2+2x+1)}$.
(a) Calculate $J_{R,\epsilon}=\int_{\Gamma_{R,\epsilon}} f(z)dz, \ f(z):= \frac{\sqrt{z}}{(z^2+2z+1)}$
$\Gamma_{R,\epsilon}$ is the contour on the figure.
(b)Prove that $\int_{\gamma_R} f(z)dz \rightarrow 0$ and $\int_{\gamma_ \epsilon} f(z)dz \rightarrow 0$ as $R \rightarrow \infty$ and $\epsilon \rightarrow 0$ where $\gamma_R$ and $\gamma_\epsilon$ are arcs.
(c) Express limit of $J_{R,\epsilon}$ as $R \rightarrow +\infty$, $\epsilon \rightarrow 0^+$ using $I$.
(a) Since $f(z) = \frac{\sqrt{z}}{(z^2+2z+1)} = \frac{\sqrt{z}}{(z+1)^2}$, $z=-1$ is the only singularity inside $\Gamma_{R,\epsilon}$ as $R>1$.
The residue is $Res(f(z),-1)=\frac{(\sqrt{z})'}{1!}|_{z=-1} = \frac{\frac{1}{2\sqrt{z}}}{1!}|_{z=-1} = -\frac{i}{2}$
Thus, by residue theorem $J_{R,\epsilon}= 2\pi i Res(f(z),-1)=2\pi i (-\frac{i}{2})=\pi$
(b) \begin{align*}
\left| \int_{\gamma_R} f(z)dz \right| &\leq 2\pi R\cdot \max \left|\frac{\sqrt{z}}{(z^2+2z+1)} \right|\\
&\leq 2\pi R\cdot \max \left| \frac{R^{\frac{1}{2}}e^{i\frac{1}{2}t}}{(Re^{it}+1)^2}\right| \ \ \ , \ \text{$z=Re^{it}, t\in [0,2\pi]$}\\
&\leq 2\pi R \cdot \frac{R^{\frac{1}{2}}}{(R-1)^2} \rightarrow 0 \ \ \ \ as \ R \rightarrow \infty \ \ \ \text{(using $|Re^{it}+1|\geq R-1$ for $R>1$)}\\
\\
\left|\int_{\gamma_ \epsilon} f(z)dz\right| &\leq 2\pi \epsilon \cdot \max \left| \frac{\sqrt{z}}{(z^2+2z+1)} \right|\\
&\leq 2\pi \epsilon \cdot \max \left| \frac{\epsilon^{\frac{1}{2}}e^{i\frac{1}{2}t}}{(\epsilon e^{it}+1)^2}\right| \ \ \ , \ \text{$z=\epsilon e^{it}, t\in [2\pi,0]$}\\
&\leq 2\pi \epsilon \cdot \max \left(\frac{\epsilon^{\frac{1}{2}}}{(|1|-|\epsilon e^{it}|)^2}\right)\\
&\leq 2\pi \epsilon \cdot \frac{\epsilon^{\frac{1}{2}}}{(1-\epsilon)^2} \rightarrow 0 \ \ \ \ as \ \epsilon \rightarrow 0
\end{align*}
(c)\begin{align*}
J_{R,\epsilon} &= \int_{\gamma_ R} f(z)dz +\int_{\gamma_ \epsilon} f(z)dz +\int_{\epsilon}^{\infty} f(z)dz+\int_{\infty}^{\epsilon}f(z)dz\\
\pi &= 0+0+\int_{\epsilon}^{\infty} f(z)dz+\int_{\infty}^{\epsilon}f(z)dz\\
\end{align*}
As $R \rightarrow +\infty$, $\epsilon \rightarrow 0^+$,
\begin{align*}
\int_{\epsilon}^{\infty} f(z)dz &= \int_{0}^{\infty}\frac{\sqrt{x}dx}{(x^2+2x+1)}=I\\
\int_{\infty}^{\epsilon}f(z)dz &= \int_{\infty}^{0}\frac{\sqrt{z}}{(z^2+2z+1)}dz \ \ \text{, $z=xe^{i2\pi},dz=e^{i2\pi} dx$}\\
&=\int_{\infty}^{0}\frac{\sqrt{x}e^{i\pi}}{(xe^{i2\pi}+1)^2} e^{i2\pi}dx\\
&=\int_{0}^{\infty}\frac{\sqrt{x}}{(x+1)^2}dx\\
&=I
\end{align*}
Then, the limit of $J_{R,\epsilon}$ as $R \rightarrow +\infty$, $\epsilon \rightarrow 0^+$ is $2I$. Thus, $2I=\pi \Rightarrow I=\frac{\pi}{2}$
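As an independent numerical check of this value (my own sketch, not part of the test solution, and assuming SciPy is available), one can evaluate the real integral directly and compare it with $\pi/2$:

```python
import numpy as np
from scipy.integrate import quad

# I = integral from 0 to infinity of sqrt(x) / (x + 1)^2 dx, which the residue computation gives as pi/2
value, abserr = quad(lambda x: np.sqrt(x) / (x + 1) ** 2, 0, np.inf)
print(value, np.pi / 2)   # both approximately 1.5707963...
```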
Title: Re: 2020F-Test4-MAIN-A-Q2
Post by: Xuefen luo on December 09, 2020, 02:39:52 PM
Here is the given figure. | 2022-05-26T07:10:32 | {
"domain": "toronto.edu",
"url": "https://forum.math.toronto.edu/index.php?PHPSESSID=2h9fj04alq45skvg6snedf70k1&action=printpage;topic=2586.0",
"openwebmath_score": 1.000009536743164,
"openwebmath_perplexity": 9596.289644243623,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464513032269,
"lm_q2_score": 0.8596637523076225,
"lm_q1q2_score": 0.8389857883786404
} |
https://math.stackexchange.com/questions/1051816/need-help-to-prove-a%E2%88%AAb-c-a-a-%E2%88%AA-b-c | # Need help to prove (A∪B) - (C - A) = A ∪ (B - C)
Having trouble with a discrete math question involving sets. Have been asked to prove:
(A∪B) - (C - A) = A ∪ (B - C)
This is what I have so far:
x ∈ A or x ∈ (B - C)
x ∈ A or (x ∈ B and x ∉ C)
This is where I get stuck. I can see how to combine the x ∈ A or (x ∈ B) into (A∪B), but I do not know how to derive the other half. Please help.
• Considered just filling in a Venn diagram for each of the sides and seeing that the same areas of the diagram end up shaded? – Henning Makholm Dec 4 '14 at 18:23
• In our course Venn diagrams cannot be used for proofs. – KleinBottle Dec 4 '14 at 18:25
• @isomorphism: How about truth tables, then? – Henning Makholm Dec 4 '14 at 18:26
• No. A formal proof is required. – KleinBottle Dec 4 '14 at 18:26
• @isomorphism: What's informal about truth tables? If you want a proof in a particular formal proof system, you need to disclose which rules the proof system you use has. – Henning Makholm Dec 4 '14 at 18:27
Assume that $x \in A \cup (B - C)$. Then ($x \in A$) OR $(x \in B$ and $x \notin C$).
If $x \in A$, then $x \in A \cup B$ and $x \notin C - A$, so $x \in (A \cup B) - (C-A)$.
If $x \in B$ and $x \notin C$, then $x \in A \cup B$ and $x \notin C - A$, so $x \in (A \cup B) - (C-A)$.
This means that $A \cup (B-C) \subset (A \cup B)-(C-A)$.
• This solution is using the methods and notation that we learned in class. Thank you so much for helping out. – KleinBottle Dec 4 '14 at 18:44
• What about the other direction? – Mars Dec 5 '14 at 3:33
• D.J. proved the other direction. – desos Dec 5 '14 at 8:07
If you need a purely algebraic-looking proof, I would write \begin{align} (A\cup B)\setminus(C\setminus A) &= (A\cup B)\setminus(C\cap A^\complement) \\&= (A\cup B)\cap (C\cap A^\complement)^\complement \\&= (A\cup B)\cap (C^\complement \cup A) \\&= A\cup(B\cap C^\complement) \\&= A\cup(B\setminus C) \end{align}
• Would it be possible for you to complete the algebra? I'm not too familiar with your algebraic approach but I've seen the compliment notation and I'd like to learn this method as well. – KleinBottle Dec 4 '14 at 18:42
• Thank you Henning, this is very helpful to add another technique to my tool kit. – KleinBottle Dec 4 '14 at 18:45
• On an related note. What is the markup language or tool you guys are using for your data entry in these solutions? I need to graduate to math for web. – KleinBottle Dec 4 '14 at 18:48
• @isomorphism: It's called MathJax and is largely the same input format as math mode in LaTeX. Here is a tutorial. – Henning Makholm Dec 4 '14 at 18:49
• Great, thanks I will take a look. I've been dragging my heels with regard to learning LaTeX, but as an EE student I should probably learn some of it sooner rather than later. – KleinBottle Dec 4 '14 at 18:52
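Alongside the element-chasing and algebraic proofs, a brute-force check on random finite sets (a sketch of mine, with an arbitrary universe of ten elements) is a quick way to catch a mis-stated identity; here both sides agree on every trial, as the proofs guarantee.

```python
import random

random.seed(1)
universe = range(10)

for _ in range(1000):
    A = {x for x in universe if random.random() < 0.5}
    B = {x for x in universe if random.random() < 0.5}
    C = {x for x in universe if random.random() < 0.5}
    lhs = (A | B) - (C - A)     # (A ∪ B) - (C - A)
    rhs = A | (B - C)           # A ∪ (B - C)
    assert lhs == rhs, (A, B, C)

print("(A ∪ B) - (C - A) == A ∪ (B - C) held on all random trials")
```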
Proof of $\subseteq$:
Let $x \in (A \cup B) - (C - A)$. Then $x \in A$ or $x \in B$, and $x \notin (C-A)$. The last part means that either $x \in C \cap A$ or $x \notin C$.
Suppose $x \in C \cap A$. Then $x \in A$, so we're done as $A \subseteq A \cup (B-C)$.
Suppose $x \notin C$. But we also know that either $x \in A$ (in which case we're done) or $x \in B$ (in which case we're done, as $x \in (B-C)$).
I'll leave $\supseteq$ for you to try along similar lines.
• Thanks D.J, this exercise is very helpful. – KleinBottle Dec 4 '14 at 18:45 | 2019-08-18T01:31:39 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1051816/need-help-to-prove-a%E2%88%AAb-c-a-a-%E2%88%AA-b-c",
"openwebmath_score": 0.8201605677604675,
"openwebmath_perplexity": 830.8742176925548,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.975946445706356,
"lm_q2_score": 0.8596637559030338,
"lm_q1q2_score": 0.8389857870761422
} |
https://stats.stackexchange.com/questions/181620/what-is-the-meaning-of-super-script-2-subscript-2-within-the-context-of-norms/181622 | # What is the meaning of super script 2 subscript 2 within the context of norms?
I am new to optimization. I keep seeing equations that have a superscript 2 and a subscript 2 on the right-hand side of a norm. For instance, here is the least squares equation
min $||Ax-b||^2_2$
I think I understand the superscript 2: it means to square the value of the norm. But what is the subscript 2? How should I read these equations?
• $||\theta||_p$ is the $\ell_p$-norm of $\theta$. Let's say $\theta$ is $d$-dimensional, then $||\theta||_p = \left(\sum_{i=1}^d |\theta_i|^p\right)^\frac{1}{p}$. – Sobi Dec 15 '15 at 17:21
• Single vertical bars are used for absolute value (magnitude): $|\theta|$ – Scortchi Dec 15 '15 at 17:28
• Thanks!...but what is the superscript 2 for?...the subsript is for the pth norm....the superscript is for? – mathopt Dec 15 '15 at 18:37
• @user1467929: Squaring - if it's anything else they'd surely have said. – Scortchi Dec 15 '15 at 19:48
You are right about the superscript. The subscript $||.||_p$ specifies the $p$-norm.
Therefore:
$$||x||_p=\left(\sum_i|x_i|^p\right)^{1/p}$$
And:
$$||x||_p^p=\sum_i|x_i|^p$$
• ah. And there are conventions for the meanings of the subscripts I see. en.wikipedia.org/wiki/Norm_(mathematics)#p-norm. So like 1 = taxicab norm, 2=euclid norm etc – bernie2436 Nov 13 '15 at 14:04
• @bernie2436: These are special cases of the general definition given in the answer above (except maybe the sup-norm with $p = \infty$) – Michael M Nov 13 '15 at 14:55
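A short NumPy sketch of mine (not part of the answers) showing how the subscript and the superscript combine in practice: the subscript selects which $p$-norm is computed, and the superscript merely squares the resulting number.

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

l1   = np.linalg.norm(x, 1)        # |3| + |-4| + |0| = 7
l2   = np.linalg.norm(x, 2)        # sqrt(9 + 16) = 5
linf = np.linalg.norm(x, np.inf)   # max(|x_i|) = 4

print(l1, l2, linf)
print(l2 ** 2)                     # ||x||_2^2 = 25, the squared Euclidean norm
```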
$\|x\|_2$ is the Euclidean norm of the vector $x$; $\|x\|_2^2$ is the squared Euclidean norm of $x$. Note that since the Euclidean norm is probably the most commonly used norm, it is routinely abbreviated to $\|x\|$. By definition, for a Euclidean vector space: $\|x\|_2 := \sqrt{x_1^2 + x_2^2 + \dots + x_n^2}$.
As mentioned in the comments, the subscript $p$ refers to the degree of the norm. Other commonly used norms are for $p = 0$, $p = 1$ and $p = \infty$. For $p=0$ one gets the number of non-zero elements in $x$, for $p=1$ (ie. $\|x\|_1$) one gets the Manhattan norm and for $p = \infty$ one gets the maximum absolute value from the elements in $x$. Both $p = 0$ and $p = 1$ are popular in sparse/compressed application settings where one wants to "urge" some coefficient(s) to be zero. | 2019-02-15T20:04:17 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/181620/what-is-the-meaning-of-super-script-2-subscript-2-within-the-context-of-norms/181622",
"openwebmath_score": 0.8588255643844604,
"openwebmath_perplexity": 415.61295010562316,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464471055738,
"lm_q2_score": 0.8596637523076224,
"lm_q1q2_score": 0.83898578477007
} |
https://math.stackexchange.com/questions/1807155/prove-that-there-is-a-base-of-mathbb-r4-made-of-eigenvectors-of-matrix-a | # Prove that there is a base of $\mathbb R^4$ made of eigenvectors of matrix $A$
Matrix of linear operator $\mathcal A$:$\mathbb R^4$ $\rightarrow$ $\mathbb R^4$ is $$A= \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & -1 & -1 & -1\\ \end{bmatrix}$$ Prove that there is a base of $\mathbb R^4$ made of eigenvectors of matrix $A$. Using the new base, find matrix of that operator.
I hope I translated all correctly.
This is what I have done so far.
1. I found characteristic polynomial of matrix $A$, so I can get eigenvalues and thus find eigenvectors. My characteristic polynomial is $$p_A(\lambda)=\lambda^4-2\lambda^3-6\lambda^2+16\lambda-8$$
2. My eigenvalues are $$\lambda_1=\lambda_2=2$$ $$\lambda_3=-1-\sqrt3$$ $$\lambda_4=-1+\sqrt3$$
3. After that I calculated my eigenvectors. This is where I need help understanding. Eigenvectors that belong to different eigenvalues are linearly independent so then they can make a base. In this case, I have two equal eigenvalues. But, when I calculate: $$A\overrightarrow v=\lambda_1 \overrightarrow v$$ where $\overrightarrow v=(x_1,x_2,x_3,x_4)$ is eigenvector for eigenvalue 2 I get this form (final):$$[A-\lambda_1I]= \begin{bmatrix} -1 & 1 & 1 & 1 \\ 0 & 0 & 0 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}$$
So, my vector $$\overrightarrow v= \begin{bmatrix} x_2+x_3 \\ x_2\\ x_3\\ 0\\ \end{bmatrix}=x_2\begin{bmatrix} 1 \\ 1\\ 0\\ 0\\ \end{bmatrix}+x_3\begin{bmatrix} 1 \\ 0\\ 1\\ 0\\ \end{bmatrix}$$
So, I am not even sure how to ask this question. Even though two of the eigenvalues were the same, I did get a general eigenvector that is actually a linear combination of two linearly independent vectors. Is this observation correct?
After that I calculated eigenvectors for remaining eigenvalues. These were results:
$\overrightarrow v_3 = x'_4\begin{bmatrix} -\sqrt3 \\ \sqrt3\\ \sqrt3\\ 1\\ \end{bmatrix}$ where $\overrightarrow v_3=(x'_1,x'_2,x'_3,x'_4)$ for $\lambda_3=-1-\sqrt3$
$\overrightarrow v_4 = x''_4\begin{bmatrix} \sqrt3 \\ -\sqrt3\\ -\sqrt3\\ 1\\ \end{bmatrix}$ where $\overrightarrow v_4=(x''_1,x''_2,x''_3,x''_4)$ for $\lambda_4=-1+\sqrt3$
So, in this case, is my base:
$$B= \begin{bmatrix} 1 & 1 & -\sqrt3 & \sqrt3 \\ 1 & 0 & \sqrt3 & -\sqrt3\\ 0 & 1 & \sqrt3 & -\sqrt3\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$?
and would new matrix of operator $\mathcal A$ be $B^{-1}AB$?
I also have one more question: Is there some shorter way in finding these results? I am not lazy to do these calculations, but it is easy to make a mistake when time is short. Could I conclude something by looking at matrix $A$ to help me find eigenvalues and eigenvectors faster?
• You actually proved the eigenspace $E_2$ has dimension $2$, so the geometric multiplicity of the eigenvalue $2$ is equal to its algebraic multiplicity, and the matrix is diagonalisable. I don't think there is any shorter way. – Bernard May 31 '16 at 16:42
• Your questions is perfectly valid as it is, but I think there's a tiny chance you copied it wrongly, and that the last diagonal entry of $A$ is actually $1$ (not $-1$) — for purely aesthetic reasons. Then the columns of $A$ form an orthogonal (but not orthonormal) basis of $\mathbb R^4$. Again, your question as it is at present is perfectly alright! – M. Vinay Jun 1 '16 at 2:07
• @M.Vinay Nice to know my question makes sense. Oh, and it is $-1$ indeed but I see your point. – Asleen Jun 1 '16 at 14:46
• @Asleen Ah, okay. But it would make an interesting question if it were $+1$ :) – M. Vinay Jun 1 '16 at 14:49
Note that your matrix $A$ is symmetric and hence diagonalizable. You don't even need to find the eigenvalues of $A$ to conclude that there exists a basis of eigenvectors for $A$. I don't see any calculation-free way to find the eigenvalues of $A$ but once you find them, you don't need to know the eigenvectors in order to know how the operator will look with respect to a basis of eigenvectors. If the eigenvectors are $v_1, \dots, v_4$ with $Av_i = \lambda_i v_i$ then with respect to $(v_1, \dots, v_4)$ the operator will be $\operatorname{diag}(\lambda_1, \dots, \lambda_4)$.
If you are not asked explicitly to find a basis of eigenvectors for $A$, you can skip 3 entirely and say that $A$ can be represented as $\operatorname{diag}(2,2,-1-\sqrt{3},-1+\sqrt{3})$ (or by any matrix that is obtained by permuting the rows).
Last comment - the trace of your matrix is 2 and this should be the sum of the eigenvalues $\lambda_1 + \dots + \lambda_4$. This can be used for "sanity check" after calculating the eigenvalues to make sure you haven't done a computation error (this doesn't guarantee that you haven't made a mistake but provides some evidence for it).
• Thank you very much. This does help a lot. Is there any chance you could tell me am I right about the last question? Would new matrix of operator be matrix $B^{-1}AB$? Thank you. – Asleen Jun 1 '16 at 14:35
• Yes, the new matrix will indeed be $B^{-1}AB$. Note that you don't have to calculate $B^{-1}$ and perform the multiplication explicitly since you know that if you haven't made a mistake, the result will be $\operatorname{diag}(2,2,-1-\sqrt{3},-1+\sqrt{3})$. – levap Jun 1 '16 at 18:59
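The diagonalization can also be confirmed numerically. The following NumPy sketch (mine, not part of the thread) lets `eig` produce an eigenvector basis $P$ and checks that $P^{-1}AP$ is diagonal with the eigenvalues found above; comparing the computed eigenvectors for $\lambda=-1\pm\sqrt3$ with the hand-derived ones is a cheap way to catch arithmetic slips.

```python
import numpy as np

A = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1, -1]], dtype=float)

vals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
print(np.sort(vals))             # approximately [-1-sqrt(3), -1+sqrt(3), 2, 2]

D = np.linalg.inv(P) @ A @ P     # change of basis to the eigenvector basis
print(np.round(D, 8))            # diagonal matrix carrying the eigenvalues
```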
Carl Meyer
$\S$ 7.2, eqn 7.2.5, p 512
Diagonalizability and Multiplicities
The matrix $\mathbf{A}\in\mathcal{C}^{n\times n}$ is diagonalizable iff $$geometric\ multiplicity _{\mathbf{A}} \left( \lambda \right) = algebraic\ multiplicity _{\mathbf{A}} \left( \lambda \right)$$ for each $\lambda\in\sigma \left( \mathbf{A} \right)$. That is, iff every eigenvalue is semisimple.
Application
You have identified the eigenvalues that their algebraic multiplicities. The issue is to quantify the geometric multiplicity of the eigenvalue $\lambda = 2$.
The geometric multiplicity $$geometric\ multiplicity _{\mathbf{A}} \left( 2 \right) = \dim N \left( \mathbf{A} - 2 \mathbf{I}_{\,4} \right)$$ $$\mathbf{A} - 2 \mathbf{I}_{\,4} = \left[ \begin{array}{rrrr} -1 & 1 & 1 & 1 \\ 1 & -1 & -1 & -1 \\ 1 & -1 & -1 & -1 \\ 1 & -1 & -1 & -3 \\ \end{array} \right]$$ The row reduction process is immediate and leaves $$\left[ \begin{array}{rrrr} 1 & -1 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right].$$ The rank of this matrix is 2; therefore the geometric multiplicity is 2. Therefore $$geometric\ multiplicity _{\mathbf{A}} \left( 2 \right) = algebraic\ multiplicity _{\mathbf{A}} \left( 2 \right)$$ and $\mathbf{A}$ is diagonalizable. | 2019-10-22T18:38:22 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1807155/prove-that-there-is-a-base-of-mathbb-r4-made-of-eigenvectors-of-matrix-a",
"openwebmath_score": 0.9379472732543945,
"openwebmath_perplexity": 170.48798523128687,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464492044005,
"lm_q2_score": 0.8596637487122111,
"lm_q1q2_score": 0.8389857830654265
} |
https://math.stackexchange.com/questions/2788896/spivak-calculus-4-th-ed-chapter-2-exercise-13a-understanding-the-proof-of/2788904 | # Spivak Calculus 4-th Ed., Chapter 2, Exercise 13a, Understanding the proof of $\sqrt3$ being irrational.
Problem Statement:
Prove that $\sqrt3$ is irrational. Hint: To treat $\sqrt3$, for example, use the fact that every integer is of the form $3n$, $3n+1$ or $3n+2$.
Since $$\\ (3n+1)^2 = 9n^2 +6n + 1 = 3(3n^2+2n)+1 \\ (3n+2)^2 = 9n^2 +12n + 4 = 3(3n^2+4n +1)+1$$ He proceeds then to state that if $k^2$ is divisible by 3, then so is $k$. I have a hard time understanding what $k$ he is talking about in these equations, it hasn't been defined earlier. He then proceeds:
Supose $\sqrt3$ were rational, and let $\sqrt3 = p/q$, where $p$ and $q$ have no common factor. Then $p^2=3q^2$, so $p^2$ is divisible by 3, so $p$ must be. Thus, $p=3p'$ for some natural number $p'$ and consequently $(3p')^2=3q^2$, or $3(p')^2=q^2$. Thus, $q$ is also divisible by 3, a contradiction.
I have a hard time linking the first part where he states that "$k^2$ being divisible by 3 leads to $k$ being divisible by 3" and the second part. Where does the first conclusion come from?
NOTE: Problem statement in the book in the same manner requests the proof of $\sqrt5$ and $\sqrt6$ being irrational. However, in this question I'm interested in the proof for the case of $\sqrt3$.
• To those who don't have a copy of Spivak open at all times (and managed to guess correctly the edition you're using), how about giving the title the content of the actual problem? I mean, is "Understanding the proof that $\sqrt3$ is irrational in Spivak" is a far superior title. Don't you agree? – Asaf Karagila May 20 '18 at 23:22
• I've edited the title and the description. – Eval May 21 '18 at 11:47
• That's much better. Thanks. – Asaf Karagila May 21 '18 at 11:48
Let $k$ be an integer. It can take one of the following forms: $3n$ or $3n+1$ or $3n+2$ where $n$ is simply the quotient in the euclidean division of $k$ by $3$.
Now:
• if $k=3n$ then $k^2 = 9n^2$
• if $k=3n+1$, then $k^2 = 3\times ... +1$ (we don't really care of the exact value of $...$), thus $k^2$ isn't multiple of $3$
• if $k=3n+2$, then $k^2 = 3\times ... +1$, thus $k^2$ isn't multiple of $3$.
Thus, if $k^2$ is multiple of 3, then $k$ can't be of the form $3n+1$ or $3n+2$. Hence, it must be of the form $3n$, that is, $k$ must be multiple of 3.
Hence, if $k^2$ is multiple of $3$, then $k$ is also multiple of $3$.
• Got it. Nice explanation. – Eval May 20 '18 at 16:15
• Yeah. A slightly more complicated way of saying that $k^2 \equiv 0\;(\operatorname{mod} 3)$ if $k \equiv 0\;(\operatorname{mod} 3)$, $1$ if $k \equiv 1\;(\operatorname{mod} 3)$, or 1 if $k \equiv 2\;(\operatorname{mod} 3)$, Good answer. – Davislor May 20 '18 at 20:56
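The key fact used here, that $k^2 \bmod 3$ is $0$ or $1$ and is $0$ only when $k$ is a multiple of $3$, can also be checked by brute force; the tiny sketch below is mine, not from the book.

```python
# k^2 mod 3 as a function of k mod 3: residue 0 -> 0, residues 1 and 2 -> 1.
print({r: (r * r) % 3 for r in (0, 1, 2)})   # {0: 0, 1: 1, 2: 1}

# Spot-check over a range of integers: k^2 divisible by 3 forces k divisible by 3.
assert all((k % 3 == 0) == (k * k % 3 == 0) for k in range(-100, 101))
print("k^2 divisible by 3  <=>  k divisible by 3 (checked for |k| <= 100)")
```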
The statement
If $k^2$ is divisible by $3$ then so is $k$.
holds for all integers $k$. He uses this fact below for $k=p$ and for $k=q$.
In the first part, Spivak proves that if a number is not a multiple of $3$ (that is, if it is of the type $3n+1$ or $3n+2$), then its square is not a multiple of $3$. Therefore, if the square is a multiple of $3$, then the number itself must be a multiple of $3$ too. | 2019-05-27T10:32:10 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2788896/spivak-calculus-4-th-ed-chapter-2-exercise-13a-understanding-the-proof-of/2788904",
"openwebmath_score": 0.9272493124008179,
"openwebmath_perplexity": 141.2511283244924,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9759464450067471,
"lm_q2_score": 0.8596637469145053,
"lm_q1q2_score": 0.8389857777023915
} |
https://math.stackexchange.com/questions/2657349/suppose-you-have-two-urns-at-the-beginning-of-the-experiment-urn-1-contains-th | # Suppose you have two urns. At the beginning of the experiment, Urn 1 contains three yellow balls, three red balls, and three green balls
Suppose you have two urns. At the beginning of the experiment, Urn 1 contains three yellow balls, three red balls, and three green balls. Also, Urn 2 contains one yellow ball, two green balls and four purple balls. Consider a two-stage experiment in which we randomly draw three balls from Urn 1 and move them to Urn 2, and then we randomly draw one ball from the updated Urn 2.
a.Define two events as follows:A = { Two Yellow balls and one green ball are moved to urn 2}
and
B ={A green ball is drawn from urn 2}
Find the probabilities of these two events.
b.Are A and B independent?
c.Find the probability that at least two of the balls moved from Urn 1 to Urn 2 were yellow, given that the ball drawn from Urn 2 was yellow. There is no need to simplify fraction
Work: For P(A) I did (3C2*3C0*3C1)/9C3. Is this correct? (FYI: 3C2 = 3 choose 2.) I am a little lost for P(B). For B I need A to solve for it, and I am lost on C as well.
What you did to find $P(A)$ is okay: $$P(A)=\frac{\binom32\binom30\binom31}{\binom93}$$
For a fixed green ball located originally in urn1 (there are $3$ such balls), the probability that it is drawn from urn2 at the second experiment is $\frac39\cdot\frac1{10}$. The first factor is the probability that it will be moved to urn2 at the first experiment, and the second is the probability that - if this indeed happens - it will be drawn at the second experiment.
For a fixed green ball located originally in urn2 (there are $2$ such balls) the probability to be drawn at the second experiment is $\frac1{10}$.
This concerns $5$ mutually exclusive events and leads to:$$P(B)=3\cdot\frac39\frac1{10}+2\cdot\frac1{10}=\frac3{10}$$
Further it is not difficult to find that also $P(B\mid A)=\frac3{10}$ so that actually $P(B\mid A)=P(B)$.
This allows us to conclude that $A$ and $B$ are independent.
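A quick Monte Carlo sketch of parts (a) and (b), with the ball counts from the problem (the trial count is arbitrary):

```python
import random

urn1 = ["Y"] * 3 + ["R"] * 3 + ["G"] * 3
urn2_base = ["Y"] * 1 + ["G"] * 2 + ["P"] * 4

trials = 200_000
count_A = count_B = count_AB = 0
for _ in range(trials):
    moved = random.sample(urn1, 3)                       # three balls from urn 1
    A = moved.count("Y") == 2 and moved.count("G") == 1  # two yellow, one green moved
    drawn = random.choice(urn2_base + moved)             # one ball from the updated urn 2
    B = drawn == "G"
    count_A += A
    count_B += B
    count_AB += A and B

print(count_A / trials)   # about 3/28 ~ 0.107
print(count_B / trials)   # about 3/10 = 0.3
print(count_AB / trials, (count_A / trials) * (count_B / trials))  # roughly equal
```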
Let $X$ denote the number of yellow balls drawn at first experiment and let $E$ denote the event that a yellow ball is drawn at second experiment.
Then to be found is $P(X\geq2\mid E)=P(X=2\mid E)+P(X=3\mid E)$.
Here $P(X=i\mid E)P(E)=P(X=i\wedge E)=P(E\mid X=i)P(X=i)$ for $i=2,3$.
So finding $P(E)$ and $P(E\mid X=i)$ and $P(X=i)$ for $i=2,3$ is enough for finding $P(X\geq2\mid E)$.
Give it a try. | 2021-09-20T08:19:41 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2657349/suppose-you-have-two-urns-at-the-beginning-of-the-experiment-urn-1-contains-th",
"openwebmath_score": 0.8880358338356018,
"openwebmath_perplexity": 249.0718567722257,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9941800416228128,
"lm_q2_score": 0.8438951045175642,
"lm_q1q2_score": 0.83898367013456
} |
https://mathematica.stackexchange.com/questions/191937/multiplying-elements-of-a-list/191938 | Multiplying elements of a list
Given I have a following list of numbers:
l={2,3,4,5,6}
I wonder how to multiply each number by the one before it, such that I get
{2*3,2*3*4,2*3*4*5,...}
Also, how can I choose a level for this operation? By level I mean,
level 1 : 2*3
level 2 : 2*3*4
and so on.
FoldList[Times, l] (* or *)
FoldList[Times]@l
{2, 6, 24, 120, 720}
Also:
Exp @ Accumulate @ Log @ l
{2, 6, 24, 120, 720}
list = Range[2, 6]
(* {2, 3, 4, 5, 6} *)
To keep the factors separate, use Inactive with kglr's solution
list2 = Rest@FoldList[Inactive@Times, list[[1]], Rest@list]
Activate produces the result
list3 = list2 // Activate
(* {6, 24, 120, 720} *)
Or use NonCommutativeMultiply to hold the factors
list4 = Rest@FoldList[NonCommutativeMultiply, list[[1]], Rest@list]
(* {2 ** 3, 2 ** 3 ** 4, 2 ** 3 ** 4 ** 5, 2 ** 3 ** 4 ** 5 ** 6} *)
Then Apply Times to get the final result
list5 = Times @@@ list4
(* {6, 24, 120, 720} *)
For this specific case (sequential numbers), the result is just Factorial
list3 == list5 == Factorial /@ Rest@list
(* True *)
Use Part to access any element of the result, e.g.,
list3[[1]]
(* 6 *)
This can also be solved using recursion. The function cumProd (cumulative product) can be defined as:
list = Range[10];
cumProd[n_] := cumProd[n - 1]*list[[n]];
cumProd[1] = list[[1]];
To use:
cumProd[6]
720
gives the 6th "level" of the product. Of course, list can be any set of numbers. Applying this to the whole list:
cumProd/@Range[Length[list]]
{1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800}
We can use the MapIndexed function
list = {2, 3, 4, 5, 6}
f[x_, {i_}] := Times @@ list[[1 ;; i]]
Rest[MapIndexed[f, list, {1}]]
(* {6, 24, 120, 720} *)
level[x_] := Times @@ l[[;; x + 1]]; | 2019-06-25T20:40:51 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/191937/multiplying-elements-of-a-list/191938",
"openwebmath_score": 0.5178362727165222,
"openwebmath_perplexity": 6131.081321710374,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98028087477309,
"lm_q2_score": 0.8558511524823263,
"lm_q1q2_score": 0.8389745164309321
} |
https://engineering.stackexchange.com/questions/11357/moment-of-inertia-of-a-rectangular-cross-section | Moment of Inertia of a Rectangular Cross Section
I have a question that has been annoying me for a long time. I know that I can calculate the moment of inertia of a rectangular cross section around a given axis located on its centroid by the following formulas:
I also know that, more generically, the moment of inertia is given by the integral of an area times the square of the distance from its centroid to the axis.
So lets say I have a rectangular section with a height of 200 mm and a width of 20 mm.
If I use the formulas of the first method, in relation to an x axis parallel to the width:
$$I_x=\frac{bh^3}{12}=\frac{20\cdot200^3}{12}=1333.33\text{ cm}^4$$
Using the second method, why do I get a different result when calculating twice the area of half a section multiplied by the square of the distance from its centroid to the x axis?
$$I_x= 2A_{half\ section}d^2 = 2\cdot(200/2\cdot20)*(200/4)^2= 1000\text{ cm}^4$$
You have misunderstood the parallel axis theorem.
The moment of inertia of an object around an axis is equal to
$$I = \iint\limits_R\rho^2\text{d}A$$
where $\rho$ is the distance from any given point to the axis. In the case of a rectangular section around its horizontal axis, this can be transformed into
\begin{align} I_x &= \int\limits_{-b/2}^{b/2}\int\limits_{-h/2}^{h/2}y^2\text{d}y\text{d}x \\ I_x &= \int\limits_{-b/2}^{b/2}\left.\dfrac{1}{3}y^3\right\rvert_{-h/2}^{h/2}\text{d}x \\ I_x &= \int\limits_{-b/2}^{b/2}\dfrac{1}{3}\dfrac{h^3}{4}\text{d}x \\ I_x &= \left.\dfrac{1}{3}\dfrac{h^3}{4}x\right\rvert_{-b/2}^{b/2} \\ I_x &= \dfrac{bh^3}{12} \end{align}
Now, what if we wanted to get the inertia around some other axis at a distance $r$ from our centroid? In this case, all we have to do is:
$$I = \iint\limits_R(\rho+r)^2\text{d}A$$ $$I = \iint\limits_R\left(\rho^2 + 2\rho r + r^2\right)\text{d}A$$ $$I = \iint\limits_R\rho^2\text{d}A + 2r\iint\limits_R\rho\text{d}A + r^2\iint\limits_R\text{d}A$$
The first component $\iint\limits_R\rho^2\text{d}A$ is simply equal to the original moment of inertia. The second component $2r\iint\limits_R\rho\text{d}A$ is equal to zero since we're integrating around the centroid (it'll become a function of $y^2$, which when integrated from $-h/2$ to $h/2$ gives zero). The third component is equal to $Ar^2$. So, in the end, we get:
$$I' = I + Ar^2$$
So, if you want to calculate the moment of inertia of a rectangular section by considering each of its halves (half above the centroid, half below), you need to do:
\begin{align} I_{half} &= \dfrac{b\left(\dfrac{h}{2}\right)^3}{12} \\ I'_{half} &= I_{half} + b\left(\dfrac{h}{2}\right)\left(\dfrac{h}{4}\right)^2 \\ &= \dfrac{bh^3}{96} + \dfrac{bh^3}{32} = \dfrac{bh^3}{24} \\ I_{full} &= 2I'_{half} = \dfrac{bh^3}{12} \end{align}
Which is the original value for the full section. QED.
The following sentence is not correct:
the moment of inertia is given by the integral of an area times the square of the distance from its centroid to the axis
You have to add to that, the moment of inertia of the area around its own centroid. That is what the parrallel axis theorem is all about: $$I = I_o + A\cdot d^2$$
where: - Io the moment of inertia around centroid - I is the moment of inertia around any parallel axis and - d the distance between the two axes
So applying the above to your example, each half area (below and above centroidal axis) should have a moment of inertia equal to:
$$I_{half} = \frac{b (h/2)^3}{12} + \frac{bh}{2}\cdot\left(\frac{h}{4}\right)^2$$ $$I_{half} = \frac{b h^3}{96} + \frac{b h^3}{32}$$ $$I_{half} = \frac{b h^3}{24}$$
Therefore, for the whole section, due to symmetry: $$I = 2 I_{half} = \frac{b h^3}{12}$$
Demonstrating the example with your numbers: $$I = 2\left(\frac{20 (100)^3}{12} + 20\cdot 100\cdot\left(50\right)^2 \right)\,mm^4$$ $$I = 2\left(1666666.7 + 5000000 \right) \,mm^4$$ $$I = 13333333.3 \,mm^4 = 1333.33 cm^4$$
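The same arithmetic in a few lines of Python (dimensions in mm, as above):

```python
b, h = 20.0, 200.0

I_direct = b * h**3 / 12                                 # full-section formula
I_half = b * (h / 2)**3 / 12 + (b * h / 2) * (h / 4)**2  # centroidal term + A*d^2
print(I_direct, 2 * I_half)  # both 13333333.33... mm^4 = 1333.33 cm^4
```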
Usually in engineering cross sections the parallel axis term $Ad^2$ is much bigger than the centroidal term $I_o$. It is rather acceptable to ignore the centroidal term for the flange of an I/H section, for example, because $d$ is big and the flange thickness (the $h$ in the above formulas) is quite small. In other circumstances, however, this is not acceptable. | 2021-09-25T16:25:41 | {
"domain": "stackexchange.com",
"url": "https://engineering.stackexchange.com/questions/11357/moment-of-inertia-of-a-rectangular-cross-section",
"openwebmath_score": 0.9997500777244568,
"openwebmath_perplexity": 355.068648356889,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808753491772,
"lm_q2_score": 0.8558511488056151,
"lm_q1q2_score": 0.8389745133197672
} |
https://www.fairpact.in/cjkm2mer/ba148a-injection%2C-surjection%2C-bijection | 2 \ne 3.2=3. For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of … As we shall see, in proofs, it is usually easier to use the contrapositive of this conditional statement. Then is a bijection : Injection: for all , this follows from injectivity of ; for this follows from identity; Surjection: if and , then for some positive , , and some , where i.e. In mathematics, injections, surjections and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.. A function maps elements from its domain to elements in its codomain. Missed the LibreFest? Justify all conclusions. To explore wheter or not $$f$$ is an injection, we assume that $$(a, b) \in \mathbb{R} \times \mathbb{R}$$, $$(c, d) \in \mathbb{R} \times \mathbb{R}$$, and $$f(a,b) = f(c,d)$$. This is enough to prove that the function $$f$$ is not an injection since this shows that there exist two different inputs that produce the same output. Let $$f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$$ be the function defined by $$f(x, y) = -x^2y + 3y$$, for all $$(x, y) \in \mathbb{R} \times \mathbb{R}$$. This means that for every $$x \in \mathbb{Z}^{\ast}$$, $$g(x) \ne 3$$. Let $$T = \{y \in \mathbb{R}\ |\ y \ge 1\}$$, and define $$F: \mathbb{R} \to T$$ by $$F(x) = x^2 + 1$$. Bijection (injection et surjection) : On dit qu’une fonction est bijective si tout élément de son espace d’arrivée possède exactement un antécédent par la fonction. This is equivalent to the following statement: for every element b in the codomain B, there is exactly one element a in the domain A such that f(a)=b.Another name for bijection is 1-1 correspondence (read "one-to-one correspondence).. Then, \[\begin{array} {rcl} {s^2 + 1} &= & {t^2 + 1} \\ {s^2} &= & {t^2.} /buy jek sheuhn/, n. Math. There exists a $$y \in B$$ such that for all $$x \in A$$, $$f(x) \ne y$$. Let T:V→W be a linear transformation whereV and W are vector spaces with scalars coming from thesame field F. V is called the domain of T and W thecodomain. It is a good idea to begin by computing several outputs for several inputs (and remember that the inputs are ordered pairs). A bijection is a function that is both an injection and a surjection. It is given that only one of the following 333 statement is true and the remaining statements are false: f(x)=1f(y)≠1f(z)≠2. 2002, Yves Nievergelt, Foundations of Logic and Mathematics, page 214, Then fff is surjective if every element of YYY is the image of at least one element of X.X.X. Then for that y, f -1 (y) = f -1 (f(x)) = x, since f -1 is the inverse of f. image(f)={y∈Y:y=f(x) for some x∈X}.\text{image}(f) = \{ y \in Y : y = f(x) \text{ for some } x \in X\}.image(f)={y∈Y:y=f(x) for some x∈X}. Injection. Is the function $$g$$ a surjection? From French bijection, introduced by Nicolas Bourbaki in their treatise Éléments de mathématique. The function f :{US senators}→{US states}f \colon \{\text{US senators}\} \to \{\text{US states}\}f:{US senators}→{US states} defined by f(A)=the state that A representsf(A) = \text{the state that } A \text{ represents}f(A)=the state that A represents is surjective; every state has at least one senator. 
4.2 The partitioned pr ocess theory of functions and injections. Preview Activity $$\PageIndex{1}$$: Functions with Finite Domains. My working definition is that, for finite sets S,T , they have the same cardinality iff there is a bijection between them. Thus, the inputs and the outputs of this function are ordered pairs of real numbers. Injective is also called " One-to-One ". Is the function $$g$$ an injection? This concept allows for comparisons between cardinalities of sets, in proofs comparing the sizes of both finite and … One of the conditions that specifies that a function $$f$$ is a surjection is given in the form of a universally quantified statement, which is the primary statement used in proving a function is (or is not) a surjection. Already have an account? If the function $$f$$ is a bijection, we also say that $$f$$ is one-to-one and onto and that $$f$$ is a bijective function. The term bijection and the related terms surjection and injection were introduced by Nicholas Bourbaki. for all $$x_1, x_2 \in A$$, if $$x_1 \ne x_2$$, then $$f(x_1) \ne f(x_2)$$; or. Then fff is bijective if it is injective and surjective; that is, every element y∈Y y \in Yy∈Y is the image of exactly one element x∈X. a function which is both a surjection and an injection. This is especially true for functions of two variables. Justify all conclusions. W e. consid er the partitione Note that the above discussions imply the following fact (see the Bijective Functions wiki for examples): If X X X and Y Y Y are finite sets and f :X→Y f\colon X\to Y f:X→Y is bijective, then ∣X∣=∣Y∣. Therefore is accounted for in the first part of the definition of ; if , again this follows from identity So it appears that the function $$g$$ is not a surjection. We will use systems of equations to prove that $$a = c$$ and $$b = d$$. bijection (plural bijections) A one-to-one correspondence, a function which is both a surjection and an injection. Slight mistake, I meant to prove that surjection implies injection, not the other way around. To have an exact pairing between X and Y (where Y need not be different from X), four properties must hold: 1. each element of X must be paired with at least one element of Y, 2. no element of X may be paired with more than one element of Y, 3. each element of Y must be paired with at least one element of X, and 4. no element of Y may be paired with more than one element of X. Sign up to read all wikis and quizzes in math, science, and engineering topics. \\ \end{aligned} f(x)f(y)f(z)===112.. ∀y∈Y,∃x∈X such that f(x)=y.\forall y \in Y, \exists x \in X \text{ such that } f(x) = y.∀y∈Y,∃x∈X such that f(x)=y. Sommaire. The term surjection and the related terms injection and bijection were introduced by the group of mathematicians that called itself Nicholas Bourbaki. Therefore is accounted for in the first part of the definition of ; if , again this follows from identity shən] (mathematics) A mapping ƒ from a set A onto a set B which is both an injection and a surjection; that is, for every element b of B there is a unique element a of A for which ƒ (a) = b. The range is always a subset of the codomain, but these two sets are not required to be equal. Then is a bijection : Injection: for all , this follows from injectivity of ; for this follows from identity; Surjection: if and , then for some positive , , and some , where i.e. See also injection 5, surjection Let $$z \in \mathbb{R}$$. Si une surjection est aussi une injection, alors on l'appelle une bijection. 
That is, if x1x_1x1 and x2x_2x2 are in XXX such that x1≠x2x_1 \ne x_2x1=x2, then f(x1)≠f(x2)f(x_1) \ne f(x_2)f(x1)=f(x2). Is the function $$f$$ a surjection? Injection means that every element in A maps to a unique element in B. \mathbb Z.Z. Thus, f : A ⟶ B is one-one. The following alternate characterization of bijections is often useful in proofs: Suppose X X X is nonempty. In addition, functions can be used to impose certain mathematical structures on sets. That is. A function f : A ⟶ B is said to be a one-one function or an injection, if different elements of A have different images in B. One major difference between this function and the previous example is that for the function $$g$$, the codomain is $$\mathbb{R}$$, not $$\mathbb{R} \times \mathbb{R}$$. See also injection 5, surjection Example Can we find an ordered pair $$(a, b) \in \mathbb{R} \times \mathbb{R}$$ such that $$f(a, b) = (r, s)$$? A common proof technique in combinatorics, number theory, and other fields is the use of bijections to show that two expressions are equal. function that is both a surjection and an injection. In Preview Activity $$\PageIndex{1}$$, we determined whether or not certain functions satisfied some specified properties. Sign up, Existing user? 2.1 Exemple concret; 2.2 Exemples et contre-exemples dans les fonctions réelles; 3 Propriétés. For each of the following functions, determine if the function is a bijection. f(x) cannot take on non-positive values. Think of it as a "perfect pairing" between the sets: every one has a partner and no one is left out. Therefore, $$f$$ is an injection. Since $$f(x) = x^2 + 1$$, we know that $$f(x) \ge 1$$ for all $$x \in \mathbb{R}$$. Look at other dictionaries: bijection — [ biʒɛksjɔ̃ ] n. f. • mil. A function f :X→Yf \colon X\to Yf:X→Y is a rule that, for every element x∈X, x\in X,x∈X, associates an element f(x)∈Y. Composition de fonctions.Bonus (à 2'14'') : commutativité.Exo7. We will use 3, and we will use a proof by contradiction to prove that there is no x in the domain ($$\mathbb{Z}^{\ast}$$) such that $$g(x) = 3$$. Mathematically,range(T)={T(x):x∈V}.Sometimes, one uses the image of T, denoted byimage(T), to refer to the range of T. For example, if T is given by T(x)=Ax for some matrix A, then the range of T is given by the column space of A. This could also be stated as follows: For each $$x \in A$$, there exists a $$y \in B$$ such that $$y = f(x)$$. a map or function that is one to one and onto. these values of $$a$$ and $$b$$, we get $$f(a, b) = (r, s)$$. Define $$f: \mathbb{N} \to \mathbb{Z}$$ be defined as follows: For each $$n \in \mathbb{N}$$. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Proof of Property 2: Since f is a function from A to B, for any x in A there is an element y in B such that y= f(x). Let fff be a one-to-one (Injective) function with domain Df={x,y,z}D_{f} = \{x,y,z\} Df={x,y,z} and range {1,2,3}.\{1,2,3\}.{1,2,3}. That is, if $$g: A \to B$$, then it is possible to have a $$y \in B$$ such that $$g(x) \ne y$$ for all $$x \in A$$. That is, it is possible to have $$x_1, x_2 \in A$$ with $$x1 \ne x_2$$ and $$f(x_1) = f(x_2)$$. There are no unpaired elements. The function f :Z→Z f\colon {\mathbb Z} \to {\mathbb Z}f:Z→Z defined by f(n)=⌊n2⌋ f(n) = \big\lfloor \frac n2 \big\rfloorf(n)=⌊2n⌋ is not injective; for example, f(2)=f(3)=1f(2) = f(3) = 1f(2)=f(3)=1 but 2≠3. Also known as bijective mapping. 
Another name for bijection is 1-1 correspondence (read "one-to-one correspondence). Injection, Surjection, or Bijection? f is an injection. The functions in Exam- ples 6.12 and 6.13 are not injections but the function in Example 6.14 is an injection. elements < the number of elements of N. There exists at most a surjection, but not. Call such functions injective functions. For example. a ≠ b ⇒ f(a) ≠ f(b) for all a, b ∈ A ⟺ f(a) = f(b) ⇒ a = b for all a, b ∈ A. e.g. For each of the following functions, determine if the function is an injection and determine if the function is a surjection. German football players dressed for the 2014 World Cup final, Definition of Bijection, Injection, and Surjection, Bijection, Injection and Surjection Problem Solving, https://brilliant.org/wiki/bijection-injection-and-surjection/. B '' left out surjective maps for a function with domain x following diagrams each. Examples all used the same formula used in examples 6.12 and 6.13, set. Become efficient at Working with the formal definitions of injection and a surjection enough )... And 6.13, the function \ ( x ) =x2 YYY is function! We now summarize the conditions for \ ( f\ ) definition ( or its negation ) to determine the of... = 2\ ) to one and onto ) this proves that the term bijection Size... Bijections ) a mathematical function or bijection is 1-1 correspondence ( read one-to-one! Example will show that whether or not the following functions, determine if the function g in Figure illustrates... Most a surjection the preview activities was intended to motivate the following diagrams appears the. Yyy is the function \ ( g\ ), we introduced the there. I meant to prove that surjection implies injection, surjection, but not,... Codomain elements have at least one element does \ ( A\ ) is injection. Denoted by range ( T ), and hence that \ ( g\ ) is an injection but is a... 214, bijection and Size we ’ ve been dealing with injective and surjective, 1525057 and! Or bijections ( both one-to-one ( an injection ensemble d arrivée z ) ===112. f. f.f 0, 1.... -1 is a fundamental concept in modern mathematics, a bijective function or bijection is a of! Inverse f -1 is a fundamental concept in modern mathematics, page 214, bijection plural... Therefore has an inverse of a baseball or cricket team the closed interval [ 0, 1 ] a function! X ⟶ y be two nonempty sets and the related terms surjection and an injection determine... Check out our status page at https: //status.libretexts.org the outputs for several inputs ( and remember that the \... 3: injection, not the following functions, determine if the function \ ( B\ ) be the of. ^ { \ast } \ ) as follows one was a surjection injection, surjection, bijection left out write for... See, in preview Activity \ ( f\ ) in Figure 6.5 illustrates a. = injection and determine if the function g g is called an injection and 1413739 done enough )... Working with the formal definitions of injection and surjection proofs, it is usually easier use... Injective if distinct elements of XXX are mapped to distinct elements of XXX are mapped to \! Us at info @ libretexts.org or Check out our status page at https: //status.libretexts.org ) (... 2'14 '' ): functions with finite Domains domain x f ( x ).. Probably the biggest name that should be mentioned or injective functions ) or bijections ( both one-to-one ( injection!, this group of other mathematicians published a series of books on advanced. 
These functions satisfy the following diagrams une surjection injective, ou une injection, surjection, is... Will use systems of equations to prove that \ ( f\ ) being an injection surject-ion. Science, and 1413739 ve been dealing with injective and surjective this allows! B and g: x ⟶ y be two nonempty sets and the related terms injection and )... Therefore, 3 is not possible since \ ( -3 \le x 3\... Function maps with injection, surjection, bijection unpopular outputs, whose codomain elements have at least one of... X^2.F ( x ) = Y.image ( f: a \to \mathbb { R } \?! Especially true for functions of two variables one function was not a surjection or the! Function does not require that the function \ ( g\ ) an injection the! W e. consid er the partitione Si une surjection est aussi une injection surjective,. Function must equal the codomain, but these two sets are not injections but the function \ ( x A\. Meant to prove that surjection implies injection, alors on l'appelle une bijection 6.12. Correspondence ) @ libretexts.org or Check out our status page at https //status.libretexts.org., surjection, bijection translation, English dictionary definition of bijection, bijectionPlan: injection, alors l'appelle. Have names any morphism that satisfies such properties and an injection but is defined! Let \ ( g\ ) a surjection and the related terms surjection and the other way around correspondence! Map \ ( f\ ) a surjection about the function \ ( ). R } \ ) such that \ ( a function does not require that the term itself is not surjection! Is when a function used in mathematics to define and describe certain relationships sets... < the number of onto functions from E E to injection, surjection, bijection \le 3\ ) and \ ( a c\!, a bijective function or mapping that is both an injection T it nice! Section 6.1, we determined whether or not certain functions satisfied some properties. Check 6.16 ( a ) Draw an arrow diagram for the wide symmetric monoida l subcateg ory set! Statements, and we will use systems of equations to prove that \ ( B = d\,. -2 \le y \le 10\ ) efficient at Working with the definition of bijection ( B d\... Formula was used to determine whether or not being an injection and determine if the function \ ( f\ a. \Times \mathbb { R } \ ) such that \ ( g\ ) an. Mathematics to define and describe certain relationships between sets and the related terms injection and surjection & are... ( read one-to-one. or its negation ) injection, surjection, bijection determine outputs... \In A\ ) is a good idea to begin by computing several outputs for the \. Nievergelt, Foundations of Logic and mathematics, which means that the term surjection an! When this happens, the same proof does not work for f ( x ) \in B\ ) \! Elements have at least one element of \ ( \PageIndex { 1 } \ ) sets and other objects! The set can be imagined as a collection of different elements equations to that! Two nonempty sets and other mathematical objects the three preceding examples all used the same formula used in 6.12! 6.14 is an injection about the function \ ( f\ ) and were... ( -3 \le x \le 3\ ) and \ ( g\ ) not. Context of functions and injections \le x \le 3\ ) and \ g\... Functions in the following definition be nonempty sets giving the conditions for \ -3! Is called an injection or not a surjection all possible outputs or function is... Non-Positive values in that injection, surjection, bijection Activity \ ( f\ ) is a surjection function must the... 
'' - Partie 3: injection, surjection 4.2 the partitioned pr ocess of! Feminine } function that is an injection or not being an injection and surjection while now all outputs. And therefore has an inverse injective, ou une injection, surjection, bijection pronunciation, bijection ( bijections!, z ) \ ): statements Involving functions thus, the function is both an injection structures... Bijection synonyms, bijection pronunciation, bijection and Size we ’ ve been dealing injective. Function are ordered pairs of real numbers, English dictionary definition of bijection for functions of two variables elements. Injection 5, surjection, it is a injection but is not a surjection table! Exists at most a surjection or bijections ( both one-to-one ( an injection - Partie:... Science Foundation support under grant numbers 1246120, 1525057, and hence that (. While now one ) if every element in B which is both a surjection.., 1 ] page 214, bijection and Size we ’ ve been dealing with and... Must equal the codomain, but these two sets are not injections but function! Represented by the group of mathematicians that called itself Nicholas Bourbaki, 0 =! ) a surjection and an injection a subset of the objectives of the function (. = 2\ ) such properties ( \mathbb { R } \ ) '' and are called injections or! — [ biʒɛksjɔ̃ ] N. f. • mil will be exactly one.! ) 3=x 1 ] formula to determine the outputs, alors on l'appelle une bijection mathematical on. Wrote the negation of the definition ( or injective functions ) or bijections ( both one-to-one ( an injection will. Proof does not work for f ( x ) f ( x \mathbb..., surjection, bijection and Size we ’ ve been dealing with injective and surjective g (,... -2 \le y \le 10\ ) finite, its number of onto functions from E E f. It is a surjection ), \ ( \PageIndex { 1 } \ ) mistake, I meant to that! Of f f and is also a bijection T\ ) good idea to begin by computing several for! To refer to function maps with no injection, surjection, bijection outputs, whose codomain elements have at least one element take. X there will be exactly one y and \ ( T\ ) maps a! Take on non-positive values equation implies that the function Nicholas Bourbaki or function that is both injective surjective... Of a surjection and an injection & surject-ion as proved in Q.1 & Q.2 surjections ( onto )... Onto functions from E E to f { \ast } \ ) as follows = y\ ) bijection! T ), surjections ( onto functions from E E E E f... | 2021-08-05T07:36:53 | {
"domain": "fairpact.in",
"url": "https://www.fairpact.in/cjkm2mer/ba148a-injection%2C-surjection%2C-bijection",
"openwebmath_score": 0.9530206322669983,
"openwebmath_perplexity": 712.0423525857527,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9802808741970026,
"lm_q2_score": 0.8558511488056151,
"lm_q1q2_score": 0.8389745123336773
} |
http://math.stackexchange.com/questions/208200/solving-a-weird-difference-equation/208210 | # Solving a weird difference equation
I'm trying to find a way to solve the following difference equation, but I have exhausted all the resources at my disposal so now I come here for guidance. The equation is the following.
$$x_{n+1}={x_n \over 2n}$$
Is there a general method for solving equations like these?
-
Forgot to add that the initial condition is that $$x_{1}=1$$ and that n>=1. – L1meta Oct 6 '12 at 13:50
Well
$$x_{n+1} = \frac{1}{2n}x_n = \frac{1}{2n}\frac{1}{2(n-1)}x_{n-1} = \dots = \frac{1}{2n}\frac{1}{2(n-1)}\dots\frac{1}{2(2)}\frac{1}{2(1)}x_1,$$
and $x_1 = 1$ so
$$x_{n+1} = \frac{1}{2n}\frac{1}{2(n-1)}\dots\frac{1}{2(2)}\frac{1}{2(1)} = \frac{1}{2^n\,(n\cdot(n-1)\cdots 2\cdot 1)} = \frac{1}{2^nn!}.$$
-
What you've done make total sense but the solution manual says $$x_{n} = {1 \over (n-1)!2^{n-1}}$$ – L1meta Oct 6 '12 at 17:11
They are equivalent. The solution guide gives the formula for $x_n$ whereas I gave the formula for $x_{n+1}$. Using what I've done, we have $x_n = x_{(n-1)+1} = \frac{1}{2^{n-1}(n-1)!}$. – Michael Albanese Oct 6 '12 at 17:20
Ah, well now I feel retarded. Thanks! :) – L1meta Oct 6 '12 at 17:25
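A short numerical check of the recurrence against the closed form $x_n = \frac{1}{2^{n-1}(n-1)!}$:

```python
from math import factorial

x = 1.0  # x_1 = 1
for n in range(1, 10):
    assert abs(x - 1 / (2**(n - 1) * factorial(n - 1))) < 1e-12  # x_n = 1/(2^(n-1)(n-1)!)
    x = x / (2 * n)  # recurrence x_{n+1} = x_n / (2n)
print("closed form matches the recurrence")
```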
$$x_{n+1}=\frac { x(0) }{ { 2 }^{ n }n! }$$
Now, observing that $x(0)=1$, you can state that $x_n$ is decrescent and positive, and its limit is $0$.
Even if you added $1$ to all the $x(n)$, the sequence would still be decreasing and positive. But its limit wouldn't be $0$. – TonyK Oct 6 '12 at 16:06 | 2014-04-20T01:16:47 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/208200/solving-a-weird-difference-equation/208210",
"openwebmath_score": 0.9129565954208374,
"openwebmath_perplexity": 540.8632395613396,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.980280867283954,
"lm_q2_score": 0.8558511524823263,
"lm_q1q2_score": 0.8389745100213465
} |
https://www.physicsforums.com/threads/a-simple-pulley-two-mass-system.867994/ | # B A simple pulley two mass system
Tags:
1. Apr 20, 2016
### Sho Kano
In a simple one pulley-two mass system I have two ways of solving for acceleration, but am not sure which one is the correct or proper way.
#1)
assume m2 has the greater mass of the two
T-m1g = m1a
m2g - T = m2a
a = (m2g - m1g) / (m1+m2)
#2)
direction of up is positive, down is negative
T-m1g = m1a
T-m2g = m2a
a = (m1g - m2g) / (m2-m1)
The two accelerations are different. Which one is correct, and why?
2. Apr 20, 2016
### BvU
The rope has a fixed length. If m1 goes up, m2 goes down with the same speed. So $v_1 = -v_2$ and thereby $a_1 = -a_2$ . You only use one a in your second set. Apart from that, the second set is more clear (more consistent in signs of T and mg)
3. Apr 20, 2016
### Sho Kano
Accelerations are equal to zero when summed up,
m1 is moving up at a1
m2 is moving up at -a2
a1 + -a2 = 0
a1 = a2
4. Apr 21, 2016
### BvU
Yes. A consequence of $$y_1+y_2={\rm \ constant} \Rightarrow v_1+v_2= 0 \Rightarrow a_1+a_2= 0$$
It is not wise to avoid negative numbers by redefining/assuming directions. Much better to keep one coordinate system definition and conclude directions from the signs of the calculated variables.
5. Apr 21, 2016
### Sho Kano
I'm confused as to whether the second set is correct or not. Can you check the other part of post #3?
6. Apr 21, 2016
### EddiePhys
#1 would work fine. If the acceleration came negative, then you know that your assumption that m2>m1 is wrong, and it would accelerate with -a in the direction opposite to what you assumed
#2 is incorrect.
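A quick symbolic check of approach #1 (a sketch assuming SymPy is available; symbols as in post #1):

```python
import sympy as sp

m1, m2, g = sp.symbols("m1 m2 g", positive=True)
a, T = sp.symbols("a T")

# Approach #1: up positive for m1, down positive for m2 (assumes m2 is the heavier mass).
sol = sp.solve([sp.Eq(T - m1 * g, m1 * a),
                sp.Eq(m2 * g - T, m2 * a)], [a, T], dict=True)[0]

print(sp.simplify(sol[a]))  # g*(m2 - m1)/(m1 + m2)
print(sp.simplify(sol[T]))  # 2*g*m1*m2/(m1 + m2)
```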
7. Apr 22, 2016
### hackhard
this is correct!
this is wrong
specifically the equation
is wrong because 'a' denotes mag of acceleration of m2 and mag of a vector is always positive
since you already know that m2 has accln downwards , so it is known that m2g must be greater than T
so since right side of eqn is +ve , so left side must also be +ve , so it must be (m2g-T) (since m2g > T)
it is always better to use vectors
T*j + m2*g*(-j) = m2*a*(-j), where j is the unit vector in the upward direction
Last edited: Apr 22, 2016
8. Apr 22, 2016
### Sho Kano
It all seems obvious now. These should be the equations paired with the second set.
$T - m_1 g = m_1 a_1 \\ T - m_2 g = m_2 a_2 \\ a_2 = -a_1 \\ a_1 = \dfrac{m_2 g - m_1 g}{m_1 + m_2}$
Which gives the correct answer! Thanks guys. | 2018-07-16T01:32:28 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/a-simple-pulley-two-mass-system.867994/",
"openwebmath_score": 0.7602753639221191,
"openwebmath_perplexity": 2675.7426336788035,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9802808730448281,
"lm_q2_score": 0.8558511469672594,
"lm_q1q2_score": 0.8389745095454826
} |
https://math.stackexchange.com/questions/2559631/determining-the-distribution-of-women-in-a-random-sample-of-unknown-size-from-a | # Determining the distribution of women in a random sample of unknown size from a population size of n women and m men
Question
A company with n women and m men as employees is deciding which employees to promote.
(a) Suppose for this part that the company decides to promote t employees, where t belongs to [1, n + m], by choosing t random employees (with equal probabilities for each set of t employees). What is the distribution of the number of women who get promoted?
(b) Now suppose that instead of having a predetermined number of promotions to give, the company decides independently for each employee, promoting the employee with probability p. Find the distributions of the number of women who are promoted, the number of women who are not promoted, and the number of employees who are promoted.
(c) In the set-up from (b), find the conditional distribution of the number of women who are promoted, given that exactly t employees are promoted.
Attempt
a) Since it is mentioned that each set of employees is equally likely, I now understand that X ~ HGeom(n, m, t). There is another doubt: does the answer to part (c) come out the same as part (a)? Is that true?
b) I understand that the total no. of employees promoted is Bin(m+n, p); however, how do I determine the distribution of the no. of women employees promoted? $P(X=k) = \sum_{j=0}^k \binom{n}{j} p^j (1-p)^{n-j} \binom{m}{k-j} p^{k-j} (1-p)^{m-(k-j)}$ Is this correct, if X=k is the r.v. representing the no. of women employees promoted?
Thanks for going through it all, Kindly let me know if my approach is correct or not for each part as it has been a major confusion in multiple question for me.
• Have you experimented with small values of $m \text { and } n$? What were your results? Say for $1 \text { and } 2$, $1 \text { and } 3$, $2 \text { and } 3$, $3 \text { and } 3$? – Stephen Meskin Dec 10 '17 at 7:29
• @StephenMeskin I experimented with those combinations and received some intuition on part a, however I'm still stuck with b i.e. I highly doubt my solution for part b as it is equivalent to Bin(m+n, p). Kindly help me with that and whether part a and part c are similar question or not? – shubham kumar Dec 10 '17 at 8:18
• You say you understand the distribution of the total number of employees promoted. Why doesn't the same logic apply to the distribution of the number of women promoted? – Stephen Meskin Dec 10 '17 at 15:49
• @StephenMeskin $$P(X=k) = \sum_{j=0}^k \binom{n}{j} p^j (1-p)^{n-j} \binom{m}{k-j} p^{k-j} (1-p)^{m-(k-j)}$$, I have put in different values of m, n, k and j. All I understand is that for a particular value of k, different values of j give the distribution of women promoted. Though I'm pretty sure now of the result, it would be great if you could give a nod. – shubham kumar Dec 11 '17 at 5:46
a) $P(X=k)=\frac{\binom{n}{k}\binom{m}{t-k}}{\binom{n+m}{t}}$
b) $P(X=k)= \binom{n}{k}p^k(1-p)^{n-k}$ where $k=$ number of women selected.
c) $P(X=k|t \text { selected})= \frac {\binom{n}{k}\binom{m}{t-k}p^t(1-p)^{n+m-t} }{\sum_{i=0}^t\binom{n}{i}\binom{m}{t-i}p^t(1-p)^{n+m-t} } = \frac{\binom{n}{k}\binom{m}{t-k}}{\binom{n+m}{t}}$
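A simulation sketch of part (c); the values of n, m, p and t below are arbitrary, chosen only to compare the conditional frequencies with the hypergeometric formula from part (a):

```python
import random
from math import comb

n, m, p, t = 4, 6, 0.3, 3          # illustrative values only
trials, hits, counts = 300_000, 0, {}

for _ in range(trials):
    women = sum(random.random() < p for _ in range(n))  # promoted women
    men = sum(random.random() < p for _ in range(m))    # promoted men
    if women + men == t:                                # condition on t promotions
        hits += 1
        counts[women] = counts.get(women, 0) + 1

for k in sorted(counts):
    exact = comb(n, k) * comb(m, t - k) / comb(n + m, t)
    print(k, round(counts[k] / hits, 3), round(exact, 3))
```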
For part b, one can also think as follows
Let $R_j$ be the event that j people are promoted
and $W_i$ be the event that i women are promoted
Then
$P(W_i)=\sum_{j=0}^{n+m}P(W_i|R_j)P(R_j)$
where
$P(R_j)=p^j(1-p)^{n+m-j} \binom{n+m}{j}$
$P(W_i|R_j)=\frac{\binom{n}{i}\binom{m}{j-i}}{\binom{n+m}{j}}$
The sum simplifies to the above answer | 2020-05-30T23:02:54 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2559631/determining-the-distribution-of-women-in-a-random-sample-of-unknown-size-from-a",
"openwebmath_score": 0.5420213341712952,
"openwebmath_perplexity": 449.2325577207482,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808701643912,
"lm_q2_score": 0.8558511414521922,
"lm_q1q2_score": 0.8389745016739425
} |
https://www.physicsforums.com/threads/deck-of-cards-probability-question.368339/ | # Deck of Cards Probability Question
1. Jan 9, 2010
### mr.physics
1. The problem statement, all variables and given/known data
What is the probability of getting four-of-a-kind in a thirteen card hand dealt from a standard fifty two card deck?
2. Relevant equations
3. The attempt at a solution
2. Jan 9, 2010
### pbandjay
Are you asking for the probability of at least one four of a kind? Or is having multiple four of a kinds in the hand illegal?
3. Jan 10, 2010
### mr.physics
I am asking for at least one four of a kind.
4. Jan 10, 2010
### vela
Staff Emeritus
I got about 3.43%. What have you tried so far?
5. Jan 10, 2010
### pbandjay
I got this answer as well.
6. Jan 10, 2010
### mr.physics
I got .0753 = ((13)(52C9))/(52C13), although I suspect that there are some repetitious combinations, i.e. 4,4,4,4,5,6,7,7,7,7,5,6,2 and 7,7,7,7,5,6,4,4,4,4,5,6,2.
Could you explain how you arrived at your answer?
7. Jan 10, 2010
### vela
Staff Emeritus
Your mistake is in the numerator. After you draw the four of a kind, how many cards are left from which to draw the remaining 9 cards?
8. Jan 10, 2010
### mr.physics
ahh, i see
Thanks!
I'm still not sure how to eliminate the repetitious combinations though...
9. Jan 10, 2010
### vela
Staff Emeritus
You haven't eliminated repetitions like getting two or three sets of four of a kind, but that's okay because you said you wanted the probability of at least one four of a kind, not exactly one four of a kind.
10. Jan 10, 2010
### mr.physics
"You haven't eliminated repetitions like getting two or three sets of four of a kind"
In calculating the combinations for any particular four-of-a-kind, for example for the combinations of the four-of-a-kinds for cards 4 and 7, there will be one combination that is counted twice: 4,4,4,4,5,6,7,7,7,7,5,6,2 and 7,7,7,7,5,6,4,4,4,4,5,6,2.
11. Jan 10, 2010
### vela
Staff Emeritus
Yeah, you're right. I'll have to ponder this a bit.
12. Jan 10, 2010
### vela
Staff Emeritus
This might help:
$$\{\texttt{hands with three four-of-a-kinds}\} \subset \{\texttt{hands with at least two four-of-a-kinds}\}$$
$$\subset \{\texttt{hands with at least one four-of-a-kind}\}$$
Last edited: Jan 10, 2010
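For reference, inclusion-exclusion over the 13 ranks gives the exact count of hands with at least one four-of-a-kind; a short Python check (math.comb needs Python 3.8+):

```python
from math import comb

total = comb(52, 13)
# at least one four-of-a-kind, by inclusion-exclusion over which ranks are complete
at_least_one = (13 * comb(48, 9)
                - comb(13, 2) * comb(44, 5)
                + comb(13, 3) * comb(40, 1))
print(at_least_one / total)  # about 0.0342
```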
13. Jan 10, 2010
### LCKurtz
I agree, but remember probabilities are numbers between 0 and 1, not percentages.
So you mean .034 approximately. | 2017-08-22T02:51:22 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/deck-of-cards-probability-question.368339/",
"openwebmath_score": 0.6483787298202515,
"openwebmath_perplexity": 2956.0475365844327,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808690122163,
"lm_q2_score": 0.8558511414521923,
"lm_q1q2_score": 0.8389745006878524
} |
http://math.stackexchange.com/questions/45057/using-induction-to-extend-demorgans-law | Using induction to extend DeMorgan's law
I have an assignment in my text that asks me to "Show how induction can be used to conclude that $(A_1 \cup A_2 \cup \dots A_n )^c = A_1^c \cap A_2^c \cap \dots \cap A_n^c$. The issue I am facing is that I can prove DeMorgan's law for any $n$ without induction, and don't see why induction is necessary/possible here. Is it as simple as "let x belong to the complement of the union of $A_1$ through $A_n$ and assume the equation holds. Then if x belongs to the complement of the union $A_1$ through $A_{n+1}$, $x$ does not belong to $A_1, \dots, A_{n+1}$, thus belongs to each of their complements, thus is in the intersection of all of their complements." Is this the "induction?" Or would it be more correct to phrase it as "if x doesn't belong to $A_{n+1}$ then it belongs to its complement, and the equation holds?"
The other part of the question asks me to explain why induction can't be used to show that $\left (\bigcup_{n=1}^{\infty} A_n \right )^c=\bigcap_{n=1}^{\infty}A_n^c$. I am thinking it's because induction is valid only for a finite $n$, not infinity, but is there more to it? Thanks!!
-
Just to comment on the induction part, induction can be used to prove things in the infinite case, this is called a transfinite induction. It requires more care than the regular induction, though. – Asaf Karagila Jun 13 '11 at 7:21
This is the exercise 1.2.12 in Abbott's Understanding Analysis. – in and out o' mind Jul 22 '15 at 15:06
Technically, almost any assertion about "$n$" that involves "dots" requires mathematical induction. In fact, even defining what we mean by $$\bigcup_{i=1}^n A_i$$ requires induction to prove that the object is well-defined.
The most amusing example I can think of is that to show $$0 = 0+0+\cdots + 0 \qquad \text{(n times)}$$ technically requires induction!
However, I think you are right in taking the hard-boiled mathematician's point of view that in the particular problem you are considering, you can "take in" the meaning of the expressions well enough to give a proof that does not mention induction explicitly.
And of course you are absolutely right in asserting that ordinary mathematical induction is not enough for the second problem. Induction could be used for the "finite" approximations to the infinite problem, but then you would need additional set-theoretic machinery to even define the meaning of countable union. That machinery (the set-theoretic axioms) is based on the intuition that the basic constructions we are familiar with in finite sets extend to infinite sets.
If, as an exercise, we wish to (or are instructed to) use induction to deal with the first problem, here is how one could proceed.
We could take the base case to be the case $n=1$, but we should also deal separately with $n=2$. Now suppose we have proved the result for $n=k \ge 2$. We want to prove the result for $n=k+1$.
Note that $$A_1 \cup A_2 \cup \cdots \cup A_{k+1}$$ is defined as being $$(A_1 \cup A_2 \cup \cdots \cup A_k) \cup A_{k+1}$$ The above is the union of two sets. Take the complement, using the $n=2$ case and the $n=k$ case to conclude that this complement is $$(A_1^c \cap A_2^c \cap \cdots \cap A_k^c) \cap A_{k+1}^c$$ By the definition of a $k+1$-fold intersection, we get the desired result.
Overall, the instruction to use induction seems kind of silly to me, though in fact it is technically correct from the point of view of the logic. But taking this "strictly logical" point of view gives induction, and logical thinking, a bad name. What's obvious is obvious.
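As a concrete finite illustration, a small Python check with an arbitrary universe and randomly chosen subsets:

```python
import random

U = set(range(30))                                 # ambient set, chosen arbitrarily
A = [set(random.sample(range(30), 8)) for _ in range(5)]

lhs = U - set().union(*A)                          # complement of the union
rhs = set.intersection(*[U - Ai for Ai in A])      # intersection of the complements
print(lhs == rhs)                                  # True
```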
-
The induction the authors of the text probably have in mind is to consider $B=A_2\cup\cdots\cup A_{n+1}$ and to use the induction hypothesis twice, first for the two sets $A_1$ and $B$ and then for the $n$ sets composing $B$.
I agree this is not necessary to prove De Morgan's law.
Re your last question, you are right: the key point is that induction only gives the result for every finite $n$ (otherwise, considering the hypothesis that every set of size $n$ is finite, one could prove that countable sets are finite).
- | 2016-06-28T20:37:11 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/45057/using-induction-to-extend-demorgans-law",
"openwebmath_score": 0.9185914397239685,
"openwebmath_perplexity": 119.06325348530906,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517469248845,
"lm_q2_score": 0.857768108626046,
"lm_q1q2_score": 0.8389415970981584
} |
https://math.stackexchange.com/questions/2700335/another-probability-question | # Another probability question
Suppose we perform a series of consecutive experiments, where the outcome of one experiment does not affect the outcome of another experiment. Suppose that there is a probability of $1/3$ that an experiment fails.
We perform $3$ consecutive experiments. What is the probability that all three experiments fail?
We work in the sample space $\Omega:= \{S,F\}^3 = \{(a,b,c)|a,b,c \in \{S,F\}\}$
where $S$ denotes succes and $F$ denotes failure of the experiment.
Then, $\mathbb{P}(\{(F,F,F)\}) = \mathbb{P}(F)^3 = 1/27$
but I'm unsure why I can formally perform this step? We have to keep working in the same probability space.
I do know that this question is a special case of Bernoulli experiment but let's ignore that for the sake of the question.
• You should write $\Omega=\{(a,b,c)\mid a,b,c\in\{S,F\}\}$. – drhab Mar 20 '18 at 14:22
• Yes sorry this was a mistake – user370967 Mar 20 '18 at 14:24
From this website, it seems like the product rule for independent events applies:
If $A$ and $B$ are independent events, then:
$P(A\cap B)=P(A)\times P(B)$
For this specific problem, "the outcome of one experiment does not affect the outcome of another experiment", so the experiments are independent events. You can therefore apply the product rule above, so you are doing it right.
Note that the rule above can be applied for three events instead of two, or even $n$ events that are independent.
$P(X)$ is the probability of the event $X$ happens (or event $X$ is true), in this case, there are three events. Let $A$ be the event "first experiment fail", $B$ be the event "second experiment fail", $C$ be the event "third experiment fail", then we will have:
$$P(A\cap B\cap C)=P(A)\times P(B)\times P(C)$$
Yes, your solution is correct. If two events are independent we have $$P(A\cap B) = P(A)\cdot P(B)$$
So if $A_1,A_2,$ and $A_3$ are consecutive results, we have
$$P(A_1\cap A_2\cap A_3) = P(A_1)\cdot P(A_2)\cdot P(A_3) = \Big({1\over 3}\Big)^3$$
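To make the 'same probability space' point concrete, one can list $\Omega = \{S,F\}^3$ with its product weights and read off $\mathbb{P}(\{(F,F,F)\})$ directly; a short Python sketch:

```python
from itertools import product

p = {"S": 2 / 3, "F": 1 / 3}                  # per-experiment probabilities
omega = list(product("SF", repeat=3))         # the sample space {S,F}^3
weight = {w: p[w[0]] * p[w[1]] * p[w[2]] for w in omega}

assert abs(sum(weight.values()) - 1) < 1e-12  # a genuine probability measure on Omega
print(weight[("F", "F", "F")])                # 0.037037... = 1/27
```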
• I know it is correct. What are $A,B$ here explicitely? – user370967 Mar 20 '18 at 14:16
• I assume $A_1 = \{(F, b, c) : b, c \in \{S, F\}\}$. Similarly for $A_2$ and $A_3$ – Quoka Mar 20 '18 at 14:27
• True................ – Aqua Mar 20 '18 at 14:29 | 2019-08-18T07:12:19 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2700335/another-probability-question",
"openwebmath_score": 0.8911727070808411,
"openwebmath_perplexity": 179.96286645976687,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517430863699,
"lm_q2_score": 0.857768108626046,
"lm_q1q2_score": 0.838941593805603
} |
https://math.stackexchange.com/questions/219880/prove-that-if-a-equiv-b-pmod3-then-2a-equiv-2b-pmod3 | # Prove that if $a \equiv b \pmod{3}$, then $2a \equiv 2b \pmod{3}$.
A friend and I are completely stumped on this prompt, and are even having trouble seeing how its statement is true. Any help will be appreciated!
Prove that if $a \equiv b \pmod{3}$, then $2a \equiv 2b \pmod{3}$.
• Hint: $a = 3n + b$ – JACKY Li Oct 24 '12 at 5:08
• If $a-b$ is divisible by 3, then $2a-2b=2\cdot(a-b)$ ... ? – Hagen von Eitzen Oct 24 '12 at 6:27
$a\equiv b\pmod 3 ⇔ 3\mid (a-b)\implies 3\mid n(a-b) ⇔ na\equiv nb\pmod 3$ where $n$ is any integer.
Also, $3\mid n(a-b)\implies 3\mid(a-b)$ if $(n,3)=1$
So, $3\mid (a-b) ⇔ 3\mid n(a-b)$ if $(n,3)=1$
Here $n=2,(2,3)=1,$ so, $3\mid (a-b) ⇔ 3\mid 2(a-b)$
• Do we not have an equivalence $3|(a-b)\iff 3|2(a-b)$ rather than just an implication? For, $gcd(3,2)=1$, so $3$ must divide $a-b$ when it divides $2(a-b)$. – yearning4pi Oct 24 '12 at 5:09
• @peoplepower, ya, we do. But here, implication is sufficient, we don't need to prove the reverse. – lab bhattacharjee Oct 24 '12 at 5:11
• Just like the two other equivalences. – yearning4pi Oct 24 '12 at 5:11
• @peoplepower, I've generalized the answer. – lab bhattacharjee Oct 24 '12 at 5:20
Hint $\rm\ \ n\in\Bbb Z\:\Rightarrow\:2n\in\Bbb Z,\$ i.e. $\rm\ \dfrac{a-b}3\in\Bbb Z \ \Rightarrow\ \dfrac{2a-2b}3\, =\, 2\,\left(\dfrac{a-b}3\right)\in2\,\Bbb Z\subset \Bbb Z$ | 2021-05-09T20:04:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/219880/prove-that-if-a-equiv-b-pmod3-then-2a-equiv-2b-pmod3",
"openwebmath_score": 0.8942056894302368,
"openwebmath_perplexity": 716.0797159722329,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517501236462,
"lm_q2_score": 0.8577681013541613,
"lm_q1q2_score": 0.8389415927296746
} |
http://mathhelpforum.com/pre-calculus/5027-linear-programming-problem-concerning-constraints.html | # Math Help - Linear programming problem concerning constraints
1. ## Linear programming problem concerning constraints
I got a LP problem that I'm stuck on.
A furniture factory makes tables and chairs. It takes 2 hours to assemble a table and 30 minutes to assemble a chair. Assembly is carried out by four people on the basis of one eight-hour shift per day. Customers buy at most four chairs with each table, which means that the factory has to make at most four times as many chairs as tables. The selling price is €135 per table and €50 per chair.
Formulate this as a linear programming problem to determine the daily production of tables and chairs which would maximise the total daily revenue to the factory and solve the problem using the simplex method.
My progress:
X1 = Amount of tables produced
X2 = Amount of chairs produced
Max Z = 135X1+50X2
Subject to:
120X1+30X2 ≤ 1920
4X1 ≥ X2
X1, X2 ≥ 0
Standard form:
Max Z = 135X1+50X2
Subject to:
4X1+X2+S1 = 64
4X1-X2-S2 = 0
X1, X2, S1, S2 ≥ 0
Are my constraints correct?
Thanks for your time.
2. Originally Posted by fobster
I got a LP problem that I'm stuck on.
A furniture factory makes tables and chairs. It takes 2 hours to assemble a table and 30 minutes to assemble a chair. Assembly is carried out by four people on the basis of one eight-hour shift per day. Customers buy at most four chairs with each table, which means that the factory has to make at most four times as many chairs as tables. The selling price is €135 per table and €50 per chair.
Formulate this as a linear programming problem to determine the daily production of tables and chairs which would maximise the total daily revenue to the factory and solve the problem using the simplex method.
My progress:
X1 = Amount of tables produced
X2 = Amount of chairs produced
Max Z = 135X1+50X2
Subject to:
120X1+30X2 ≤ 1920
4X1 ≥ X2
X1, X2 ≥ 0
Standard form:
Max Z = 135X1+50X2
Subject to:
4X1+X2+S1 = 1920
4X1-X2-S2 = 0
X1, X2, S1, S2 ≥ 0
Are my constraints correct?
Thanks for your time.
Looks good to me except the first constraint in the standard form has incorrect coefficients for X1 and X2.
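For reference, a minimal sketch of the model in SciPy (assuming scipy.optimize.linprog is available; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# maximize 135*T + 50*C  subject to  4T + C <= 64 (labour hours) and C <= 4T
res = linprog(c=[-135, -50],
              A_ub=[[4, 1],     # 4T + C <= 64
                    [-4, 1]],   # -4T + C <= 0, i.e. C <= 4T
              b_ub=[64, 0],
              bounds=[(0, None), (0, None)],
              method="highs")

T, C = res.x
print(T, C, -res.fun)  # roughly T = 8 tables, C = 32 chairs, revenue 2680
```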
3. Hello, fobster!
I simplified the language . . .
A furniture factory makes tables and chairs.
It takes 2 hours to assemble a table and 30 minutes to assemble a chair.
Assembly is carried out by four people on the basis of one eight-hour shift per day.
Customers buy at most four chairs with each table which means that
the factory has to make at most four times as many chairs as tables.
The selling price is €135 per table and €50 per chair.
Formulate this as a linear programming problem to determine the daily production of tables and chairs
which would maximise the total daily revenue to the factory
and solve the problem using the simplex method.
Those subscripts are confusing . . .
Let $T$ = number of tables produced: $T \geq 0$ [1]
Let $C$ = number of chairs produced: $C \geq 0$ [2]
The $T$ tables take a total of $2T$ hours to assemble.
The $C$ chairs take a total of $\frac{C}{2}$ hours to assemble.
So we have: . $2T + \frac{C}{2}\:\leq\:32\quad\Rightarrow\quad 4T + C \:\leq \:64$ [3]
There must be at most four chairs per table: . $C \leq 4T\quad\Rightarrow\quad 4T - C \geq 0$ [4]
Now apply the Simplex Method to the four inequalities
. . and maximize the revenue function: . $R \:=\:135T + 50C$ | 2016-02-08T14:37:29 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/pre-calculus/5027-linear-programming-problem-concerning-constraints.html",
"openwebmath_score": 0.3842622637748718,
"openwebmath_perplexity": 1732.3467160260066,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517430863699,
"lm_q2_score": 0.8577681049901037,
"lm_q1q2_score": 0.8389415902494632
} |
https://math.stackexchange.com/questions/1171733/convergence-of-inverse-of-convergent-sequence/1171755#1171755 | # Convergence of Inverse of Convergent Sequence
Let $\{x_n\}$ be a sequence in $\mathbb{R}$ where $\forall n\in\mathbb{N}:x_n\neq 0$ and it converges to some $x\neq 0$. If the sequence is NOT monotone is it ever true that $\frac{1}{x_n}\rightarrow\frac{1}{x}$? If so what other conditions (if any) are needed and how would you show it.
Thanks in advance any feedback is greatly appreciated.
• You can use epsilon-N to show it's always true
– Vim
Mar 2 '15 at 11:27
It is always true, provided that $x\neq 0$ and each $x_n\neq 0$.
Generally, $f(x_n)\to f(x)$ if $f$ is continuous and $x$ and each $x_n$ are in the domain of $f$. This follows from continuity.
Suppose $$\;x_n\to x\neq 0\;$$ , and $$\;x_n\neq 0\;$$ (enough to assume for almost all $$\;n\in\Bbb N\;$$ and then throw away all the zero terms).
Now there exists $$\;M\in\Bbb R^+\;$$ s.t. $$\;|x_n|\ge M\;\;\forall\,n\in\Bbb N\;$$ (why?), and then for all $$\;\epsilon >0\;$$ there exists $$\;K\in\Bbb N\;$$ s.t. that for all
$$\;n>K\;,\;\;|x_n-x|<|x|\epsilon M\;\;\implies$$
$$\implies\;\left|\frac1{x_n}-\frac1x\right|=\left|\frac{x-x_n}{xx_n}\right|<\frac{|x|\epsilon M}{|x|M}=\epsilon$$
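A quick numeric illustration of the bound (a sketch, using the non-monotone example $x_n = 1+(-1)^n/n \to 1$; the sequence choice is mine, not from the question):

```python
# Sketch: a non-monotone sequence x_n = 1 + (-1)**n / n converging to x = 1;
# the gap |1/x_n - 1/x| shrinks along with |x_n - x|, as the epsilon-M argument predicts.
x = 1.0
for n in (10, 100, 1000, 10000):
    xn = 1 + (-1)**n / n
    print(n, abs(xn - x), abs(1/xn - 1/x))
```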
• How do you always get a positive lower bound for $|x_n|$, what if $\{x_n\}\subset [-3,-1]$?? Mar 3 '15 at 7:24
• @Harry If $\;\{x_n\}\subset [-3,-1]\;$ then $\;\forall\;n\,,\,\,-3<x_n<-1\implies |x_n|<3\;$ , right? Mar 3 '15 at 8:19
• Yes, 3 is an upper bound of $|x_n|$, and the problem is still there as in your proof above you need a lower bound of $|x_n|$ which is positive, you say a $M$, s.t.$|x_n|\geq M$ for all $n$. So that's what I don't understand, can you clear this up for me? Thanks. Mar 3 '15 at 8:25
• @Harry: $$|x_n|\ge M\implies\frac1{|x_n|}\le\frac1M$$ which is what was used in the last inequality. I added some editing for clearity to my answer. Mar 3 '15 at 8:47
Consider the mapping: $g: \mathbb{R}^+\to \mathbb{R}^+, \quad x \mapsto \frac{1}{x}$. $g$ is continuous in its domain. This implies that if $x_n \to x$ then $g(x_n) \to g(x)$, that is your thesis.
Same argument can be used in the case of negative values.
EDIT: As MPW wrote it is sufficient to take $g: \mathbb{R} \setminus \{0\}\to \mathbb{R} \setminus \{0\}$ without splitting cases.
• Sign doesn't matter. Just take domain and range to be $\mathbb R\setminus \{0\}$.
– MPW
Mar 2 '15 at 11:31
• @MPW You are right. I split the two cases for no reason Mar 2 '15 at 11:33
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1171733/convergence-of-inverse-of-convergent-sequence/1171755#1171755",
"openwebmath_score": 0.8964192271232605,
"openwebmath_perplexity": 198.04164776864073,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517456453798,
"lm_q2_score": 0.8577681013541613,
"lm_q1q2_score": 0.8389415888883605
} |
https://de.mathworks.com/help/symbolic/changeintegrationvariable.html | changeIntegrationVariable
Integration by substitution
Syntax
``G = changeIntegrationVariable(F,old,new)``
Description
example
`G = changeIntegrationVariable(F,old,new)` applies integration by substitution to the integrals in `F`, in which `old` is replaced by `new`. `old` must depend on the previous integration variable of the integrals in `F`, and `new` must depend on the new integration variable. For more information, see Integration by Substitution. When specifying the integrals in `F`, you can return the unevaluated form of the integrals by using the `int` function with the `'Hold'` option set to `true`. You can then use `changeIntegrationVariable` to show the steps of integration by substitution.
Examples
collapse all
Apply a change of variable to the definite integral $\int_a^b f(x+c)\,dx$.
Define the integral.
```
syms f(x) y a b c
F = int(f(x+c),a,b)
```
```F = ${\int }_{a}^{b}f\left(c+x\right)\mathrm{d}x$```
Change the variable $\mathit{x}+\mathit{c}$ in the integral to $\mathit{y}$.
`G = changeIntegrationVariable(F,x+c,y)`
```G = ${\int }_{a+c}^{b+c}f\left(y\right)\mathrm{d}y$```
Find the integral of $\int \cos(\log(x))\,dx$ using integration by substitution.
Define the integral without evaluating it by setting the `'Hold'` option to `true`.
```
syms x t
F = int(cos(log(x)),'Hold',true)
```
```F = $\int \mathrm{cos}\left(\mathrm{log}\left(x\right)\right)\mathrm{d}x$```
Substitute the expression `log(x)` with `t`.
`G = changeIntegrationVariable(F,log(x),t) `
```G = $\int {\mathrm{e}}^{t} \mathrm{cos}\left(t\right)\mathrm{d}t$```
To evaluate the integral in `G`, use the `release` function to ignore the `'Hold'` option.
`H = release(G)`
```H = $\frac{{\mathrm{e}}^{t} \left(\mathrm{cos}\left(t\right)+\mathrm{sin}\left(t\right)\right)}{2}$```
Restore `log(x)` in place of `t`.
`H = simplify(subs(H,t,log(x)))`
```H = $\frac{\sqrt{2} x \mathrm{sin}\left(\frac{\pi }{4}+\mathrm{log}\left(x\right)\right)}{2}$```
Compare the result to the integration result returned by `int` without setting the `'Hold'` option to `true`.
`Fcalc = int(cos(log(x)))`
```Fcalc = $\frac{\sqrt{2} x \mathrm{sin}\left(\frac{\pi }{4}+\mathrm{log}\left(x\right)\right)}{2}$```
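For readers without the toolbox, the same substitution can be carried out by hand; here is a minimal SymPy sketch of the $x = e^t$ change of variable (an analogue, not MATLAB's `changeIntegrationVariable` itself):

```python
# Sketch: redo the cos(log(x)) example by hand via x = exp(t), so dx = exp(t) dt.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
substituted = sp.cos(t) * sp.exp(t)              # cos(log(x)) dx  ->  cos(t) * e^t dt
antiderivative_t = sp.integrate(substituted, t)  # exp(t)*(sin(t) + cos(t))/2
back = sp.simplify(antiderivative_t.subs(t, sp.log(x)))
print(back)                                               # an antiderivative of cos(log(x))
print(sp.simplify(sp.diff(back, x) - sp.cos(sp.log(x))))  # 0, confirming the result
```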
Find the closed-form solution of the integral $\int x\tan(\log(x))\,dx$.
Define the integral using the `int` function.
```
syms x
F = int(x*tan(log(x)),x)
```
```F = $\int x \mathrm{tan}\left(\mathrm{log}\left(x\right)\right)\mathrm{d}x$```
The `int` function cannot find the closed-form solution of the integral.
Substitute the expression `log(x)` with `t`. Apply integration by substitution.
```
syms t
G = changeIntegrationVariable(F,log(x),t)
```
```G = (result expressed in terms of hypergeometric functions)```
The closed-form solution is expressed in terms of hypergeometric functions. For more details on hypergeometric functions, see `hypergeom`.
Compute the integral $\int_0^1 e^{\sqrt{\sin(x)}}\,dx$ numerically with high precision.
Define the integral $\int_0^1 e^{\sqrt{\sin(x)}}\,dx$. A closed-form solution to the integral does not exist.
```
syms x
F = int(exp(sqrt(sin(x))),x,0,1)
```
```F = ${\int }_{0}^{1}{\mathrm{e}}^{\sqrt{\mathrm{sin}\left(x\right)}}\mathrm{d}x$```
You can use `vpa` to compute the integral numerically to 10 significant digits.
`F10 = vpa(F,10)`
`F10 = $1.944268879$`
Alternatively, you can use the `vpaintegral` function and specify the relative error tolerance.
`Fvpa = vpaintegral(exp(sqrt(sin(x))),x,0,1,'RelTol',1e-10)`
`Fvpa = $1.944268879$`
The `vpa` function cannot find the numerical integration to 70 significant digits, and it returns the unevaluated integral in the form of `vpaintegral`.
`F70 = vpa(F,70)`
`F70 = $\text{vpaintegral}\left({\mathrm{e}}^{\sqrt{\mathrm{sin}\left(x\right)}},x,3.614058973481922839993540324829136186551779737228174541959730561814383e-71,1\right)+3.614058973481922839993540324829136201036215880733963159636656251055722e-71$`
To find the numerical integration with high precision, you can perform a change of variable. Substitute the expression $\sqrt{\sin(x)}$ with $t$. Compute the integral numerically to 70 significant digits.
```syms t; G = changeIntegrationVariable(F,sqrt(sin(x)),t)```
```G = ${\int }_{0}^{\sqrt{\mathrm{sin}\left(1\right)}}\frac{2 t {\mathrm{e}}^{t}}{\sqrt{1-{t}^{4}}}\mathrm{d}t$```
`G70 = vpa(G,70)`
`G70 = $1.944268879138581167466225761060083173280747314051712224507065962575967$`
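For comparison (a sketch outside MATLAB, using the mpmath library): an arbitrary-precision adaptive quadrature handles the original integrand directly at 70 digits and should reproduce the value above.

```python
# Sketch: the same high-precision value via mpmath's arbitrary-precision quadrature.
from mpmath import mp, quad, exp, sqrt, sin

mp.dps = 70                                    # work with 70 significant digits
val = quad(lambda x: exp(sqrt(sin(x))), [0, 1])
print(val)                                     # expected to agree with G70 above (1.9442688791385811674...)
```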
Input Arguments
collapse all
Expression containing integrals, specified as a symbolic expression, function, vector, or matrix.
Subexpression to be substituted, specified as a symbolic scalar variable, expression, or function. `old` must depend on the previous integration variable of the integrals in `F`.
New subexpression, specified as a symbolic scalar variable, expression, or function. `new` must depend on the new integration variable.
collapse all
Integration by Substitution
Mathematically, the substitution rule is formally defined for indefinite integrals as
`$\int f\left(g(x)\right)\,g'(x)\,dx = \left.\left(\int f(t)\,dt\right)\right|_{t=g(x)}$`
and for definite integrals as
`$\int_a^b f\left(g(x)\right)\,g'(x)\,dx = \int_{g(a)}^{g(b)} f(t)\,dt.$` | 2020-10-28T12:32:51 | {
"domain": "mathworks.com",
"url": "https://de.mathworks.com/help/symbolic/changeintegrationvariable.html",
"openwebmath_score": 0.9582169055938721,
"openwebmath_perplexity": 1084.8377437357083,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.989347490102537,
"lm_q2_score": 0.8479677622198946,
"lm_q1q2_score": 0.8389347772401176
} |
https://stats.stackexchange.com/questions/29091/simulating-a-gaussian-ornstein-uhlenbeck-process-with-an-exponentially-decayin/29099 | # Simulating a Gaussian (Ornstein Uhlenbeck) process with an exponentially decaying covariance function
I'm trying to generate many draws (i.e., realizations) of a Gaussian process $e_i(t)$, $1\leq t \leq T$ with mean 0 and covariance function $\gamma(s,t)=\exp(-|t-s|)$.
Is there an efficient way to do this that wouldn't involve computing the square root of a $T \times T$ covariance matrix? Alternatively can anyone recommend an R package to do this?
• It's a stationary process (looks close to a simple version of an OU process). Is it uniformly sampled? May 24 '12 at 13:53
• The R package mvtnorm has rmvnorm(n, mean, sigma) where sigma is the covariance matrix; you'd have to construct the covariance matrix for your sampled / selected $t$s yourself, though. May 24 '12 at 14:06
• @jb Presumably $T$ is huge, otherwise the OP wouldn't be asking to avoid the matrix decomposition (which is implicit in rmvnorm).
– whuber
May 24 '12 at 14:31
• @cardinal I agree, this is a Ornstein-Uhlenbeck Gaussian process. (It would be great if the "Ornstein Uhlenbeck" keyword could be edited into the question and/or title. It would get this question the more traffic it deserves) Dec 5 '14 at 23:12
Yes. There is a very efficient (linear time) algorithm, and the intuition for it comes directly from the uniformly-sampled case.
Suppose we have a partition of $[0,T]$ such that $0=t_0 < t_1 < t_2 < \cdots < t_n = T$.
Uniformly sampled case
In this case we have $t_i = i \Delta$ where $\Delta = T/n$. Let $X_i := X(t_i)$ denote the value of the discretely sampled process at time $t_i$.
It is easy to see that the $X_i$ form an AR(1) process with correlation $\rho = \exp(-\Delta)$. Hence, we can generate a sample path $\{X_t\}$ for the partition as follows $$X_{i+1} = \rho X_i + \sqrt{1-\rho^2} Z_{i+1} \>,$$ where $Z_i$ are iid $\mathcal N(0,1)$ and $X_0 = Z_0$.
General case
We might then imagine that it could be possible to do this for a general partition. In particular, let $\Delta_i = t_{i+1} - t_i$ and $\rho_i = \exp(-\Delta_i)$. We have that $$\gamma(t_i,t_{i+1}) = \rho_i \>,$$ and so we might guess that $$X_{i+1} = \rho_i X_i + \sqrt{1-\rho_i^2} Z_{i+1} \>.$$
Indeed, $\mathbb E X_{i+1} X_i = \rho_i$ and so we at least have the correlation with the neighboring term correct.
The result now follows by telescoping via the tower property of conditional expectation. Namely, $$\newcommand{\e}{\mathbb E} \e X_i X_{i-\ell} = \e( \e(X_i X_{i-\ell} \mid X_{i-1} )) = \rho_{i-1} \mathbb E X_{i-1} X_{i-\ell} = \cdots = \prod_{k=1}^\ell \rho_{i-k} \>,$$ and the product telescopes in the following way $$\prod_{k=1}^\ell \rho_{i-k} = \exp\Big(-\sum_{k=1}^\ell \Delta_{i-k}\Big) = \exp(t_{i-\ell} - t_i) = \gamma(t_{i-\ell},t_i) \>.$$
This proves the result. Hence the process can be generated on an arbitrary partition from a sequence of iid $\mathcal N(0,1)$ random variables in $O(n)$ time where $n$ is the size of the partition.
NB: This is an exact sampling technique in that it provides a sampled version of the desired process with the exactly correct finite-dimensional distributions. This is in contrast to Euler (and other) discretization schemes for more general SDEs, which incur a bias due to the approximation via discretization.
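A minimal NumPy sketch of the recursion above (assuming, as in the question, unit rate and unit variance so that $\gamma(s,t)=e^{-|t-s|}$; the function name and seeds are mine):

```python
# Sketch of the exact O(n) scheme described above for gamma(s,t) = exp(-|t - s|).
import numpy as np

def sample_ou(t, seed=None):
    """Exact draw of the stationary OU process at the sorted time points t."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    x = np.empty(len(t))
    x[0] = rng.standard_normal()          # X(t_0) ~ N(0, 1)
    rho = np.exp(-np.diff(t))             # correlation across each gap Delta_i
    z = rng.standard_normal(len(t) - 1)
    for i, r in enumerate(rho):
        x[i + 1] = r * x[i] + np.sqrt(1 - r**2) * z[i]
    return x

times = np.sort(np.random.default_rng(0).uniform(0, 10, size=500))  # an irregular partition
path = sample_ou(times, seed=1)
```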
• Just a few more remarks. (1) To get a good idea of what the continuous time process looks like, $n$ and $T$ must be chosen so that $\Delta$ is small, say less than $0.1$. (2) The inverse covariance (precision) matrix for the timeseries vector is tri-diagonal, as is its Cholesky root.
– Yves
May 24 '12 at 17:55
• @Yves: Thanks for your comments. To be clear, the procedure I've outlined gives an exact realization of the continuous-time process sampled on the corresponding partition; in particular, there is no discretization error like there is in typical Euler-scheme approximation to more general SDEs. The inverse Cholesky, as shown by the construction in the answer has nonzero terms only on the diagonal and lower off-diagonal, so it's a little simpler than tridiagonal. May 24 '12 at 18:13
• Great answer! Does this generalize to the general OU process with arbitrary scale, $\gamma(t_i, t_j) = \exp(\alpha \; |t_i - t_j|)$? It seems like it might. Dec 5 '14 at 23:14
Calculate the decomposed covariance matrix by incomplete Cholesky decomposition or any other matrix decomposition technique. Decomposed matrix should be TxM, where M is only a fraction of T.
http://en.wikipedia.org/wiki/Incomplete_Cholesky_factorization
• Can you give an explicit form of the Cholesky decomposition here? I think that the answer by cardinal achieves just that, if you think about it, by expressing $X_i$ as a function of the history. May 25 '12 at 2:54
• The algorithm is a little too long to summarize. You can find an excellent description here: Kernel ICA, page 20. Note that this algorithm is incomplete, meaning it doesn't calculate the entire decomposition but rather an approximation (hence it is much faster). I have published code for this algorithm in the KMBOX toolbox, you can download it here: km_kernel_icd. Jun 6 '12 at 20:45 | 2021-10-19T09:43:19 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/29091/simulating-a-gaussian-ornstein-uhlenbeck-process-with-an-exponentially-decayin/29099",
"openwebmath_score": 0.792526125907898,
"openwebmath_perplexity": 355.7645370204983,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474872757475,
"lm_q2_score": 0.847967764140929,
"lm_q1q2_score": 0.8389347767436618
} |
https://www.jiskha.com/display.cgi?id=1359878395 | # Trigonometry
posted by Betty
What is the period and asymptote in y= tan(2x-pi)
1. Reiny
for y = tan kθ, the period of the tangent curve is π/k
(notice that this differs from the period definition for sine and cosine)
so the period of tan (2x-π) is π/2 radians or 90°
We know that tan (π/2) is undefined (a vertical asymptote)
so 2x - π = π/2
2x = 3π/2
x = 3π/4
So your function will have a vertical asymptote at
x = 3π/4 , and one every π/2 to the right or to the left after that
vertical asymptotes:
in radians : x = 3π/4 + kπ/2 , where k is an integer
in degrees : x = 135° + 90k° , where k is an integer
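A quick numerical sanity check of both facts (a sketch; the sample points are arbitrary):

```python
# Sketch: confirm the period pi/2 and the asymptote near x = 3*pi/4 numerically.
import numpy as np

f = lambda x: np.tan(2 * x - np.pi)
x = np.linspace(0, np.pi, 7, endpoint=False) + 0.1          # sample points away from asymptotes
print(np.allclose(f(x), f(x + np.pi / 2)))                  # True: the period is pi/2
print(f(3 * np.pi / 4 - 1e-6), f(3 * np.pi / 4 + 1e-6))     # huge +/- values flanking the asymptote
```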
## Similar Questions
1. ### Trigonometry
Find the exact value of tan(a-b) sin a = 4/5, -3pi/2<a<-pi; tan b = -sqrt2, pi/2<b<pi identity used is: tan(a-b)=(tan a-tan b)/1+tan a tan b simplify answer using radicals. (a is alpha, b is beta)
3. ### Precalculus
Write an equation for rational function with given properties. a) a hole at x = 1 b) a vertical asymptote anywhere and a horizontal asymptote along the x-axis c) a hole at x = -2 and a vertical asymptote at x = 1 d) a vertical asymptote …
4. ### trigonometry
state the amplitude, period and phase shift of the function y=tan (20-80 degrees)
5. ### trigonometry
state the amplitude, period and phase shift of the function y = tan (2 theta- 180 degrees)
6. ### Trigonometry
What is the period of y = √3 sin (1/3x - √1/3)?
7. ### Math-Trigonometry
Show that if A, B, and C are the angles of an acute triangle, then tan A + tan B + tan C = tan A tan B tan C. I tried drawing perpendiculars and stuff but it doesn't seem to work?
8. ### Math
f(x) = tan x / sin x Find the vertical asymptote. Describe its behavior to the left and right of the vertical asymptote.
9. ### math (trigonometry)
A=170 degree then prove that Tan A/2=-1-rot(1+Tan^2 A)/Tan A
10. ### Inverse trigonometry
Prove that- tan^-1(1/2tan 2A)+tan^-1(cotA)+tan^-1(cot^3A) ={0,ifpi/4<A<pi/2 {pi,0<A<pi/4 Where 2 small curly brackets are 1 big curly bracket
More Similar Questions | 2018-05-25T13:02:01 | {
"domain": "jiskha.com",
"url": "https://www.jiskha.com/display.cgi?id=1359878395",
"openwebmath_score": 0.9335975646972656,
"openwebmath_perplexity": 3380.3579773798397,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9893474872757474,
"lm_q2_score": 0.8479677622198946,
"lm_q1q2_score": 0.8389347748430911
} |
http://tutor.leiacademy.org/qa/index.php/102/angular-speed-of-two-points-on-a-circle | # Angular speed of two points on a circle.
709 views
A disk is rotating CCW with an angular speed omega.
Two stickers A and B are fixed to different locations on the disk as shown.
Compare the angular speed of the two stickers A and B.
This problem is from one of the lecture slides. Can someone give me a good explanation of why the angular speeds of A and B are the same, but why the linear speed of B is greater than A?
The angular speed describes how much rotational angle (usually in radians) is covered per unit time. Being on the same rotating object, if one point goes around a certain angle, e.g., $2\pi$ for a complete circle, any other point on the same object also experiences a rotation of $2\pi$. Therefore the angular displacement/velocity/acceleration is the same for every point on the same rotating object.
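A tiny numeric illustration (a sketch; it uses the relation $v=\omega R$ given just below, with made-up radii):

```python
# Sketch: both stickers share omega, but the outer one has the larger linear speed (v = omega * R).
omega = 2.0              # rad/s, shared by every point on the disk
R_A, R_B = 0.05, 0.20    # hypothetical radii in metres: sticker B is farther from the axis
v_A, v_B = omega * R_A, omega * R_B
print(v_A, v_B)          # 0.1 m/s vs 0.4 m/s
```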
The linear speed depends on the distance between the point of interest and the rotational axis. And you will have $v=\omega R$, where R is the distance between the point of interest and the rotational axis. | 2019-08-23T15:43:32 | {
"domain": "leiacademy.org",
"url": "http://tutor.leiacademy.org/qa/index.php/102/angular-speed-of-two-points-on-a-circle",
"openwebmath_score": 0.837550163269043,
"openwebmath_perplexity": 233.29564067503725,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474885320983,
"lm_q2_score": 0.8479677602988602,
"lm_q1q2_score": 0.8389347740078658
} |
https://teachingcalculus.com/tag/favorite/ | The Marble and the Vase
A fairly common max/min problem asks the student to find the point on the parabola $f\left( x \right)={{x}^{2}}$ that is closest to the point $A\left( 0,1 \right)$. The solution is not too difficult. The distance, L(x), between A and the point $\left( x,{{x}^{2}} \right)$ on the parabola is given by
$\displaystyle L\left( x \right)=\sqrt{{{\left( x-0 \right)}^{2}}+{{\left( {{x}^{2}}-1 \right)}^{2}}}=\sqrt{{{x}^{4}}-{{x}^{2}}+1}$
And the minimum distance can be found when
$\displaystyle \frac{dL}{dx}=\frac{4{{x}^{3}}-2x}{2\sqrt{{{x}^{4}}-{{x}^{2}}+1}}=0$
This occurs when $x=0,\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}$. The local maximum occurs when x = 0. The (global) minimums are the other two values, located symmetrically about the y-axis.
_________________________
Somewhere I saw this problem posed in terms of a marble dropped into a vase shaped like a parabola. So I think of it that way. This accounts for the title of the post. The problem is, however, basically a two-dimensional situation.
In this post I would like to expand and explore this problem. The exploration will, I hope, give students some insight and experience with extreme values, and the relationship between a graph and its derivative. I will pose a series of questions that you could give to your students to explore. I will answer the questions as I go, but you, of course, should not do that until your students have had some time to work on the questions.
Graphing technology and later Computer Algebra Systems (CAS) will come in handy.
_________________________
1. Consider a general point $A\left( 0,a \right)$ on the y-axis. Find the x-coordinates of the closest point on the parabola in terms of a.
The distance is now given by
$\displaystyle L\left( x \right)=\sqrt{{{\left( x-0 \right)}^{2}}+{{\left( {{x}^{2}}-a \right)}^{2}}}=\sqrt{{{x}^{4}}+\left( 1-2a \right){{x}^{2}}+{{a}^{2}}}$
$\displaystyle \frac{dL}{dx}=\frac{2{{x}^{3}}+\left( 1-2a \right)x}{\sqrt{{{x}^{4}}+\left( 1-2a \right){{x}^{2}}+{{a}^{2}}}}$
And $\frac{dL}{dx}=0$ when $x=0,\frac{\sqrt{2\left( 2a-1 \right)}}{2},-\frac{\sqrt{2\left( 2a-1 \right)}}{2}$
The (local) maximum is at x = 0. The other values are the minimums. The CAS computation is shown at the end of the post. This is easy enough to do by hand.
2. Discuss the equation ${{L}^{2}}={{x}^{2}}+{{\left( x-a \right)}^{2}}$ in relation to this situation.
This is the equation of a circle with center at A with radius of L. At the minimum distance this circle will be tangent to the parabola.
3. What happens when $a=\tfrac{1}{2}$ and when $a<\tfrac{1}{2}$?
When $a=\tfrac{1}{2}$, the three zeroes are the same. The circle is tangent to the parabola at the origin and a is the minimum distance.
When $a<\tfrac{1}{2}$, the circle does not intersect the parabola. Notice that in this case two of the roots of $\frac{dL}{dt}=0$ are not Real numbers.
4. Consider the distance, L(x), from point A to the parabola. As x moves from left to right describe how this length changes. Be specific. Sketch the graph of this distance y = L(x). Where are its (local) maximum and minimum values, relative to the parabola and the circle tangent to the parabola?
The clip below illustrates the situation. The two segments marked L(x) are congruent. The graph of y = L(x) is a “w” shape, similar to but not a quartic polynomial. The minimums occur directly under the points of tangency of the circle and the parabola. The local maximum is directly over the origin. Is it a coincidence that the graph goes through the center of the circle? Explain.
5. Graph $y=\frac{dL}{dx}$ and compare its graph with the graph of $y=L(x)$
L(x) is the blue graph and L'(x) is the orange graph.
Notice the concavity of L'(x)
6. The graph of $y=\frac{dL}{dx}$ appears to be concave up, then down, then (after passing the origin) up, and then down again. There are three points of inflection. Find their x-coordinates in terms of a. How do these points relate to y = L(x)? (Use a CAS to do the computation)
The points of inflection of the derivative can be found from the second derivative of the derivative (the third derivative of the L(x)). The abscissas are $x=-\sqrt{a},x=0,\text{ and }\sqrt{a}$. The CAS computation is shown below
CAS Computation for questions 1 and 6.
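A minimal SymPy sketch of the question 1 computation (not the original CAS session; it assumes $a>\tfrac12$ so the non-zero critical points are real):

```python
# Sketch: critical points of L(x) = sqrt(x^4 + (1 - 2a) x^2 + a^2) for general a (question 1).
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
L = sp.sqrt(x**4 + (1 - 2*a)*x**2 + a**2)
crit = sp.solve(sp.Eq(sp.diff(L, x), 0), x)
print(crit)   # x = 0 and x = +/- sqrt(2(2a - 1))/2 (real when a > 1/2); exact printed form may vary
```

The question 6 computation can be attempted the same way, by solving sp.diff(L, x, 3) = 0 for the inflection points of the derivative.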
Stamp Out Slope-intercept Form!
Accumulation 5: Lines
If you have a function y(x), that has a constant derivative, m, and contains the point $\left( {{x}_{0}},{{y}_{0}} \right)$ then, using the accumulation idea I’ve been discussing in my last few posts, its equation is
$\displaystyle y={{y}_{0}}+\int_{{{x}_{0}}}^{x}{m\,dt}$
$\displaystyle y={{y}_{0}}+\left. mt \right|_{{{x}_{0}}}^{x}$
$\displaystyle y={{y}_{0}}+m\left( x-{{x}_{0}} \right)$
This is why I need your help!
I want to ban all use of the slope-intercept form, y = mx + b, as a method for writing the equation of a line!
The reason is that using the point-slope form to write the equation of a line is much more efficient and quicker. Given a point $\left( {{x}_{0}},{{y}_{0}} \right)$ and the slope, m, it is much easier to substitute into $y={{y}_{0}}+m\left( x-{{x}_{0}} \right)$ at which point you are done; you have an equation of the line.
Algebra 1 books, for some reason that is beyond my understanding, insist on using the slope-intercept method. You begin by substituting the slope into $y=mx+b$ and then substituting the coordinates of the point into the resulting equation, and then solving for b, and then writing the equation all over again, this time with only m and b substituted. It's an algorithm. Okay, it's short and easy enough to do, but why bother when you can have the equation in one step?
Where else do you learn the special case (slope-intercept) before, long before, you learn the general case (point-slope)?
Even if you are given the slope and y-intercept, you can write $y=b+m\left( x-0 \right)$.
If for some reason you need the equation in slope-intercept form, you can always “simplify” the point-slope form.
But don’t you need slope-intercept to graph? No, you don’t. Given the point-slope form you can easily identify a point on the line,$\left( {{x}_{0}},{{y}_{0}} \right)$, start there and use the slope to move to another point. That is the same thing you do using the slope-intercept form except you don’t have to keep reminding your kids that the y-intercept, b, is really the point (0, b) and that’s where you start. Then there is the little problem of what do you do if zero is not in the domain of your problem.
Help me. Please talk to your colleagues who teach pre-algebra, Algebra 1, Geometry, Algebra 2 and pre-calculus. Help them get the kids off on the right foot.
Whenever I mention this to AP Calculus teachers they all agree with me. Whenever you grade the AP Calculus exams you see kids starting with y = mx + b and making algebra mistakes finding b.
Show me the Math!
Is God a Mathematician? by Mario Livio begins
When you work in cosmology … one of the facts of life becomes the weekly letter, e-mail, or fax from someone who wants to describe to you his own theory of the universe (yes, they are invariably men). The biggest mistake you can make is to politely answer that you would like to learn more. This immediately results in an endless barrage of messages. So how can you prevent the assault? The particular tactic I found to be quite useful (short of the impolite act of not answering at all) is to point out the true fact that as long as his theory is not precisely formulated in the language of mathematics, it is impossible to assess its relevance. This response stops most amateur cosmologists in their tracks. … Mathematics is the solid scaffolding that holds together any theory of the universe.
Is God a Mathematician? discusses the question of whether mathematics was invented or discovered. Dr. Livio’s other popular books include The Accelerating Universe (cosmology), The Golden Ratio: The Story of Phi, the World’s most Astounding Number, and The Equation that Couldn’t be Solved: How Mathematical Genius Discovered the Language of Symmetry. All are excellent reads for teachers and students.
The Unknown Thing
Here’s why x is so ubiquitous in mathematics.
Now one more unknown thing is known!
This TED Talk can be found here. | 2023-03-27T22:38:32 | {
"domain": "teachingcalculus.com",
"url": "https://teachingcalculus.com/tag/favorite/",
"openwebmath_score": 0.6201876401901245,
"openwebmath_perplexity": 482.87538585715174,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474872757474,
"lm_q2_score": 0.8479677545357568,
"lm_q1q2_score": 0.8389347672408087
} |
https://math.stackexchange.com/questions/3380974/how-many-ways-can-you-divide-9-students-into-three-unlabeled-teams-of-4-3 | # How many ways can you divide $9$ students into three unlabeled teams of $4$, $3$, and $2$ people?
How many ways can you divide $$9$$ students into three unlabeled teams where one team contains $$4$$ people, one contains $$3$$ people and the last contains $$2$$ people? Unlabeled, meaning that groups with abc = bca = cba, etc
I understand how to do this if the teams are labeled:
$$\frac{9!}{4!3!2!}$$
But there is a term missing in the denominator when the teams are unlabeled and I'm having difficulty understanding how to calculate how many ways the teams can be organized.
There are $$3!$$ ways to organize the same first group of $$3$$, $$4!$$ ways to organize the same second group of $$4$$ and $$2!$$ ways to organize the last group of $$2$$. Why wouldn't you multiply $$3!4!2!$$ in the denominator?
For instance:
(ABC, DEFG) = (ABC, DEGF) = (ABC, DFEG) = (ACB, DFGE), etc
• The teams have different number of people in it and so can be distinguished from one another whether you decided to formally name them or not. – JMoravitz Oct 4 '19 at 22:19
• "$3!$ ways to organize the first group of $3$, $4!$ ways to organize the same second group of $4$ and $5!$ ways to organize the last group of $5$" What group of five? The problem statement talked about a group of 4, of 3, and of 2 respectively. – JMoravitz Oct 4 '19 at 22:26
• Oops, thanks for catching that – b3llegsd Oct 5 '19 at 19:42
• Since the groups have different sizes, they are distinguished by their sizes. Therefore, your answer is correct for unlabeled groups. – N. F. Taussig Oct 5 '19 at 19:58
In how many ways can nine students be divided into teams of $$4$$, $$3$$, and $$2$$ people?
The teams are distinguished by their sizes. Choosing who is on each team completely determines the teams.
There are $$\binom{9}{4}$$ ways to select four of the nine students to be on the team of four students, $$\binom{5}{3}$$ to select three of the five remaining students to be on the team with three students, and one way to form a team of two from the remaining two students. Hence, there are $$\binom{9}{4}\binom{5}{3} = \frac{9!}{4!5!} \cdot \frac{5!}{3!2!} = \frac{9!}{4!3!2!}$$ ways to divide the nine students into three unlabeled teams.
If we had instead chosen the team of two, then the team of three from the remaining seven students, and then placed the remaining four students on the team of four, we could select the teams in $$\binom{9}{2}\binom{7}{3} = \frac{9!}{2!7!} \cdot \frac{7!}{3!4!} = \frac{9!}{2!3!4!}$$ ways, in agreement with above.
Notice that labeling the team with four students team A, the team with three students team B, and the team with two students team C would not change our answer.
More care would be required if two or more of the groups had the same size.
Suppose our students are Amanda, Brenda, Claire, Dennis, Edward, Fiona, Gloria, Henry, and Ivan.
In how many ways can nine students be divided into three unlabeled teams of three people?
If we divide the nine students into teams of three, then the $$3! = 6$$ divisions \begin{align*} & \{Amanda, Brenda, Claire\}, \{Dennis, Edward, Fiona\}, \{Gloria, Henry, Ivan\}\\ & \{Amanda, Brenda, Claire\}, \{Gloria, Henry, Ivan\}, \{Dennis, Edward, Fiona\}\\ & \{Dennis, Edward, Fiona\}, \{Amanda, Brenda, Claire\}, \{Gloria, Henry, Ivan\}\\ & \{Dennis, Edward, Fiona\}, \{Gloria, Henry, Ivan\}, \{Amanda, Brenda, Claire\}\\ & \{Gloria, Henry, Ivan\}, \{Amanda, Brenda, Claire\}, \{Dennis, Edward, Fiona\}\\ & \{Gloria, Henry, Ivan\}, \{Dennis, Edward, Fiona\}, \{Amanda, Brenda, Claire\} \end{align*} are all equivalent since they result in the same three teams. Therefore, the number of ways of dividing the class into three unlabeled teams of three is $$\frac{1}{3!}\binom{9}{3}\binom{6}{3} = \frac{1}{3!} \cdot \frac{9!}{3!3!3!}$$ We divide by $$3!$$ to account for the $$3!$$ orders in which we could select the same three teams of three.
In how many ways can the nine students be divided into three unlabeled teams of sizes $$2$$, $$2$$, and $$5$$?
Similarly, if the teams are not labeled and we divide the class into two teams of two and one team of five, the two divisions \begin{align*} \{Amanda, Brenda\}, \{Claire, Dennis\}, \{Edward, Fiona, Gloria, Henry, Ivan\}\\ \{Claire, Dennis\}, \{Amanda, Brenda\}, \{Edward, Fiona, Gloria, Henry, Ivan\} \end{align*} are equivalent since they result in the same three teams. Hence, the number of ways of dividing the nine students into two teams of two and one team of five if the teams are unlabeled is $$\frac{1}{2!}\binom{9}{2}\binom{7}{2} = \frac{1}{2!} \cdot \frac{9!}{2!2!5!}$$ We divide by $$2!$$ to account for the $$2!$$ orders in which we could pick the same teams of size two.
If we had instead picked the team of five first, we would be left with four people. You might think that the two teams of two could be picked in $$\binom{4}{2}$$ ways, but this counts every team twice, once when we choose a team and once when we choose its complement. Alternatively, notice that if our team of five consists of Edward, Fiona, Gloria, Henry, and Ivan, the two teams of two are distinguished by who is paired with Amanda. There are three ways to do this: \begin{align*} \{Amanda, Brenda\}, \{Claire, Dennis\}\\ \{Amanda, Claire\}, \{Brenda, Dennis\}\\ \{Amanda, Dennis\}, \{Brenda, Claire\} \end{align*} Hence, the number of divisions of nine students into two teams of two and one team of five is $$\binom{9}{5} \cdot 3 = \binom{9}{5} \cdot \frac{1}{2}\binom{4}{2} = \frac{1}{2}\binom{9}{5}\binom{4}{2} = \frac{1}{2!} \cdot \frac{9!}{2!2!5!}$$
Notice that the team of five is distinguished by its size, while the two teams of two are not. The teams of two can only be distinguished by who is on which team.
To summarize, teams of different sizes are distinguished by their sizes, so the order in which they are selected does not matter. If we have unlabeled teams of the same size, we have to divide by the number of orders in which we could pick the same teams.
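A brute-force check of the first count (a sketch; it enumerates the teams directly and should agree with $\frac{9!}{4!\,3!\,2!} = 1260$):

```python
# Sketch: brute-force count of ways to split 9 students into unlabeled teams of sizes 4, 3, 2.
from itertools import combinations
from math import factorial

students = set(range(9))
divisions = set()
for four in combinations(students, 4):
    rest = students - set(four)
    for three in combinations(rest, 3):
        two = rest - set(three)
        # unlabeled: a division is just the set of its teams
        divisions.add(frozenset([frozenset(four), frozenset(three), frozenset(two)]))

print(len(divisions), factorial(9) // (factorial(4) * factorial(3) * factorial(2)))  # 1260 1260
```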
• Thank you! This was the most helpful explanation!! – b3llegsd Oct 9 '19 at 23:17 | 2020-02-23T11:49:45 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3380974/how-many-ways-can-you-divide-9-students-into-three-unlabeled-teams-of-4-3",
"openwebmath_score": 0.5548633933067322,
"openwebmath_perplexity": 759.761990226776,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474885320984,
"lm_q2_score": 0.8479677526147223,
"lm_q1q2_score": 0.8389347664055833
} |
https://math.stackexchange.com/questions/1344656/is-there-any-notation-for-general-n-th-root-r-such-that-rn-x | # Is there any notation for general $n$-th root $r$ such that $r^n=x$?
As we know that the notation for the $n$-th principal root is $\sqrt[n]{x}$ or $x^{1/n}$. But the principal root is not always the only possible root, e.g. for even $n$ and positive $x$ the principal root is always positive but there is also another negative root. E.g. consider $r^2=4$, then $\sqrt 4 =+2$, but $r=-2$ is also a valid solution. Since $x$ is a function of $r$ for some given $n$, so let
$$r^n=x=f(r).$$
We have $r=f^{-1} (x)$. Here $r \neq \sqrt[n]x$ because $\sqrt[n]x$ is the principal root not the general. So is there any notation like $\sqrt[n]{\phantom{aa}}$ for the general $n$-th root of the equation $r^n=x$?
• If $\zeta_n$ is a primitive $n$th root of the unity, then $\zeta^k_n\sqrt[n]{x}$ is an $n$th root of $x$ for any integer $0\leq k < n$. This are in fact all the $n$th roots of $x$ in the complex plane. You can define the multivalued function $f_k(x) := \zeta^k_n\sqrt[n]{x}$. Jun 30 '15 at 15:12
• Nope. Usually one says something like "let $r$ be an $n$-th root of $x$" or "let $r$ such that $r^n = x$". Also, note that you wrote $r = f^{-1}(x)$, but this doesn't make sense because in general $f$ isn't invertible.
– A.P.
Jun 30 '15 at 15:15
• @A.P. What is a non-invertible function? And why is $f$ not invertible here? Jun 30 '15 at 15:27
• A non-invertible function is one for which $f^{-1}$ is not defined. Usually "invertible" is the same as "bijective". In this case, $f$ may not be injective: for example $f(r) = r^2$ doesn't admit an inverse, i.e. an $f^{-1}$, because both $r$ and $-r$ have the same image.
– A.P.
Jun 30 '15 at 15:31
I'd say: $$r=z^{\frac{1}{n}}e^{\frac{2i\pi k}{n}}$$
It is a multivalued function with $k=0,\dots,n-1$
• Is $z$ a complex number here? And is $z^{1/n}$ multivalued in complex analysis or unique? Jun 30 '15 at 15:52
• $z$ is real in this case. If you have a complex number you can write is as $z=|z|e^{i\phi}$ and you'd have: $r=|z|^{\frac{1}{n}}e^{i\frac{\phi+2k\pi}{n}}$ Jun 30 '15 at 16:12
$$z^n=c\implies z=\omega^k\sqrt[n]c$$ Where $\omega$ is a primitive $n^{th}$ root of unity, and $0\le k\in\Bbb{Z}\le n-1$
For a non-negative real number $x$ there is always a unique choice of non-negative real $n$-th root, which is usually denoted by $\sqrt[n]{x}$. Furthermore, if $x$ is negative there is a unique choice of $n$-th root if $n$ is odd and none if $n$ is even.
In short, if $x \geq 0$ is real and $n$ is even, then the only real $n$-th roots of $x$ are $\pm \sqrt[n]{x}$ (and you can use this symbol), while if $x$ is real and $n$ is odd there is only one $n$-th root of $x$, denoted $\sqrt[n]{x}$.
You should understand, though, that taking roots usually involves a choice. In particular, every non-zero complex number has exactly $n$ $n$-th roots.
Now, you could keep the above choices for real numbers, but in general there is no canonical choice of root1. What we usually say instead is something like: "let $w$ be an $n$-th root of $z$". The nice thing, though, is that the other roots are then easily recoverable, because they are all of the form $$\zeta_n^i w \qquad \text{for } i \in \{0,\dotsc,n-1\}$$ where $\zeta_n$ is a primitive $n$-th root of unity, i.e. a complex number such that $\zeta_n^n = 1$ and $\zeta_n^m \neq 1$ for every $0 < m < n$. Again, a choice is involved here, but you can always take $$\zeta_n = e^{2\pi i/n}$$ TL;DR: If you wish to denote the generic $n$-th root of a complex number $z$ you may probably get away with the notation $z^{1/n}$. Just bear in mind that in general this is inherently ambiguous and you should treat this symbol more like a place-holder for an actual $n$-th root of $z$ than as a number.
[1] Technically, one could still define a unique choice of $n$-th root e.g. by taking the root with least argument (in $[0,2\pi)$). While this convention (or a similar one) may be used in analysis, I've never seen it in algebra or number theory.
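A small numeric sketch of this description: pick any one $n$-th root $w$ of $z$ and generate the rest as $\zeta_n^k w$ (the example value $z=-8$ is mine):

```python
# Sketch: all n-th roots of z as w * zeta**k for k = 0, ..., n-1, with zeta = exp(2*pi*i/n).
import numpy as np

z, n = -8 + 0j, 3
w = z ** (1 / n)                      # one particular cube root (Python's principal-branch choice)
zeta = np.exp(2j * np.pi / n)
roots = [w * zeta**k for k in range(n)]
print(roots)
print([abs(r**n - z) < 1e-9 for r in roots])   # [True, True, True]
```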
In complex analysis, $\sqrt[n]{x}$ is regarded as a multivalued function. Or you can write it as $$\sqrt[n]{x}=\exp{\frac{\operatorname{Log}(x)}{n}},\space x\ne0.$$ $\operatorname{Log}(x)$ is the inverse function of $\exp(x)$, see here.
• Do you mean in complex analysis $\sqrt{4}=\pm 2$? Jun 30 '15 at 15:20
• Doesn't $\text{Log}$ take infinitely many different values? What I mean is: doesn't this just complicate matters?
– A.P.
Jun 30 '15 at 15:22
• @user103816 Yes, and no. $\sqrt[n]{x}$ is defined not on the complex plane, but on its Riemann surface. Jun 30 '15 at 15:25
• @A.P. Yes. But since the OP asked for a notation, only $\text{Log}$ is different from ordinary notations. Jun 30 '15 at 15:29
• @user103816 No, but after a fashion you could say that $4^{1/2} = \{2,-2\}$. The idea is to consider a covering space $p \colon S \to \mathbb{C}$ such that the pre-images of a complex number $z$ are precisely its square-roots (or, in general, its $n$-th roots).
– A.P.
Jun 30 '15 at 20:05
There is no privileged or standardized notation. It is a matter of definition, which is dependent on what is convenient for you. Sometimes you want to have a function that outputs an $n$-th root of the input, in which case you would define:
$a^b = \exp(b\ln_π(a))$ where $\ln_π$ is the principal branch of the natural logarithm.
This gives a well-defined function for any complex $a,b$ such that $\arg(a) \ne π$ (or equivalently $a$ is not on the non-positive real line). This function is differentiable everywhere in its domain, which is why it is often used. It is also continuous if a point on the negative real line is approached from the upper half-plane, and it coincides with the usual definition of exponentiation for positive real base and real exponent. Note that it does not coincide with the definition of exponentiation for negative real base and fractional real exponent. $(-8)^{\frac{1}{3}}$ is $(-2)$ in the 'real' world but is $(-2) \exp(\frac{2πi}{3})$ in the 'complex' world.
At other times we do not need a function that outputs a complex number but rather we work with sets of complex numbers. In that case we can define (for sets of complex numbers $a,b$:
$a^b = \exp(b\ln(a))$ where $ab = \{ zw : z \in a \land w \in b \}$ and $\exp(a) = \{ \exp(z) : z \in a \}$ and $\ln(a) = \{ z : \exp(z) \in a \}$.
If everything is suitably redefined to handle sets of complex numbers, we will get that this function is differentiable 'everywhere' except at $a = 0$. Not only that, some usual rules of real exponentiation hold. For example, $a^b a^c = a^{b+c}$ for any complex $a,b,c$ such that $a \ne 0$, and $a^c b^c = (ab)^c$ for any complex $a,b,c$ such that $ab \ne 0$. Some other rules still do not hold, such as $(\{3\}^{\{2\}})^{\{\frac{1}{2}\}} = \{3,-3\}$. | 2021-09-16T18:37:53 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1344656/is-there-any-notation-for-general-n-th-root-r-such-that-rn-x",
"openwebmath_score": 0.9683716297149658,
"openwebmath_perplexity": 104.2134314075962,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9579122708828604,
"lm_q2_score": 0.8757869997529962,
"lm_q1q2_score": 0.8389271137430797
} |
http://math.stackexchange.com/questions/209621/prove-or-disprove-exists-n-0-in-mathbb-n-s-t-forall-n-geq-n-0-2n-n | # Prove or disprove: $\exists N_0 \in \mathbb N s.t. \forall n \geq N_0, 2^n > n^2$
This statement is true, for any $N_0 \geq 5$. My question lies in how to formally prove this. My professor is very strict about proof structure. This week, the homework mainly had to do with mathematical induction. Can this be proven using mathematical induction? If not, can someone lead me in the right direction on how to format this proof?
-
Here is an argument I find cute: $2^n=\sum_{i=0}^n\binom{n}i>\binom{n}1+\binom{n}2+\binom{n}{n-2}=n^2$, where the inequality follows from the fact that $2<n-2$, which is true for $n\ge 5$. – Andrés E. Caicedo Oct 9 '12 at 4:41
== For $\,n=5:\;\;\;2^5=32>25=5^2\,$
== Assume the claim's true (this is the inductive hypothesis = I.H.) for $\,n\geq 5\,$, then:
$$2^{n+1}=2\cdot 2^n\stackrel{\text{I.H.}}>2n^2>(n+1)^2$$
The last inequality on the right being true since
$$2n^2>(n+1)^2=n^2+2n+1\Longleftrightarrow n^2-2n-1>0\Longleftrightarrow$$
$$\Longleftrightarrow \left(n-(1+\sqrt 2)\right)\left(n-(1-\sqrt 2)\right)>0$$
-
Well, you concluded that $2^n > n^2$ holds for all $n \geq 5$.
Doesn't this really look like a standard mathematical induction statement?
$P(5)$ is trivial, and $P(n) \Rightarrow P(n+1)$ reduces to showing that $2n^2 >(n+1)^2$...
-
@Jean-Sébastien Nope, $2n^2$ is correct... That is what $P(n+1)$ reduces to once you use $P(n)$. – N. S. Oct 9 '12 at 3:27
yeah right, once using $P(n)$! time to sleep – Jean-Sébastien Oct 9 '12 at 4:38
Several inductive proofs have been given, so I will give a non-inductive one, just for fun. Perhaps your prof prefers a calculus based proof. $\log$ is order preserving, so $$2^n > n^2 \iff n\log(2) > 2\log(n) \iff \frac{\log(2)}{2} > \frac{\log(n)}{n}$$ Differentiating $f(x) = \frac{\log(x)}{x}$ gives $$f^{\prime}(x) = \frac{1-\log(x)}{x^2}$$ which is easily seen to have a maximum at $x = e$. Therefore it is strictly decreasing for $x\ge 3$. So as soon as you find some $N\ge 3$ which satisfies the inequality, then all $n>N$ will automatically satisfy the inequality as well. $N=5$ will work.
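A throwaway numeric check (a sketch, not a substitute for either proof):

```python
# Sketch: spot-check 2**n > n**2 over a range of n.
print(all(2**n > n**2 for n in range(5, 1000)))   # True
print([(n, 2**n > n**2) for n in range(1, 6)])    # holds at n=1, fails at n=2,3,4, holds again at n=5
```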
- | 2016-07-24T16:45:48 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/209621/prove-or-disprove-exists-n-0-in-mathbb-n-s-t-forall-n-geq-n-0-2n-n",
"openwebmath_score": 0.9246553182601929,
"openwebmath_perplexity": 228.66063758732633,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9916842205394513,
"lm_q2_score": 0.8459424431344437,
"lm_q1q2_score": 0.8389077723410199
} |
https://brg.a2hosted.com/?page_id=1468&replytocom=149 | # Sunday Times Teaser 2653 – Tough Guys
by H Bradley and C Higgins
A ship has two identical vertical masts on the centre line of its deck. These masts are a whole number of feet tall and are seven feet apart horizontally.
The tops of these masts are each attached by straight guy ropes to the same anchor point on the centre line of the deck.
One rope is two feet longer than the other and their combined length is a whole number of feet.
What is the height of the masts?
This teaser can be solved analytically as follows. First let the height of the masts be $$h$$, the length of the guy ropes be $$l_1$$ and $$l_2$$ with $$l$$ and $$s$$ being the sum and difference of these lengths respectively. Then let the distance of the anchor point from the closest mast be $$a$$ and the distance between the masts be $$d$$. We hence have $$l_1=(l+s)/2$$ and $$l_2=(l-s)/2$$. Now using the two pythagorean triangles formed by the deck, the masts and the guy ropes we have:$l_1^2=h^2+(a+d)^2=(l+s)^2/4$ $l_2^2=h^2+a^2=(l-s)^2/4$ Taking the difference of these two equations gives $$2ad+d^2=ls$$ and hence: $a=(ls-d^2)/(2d)$ We can now substitute for $$a$$ in the above equations, which gives: $(2dl_1)^2=(2dh)^2+(ls+d^2)^2=d^2(l+s)^2$ $(2dl_2)^2=(2dh)^2+(ls-d^2)^2=d^2(l-s)^2$ Finally by adding these two equations and simplifying we obtain:$(2dh)^2=(d^2-s^2)(l^2-d^2)$
Substituting the given values $$d=7$$ and $$s=2$$ and simplifying the result gives the quadratic diophantine equation $x^2 - 5y^2=-5$ where $$x=2h/3$$ and $$y=l/7$$. This is a variant of Pell’s equation and has multiple solutions, the first being the trivial one $$x=0,y=1$$, the n’th solution being given by expanding $x_n+\sqrt{5}y_n=(9+4\sqrt{5})^n$ and equating the terms on either side. These solutions can also be generated recursively using: $x_{n+1}=9x_n+20y_n$ $y_{n+1}=4x_n+9y_n$ which provides the basis for this Python solution:
with the output:
including the intended solution of 30 feet masts.
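A minimal sketch along the lines of the recursion just described (not the author's original listing; it takes $h = 3x/2$ and $l = 7y$ and keeps only solutions where the mast height comes out whole):

```python
# Sketch of the recursion x' = 9x + 20y, y' = 4x + 9y on x^2 - 5y^2 = -5,
# reporting mast height h = 3x/2 and combined rope length l = 7y when h is a whole number.
x, y = 0, 1                       # the trivial solution
for _ in range(5):
    x, y = 9*x + 20*y, 4*x + 9*y
    if (3 * x) % 2 == 0:          # h must be a whole number of feet
        h, l = 3 * x // 2, 7 * y
        print(h, l)               # the first line printed should be 30 63
```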
Yes, you are right – thanks – i’ll modify my comment.
My empirical solution for this problem is as follows.
Let $$x$$ be the height of the masts, $$y$$ and $$y+2$$ be the lengths of the guy ropes with the anchor point on the left side of the mast holding the shorter rope. Using Pythagorean Theorem we can get: $\sqrt{(y+2)^2-x^2}-\sqrt{y^2-x^2}=7$ giving us $x=(3/14)\sqrt{5(2y-5)(2y+9)}$ whereby by trial and error and using the fact that $$y>7$$ and $$y=a/2$$ where $$a$$ is an integer, we get $$y=(61/2)$$ ft and $$x=30$$ ft.
But your general solution is really very neat and explicit! | 2019-12-12T10:51:30 | {
"domain": "a2hosted.com",
"url": "https://brg.a2hosted.com/?page_id=1468&replytocom=149",
"openwebmath_score": 0.7549818158149719,
"openwebmath_perplexity": 291.45337703142053,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9916842234886746,
"lm_q2_score": 0.8459424353665381,
"lm_q1q2_score": 0.8389077671325836
} |
https://www.physicsforums.com/threads/absolute-value-inequality.780632/ | # Absolute value inequality
1. Nov 7, 2014
### OceanSpring
• Member warned about not using the homework template
Question:
True or False If x^2<4 then |x|<=2
My solution:
I get -2<x<2 when I solve the problem, so it should be false. Yet the text says it's true? Is this a mistake? If |x| is equal to 2 then it should be a closed interval, not an open interval, which seems to be correct to me.
Last edited by a moderator: Nov 7, 2014
2. Nov 7, 2014
### Staff: Mentor
x2 < 4 is equivalent to -2 < x < 2 or |x| < 2. What the text has appears to be a typo.
3. Nov 7, 2014
### PeroK
It's probably a typo, although logically it is true:
If $x^2 < 4$ then $|x| < 2$ hence $|x| \le 2$
If it were false, then there would be $x$ with $|x| > 2$ yet $x^2 < 4$ | 2017-12-14T17:15:57 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/absolute-value-inequality.780632/",
"openwebmath_score": 0.4849463999271393,
"openwebmath_perplexity": 921.2392826004907,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.959762057376384,
"lm_q2_score": 0.8740772384450967,
"lm_q1q2_score": 0.8389061686759341
} |
https://mathematica.stackexchange.com/questions/80204/calculating-average-distance-between-maxima | # Calculating average distance between maxima
I'm having trouble figuring out a way to analyze some simple data. When graphed, the data I have make a somewhat sinusoidal curve. What I want to do is to find the x-values of the maximum peaks of the sinusoidal curve. I then want to subtract each of these x-values from the last peak found and average these differences to obtain an average distance between peaks. Is there an easy way to do this with Mathematica ? Any suggestions ?
• Do you have the data as a discrete series of values, or as a known function f(x)? – Michael Seifert Apr 17 '15 at 19:32
• It's a discrete series of values. Thanks everyone for your help! – Evan Hale Apr 18 '15 at 3:40
Using Bob Hanlon's example data:
data = Table[{x, Sin[x] + Sin[3. x/2]}, {x, 0, 6. Pi, Pi/5.}];
peaks = FindPeaks[data[[All,2]]][[All,1]]
(* {3,10,15,23,30} *)
Mean@Differences@peaks
(* 27/4 *)
Mean@Differences@data[[peaks, 1]]
(* 4.24115 *)
Or, use PeakDetect:
Mean @ Differences @ Pick[#1, PeakDetect @ #2, 1] & @@Transpose[data]
(* 4.24115 *)
Or, using @Michael Seifert's observation, you can also use
Subtract @@ data[[peaks[[{-1, 1}]], 1]]/(Length[peaks] - 1)
(* 4.24115 *)
Plots:
ListLinePlot[data,Epilog->{PointSize[Large], Red, Point@data[[peaks]]}]
ListLinePlot[data,
Epilog->{PointSize[Large], Red, Point@data[[peaks]],
NumberLinePlot[Interval/@Partition[data[[peaks,1]],2,1],
PlotStyle->Thickness[.01],Spacings->0][[1]]}]
ListLinePlot[data, ImageSize->400,
Epilog->{PointSize[Large], Red, Point@data[[peaks]]},
Prolog->{NumberLinePlot[Interval/@
Partition[data[[Join[{1},peaks,{Length[data]}],1]],2,1],
PlotStyle-> Directive[PointSize[0], Opacity[.4], CapForm["Butt"], Thickness[1]],
Spacings->0][[1]]}]
• Thanks for introdrucing FindPeaks in combination with the elegant Part notation. – eldo Apr 17 '15 at 20:52
• Woah! That exists!? – Ivan Apr 17 '15 at 21:04
• @eldo thank you. Ivan it is one of those new-in version-10 things. So is NumberLinePlot. – kglr Apr 17 '15 at 21:14
• Hi, thanks for your help. I'm having trouble getting FindPeaks[] to work with the 2D data I have ({x0,y0},{x1,y1}, etc.). Do you know a way to get it to work? – Evan Hale Apr 23 '15 at 4:46
• @Evan, afaik FindPeaks and PeakDetect works only with a one-dimensional list as the first argument. Can't think of how to use FindPeaks to identify peaks in 2D data off the top of my head. That would make a great new question though. – kglr Apr 23 '15 at 7:47
Generating test data
data = Table[{x, Sin[x] + Sin[3. x/2]}, {x, 0, 6. Pi, Pi/5.}];
Clear[f]
The interpolation function for the data is
f = Interpolation[data];
Maxima occur for f'[x] == 0 and f''[x] < 0
xValues = x /. Select[
FindRoot[f'[x], {x, #}] & /@
Range[0, 6 Pi, Pi/4] //
Union[#, SameTest ->
(Abs[#1[[1, -1]] - #2[[1, -1]]] < 10^-4 &)] & //
Quiet,
f''[x] < 0 /. # &]
{1.24252, 5.65487, 8.96826, 13.8089, 18.2264}
Plot[f[x], {x, 0, 6 Pi},
Epilog -> {Red, AbsolutePointSize[4], Point[{#, f[#]} & /@ xValues]}]
The separation between the maxima are
diff = Differences[xValues]
{4.41235, 3.31339, 4.84063, 4.41748}
Their average is
mu = Mean[diff]
4.24596
EDIT: However, this can be done more efficiently, particularly if the list is long. Looking at symbolic data,
dataS = Array[d, 10]
{d[1], d[2], d[3], d[4], d[5], d[6], d[7], d[8], d[9], d[10]}
dataS // Differences // Mean
1/9 (-d[1] + d[10])
So the Mean of the Differences just divides the interval into equal subintervals.
% == (dataS[[-1]] - dataS[[1]])/(Length[dataS] - 1)
True
mu == (xValues[[-1]] - xValues[[1]])/(Length[xValues] - 1)
True
If you don't need a precisely interpolated maximum value, you can just use the data set directly, as follows:
values = sampledata[[;; , 2]];
diffs = Differences[values];
selector = Table[If[diffs[[i]] > 0 && diffs[[i + 1]] < 0, 1, 0], {i, 1,
Length[diffs] - 1}];
maxima = Pick[Drop[Drop[sampledata, 1], -1], selector, 1]
avgmaxdist = (Last[maxima][[1]] - First[maxima][[1]])/(Length[maxima] - 1)
The idea here is to just calculate the differences between successive data points, and then look in the differences array for all of the positive elements that are followed by a negative element. Applying this to Bob Hanlon's sample data set yields
{{1.25664, 1.90211}, {5.65487, 0.221232}, {8.79646, 1.17557}, {13.823, 1.90211}, {18.2212, 0.221232}}
4.24115
The differing results are almost certainly due to the lack of interpolation in my code.
Note that the "average distance between maxima" is really the $x$-distance between the first and the last maximum, divided by the number of gaps between maxima. You still need to find all the maxima, though, so that you know how many there are. | 2019-07-20T08:28:34 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/80204/calculating-average-distance-between-maxima",
"openwebmath_score": 0.2983856499195099,
"openwebmath_perplexity": 4624.450574705256,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9597620596782468,
"lm_q2_score": 0.874077230244524,
"lm_q1q2_score": 0.8389061628173415
} |
https://mathematica.stackexchange.com/questions/198675/numerical-solution-of-a-singular-integral-equation | # Numerical solution of a singular integral equation
I am looking to approximate the solution u of the following equation using a discretization method or any other idea. Is there a way to find a numerical solution for it:
u[t]-Integrate[Abs[t - s]^(-1/2)*u[s], {s, 0, 1}] == 1/3 (-2 Sqrt[1 - t]+3t-4 Sqrt[1-t]t-4t^(3/2)) where 0<t<1.
The solution is u[x]=x but I am assuming that I dont know the answer and we need to find approximation for it.
• Either you need to provide the function u[t], or you need to provide an equation that u[t] satisfies. May 19 '19 at 19:39
• In fact, I need to find this u numerically. May 19 '19 at 19:43
• The above is not valid Mathematica code? In Mathematica a function u(t) is written as u[t] also for u(s) it is u[s] May 19 '19 at 19:44
• Perhaps you meant you need to solve u[t] - Integrate[Abs[t-s]^(-1/2) u[s], {s, 0, 1}] == 0 numerically. If there is no equation, then there is no way to solve for u[t]. May 19 '19 at 19:45
• If an approximation is fine, you could write $u(t)=a_0+a_1t+a_2t^2+\mathcal O(t^3)$ and solve for $a_i$. I get $a_0=0$, $a_1=1$, $a_2=0$, etc, which confirms that $u(t)=t$ is a solution. May 19 '19 at 22:11
Here's a general solution that works by interpolation. I'll present the method in a very slow way, and we can work on speeding it up later on if desired.
First, we make an ansatz for the function $$u(t)$$ on the interval $$[0,1]$$. Here I use a grid of $$n+1$$ equidistant points and a linear interpolation scheme:
n = 10;
tvalues = Subdivide[n];
uvalues = Unique[] & /@ tvalues; (* we don't care what these variables are called *)
tupairs = Transpose[{tvalues, uvalues}];
u[t_] = Piecewise@BlockMap[{((t-#[[2,1]])#[[1,2]]-(t-#[[1,1]])#[[2,2]])/(#[[1, 1]]-#[[2, 1]]),
#[[1,1]]<=t<=#[[2,1]]}&, tupairs, 2, 1]
Check that this interpolation scheme has indeed the values uvalues on the grid points tvalues:
u /@ tvalues == uvalues
(* True *)
Define the integral $$\int_0^1 ds\,u(s)/\sqrt{\lvert t-s\rvert}$$:
uint[t_] := Integrate[u[s]/Sqrt[Abs[t-s]], {s, 0, 1}]
Evaluate this integral on the same grid of tvalues: here is the slow part of this calculation, and could probably be sped up dramatically,
uintvalues = uint /@ tvalues
(* long output where every element is a linear combination of the uvalues *)
The right-hand side of the integral equation, evaluated on the same grid of tvalues:
f[t_] = 1/3 (-2 Sqrt[1 - t] + 3 t - 4 Sqrt[1 - t] t - 4 t^(3/2));
fvalues = f /@ tvalues
(* long output *)
Solve for the coefficients of $$u(t)$$: a linear system of equations for the grid values uvalues, found by setting the left and right sides of the integral equation equal at every grid point in tvalues,
solution = tupairs /.
First@Solve[Thread[uvalues - uintvalues == fvalues] // N, uvalues]
{{0, 5.84947*10^-16}, {1/10, 0.1}, {1/5, 0.2}, {3/10, 0.3}, {2/5, 0.4}, {1/2, 0.5}, {3/5, 0.6}, {7/10, 0.7}, {4/5, 0.8}, {9/10, 0.9}, {1, 1.}}
This confirms your analytic solution $$u(t)=t$$ but is much more general.
You don't need the // N in the last step if you prefer an analytic solution; however, numerical solution is very much faster.
ListLinePlot[solution, PlotMarkers -> Automatic]
# Update: much faster version
To speed up this algorithm, the main point is to speed up the calculation of the uintvalues from the uvalues. Instead of doing piecewise integrals, this calculation can be expressed as a matrix multiplication, uintvalues == X.uvalues, with the matrix X defined as
n = 10;
X = N[4/(3 Sqrt[n])]*
SparseArray[{{1,1} -> 1.,
{-1,-1} -> 1.,
Band[{2,2}, {-2,-2}] -> 2.,
Band[{2,1}, {-1,1}, {1,0}] ->
N@Table[(i-2)^(3/2)-(i-1)^(3/2)+3/2*(i-1)^(1/2), {i,2,n+1}],
Band[{1,-1}, {-2,-1}, {1,0}] -> N@Reverse@Table[(i-2)^(3/2)-(i-1)^(3/2)+3/2*(i-1)^(1/2), {i,2,n+1}],
Sequence @@ Table[Band[{1,a}, {1+n-a,n}] -> N[a^(3/2)-2*(a-1)^(3/2)+(a-2)^(3/2)], {a,2,n}],
Sequence @@ Table[Band[{a+1,2}, {n+1,n+2-a}] -> N[a^(3/2)-2(a-1)^(3/2)+(a-2)^(3/2)], {a,2,n}]},
{n+1, n+1}] // Normal;
(The coefficients follow from the Piecewise ansatz and analytic integration.)
With this matrix defined, the algorithm becomes simply
tvalues = Subdivide[n];
f[t_] = 1/3 (-2 Sqrt[1 - t] + 3 t - 4 Sqrt[1 - t] t - 4 t^(3/2));
fvalues = f /@ tvalues;
solution = Inverse[IdentityMatrix[n+1] - X].fvalues
ListLinePlot[Transpose[{tvalues, solution}]]
In this way, $$n=1000$$ grid points can be achieved in a few seconds, most of which is still spent in assembling the X-matrix. The next step would be to write down a faster way of assembling X.
• Dear Roman, I couldn't find any word that deserve saying Thank you for the idea you provided. The solution works perfectly for me. We just need to work to speed it up. Also I have another schemes that I need to discuss it with you. Please send me your email. I need someone to work with me on a paid project if you interested. May 20 '19 at 19:37
• You're welcome. Email me at Uncompress["1:eJxTTMoPChZnYGAoys9NzNMrTs7IzUxNcSjNy0xKLNZLzgAAnn0Kkg=="] May 20 '19 at 19:40
• @Roman How did you find the interpolation u[t]? May 21 '19 at 12:06
• @AlexTrounev every piece of the Piecewise function is a two-point Lagrange polynomial. Concretely, if you set f[x_] = y1*(x - x2)/(x1 - x2) + y2*(x - x1)/(x2 - x1), then f[x1] == y1 and f[x2] == y2. In this way the function f passes through the two points {x1,y1} and {x2,y2}. May 21 '19 at 12:27
• @jsxs 1. Correct. All I wanted here is a list of symbols that has the right length. Unique[]& is a pure function with zero arguments (which is perfectly legal), and so the supplied argument in Map is discarded. I agree it's a bit opaque; I could have done uvalues = Table[Unique[], {n+1}] instead for more clarity. May 23 '19 at 11:53
Not an answer, only an idea to solve the problem.
I tried to solve your integral equation iteratively using NestList:
sol = NestList[
Function[fu,
FunctionInterpolation[
1/3 (-2 Sqrt[1 - t] + 3 t - 4 t Sqrt[1 - t] - 4 t^(3/2)) +
NIntegrate[fu[s]/Sqrt[Sqrt[(t - s)^2]] , {s, 0, 1},
Method -> "LocalAdaptive" ], {t, 0, 1 }]
] , 0 &, (* initial function *)5];
Unfortunately the Picard iteration doesn't converge in your case:
Plot[Map[#[t] &, sol], {t, 0, 1}]
Perhaps you have additional system knowhow to force a convergent iteration?
• tried other starting point perhaps? e.g., #& is the exact solution, so a fixed point presumably (is the system stable?) May 20 '19 at 15:54
• I tried this initial function too, but the fixed point isn't stable May 20 '19 at 17:26
I will add another method that is not as accurate as @Roman's method, but faster. It uses a closed-form antiderivative (with respect to s) of the kernel 1/Sqrt[Abs[t-s]], so the integral over each subinterval can be evaluated exactly:
ker[s_, t_] := If[t > s, -2*Sqrt[t - s], 2*Sqrt[s - t]]
Then everything is as usual
np = 51; points = fun = Table[Null, {np}];
Table[points[[i]] = i/np, {i, np}];
sol = Unique[] & /@ points;
Do[fun[[i]] =
1/3 (-2 Sqrt[1 - t] + 3 t - 4 Sqrt[1 - t] t - 4 t^(3/2)) /.
t -> points[[i]], {i, np}];
sol1 = sol /.
First@Solve[
Table[sol[[j]] -
Sum[.5*(sol[[i]] +
sol[[i + 1]])*(ker[points[[i + 1]], points[[j]]] -
ker[points[[i]], points[[j]]]), {i, 1, np - 1}] ==
fun[[j]], {j, 1, np}], sol];
u = Transpose[{points, sol1}];
Show[Plot[t, {t, 0, 1}], ListPlot[u]] | 2021-10-25T10:00:45 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/198675/numerical-solution-of-a-singular-integral-equation",
"openwebmath_score": 0.45741981267929077,
"openwebmath_perplexity": 2774.3254907612254,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9597620596782468,
"lm_q2_score": 0.8740772286044094,
"lm_q1q2_score": 0.8389061612432217
} |
https://math.stackexchange.com/questions/1597911/integrating-a-dirac-delta-function-on-a-definite-domain | # Integrating a dirac delta 'function' on a definite domain
Came across a question that requires evaluation of $\int_{-3}^{+1} \left(x^{3}-3x^{2}+2x-1\right)\delta\left(x+2\right)dx$
Here's my attempt:
Recall that:
$\int_{-\infty}^{\infty} f\left(x\right)\delta\left(x-a\right)dx=f\left(a\right)\int_{-\infty}^{+\infty} \delta\left(x-a\right)dx=f\left(a\right)$
Note that a=-2 relative to the question.
Then,$$\int_{-2-1}^{-2+3} \left((-2)^{3}-3\left(-2\right)^{2}-2\left(-2\right)-1\right)\delta\left(x+2\right)dx= (-2)^{3}-3\left(-2\right)^{2}-2\left(-2\right)-1$$
I'm still fairly uncomfortable dealing with dirac delta function due to my sparse exposure to them. My guess is that I'm doing this question wrongly and that the domain of integration requires shifting so that the domain of integration is symmetric about the point x=0. Any help is appreciated
• $f(x) = x^3 - 3 x^2 + 2x - 1$ but the result is $f(-2)$ which isn't what you wrote. remember : $\delta(x+2) \ne 0$ only in a neighborhood of $x+2 = 0$ – reuns Jan 3 '16 at 4:20
• @user1952009 let me correct that! It is a typo! – Mathematicing Jan 3 '16 at 4:21
• and if you are unconfortable, go back to the definition : $$\delta(x) = \frac12 \lim_{a \to \infty} a e^{-|a x|}$$ (or any function which has its peak more and more concentrated at the origin) – reuns Jan 3 '16 at 4:22
• @user1952009 I think it helps that I went back to the definition and it says that the dirac delta function has an area of 1 at the point a and integrating this domain produces 1 at x=a while being zero everywhere else. – Mathematicing Jan 3 '16 at 4:45
In general, in a definite domain
$\int_b^c f(x)\delta(x-a)dx = f(a)$ if $b < a < c$,
then $\int_{-3}^{1} (x^3 - 3x^2 + 2x -1) \delta(x+2) dx = (-2)^3 - 3(-2)^2 + 2(-2) - 1$ because $-3 < -2 < 1$.
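As a numerical sanity check (my addition, not part of the answer above), one can replace the delta by a narrow Gaussian "nascent delta" and watch the integral approach $f(-2)=-25$; the Python/NumPy sketch below uses a plain Riemann sum on a fine grid.

```python
# Approximate I(eps) = Integral_{-3}^{1} (x^3 - 3x^2 + 2x - 1) * N(x; -2, eps) dx,
# where N is a Gaussian of width eps centered at -2.  Exact limit: f(-2) = -25.
import numpy as np

f = lambda x: x**3 - 3*x**2 + 2*x - 1
x = np.linspace(-3.0, 1.0, 400_001)        # fine grid so the narrow spike is resolved
h = x[1] - x[0]

for eps in (0.2, 0.1, 0.05, 0.01):
    nascent_delta = np.exp(-(x + 2)**2 / (2*eps**2)) / (eps*np.sqrt(2*np.pi))
    approx = np.sum(f(x) * nascent_delta) * h
    print(f"eps = {eps:5.2f}:  integral ≈ {approx:.4f}")
# The printed values tend to -25 as eps shrinks, i.e. to f(-2).
```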
• I have corrected the typos in the OP. Should I have been correct then? – Mathematicing Jan 3 '16 at 4:23
• Integrating the Dirac delta does not require a symmetric domain. Symmetric or asymmetric only matters if the center of the delta is inside the domain. The correction of the typo from 2 to -2 is correct. – cosmoscalibur Jan 3 '16 at 4:27
• Edit:cancel edit – Mathematicing Jan 3 '16 at 4:34
• If you check that the center of the Dirac delta is inside the domain, just evaluate the function at that value; otherwise the integral is zero. – cosmoscalibur Jan 3 '16 at 4:37
• As I mentioned in the last comment, if the center is outside, it is zero. – cosmoscalibur Jan 3 '16 at 4:49
In THIS ANSWER and THIS ONE, I provided primers on the Dirac Delta.
Here, using the Unit Step Function $u(x)$ defined by
$$u(x)= \begin{cases}1&,x>0\\ 0&,x<0\\ 1/2&,x=0 \end{cases}$$
We interpret the notation $\int_a^b f(x)\delta(x-x')\,dx$ using the unit step function and write
\begin{align} \mathscr{D_{x';a,b}}\{f\}&=\int_a^b f(x)\delta(x-x')\,dx\\ &=\int_{-\infty}^{\infty}f(x)\left(u(x-a)-u(x-b)\right)\delta(x-x')\,dx\\ &=f(x')\left(u(x'-a)-u(x'-b)\right) \end{align}
Now, depending on $x'$ relative to $a$ and $b$, we have
\begin{align} \mathscr{D_{x';a,b}}\{f\}&= \begin{cases} f(x')&,a<x'<b\\ \frac12 f(x')&,x'=a\,\,\text{or}\,\,x'=b\\ 0&, \text{otherwise} \end{cases} \end{align}
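To illustrate the boundary case $x'=a$ of this formula numerically (my addition, using the same Gaussian nascent-delta idea as in the earlier sketch): placing the delta exactly at the lower limit of integration yields half the value, $\tfrac12 f(-2)=-12.5$.

```python
# Boundary case: Integral_{-2}^{1} (x^3 - 3x^2 + 2x - 1) * N(x; -2, eps) dx -> f(-2)/2 = -12.5,
# because only half of the nascent delta's mass lies inside [-2, 1].
import numpy as np

f = lambda x: x**3 - 3*x**2 + 2*x - 1
x = np.linspace(-2.0, 1.0, 300_001)
h = x[1] - x[0]

for eps in (0.05, 0.01, 0.002):
    nascent_delta = np.exp(-(x + 2)**2 / (2*eps**2)) / (eps*np.sqrt(2*np.pi))
    print(f"eps = {eps:5.3f}:  integral ≈ {np.sum(f(x)*nascent_delta)*h:.4f}")
# Values approach -12.5 as eps shrinks (first-order convergence), matching f(x')/2 at x' = a.
```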
• But the question is not about the more general case else about a Dirac delta inside of the interval of integration. – cosmoscalibur Jan 12 '16 at 20:10
• @cosmoscalibur The OP stated "I'm still fairly uncomfortable dealing with dirac delta function due to my sparse exposure to them. My guess is that I'm doing this question wrongly and that the domain of integration requires shifting so that the domain of integration is symmetric about the point x=0. " I posted a direct reply to the issue regarding how to interpret the notation $\int_a^b f(x)\delta(x-x')\,dx$ for the Dirac Delta. – Mark Viola Jan 12 '16 at 20:20
• Math Magician. Please let me know how I can improve my answer. I really want to give you the best answer I can. If you don't find it useful, then I am happy to delete it. So, if you would, please let me know either way. - Mark – Mark Viola Jan 17 '16 at 18:37
• Math Magician. Shall I delete my answer then? - Mark – Mark Viola Jan 25 '16 at 3:25 | 2019-06-26T04:52:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1597911/integrating-a-dirac-delta-function-on-a-definite-domain",
"openwebmath_score": 0.87455815076828,
"openwebmath_perplexity": 541.3073751678894,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.959762057376384,
"lm_q2_score": 0.8740772269642948,
"lm_q1q2_score": 0.8389061576570961
} |
http://mathhelpforum.com/trigonometry/19256-trig-problem.html | # Math Help - Trig Problem
1. ## Trig Problem
There are two similar questions in my homework which need me to solve equations and I have now got stuck on one of them and would like someone to tell me if I have made a mistake in my workings or what I should do next.
Q. Solve 2tan^2 x-7sec x+8=0 for 0<x<360
So far I have got:
2tan^2 x-7sec x+8=0
2(sin^2 x/cos^2 x)-7cosx+8=0
I am mainly confused because I want to multiply by cos^2 x in order to cancel out the cos^2 x under sin^2 x, but can I do this given that it is all multiplied by 2 i.e. in the brackets?
ALSO: I would be grateful if someone could clarify this for me.
Q. State the values of:
arc sin 0.5 - my answer is 30 degrees
arc tan 1 - my answer is 45 degrees
I ask this because I have no idea what the 'arc' means - does it affect the way I should answer the question as I have never encountered it before.
2. 2tan^2 x-7sec x+8=0
2(sin^2 x/cos^2 x)-7cosx+8=0
That's okay except that it should be -7/cos x.
You wanted to multiply all by cos^2 x in order to clear the fraction? Very good! Always clear the fractions first.
Sure you can multiply all by cos^2 x even if "it is all multiplied by 2 i.e. in the brackets".
So, to continue,
2[sin^2(X) /cos^2(X)] -7/cosX +8 = 0
Multiply both sides by cos^2(X),
2sin^2(X) -7cosX +8cos^2(X) = 0
2[1 -cos^2(X)] -7cosX +8cos^2(X) = 0
2 -2cos^2(X) -7cosX +8cos^2(X) = 0
6cos^2(X) -7cosX +2 = 0
Factor that,
(3cosX -2)(2cosX -1) = 0
3cosX -2 = 0
cosX = 2/3
X = arccos(2/3) = 48.1896851 degrees, in the 1st quadrant.
Since cosine is positive also in the 4th quadrant,
X = 360 -48.1896851 = 311.8103149 deg, in the 4th quadrant.
2cosX -1 = 0
cosX = 1/2
X = arccos(1/2)) = 60, in the 1st quadrant.
Since cosine is positive also in the 4th quadrant,
X = 360 -60 = 300 deg, in the 4th quadrant.
Therefore, X = 48.1896851, 60, 300, or 311.8103149 degrees -----answer.
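As a quick numerical cross-check of these four angles (my addition, a small Python sketch), substituting each back into 2tan^2(X) - 7sec(X) + 8 should give zero:

```python
# Verify the four solutions of 2 tan^2(X) - 7 sec(X) + 8 = 0 for 0 <= X <= 360 degrees.
import math

def lhs(deg):
    r = math.radians(deg)
    return 2*math.tan(r)**2 - 7/math.cos(r) + 8

for deg in (48.1896851, 60, 300, 311.8103149):
    print(f"X = {deg:12.7f} deg  ->  lhs = {lhs(deg):+.2e}")
# Each value is ~0 (up to the rounding of the quoted angles), confirming the answer.
```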
-----------------------------
ALSO: I would be grateful if someone could clarify this for me.
Q. State the values of:
arc sin 0.5 - my answer is 30 degrees
arc tan 1 - my answer is 45 degrees
I ask this because I have no idea what the 'arc' means - does it affect the way I should answer the question as I have never encountered it before.
arcsin(0.5) = 30 deg or 150 deg, since sine is positive in the 1st and 2nd quadrants.
arctan(1) = 45deg or 225deg, since tangent is positive in the 1st and 3rd quadrants.
'arc' here means an angle: arcsin(0.5) means an angle whose sine is 0.5.
arctan(1) is an angle whose tangent is 1.
So associate "arc___" with an angle.
3. Hello, Tom G!
Here's the first one . . .
1) Solve: . $2\tan^2 x -7\sec x + 8\;=\;0$ . for $0^o \leq x \leq 360^o$
We have: . $2\left(\sec^2x - 1\right) - 7\sec x + 8 \;=\;0$
. . which simplifies to: . $2\sec^2x - 7\sec x + 6 \;=\;0$
. . which factors: . $(\sec x - 2)(2\sec x - 3) \;=\;0$
And has roots:
. . $\sec x - 2 \:=\:0\quad\Rightarrow\quad\sec x \:=\:2\quad\Rightarrow\quad x \:=\:60^o,\:300^o$
. . $2\sec x - 3 \:=\:0\quad\Rightarrow\quad\sec x \:=\:\frac{3}{2}\quad\Rightarrow\quad x \:\approx\:48.19^o,\:311.81^o$
4. Originally Posted by Tom G
Q. State the values of:
arc sin 0.5 - my answer is 30 degrees
arc tan 1 - my answer is 45 degrees
I ask this because I have no idea what the 'arc' means - does it affect the way I should answer the question as I have never encountered it before.
The "arcsine" function is the "older" labeling of the inverse sine function. ie.
$sin(30^o) = \frac{1}{2} \implies arcsin \left ( \frac{1}{2} \right ) = sin^{-1} \left ( \frac{1}{2} \right) = 30^o$
You won't typically see the newer textbooks using asin, acs, atn, etc. (Please don't ask me for a cut-off date for this. For all I know there are some new texts using this.) For example I learned the "arc" functions in High School in the late 80's, but my Calc book in college didn't use them. However my Calc book that was published in the 70's does use them.
-Dan
5. Just one more question...
Does arc cos(-1/square root of 2) = -45 ?
6. Originally Posted by Tom G
Just one more question...
Does arc cos(-1/square root of 2) = -45 ?
no. (remember cosine is an even function)
couldn't you have plugged that in to your calculator? or you could work it out (it's actually not that hard)
7. Originally Posted by Tom G
Just one more question...
Does arc cos(-1/square root of 2) = -45 ?
-45 degrees?
No.
-1/sqrt(2) is negative.
In what quadrants is cosine negative?
In the 2nd and 3rd quadrants.
So, 135 deg and 225 deg. --------------answer. | 2015-05-25T14:10:23 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/trigonometry/19256-trig-problem.html",
"openwebmath_score": 0.8583391904830933,
"openwebmath_perplexity": 1645.0187267995632,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9597620596782468,
"lm_q2_score": 0.874077222043951,
"lm_q1q2_score": 0.8389061549467427
} |
https://tobydriscoll.net/fnc-julia/twodim/tensorprod.html | # 13.1. Tensor-product discretizations#
As you learned when starting double integration in vector calculus, the simplest extension of an interval to two dimensions is a rectangle. We will use a particular notation for rectangles:
(13.1.1)#$[a,b] \times [c,d] = \bigl\{ (x,y)\in\mathbb{R}^2 : a\le x \le b,\; c\le y \le d \bigr\}.$
The $$\times$$ in this notation is called a tensor product, and a rectangle is the fundamental example of a tensor-product domain. The implication of the tensor product is that each variable independently varies over a fixed set. The simplest three-dimensional tensor-product domain is the cuboid $$[a,b]\times[c,d]\times[e,f]$$. When the interval is the same in each dimension (that is, the region is a square or a cube), we may write $$[a,b]^2$$ or $$[a,b]^3$$. We will limit our discussion to two dimensions henceforth.
The discretization of a two-dimensional tensor-product domain is straightforward.
Definition 13.1.1 : Tensor-product grid
Given discretizations of two intervals,
(13.1.2)#$a= x_0< x_1 < \cdots < x_m = b, \qquad c = y_0 < y_1 < \cdots < y_n = d,$
then a tensor-product grid on $$[a,b]\times[c,d]$$ is the set
(13.1.3)#$\bigl\{ (x_i,y_j): i=0,\ldots,m,\; j=0,\ldots,n \bigr\}.$
## Functions on grids
The double indexing of the grid set (13.1.3) implies an irresistible connection to matrices. Corresponding to any function $$f(x,y)$$ defined on the rectangle is an $$(m+1)\times(n+1)$$ matrix $$\mathbf{F}$$ defined by collecting the values of $$f$$ at the points in the grid. This transformation of a function to a matrix is so important that we give it a formal name:
(13.1.4)#$\begin{split}\mathbf{F} = \mtx(f) = \Bigl[f(x_i,y_j)\Bigr]_{\substack{i=0,\ldots,m\\j=0,\ldots,n}}.\end{split}$
Caution
There is potential for confusion because the first dimension of a matrix varies in the vertical direction, while the first coordinate $$x$$ varies horizontally. In fact, the Julia plotting routines we use expect the transpose of this arrangement, so that $$x$$ varies along columns and $$y$$ along rows.
Example 13.1.2
Let the interval $$[0,2]$$ be divided into $$m=4$$ equally sized pieces, and let $$[1,3]$$ be discretized in $$n=2$$ equal pieces. Then the grid in the rectangle $$[0,2]\times[1,3]$$ is given by all points $$(i/2,1+j)$$ for all choices $$i=0,1,2,3,4$$ and $$j=0,1,2$$. If $$f(x,y)=\sin(\pi xy)$$, then
$\begin{split} \mtx(f) = \begin{bmatrix} \sin(\pi\cdot 0\cdot 1) & \sin(\pi\cdot0\cdot 2) & \sin(\pi\cdot0\cdot 3) \\[1mm] \sin\left(\pi\cdot\tfrac{1}{2} \cdot 1 \right) & \sin\left(\pi\cdot\tfrac{1}{2} \cdot 2 \right) & \sin\left(\pi\cdot\tfrac{1}{2} \cdot 3 \right) \\[1mm] \sin\left(\pi \cdot 1 \cdot 1 \right) & \sin\left(\pi \cdot 1 \cdot 2 \right) & \sin\left(\pi \cdot 1 \cdot 3 \right) \\[1mm] \sin\left(\pi\cdot \tfrac{3}{2} \cdot 1 \right) & \sin\left(\pi\cdot\tfrac{3}{2} \cdot 2 \right) & \sin\left(\pi\cdot\tfrac{3}{2} \cdot 3 \right) \\[1mm] \sin\left(\pi \cdot 2 \cdot 1 \right) & \sin\left(\pi \cdot 2 \cdot 2 \right) & \sin\left(\pi \cdot 2 \cdot 3 \right) \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}. \end{split}$
Demo 13.1.3
Here is the grid from Example 13.1.2.
m = 4; x = range(0,2,length=m+1);
n = 2; y = range(1,3,length=n+1);
For a given $$f(x,y)$$ we can find $$\operatorname{mtx}(f)$$ by using a comprehension syntax.
f = (x,y) -> cos(π*x*y-y)
F = [ f(x,y) for x in x, y in y ]
5×3 Matrix{Float64}:
0.540302 -0.416147 -0.989992
0.841471 0.416147 -0.14112
-0.540302 -0.416147 0.989992
-0.841471 0.416147 0.14112
0.540302 -0.416147 -0.989992
The plots of this section look better using a different graphics engine on the back end:
plotlyjs(); # use better 3D renderer
m = 60; x = range(0,2,length=m+1);
n = 50; y = range(1,3,length=n+1);
F = [ f(x,y) for x in x, y in y ];
plot(x,y,F',levels=10,fill=true,aspect_ratio=1,
color=:redsblues,clims=(-1,1),
xlabel="x",ylabel="y")
surface(x,y,F',l=0,leg=:none,
color=:redsblues,clims=(-1,1),
xlabel="x",ylabel="y",zlabel="f(x,y)")
## Parameterized surfaces
We are not limited to rectangles by tensor products. Many regions and surfaces may be parameterized by means of $$x(u,v)$$, $$y(u,v)$$, and $$z(u,v)$$, where $$u$$ and $$v$$ lie in a rectangle. Such “logically rectangular” surfaces include the unit disk,
(13.1.5)#\begin{split}\left\{ \begin{aligned} x &= u \cos v, \\ y &= u \sin v,\\ \end{aligned} \right. \qquad \qquad \left. \begin{aligned} 0 & \le u < 1, \\ 0 &\le v \le 2\pi, \end{aligned} \right.\end{split}
and the unit sphere,
(13.1.6)#\begin{split}\left\{ \begin{aligned} x &= \cos u \sin v,\\ y &= \sin u \sin v,\\ z &= \cos v, \end{aligned} \right. \qquad \qquad \left. \begin{aligned} 0 & \le u < 2\pi, \\ 0 &\le v \le \pi. \end{aligned} \right.\end{split}
Demo 13.1.4
For a function given in polar form, such as $$f(r,\theta)=1-r^4$$, construction of a function over the unit disk is straightforward using a grid in $$(r,\theta)$$ space.
r = range(0,1,length=41)
θ = range(0,2π,length=81)
F = [ 1-r^4 for r in r, θ in θ ]
surface(r,θ,F',legend=:none,l=0,color=:viridis,
xlabel="r",ylabel="θ",title="A polar function")
Of course, we are used to seeing such plots over the $$(x,y)$$ plane, not the $$(r,\theta)$$ plane. For this we create matrices for the coordinate functions $$x$$ and $$y$$.
X = [ r*cos(θ) for r in r, θ in θ ]
Y = [ r*sin(θ) for r in r, θ in θ ]
surface(X',Y',F',legend=:none,l=0,color=:viridis,
xlabel="x",ylabel="y",title="Function on the unit disk")
In such functions the values along the line $$r=0$$ must be identical, and the values on the line $$\theta=0$$ should be identical to those on $$\theta=2\pi$$. Otherwise the interpretation of the domain as the unit disk is nonsensical. If the function is defined in terms of $$x$$ and $$y$$, then those can be defined in terms of $$r$$ and $$\theta$$ using (13.1.5).
On the unit sphere, we can use color to indicate a function value. Here is a plot of the function $$f(x,y,z) = x y z^3$$. Since we need coordinate function matrices for the plot, we also use them to evaluate $$f$$ on the grid.
θ = range(0,2π,length=61)
ϕ = range(0,π,length=51)
X = [ cos(θ)*sin(ϕ) for θ in θ, ϕ in ϕ ]
Y = [ sin(θ)*sin(ϕ) for θ in θ, ϕ in ϕ ]
Z = [ cos(ϕ) for θ in θ, ϕ in ϕ ]
F = @. X*Y*Z^3
surface(X',Y',Z',fill_z=F',l=0,leg=:none,color=:viridis,
xlims=(-1.1,1.1),ylims=(-1.1,1.1),zlims=(-1.1,1.1),
xlabel="x",ylabel="y",zlabel="z",
title="Function on the unit sphere")
## Partial derivatives
In order to solve boundary-value problems in one dimension by collocation, we replaced an unknown function $$u(x)$$ by a vector of its values at selected nodes and discretized the derivatives in the equation using differentiation matrices. We use the same ideas in the 2D case: we represent a function by its values on a grid, and multiplication by differentiation matrices to construct discrete analogs of the partial derivatives $$\frac{\partial u}{\partial x}$$ and $$\frac{\partial u}{\partial y}$$.
Consider first $$\frac{\partial u}{\partial x}$$. In the definition of this partial derivative, the independent variable $$y$$ is held constant. Note that $$y$$ is constant within each column of $$\mathbf{U} = \mtx(u)$$. Thus, we may regard a single column $$\mathbf{u}_j$$ as a discretized function of $$x$$ and, as usual, left-multiply by a differentiation matrix $$\mathbf{D}_x$$ such as (10.3.2). We need to do this for each column of $$\mathbf{U}$$, which is accomplished by the matrix product $$\mathbf{D}_x \mathbf{U}$$. Altogether,
(13.1.7)#$\mtx\left( \frac{\partial u}{\partial x} \right) \approx \mathbf{D}_x \, \mtx(u).$
This relation is not an equality, because the left-hand side is a discretization of the exact partial derivative, while the right-hand side is a finite-difference approximation. Yet it is a natural analog for partial differentiation when we are given not $$u(x,y)$$ but only the grid value matrix $$\mathbf{U}$$.
Now we tackle $$\frac{\partial u}{\partial y}$$. Here the inactive coordinate $$x$$ is held fixed within each row of $$\mathbf{U}$$. However, if we transpose $$\mathbf{U}$$, then the roles of rows and columns are swapped, and now $$y$$ varies independently down each column. This is analogous to the situation for the $$x$$-derivative, so we left-multiply by a finite-difference matrix $$\mathbf{D}_y$$, and then transpose the entire result to restore the roles of $$x$$ and $$y$$ in the grid. Fortunately, linear algebra allows us to express the sequence transpose–left-multiply–transpose more compactly:
(13.1.8)#$\mtx\left( \frac{\partial u}{\partial y} \right) \approx \Bigl(\mathbf{D}_y \mathbf{U}^T\Bigr)^T = \mtx(u)\, \mathbf{D}_y^T.$
Keep in mind that the differentiation matrix $$\mathbf{D}_x$$ is based on the discretization $$x_0,\ldots,x_m$$, and as such it must be $$(m+1)\times (m+1)$$. On the other hand, $$\mathbf{D}_y$$ is based on $$y_0,\ldots,y_n$$ and is $$(n+1)\times (n+1)$$. This is exactly what is needed dimensionally to make the products in (13.1.7) and (13.1.8) consistent. More subtly, if the differentiation is based on equispaced grids in each variable, the value of $$h$$ in a formula such as (5.4.5) will be different for $$\mathbf{D}_x$$ and $$\mathbf{D}_y$$.
Demo 13.1.5
We define a function and, for reference, its two exact partial derivatives.
u = (x,y) -> sin(π*x*y-y);
∂u_∂x = (x,y) -> π*y*cos(π*x*y-y);
∂u_∂y = (x,y) -> (π*x-1)*cos(π*x*y-y);
We use an equispaced grid and second-order finite differences as implemented by diffmat2.
m = 80; x,Dx,_ = FNC.diffmat2(m,[0,2]);
n = 60; y,Dy,_ = FNC.diffmat2(n,[1,3]);
mtx = (f,x,y) -> [ f(x,y) for x in x, y in y ]
U = mtx(u,x,y)
∂xU = Dx*U
∂yU = U*Dy';
Now we compare the exact $$\frac{\partial u}{\partial y}$$ with its finite-difference approximation.
M = maximum(abs,∂yU) # find the range of the result
plot(layout=(1,2),aspect_ratio=1,clims=(-M,M),xlabel="x",ylabel="y")
contour!(x,y,mtx(∂u_∂y,x,y)',layout=(1,2),levels=12,
fill=true,color=:redsblues,title="∂u/∂y")
contour!(x,y,∂yU',subplot=2,levels=12,
fill=true,color=:redsblues,title="approximation")
To the eye there is little difference to be seen, though the results have no more than a few correct digits at these discretization sizes:
exact = mtx(∂u_∂y,x,y)
# Relative difference in Frobenius norm:
norm(exact-∂yU) / norm(exact)
0.0035544848411698023
## Exercises
1. ⌨ In each part, make side-by-side surface and contour plots of the given function over the given domain.
(a) $$f(x,y) = 2y + e^{x-y}$$, $$\quad[0,2]\times[-1,1]$$
(b) $$f(x,y) = \tanh[5(x+xy-y^3)]$$, $$\quad[-2,2]\times[-1,1]$$
(c) $$f(x,y) = \exp \bigl[-6(x^2+y^2-1)^2 \bigr]$$, $$\quad[-2,2]\times[-2,2]$$
2. ⌨ For each function in Exercise 1, make side-by-side surface plots of $$f_x$$ and $$f_y$$ using Chebyshev spectral differentiation.
3. ⌨ For each function in Exercise 1, make a contour plot of the mixed derivative $$f_{xy}$$ using Chebyshev spectral differentiation.
4. ⌨ In each case, make a plot of the function given in polar or Cartesian coordinates over the unit disk.
(a) $$f(r,\theta) = r^2 - 2r\cos \theta$$
(b) $$f(r,\theta) = e^{-10r^2}$$
(c) $$f(x,y) = xy - 2 \sin (x)$$
5. ⌨ Plot $$f(x,y,z)=x y - x z - y z$$ as a function on the unit sphere.
6. ⌨ Plot $$f(x,y,z)=x y - x z - y z$$ as a function on the cylinder $$r=1$$ for $$-1\le z \le 2$$. | 2022-09-25T05:39:49 | {
"domain": "tobydriscoll.net",
"url": "https://tobydriscoll.net/fnc-julia/twodim/tensorprod.html",
"openwebmath_score": 0.9795880913734436,
"openwebmath_perplexity": 574.4103973533868,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795079712153,
"lm_q2_score": 0.849971181358171,
"lm_q1q2_score": 0.8389041383666002
} |
http://math.stackexchange.com/questions/68024/is-this-time-complexity-example-correct?answertab=active | # Is this time complexity example correct?
This is probably not the best worded question but here goes.
I've been reading a text book trying to get my head around time complexity.
I understand most of it, but this example has thrown me. Am I missing something, or is the textbook simply wrong?
It has the following table:
$g(n)$, where $f(n) = O(g(n))$
• $g(n) = 5 \to f(n) = O(1)$
• $g(n) = 20n + 17 \to f(n) = O(n)$
• $g(n) = 40n^2 + 3n - 10\to f(n) = O(n^2)$
• $g(n) = 10n^3 + 26n^2 + 220 \to f(n) = O(n^3)$
I understand the first two cases: if $g(n)$ is 5, the time complexity is a constant, and if $g(n)$ is $20n + 17$, then the time complexity is $O(n)$, as constants are ignored.
What I'm not sure I understand is why the last two cases are equal to $O(n^2)$ and $O(n^3)$ respectively.
From my math understanding and ignoring constants it should be $O(n^3)$ and $O(n^5)$ respectively and not what was in the text book.
Some enlightenment would be great, I've searched all over for my answer. Thanks
-
The condition for $40n^2+3n-10$ being $O(n^2)$ is that there is some constant $K$ such that $40n^2+3n-10 < Kn^2$ if only $n$ is large enough.
To see that this is the case, rewrite it as $$40n^2+3n-10 = n^2\left(40+\frac{3}{n}-\frac{10}{n^2}\right)$$ The parenthesis on the right-hand-side goes towards 40 when $n\to\infty$, so you can use $K=41$ (or 40.0000001 or anything that is larger than 40).
The lesson to take home is that only the degree of a polynomial matters for its big-O classification.
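If it helps to see the constant $K$ in action, here is a small Python sketch (my addition) that checks the bound $40n^2+3n-10 < 41n^2$ and shows the ratio $g(n)/n^2$ approaching $40$:

```python
# g(n) = 40n^2 + 3n - 10 is O(n^2): the ratio g(n)/n^2 tends to 40,
# so K = 41 works as the constant in g(n) < K*n^2.
def g(n):
    return 40*n**2 + 3*n - 10

for n in (1, 10, 100, 1_000, 100_000):
    print(f"n = {n:>7}:  g(n)/n^2 = {g(n)/n**2:9.5f},   g(n) < 41*n^2?  {g(n) < 41*n**2}")
# The ratio approaches 40, and the bound with K = 41 holds for every n listed.
```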
-
That right there is a great answer, and you've jut helped me understand big O, finally, thanks a lot :D I appreciate it, I really do. – user6701 Sep 27 '11 at 21:06
Your conclusion is incorrect because you are multiplying the powers of $n$ modulo constants. The leading order term of the polynomial determines the complexity because it dominates the other terms in value for large $n$. Using your notation, the complexity of a polynomial $p(n)$ of degree $m$ is $O(n^m)$. | 2013-12-13T09:13:28 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/68024/is-this-time-complexity-example-correct?answertab=active",
"openwebmath_score": 0.8508208990097046,
"openwebmath_perplexity": 194.34734329972665,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795110351223,
"lm_q2_score": 0.8499711756575749,
"lm_q1q2_score": 0.8389041353444613
} |
https://math.stackexchange.com/questions/891054/finding-the-number-of-solutions-to-x2y4z-400 | # Finding the number of solutions to $x+2y+4z=400$
My question is how to find the easiest way to find the number of non-negative integer solutions to $$x+2y+4z=400$$ I know that I can use generating functions, and think of it as partitioning $400$ with $x$ $1$'s, $y$ $2$'s, and $z$ $4$'s. The overall generating function is: $$(1+x+x^2 + \cdots + x^{400})(1+x^2+x^4+\cdots + x^{400})(1+x^4+x^8+\cdots + x^{400})$$ And then from this I have to calculate the coefficient of $x^{400}$, which I don't know how to do. If there's an easier way to do it, I'd love to know.
## 2 Answers
My question is how to find the easiest way to find the number of non-negative integer solutions to $$x+2y+4z=400$$
I think the following way is easy (I'm not sure if it's the easiest, though).
Since $x+2y+4z=400$, $x$ has to be even. So, setting $x=2m$ gives you $$2m+2y+4z=400\Rightarrow m+y+2z=200.$$ Since $m+y$ has to be even, setting $m+y=2k$ gives you $$2k+2z=200\Rightarrow k+z=100.$$
There are $101$ pairs for $(k,z)$ such that $k+z=100$. For each $k$ such that $m+y=2k$, there are $2k+1$ pairs for $(m,y)$.
Hence, the answer is $$\sum_{k=0}^{100}(2k+1)=1+\sum_{k=1}^{100}(2k+1)=1+2\cdot \frac{100\cdot 101}{2}+100=10201.$$
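A brute-force count (my addition, a short Python sketch) agrees with this closed-form answer:

```python
# Count non-negative integer solutions of x + 2y + 4z = 400 by brute force.
count = sum(1
            for z in range(101)          # 4z <= 400
            for y in range(201)          # 2y <= 400
            if 400 - 2*y - 4*z >= 0)     # x = 400 - 2y - 4z is then determined
print(count)                             # 10201, matching the sum above
```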
• interesting solution. Could you elaborate on why the number of pairs for $(m, y)$ is $2k + 1$? – Vishwa Iyer Aug 8 '14 at 12:58
• (+1) This is a rather clever trick. I just used analytic combinatorics like a hammer, instead :) – Jack D'Aurizio Aug 8 '14 at 13:02
• @VishwaIyer: Sure! You can see $(m,y)=(0,2k),(1,2k-1),\cdots,(2k,0)$ – mathlove Aug 8 '14 at 13:02
• @JackD'Aurizio: Thanks! I like the hammer, though:) – mathlove Aug 8 '14 at 13:04
• @mathlove when you find that $k +z = 100$, why isn't there only 101 possible combinations? for every value of $k = 0$ to $k = 101$, there is only one value of $z$ to make this equality true, hence only 101 possibilities? – Varun Iyer Aug 8 '14 at 13:17
You were on the right track, but instead of truncating the factors, just consider the coefficient of $x^{400}$ in: $$(1+x+x^2+x^3+\ldots)(1+x^2+x^4+x^6+\ldots)(1+x^4+x^8+x^{12}+\ldots)=\frac{1}{(1-x)(1-x^2)(1-x^4)},\tag{1}$$ then write the RHS of $1$ as a sum of terms like $\frac{A}{(1-\xi x)^k}$, with $\xi\in\{1,-1,i,-i\}$, and exploit the identities: $$\frac{1}{1-\xi x}=\sum_{k=0}^{+\infty}(\xi x)^k,$$ $$\frac{1}{(1-\xi x)^2}=\sum_{k=0}^{+\infty}(k+1)(\xi x)^k,$$ $$\frac{1}{(1-\xi x)^3}=\sum_{k=0}^{+\infty}\frac{(k+2)(k+1)}{2}(\xi x)^k$$ to recover the final expression:
$$[x^n]\frac{1}{(1-x)(1-x^2)(1-x^4)}=\frac{2n^2+14n+21}{32}+\frac{(7+2n) (-1)^n}{32}+\frac{1}{8} \cos\left(\frac{n \pi }{2}\right)+\frac{1}{8} \sin\left(\frac{n \pi }{2}\right)$$
that if $8\mid n$ becomes:
$$[x^n]\frac{1}{(1-x)(1-x^2)(1-x^4)}=\frac{1}{16}(n+4)^2.$$
• I'm confused when you "recover the final expression", did you use the identities above and find the nth term of the sequence? And where did the $\sin$ and $\cos$ come from as well? – Vishwa Iyer Aug 8 '14 at 13:20
• Yes, I just used the identites derived from above. That combination of $\sin$ and $\cos$ just comes from $(2+2i)(-i)^n+(2-2i)i^n$. – Jack D'Aurizio Aug 8 '14 at 13:26
• Another question, you can avoid using epsilon completely and just set it equal to 1, right? And you got these identities through differentiating the first one, right? – Vishwa Iyer Aug 8 '14 at 13:50
• And would you mind showing the steps you made to arrive at the final expression? I'm trying the same process you did and I can't seem to derive the expression. – Vishwa Iyer Aug 8 '14 at 13:55 | 2019-08-23T02:39:40 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/891054/finding-the-number-of-solutions-to-x2y4z-400",
"openwebmath_score": 0.8383335471153259,
"openwebmath_perplexity": 217.24602562077828,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795079712153,
"lm_q2_score": 0.8499711775577735,
"lm_q1q2_score": 0.8389041346156857
} |
http://math.stackexchange.com/questions/503694/combinatorial-interpretation-of-an-alternating-binomial-sum | # Combinatorial interpretation of an alternating binomial sum
Let $n$ be a fixed natural number. I have reason to believe that $$\sum_{i=k}^n (-1)^{i-k} \binom{i}{k} \binom{n+1}{i+1}=1$$ for all $0\leq k \leq n.$ However I can not prove this. Any method to prove this will be appreciated but a combinatorial solution is greatly preferred. Thanks for your help.
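(A quick numerical sanity check, added here for convenience and not a substitute for a proof: the following Python sketch confirms the identity with exact integer binomials for all $0\le k\le n\le 29$.)

```python
# Check: sum_{i=k}^{n} (-1)^(i-k) * C(i,k) * C(n+1,i+1) == 1 for all 0 <= k <= n.
from math import comb

def S(n, k):
    return sum((-1)**(i - k) * comb(i, k) * comb(n + 1, i + 1) for i in range(k, n + 1))

assert all(S(n, k) == 1 for n in range(30) for k in range(n + 1))
print("identity verified for all 0 <= k <= n <= 29")
```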
-
Rewrite the identity with the index of summation changed from $i$ to $j$ where $j=i-k+1$: $$\sum_{j=1}^{n+1-k}(-1)^{j-1}\binom{n+1}{k+j}\binom{k+j-1}k=1.$$ Define a "good word" to be a word of length $n+1$ over the alphabet $\{A,B,C\}$ satisfying the conditions: there are exactly $k$ $C$'s, there is at least one $B$, and the first $B$ precedes all the $C$'s.
If $j$ is the number of $B$'s in a good word, then we must have $1\le j\le n+1-k$; moreover, the number of good words with exactly $j$ $B$'s is given by the expression $$\binom{n+1}{k+j}\binom{k+j-1}k.$$ The combinatorial meaning of the identity is that the number of good words with an odd number of $B$'s is one more than the number of good words with an even number of $B$'s. Here is a bijective proof of that fact.
Let $w$ be the word consisting of a single $B$ preceded by $n-k$ $A$'s and followed by $k$ $C$'s; this is a good word with an odd number of $B$'s. Let $W$ be the set of all good words different from $w$; we have to show that $W$ contains just as many words with an odd as with an even number of $B$'s. To see this, observe that the operation of switching the last non-$C$ letter in a word (from $A$ to $B$ or from $B$ to $A$) is an involution on $W$ which changes the parity of the number of $B$'s.
-
Thanks for your answer, it was exactly the type of thing I was looking for! – Craig Oct 7 '13 at 13:42
Could you provide some insight as to where the definition of good word comes from? – Pedro Tamaroff Aug 24 at 0:04
I haven't yet come up with a combinatorial proof, but a proof using induction and the binomial formula is straightforward enough.
We fix $k \geqslant 0$ and use induction on $n \geqslant k$. The base case $n = k$ is simply
$$\sum_{i=k}^k (-1)^{i-k}\binom{i}{k}\binom{k+1}{i+1} = (-1)^0 \binom{k}{k}\binom{k+1}{k+1} = 1.$$
For the induction step, we have
\begin{align} \sum_{i=k}^{n+1} (-1)^{i-k}\binom{i}{k}\binom{n+2}{i+1} &= \sum_{i=k}^{n+1} (-1)^{i-k}\binom{i}{k}\left\lbrace \binom{n+1}{i+1} + \binom{n+1}{i}\right\rbrace\\ &=\sum_{i=k}^{n+1}(-1)^{i-k}\binom{i}{k}\binom{n+1}{i+1} + \sum_{i=k}^{n+1}(-1)^{i-k}\binom{i}{k}\binom{n+1}{i}\\ &=\underbrace{\sum_{i=k}^{n}(-1)^{i-k}\binom{i}{k}\binom{n+1}{i+1}}_1 + \underbrace{\sum_{i=k}^{n+1}(-1)^{i-k}\binom{i}{k}\binom{n+1}{i}}_{m(k,n)} \end{align}
where in the first sum on the right the term for $i = n+1$ vanishes since $\binom{n+1}{n+1+1} = 0$ and the remainder is the sum for $n$, which is $1$ by the induction hypothesis.
It remains to see that $m(k,n) = 0$. But that is the coefficient of $x^k$ in
\begin{align} x^{n+1} &= \bigl(1 - (1-x)\bigr)^{n+1}\\ &= \sum_{i=0}^{n+1} (-1)^i\binom{n+1}{i}(1-x)^i\\ &= \sum_{i=0}^{n+1} \sum_{k=0}^i (-1)^{i+k}\binom{i}{k}\binom{n+1}{i}x^k\\ &= \sum_{k=0}^{n+1}\left(\sum_{i=k}^{n+1}(-1)^{i+k}\binom{i}{k}\binom{n+1}{i}\right)x^k, \end{align}
since $(-1)^{i+k} = (-1)^{i-k}$. We have $k \leqslant n < n+1$, hence the coefficient is $0$.
-
This is a combinatorial proof of $$\sum_{i=k}^n (-1)^{i-k} \binom{i}{k} \binom{n+1}{i+1}=1$$ It can be rearranged to $$\sum_{i=k+2t } \binom{i}{k} \binom{n+1}{i+1} = 1+ \sum_{i=k+1+2t} \binom{i}{k} \binom{n+1}{i+1}$$
I prefer to talk about choosing $i$ elements from a set with $n$ elements to choosing $i+1$ elements from a set with $n+1$ elements, so I substitute $i$ by $i-1$, $k$ by $k-1$ and $n$ by $n-1$ and get $$\sum_{i=k+2t } \binom{i-1}{k-1} \binom{n}{i} = 1+ \sum_{i=k+1+2t} \binom{i-1}{k-1} \binom{n}{i} \tag{1}$$
One well known interpretation of $\binom{n}{i}$ is as the number of subsets with $i$ elements of the set $\{1,2,\ldots,n \}$.
If $n=9$ then $\{2,3,4,6,8\}$ is a subset with $i=5$ elements of $\{1,2,3,4,5,6,7,8,9\}$. Note that in this notation of the subset we find $i-1=4$ commas (","). Let's select two of these commas and replace them by "} {". We get $\big\{\{2\}\;\{3,4\}\;\{6,8\}\big\}$ if we replace the first and the third comma. So $\binom{i-1}{k-1}$ can be interpreted as the number of ways a set with $i$ elements can be split into $k$ nonempty subsets $A_r$ such that for each pair $A$, $B$ of such subsets the following holds: $$(a \lt b, \;\; \forall a \in A, \forall b \in B) \;\;\text{or} \;\; (a \gt b, \;\; \forall a \in A, \forall b \in B)$$
The product $\binom{i-1}{k-1} \binom{n}{i}$ can be interpreted as the number of ways we can find $k$ subsets $A_j$ of $\{1,2,\ldots,n \}$ such that $$A_r \cap A_s = \emptyset, \forall 1 \le r \lt s \le k \tag{2a}$$ $$a_r \lt a_s, \forall a_r \in A_r, \forall a_s \in A_s, 1 \le r \lt s \le k \tag{2b}$$ $$\sum_{r=1}^{k}|A_r|=i \tag{2c}$$
We call the set of all $\{A_1,\ldots \}$ that satisfy $(2)$ as $\Omega_{n,k,i}$. We have already seen that $$|\Omega_{n,k,i}|=\binom{i-1}{k-1} \binom{n}{i} \tag{3}$$ Because of $(2c)$ $$\Omega_{n,k,i} \cap \Omega_{n,k,j} = \emptyset, \; \; \forall i \ne j \tag{4}$$
We define $$\Omega_{n,k}'' = \cup_{i=k+2t , i \le n,t \in \mathbb{N_0}} \Omega_{n,k,i}$$ and $$\Omega_{n,k}' = \cup_{i=k+1+2t , i \le n,t \in \mathbb{N_0}} \Omega_{n,k,i}$$ and $$\Omega_{n,k} = \cup_{i=k}^{n} \Omega_{n,k,i}= \Omega_{n,k}'' \cup \Omega_{n,k}'$$
It follows from $(4)$ and $(3)$ that $$|\Omega_{n,k}''| = \sum_{i=k+2t , i \le n,t \in \mathbb{N_0}} \binom{i-1}{k-1} \binom{n}{i}$$ an $$|\Omega_{n,k}'| = \sum_{i=k+1+2t , i \le n,t \in \mathbb{N_0}} \binom{i-1}{k-1} \binom{n}{i}$$
So to prove $(1)$ we have to show that there is a bijection $\phi$ from $\Omega_{n,k}'' \backslash \{\text{one element}\}$ to $\Omega_{n,k}'$. Let $\omega=\{A_1,\ldots, A_k\}$ be an element of $\Omega_{n,k}$.
• If $n \notin A_k$ we define $\phi(\{A_1,\ldots, A_{k-1}, A_k\})=\{A_1,\ldots, A_{k-1}, A_k \cup \{n\} \}$
• If $n \in A_k$ and $A_k \ne \{n\}$ we define $\phi(\{A_1,\ldots, A_{k-1}, A_k\})=\{A_1,\ldots, A_{k-1}, A_k \backslash \{n\} \}$
$\phi$ defined so far is a bijection from $\Omega_{n,k}'' \backslash \Theta_k$ to $\Omega_{n,k}' \backslash \Theta_k$. $\Theta_k$ is $\{A_1,\ldots, A_{k-1}, \{n\} \}$
But if $\omega \in \Theta_n$ there is a problem. $A_k \backslash \{n\}= \emptyset$ and $\{A_1,\ldots, A_{k-1}, \emptyset \}$ is not in $\Omega_{n,k}$. How can we extend $\phi$ to $\Theta_k$?
Recursively!
• If $n-1 \notin A_{k-1}$ we define $\phi(\{A_1,\ldots, A_{k-2}, A_{k-1}, \{n\} \})=\{A_1,\ldots, A_{k-2}, A_{k-1}\cup \{n-1\}, \{n\} \}$
• If $n-1 \in A_{k-1}$ and $A_{k-1} \ne \{n-1\}$ we define $\phi(\{A_1,\ldots, A_{k-2}, A_{k-1}, \{n\} \})=\{A_1,\ldots, A_{k-2} , A_{k-1} \backslash \{n-1\} , \{n\} \}$
Now we have extended $\phi$ to $\Theta_n \backslash \Theta_{n-1}$. This process can be continued. Finally we arrive at the following definition for $\phi$:
For $\{A_1,\ldots, A_r\}, \;A_j \ne \{n-j\}, \; A_{r-t}=\{n-t\}, t=0,\ldots,j-1$ we define
• $\phi(\{A_1,\ldots, A_r\})=\{A_1,\ldots, A_{j-1},A_j \cup \{n-j\},\{n-j+1\},\ldots,\{n\}\}$ if $\{n-j\} \notin A_j$
• $\phi(\{A_1,\ldots, A_r\})=\{A_1,\ldots, A_{j-1},A_j \backslash \{n-j\},\{n-j+1\},\ldots,\{n\}\}$ if $\{n-j\} \in A_j$
$\phi$ is not defined for $\{\{n-k+1\},\ldots,\{n\}\}$ but it is a bijection from $\Omega_{n,k}'' \backslash \{\{n-k+1\},\ldots,\{n\}\}$ to $\Omega_{n,k}'$. Therefore $(1)$ holds.
an example
For $n=5$, $k=3$ we get the following mapping $\phi$
$$\begin{array}{l|l} \hline{} \\ \omega & \phi(\omega) \\ \hline{} \\ \Omega_{5,3,3} \subset \Omega_{5,3}'' & \subset \Omega_{5,3}' \\ \hline{} \\ \{1\}\;\{2\}\;\{3\} & \{1\}\;\{2\}\;\{3,5\}\\ \{1\}\;\{2\}\;\{4\} & \{1\}\;\{2\}\;\{4,5\}\\ \{1\}\;\{2\}\;\{5\} & \{1\}\;\{2,4\}\;\{5\}\\ \{1\}\;\{3\}\;\{4\} & \{1\}\;\{3\}\;\{4,5\}\\ \{1\}\;\{3\}\;\{5\} & \{1\}\;\{3,4\}\;\{5\}\\ \{1\}\;\{4\}\;\{5\} & \{1,3\}\;\{4\}\;\{5\}\\ \{2\}\;\{3\}\;\{4\} & \{2\}\;\{3\}\;\{4,5\}\\ \{2\}\;\{3\}\;\{5\} & \{2\}\;\{3,4\}\;\{5\}\\ \{2\}\;\{4\}\;\{5\} & \{2,3\}\;\{4\}\;\{5\}\\ \{3\}\;\{4\}\;\{5\} & \text{no image} \\ \hline{} \\ \Omega_{5,3,4} \subset \Omega_{5,3}' & \subset \Omega_{5,3}'' \\ \hline{} \\ \{1\}\;\{2\}\;\{3,4\} & \{1\}\;\{2\}\;\{3,4,5\} \\ \{1\}\;\{2,3\}\;\{4\} & \{1\}\;\{2,3\}\;\{4,5\} \\ \{1,2\}\;\{3\}\;\{4\} & \{1,2\}\;\{3\}\;\{4,5\} \\ \{1\}\;\{2\}\;\{3,5\} & \{1\}\;\{2\}\;\{3\} \\ \{1\}\;\{2,3\}\;\{5\} & \{1\}\;\{2,3,4\}\;\{5\} \\ \{1,2\}\;\{3\}\;\{5\} & \{1,2\}\;\{3,4\}\;\{5\} \\ \{1\}\;\{2\}\;\{4,5\} & \{1\}\;\{2\}\;\{4\} \\ \{1\}\;\{2,4\}\;\{5\} & \{1\}\;\{2\}\;\{5\} \\ \{1,2\}\;\{4\}\;\{5\} & \{1,2,3\}\;\{4\}\;\{5\} \\ \{1\}\;\{3\}\;\{4,5\} & \{1\}\;\{3\}\;\{4\} \\ \{1\}\;\{3,4\}\;\{5\} & \{1\}\;\{3\}\;\{5\} \\ \{1,3\}\;\{4\}\;\{5\} & \{1\}\;\{4\}\;\{5\} \\ \{2\}\;\{3\}\;\{4,5\} & \{2\}\;\{3\}\;\{4\} \\ \{2\}\;\{3,4\}\;\{5\} & \{2\}\;\{3\}\;\{5\} \\ \{2,3\}\;\{4\}\;\{5\} & \{2\}\;\{4\}\;\{5\} \\ \hline{} \\ \Omega_{5,3,5} \subset \Omega_{5,3}'' & \subset \Omega_{5,3}' \\ \hline{} \\ \{1,2,3\}\;\{4\}\;\{5\} & \{1,2\}\;\{4\}\;\{5\} \\ \{1,2\}\;\{3,4\}\;\{5\} & \{1,2\}\;\{3\}\;\{5\} \\ \{1,2\}\;\{3\}\;\{4,5\} & \{1,2\}\;\{3\}\;\{4\} \\ \{1\}\;\{2,3,4\}\;\{5\} & \{1\}\;\{2,3\}\;\{5\} \\ \{1\}\;\{2,3\}\;\{4,5\} & \{1\}\;\{2,3\}\;\{4\} \\ \{1\}\;\{2\}\;\{3,4,5\} & \{1\}\;\{2\}\;\{3,4\} \\ \hline{} \end{array}$$
-
Nice proof. A small typo: If I understood it correctly, the first row in your table should be mapped to $\{1\}\{2\}\{3,5\}$ instead of $\{1\}\{2\}\{3,4\}$. – EuYu Oct 3 '13 at 12:52
Your are right, thank you, i will change this. – miracle173 Oct 3 '13 at 16:57
Here is another algebraic proof. Observe that when we multiply two exponential generating functions of the sequences $\{a_n\}$ and $\{b_n\}$ we get that $$A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!} \sum_{n\ge 0} b_n \frac{z^n}{n!} = \sum_{n\ge 0} \sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\ = \sum_{n\ge 0} \sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!} = \sum_{n\ge 0} \left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$ i.e. the product of the two generating functions is the generating function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$
The sum we are trying to evaluate is $$\sum_{k=j}^n (-1)^{k-j} {k\choose j} {n+1\choose k+1} = (n+1) \sum_{k=j}^n \frac{(-1)^{k-j}}{k+1} {k\choose j} {n\choose k}.$$ Now let $$A_1(z) = \sum_{k\ge 0} (-1)^{k-j} {k\choose j} \frac{z^k}{k!} = \frac{1}{j!} \sum_{k\ge j} (-1)^{k-j} \frac{z^k}{(k-j)!} \\= \frac{1}{j!} z^j \sum_{k\ge j} (-1)^{k-j} \frac{z^{k-j}}{(k-j)!} = \frac{1}{j!} z^j \exp(-z).$$ It then follows that $$A(z) = \sum_{k\ge 0} \frac{(-1)^k}{k+1} {k\choose j} \frac{z^k}{k!} = \frac{1}{z} \left(C + \int A_1(z) dz\right)$$ with $C$ a constant to be determined.
Now it is not difficult to show (consult the end of this post) that $$\int A_1(z) dz = -\exp(-z) \sum_{q=0}^j \frac{z^q}{q!}$$ and we must have $$C = -[z^0] \left(-\exp(-z) \sum_{q=0}^j \frac{z^q}{q!} \right)= 1$$ so that $$A(z) = \frac{1}{z} \left(1 -\exp(-z) \sum_{q=0}^j \frac{z^q}{q!}\right).$$ We have now determined $A(z)$ for the convolution of the two generating functions.
We take $$B(z) = \sum_{k\ge 0} \frac{z^k}{k!} = \exp(z).$$ It follows that $$A(z) B(z) = \frac{1}{z} \left(\exp(z) - \sum_{q=0}^j \frac{z^q}{q!}\right).$$ Now applying the coefficient extraction operator we get for $n\ge j$ that $$(n+1) n! [z^n] A(z) B(z) = (n+1)! [z^{n+1}] \left(\exp(z) - \sum_{q=0}^j \frac{z^q}{q!}\right).$$ None of the terms from the sum contribute because $n+1>j$ so that we are left with $$(n+1)! [z^{n+1}] \exp(z) = (n+1)! \frac{1}{(n+1)!} = 1.$$
Verification. $$\left(-\exp(-z) \sum_{q=0}^j \frac{z^q}{q!}\right)' = \exp(-z) \sum_{q=0}^j \frac{z^q}{q!} - \exp(-z) \sum_{q=0}^{j-1} \frac{z^q}{q!} = \exp(-z) \frac{z^j}{j!}.$$
-
Wolfram Alpha yields this result:
It's here !!!
It's too bad for Wolfram Alpha that ${\bf they\ don't\ say}$ that the right hand side is identical to $\color{#0000ff}{\large\mbox{ONE}\ = 1}$.
-
The assumptions that $k,n$ are positive integers goes a long way to simplify this. It's indeed just 1. – Alex R. Oct 5 '13 at 5:02
@AlexR. It's true. But if I got a result likes $1.35$, I don't write, for example $\displaystyle{\large{2.7 \over 2}\,{\sqrt{2\,}\,\sqrt{3\,} \over \sqrt{6\,}}}$. I write a plain $\large 1.35$. – Felix Marin Oct 7 '13 at 6:08
It does, if you use the FunctionExpand[...] command. – Lucian Oct 14 '13 at 17:34
Suppose we seek to verify that $$\sum_{q=k}^n (-1)^{q-k} {q\choose k} {n+1\choose q+1} = 1$$ where $n\ge k.$
We first treat the case when $k\gt 0$ and introduce $${q\choose k} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} (1+z)^q \; dz.$$
Observe that this is zero when $0\le q\lt k$ so that we may extend the limit in the sum to zero, getting $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \sum_{q=0}^n (-1)^{q-k} {n+1\choose q+1} (1+z)^q \; dz \\ = (-1)^{k+1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} \sum_{q=0}^n (-1)^{q+1} {n+1\choose q+1} (1+z)^{q+1} \; dz \\ = (-1)^{k+1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} \sum_{q=1}^{n+1} (-1)^{q} {n+1\choose q} (1+z)^{q} \; dz \\ = (-1)^{k+1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} (-1+(1-(1+z))^{n+1}) \; dz \\ = (-1)^{k+1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} (-1 + (-1)^{n+1} z^{n+1}) \; dz.$$
Now since $n\ge k$ this simplifies to $$(-1)^{k} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{k+1}} \frac{1}{1+z} \; dz = (-1)^k (-1)^k = 1.$$
The second case when $k=0$ yields $$\sum_{q=0}^n (-1)^{q} {n+1\choose q+1} = - \sum_{q=1}^{n+1} (-1)^{q} {n+1\choose q} = - ((1-1)^{n+1}-1) = 1.$$
- | 2015-08-28T02:21:10 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/503694/combinatorial-interpretation-of-an-alternating-binomial-sum",
"openwebmath_score": 0.9858556389808655,
"openwebmath_perplexity": 122.00356305063883,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795077797211,
"lm_q2_score": 0.8499711775577736,
"lm_q1q2_score": 0.8389041344529213
} |
https://mathematica.stackexchange.com/questions/63491/draw-function-with-different-colors-associated-to-a-parameter | # Draw function with different colors associated to a parameter
I want to generate a plot similar to this one, which allows for different colors when given different parameters : (This is the Moreau-Yosida regularization of the absolute value)
the Moreau-Yosida regularization is given by :
$$f_{\lambda}(x):= \inf_{u\in \mathbb{R}}\left\lbrace f(u)+ \dfrac{1}{2 \lambda} |x-u|^2 \right\rbrace$$
I want to show these functions in the same plot:
• Absolute value function
• the Moreau-Yosida regularization of the $0$-norm with different values of $\lambda$ to get this kind of color progression (aesthetically, I think a luminosity progression with a single color would look better than the rainbow colors) ($|x|_0 = 0$ if $x=0$ and $|x|_0 = 1$ otherwise)
• the Moreau-Yosida regularization of the Absolute value function with different values of $\lambda$ and the same criteria as above.
The following code is my first attempt to set up the visualization I want to get:
Manipulate[
Plot[{Abs[x],
1/(2 \[Lambda]) * (Abs[x]^2 - Max[Abs[x]^2 - 2 \[Lambda], 0]),
1/(2 b) * (Abs[x]^2 - Max[Abs[x] - b, 0]^2)}, {x, -2,
2}] , {\[Lambda], 1/1000, 1}, {b, 1/1000, 1}]
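For reference (my own remark, not part of the original question): the two expressions inside Manipulate are the standard closed forms of these Moreau envelopes. A short computation of the infimum gives

$$\bigl(|\cdot|_0\bigr)_\lambda(x) \;=\; \min\!\left(\frac{x^2}{2\lambda},\,1\right), \qquad \bigl(|\cdot|\bigr)_\lambda(x) \;=\; \begin{cases} \dfrac{x^2}{2\lambda}, & |x|\le \lambda,\\ |x|-\dfrac{\lambda}{2}, & |x|>\lambda, \end{cases}$$

which are exactly $\frac{1}{2\lambda}\bigl(|x|^2-\max(|x|^2-2\lambda,0)\bigr)$ and $\frac{1}{2b}\bigl(|x|^2-\max(|x|-b,0)^2\bigr)$ as coded above; the second one is the Huber function.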
• Thank you @dionys for editing, I hope my English get better someday. – Aymane Fihadi Oct 18 '14 at 20:10
EDIT
Thank you for comment from ybeltukov: Exclusions->None:
fun[b_, x_] := 1/(2 b)*(Abs[x]^2 - Max[Abs[x] - b, 0]^2)
Legended[ParametricPlot[{u, fun[a, u]}, {u, -2, 2}, {a, 0, 1},
ColorFunction -> {ColorData["Rainbow"][#4] &}, Exclusions -> None,
ImageSize -> 500], BarLegend["Rainbow"]]
• Exclusions -> None removes artifacts. It is also 2.5 times faster. – ybeltukov Oct 18 '14 at 13:01
• @ybeltukov thank you I have edited – ubpdqn Oct 18 '14 at 13:08
• Thank you very much @ubpdqn. Can you please explain the part {ColorData["Rainbow"][#4] &} of the code? (What is the function of the argument #4 and the & at the end?) ;) And how can we do the color gradient with one color, say from gray 10% to gray 100% (black)? – Aymane Fihadi Oct 18 '14 at 14:03
• @AymaneFihadi #4 just the slot for b in parameter space (x,b). I suggest looking at documentation of ColorFunction and ColorData and playing. – ubpdqn Oct 18 '14 at 21:46 | 2019-07-16T07:22:21 | {
"domain": "stackexchange.com",
"url": "https://mathematica.stackexchange.com/questions/63491/draw-function-with-different-colors-associated-to-a-parameter",
"openwebmath_score": 0.35336971282958984,
"openwebmath_perplexity": 2166.8605652172278,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9869795114181105,
"lm_q2_score": 0.8499711737573762,
"lm_q1q2_score": 0.838904133794533
} |
https://math.stackexchange.com/questions/2895527/evaluating-improper-double-integral-with-lebesgue | # Evaluating improper double integral with Lebesgue
Consider the improper double integral $$I_{\text{Riemann}} = \int_0^1 \int_0^{\sqrt x} \frac{2xy}{1-y^4} \; dy \; dx = \lim_{B \to 1^-} \int_0^B \int_0^{\sqrt x} \frac{2xy}{1-y^4} \; dy \; dx .$$ The standard "freshman calculus" solution goes by swapping the order of integration to get $\int_0^1 \int_{y^2}^1 \frac{2xy}{1-y^4} \; dx \; dy$; then the inner integral is $\frac{y}{1-y^4} \int_{y^2}^1 2x \; dx = \frac{y}{1-y^4} \left[ x^2 \right]^1_{y^2} = y$ and the answer is $\int_0^1 y \; dy = \frac12$. I'm trying to make sense of this rigorously, particularly the bit about swapping the order of integration (which seems to require some sort of Tonelli/Fubini result).
My idea is something like the following: define the double Lebesgue integral $$I_{\text{Lebesgue}} = \int_{x \in [0,1)} \int_{y \in [0,1)} \mathbf 1(x \ge y^2) \cdot \frac{2xy}{(1-y^4)} \; dy \; dx.$$ Then Tonelli's theorem lets us swap the order of summation to get $$I_{\text{Lebesgue}} = \int_{y \in [0,1)} \frac{y}{1-y^4} \int_{x \in [0,1)} \mathbf 1(x \ge y^2) \cdot 2x \; dx \; dy.$$ Thus the inner integral is the same as the Riemann one $\int_{y^2}^1 2x \; dx = 1 - y^4$, hence $$I_{\text{Lebesgue}} = \int_{y \in [0,1)} y \; dy = \frac12.$$ However, since the original Riemann integral was improper, I don't really know how to justify the first step (cue $x^{-1} \sin x$ example).
So I have the following three questions:
1. Is there some result/theorem that lets me quickly see that $I_{\text{Riemann}} = I_{\text{Lebesgue}}$? Bonus points for not using nonnegativity of $\frac{2xy}{1-y^4}$.
2. Is the calculation of $I_{\text{Lebesgue}}$ correct as written?
3. Are there other ways to justify the interchange of improper integrals? I'm fine appealing to Lebesgue since Lebesgue integrals are "better-behaved" anyways, but I'm wondering if I've missed something easier.
• $u=y^2$ substitution simplifies it. Make it immediately. – herb steinberg Aug 26 '18 at 22:02
• Isn’t nonnegativity the fastest way? Simply apply Tonelli’s theorem. – Szeto Aug 26 '18 at 22:42
You don't need to turn to Lebesgue integration to add rigor. The result can be obtained rigorously within the framework of the multi-dimensional Riemann integral.
The Riemann integral is naturally defined over bounded rectangles and extended to more general (rectifiable) regions with an indicator function. Before we even begin to consider Fubini's theorem to manipulate iterated integrals, we first need to define what the improper integral means. In this case, the restriction of $f(x,y) = 2xy/(1-y^4)$ is continuous and Riemann integrable over $S_b = \{(x,y): 0 \leqslant y \leqslant \sqrt{x} \leqslant b\}$ for any $b < 1$, with the integral given by
$$I_b = \int_{S_b}f = \int_{[0,b]^2}f \, \chi_{S_b}$$
There are some technicalities regarding how to precisely define the improper (or extended) Riemann integral for arbitrary regions, but it boils down in this case to $I = \lim_{b \to 1-} I_b$ when the limit exists.
At this point we apply Fubini's theorem specifically for Riemann integrals, as discussed for example in Analysis on Manifolds by Munkres or Calculus on Manifolds by Spivak. This states simply that if $f$ is integrable and the iterated Riemann integrals exist -- both of which are satisfied here -- we have
\begin{align}I_b = \int_0^b \left(\int_0^b \frac{2xy}{1 - y^4}\, \chi_{S_b} \, dy\right) \, dx &= \int_0^b \left(\int_0^b \frac{2xy}{1 - y^4}\, \chi_{S_b} \, dx\right) \, dy \\ &= \int_0^b \left(\int_{y^2}^b \frac{2xy}{1 - y^4}\, \, dx\right) \, dy \\ &= \int_0^b \frac{(b^2-y^4)y}{1 - y^4}\, dy \\ &= \int_0^1 \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \, dy \end{align}
Since, for all $y \in [0,1]$ we have
$$\left| \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \right| \leqslant y \leqslant 1$$
it follows by the bounded convergence theorem that
$$I = \lim_{b \to 1-}I_b = \int_0^1 \lim_{b \to 1-} \frac{(b^2-y^4)y}{1 - y^4} \chi_{y \leqslant b} \, dy = \int_0^1 y \, dy = \frac{1}{2}$$
Lebesgue and Riemann integrals coincide on bounded rectangles, so the demonstration could also be made in the context of Lebesgue integrals in an analogous fashion with the dominated convergence theorem. | 2020-07-09T08:41:03 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2895527/evaluating-improper-double-integral-with-lebesgue",
"openwebmath_score": 0.999667763710022,
"openwebmath_perplexity": 304.6254523816701,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9489172601537141,
"lm_q2_score": 0.8840392863287585,
"lm_q1q2_score": 0.8388801374513303
} |
https://eslo-info.org/blog/fw20c.php?tag=69846c-nfa-divisible-by-3 | What prevents dragons from destroying or ruling Middle-earth? So for mod 3, you need base 4 digits (pairs of bits). This answer is elaborated based on the question asked in GATE CSE Facebook Community for GATE aspirants. initial state (upper part of the figure) to the acceptance state, Adjective agreement-seems not to follow normal rules. Does "a point you choose" include any movable surface? So how do you compute if a binary number is divisible by 3? Why can't California Proposition 17 be passed via the legislative process and thus needs a ballot measure? Can a small family retire early with 1.2M + a part time job? I keep sharing my coding knowledge and my own experience on. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. Language L: {a^n| n is even or divisible by 3} 1 2 3 4. I am complete Python Nut, love Linux and vim as an editor. Asking for help, clarification, or responding to other answers. It only takes a minute to sign up. All Rights Reserved. This is a complicated question to construct DFA. Making statements based on opinion; back them up with references or personal experience. To make it simple, follow these two steps. Why does separation of variable gives the general solution to a PDE, Author has published a graph but won't share their results table. a string that starts with $1$ or is exactly $0$, for instance, $1000$ is valid, but $000$ isn't) while the bottom three determine whether or not it is divisible by $3$ using elementary number theory. In your case, the accepted language is the set of strings made of $0$ and $1$ which encode a well-formed multiple of $3$. Find minimal DFA: Remove all the unwanted stated from DFA. You can refer to the book Theory of Computation by Ullman to practice answering such kind of questions. Suppose if we read same string in same order but first placing numbers sequentially in power of 2 if we read 1 it be first bit and 0 2nd bit so we will read as 001 for above DFA we read the string oppositely ....so what is DFA for this by placing bits from left to right. What does it mean when people say "Physics break down"? For big endian, you instead add an additional start state that transitions to 0 and 1 on a single 0 or 1 bit. Any idea on how to reduce or merge them like ubuntu 16? So there are 8 states (4*2) which include 3 final states. By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. Stack Overflow for Teams is a private, secure spot for you and The usual way of making a DFA that computes something like this is to first make an NFA (since NFAs are must easier to compose vis union/intersection/etc), and then convert the NFA to a DFA. followed the path 1--state 1--0--state 2. Take the combinations of all the states from both DFAs. Psychology Today's Classical IQ test question - abstract line shapes. rev 2020.11.2.37934, The best answers are voted up and rise to the top, Computer Science Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, @HendrikJan your solution was shorter, I am trying to get a full grasp of your solution. 
So there are 8 states (4*2) which include 3 final states. In this case, there are not any unwanted states in DFA. Thanks for contributing an answer to Computer Science Stack Exchange! For the base 3 case, that is 3 states. Do doctors "get more money if somebody dies from Covid”? I have found in a book the example of how to make a FA that accepts those numbers that are divisible by 3, that means that n mod 3=0. From the upper part I I hold a Master of Computer Science from NIT Trichy. (Reference: How to prepare for GATE), Can you construct a Regular expression where some element is not divisible by 3 {a,b}. Combine both the above automata to construct DFA that accepts all the strings with the total number of ‘a’ is an odd & total number of b’s is not divisible by 3. Don't use images as main content of your post. Prove that the recursively defined sequence is Cauchy. You need a loop of n states that correspond to the value 0..n-1 and have transitions between them for adding bits. MathJax reference. My boss makes me using cracked software. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Given a binary number you can generate base 2k digits by simply taking them in groups. So how do you compute if a binary number is divisible by 3? One final subtlety is dealing with an odd number of bits. How can I find different areas of triangles from a list of points? Your start and accepting state is 0. a subset of the finite strings of a given finite alphabet. In the example the author used the binary representation of the number to be evaluated. If one does not believe those represent numbers then the solution you present is right. Unwanted states can be which are not reachable from starting state, a dead state which does not lead anywhere. How to do a simple calculation on VASP code? Because (A) I am just very curious, and (B) It is is customary to credit sources. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. How do we use sed to replace specific line with a string variable? How to construct DFA to accept a language L={Strings in which the total number of ‘a’ is an odd and total number of b’s is not divisible by 3}? If you'd rather make such strings not part of your language, then 0 is distinct from e. We will let it be indistinguishable. That is all the machine does. By the way, what book you are using? You can get mod 7 by using groups of 3 bits, and mod 15 with groups of 4. mod 15 can then be trivially converted to mod 5 and mod 3. To learn more, see our tips on writing great answers. How does modulus affect the regularity of language? If I want to apply again 2 mod 3. I will end up again with 2 and not reaching an acceptance state. Is it safe to mount the same partition to multiple VMs? This makes your question impossible to search and inaccessible to the visually impaired; finite automata that accepts integers divided by 3? If I try 5 mod 3 or 101 mod 11 that would be 2, which in binary is 10. Remove white line in painted multirow tabular. This is an NFA, so a 2-bit transition goes through an intermediate state that is otherwise unconnected. By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. Also, don’t have any common states to be combined. 
Another answer argues via distinguishability of prefixes. To make the discussion simpler, we define a "three-token" as follows: a three-token is a substring of the input which represents a number evenly divisible by three in binary notation. Strings are distinguishable if they are followed by different sets of strings to get strings in the target language, and we begin by examining strings of increasing length and asking whether they are distinguishable from strings we have already seen. The empty string e and the string 0 are indistinguishable: the string 0 can be followed by any string in L to get a string in L, so we might as well allow leading 0s and ignore them (if you'd rather make such strings not part of your language, then 0 is distinct from e; we will let it be indistinguishable). The initial state is the one containing e, also in our case. The string 1 is distinguishable, since not all strings in L can follow it and produce a string in L; indeed, a moment's reflection will show that no string in L can follow 1 and lead to another string in L. Call this <1>. We need not consider 00 and 01, since 0 was indistinguishable from e and so 00 and 01 are indistinguishable from 0 and 1, which we have already considered. The string 101 is, perhaps surprisingly, indistinguishable from the string 10: anything we can add to 10 to get a string in L leads to a string in L if added to 101 as well. We named three classes of distinguishable strings, and these account for all distinguishable prefixes of length no more than three. A conceptually simple way to do this depends on whether you can perform two transformations of FAs.

Finally, how does one test such an automaton? If I try to do 6 mod 3, that will result in 0, so I will go from the initial state (upper part of the figure) to the acceptance state -- that is because the modulus is 0 -- for instance by following a path described as 1 -> state 1 -> 0 -> state 2 in the figure. If I try 5 mod 3 or 101 mod 11, that would be 2, which in binary is 10; if I then apply 2 mod 3 again, I end up again with 2 and never reach an acceptance state. A commenter asked: I have tried a few examples to test this automaton, but I just do not have a clear picture of how to test this FA, for example with an input of 81 mod 3 -- does it make partial divisions? It does not: the machine simply follows one transition per input symbol, and that is all the machine does. How is it possible that a | 2021-02-25T19:39:16 | {
"domain": "eslo-info.org",
"url": "https://eslo-info.org/blog/fw20c.php?tag=69846c-nfa-divisible-by-3",
"openwebmath_score": 0.2016274482011795,
"openwebmath_perplexity": 645.9422291902057,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9489172659321807,
"lm_q2_score": 0.884039278690883,
"lm_q1q2_score": 0.8388801353120098
} |
https://math.stackexchange.com/questions/3033715/function-holomorphic-in-the-units-disk-with-different-bound | # Function holomorphic in the unit disk with different bounds
Suppose $$f$$ is continuous in the closed unit disk $$\bar{D}(0,1)$$ and holomorphic over its interior $$D(0,1)$$. Moreover suppose that for $$|z|=1$$ we have:
$$\Re(z)\leq0\Rightarrow |f(z)|\leq\ 1$$
$$\Re(z)>0\Rightarrow |f(z)|\leq 2$$
and I Need to prove $$|f(0)|\leq \sqrt{2}$$ I know from the maximum modulus principle we have that:
$$1\leq \max_{|z|=1}|f|=\max_{\bar{D}(0,1)}|f|\leq 2$$
but I can't really see where the square root comes from, so I cannot go any further.
• out of curiosity, is the bound tight? – AccidentalFourierTransform Dec 10 '18 at 18:11
• @AccidentalFourierTransform As far as the text of my exercises says no – Renato Faraone Dec 11 '18 at 10:04
First try. By Cauchy integral $$f(0) = \frac{1}{2\pi i} \int_{\lvert z\rvert = 1} \frac{f(z)}{z}\,dz =\frac{1}{2\pi} \int_0^{2\pi} f(e^{i\varphi})\,d\varphi.$$ Hence $$|f(0)|\leq \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\varphi})|\,d\varphi\leq\frac{2\pi+1\pi}{2\pi}=\frac{3}{2}.$$ But unfortunately $$\sqrt{2}<3/2$$.
Second try. Consider the function $$F(z)=f(z)f(−z)$$ which is continuous in the closed unit disk $$\bar{D}(0,1)$$ and holomorphic over its interior $$D(0,1)$$. Then, $$\text{Re}(z)\leq 0$$ iff $$\text{Re}(-z)\geq 0$$ and therefore, for $$|z|=1$$ we have that
$$|F(z)|\leq |f(z)||f(−z)|\leq 2\cdot 1.$$ Now apply the Cauchy integral to $$F$$: $$|f(0)|^2=|F(0)|\leq \frac{1}{2\pi} \int_0^{2\pi} |F(e^{i\varphi})|\,d\varphi\leq 2\implies |f(0)|\leq \sqrt{2}.$$
For a slightly different proof than the one RobertZ gave, note that $$\log\lvert f\rvert\colon\overline{D}\to\bar{\mathbb{R}}$$ is subharmonic (it is actually harmonic with poles), since it is $$\Re\log f$$ away from the zeros of $$f$$, and if $$f(z)=0$$ then $$\log\lvert f(z)\rvert=-\infty$$. Now the mean value property of harmonic function gives $$\log\lvert f(0)\rvert$$ is at most the average value of $$\log\lvert f\rvert$$ on the unit circle, and the latter is bounded by $$\frac12\log 2$$. | 2019-03-24T06:39:16 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3033715/function-holomorphic-in-the-units-disk-with-different-bound",
"openwebmath_score": 0.9350861310958862,
"openwebmath_perplexity": 113.7854250164868,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9559813526452772,
"lm_q2_score": 0.8774767842777551,
"lm_q1q2_score": 0.8388514431486764
} |
https://math.stackexchange.com/questions/829939/why-does-this-u-substitution-zero-out-my-integral | # Why does this $u$-substitution zero out my integral?
Here's how I understand $u$-substitution working for an integral. Essentially, it involves substitution of differential expressions, allowing you to cancel out terms of the integrand.
When we change the limits of integration, we essentially evaluate $u(x)$ to make sure the value stays the same.
$$\begin{gather*} \int_{x=0}^{x=2} \frac{x}{\sqrt{1 + x^2}} \, dx \\ \text{let } u = 1 + x^2 \text{ so } du = 2x \, dx \text{ and } dx = \tfrac{1}{2} \, du / x \\ \int_{u=1}^{u=5} \frac{\color{red}{x}}{\sqrt{u}} \, \left( \frac{1}{2\color{red}{x}} du \right) \\ \frac{1}{2} \int_{u=1}^{u=5} u^{-1/2} \, du \\ \frac{1}{2} \left( \left. 2\sqrt{u} \ \right|_{u=1}^{u=5} \right) \\ \sqrt{5} - 1. \end{gather*}$$
That's all well and good. But I can choose anything for my $u$-expression. What if I wanted to let $u = (x)(x - 2)$? Then the limits of integration are $$\int_{u=(0)(0-2)}^{u=(2)(2-2)} \implies \int_{u=0}^{u=0}$$ and so the whole integral becomes zero.
Clearly this is invalid. The correct result for the integral is indeed $\sqrt{5} - 1$. But what am I missing here? Why can't I set the integral up like this?
My thoughts:
• This is a definite integral, so there's no "$+ C$" constant of integration business going on. (Right?)
• Even if you end up having some $x$s in your expression because the $u$-substitution doesn't cancel them out, that doesn't matter because you're still integrating over an empty domain. (Right?) Plus, I can also get this to "work" with $\int_{y=-r}^{y=r} (r^2-y^2)^{-1/2}y^2\,dy$ with $u = r^2-y^2$, and that can be expressed as $\int_{u=0}^{u=0} u^{-1/2}\sqrt{r^2-u} \, du$, which is completely in terms of $u$. (I think?)
• Does it have something to do with the multiple solutions of quadratic equations?
• How do I know when I'm doing this by mistake? It seems like it could be pretty subtle.
• Unless special considerations are made, substitutions done in this manner must be injective. – Antonio Vargas Jun 11 '14 at 1:57
• And this is what you get from the "$\dfrac {\mathrm dy}{\mathrm dx}$ is a ratio" crap. – Git Gud Jun 11 '14 at 2:02
• @GitGud How does this relate to that? There isn't any canceling of differentials here, and that's the main problem I've seen arise…could you elaborate? – wchargin Jun 11 '14 at 2:09
• @WChargin It relates by virtue that the other way to teach/do this, is by using this. In the notation in the link, you're never gonna get a problematic $\phi$ like you get a problematic $u$ here. – Git Gud Jun 11 '14 at 2:11
• @m_t_ That question was asked three years after this question. It is the duplicate, not this one. – wchargin Oct 22 '17 at 18:44
You certainly have the right to make any change of variable you want. The problem with using $u=x(x-2)$ is that you have to solve for $x$ as a function of $u$:
$$u=x^2-2 x \implies x = 1 \pm \sqrt{1+u} \implies dx = \pm \frac{du}{2 \sqrt{1+u}}$$
Because $u(x)$ is a quadratic, $x(u)$ is multivalued with two branches. (This is why you were able to get zero in the lower and upper limits.) You would sub differently along each branch. Thus,
$$\int_0^2 dx \frac{x}{\sqrt{1+x^2}} = \frac12 \int_0^{-1} du \frac{1-(1+u)^{-1/2}}{\sqrt{3+u-2 \sqrt{1+u}}} + \frac12 \int_{-1}^0 du \frac{1+(1+u)^{-1/2}}{\sqrt{3+u+2 \sqrt{1+u}}}$$
..and take it from there.
• How are you getting $-1$ for the limit of integration? I know that $u(-1) = 2$, but shouldn't you evaluate $u(2) = 0$? (and should that be $\int_0^{\color{red}{1}}$ in the center integral?) – wchargin Jun 11 '14 at 2:09
• @WChargin: when $u=-1$ then $x=1$ for both branches. No, the integrals are correct (and have been verified in Mathematica). – Ron Gordon Jun 11 '14 at 2:12
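A quick numerical sanity check (not part of the original thread) that the value of the integral really is $\sqrt{5}-1\approx 1.23607$, no matter which substitution one attempts -- a simple midpoint rule in C (compile with the math library, e.g. `-lm`):

```c
#include <stdio.h>
#include <math.h>

/* Midpoint-rule approximation of the integral of x / sqrt(1 + x^2) from 0 to 2.
   The exact value is sqrt(5) - 1. */
int main(void)
{
    const int n = 1000000;                   /* number of subintervals */
    const double a = 0.0, b = 2.0;
    const double h = (b - a) / n;
    double sum = 0.0;

    for (int i = 0; i < n; i++) {
        double x = a + (i + 0.5) * h;        /* midpoint of subinterval i */
        sum += x / sqrt(1.0 + x * x) * h;    /* f(x) * width */
    }

    printf("numeric approximation: %.10f\n", sum);
    printf("sqrt(5) - 1          : %.10f\n", sqrt(5.0) - 1.0);
    return 0;
}
```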
The wrong output is due to the illegitimate choice of change of variable function $\phi(x)=x(x-2)$. There are different formulations of the substitution formula for integration, but usually injectivity of $\phi$ is a requirement.
Challenge Change the lower bound of integration from $0$ to $-1$. Now the function $\phi(x)=x^2+1$ is no longer injective over this interval. The challenge question is whether you can still use the substitution $u=x^2+1$ to find the right answer.
• But why is that an illegitimate choice of function? – apnorton Jun 11 '14 at 1:58
• @anorton Exactly! I think Antonio might be right that injection is a requirement—that sounds likely. – wchargin Jun 11 '14 at 1:58
• This one dimensional Jacobian is no 1-1 and onto. – IAmNoOne Jun 11 '14 at 2:01 | 2019-06-25T12:32:48 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/829939/why-does-this-u-substitution-zero-out-my-integral",
"openwebmath_score": 0.8306421637535095,
"openwebmath_perplexity": 383.65350583518193,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9559813526452772,
"lm_q2_score": 0.8774767762675405,
"lm_q1q2_score": 0.8388514354910607
} |
http://galery.serdangbedagaikab.go.id/psychosocial-implications-edrm/opposite-of-root-105e89 | "Opposite" is a term lacking an adequate mathematical definition; what is meant depends on context. In simple arithmetic, the opposite of a number usually means its additive inverse: the number with the opposite sign, which added to the original gives 0. So the opposite of 8 is -8, and the opposite of $\sqrt{3}$ is $-\sqrt{3}$. The reciprocal, by contrast, is the multiplicative inverse: 1 divided by the number, so the reciprocal of 8 is 1/8 and the reciprocal of $\sqrt{3}$ is $1/\sqrt{3}$, which is well defined. When you need to undo the basic operations -- addition, subtraction, multiplication, division -- remember how the additive inverse and the multiplicative inverse work.

The opposite of squaring a number is taking its square root, and the opposite of cubing is taking the cube root. Squaring a number means multiplying it by itself (for example $5^2 = 5\cdot 5 = 25$), and the square root is the inverse operation that recovers 5 from 25; likewise the square of 3 is 9 and the square root of 9 is 3, so you can think of the square root as the number that was used to make the square. Raising a base to a power and taking the logarithm to that base are also inverse operations: $y = 10^x$ means $y$ equals 10 raised to the power $x$. More generally, the inverse of a function is a function which reverses the "effect" of the original function; for square root and cube root functions one can find the inverse together with its domain and range, and when differentiating a composite such as $\sqrt{3x+2}$ the square root is treated as the outer function and whatever appears under the radical sign as the inner function.

The page also mixes in several unrelated senses of "root" and "opposite": in botany, the root is the part of a vascular plant that is normally underground, developing from the radicle and growing downward into the soil, whose primary functions are absorption of water and dissolved minerals, conduction of these to the stem, storage of reserve foods, and anchorage of the plant; in grammar, the word 'antonym' is built from the roots 'anti' (against, opposite) and 'onym' (name), the dictionary sense of "opposite" is "set over against something at the other end or side of an intervening line or space" (as a preposition, "in a position facing someone or something on the other side": Jake sat opposite Claire in the restaurant), and the opposite of a prefix is a suffix, both being affixes, i.e. bound elements added to a base or stem, as -ed added to want; in orthopedics, meniscal root tears are a distinct type of meniscal tear alongside degenerative, traumatic and bucket-handle tears; in mechanics, a counterweight is used to offset the weight of what you are trying to lift; in R, the function uniroot searches the interval from lower to upper for a root (i.e., zero) of the function f with respect to its first argument; and in Windows Server 2008 DNS administration, a code defect in Dnsmgmt.msc caused the "Use root hints if no forwarders are available" checkbox in the DNS Manager snap-in to produce the opposite of the selected behavior. | 2022-07-02T20:19:44 | {
"domain": "go.id",
"url": "http://galery.serdangbedagaikab.go.id/psychosocial-implications-edrm/opposite-of-root-105e89",
"openwebmath_score": 0.8294920325279236,
"openwebmath_perplexity": 1092.2703857913966,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_q1_score": 0.9559813501370535,
"lm_q2_score": 0.8774767762675405,
"lm_q1q2_score": 0.8388514332901527
} |
https://crypto.stackexchange.com/questions/89625/product-of-negligible-and-non-negligible-functions | # Product of Negligible and Non-Negligible Functions
I know that the product of two negligible functions will always be negligible, but I'm wondering if it's possible for the product of two non-negligible functions to be a negligible function?
## 1 Answer
I'm wondering if it's possible for the product of two non-negligible functions to be a negligible function?
Yes, actually; here is an example:
Consider the two functions:
$$P(x) = 1 \text{ if x is an even integer}, 0 \text{ otherwise}$$ $$Q(x) = 1 \text{ if x is an odd integer}, 0 \text{ otherwise}$$
Both $$P$$ and $$Q$$ are nonnegligible functions.
However $$P(x)Q(x) = 0$$, which is (trivially) a negligible function.
• Yes, that is the answer, that I missed. Thanks for correcting. Apr 27 '21 at 12:46 | 2022-01-17T22:55:32 | {
"domain": "stackexchange.com",
"url": "https://crypto.stackexchange.com/questions/89625/product-of-negligible-and-non-negligible-functions",
"openwebmath_score": 0.7402577996253967,
"openwebmath_perplexity": 545.3878930498626,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9559813476288301,
"lm_q2_score": 0.8774767778695834,
"lm_q1q2_score": 0.838851432620768
} |
http://blog.csdn.net/michael_jordaner/article/details/17154055 | # The Collatz Sequence
An algorithm given by Lothar Collatz produces sequences of integers, and is described as follows:
Step 1:Choose an arbitrary positive integer A as the first item in the sequence.
Step 2:If A = 1 then stop.
Step 3:If A is even, then replace A by A / 2 and go to step 2.
Step 4:If A is odd, then replace A by 3 * A + 1 and go to step 2.
It has been shown that this algorithm will always stop (in step 2) for initial values of A as large as 109, but some values of A encountered in the sequence may exceed the size of an integer on many computers. In this problem we want to determine the length of the sequence that includes all values produced until either the algorithm stops (in step 2), or a value larger than some specified limit would be produced (in step 4).
Input
The input for this problem consists of multiple test cases. For each case, the input contains a single line with two positive integers, the first giving the initial value of A (for step 1) and the second giving L, the limiting value for terms in the sequence. Neither of these, A or L, is larger than 2,147,483,647 (the largest value that can be stored in a 32-bit signed integer). The initial value of A is always less than L. A line that contains two negative integers follows the last case.
Output
For each input case display the case number (sequentially numbered starting with 1), a colon, the initial value for A, the limiting value L, and the number of terms computed.
Sample Input
3 100
34 100
75 250
27 2147483647
101 304
101 303
-1 -1
Sample Output
Case 1: A = 3, limit = 100, number of terms = 8
Case 2: A = 34, limit = 100, number of terms = 14
Case 3: A = 75, limit = 250, number of terms = 3
Case 4: A = 27, limit = 2147483647, number of terms = 112
Case 5: A = 101, limit = 304, number of terms = 26
Case 6: A = 101, limit = 303, number of terms = 1
#include <stdio.h>

int main()
{
    long long n, l, i, count, p = 0;
    while (scanf("%lld%lld", &n, &l) != EOF)
    {
        if (n == -1 && l == -1)          /* two negative integers end the input */
            return 0;
        p++;                             /* case number */
        i = n;                           /* remember the initial value of A */
        count = 0;                       /* number of terms produced so far */
        while (1)
        {
            if (n == 1)                  /* step 2: the algorithm stops; count this last term */
            { count++; break; }
            else if (n > l)              /* the term exceeds the limit L: stop without counting it */
                break;
            else if (!(n % 2))           /* step 3: A is even, replace A by A / 2 */
            { n = n / 2; count++; }
            else                         /* step 4: A is odd, replace A by 3 * A + 1 */
            { n = 3 * n + 1; count++; }
        }
        printf("Case %lld: A = %lld, limit = %lld, number of terms = %lld\n", p, i, l, count);
    }
    return 0;
}
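For reference (not part of the original post, and the file names here are arbitrary): the program can be compiled and fed the sample input with something like `gcc -O2 collatz.c -o collatz && ./collatz < input.txt`, which should reproduce the sample output listed above.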
| 2017-08-17T08:51:26 | {
"domain": "csdn.net",
"url": "http://blog.csdn.net/michael_jordaner/article/details/17154055",
"openwebmath_score": 0.5432415008544922,
"openwebmath_perplexity": 815.944882017954,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9715639694252316,
"lm_q2_score": 0.8633916205190225,
"lm_q1q2_score": 0.8388401899999448
} |
https://math.stackexchange.com/questions/580856/proof-of-convexity-of-fx-x2 | # Proof of convexity of $f(x)=x^2$
I know that a function is convex if the following inequality is true:
$$\lambda f(x_1) + (1-\lambda)f(x_2) \ge f(\lambda x_1 + (1-\lambda)x_2)$$
for $\lambda \in [0, 1]$ and $f(\cdot)$ is defined on positive real numbers.
If $f(x)=x^2$, I can write the following:
$$\lambda x_1^2 + (1-\lambda)x_2^2 \ge (\lambda x_1 + (1-\lambda)x_2)^2$$
$$0 \ge (\lambda ^2 - \lambda) (x_1^2 - x_2 ^ 2)$$
But I am not sure if this is true or not. How can I prove this?
• Hint: if a function is second-differentiable, and $f''(x)>0$ for all $x\in A$, then $f$ is convex in $A$. – Hayden Nov 25 '13 at 18:20
• @Hayden From a few books and online I saw that the derivative method is used much, but I try to figure out using the definition. Unfortunately, I still cannot do this for other functions such as $x^3$, $\log x$, $x \log x$ and $e^{-x}$. – groove Nov 26 '13 at 18:11
You made a mistake in your rearranging. The following are equivalent: $$\lambda x_1^2+(1-\lambda)x_2^2\ge\bigl(\lambda x_1+(1-\lambda)x_2\bigr)^2\\\lambda x_1^2+(1-\lambda)x_2^2\ge\lambda^2 x_1^2+2\lambda(1-\lambda)x_1x_2+(1-\lambda)^2x_2^2\\\lambda x_1^2+x_2^2-\lambda x_2^2\ge\lambda^2 x_1^2+2\lambda(1-\lambda)x_1x_2+x_2^2-2\lambda x_2^2+\lambda^2x_2^2\\0\ge(\lambda^2-\lambda)x_1^2+2\lambda(1-\lambda)x_1x_2+(\lambda^2-\lambda)x_2^2\\0\ge(\lambda^2-\lambda)x_1^2-2(\lambda^2-\lambda)x_1x_2+(\lambda^2-\lambda)x_2^2\\0\ge(\lambda^2-\lambda)(x_1-x_2)^2$$
The final inequality is true for all $\lambda$ if $x_1=x_2,$ and if $x_1\ne x_2,$ then the final inequality holds exactly when $\lambda\in[0,1].$
$$\lambda x_1^2 + (1-\lambda)x_2^2 \ge (\lambda x_1 + (1-\lambda)x_2)^2\iff$$
$$\lambda x_1^2+(1-\lambda)x_2^2\ge\lambda^2 x_1^2+2\lambda(1-\lambda)x_1x_2+(1-\lambda)^2x_2^2\iff$$
$$\lambda x_1^2(1-\lambda)+(1-\lambda)x_2^2(1-(1-\lambda))-2\lambda(1-\lambda)x_1x_2\ge 0\stackrel{\text{assuming}\;\lambda\neq1}\iff$$
$$\lambda x_1^2+\lambda x_2^2-2\lambda x_1x_2\ge 0\iff\lambda(x_1-x_2)^2\ge 0$$
and since the last inequality is obvious we're done (if $\;\lambda=1\;$ there's nothing to prove...)
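As a purely numerical complement (not from any of the answers), a small C spot-check of the corrected inequality over a grid of values; the step sizes and the tolerance constant are arbitrary choices:

```c
#include <stdio.h>

/* Spot-check the convexity inequality for f(x) = x^2:
   lambda*x1^2 + (1-lambda)*x2^2 >= (lambda*x1 + (1-lambda)*x2)^2
   over a grid of lambda in [0,1] and x1, x2 in [-5,5]. */
int main(void)
{
    int violations = 0;

    for (double lam = 0.0; lam <= 1.0; lam += 0.05) {
        for (double x1 = -5.0; x1 <= 5.0; x1 += 0.5) {
            for (double x2 = -5.0; x2 <= 5.0; x2 += 0.5) {
                double lhs = lam * x1 * x1 + (1.0 - lam) * x2 * x2;
                double mid = lam * x1 + (1.0 - lam) * x2;
                if (lhs < mid * mid - 1e-12)    /* tolerance for rounding error */
                    violations++;
            }
        }
    }

    printf("violations found: %d\n", violations);   /* expected: 0 */
    return 0;
}
```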
You made a mistake in your simplification, it should reduce to showing $(1-\lambda) \lambda (x_1-x_2)^2 \ge 0$.
Since $\lambda \in [0,1]$, we have $(1-\lambda) \lambda \ge 0$ from which the result follows. | 2020-10-21T07:58:08 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/580856/proof-of-convexity-of-fx-x2",
"openwebmath_score": 0.925473690032959,
"openwebmath_perplexity": 93.3807106263056,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9715639665434666,
"lm_q2_score": 0.8633916152464016,
"lm_q1q2_score": 0.8388401823891645
} |
https://math.stackexchange.com/questions/1031179/number-of-bit-strings-with-five-zeros | # Number of Bit Strings with Five Zeros
How many bit strings of length 10 contain either five consecutive 0's or five consecutive 1's?
I think the answer to this question is:
10!/(5!*5!), according to book-keepers rule. Since, there are 10! total permutations, with 5 zeros being indistinguishable and five ones being indistinguishable. But I'm confused about the keyword, "either". Please help, thanks. :)
• I think you are forgetting about the "consecutive" part. – André Nicolas Nov 20 '14 at 18:17
• Yeah. But if I do it the other way, by saying that there are really two choices for each value, then the answer comes out to be, 2^5 which is 32. And the way I did it, it comes out to be, 252. I think there is something really wrong with both of my approaches. Do you see it? – muqsitnawaz Nov 20 '14 at 18:21
NOTE: This answer assumes that the OP asks for the number of bit strings with EXACTLY 5 consecutive 0's or 1's. It does not work if you're looking for the number of bit strings with AT LEAST 5 consecutive 0's or 1's.
There are 10 bits, so there's a total of $2^{10}=1024$ possible cases.
Let's see the case of 5 consecutive 0's. Then, we need at least a 1 at each end of the run (if there wasn't, it would be 6 consecutive 0's). Let's see all possible arrangements ($x$ is a bit whose value doesn't matter, so it could be either a 1 or a 0): $$\begin{array}{rcl} 000001xxxx&&6\text{ numbers fixed, so }2^4\text{ cases}\\ 1000001xxx&&7\text{ numbers fixed, so }2^3\text{ cases}\\ x1000001xx&&7\text{ numbers fixed, so }2^3\text{ cases}\\ xx1000001x&&7\text{ numbers fixed, so }2^3\text{ cases}\\ xxx1000001&&7\text{ numbers fixed, so }2^3\text{ cases}\\ xxxx100000&&6\text{ numbers fixed, so }2^4\text{ cases} \end{array}$$
So there's $2\cdot 2^4 + 4\cdot2^3=2^5+2^5=2^6=64$ bit strings with 5 consecutive 0's.
The case of 5 consecutive 1's is exactly the same, so there's 64 bit strings with 5 consecutive 1's.
But, note we are counting twice the cases with 5 consecutive 1's and 5 consecutive 0's: $$0000011111\qquad 1111100000$$
So we need to substract 2 (2 possible bit strings) for a total of:
$$64+64-2=126\text{ bit strings with either 5 consecutive 0's or 1's}$$
• To get multidigit exponents, enclose them in braces, so 2^{10} gives $2^{10}$ instead of $2^10$ – Ross Millikan Nov 20 '14 at 18:31
• because $0000010000$, $0000010001$, $0000010010$... are all valid combinations (bit words with 5 consecutive zeros) – cjferes Nov 20 '14 at 18:32
• You are treating "bit strings" in the poster's question as "numbers". They are quite different things. – user_of_math Nov 20 '14 at 18:33
• The question is vague enough that 0000000000 applies; it has 5 consecutive 0s. It also has more than that. Your answer doesn't count for it. – Devon Parsons Nov 20 '14 at 18:33
• As I understand the question, the OP asks for exactly 5 consecutive 0's or 1's (so my answer). I will edit to include this assumption. – cjferes Nov 20 '14 at 18:34
The question, as is all too often the case, is ambiguous. Do we mean exactly $5$ consecutive $0$'s, or at least $5$? We take the at least interpretation.
How does one interpret "either $\dots$ or?" Does the string with $5$ $0$'s followed by $5$ $1$'s qualify? We will do violence to ordinary English (but not to mathematical English) by deciding that it does qualify.
We first count the strings with (at least) $5$ consecutive $0$'s.
The consecutive string could start at the leftmost bit. There are $2^5$ ways to complete.
It could start at the second bit. Then the first bit is $1$, and there are $2^4$ ways to complete.
It could start at the third bit. Then the second bit is $1$, the first bit is arbitrary, as are the last $3$, for a total of $2^4$.
Starting at the fourth, fifth, sixth each contribute another $2^4$.
The total is $112$.
We have the same number with consecutive $1$'s.
However, we have double-counted the $2$ strings that have $5$ consecutive $0$'s and $5$ consecutive $1$'s.
The total is therefore $222$. If we use the exclusive or interpretation of "either $\dots$ or" this shrinks to $220$.
• Interesting, we got the same answer. I did it via recursion. Do you think you could get a closed formula for the number of strings of length $n$? – Jorge Fernández Hidalgo Nov 20 '14 at 18:49
• I am not sanguine about closed form. The issue is with strings that have at least $5$ of each. Counting could be an ugly Inclusion/Exclusion. – André Nicolas Nov 20 '14 at 18:53
• There is a closed form, but it's quite a mess. The desired number is $2^n-2t_{n+3}$, where $t_i$ is the $i$-th tetranacci number, and the tetranacci numbers are given by a recurrence that can be solved in terms of the roots of a quartic equation. See my answer, and see a closed form for the tetranacci numbers (the roots of the quartic are not written explicitly) here: mathworld.wolfram.com/TetranacciNumber.html. – Steve Kass Nov 20 '14 at 19:48
Let's count the strings that don't have $5$ consecutive equal bits. Call such a string a "funny string".
To do this we shall create the function $f(n)$. This function counts how many "funny strings" of length $n$ there are.
It is clear $f(1)=2,f(2)=4,f(3)=8,f(4)=16,f(5)=30$
We shall now find a recurrence:
suppose we want to count the number of funny strings of length $n$. We called this number $f(n)$ . We can classify these strings into two types:
First type: Those strings in which the last bit is different to the second to last bit. There are $f(n-1)$ of this type since removing the last bit gives us a funny string of length $n-1$. And for any funny string of length $n-1$ we can create a funny string of length $n$ by adding a digit at the end that is different to the previous last digit.
Second type: These are the strings in which the last two digits are equal. The number of funny strings of this type is equal to the number of funny strings of length $n-1$ that do not end in four consecutive bits of the same type. How many of these are there?
It is easier once again to count how many funny strings of length $n-1$ do end in $4$ equal bits. There are $f(n-5)$ of these. Why? Suppose you are given a funny string of length $n-1$ that ends in $4$ equal bits. Then you can take away those $4$ last bits and you shall get a "funny string" of length $n-5$ where the last bit is different from the 4 bits you just removed. Conversely given a "funny string" of length $n-5$ if you add 4 digits different from the last one at the end you shall get a funny string of length $n-1$ that ends in $4$ equal bits.
So there are $f(n-5)$ funny strings of length $n-1$ that end in $4$ equal digits, and therefore there are $f(n-1)-f(n-5)$ funny strings of length $n$ of the second type.
Therefore we get $f(n)=2f(n-1)-f(n-5)$
following up our previous table and using the recursion we get:
$f(1)=2,f(2)=4,f(3)=8,f(4)=16,f(5)=30,f(6)=58,f(7)=112,f(8)=216,f(9)=416,f(10)=802$
So there are $802$ strings of length $10$ that don't have five consecutive equal bits. Thus there are $1024-802=222$ that do.
• This answers how many strings have $5$ consecutive equal bits. In other words the number of strings in which there exists $5$ strings that are equal. – Jorge Fernández Hidalgo Nov 20 '14 at 18:47
• Nice recurrence calculation. – André Nicolas Nov 20 '14 at 18:55
Suppose you have a bit string of zeros and ones, like $1110010011111101001$. This can be described (up to complementation, or flipping every bit in the string) by the lengths of the runs of $1$s and $0$s. In other words, $1110010011111101001$ or its complement, $0001101100000010110$, can be described be the sequence of run lengths $3:2:1:2:6:1:1:2:1$. If the string has $n$ bits, these run lengths add up to $n$. Such a decomposition of $n$ into an ordered sum of positive integers is called a composition of $n$. The compositions of $n$ are in one-to-two correspondence with the length $n$ bit strings. (There are two choices for the first bit, and then the composition describes the string.)
A bit string with a run of at least $5$ of the same bit will have an associated composition that contains a number that is greater than or equal to $5$.
Therefore, the number of length $n$ bit strings with no run of $5$ zeros or ones is two times the number of compositions of $n$ using parts from $1$ to $4$.
The number of compositions of $n$ using parts from $1$ to $4$ is the $n+3$-rd "tetranacci" number. See https://oeis.org/A000078.
For $n=10$, the $n+3$-rd tetranacci number is $401$, so there are $802$ length-$10$ bit strings that do not contain a run of length at least $5$. There are $1024$ strings of length 10 in all, leaving $222$ that do contain a run of length $5$. | 2019-08-18T01:23:19 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1031179/number-of-bit-strings-with-five-zeros",
"openwebmath_score": 0.7183704376220703,
"openwebmath_perplexity": 211.61250268095472,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9715639694252315,
"lm_q2_score": 0.8633916082162403,
"lm_q1q2_score": 0.8388401780470047
} |
http://math.stackexchange.com/questions/215226/what-is-the-difference-between-limsups-n-liminfs-n-and-lims | What is the difference between $\limsup{S_{n}}$, $\liminf{S_{n}}$, and $\lim{S_{n}}$ [duplicate]
Possible Duplicate:
Limit Supremum and Infimum. Struggling the concept
Hey I'm trying to figure out what $\limsup{S_{n}}$ is compared to $\lim{S_{n}}$ as well as the difference of $\lim{S_{n}}$ and $\liminf{S_{n}}$
So for example (this is my current thinking process), suppose I have a monotone non-increasing sequence $S_{n}:=1/n$ (where $n$ starts at $1$ and goes to infinity). The $\limsup{S_{n}}$ is 1, and $\liminf{S_{n}}$ is 0. But we know the $\lim{S_{n}}$ is 0.
How does $\lim{S_{n}}=\liminf{S_{n}}=\limsup{S_{n}}?$
marked as duplicate by Ross Millikan, Brian M. Scott, Norbert, Noah Snyder, Marvis Oct 17 '12 at 18:06
See my answer from a previous question: math.stackexchange.com/questions/205223/… – Christopher A. Wong Oct 16 '12 at 23:40
One definition of $\limsup s_n$ is $$\limsup s_n = \lim_{n \to \infty} \sup_{k \geq n} s_k$$ The corresponding definition of $\liminf s_n$ is $$\liminf s_n = \lim_{n \to \infty} \inf_{k \geq n} s_k$$ In your case, where $s_n = \dfrac1n$, we have $$\sup_{k \geq n} s_k = \sup_{k \geq n} \dfrac1k = \dfrac1n$$ Similarly, for $\liminf$. Hence, $$\limsup s_n = \lim_{n \to \infty} \sup_{k \geq n} s_k = \lim_{n \to \infty} \dfrac1n = 0$$
In general, if $\displaystyle \lim_{n \to \infty} s_n$ exists, then $$\limsup s_n = \lim s_n = \liminf s_n$$
Another way to define $\limsup$ and $\liminf$ is to look at the limit points of the sequence $s_n$ i.e. if $$S = \{\text{Limit points of the sequence }s_n\}$$ then $$\limsup s_n = \displaystyle \sup_{s \in S} S$$ and $$\liminf s_n = \displaystyle \inf_{s \in S} S$$ If $s_n = \dfrac1n$, then $S = \{0 \}$. Hence, $$\limsup s_n = 0 = \liminf s_n$$
$\limsup S_n$ is the largest cluster point of the sequence $S_n$ if the sequence is bounded above. $\liminf S_n$ is the smallest cluster point if the sequence is bounded below.
If a sequence converges to some $x$, then every subsequence converges to $x$. This is (most simply) how $\lim S_n = \limsup S_n$. The sequence $1/n$ converges to $0$, and its only cluster point is $0$. Thus $\lim S_n = \liminf S_n = \limsup S_n = 0$.
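To complement the convergent example with one where the ordinary limit does not exist (this illustration is not part of the original answers), take $s_n = (-1)^n\,(1+1/n)$: the tail suprema decrease to $1$ and the tail infima increase to $-1$, so $\limsup s_n = 1$ and $\liminf s_n = -1$ while $\lim s_n$ does not exist. A small C approximation of the truncated tail sup/inf:

```c
#include <stdio.h>

/* Approximate the tail supremum and infimum of s_k = (-1)^k * (1 + 1/k),
   truncating the tail at k = N.  As n grows, sup_{k>=n} s_k decreases to 1
   and inf_{k>=n} s_k increases to -1, so limsup = 1, liminf = -1, and the
   ordinary limit does not exist. */
int main(void)
{
    const int N = 100000;                     /* truncation point (approximation only) */

    for (int n = 1; n <= 10000; n *= 10) {
        double sup = -1e300, inf = 1e300;
        for (int k = n; k <= N; k++) {
            double s = ((k % 2 == 0) ? 1.0 : -1.0) * (1.0 + 1.0 / k);
            if (s > sup) sup = s;
            if (s < inf) inf = s;
        }
        printf("n = %5d   sup of tail ~ %9.6f   inf of tail ~ %9.6f\n", n, sup, inf);
    }
    return 0;
}
```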
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/215226/what-is-the-difference-between-limsups-n-liminfs-n-and-lims",
"openwebmath_score": 0.9779225587844849,
"openwebmath_perplexity": 211.9600954752078,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9715639677785088,
"lm_q2_score": 0.8633916029436189,
"lm_q1q2_score": 0.8388401715025493
} |
https://mathoverflow.net/questions/234494/is-there-a-short-expression-for-height-and-width-of-product-and-coproduct-of-pos | # Is there a short expression for height and width of product and coproduct of posets?
I am trying to derive some basic relations for the height and width of the direct product and the coproduct of posets. I feel that these are very basic and should be written somewhere, however, I cannot find a reference.
Short question is: is there a short expression for the following quantities, representing height and width of product and coproduct of posets? And do they hold also in the case of infinite cardinality?
Edit: current status (with help from Harry Altman, David Spivak) of this question:
[resolved] $\color{green}{ w(P\coprod Q) = w(P)+w(Q) }$
[resolved] $\color{green}{ h(P\coprod Q) = \max\{h(P), h(Q)\}}$
[resolved] Assuming $P$ and $Q$ not empty, then $\color{green} {h(P \times Q) = h(P)+h(Q)−1 }$. (For empty posets, then $h(P \times Q) = 0 \neq h(P)+h(Q)-1$.)
[resolved] From a theorem in Berzukov, Roberts, "On antichains in product posets", it follows that the width can be bounded as follows, with both bounds attainable:
$\color{green}{ w(P)w(Q)\leq w(P\times Q) \leq \min\{|P|\ w(Q), |Q|\ w(P)\}}$
Original question below.
Preliminaries Define:
• $C_n$ to be a chain of size $n$. For example take $C_n = \langle\{1, \dots, n\}, \leq\rangle$.
• $A_n$ to be an antichain of size $n$, that is, a set with $n$ incomparable elements.
• $P \times Q$ the direct product of two posets.
• $G_{m,n}$ is a grid; for example $G_{m,n} = C_n \times C_m$.
The height and width of a poset are defined as:
• the height $h(P)$ is the cardinality of the longest chain in $P$.
• the width $w(P)$ is the cardinality of the longest antichain in $P$.
Some simple examples:
Width of a chain: $w(C_n) = 1$.
Height of a chain: $h(C_n) = n$.
Width of an antichain: $w(A_n) = n$.
Height of an antichain: $h(A_n) = 1$.
Width of an $m\times n$ grid: $w(G_{m,n}) = \min\{m,n\}$
Height of an $m\times n$ grid: $h(G_{m,n}) = m + n -1$
Questions
Is there a simple expression for the height and width of a product and a coproduct of a poset?
This is what I got so far.
For a co-product:
The height must be the maximum of the two heights, because chains belonging to different factors are incomparable:
$h( P \coprod Q) = \max\{ h(P), h(Q) \}$
For the width, the widths of the factors sum together:
$w( P \coprod Q) = w(P) + w(Q)$
This is because I can take an antichain $S_1$ in $P$ and one antichain $S_2$ in Q, and then $S_1\cup S_2$ is an antichain in $P \coprod Q$.
For a product, I am not sure.
For the height of a product I can certainly say that
$h(P\times Q) \geq h(P) + h(Q) - 1$
because I can construct a chain of that size. If $C=\{1,2,\dots,h(P)\}$ is the longest chain in $P$ and $D = \{a,b,\dots\}$ the longest chain in $Q$ then I can construct the chain $E = \{(1,a), (2,a), \dots, (h(P), a), (h(P), b), \dots\}$ that has height $h(P) + h(Q) - 1$.
I am also not sure if any of the above fails for posets of infinite cardinality.
• You made a typo you might want to fix in your transcription of that theorem of Berzukov, you have $|Q|w(Q)$ instead of $|Q|w(P)$. Thanks! – Harry Altman Mar 27 '16 at 17:05
Sticking first to finite sets, for the question of $h(P\times Q)$, one does in general have $h(P \times Q)=h(P)+h(Q)-1$. You've already proven the lower bound. For the upper bound, take a chain $(a_1,b_1),\ldots,(a_n,b_n)$ in $P\times Q$; let's assume this is written in increasing order. (Note this is strictly increasing.) Then each time we go from $(a_i,b_i)$ to $(a_{i+1},b_{i+1})$, at least one of the coordinates must increase (the other is allowed to stay the same). But the first coordinate can only increase $h(P)-1$ times, and the second only $h(Q)-1$ times. So the total number of elements in the chain is at most $(h(P)-1)+(h(Q)-1)+1=h(P)+h(Q)-1$.
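As a quick sanity check of this formula in the simplest case (not part of the original answer), the following C program computes the longest chain in a grid $C_m \times C_n$ by dynamic programming; the grid sizes are arbitrary:

```c
#include <stdio.h>

#define M 4
#define N 6

/* Longest chain in the grid poset C_M x C_N, where (a,b) <= (c,d) iff a <= c and b <= d.
   len[i][j] = size of the longest chain whose top element is (i, j); the elements
   directly below (i,j) are (i-1,j) and (i,j-1), and they dominate all smaller ones. */
int main(void)
{
    int len[M][N];
    int best = 0;

    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) {
            len[i][j] = 1;                                        /* the chain {(i,j)} */
            if (i > 0 && len[i - 1][j] + 1 > len[i][j]) len[i][j] = len[i - 1][j] + 1;
            if (j > 0 && len[i][j - 1] + 1 > len[i][j]) len[i][j] = len[i][j - 1] + 1;
            if (len[i][j] > best) best = len[i][j];
        }
    }

    printf("longest chain in C_%d x C_%d has %d elements; m + n - 1 = %d\n",
           M, N, best, M + N - 1);
    return 0;
}
```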
If you want to generalize with infinite posets, you should make sure you know exactly what definitions you want to work with -- is it really cardinality that you want to look at? I suppose for $w(P)$ you'd have to just use cardinality, as I don't think there's really any other good way to measure the "size" of an antichain in a general poset. But for height you can possibly do more. For instance, if you are working with well-founded partial orders, you might want to look at the largest embedded ordinal to get more information.
Note also that in this case your maxima might have to be replaced by suprema; for instance, consider the disjoint union of a chain of length $k$ for every $k$. This has no longest chain!
For well-founded partial orders, the supremum of all embedded ordinals is (I'm pretty sure?) the same as what's generally known as the height of the order in that context. There is actually a similar formula for $\ell(P\times Q)$ in that case but I will come back and edit in later if nobody else has already stated it, I am typing this in a bit of a hurry, sorry.
Edit: Let me also add briefly -- your grid example already shows that there can be no formula for $w(P\times Q)$ in terms of $w(P)$ and $w(Q)$, since there you have $w(P)=w(Q)=1$ but $w(P\times Q)$ arbitrarily large. This can be extended to the infinite realm as well; if you have some given cardinal, take a totally ordered set $X$ of that cardinality, and consider $X\times X'$, where $X'$ is the same set as $X$ but with the reverse order. Then the "diagonal" is an antichain. But maybe you were intending to allow for other quantities in the formula?
(Also, obviously all the coproduct stuff will work for anything infinite, and should continue to unless you are using some very strange definitions.)
Edit: More on the width -- here's an example that shows that $w(P)$, $w(Q)$, $h(P)$, and $h(Q)$ are not enough to determine $w(P\times Q)$. Say $P$ is a poset on three elements, with two elements forming an antichain and the third on top; and say $Q$ is the reverse. Then $w(P)=w(Q)=h(P)=h(Q)=2$, but (if I've done this correctly) $w(P\times P)=4$, while $w(P\times Q)=5$.
As for if one only wants a bound -- well, for a finite set $P$, Dilworth's theorem implies that $|P|\le h(P)w(P)$, and certainly $w(P)\le P$, so one thereby gets the trivial bound $w(P\times Q)\le w(P)w(Q)h(P)h(Q)$. But I rather doubt that's what you wanted... (there is of course also the easy lower bound $w(P\times Q)\ge w(P)w(Q)$).
Also, to handle $h(P\times Q)$ in the infinite case, if we use the definitions you've give above where we just care about cardinality -- if either $h(P)$ or $h(Q)$ is infinite (and neither is zero, see below), then $h(P\times Q)=\max\{h(P),h(Q)\}$. Certainly it is at least this; and it's easy to see that $h(P\times Q)\le h(P)h(Q)$ (since its projection onto either coordinate is a chain). But the product of two nonzero cardinals, at least one of which is infinite, is simply their maximum, answering the question. Of course, you could ask for a solution that works without axiom of choice, and that I do not have at the moment!
Actually, if we want to nitpick, the formula for $h(P\times Q)$, here and in the finite case, has an exception -- if either $h(P)$ or $h(Q)$ is zero, then of course so is $h(P\times Q)$.
Edit: OK, one last edit -- about the well-founded case I mentioned above: We can define the height of an element of a well-founded partial order, it's the least ordinal greater than the heights of all elements less than it; the height of the partial order is then the least ordinal greater than the heights of all the elements. The height is usally denoted $\ell$ in this context so that's what I'll do. Then if you have two WFPOs $X$ and $Y$, and you have $(x,y)\in X\times Y$, then $\ell(x,y)=\ell(x)\oplus\ell(y)$, where $\oplus$ is natural addition.
This means that $\ell(X\times Y)$ is the smallest ordinal greater than any $\alpha\oplus\beta$ for any $\alpha<\ell(X)$ and $\beta<\ell(Y)$. How can we compute this? I'll use the "order" of an ordinal to mean the smallest term that appears in its Cantor normal form. Take the natural sum of $\ell(X)$ and $\ell(Y)$. If $\ell(X)$ has the higher order, drop everything below the lowest term in $\ell(X)$. If $\ell(Y)$ has the higher order, same but with $\ell(Y)$. If the orders are equal, just drop one copy of the lowest term.
So you can see that this genralizes the $h(P\times Q)=h(P)+ h(Q)-1$ that occurs when $P$ and $Q$ are finite and all the terms are $1$.
(And of course again if either of them is zero you get zero.)
• Hi Harry, Thanks for the answer! Your comment regarding $w(P\times Q)$ not being expressible only in terms of $w(P)$ and $w(Q)$ is great. At this point, the question becomes whether $w(P\times Q)$ can be expressed (or at least bounded) as a function of $w(P),w(Q),h(P),h(Q)$. – Andrea Censi Mar 25 '16 at 15:18
• I'm working through the edge cases. A colleague pointed out that $h(P\times Q)=h(P)+h(Q)−1$ fails for empty $P$ or empty $Q$. – Andrea Censi Mar 25 '16 at 21:09
• another related question: width of a product of chains – Andrea Censi Mar 25 '16 at 23:17
• another related paper: Jerrold R. Griggs - Maximum antichains in the product of chains – Andrea Censi Mar 25 '16 at 23:26
• I think I found the final answer for the width. I added the reference in the edit above. – Andrea Censi Mar 26 '16 at 15:38 | 2019-09-15T20:56:25 | {
"domain": "mathoverflow.net",
"url": "https://mathoverflow.net/questions/234494/is-there-a-short-expression-for-height-and-width-of-product-and-coproduct-of-pos",
"openwebmath_score": 0.9195371866226196,
"openwebmath_perplexity": 193.54158055788028,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9715639669551474,
"lm_q2_score": 0.8633916029436189,
"lm_q1q2_score": 0.8388401707916658
} |
http://symplio.com/pure-vanilla-mjfyc/37d1b0-skewness%2C-kurtosis-test-r | The moments library also offers the jarque.test() function, which performs a goodness-of-fit test that determines whether or not sample data have skewness and kurtosis matching a normal distribution. To calculate skewness and kurtosis in the R language, the moments package is required; for normally distributed data, skewness should be approximately equal to zero and kurtosis approximately equal to 3. (Observation: the Excel functions SKEW(R) and SKEW.P(R) ignore any empty cells or cells with non-numeric values.)

Skewness is a measure of symmetry, or more precisely, the lack of symmetry; it represents the amount and direction of skew. Kurtosis measures the tail-heaviness of the distribution: a distribution with more values in the tails (or values further out in the tails) than a Gaussian distribution has a positive excess kurtosis, and a distribution with fewer values in the tails than a Gaussian distribution has a negative excess kurtosis.

Several R packages provide normality tests built on these two statistics. R/skewness.norm.test.R defines the following functions: ajb.norm.test (adjusted Jarque-Bera test for normality), frosini.norm.test, geary.norm.test, hegazy1.norm.test, hegazy2.norm.test, jb.norm.test (Jarque-Bera test for normality), and kurtosis.norm.test (kurtosis test for normality). Other routines calculate univariate or multivariate (Mardia's test) skew and kurtosis for a vector, matrix, or data.frame; unlike skew and kurtosis in e1071, these calculate a different skew for each variable or column of a data.frame/matrix. The skewness test for normality is based on the sample skewness $$\sqrt{b_1} = \frac{\frac{1}{n}\sum_{i=1}^n(X_i - \overline{X})^3}{\left(\frac{1}{n}\sum_{i=1}^n(X_i - \overline{X})^2\right)^{3/2}},$$ and its p-value is computed by Monte Carlo simulation. For time series observations one can also derive the sampling distributions of the coefficients of skewness and kurtosis and a joint test of normality; combining skewness and kurtosis is still a useful test of normality provided that the limiting variance accounts for the serial correlation in the data. (In portfolio analysis, skewness and kurtosis are the natural additional statistics, beyond the standard deviation of returns, for understanding return dispersion.)

In statistics, the Jarque–Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera, and the test statistic is always nonnegative.
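As a minimal sketch of that workflow in R (this example is mine, not from any of the sources quoted above, and the simulated data are arbitrary):

```r
# install.packages("moments")
library(moments)

set.seed(1)
x <- rexp(500)        # deliberately right-skewed sample

skewness(x)           # positive for a right-skewed sample (about 2 for an exponential)
kurtosis(x)           # moments::kurtosis() reports plain kurtosis, ~3 for normal data

# Jarque-Bera goodness-of-fit test: H0 is "skewness 0 and kurtosis 3", i.e. normality
jarque.test(x)        # a small p-value suggests rejecting normality
```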
A good first step is to visualize the distribution of values in a dataset by creating a histogram. If the histogram shows a very asymmetrical frequency distribution -- say, left-skewed -- the sample skewness will be negative: a negative skewness indicates that the distribution is left skewed and the mean of the data (average) is less than the median value (the 50th percentile, ranking items by value), with the majority of data values greater than the mean, so the fatter part of the curve sits on the right. The function signatures are simple: x is a numeric vector of data values.

Kurtosis is a function of the 4th central moment and characterizes peakedness; the kurtosis of a normal distribution is 3. If the coefficient of kurtosis is less than 3, the data distribution is platykurtic; if it is approximately 3, mesokurtic; if it is greater than 3, leptokurtic. Some implementations report excess kurtosis instead: the default algorithm of the function kurtosis in e1071 is based on the formula g2 = m4/s^4 - 3, where m4 and s are the fourth central moment and the sample standard deviation respectively, so a kurtosis value larger than zero indicates a "leptokurtic" distribution with fatter tails, and a value below zero a "platykurtic" distribution with thinner tails (https://en.wikipedia.org/wiki/Kurtosis). If this excess value is far from zero, it signals that the data do not have a normal distribution. Some summaries also report skew.2SE and kurt.2SE, which are equal to skew and kurtosis divided by 2 standard errors.

A number of different formulas are used to calculate skewness and kurtosis, and the corresponding tests differ too: D'Agostino's K-squared test is a goodness-of-fit normality test based on a combination of the sample skewness and sample kurtosis, as is the Jarque–Bera test for normality; the procedure behind these tests is quite different from the K-S and S-W tests.
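A compact sketch of the e1071 side of this (my own example; the simulated data and the expected signs in the comments are illustrative assumptions, not from the quoted text):

```r
library(e1071)

set.seed(2)
y <- c(rnorm(300), rnorm(50, mean = -4))   # a cluster far below the bulk creates a long left tail

hist(y, breaks = 30, main = "Left-skewed sample")

skewness(y)            # negative: the longer tail is on the left
kurtosis(y)            # e1071 reports *excess* kurtosis, i.e. g2 = m4/s^4 - 3

# e1071 offers three algorithms via the 'type' argument (1, 2 or 3; the default is 3)
kurtosis(y, type = 1)
kurtosis(y, type = 2)
```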
The distinction between plain and excess kurtosis is a common source of confusion. One comment illustrates it: "For example when I perform the 'D'Agostino-Pearson Test' as described in the relevant section (i.e. using outright kurtosis) I get results suggesting rejection of the null hypothesis, even if I use Kurt=3, Skew=0, which is the ND standard stats." Remember that the platykurtic/leptokurtic cutoffs above refer to plain kurtosis (3 for a normal distribution), whereas excess-kurtosis conventions subtract 3 first. (As of version 1.2.3, when finding the skew and the kurtosis, there are three different options available.)

As a worked example of the D'Agostino–Pearson omnibus test: for college students' heights, the test statistics were Z_g1 = -0.45 for skewness and Z_g2 = 0.44 for kurtosis, so DP = Z_g1² + Z_g2² = 0.45² + 0.44² = 0.3961, and the p-value for χ²(df=2) > 0.3961, from a table or a statistics calculator, is 0.8203. Since this value is not less than α = .05, we fail to reject the null hypothesis: you cannot reject the assumption of normality. By contrast, for test 5 the test scores have skewness = 2.0 -- most people score 20 points or lower but the right tail stretches out to 90 or so, so that distribution is right skewed.

A few rules of thumb for interpreting skewness: a negative skew indicates that the tail is on the left side of the distribution, that is, more of the values are concentrated on the right side; skewness is the 3rd moment around the mean, and characterizes whether the distribution is symmetric (skewness = 0). If skewness is between -1 and -0.5 or between 0.5 and 1, the distribution is moderately skewed; if it is less than -1 or greater than 1, the distribution is highly skewed. Kurtosis, in turn, is a numerical method in statistics that measures the sharpness of the peak in the data distribution.

For time series, consider a series {X_t}, t = 1, ..., T, with mean µ and standard deviation σ; the same skewness- and kurtosis-based ideas underlie the Jarque–Bera test in that setting (a typical exercise: carry out a Jarque-Bera test of normality for djx using jarque.test()).
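In R, the same skewness- and kurtosis-based tests are available as ready-made functions. The sketch below is mine; I believe agostino.test() and anscombe.test() are the moments-package implementations of the D'Agostino skewness test and the Anscombe-Glynn kurtosis test respectively, but treat the exact names and signatures as assumptions to check against the package documentation:

```r
library(moments)

set.seed(3)
scores <- rgamma(200, shape = 2, rate = 0.1)   # right-skewed "test scores"

agostino.test(scores)   # D'Agostino test of skewness (H0: skewness = 0)
anscombe.test(scores)   # Anscombe-Glynn test of kurtosis (H0: kurtosis = 3)
jarque.test(scores)     # joint (omnibus) test combining skewness and kurtosis
```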
You can find the complete documentation for the moments library here. Skewness is a measure of the asymmetry of the probability distribution of a random variable about its mean; base R does not contain a function that will allow you to calculate skewness, so we need the "moments" package (or an equivalent) to get the required function. Kurtosis quantifies whether the tails of the data distribution match the Gaussian distribution, and Mardia's test is based on multivariate extensions of the skewness and kurtosis measures. (Interpretation of skewness, kurtosis, coskewness and cokurtosis is covered in part 2 of 3 of the Basic Statistics - FRM course, and there are video tutorials that show briefly how to check the normality, skewness, and kurtosis of your variables.)

The null and alternative hypotheses of the Jarque-Bera test are as follows. Null hypothesis: the dataset has a skewness and kurtosis that matches a normal distribution. Alternative hypothesis: the dataset has a skewness and kurtosis that does not match a normal distribution. When the p-value is not less than α = .05, we fail to reject the null hypothesis: we do not have sufficient evidence to say that the dataset has a skewness and kurtosis different from the normal distribution. For (excess) kurtosis, the general guideline is that if the number is greater than +1, the distribution is too peaked. How often does non-normality matter in practice? Based on the test of skewness and kurtosis of data from 1,567 univariate variables, much more than tested in previous reviews, one study found that 74% of either skewness or kurtosis were significantly different from that of a normal distribution.
To calculate the skewness and kurtosis of an example dataset, we can use the skewness() and kurtosis() functions from the moments library in R: for the dataset used in the tutorial quoted here, the skewness turns out to be -1.391777 and the kurtosis turns out to be 4.177865. From its histogram that distribution appears to be left-skewed, and since the kurtosis is greater than 3, the distribution has more values in the tails than a normal distribution. As a rough guide, if skewness is between -0.5 and 0.5 the distribution is approximately symmetric, and an acceptable range for skewness or kurtosis is often taken to be between -1.5 and +1.5 (Tabachnick & Fidell, 2013); in one review, 68% of 254 multivariate data sets had significant Mardia's multivariate skewness or kurtosis. For time series data, consistent estimates of three-dimensional long-run covariance matrices are needed for testing symmetry or kurtosis; in other fields, such as turbulence, kurtosis is used as an indicator of intermittency.

The D'Agostino skewness test function takes arguments (x, alternative = c("two.sided", "less", "greater")) and returns a list with class "htest" containing, among other components, statistic (the value of the test statistic) and p.value (the p-value for the test). Note that a test based on skewness alone is not consistent against symmetric non-normal alternatives, which is why tests combining skewness and kurtosis are commonly used.
| 2021-06-25T04:12:58 | {
"domain": "symplio.com",
"url": "http://symplio.com/pure-vanilla-mjfyc/37d1b0-skewness%2C-kurtosis-test-r",
"openwebmath_score": 0.7093981504440308,
"openwebmath_perplexity": 1171.2224195078898,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97364464791863,
"lm_q2_score": 0.8615382129861583,
"lm_q1q2_score": 0.8388320700513537
} |
https://www.themathdoctors.org/three-meanings-of-percentile/ | # Three Meanings of “Percentile”
#### (An archive problem of the week)
Having just discussed quartiles, I want to look at related issues concerning percentiles. There, I briefly mentioned different perspectives on the concept of quartile, and focused on differences in the details of the calculations; here I will focus mostly on the different perspectives, and then touch on variations in the calculation.
## Percentiles, ranges, and ranks: 0 to 100?
Here is the question, from 2014:
Percentiles, from Top to Bottom

I understand that the highest possible percentile is the 99th percentile, as it comprises the top 1% and it is above the other 99%.
If each percentile comprises 1% of the population, then there must be 100 percentiles. If there are 100 percentiles, and the highest is the 99th, then what is the lowest percentile? the 0th percentile?
I've never heard of a 0th percentile; yet if my above description is correct, it must exist.
In order to answer this, I first had to untangle several distinct perspectives on the word “percentile”.
Percentiles are defined in several different ways, and you will see the 0th percentile (and also the 100th percentile) mentioned quite often if you do a search. I haven't found a source I can refer you to that clearly delineates all the ways I've seen it used, but several sources make it clear that there is no universal definition, and that percentiles are commonly taught -- and used -- loosely.
As I see it, there are three things that are typically called percentiles, which I distinguish by "is, at, or in":
1. Percentiles: A data value that IS a specified percentile.
For example, we might say that the 10th percentile on a test is 54. (In this sense, the 0th and 100th percentile definitely exist.)
2. Percentile ranks: Percentile AT which a data value lies.
For example, we might say that a score of 57 lies at the 10th percentile, which we find by rounding the actual percentage of scores that are less than 57. Sources vary on how to round.
3. Percentile ranges: One of 100 equal parts that a data value is IN.
For example, we might say that 57 is in the 10th percentile, which runs from 55 to 59.
Your question is about the last of these. Though it is often equated with the second without comment, it is this version that good mathematical sources address least.
The first and last views both start with the percentile: Given, say, the term “19th percentile”, we want to know where it is (a specific number we want to find), or what numbers are in it (an interval we want to identify). The intervals called percentiles are separated by the numbers called percentiles. (But are these numbers in the intervals, or between them?) The middle view starts with an individual data value, asking what percentile it lies at; but it can also be thought of as asking which percentile interval it is in, which almost sounds like the last view. So these distinctions can get rather fuzzy!
It would seem most natural to me to call these 100 intervals the 1st through 100th; but it seems to be common to conflate this concept with the second one above, taking the percentile rank to be the percent of data that lie below the given value, and rounded down to a whole number. The result of this definition is that no percentile can be 100, and the lowest WILL be the 0th percentile.
Since I had made up these categories, to some extent, in order to make sure they are not just my own ideas, I had to do some research, looking for how the word is actually used, and for definitions from reliable sources. Here I chose to refer to Wikipedia; what I quoted here has since been modified, emphasizing further the variability of the definitions.
The following pages discuss my first two definitions (my emphases marked by *asterisks*):
http://en.wikipedia.org/wiki/Percentile
A percentile (or a centile) is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. For example, the 20th percentile *is* the value (or score) below which 20 percent of the observations may be found. ...
One definition of percentile, often given in texts, is that the P-th percentile (0 <= P <= 100) of N ordered values (arranged from least to greatest) is obtained by first calculating the (ordinal) rank ...
n = P/100 * N + 1/2
... rounding the result to the nearest integer, and then taking the value that corresponds to that rank. (Note that the rounded value of n is just the least integer which exceeds P/100 * N.)
The 100th percentile is defined to be the largest value. (In this case, we do not use the above definition with P = 100, because the rank n would be greater than the number N of values in the original list.)
[Note that the percentile in this sense can be 0 or 100.]
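Before moving on, a tiny script makes the quoted rule concrete (this snippet is mine, not part of the quotation, and the five data values are just an arbitrary example):

```r
# "Nearest rank" percentile: take the least rank exceeding P/100 * N.
nearest_rank_percentile <- function(x, P) {
  x <- sort(x)
  N <- length(x)
  if (P >= 100) return(x[N])        # the 100th percentile is defined as the largest value
  x[floor(P / 100 * N) + 1]         # least integer exceeding P/100 * N
}

x <- c(15, 20, 35, 40, 50)
nearest_rank_percentile(x, 0)    # 15: the 0th percentile is the smallest value
nearest_rank_percentile(x, 30)   # 20: rank floor(1.5) + 1 = 2
nearest_rank_percentile(x, 100)  # 50
```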
http://en.wikipedia.org/wiki/Percentile_rank
The percentile rank of a score is the percentage of scores in its frequency distribution that are the same or lower than it. For example, a test score that is greater than or equal to 75% of the scores of people taking the test is said to be *at* the 75th percentile rank.
The mathematical formula is (c_l + 0.5 f_i) / N * 100% .
[Note that this page, like many, does not say how to round, yet always gives whole number ranks. It is the rounding that determines whether the percentile rank can be 0 or 100! Many sources declare that neither 0 nor 100 exists, even though the formula they give, with any kind of rounding, will yield at least one of them.]
Note that, as stated (without rounding), the formula given for percentile rank can never yield 0 or 100; but with rounding, and a large enough data set, it can. (This formula will be the subject of the next question.)
The first reference also seems to be talking about my third meaning, using the word "in," though that does not fit the definition they just gave:
The term "percentile" and the related term "percentile rank" are often used in the reporting of scores from norm-referenced tests. For example, if a score is *in* the 86th percentile, it is higher than 86% of the other scores.
As I read this, a score in the first percentile must be greater than (at least) 1% of the data; so anything in that first percent must be in the 0th percentile. But nothing could be in the 100th percentile. This implies what I said above, rounding the percentage down, so that if a value is greater than 1.5% of the data, it would be said to be in the first percentile.
Looking at the current Wikipedia page for Percentile, I see that it shows a number of different ways to define the concept. One important paragraph is this:
There is no standard definition of percentile, however all definitions yield similar results when the number of observations is very large and the probability distribution is continuous. In the limit, as the sample size approaches infinity, the 100pth percentile (0<p<1) approximates the inverse of the cumulative distribution function (CDF) thus formed, evaluated at p, as p approximates the CDF.
This issue is also discussed in the Langford article I discussed last time.
Moving from Wikipedia, I found a more technical source:
Here is another definition of my first version:
http://www.itl.nist.gov/div898/handbook/prc/section2/prc252.htm
The pth percentile is a value, Y(p), such that at most (100p)% of the measurements are less than this value and at most 100(1 - p)% are greater. The 50th percentile is called the median.
Percentiles split a set of ordered data into hundredths. For example, 70% of the data should fall below the 70th percentile.
Note the use of "at most," which is necessary in order to make the definition work. In particular, the lowest and highest values are the 0th and 100th percentiles.
It does NOT say that 100% of the values are LESS than the 100th percentile; the "should" in the last sentence above is a general statement, not the actual definition.
This agrees with conclusions I discussed in my post about medians. It gives a proper definition (not just a procedure), which allows for variation (“a value”). Note that the common elementary definition of percentile yields a 50th percentile that is not the same as the median; some of the variations are intended to fix this.
I also notice that this paper subtly takes note of the distinction I made:
Given n points, the percentile corresponding to the i-th point is i/(n+1).
More typically we start with a desired percentile value and this percentile of interest may not correspond to a specific data point. In this case, interpolation between points is required. There is not a standard universally accepted way to perform this interpolation. After describing our default method, several alternative methods are given. All of the methods discussed here are used in practice.
The first statement here is percentile rank, my “at”: starting with a data value and finding its percentile. (Their formula is simplistic, and doesn’t mention rounding, evidently because it is not important enough at this level to elaborate.) The second is my “is”: starting with a percentage and finding a value corresponding to it. This kind is important enough to describe several methods (which I will look at below). There is no mention here of percentile ranges (my “in”).
Anyway, having said all this in quest of a good source, all I can really say is ... it depends on what definition you are using -- and most people have sloppy definitions when they talk about being IN the nth percentile.
## Percentile rank: halvsies
At the end of my answer to that question, I referred to an earlier question (2009) that brought up a specific issue in calculating percentile rank, namely the reason for the formula I quoted from Wikipedia:
Calculating Percentile Rank

Percentile rank means the percentage of scores that fall "at or below" a certain number. If more than one data value matches the number, why do we only count half of the data values when calculating the percentile rank? ie: 10, 11, 12, 12, 12, 12, 15, 18, 19, 20. Why is the percentile rank of 12 calculated at 4/10 instead of 6/10 since there are 6 data values that fall "at or below" 12?
Susan started with the brief definition of percentile rank, but indicated that she has been taught the same formula given in Wikipedia above (and quoted again below).
I started with a disclaimer:
Percentile is not always defined exactly the same way; there are some tricky details, especially when you want to apply the concept to a small "toy" data set like this one. In real life, you would apply it to, say, 30,000 scores on a standardized test, and this sort of problem goes away.
I then referred to the Wikipedia article on percentile, with my “is” and “in” senses:
That discusses percentile in the sense of "what value is at the nth percentile (where n is a whole number)?" This gives 99 points that divide a large data set into 100 equal parts, so that any value between the p/100th and the (p+1)/100th is considered to be "in" the pth percentile. The adjustments in the definitions are needed to deal with cases where N is not a multiple of 100, so that the calculations do not point to individual values.
Then I moved on to the actual question:
What you are asking about is percentile rank, which is somewhat different from that; it asks "at what percentile (again, a whole number) is this value?" Here the problem with a small data set (or a large set with few possible values) is that the same value may appear in more than one "percentile" in the above sense. We have to decide which one we should use--the first? the last? the middle?
The following article gives your definition in symbolic form without further explanation, and contrary to its earlier definition in words: Wikipedia: Percentile Rank
http://en.wikipedia.org/wiki/Percentile_rank
Where does this come from?
There c_l is the number of scores lower than the score of interest, f_i is the number of scores equal to the score of interest, and N is the total number of scores. So you are counting all scores below, and half the scores at, the given value in finding the percentage.
This definition makes good sense to me. Basically, they don't want to be biased toward either the first data point with the given value (the number of values BELOW 12, namely 2/10 = 20%) or the last (the number of values AT OR BELOW 12, namely 6/10 = 60%; this can also be taken as 100% - the number of values ABOVE 12, which gives 100% - 40% = 60%). So they essentially take the average of the two. They are splitting the difference between the two possible definitions.
In other words, the MIDDLE of the 12's best represents where the 12's as a group are "at", better than either the first or the last of them.
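In today's terms the computation is a one-liner; this little function is my own addition to the archived exchange, using Susan's data:

```r
x <- c(10, 11, 12, 12, 12, 12, 15, 18, 19, 20)

percentile_rank <- function(x, value) {
  c_l <- sum(x < value)       # count of scores strictly below the value
  f_i <- sum(x == value)      # count of scores equal to the value
  100 * (c_l + 0.5 * f_i) / length(x)
}

percentile_rank(x, 12)   # (2 + 0.5 * 4) / 10 * 100 = 40
```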
Susan responded,
Thank you for your very detailed answer to my question regarding percentile rank. I have referenced many textbooks regarding percentile rank, but none of them have explained "why" half of the repeating values are counted, they simply tell you to only count half of them. I am a 9th grade algebra teacher and I like to tell my students the "why" behind formulas, definitions, etc. because I think they are more apt to remember if they understand the "why." I whole-heartedly appreciate the time and effort you put into responding to my question (a question that has taunted me and my colleagues for a long time).
## How much difference does it make?
Let’s take a closer look at the relationship among various perspectives on percentiles, and different definitions.
Using Susan’s data (which are particularly easy to work with, having exactly 10 numbers), we can first look at percentile ranks (“at”). If we put the data on a number line, what percentile do we get for each value? Wikipedia only gives one rule for percentile rank, but we can also follow the naive rule given in the many elementary textbooks and stated in words in Wikipedia, “the percentage of scores that are equal or lower”; NIST’s briefly mentioned rule; and two functions provided by Excel, called “inclusive” and “exclusive” for percentile rank. Here are the results:
The inclusive version allows both 0 and 100; both Excel versions use linear interpolation. All of them, of course, are different. Which is "right"? That's a question for others to answer.
Now, let’s use several versions of percentile values (“is”) to find the boundaries of percentile ranges (“in”). The current Wikipedia gives several definitions, starting with the “nearest rank method” (“often given in texts” because it is simple), which is just a little different from what I originally quoted. This is followed by several variations of “linear interpolation between closest ranks”, used by Matlab, Excel (inclusive), and Excel (exclusive), and also referred to by the NIST paper I had mentioned.
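(An aside of my own: base R's quantile() implements nine such rules through its type argument, which makes this comparison easy to reproduce; types 7 and 6 are, I believe, the counterparts of Excel's inclusive and exclusive functions.)

```r
x <- c(10, 11, 12, 12, 12, 12, 15, 18, 19, 20)
p <- seq(0.1, 0.9, by = 0.1)

quantile(x, probs = p, type = 7)   # linear interpolation, Excel "inclusive" style
quantile(x, probs = p, type = 6)   # Excel "exclusive" style
quantile(x, probs = p, type = 1)   # inverse empirical CDF, a "nearest rank" flavour
```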
First, the “nearest rank method”; here we just take the given percentage of N, round up, and use that index. This always yields a number in the data set, which can be the same for different percentiles, for small amounts of data, so that I have indicated a range of percentiles all at the same location:
By this definition, none of our data values are actually in any percentile range. NIST gives a variant in which, if you get an integer, you average that value and the next:
This makes it clearer that the tenth percentile (range) is from 10 to 10.5, and that the number 10 lies within the bottom 10 percentiles.
Excel, as Wikipedia mentioned, has two different percentile functions, both of which are interpolated to ensure that each percentile is distinct. PERCENTILE.INC includes 0 and 100, and gives the following values for multiples of 10:
The exclusive function, PERCENTILE.EXC, gives an error for percentiles near 0 or 100; it shows the following:
I think the main lesson from this is that percentiles should not be looked at too closely for small data sets such as we tend to use in teaching! And the percentiles that are actually used in technical fields are far more complicated than the basic ideas we teach. On the other hand, we should emphasize that percentiles make good sense where they are actually used, such as in standardized test scores, where none of the issues we have looked at make any difference.
"domain": "themathdoctors.org",
"url": "https://www.themathdoctors.org/three-meanings-of-percentile/",
"openwebmath_score": 0.7582939863204956,
"openwebmath_perplexity": 621.966846513939,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9736446448596304,
"lm_q2_score": 0.8615382147637195,
"lm_q1q2_score": 0.8388320691466217
} |