Dataset columns:
    idx               int64    values 1 to 56k
    question          string   length 15 to 155
    answer            string   length 2 to 29.2k
    question_cut      string   length 15 to 100
    answer_cut        string   length 2 to 200
    conversation      string   length 47 to 29.3k
    conversation_cut  string   length 47 to 301
4,601
If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?
The t-test with two groups assumes that each group is normally distributed with the same variance (although the means may differ under the alternative hypothesis). That is equivalent to a regression with a dummy variable, as the regression allows the mean of each group to differ but not the variance. Hence the residuals (equal to the data with the group means subtracted) have the same distribution; that is, they are normally distributed with zero mean. A t-test with unequal variances is not equivalent to a one-way ANOVA.
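As a quick check of the equivalence described above, here is a minimal R sketch (the simulated data, group sizes, and variable names are my own, not from the answer); the pooled-variance t-test, the dummy-variable regression, and the one-way ANOVA all return the same p-value:

    set.seed(1)
    g <- rep(c("A", "B"), each = 20)                       # two groups
    y <- rnorm(40, mean = ifelse(g == "A", 0, 0.5), sd = 1)

    t_out   <- t.test(y ~ g, var.equal = TRUE)             # pooled-variance t-test
    lm_out  <- lm(y ~ g)                                   # regression with a dummy variable
    aov_out <- anova(lm_out)                               # one-way ANOVA table

    c(t_test     = t_out$p.value,
      regression = summary(lm_out)$coefficients["gB", "Pr(>|t|)"],
      anova      = aov_out["g", "Pr(>F)"])                 # identical p-values

Dropping var.equal = TRUE gives Welch's t-test, which, as the answer notes, no longer corresponds to the one-way ANOVA.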
4,602
If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?
The t-test is simply a special case of the F-test where only two groups are being compared. The result of either will be exactly the same in terms of the p-value, and there is a simple relationship between the F and t statistics as well: F = t^2. The two tests are algebraically equivalent and their assumptions are the same. In fact, these equivalences extend to the whole class of ANOVAs, t-tests, and linear regression models. The t-test is a special case of ANOVA. ANOVA is a special case of regression. All of these procedures are subsumed under the General Linear Model and share the same assumptions: independence of observations; normality of residuals (= normality in each group in the special case); equality of variances of residuals (= equal variances across groups in the special case). You might think of it as normality in the data, but you are checking for normality in each group--which is actually the same as checking for normality in the residuals when the only predictor in the model is an indicator of group. Likewise with equal variances. Just as an aside, R does not have separate routines for ANOVA. The anova functions in R are just wrappers to the lm() function--the same thing that is used to fit linear regression models--packaged a little differently to provide what is typically found in an ANOVA summary rather than a regression summary.
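A small sketch of the F = t^2 relationship and of the fact that R's ANOVA machinery sits on top of lm() (simulated data and names are my own):

    set.seed(2)
    group <- factor(rep(c("ctrl", "trt"), each = 25))
    y <- rnorm(50, mean = ifelse(group == "trt", 11, 10), sd = 2)

    t_stat <- t.test(y ~ group, var.equal = TRUE)$statistic
    F_stat <- anova(lm(y ~ group))[1, "F value"]
    c(t_squared = unname(t_stat)^2, F_value = F_stat)       # numerically equal: F = t^2

    # aov() is built on top of lm(); the fitted coefficients are the same
    all.equal(coef(aov(y ~ group)), coef(lm(y ~ group)))    # TRUE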
4,603
If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?
I totally agree with Rob's answer, but let me put it another way (using Wikipedia). Assumptions of ANOVA: independence of cases (an assumption of the model that simplifies the statistical analysis); normality (the distributions of the residuals are normal); equality (or "homogeneity") of variances, called homoscedasticity. Assumptions of the t-test: each of the two populations being compared should follow a normal distribution; the two populations being compared should have the same variance; the data used to carry out the test should be sampled independently from the two populations being compared. Hence, I would dispute the premise of the question, as the two tests obviously have the same assumptions (although listed in a different order :-) ).
4,604
If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?
One obvious point that everyone has overlooked: with ANOVA you are testing the null hypothesis that the mean is identical regardless of the values of your explanatory variables. With a t-test you can also test the one-sided alternative, namely that the mean is specifically greater given one value of your explanatory variable than given the other.
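A small illustration of that point (my own simulated data, not from the answer): the two-sided pooled t-test matches the ANOVA, while the one-sided alternative is only available through t.test.

    set.seed(3)
    g <- factor(rep(c("A", "B"), each = 15))
    y <- rnorm(30, mean = ifelse(g == "B", 1.2, 1.0), sd = 0.5)

    t.test(y ~ g, var.equal = TRUE)                        # two-sided; p-value matches anova(lm(y ~ g))
    t.test(y ~ g, var.equal = TRUE, alternative = "less")  # one-sided: is mean(A) less than mean(B)?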
4,605
When is a biased estimator preferable to unbiased one?
Yes. Often it is the case that we are interested in minimizing the mean squared error, which can be decomposed into variance + bias squared. This is an extremely fundamental idea in machine learning, and statistics in general. Frequently we see that a small increase in bias can come with a large enough reduction in variance that the overall MSE decreases.

A standard example is ridge regression. We have $\hat \beta_R = (X^T X + \lambda I)^{-1}X^T Y$, which is biased; but if $X$ is ill conditioned then $Var(\hat \beta) \propto (X^T X)^{-1}$ may be monstrous whereas $Var(\hat \beta_R)$ can be much more modest.

Another example is the kNN classifier. Think about $k = 1$: we assign a new point to its nearest neighbor. If we have a ton of data and only a few variables we can probably recover the true decision boundary and our classifier is unbiased; but for any realistic case, it is likely that $k = 1$ will be far too flexible (i.e. have too much variance) and so the small bias is not worth it (i.e. the MSE is larger than that of more biased but less variable classifiers).

Finally, here's a picture. Suppose that these are the sampling distributions of two estimators and we are trying to estimate 0. The flatter one is unbiased, but also much more variable. Overall I think I'd prefer to use the biased one, because even though on average we won't be correct, for any single instance of that estimator we'll be closer.

Update: I mentioned the numerical issues that happen when $X$ is ill conditioned and how ridge regression helps. Here's an example. I'm making a matrix $X$ which is $4 \times 3$ and the third column is nearly all 0, meaning that it is almost not full rank, which means that $X^T X$ is really close to being singular.

    x <- cbind(0:3, 2:5, runif(4, -.001, .001))  ## almost reduced rank
    > x
         [,1] [,2]        [,3]
    [1,]    0    2 0.000624715
    [2,]    1    3 0.000248889
    [3,]    2    4 0.000226021
    [4,]    3    5 0.000795289

    (xtx <- t(x) %*% x)  ## the inverse of this is proportional to Var(beta.hat)
              [,1]        [,2]        [,3]
    [1,] 14.0000000 26.00000000 3.08680e-03
    [2,] 26.0000000 54.00000000 6.87663e-03
    [3,]  0.0030868  0.00687663 1.13579e-06

    eigen(xtx)$values  ## all eigenvalues > 0 so it is PD, but not by much
    [1] 6.68024e+01 1.19756e+00 2.26161e-07

    solve(xtx)  ## huge values
               [,1]        [,2]        [,3]
    [1,]   0.776238   -0.458945     669.057
    [2,]  -0.458945    0.352219    -885.211
    [3,] 669.057303 -885.210847 4421628.936

    solve(xtx + .5 * diag(3))  ## very reasonable values
                 [,1]         [,2]         [,3]
    [1,]  0.477024087 -0.227571147  0.000184889
    [2,] -0.227571147  0.126914719 -0.000340557
    [3,]  0.000184889 -0.000340557  1.999998999

Update 2: As promised, here's a more thorough example. First, remember the point of all of this: we want a good estimator. There are many ways to define 'good'. Suppose that we've got $X_1, \dots, X_n \stackrel{\text{iid}}{\sim} \mathcal N(\mu, \sigma^2)$ and we want to estimate $\mu$. Let's say that we decide that a 'good' estimator is one that is unbiased. This isn't optimal because, while it is true that the estimator $T_1(X_1, \dots, X_n) = X_1$ is unbiased for $\mu$, we have $n$ data points so it seems silly to ignore almost all of them. To make that idea more formal, we think that we ought to be able to get an estimator that varies less from $\mu$ for a given sample than $T_1$. This means that we want an estimator with a smaller variance. So maybe now we say that we still want only unbiased estimators, but among all unbiased estimators we'll choose the one with the smallest variance.

This leads us to the concept of the uniformly minimum variance unbiased estimator (UMVUE), an object of much study in classical statistics. If we only want unbiased estimators, then choosing the one with the smallest variance is a good idea. In our example, consider $T_1$ vs. $T_2(X_1, \dots, X_n) = \frac{X_1 + X_2}{2}$ and $T_n(X_1, \dots, X_n) = \frac{X_1 + \dots + X_n}{n}$. Again, all three are unbiased but they have different variances: $Var(T_1) = \sigma^2$, $Var(T_2) = \frac{\sigma^2}{2}$, and $Var(T_n) = \frac{\sigma^2}{n}$. For $n > 2$, $T_n$ has the smallest variance of these, and it's unbiased, so this is our chosen estimator.

But often unbiasedness is a strange thing to be so fixated on (see @Cagdas Ozgenc's comment, for example). I think this is partly because we generally don't care so much about having a good estimate in the average case, but rather we want a good estimate in our particular case. We can quantify this concept with the mean squared error (MSE), which is like the average squared distance between our estimator and the thing we're estimating. If $T$ is an estimator of $\theta$, then $MSE(T) = E((T - \theta)^2)$. As I've mentioned earlier, it turns out that $MSE(T) = Var(T) + Bias(T)^2$, where bias is defined to be $Bias(T) = E(T) - \theta$. Thus we may decide that rather than UMVUEs we want an estimator that minimizes MSE.

Suppose that $T$ is unbiased. Then $MSE(T) = Var(T) + Bias(T)^2 = Var(T)$, so if we are only considering unbiased estimators then minimizing MSE is the same as choosing the UMVUE. But, as I showed above, there are cases where we can get an even smaller MSE by considering non-zero biases.

In summary, we want to minimize $Var(T) + Bias(T)^2$. We could require $Bias(T) = 0$ and then pick the best $T$ among those that do that, or we could allow both to vary. Allowing both to vary will likely give us a better MSE, since it includes the unbiased cases. This idea is the variance-bias trade-off that I mentioned earlier in the answer.

Now here are some pictures of this trade-off. We're trying to estimate $\theta$ and we've got five models, $T_1$ through $T_5$. $T_1$ is unbiased and the bias gets more and more severe until $T_5$. $T_1$ has the largest variance and the variance gets smaller and smaller until $T_5$. We can visualize the MSE as the square of the distance of the distribution's center from $\theta$ plus the square of the distance to the first inflection point (that's a way to see the SD for normal densities, which these are). We can see that for $T_1$ (the black curve) the variance is so large that being unbiased doesn't help: there's still a massive MSE. Conversely, for $T_5$ the variance is way smaller but now the bias is big enough that the estimator is suffering. But somewhere in the middle there is a happy medium, and that's $T_3$. It has reduced the variability by a lot (compared with $T_1$) but has only incurred a small amount of bias, and thus it has the smallest MSE.

You asked for examples of estimators that have this shape: one example is ridge regression, where you can think of each estimator as $T_\lambda(X, Y) = (X^T X + \lambda I)^{-1} X^T Y$. You could (perhaps using cross-validation) make a plot of MSE as a function of $\lambda$ and then choose the best $T_\lambda$.
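To make that last point concrete, here is a rough simulation sketch (my own, not from the answer; the design, true coefficients, and grid of lambda values are arbitrary choices) that traces the MSE of the ridge estimator $T_\lambda$ over a grid of $\lambda$ for one fixed, ill-conditioned design; $\lambda = 0$ is the unbiased OLS case.

    set.seed(4)
    n <- 50; p <- 3
    X <- cbind(rnorm(n), rnorm(n), rnorm(n))
    X[, 3] <- X[, 1] + rnorm(n, sd = 0.01)   # near-collinear columns -> ill-conditioned X^T X
    beta <- c(1, 2, 0)
    lambdas <- c(0, 0.01, 0.1, 1, 10, 100)

    mse <- sapply(lambdas, function(lam) {
      ests <- replicate(2000, {
        y <- X %*% beta + rnorm(n)
        drop(solve(t(X) %*% X + lam * diag(p), t(X) %*% y))   # T_lambda
      })
      mean(colSums((ests - beta)^2))    # average squared distance from the true beta
    })
    round(data.frame(lambda = lambdas, mse = mse), 4)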
4,606
When is a biased estimator preferable to unbiased one?
This paper [1] gives a simple example demonstrating that a biased estimator can even achieve a lower variance than the Cramér–Rao bound (CRB). Consider i.i.d. $X_1,\dots,X_n \sim N(0,\sigma^2)$, and let $k=\sigma^2$. The maximum likelihood estimator of $k$ is $\hat{k}_{ML}=\frac{1}{n}\sum{X_i^2}$. It is unbiased, with a variance of $MSE_{ML}=E[(\hat{k}_{ML}-k)^2]=\frac{2\sigma^4}{n}=CRB$. The estimator $\hat{k}=\frac{1}{n+2}\sum{X_i^2}$ is biased, but its mean squared error is $MSE=E[(\hat{k}-k)^2]=\frac{2\sigma^4}{n+2}<MSE_{ML}=CRB$. [1] Stoica, P., and R. L. Moses. "On biased estimators and the unbiased Cramér-Rao lower bound." Signal Processing 21.4 (1990): 349-350.
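For readers who want to verify this numerically, here is a small Monte Carlo sketch (my own; the particular sigma^2, n, and number of replications are arbitrary) using the two estimators defined above, with the mean known to be zero:

    set.seed(5)
    sigma2 <- 2; n <- 10; reps <- 1e5
    x <- matrix(rnorm(n * reps, mean = 0, sd = sqrt(sigma2)), nrow = reps)

    k_ml   <- rowSums(x^2) / n          # unbiased ML estimator, variance = CRB
    k_bias <- rowSums(x^2) / (n + 2)    # biased estimator with smaller MSE

    c(MSE_ML   = mean((k_ml   - sigma2)^2),   # about 2*sigma2^2 / n       = 0.8
      MSE_bias = mean((k_bias - sigma2)^2),   # about 2*sigma2^2 / (n + 2) = 0.667
      CRB      = 2 * sigma2^2 / n)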
4,607
When is a biased estimator preferable to unbiased one?
The other examples in this thread are fantastic, but I wanted to provide an extremely simple example that illustrates that a biased estimator can sometimes have drastically smaller variance. Let $X_1, X_2, \ldots X_n \stackrel{\text{iid}}{\sim} \text{Unif}(0, \theta)$. First we consider the Method of Moments estimator $$\hat\theta_1 = 2\bar X.$$ This estimator is intuitive and it is unbiased, but it is an estimator with relatively large variance. \begin{align*} \text{bias}(\hat\theta_1) &= 0 \\ \text{Var}(\hat\theta_1) &= \frac{\theta^2}{3n} \\ \text{MSE}(\hat\theta_1) &= \frac{\theta^2}{3n} = \mathcal O(n^{-1}) \end{align*} The maximum likelihood estimator, on the other hand, is given by $$\hat\theta_2 = X_{(n)} = \text{max}_{i}\{X_i\}$$ This estimator is clearly biased since all $X_i < \theta$. But it turns out that the bias is relatively small, and the variance is much smaller than that of $\hat\theta_1$. \begin{align*} \text{bias}(\hat\theta_2) &= \frac{-\theta}{n+1} \\ \text{Var}(\hat\theta_2) &= \frac{n\theta^2}{(n+1)^2(n+2)} \\ \text{MSE}(\hat\theta_2) &= \frac{2\theta^2}{(n+1)(n+2)} = \mathcal O(n^{-2}) \end{align*} The MSE of the second estimator tends to zero much faster than that of the first estimator. This example shows that bias should not be the only thing we consider when choosing an estimator. Further discussion: While the MLE ($\hat\theta_2$) (for this problem) is generally considered a better estimator than the MOM ($\hat\theta_1$), neither would be a reasonable choice in practice. This is because the MLE can be adjusted so that it is unbiased. Consider $$\hat\theta_3 = \frac{n+1}{n}X_{(n)}.$$ Here, we have reduced the bias to zero, but in doing so we have inflated the variance. \begin{align*} \text{bias}(\hat\theta_3) &= 0 \\ \text{Var}(\hat\theta_3) &= \frac{\theta^2}{n(n+2)} \\ \text{MSE}(\hat\theta_3) &= \frac{\theta^2}{n(n+2)} = \mathcal O(n^{-2}) \end{align*} Still, this estimator is preferable (from the perspective of MSE) to either of the previous estimators. So now we notice: (i) $\hat\theta_2$ is an estimator with high bias and low variance and (ii) $\hat\theta_3 = c\hat\theta_2$ is an estimator with low bias and high variance. This raises the question: is there an estimator "in between" these two that achieves a smaller MSE? The answer is yes. Consider $$\hat\theta_4 = \frac{(n+1)(n+2)}{n(n+2) + 1}X_{(n)}.$$ This estimator reintroduces some bias to reduce the variance. It is provably the estimator of the form $cX_{(n)}$ that minimizes the MSE. The takeaway here, again, is that bias and variance are two separate quantities which we would like to minimize. Often reducing one metric leads to an increase in the other. An estimator should be chosen with this tradeoff in mind. MSE is a popular (but certainly not the only) metric which takes this tradeoff into account.
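Here is a small simulation sketch (my own, not part of the original answer) comparing the empirical MSEs of the four estimators above; the choices theta = 1 and n = 20 are arbitrary:

    set.seed(6)
    theta <- 1; n <- 20; reps <- 1e5
    x <- matrix(runif(n * reps, 0, theta), nrow = reps)

    xbar <- rowMeans(x)
    xmax <- apply(x, 1, max)

    est <- cbind(mom      = 2 * xbar,                                      # theta_hat_1
                 mle      = xmax,                                          # theta_hat_2
                 unbiased = (n + 1) / n * xmax,                            # theta_hat_3
                 min_mse  = (n + 1) * (n + 2) / (n * (n + 2) + 1) * xmax)  # theta_hat_4

    colMeans((est - theta)^2)   # empirical MSEs; min_mse should come out (slightly) smallest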
4,608
When is a biased estimator preferable to unbiased one?
Two reasons come to mind, aside from the MSE explanation above (the commonly accepted answer to the question): managing risk, and efficient testing. Risk, roughly, is the sense of how much something can explode when certain conditions aren't met. Take superefficient estimators: $T(X) = \bar{X}_n$ if $\bar{X}_n$ lies beyond an $\epsilon$-ball of 0, and 0 otherwise. You can show that this statistic is more efficient than the UMVUE, since it has the same asymptotic variance as the UMVUE when $\theta \ne 0$ and infinite efficiency otherwise. This is a silly statistic, and Hodges threw it out there as a strawman. It turns out that if you take $\theta_n$ on the boundary of the ball, the procedure becomes inconsistent: it never knows what's going on, and the risk explodes. In the minimax world, we try to minimize risk. That can give us biased estimators, but we don't care; they still work because there are fewer ways to break the system. Suppose, for instance, that I were interested in inference on a $\Gamma(\alpha, \beta_n)$ distribution, and once in a while the distribution threw curve balls. A trimmed mean estimate $$T_\theta(X) = \sum X_i \mathcal{I} (\|X_i\| < \theta) / \sum \mathcal{I} (\|X_i\| < \theta)$$ systematically throws out the high-leverage points. Efficient testing means you don't estimate the thing you're interested in, but an approximation thereof, because this provides a more powerful test. The best example I can think of here is logistic regression. People always confuse logistic regression with relative risk regression. For instance, an odds ratio of 1.6 for cancer comparing smokers to non-smokers does NOT mean that "smokers had a 1.6 times greater risk of cancer". BZZT, wrong. That's a risk ratio. They technically had 1.6-fold odds of the outcome (reminder: odds = probability / (1 - probability)). However, for rare events, the odds ratio approximates the risk ratio. There is relative risk regression, but it has a lot of issues with convergence and is not as powerful as logistic regression. So we report the OR as a biased estimate of the RR (for rare events), and calculate more efficient CIs and p-values.
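To put numbers on the odds-ratio versus risk-ratio distinction, here is a small sketch with a hypothetical 2x2 table (the counts are invented purely for illustration, not taken from the answer):

    # Hypothetical cohort: rows = exposure, columns = outcome
    tab <- matrix(c(30, 970,    # smokers:     30 cases out of 1000
                    19, 981),   # non-smokers: 19 cases out of 1000
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("smoker", "non-smoker"), c("case", "no case")))

    risk <- tab[, "case"] / rowSums(tab)
    odds <- risk / (1 - risk)

    c(risk_ratio = unname(risk["smoker"] / risk["non-smoker"]),
      odds_ratio = unname(odds["smoker"] / odds["non-smoker"]))
    # For a rare outcome the two are close (here roughly 1.58 vs 1.60),
    # but they are not the same quantity.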
4,609
When is a biased estimator preferable to unbiased one?
The maximum-likelihood estimator $\frac 1 n \sum_{i=1}^n (X_i - \overline X)^2$ of the population variance for a normally distributed population has a lower mean squared error than does the commonplace unbiased estimator, in which the denominator is $n-1.$ But that's a somewhat weak example. I wrote a paper addressing this question via a counterexample of my own devising: https://arxiv.org/pdf/math/0206006.pdf
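A quick simulation sketch of that first claim (my own; the particular n and sigma^2 are arbitrary), comparing the MSE of the divide-by-n MLE with the usual unbiased divide-by-(n-1) estimator for normal data:

    set.seed(7)
    n <- 10; sigma2 <- 4; reps <- 1e5
    x <- matrix(rnorm(n * reps, sd = sqrt(sigma2)), nrow = reps)

    ss <- rowSums((x - rowMeans(x))^2)      # sum of squared deviations from each sample mean
    c(MSE_mle      = mean((ss / n       - sigma2)^2),
      MSE_unbiased = mean((ss / (n - 1) - sigma2)^2))   # the MLE's MSE is smaller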
4,610
Is there any difference between lm and glm for the gaussian family of glm?
While for the specific form of model mentioned in the body of the question (i.e. lm(y ~ x1 + x2) vs glm(y ~ x1 + x2, family=gaussian)), regression and GLMs are the same model, the title question asks something slightly more general: Is there any difference between lm and glm for the gaussian family of glm? To which the answer is "Yes!". The reason they can differ is that you can also specify a link function in the GLM. This allows you to fit particular forms of nonlinear relationship between $y$ (or rather its conditional mean) and the $x$-variables; while you can do this in nls as well, there's no need for starting values, sometimes the convergence is better, and the syntax is a bit easier. Compare, for example, these models (you have R, so I assume you can run these yourself):

    x1 <- c(56.1, 26.8, 23.9, 46.8, 34.8, 42.1, 22.9, 55.5, 56.1, 46.9, 26.7, 33.9,
            37.0, 57.6, 27.2, 25.7, 37.0, 44.4, 44.7, 67.2, 48.7, 20.4, 45.2, 22.4,
            23.2, 39.9, 51.3, 24.1, 56.3, 58.9, 62.2, 37.7, 36.0, 63.9, 62.5, 44.1,
            46.9, 45.4, 23.7, 36.5, 56.1, 69.6, 40.3, 26.2, 67.1, 33.8, 29.9, 25.7,
            40.0, 27.5)
    x2 <- c(12.29, 11.42, 13.59, 8.64, 12.77, 9.9, 13.2, 7.34, 10.67, 18.8, 9.84,
            16.72, 10.32, 13.67, 7.65, 9.44, 14.52, 8.24, 14.14, 17.2, 16.21, 6.01,
            14.23, 15.63, 10.83, 13.39, 10.5, 10.01, 13.56, 11.26, 4.8, 9.59, 11.87,
            11, 12.02, 10.9, 9.5, 10.63, 19.03, 16.71, 15.11, 7.22, 12.6, 15.35,
            8.77, 9.81, 9.49, 15.82, 10.94, 6.53)
    y  <- c(1.54, 0.81, 1.39, 1.09, 1.3, 1.16, 0.95, 1.29, 1.35, 1.86, 1.1, 0.96,
            1.03, 1.8, 0.7, 0.88, 1.24, 0.94, 1.41, 2.13, 1.63, 0.78, 1.55, 1.5,
            0.96, 1.21, 1.4, 0.66, 1.55, 1.37, 1.19, 0.88, 0.97, 1.56, 1.51, 1.09,
            1.23, 1.2, 1.62, 1.52, 1.64, 1.77, 0.97, 1.12, 1.48, 0.83, 1.06, 1.1,
            1.21, 0.75)

    lm(y ~ x1 + x2)
    glm(y ~ x1 + x2, family = gaussian)
    glm(y ~ x1 + x2, family = gaussian(link = "log"))
    nls(y ~ exp(b0 + b1*x1 + b2*x2), start = list(b0 = -1, b1 = 0.01, b2 = 0.1))

Note that the first pair are the same model ($y_i \sim N(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i},\sigma^2)\,$), the second pair are the same model ($y_i \sim N(\exp(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i}),\sigma^2)\,$), and the fits are essentially the same within each pair. So - in relation to the title question - you can fit a substantially wider variety of Gaussian models with a GLM than with regression.
4,611
Is there any difference between lm and glm for the gaussian family of glm?
Using the setup from @Repmat's answer, the model summaries are the same, but the confidence intervals for the regression coefficients from confint are slightly different between lm and glm.

    > confint(reg1, level = 0.95)
                    2.5 %    97.5 %
    (Intercept) 2.474742 11.526174
    x1          1.971466  2.014002
    x2          2.958422  3.023291
    > confint(reg2, level = 0.95)
    Waiting for profiling to be done...
                    2.5 %    97.5 %
    (Intercept) 2.480236 11.520680
    x1          1.971492  2.013976
    x2          2.958461  3.023251

The $t$-distribution is used in lm while the normal distribution is used in glm when constructing the intervals:

    > beta <- summary(reg1)$coefficients[, 1]
    > beta_se <- summary(reg1)$coefficients[, 2]
    > cbind(`2.5%` = beta - qt(0.975, n - 3) * beta_se,
    +       `97.5%` = beta + qt(0.975, n - 3) * beta_se)   # t
                    2.5%     97.5%
    (Intercept) 2.474742 11.526174
    x1          1.971466  2.014002
    x2          2.958422  3.023291
    > cbind(`2.5%` = beta - qnorm(0.975) * beta_se,
    +       `97.5%` = beta + qnorm(0.975) * beta_se)       # normal
                    2.5%     97.5%
    (Intercept) 2.480236 11.520680
    x1          1.971492  2.013976
    x2          2.958461  3.023251
4,612
Is there any difference between lm and glm for the gaussian family of glm?
Short answer: they are exactly the same.

    # Simulate data:
    set.seed(42)
    n <- 1000
    x1 <- rnorm(n, mean = 150, sd = 3)
    x2 <- rnorm(n, mean = 100, sd = 2)
    u  <- rnorm(n)
    y  <- 5 + 2*x1 + 3*x2 + u

    # Estimate with OLS:
    reg1 <- lm(y ~ x1 + x2)

    # Estimate with GLM:
    reg2 <- glm(y ~ x1 + x2, family = gaussian)

    # Compare:
    require(texreg)
    screenreg(l = list(reg1, reg2))

    =========================================
                    Model 1      Model 2
    -----------------------------------------
    (Intercept)        6.37 **      6.37 **
                      (2.20)       (2.20)
    x1                 1.99 ***     1.99 ***
                      (0.01)       (0.01)
    x2                 3.00 ***     3.00 ***
                      (0.02)       (0.02)
    -----------------------------------------
    R^2                0.99
    Adj. R^2           0.99
    Num. obs.       1000         1000
    RMSE               1.00
    AIC                          2837.66
    BIC                          2857.29
    Log Likelihood              -1414.83
    Deviance                      991.82
    =========================================
    *** p < 0.001, ** p < 0.01, * p < 0.05

Longer answer: the glm function fits the model by maximum likelihood, but because of the family and link you assumed (Gaussian with the identity link), maximizing the likelihood is equivalent to minimizing the sum of squares, so you end up with the OLS estimates.
4,613
When will L1 regularization work better than L2 and vice versa?
How to decide which regularization (L1 or L2) to use? What is your goal? Both can improve model generalization by penalizing coefficients. When there are collinear features, coefficients with opposite relationships to the outcome can "offset" each other (a large positive value is counterbalanced by a large negative value), and small changes in the data can then result in dramatically different parameter estimates (high-variance estimates). Penalization can restrain both coefficients to be smaller. (Hastie et al., Elements of Statistical Learning, 2nd edition, p. 63)

What are the pros and cons of each of L1 / L2 regularization? L1 regularization can address the multicollinearity problem by constraining the coefficient norm and pinning some coefficient values to 0. Computationally, lasso regression (regression with an L1 penalty) is a quadratic program which requires some special tools to solve. When you have more features than observations $N$, the lasso will keep at most $N$ non-zero coefficients. Depending on context, that might not be what you want. L1 regularization is sometimes used as a feature selection method. Suppose you have some kind of hard cap on the number of features you can use (because data collection for all features is expensive, or you have tight engineering constraints on how many values you can store, etc.). You can try to tune the L1 penalty to hit your desired number of non-zero features. L2 regularization can address the multicollinearity problem by constraining the coefficient norm and keeping all the variables. It's unlikely to estimate a coefficient to be exactly 0. This isn't necessarily a drawback, unless a sparse coefficient vector is important for some reason. In the regression setting, it's the "classic" solution to the problem of estimating a regression with more features than observations: L2 regularization can estimate a coefficient for each feature even if there are more features than observations (indeed, this was the original motivation for "ridge regression"). As an alternative, the elastic net includes L1 and L2 regularization as special cases. A typical use-case for a data scientist in industry is that you just want to pick the best model, but don't necessarily care whether it's penalized using L1, L2, or both. The elastic net is nice in situations like these.

Is it recommended to first do feature selection using L1 and then apply L2 on these selected variables? I'm not familiar with a publication proposing an L1-then-L2 pipeline, but this is probably just ignorance on my part. There doesn't seem to be anything wrong with it; I'd conduct a literature review. A few examples of similar "phased" pipelines exist. One is the "relaxed lasso", which applies lasso regression twice: once to down-select from a large group to a small group of features, and a second time to estimate coefficients for use in a model. This uses cross-validation at each step to choose the magnitude of the penalty. The reasoning is that in the first step you cross-validate and will likely choose a large penalty to screen out irrelevant predictors; in the second step you cross-validate and will likely pick a smaller penalty (and hence larger coefficients). This is mentioned briefly in Elements of Statistical Learning with a citation to Nicolai Meinshausen ("Relaxed Lasso." Computational Statistics & Data Analysis, Volume 52, Issue 1, 15 September 2007, pp. 374-393).

User @amoeba also suggests an L1-then-OLS pipeline; this might be nice because it only has one hyperparameter (the magnitude of the L1 penalty), so less fiddling would be required. One problem that can arise with any "phased" analysis pipeline (that is, a pipeline which does some steps, and then some other steps separately) is that there's no "visibility" between those different phases (the algorithms applied at each step). This means that one process inherits any data snooping that happened at the previous steps. This effect is not negligible; poorly-conceived modeling can result in garbage models. One way to hedge against data-snooping side-effects is to cross-validate all of your choices. However, the increased computational costs can be prohibitive, depending on the scale of the data and the complexity of each step.
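As a rough illustration of the sparsity contrast described above, here is a sketch using the glmnet package (which the answer itself does not mention); the simulated design, true coefficients, and the choice of lambda.min are my own:

    library(glmnet)

    set.seed(8)
    n <- 100; p <- 20
    X <- matrix(rnorm(n * p), n, p)
    beta <- c(3, -2, 1.5, rep(0, p - 3))       # only 3 truly relevant features
    y <- drop(X %*% beta + rnorm(n))

    lasso <- cv.glmnet(X, y, alpha = 1)        # L1 penalty
    ridge <- cv.glmnet(X, y, alpha = 0)        # L2 penalty

    c(nonzero_lasso = sum(coef(lasso, s = "lambda.min")[-1, ] != 0),   # only a handful survive
      nonzero_ridge = sum(coef(ridge, s = "lambda.min")[-1, ] != 0))   # all 20: shrunk, not zeroed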
4,614
When will L1 regularization work better than L2 and vice versa?
Generally speaking, if you want optimum prediction, use L2. If you want parsimony at some sacrifice of predictive discrimination, use L1. But note that the parsimony can be illusory; e.g., repeating the lasso process using the bootstrap will often reveal significant instability in the list of features "selected", especially when predictors are correlated with each other.
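To make the bootstrap check concrete, here is a rough scikit-learn sketch (synthetic data, arbitrary sizes): refit the lasso on bootstrap resamples and tabulate how often each feature survives. Low or middling selection frequencies are the instability being described; the effect is strongest with correlated predictors, which this toy dataset does not really exhibit.

```python
# Hedged sketch of the bootstrap stability check; nothing here is from the answer itself.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=100, n_features=30, n_informative=5,
                       noise=20.0, random_state=1)
rng = np.random.default_rng(1)

n_boot = 50
selection_counts = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample of the rows
    fit = LassoCV(cv=5).fit(X[idx], y[idx])
    selection_counts += (fit.coef_ != 0)

print("selection frequency per feature:", selection_counts / n_boot)
```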
When will L1 regularization work better than L2 and vice versa?
Generally speaking if you want optimum prediction use L2. If you want parsimony at some sacrifice of predictive discrimination use L1. But note that the parsimony can be illusory, e.g., repeating th
When will L1 regularization work better than L2 and vice versa? Generally speaking, if you want optimum prediction, use L2. If you want parsimony at some sacrifice of predictive discrimination, use L1. But note that the parsimony can be illusory; e.g., repeating the lasso process using the bootstrap will often reveal significant instability in the list of features "selected", especially when predictors are correlated with each other.
When will L1 regularization work better than L2 and vice versa? Generally speaking if you want optimum prediction use L2. If you want parsimony at some sacrifice of predictive discrimination use L1. But note that the parsimony can be illusory, e.g., repeating th
4,615
Online vs offline learning?
Online learning means that you are training the model as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints. Another wrinkle that can affect online learning is that your concepts might change through time. Let's say you want to build a classifier to recognize spam. You can acquire a large corpus of e-mail, label it, and train a classifier on it. This would be offline learning. Or, you can take all the e-mail coming into your system, and continuously update your classifier (labels may be a bit tricky). This would be online learning.
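As an illustration (not part of the original answer), here is a rough scikit-learn sketch of the two settings using SGDClassifier: an ordinary fit on a static dataset for the offline case, and partial_fit on chunks of an arriving stream for the online case. The data are synthetic; a real spam filter would add text feature extraction and a labelling strategy.

```python
# Sketch only: synthetic features stand in for e-mail features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Offline: the whole labelled corpus is available up front.
offline = SGDClassifier(random_state=0).fit(X, y)

# Online: data arrive in chunks; the model is updated as each chunk comes in.
online = SGDClassifier(random_state=0)
classes = np.unique(y)
for i, (X_chunk, y_chunk) in enumerate(zip(np.array_split(X, 10),
                                           np.array_split(y, 10))):
    # the full set of classes must be declared on the first partial_fit call
    online.partial_fit(X_chunk, y_chunk, classes=classes if i == 0 else None)
```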
Online vs offline learning?
Online learning means that you are doing it as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints.
Online vs offline learning? Online learning means that you are training the model as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints. Another wrinkle that can affect online learning is that your concepts might change through time. Let's say you want to build a classifier to recognize spam. You can acquire a large corpus of e-mail, label it, and train a classifier on it. This would be offline learning. Or, you can take all the e-mail coming into your system, and continuously update your classifier (labels may be a bit tricky). This would be online learning.
Online vs offline learning? Online learning means that you are doing it as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints.
4,616
Online vs offline learning?
The term "online" is overloaded, and therefore causes confusion in the domain of machine learning. The opposite of "online" is batch learning. In batch learning, the learning algorithm updates its parameters after consuming the whole batch, whereas in online learning, the algorithm updates its parameters after learning from 1 training instance. Mini-batch learning is the halfway point between batch learning at one extreme and online learning at the other. Also, "when" the data comes in, or whether it is capable of being stored or not, is orthogonal to online or batch learning. Online learning is deemed to be slower to converge to a minimum, when compared to batch learning. However, in cases where the entire dataset doesn't fit in memory, using online learning is an acceptable tradeoff.
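A toy NumPy sketch of the three regimes, on an invented one-parameter problem (estimating the mean of some data by gradient steps on squared error). The point is only to show how the update frequency changes with the batch size $B$; the learning rate and data are arbitrary.

```python
# Toy comparison of batch, mini-batch, and online updates (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=64)
lr = 0.1

def train(batch_size, epochs=20):
    theta, updates = 0.0, 0
    for _ in range(epochs):
        for batch in np.array_split(x, len(x) // batch_size):
            grad = np.mean(2 * (theta - batch))   # gradient of mean squared error
            theta -= lr * grad
            updates += 1
    return theta, updates

print(train(batch_size=len(x)))  # batch learning: 1 update per epoch
print(train(batch_size=8))       # mini-batch: 8 updates per epoch
print(train(batch_size=1))       # online learning: 64 updates per epoch
```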
Online vs offline learning?
The term "online" is overloaded, and therefore causes confusion in the domain of machine learning. The opposite of "online" is batch learning. In batch learning, the learning algorithm updates its pa
Online vs offline learning? The term "online" is overloaded, and therefore causes confusion in the domain of machine learning. The opposite of "online" is batch learning. In batch learning, the learning algorithm updates its parameters after consuming the whole batch, whereas in online learning, the algorithm updates its parameters after learning from 1 training instance. Mini-batch learning is the halfway point between batch learning at one extreme and online learning at the other. Also, "when" the data comes in, or whether it is capable of being stored or not, is orthogonal to online or batch learning. Online learning is deemed to be slower to converge to a minimum, when compared to batch learning. However, in cases where the entire dataset doesn't fit in memory, using online learning is an acceptable tradeoff.
Online vs offline learning? The term "online" is overloaded, and therefore causes confusion in the domain of machine learning. The opposite of "online" is batch learning. In batch learning, the learning algorithm updates its pa
4,617
Online vs offline learning?
Online learning (also called incremental learning): we consider a single presentation of the examples. In this case, each example is used sequentially in the manner prescribed by the learning algorithm, and then thrown away. The weight changes made at a given stage depend specifically only on the (current) example being presented and possibly on the current state of the model. It is the natural procedure for time-varying rules where the examples might not all be available at once. Offline learning: the weight changes depend on the whole (training) dataset, defining a global cost function. The examples are used repeatedly until minimization of this cost function is achieved.
Online vs offline learning?
Online learning (also called incremental learning): we consider a single presentation of the examples. In this case, each example is used sequentially in a manner as prescribed by the learning algorit
Online vs offline learning? Online learning (also called incremental learning): we consider a single presentation of the examples. In this case, each example is used sequentially in the manner prescribed by the learning algorithm, and then thrown away. The weight changes made at a given stage depend specifically only on the (current) example being presented and possibly on the current state of the model. It is the natural procedure for time-varying rules where the examples might not all be available at once. Offline learning: the weight changes depend on the whole (training) dataset, defining a global cost function. The examples are used repeatedly until minimization of this cost function is achieved.
Online vs offline learning? Online learning (also called incremental learning): we consider a single presentation of the examples. In this case, each example is used sequentially in a manner as prescribed by the learning algorit
4,618
How large should the batch size be for stochastic gradient descent?
The "sample size" you're talking about is referred to as batch size, $B$. The batch size parameter is just one of the hyper-parameters you'll be tuning when you train a neural network with mini-batch Stochastic Gradient Descent (SGD) and is data dependent. The most basic method of hyper-parameter search is to do a grid search over the learning rate and batch size to find a pair which makes the network converge. To understand what the batch size should be, it's important to see the relationship between batch gradient descent, online SGD, and mini-batch SGD. Here's the general formula for the weight update step in mini-batch SGD, which is a generalization of all three types. [2] $$ \theta_{t+1} \leftarrow \theta_{t} - \epsilon(t) \frac{1}{B} \sum\limits_{b=0}^{B - 1} \dfrac{\partial \mathcal{L}(\theta, \textbf{m}_b)}{\partial \theta} $$ Batch gradient descent, $B = |x|$ Online stochastic gradient descent: $B = 1$ Mini-batch stochastic gradient descent: $B > 1$ but $B < |x|$. Note that with 1, the loss function is no longer a random variable and is not a stochastic approximation. SGD converges faster than normal "batch" gradient descent because it updates the weights after looking at a randomly selected subset of the training set. Let $x$ be our training set and let $m \subset x$. The batch size $B$ is just the cardinality of $m$: $B = |m|$. Batch gradient descent updates the weights $\theta$ using the gradients of the entire dataset $x$; whereas SGD updates the weights using an average of the gradients for a mini-batch $m$. (Using the average as opposed to a sum prevents the algorithm from taking steps that are too large if the dataset is very large. Otherwise, you would need to adjust your learning rate based on the size of the dataset.) The expected value of this stochastic approximation of the gradient used in SGD is equal to the deterministic gradient used in batch gradient descent. $\mathbb{E}[\nabla \mathcal{L}_{SGD}(\theta, \textbf{m})] = \nabla \mathcal{L}(\theta, \textbf{x})$. Each time we take a sample and update our weights it is called a mini-batch. Each time we run through the entire dataset, it's called an epoch. Let's say that we have some data vector $\textbf{x} : \mathbb{R}^D$, an initial weight vector that parameterizes our neural network, $\theta_0 : \mathbb{R}^{S}$, and a loss function $\mathcal{L}(\theta, \textbf{x}) : \mathbb{R}^{S} \rightarrow \mathbb{R}^{D} \rightarrow \mathbb{R}^S$ that we are trying to minimize. If we have $T$ training examples and a batch size of $B$, then we can split those training examples into C mini-batches: $$ C = \lceil T / B \rceil $$ For simplicity we can assume that T is evenly divisible by B. Although, when this is not the case, as it often is not, proper weight should be assigned to each mini-batch as a function of its size. An iterative algorithm for SGD with $M$ epochs is given below: \begin{align*} t &\leftarrow 0 \\ \textrm{while } t &< M \\ \theta_{t+1} &\leftarrow \theta_{t} - \epsilon(t) \frac{1}{B} \sum\limits_{b=0}^{B - 1} \dfrac{\partial \mathcal{L}(\theta, \textbf{m}_b)}{\partial \theta} \\ t &\leftarrow t + 1 \end{align*} Note: in real life we're reading these training example data from memory and, due to cache pre-fetching and other memory tricks done by your computer, your algorithm will run faster if the memory accesses are coalesced, i.e. when you read the memory in order and don't jump around randomly. 
So, most SGD implementations shuffle the dataset and then load the examples into memory in the order that they'll be read. The major parameters for the vanilla (no momentum) SGD described above are: Learning Rate: $\epsilon$ I like to think of epsilon as a function from the epoch count to a learning rate. This function is called the learning rate schedule. $$ \epsilon(t) : \mathbb{N} \rightarrow \mathbb{R} $$ If you want to have the learning rate fixed, just define epsilon as a constant function. Batch Size Batch size determines how many examples you look at before making a weight update. The lower it is, the noisier the training signal is going to be, the higher it is, the longer it will take to compute the gradient for each step. Citations & Further Reading: Introduction to Gradient Based Learning Practical recommendations for gradient-based training of deep architectures Efficient Mini-batch Training for Stochastic Optimization
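For reference, here is a plain NumPy sketch of the mini-batch update rule above, applied to a linear model with (scalar-valued) squared-error loss and a simple decaying learning-rate schedule $\epsilon(t)$, with $t$ counting updates. The data, the schedule, and the batch size are arbitrary choices for illustration, not recommendations.

```python
# Hedged sketch of mini-batch SGD for linear regression (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 5))
true_theta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_theta + 0.1 * rng.normal(size=512)

def epsilon(t):
    # a simple decaying learning-rate schedule
    return 0.1 / (1.0 + 0.01 * t)

def minibatch_sgd(X, y, B=32, epochs=20):
    theta = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        perm = rng.permutation(len(y))            # shuffle once per epoch
        for start in range(0, len(y), B):
            idx = perm[start:start + B]
            Xb, yb = X[idx], y[idx]
            # average gradient of the squared-error loss over the mini-batch
            grad = 2.0 * Xb.T @ (Xb @ theta - yb) / len(idx)
            theta -= epsilon(t) * grad
            t += 1
    return theta

print(minibatch_sgd(X, y))   # should end up close to true_theta
```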
How large should the batch size be for stochastic gradient descent?
The "sample size" you're talking about is referred to as batch size, $B$. The batch size parameter is just one of the hyper-parameters you'll be tuning when you train a neural network with mini-batch
How large should the batch size be for stochastic gradient descent? The "sample size" you're talking about is referred to as batch size, $B$. The batch size parameter is just one of the hyper-parameters you'll be tuning when you train a neural network with mini-batch Stochastic Gradient Descent (SGD) and is data dependent. The most basic method of hyper-parameter search is to do a grid search over the learning rate and batch size to find a pair which makes the network converge. To understand what the batch size should be, it's important to see the relationship between batch gradient descent, online SGD, and mini-batch SGD. Here's the general formula for the weight update step in mini-batch SGD, which is a generalization of all three types. [2] $$ \theta_{t+1} \leftarrow \theta_{t} - \epsilon(t) \frac{1}{B} \sum\limits_{b=0}^{B - 1} \dfrac{\partial \mathcal{L}(\theta, \textbf{m}_b)}{\partial \theta} $$ Batch gradient descent, $B = |x|$ Online stochastic gradient descent: $B = 1$ Mini-batch stochastic gradient descent: $B > 1$ but $B < |x|$. Note that with 1, the loss function is no longer a random variable and is not a stochastic approximation. SGD converges faster than normal "batch" gradient descent because it updates the weights after looking at a randomly selected subset of the training set. Let $x$ be our training set and let $m \subset x$. The batch size $B$ is just the cardinality of $m$: $B = |m|$. Batch gradient descent updates the weights $\theta$ using the gradients of the entire dataset $x$; whereas SGD updates the weights using an average of the gradients for a mini-batch $m$. (Using the average as opposed to a sum prevents the algorithm from taking steps that are too large if the dataset is very large. Otherwise, you would need to adjust your learning rate based on the size of the dataset.) The expected value of this stochastic approximation of the gradient used in SGD is equal to the deterministic gradient used in batch gradient descent. $\mathbb{E}[\nabla \mathcal{L}_{SGD}(\theta, \textbf{m})] = \nabla \mathcal{L}(\theta, \textbf{x})$. Each time we take a sample and update our weights it is called a mini-batch. Each time we run through the entire dataset, it's called an epoch. Let's say that we have some data vector $\textbf{x} : \mathbb{R}^D$, an initial weight vector that parameterizes our neural network, $\theta_0 : \mathbb{R}^{S}$, and a loss function $\mathcal{L}(\theta, \textbf{x}) : \mathbb{R}^{S} \rightarrow \mathbb{R}^{D} \rightarrow \mathbb{R}^S$ that we are trying to minimize. If we have $T$ training examples and a batch size of $B$, then we can split those training examples into C mini-batches: $$ C = \lceil T / B \rceil $$ For simplicity we can assume that T is evenly divisible by B. Although, when this is not the case, as it often is not, proper weight should be assigned to each mini-batch as a function of its size. An iterative algorithm for SGD with $M$ epochs is given below: \begin{align*} t &\leftarrow 0 \\ \textrm{while } t &< M \\ \theta_{t+1} &\leftarrow \theta_{t} - \epsilon(t) \frac{1}{B} \sum\limits_{b=0}^{B - 1} \dfrac{\partial \mathcal{L}(\theta, \textbf{m}_b)}{\partial \theta} \\ t &\leftarrow t + 1 \end{align*} Note: in real life we're reading these training example data from memory and, due to cache pre-fetching and other memory tricks done by your computer, your algorithm will run faster if the memory accesses are coalesced, i.e. when you read the memory in order and don't jump around randomly. 
So, most SGD implementations shuffle the dataset and then load the examples into memory in the order that they'll be read. The major parameters for the vanilla (no momentum) SGD described above are: Learning Rate: $\epsilon$ I like to think of epsilon as a function from the epoch count to a learning rate. This function is called the learning rate schedule. $$ \epsilon(t) : \mathbb{N} \rightarrow \mathbb{R} $$ If you want to have the learning rate fixed, just define epsilon as a constant function. Batch Size Batch size determines how many examples you look at before making a weight update. The lower it is, the noisier the training signal is going to be, the higher it is, the longer it will take to compute the gradient for each step. Citations & Further Reading: Introduction to Gradient Based Learning Practical recommendations for gradient-based training of deep architectures Efficient Mini-batch Training for Stochastic Optimization
How large should the batch size be for stochastic gradient descent? The "sample size" you're talking about is referred to as batch size, $B$. The batch size parameter is just one of the hyper-parameters you'll be tuning when you train a neural network with mini-batch
4,619
Clustering a long list of strings (words) into similarity groups
Seconding @micans recommendation for Affinity Propagation. From the paper: L Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points." science 315.5814 (2007): 972-976.. It's super easy to use via many packages. It works on anything you can define the pairwise similarity on. Which you can get by multiplying the Levenshtein distance by -1. I threw together a quick example using the first paragraph of your question as input. In Python 3: import numpy as np from sklearn.cluster import AffinityPropagation import distance words = "YOUR WORDS HERE".split(" ") #Replace this line words = np.asarray(words) #So that indexing with a list will work lev_similarity = -1*np.array([[distance.levenshtein(w1,w2) for w1 in words] for w2 in words]) affprop = AffinityPropagation(affinity="precomputed", damping=0.5) affprop.fit(lev_similarity) for cluster_id in np.unique(affprop.labels_): exemplar = words[affprop.cluster_centers_indices_[cluster_id]] cluster = np.unique(words[np.nonzero(affprop.labels_==cluster_id)]) cluster_str = ", ".join(cluster) print(" - *%s:* %s" % (exemplar, cluster_str)) Output was (exemplars in italics to the left of the cluster they are exemplar of): have: chances, edit, hand, have, high following: following problem: problem I: I, a, at, etc, in, list, of possibly: possibly cluster: cluster word: For, and, for, long, need, should, very, word, words similar: similar Levenshtein: Levenshtein distance: distance the: that, the, this, to, with same: example, list, names, same, such, surnames algorithm: algorithm, alogrithm appear: appear, appears Running it on a list of 50 random first names: Diane: Deana, Diane, Dionne, Gerald, Irina, Lisette, Minna, Nicki, Ricki Jani: Clair, Jani, Jason, Jc, Kimi, Lang, Marcus, Maxima, Randi, Raul Verline: Destiny, Kellye, Marylin, Mercedes, Sterling, Verline Glenn: Elenor, Glenn, Gwenda Armandina: Armandina, Augustina Shiela: Ahmed, Estella, Milissa, Shiela, Thresa, Wynell Laureen: Autumn, Haydee, Laureen, Lauren Alberto: Albertha, Alberto, Robert Lore: Ammie, Doreen, Eura, Josef, Lore, Lori, Porter Looks pretty great to me (that was fun).
Clustering a long list of strings (words) into similarity groups
Seconding @micans recommendation for Affinity Propagation. From the paper: L Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points." science 315.5814 (2007): 972-976
Clustering a long list of strings (words) into similarity groups Seconding @micans recommendation for Affinity Propagation. From the paper: L Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points." science 315.5814 (2007): 972-976.. It's super easy to use via many packages. It works on anything you can define the pairwise similarity on. Which you can get by multiplying the Levenshtein distance by -1. I threw together a quick example using the first paragraph of your question as input. In Python 3: import numpy as np from sklearn.cluster import AffinityPropagation import distance words = "YOUR WORDS HERE".split(" ") #Replace this line words = np.asarray(words) #So that indexing with a list will work lev_similarity = -1*np.array([[distance.levenshtein(w1,w2) for w1 in words] for w2 in words]) affprop = AffinityPropagation(affinity="precomputed", damping=0.5) affprop.fit(lev_similarity) for cluster_id in np.unique(affprop.labels_): exemplar = words[affprop.cluster_centers_indices_[cluster_id]] cluster = np.unique(words[np.nonzero(affprop.labels_==cluster_id)]) cluster_str = ", ".join(cluster) print(" - *%s:* %s" % (exemplar, cluster_str)) Output was (exemplars in italics to the left of the cluster they are exemplar of): have: chances, edit, hand, have, high following: following problem: problem I: I, a, at, etc, in, list, of possibly: possibly cluster: cluster word: For, and, for, long, need, should, very, word, words similar: similar Levenshtein: Levenshtein distance: distance the: that, the, this, to, with same: example, list, names, same, such, surnames algorithm: algorithm, alogrithm appear: appear, appears Running it on a list of 50 random first names: Diane: Deana, Diane, Dionne, Gerald, Irina, Lisette, Minna, Nicki, Ricki Jani: Clair, Jani, Jason, Jc, Kimi, Lang, Marcus, Maxima, Randi, Raul Verline: Destiny, Kellye, Marylin, Mercedes, Sterling, Verline Glenn: Elenor, Glenn, Gwenda Armandina: Armandina, Augustina Shiela: Ahmed, Estella, Milissa, Shiela, Thresa, Wynell Laureen: Autumn, Haydee, Laureen, Lauren Alberto: Albertha, Alberto, Robert Lore: Ammie, Doreen, Eura, Josef, Lore, Lori, Porter Looks pretty great to me (that was fun).
Clustering a long list of strings (words) into similarity groups Seconding @micans recommendation for Affinity Propagation. From the paper: L Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points." science 315.5814 (2007): 972-976
4,620
Clustering a long list of strings (words) into similarity groups
Use graph clustering algorithms, such as Louvain clustering, Restricted Neighbourhood Search Clustering (RNSC), Affinity Propagation Clustering (APC), or the Markov Cluster algorithm (MCL).
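As one concrete route (a sketch, not the only way to do it): build a graph whose edge weights encode string similarity and run Louvain community detection. This assumes networkx 2.8 or later for louvain_communities and reuses the third-party distance package from the affinity-propagation answer above; the word list and the distance threshold are arbitrary.

```python
# Sketch of graph clustering on string similarity (illustrative settings only).
import networkx as nx
import distance   # same package as in the affinity-propagation example above

words = ["algorithm", "alogrithm", "cluster", "clusters", "word", "words", "appear"]

G = nx.Graph()
G.add_nodes_from(words)
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        d = distance.levenshtein(w1, w2)
        if d <= 2:                              # connect only fairly similar strings
            G.add_edge(w1, w2, weight=1.0 / (1.0 + d))

for community in nx.community.louvain_communities(G, weight="weight", seed=0):
    print(sorted(community))
```

The same weighted edge list could equally be handed to an MCL or affinity-propagation implementation.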
Clustering a long list of strings (words) into similarity groups
Use graph clustering algorithms, such as Louvain clustering, Restricted Neighbourhood Search Clustering (RNSC), Affinity Propagation Clustering (APC), or the Markov Cluster algorithm (MCL).
Clustering a long list of strings (words) into similarity groups Use graph clustering algorithms, such as Louvain clustering, Restricted Neighbourhood Search Clustering (RNSC), Affinity Propagation Clustering (APC), or the Markov Cluster algorithm (MCL).
Clustering a long list of strings (words) into similarity groups Use graph clustering algorithms, such as Louvain clustering, Restricted Neighbourhood Search Clustering (RNSC), Affinity Propagation Clustering (APC), or the Markov Cluster algorithm (MCL).
4,621
Clustering a long list of strings (words) into similarity groups
You could try the vector space model with the n-grams of the words as the vector space entries. I think you would have to use a measure like cosine similarity in this case instead of edit distance.
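A small sketch of this idea with scikit-learn: represent each word by character n-gram counts and compare words with cosine similarity. The n-gram range and the word list are arbitrary, and the resulting similarity matrix could then be handed to any clustering method that accepts precomputed similarities.

```python
# Character n-gram vector space plus cosine similarity (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

words = ["algorithm", "alogrithm", "cluster", "clusters", "word", "words"]

vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vec.fit_transform(words)

sim = cosine_similarity(X)   # sim[i, j] near 1 means the strings share many n-grams
print(sim.round(2))
```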
Clustering a long list of strings (words) into similarity groups
You could try the vector space model with the n-grams of the words as the vector space entries. I think you would have to use a measure like cosine similarity in this case instead of edit distance.
Clustering a long list of strings (words) into similarity groups You could try the vector space model with the n-grams of the words as the vector space entries. I think you would have to use a measure like cosine similarity in this case instead of edit distance.
Clustering a long list of strings (words) into similarity groups You could try the vector space model with the n-grams of the words as the vector space entries. I think you would have to use a measure like cosine similarity in this case instead of edit distance.
4,622
Probability distribution for different probabilities
This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of success the chance of success on both trials is $p_1 p_2$, the chance of no successes is $(1-p_1)(1-p_2)$, and the chance of one success is $p_1(1-p_2) + (1-p_1)p_2$. That last expression owes its validity to the fact that the two ways of getting exactly one success are mutually exclusive: at most one of them can actually happen. That means their probabilities add. By means of these two rules--independent probabilities multiply and mutually exclusive ones add--you can work out the answers for, say, 16 trials with probabilities $p_1, \ldots, p_{16}$. To do so, you need to account for all the ways of obtaining each given number of successes (such as 9). There are $\binom{16}{9} = 11440$ ways to achieve 9 successes. One of them, for example, occurs when trials 1, 2, 4, 5, 6, 11, 12, 14, and 15 are successes and the others are failures. The successes had probabilities $p_1, p_2, p_4, p_5, p_6, p_{11}, p_{12}, p_{14},$ and $p_{15}$ and the failures had probabilities $1-p_3, 1-p_7, \ldots, 1-p_{13}, 1-p_{16}$. Multiplying these 16 numbers gives the chance of this particular sequence of outcomes. Summing this number along with the 11,439 remaining such numbers gives the answer. Of course you would use a computer. With many more than 16 trials, there is a need to approximate the distribution. Provided none of the probabilities $p_i$ and $1-p_i$ get too small, a Normal approximation tends to work well. With this method you note that the expectation of the sum of $n$ trials is $\mu = p_1 + p_2 + \cdots + p_n$ and (because the trials are independent) the variance is $\sigma^2 = p_1(1-p_1) + p_2(1-p_2) + \cdots + p_n(1-p_n)$. You then pretend the distribution of sums is Normal with mean $\mu$ and standard deviation $\sigma$. The answers tend to be good for computing probabilities corresponding to a proportion of successes that differs from $\mu$ by no more than a few multiples of $\sigma$. As $n$ grows large this approximation gets ever more accurate and works for even larger multiples of $\sigma$ away from $\mu$.
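As a quick numerical illustration of the normal approximation described here (not part of the original answer), using the $p_i = i/17$ example that appears in a later answer below, where the exact value is about 0.198. A continuity correction is applied for $P(S=9)$.

```python
# Normal approximation to the sum of independent, non-identical Bernoulli trials.
import numpy as np
from scipy.stats import norm

p = np.arange(1, 17) / 17
mu = p.sum()                           # expectation of the sum
sigma = np.sqrt((p * (1 - p)).sum())   # standard deviation of the sum

approx = norm.cdf(9.5, mu, sigma) - norm.cdf(8.5, mu, sigma)
print(approx)   # roughly 0.197, close to the exact 0.198
```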
Probability distribution for different probabilities
This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of suc
Probability distribution for different probabilities This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of success the chance of success on both trials is $p_1 p_2$, the chance of no successes is $(1-p_1)(1-p_2)$, and the chance of one success is $p_1(1-p_2) + (1-p_1)p_2$. That last expression owes its validity to the fact that the two ways of getting exactly one success are mutually exclusive: at most one of them can actually happen. That means their probabilities add. By means of these two rules--independent probabilities multiply and mutually exclusive ones add--you can work out the answers for, say, 16 trials with probabilities $p_1, \ldots, p_{16}$. To do so, you need to account for all the ways of obtaining each given number of successes (such as 9). There are $\binom{16}{9} = 11440$ ways to achieve 9 successes. One of them, for example, occurs when trials 1, 2, 4, 5, 6, 11, 12, 14, and 15 are successes and the others are failures. The successes had probabilities $p_1, p_2, p_4, p_5, p_6, p_{11}, p_{12}, p_{14},$ and $p_{15}$ and the failures had probabilities $1-p_3, 1-p_7, \ldots, 1-p_{13}, 1-p_{16}$. Multiplying these 16 numbers gives the chance of this particular sequence of outcomes. Summing this number along with the 11,439 remaining such numbers gives the answer. Of course you would use a computer. With many more than 16 trials, there is a need to approximate the distribution. Provided none of the probabilities $p_i$ and $1-p_i$ get too small, a Normal approximation tends to work well. With this method you note that the expectation of the sum of $n$ trials is $\mu = p_1 + p_2 + \cdots + p_n$ and (because the trials are independent) the variance is $\sigma^2 = p_1(1-p_1) + p_2(1-p_2) + \cdots + p_n(1-p_n)$. You then pretend the distribution of sums is Normal with mean $\mu$ and standard deviation $\sigma$. The answers tend to be good for computing probabilities corresponding to a proportion of successes that differs from $\mu$ by no more than a few multiples of $\sigma$. As $n$ grows large this approximation gets ever more accurate and works for even larger multiples of $\sigma$ away from $\mu$.
Probability distribution for different probabilities This is the sum of 16 (presumably independent) Binomial trials. The assumption of independence allows us to multiply probabilities. Whence, after two trials with probabilities $p_1$ and $p_2$ of suc
4,623
Probability distribution for different probabilities
One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a probability distribution $p_i\sim Dist(\theta)$ with a density function of $g(p|\theta)$ indexed by some parameter $\theta$. You get an integral equation: $$Pr(s=9|n=16,\theta)={16 \choose 9}\int_{0}^{1} p^{9}(1-p)^{7}g(p|\theta)dp $$ The binomial probability comes from setting $g(p|\theta)=\delta(p-\theta)$, the normal approximation comes from (I think) setting $g(p|\theta)=g(p|\mu,\sigma)=\frac{1}{\sigma}\phi\left(\frac{p-\mu}{\sigma}\right)$ (with $\mu$ and $\sigma$ as defined in @whuber's answer) and then noting the "tails" of this PDF fall off sharply around the peak. You could also use a beta distribution, which would lead to a simple analytic form, and which need not suffer from the "small p" problem that the normal approximation does - as beta is quite flexible. Use a $beta(\alpha,\beta)$ distribution with $\alpha,\beta$ set by the solutions to the following equations (these are the "minimum KL divergence" estimates): $$\psi(\alpha)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[p_{i}]$$ $$\psi(\beta)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[1-p_{i}]$$ where $\psi(.)$ is the digamma function - closely related to the harmonic series. We get the "beta-binomial" compound distribution: $${16 \choose 9}\frac{1}{B(\alpha,\beta)}\int_{0}^{1} p^{9+\alpha-1}(1-p)^{7+\beta-1}dp ={16 \choose 9}\frac{B(\alpha+9,\beta+7)}{B(\alpha,\beta)}$$ This distribution converges towards a normal distribution in the case that @whuber points out - but should give reasonable answers for small $n$ and skewed $p_i$ - but not for multimodal $p_i$, as the beta distribution only has one peak. But you can easily fix this, by simply using $M$ beta distributions for the $M$ modes. You break up the integral from $0<p<1$ into $M$ pieces so that each piece has a unique mode (and enough data to estimate parameters), and fit a beta distribution within each piece. Then add up the results, noting that on making the change of variables $p=\frac{x-L}{U-L}$ for $L<x<U$ the beta integral transforms to: $$B(\alpha,\beta)=\int_{L}^{U}\frac{(x-L)^{\alpha-1}(U-x)^{\beta-1}}{(U-L)^{\alpha+\beta-1}}dx$$
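A sketch of how these equations might be solved numerically with SciPy, and of the resulting beta-binomial probability, for the $p_i = i/17$ example used elsewhere on this page. The solver starting values are arbitrary; the printed figures should roughly match the $\alpha=\beta\approx 1.32$ and probability near $0.07$ quoted in a later answer below.

```python
# Minimum-KL beta fit and beta-binomial probability (hedged sketch).
import numpy as np
from scipy.special import digamma, betaln, comb
from scipy.optimize import fsolve

p = np.arange(1, 17) / 17
m1, m2 = np.mean(np.log(p)), np.mean(np.log(1 - p))

def equations(ab):
    a, b = ab
    return [digamma(a) - digamma(a + b) - m1,
            digamma(b) - digamma(a + b) - m2]

a, b = fsolve(equations, x0=[1.0, 1.0])

# beta-binomial probability of 9 successes in 16 trials under the fitted beta
log_p9 = np.log(comb(16, 9)) + betaln(a + 9, b + 7) - betaln(a, b)
print(a, b, np.exp(log_p9))
```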
Probability distribution for different probabilities
One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a proba
Probability distribution for different probabilities One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a probability distribution $p_i\sim Dist(\theta)$ with a density function of $g(p|\theta)$ indexed by some parameter $\theta$. You get an integral equation: $$Pr(s=9|n=16,\theta)={16 \choose 9}\int_{0}^{1} p^{9}(1-p)^{7}g(p|\theta)dp $$ The binomial probability comes from setting $g(p|\theta)=\delta(p-\theta)$, the normal approximation comes from (I think) setting $g(p|\theta)=g(p|\mu,\sigma)=\frac{1}{\sigma}\phi\left(\frac{p-\mu}{\sigma}\right)$ (with $\mu$ and $\sigma$ as defined in @whuber's answer) and then noting the "tails" of this PDF fall off sharply around the peak. You could also use a beta distribution, which would lead to a simple analytic form, and which need not suffer from the "small p" problem that the normal approximation does - as beta is quite flexible. Use a $beta(\alpha,\beta)$ distribution with $\alpha,\beta$ set by the solutions to the following equations (these are the "minimum KL divergence" estimates): $$\psi(\alpha)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[p_{i}]$$ $$\psi(\beta)-\psi(\alpha+\beta)=\frac{1}{n}\sum_{i=1}^{n}log[1-p_{i}]$$ where $\psi(.)$ is the digamma function - closely related to the harmonic series. We get the "beta-binomial" compound distribution: $${16 \choose 9}\frac{1}{B(\alpha,\beta)}\int_{0}^{1} p^{9+\alpha-1}(1-p)^{7+\beta-1}dp ={16 \choose 9}\frac{B(\alpha+9,\beta+7)}{B(\alpha,\beta)}$$ This distribution converges towards a normal distribution in the case that @whuber points out - but should give reasonable answers for small $n$ and skewed $p_i$ - but not for multimodal $p_i$, as the beta distribution only has one peak. But you can easily fix this, by simply using $M$ beta distributions for the $M$ modes. You break up the integral from $0<p<1$ into $M$ pieces so that each piece has a unique mode (and enough data to estimate parameters), and fit a beta distribution within each piece. Then add up the results, noting that on making the change of variables $p=\frac{x-L}{U-L}$ for $L<x<U$ the beta integral transforms to: $$B(\alpha,\beta)=\int_{L}^{U}\frac{(x-L)^{\alpha-1}(U-x)^{\beta-1}}{(U-L)^{\alpha+\beta-1}}dx$$
Probability distribution for different probabilities One alternative to @whuber's normal approximation is to use "mixing" probabilities, or a hierarchical model. This would apply when the $p_i$ are similar in some way, and you can model this by a proba
4,624
Probability distribution for different probabilities
Let $X_i$ ~ $Bernoulli(p_i)$ with probability generating function (pgf): $$\text{pgf} = E[t^{X_i}] = 1 - p_i (1-t)$$ Let $S = \sum_{i=1}^n X_i$ denote the sum of $n$ such independent random variables. Then, the pgf for the sum $S$ of $n=16$ such variables is: $$\begin{align*}\displaystyle \text{pgfS} &= E[t^S] \\&= E[t^{X_1}] E[t^{X_2}] \dots E[t^{X_{16}}] \text{ (... by independence)} \\ &= \prod _{i=1}^{16} \left(1-p_i(1-t) \right)\end{align*}$$ We seek $P(S=9)$, which is: $$\frac{1}{9!}\frac{d^9 \text{pgfS}}{dt^9}|_{t=0}$$ ALL DONE. This produces the exact symbolic solution as a function of the $p_i$. The answer is rather long to print on screen, but it is entirely tractable, and takes less than $\frac{1}{100}$th of a second to evaluate using Mathematica on my computer. Examples If $p_i = \frac{i}{17}, i= 1 \text{ to } 16$, then: $P(S=9) = \frac{9647941854334808184}{48661191875666868481} = 0.198268 \dots$ If $p_i = \frac{\sqrt{i}}{17}, i= 1 \text{ to } 16$, then: $P(S=9) = 0.000228613 \dots$ More than 16 trials? With more than 16 trials, there is no need to approximate the distribution. The above exact method works just as easily for examples with say $n = 50$ or $n = 100$. For instance, when $n = 50$, it takes less than $\frac{1}{10}$th of second to evaluate the entire pmf (i.e. at every value $s = 0, 1, \dots, 50$) using the code below. Mathematica code Given a vector of $p_i$ values, say: n = 16; pvals = Table[Subscript[p, i] -> i/(n+1), {i, n}]; ... here is some Mathematica code to do everything required: pgfS = Expand[ Product[1-(1-t)Subscript[p,i], {i, n}] /. pvals]; D[pgfS, {t, 9}]/9! /. t -> 0 // N 0.198268 To derive the entire pmf: Table[D[pgfS, {t,s}]/s! /. t -> 0 // N, {s, 0, n}] ... or use the even neater and faster (thanks to a suggestion from Ray Koopman below): CoefficientList[pgfS, t] // N For an example with $n = 1000$, it takes just 1 second to calculate pgfS, and then 0.002 seconds to derive the entire pmf using CoefficientList, so it is extremely efficient.
Probability distribution for different probabilities
Let $X_i$ ~ $Bernoulli(p_i)$ with probability generating function (pgf): $$\text{pgf} = E[t^{X_i}] = 1 - p_i (1-t)$$ Let $S = \sum_{i=1}^n X_i$ denote the sum of $n$ such independent random variables.
Probability distribution for different probabilities Let $X_i$ ~ $Bernoulli(p_i)$ with probability generating function (pgf): $$\text{pgf} = E[t^{X_i}] = 1 - p_i (1-t)$$ Let $S = \sum_{i=1}^n X_i$ denote the sum of $n$ such independent random variables. Then, the pgf for the sum $S$ of $n=16$ such variables is: $$\begin{align*}\displaystyle \text{pgfS} &= E[t^S] \\&= E[t^{X_1}] E[t^{X_2}] \dots E[t^{X_{16}}] \text{ (... by independence)} \\ &= \prod _{i=1}^{16} \left(1-p_i(1-t) \right)\end{align*}$$ We seek $P(S=9)$, which is: $$\frac{1}{9!}\frac{d^9 \text{pgfS}}{dt^9}|_{t=0}$$ ALL DONE. This produces the exact symbolic solution as a function of the $p_i$. The answer is rather long to print on screen, but it is entirely tractable, and takes less than $\frac{1}{100}$th of a second to evaluate using Mathematica on my computer. Examples If $p_i = \frac{i}{17}, i= 1 \text{ to } 16$, then: $P(S=9) = \frac{9647941854334808184}{48661191875666868481} = 0.198268 \dots$ If $p_i = \frac{\sqrt{i}}{17}, i= 1 \text{ to } 16$, then: $P(S=9) = 0.000228613 \dots$ More than 16 trials? With more than 16 trials, there is no need to approximate the distribution. The above exact method works just as easily for examples with say $n = 50$ or $n = 100$. For instance, when $n = 50$, it takes less than $\frac{1}{10}$th of second to evaluate the entire pmf (i.e. at every value $s = 0, 1, \dots, 50$) using the code below. Mathematica code Given a vector of $p_i$ values, say: n = 16; pvals = Table[Subscript[p, i] -> i/(n+1), {i, n}]; ... here is some Mathematica code to do everything required: pgfS = Expand[ Product[1-(1-t)Subscript[p,i], {i, n}] /. pvals]; D[pgfS, {t, 9}]/9! /. t -> 0 // N 0.198268 To derive the entire pmf: Table[D[pgfS, {t,s}]/s! /. t -> 0 // N, {s, 0, n}] ... or use the even neater and faster (thanks to a suggestion from Ray Koopman below): CoefficientList[pgfS, t] // N For an example with $n = 1000$, it takes just 1 second to calculate pgfS, and then 0.002 seconds to derive the entire pmf using CoefficientList, so it is extremely efficient.
Probability distribution for different probabilities Let $X_i$ ~ $Bernoulli(p_i)$ with probability generating function (pgf): $$\text{pgf} = E[t^{X_i}] = 1 - p_i (1-t)$$ Let $S = \sum_{i=1}^n X_i$ denote the sum of $n$ such independent random variables.
4,625
Probability distribution for different probabilities
The (in general intractable) pmf is $$ \Pr(S=k) = \sum_{\substack{A\subset\{1,\dots,n\}\\ |A|=k}} \left( \prod_{i\in A} p_i \right)\left(\prod_{j\in \{1,\dots,n\}\setminus A} (1-p_j) \right) \, . $$ R code: p <- seq(1, 16) / 17 cat(p, "\n") n <- length(p) k <- 9 S <- seq(1, n) A <- combn(S, k) pr <- 0 for (i in 1:choose(n, k)) { pr <- pr + exp(sum(log(p[A[,i]])) + sum(log(1 - p[setdiff(S, A[,i])]))) } cat("Pr(S = ", k, ") = ", pr, "\n", sep = "") For the $p_i$'s used in wolfies answer, we have: Pr(S = 9) = 0.1982677 When $n$ grows, use a convolution.
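The convolution alluded to in the last sentence takes only a few lines (sketched here in Python; the same idea is equally short in R): build the pmf of the sum by convolving one $[1-p_i,\ p_i]$ factor at a time, which scales quadratically in $n$ instead of combinatorially.

```python
# Poisson-binomial pmf by direct convolution (matches the value printed above).
import numpy as np

p = np.arange(1, 17) / 17        # the same p_i as in the example above
dist = np.array([1.0])           # pmf of a sum of zero trials
for pi in p:
    dist = np.convolve(dist, [1 - pi, pi])

print(dist[9])                   # about 0.1983
```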
Probability distribution for different probabilities
The (in general intractable) pmf is $$ \Pr(S=k) = \sum_{\substack{A\subset\{1,\dots,n\}\\ |A|=k}} \left( \prod_{i\in A} p_i \right)\left(\prod_{j\in \{1,\dots,n\}\setminus A} (1-p_j) \right) \, . $$
Probability distribution for different probabilities The (in general intractable) pmf is $$ \Pr(S=k) = \sum_{\substack{A\subset\{1,\dots,n\}\\ |A|=k}} \left( \prod_{i\in A} p_i \right)\left(\prod_{j\in \{1,\dots,n\}\setminus A} (1-p_j) \right) \, . $$ R code: p <- seq(1, 16) / 17 cat(p, "\n") n <- length(p) k <- 9 S <- seq(1, n) A <- combn(S, k) pr <- 0 for (i in 1:choose(n, k)) { pr <- pr + exp(sum(log(p[A[,i]])) + sum(log(1 - p[setdiff(S, A[,i])]))) } cat("Pr(S = ", k, ") = ", pr, "\n", sep = "") For the $p_i$'s used in wolfies answer, we have: Pr(S = 9) = 0.1982677 When $n$ grows, use a convolution.
Probability distribution for different probabilities The (in general intractable) pmf is $$ \Pr(S=k) = \sum_{\substack{A\subset\{1,\dots,n\}\\ |A|=k}} \left( \prod_{i\in A} p_i \right)\left(\prod_{j\in \{1,\dots,n\}\setminus A} (1-p_j) \right) \, . $$
4,626
Probability distribution for different probabilities
@wolfies comment, and my attempt at a response to it revealed an important problem with my other answer, which I will discuss later. Specific Case (n=16) There is a fairly efficient way to code up the full distribution by using the "trick" of using base 2 (binary) numbers in the calculation. It only requires 4 lines of R code to get the full distribution of $Y=\sum_{i=1}^{n} Z_i$ where $Pr(Z_i=1)=p_i$. Basically, there are a total of $2^n$ choices of the vector $z=(z_1,\dots,z_n)$ that the binary variables $Z_i$ could take. Now suppose we number each distinct choice from $1$ up to $2^n$. This on its own is nothing special, but now suppose that we represent the "choice number" using base 2 arithmetic. Now take $n=3$ so I can write down all the choices so there are $2^3=8$ choices. Then $1,2,3,4,5,6,7,8$ in "ordinary numbers" becomes $1,10,11,100,101,110,111,1000$ in "binary numbers". Now suppose we write these as four digit numbers, then we have $0001,0010,0011,0100,0101,0110,0111,1000$. Now look at the last $3$ digits of each number - $001$ can be thought of as $(Z_1=0,Z_2=0,Z_3=1)\implies Y=1$, etc. Counting in binary form provides an efficient way to organise the summation. Fortunately, there is an R function which can do this binary conversion for us, called intToBits(x) and we convert the raw binary form into a numeric via as.numeric(intToBits(x)), then we will get a vector with $32$ elements, each element being the digit of the base 2 version of our number (read from right to left, not left to right). Using this trick combined with some other R vectorisations, we can calculate the probability that $y=9$ in 4 lines of R code: exact_calc <- function(y,p){ n <- length(p) z <- t(matrix(as.numeric(intToBits(1:2^n)),ncol=2^n))[,1:n] #don't need columns n+1,...,32 as these are always 0 pz <- z%*%log(p/(1-p))+sum(log(1-p)) ydist <- rowsum(exp(pz),rowSums(z)) return(ydist[y+1]) } Plugging in the uniform case $p_i^{(1)}=\frac{i}{17}$ and the sqrt root case $p_i^{(2)}=\frac{\sqrt{i}}{17}$ gives a full distribution for y as: $$\begin{array}{c|c}y & Pr(Y=y|p_i=\frac{i}{17}) & Pr(Y=y|p_i=\frac{\sqrt{i}}{17})\\ \hline 0 & 0.0000 & 0.0558 \\ 1 & 0.0000 & 0.1784 \\ 2 & 0.0003 & 0.2652 \\ 3 & 0.0026 & 0.2430 \\ 4 & 0.0139 & 0.1536 \\ 5 & 0.0491 & 0.0710 \\ 6 & 0.1181 & 0.0248 \\ 7 & 0.1983 & 0.0067 \\ 8 & 0.2353 & 0.0014 \\ 9 & 0.1983 & 0.0002 \\ 10 & 0.1181 & 0.0000 \\ 11 & 0.0491 & 0.0000 \\ 12 & 0.0139 & 0.0000 \\ 13 & 0.0026 & 0.0000 \\ 14 & 0.0003 & 0.0000 \\ 15 & 0.0000 & 0.0000 \\ 16 & 0.0000 & 0.0000 \\ \end{array}$$ So for the specific problem of $y$ successes in $16$ trials, the exact calculations are straight-forward. This also works for a number of probabilities up to about $n=20$ - beyond that you are likely to start to run into memory problems, and different computing tricks are needed. Note that by applying my suggested "beta distribution" we get parameter estimates of $\alpha=\beta=1.3206$ and this gives a probability estimate that is nearly uniform in $y$, giving an approximate value of $pr(y=9)=0.06799\approx\frac{1}{17}$. This seems strange given that a density of a beta distribution with $\alpha=\beta=1.3206$ closely approximates the histogram of the $p_i$ values. What went wrong? General Case I will now discuss the more general case, and why my simple beta approximation failed. 
Basically, writing $(y|n,p)\sim Binom(n,p)$ and then mixing over $p$ with another distribution $p\sim f(\theta)$ actually makes an important assumption - that we can approximate the actual probability with a single binomial probability - the only problem that remains is which value of $p$ to use. One way to see this is to use the mixing density which is discrete uniform over the actual $p_i$. So we replace the beta distribution $p\sim Beta(a,b)$ with a discrete density of $p\sim \sum_{i=1}^{16}w_i\delta(p-p_i)$. Then using the mixing approximation can be expressed in words as: choose a $p_i$ value with probability $w_i$, and assume all Bernoulli trials have this probability. Clearly, for such an approximation to work well, most of the $p_i$ values should be similar to each other. This basically means that for @wolfies' uniform distribution of values, $p_i=\frac{i}{17}$ results in a woefully bad approximation when using the beta mixing distribution. This also explains why the approximation is much better for $p_i=\frac{\sqrt{i}}{17}$ - they are less spread out. The mixing then uses the observed $p_i$ to average over all possible choices of a single $p$. Now because "mixing" is like a weighted average, it cannot possibly do any better than using the single best $p$. So if the $p_i$ are sufficiently spread out, there can be no single $p$ that could provide a good approximation to all $p_i$. One thing I did say in my other answer was that it may be better to use a mixture of beta distributions over a restricted range - but this still won't help here because this is still mixing over a single $p$. What makes more sense is to split the interval $(0,1)$ up into pieces and have a binomial within each piece. For example, we could choose $(0,0.1,0.2,\dots,0.9,1)$ as our splits and fit a binomial within each $0.1$ range of probability. Basically, within each split, we would fit a simple approximation, such as using a binomial with probability equal to the average of the $p_i$ in that range. If we make the intervals small enough, the approximation becomes arbitrarily good. But note that all this does is leave us with having to deal with a sum of independent binomial trials with different probabilities, instead of Bernoulli trials. However, the previous part of this answer showed that we can do the exact calculations provided that the number of binomials is sufficiently small, say 10-15 or so. To extend the Bernoulli-based answer to a binomial-based one, we simply "re-interpret" what the $Z_i$ variables are. We simply state that $Z_i=I(X_i>0)$ - this reduces to the original Bernoulli-based $Z_i$ but now says which binomials the successes are coming from. So the case $(Z_1=0,Z_2=0,Z_3=1)$ now means that all the "successes" come from the third binomial, and none from the first two. Note that this is still "exponential" in that the number of calculations is something like $k^g$ where $g$ is the number of binomials, and $k$ is the group size - so you have $Y\approx\sum_{j=1}^{g}X_j$ where $X_j\sim Bin(k,p_j)$. But this is better than the $2^{gk}$ that you'd be dealing with by using Bernoulli random variables. For example, suppose we split the $n=16$ probabilities into $g=4$ groups with $k=4$ probabilities in each group. This gives $4^4=256$ calculations, compared to $2^{16}=65536$. By choosing $g=10$ groups, and noting that the limit was about $n=20$ which is about $10^7$ cells, we can effectively use this method to increase the maximum $n$ to $n=50$. 
If we make a cruder approximation, by lowering $g$, we will increase the "feasible" size for $n$. $g=5$ means that you can have an effective $n$ of about $125$. Beyond this the normal approximation should be extremely accurate.
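To make the grouping idea concrete, here is a small Python sketch that approximates each group of four probabilities by a single binomial with the group-average $p$ and then combines the group pmfs; a convolution is used here in place of the $k^g$ enumeration described above. For the $p_i = i/17$ example this lands close to the exact value of about 0.198.

```python
# Grouped-binomial approximation (illustrative; grouping and sizes are arbitrary).
import numpy as np
from scipy.stats import binom

p = np.arange(1, 17) / 17
groups = np.split(p, 4)                  # g = 4 groups of k = 4 probabilities each

dist = np.array([1.0])
for grp in groups:
    k = len(grp)
    dist = np.convolve(dist, binom.pmf(np.arange(k + 1), k, grp.mean()))

print(dist[9])   # approximately 0.197 here, versus the exact 0.198
```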
Probability distribution for different probabilities
@wolfies comment, and my attempt at a response to it revealed an important problem with my other answer, which I will discuss later. Specific Case (n=16) There is a fairly efficient way to code up the
Probability distribution for different probabilities @wolfies comment, and my attempt at a response to it revealed an important problem with my other answer, which I will discuss later. Specific Case (n=16) There is a fairly efficient way to code up the full distribution by using the "trick" of using base 2 (binary) numbers in the calculation. It only requires 4 lines of R code to get the full distribution of $Y=\sum_{i=1}^{n} Z_i$ where $Pr(Z_i=1)=p_i$. Basically, there are a total of $2^n$ choices of the vector $z=(z_1,\dots,z_n)$ that the binary variables $Z_i$ could take. Now suppose we number each distinct choice from $1$ up to $2^n$. This on its own is nothing special, but now suppose that we represent the "choice number" using base 2 arithmetic. Now take $n=3$ so I can write down all the choices so there are $2^3=8$ choices. Then $1,2,3,4,5,6,7,8$ in "ordinary numbers" becomes $1,10,11,100,101,110,111,1000$ in "binary numbers". Now suppose we write these as four digit numbers, then we have $0001,0010,0011,0100,0101,0110,0111,1000$. Now look at the last $3$ digits of each number - $001$ can be thought of as $(Z_1=0,Z_2=0,Z_3=1)\implies Y=1$, etc. Counting in binary form provides an efficient way to organise the summation. Fortunately, there is an R function which can do this binary conversion for us, called intToBits(x) and we convert the raw binary form into a numeric via as.numeric(intToBits(x)), then we will get a vector with $32$ elements, each element being the digit of the base 2 version of our number (read from right to left, not left to right). Using this trick combined with some other R vectorisations, we can calculate the probability that $y=9$ in 4 lines of R code: exact_calc <- function(y,p){ n <- length(p) z <- t(matrix(as.numeric(intToBits(1:2^n)),ncol=2^n))[,1:n] #don't need columns n+1,...,32 as these are always 0 pz <- z%*%log(p/(1-p))+sum(log(1-p)) ydist <- rowsum(exp(pz),rowSums(z)) return(ydist[y+1]) } Plugging in the uniform case $p_i^{(1)}=\frac{i}{17}$ and the sqrt root case $p_i^{(2)}=\frac{\sqrt{i}}{17}$ gives a full distribution for y as: $$\begin{array}{c|c}y & Pr(Y=y|p_i=\frac{i}{17}) & Pr(Y=y|p_i=\frac{\sqrt{i}}{17})\\ \hline 0 & 0.0000 & 0.0558 \\ 1 & 0.0000 & 0.1784 \\ 2 & 0.0003 & 0.2652 \\ 3 & 0.0026 & 0.2430 \\ 4 & 0.0139 & 0.1536 \\ 5 & 0.0491 & 0.0710 \\ 6 & 0.1181 & 0.0248 \\ 7 & 0.1983 & 0.0067 \\ 8 & 0.2353 & 0.0014 \\ 9 & 0.1983 & 0.0002 \\ 10 & 0.1181 & 0.0000 \\ 11 & 0.0491 & 0.0000 \\ 12 & 0.0139 & 0.0000 \\ 13 & 0.0026 & 0.0000 \\ 14 & 0.0003 & 0.0000 \\ 15 & 0.0000 & 0.0000 \\ 16 & 0.0000 & 0.0000 \\ \end{array}$$ So for the specific problem of $y$ successes in $16$ trials, the exact calculations are straight-forward. This also works for a number of probabilities up to about $n=20$ - beyond that you are likely to start to run into memory problems, and different computing tricks are needed. Note that by applying my suggested "beta distribution" we get parameter estimates of $\alpha=\beta=1.3206$ and this gives a probability estimate that is nearly uniform in $y$, giving an approximate value of $pr(y=9)=0.06799\approx\frac{1}{17}$. This seems strange given that a density of a beta distribution with $\alpha=\beta=1.3206$ closely approximates the histogram of the $p_i$ values. What went wrong? General Case I will now discuss the more general case, and why my simple beta approximation failed. 
Basically, writing $(y|n,p)\sim Binom(n,p)$ and then mixing over $p$ with another distribution $p\sim f(\theta)$ actually makes an important assumption - that we can approximate the actual probability with a single binomial probability - the only problem that remains is which value of $p$ to use. One way to see this is to use the mixing density which is discrete uniform over the actual $p_i$. So we replace the beta distribution $p\sim Beta(a,b)$ with a discrete density of $p\sim \sum_{i=1}^{16}w_i\delta(p-p_i)$. Then using the mixing approximation can be expressed in words as: choose a $p_i$ value with probability $w_i$, and assume all Bernoulli trials have this probability. Clearly, for such an approximation to work well, most of the $p_i$ values should be similar to each other. This basically means that for @wolfies' uniform distribution of values, $p_i=\frac{i}{17}$ results in a woefully bad approximation when using the beta mixing distribution. This also explains why the approximation is much better for $p_i=\frac{\sqrt{i}}{17}$ - they are less spread out. The mixing then uses the observed $p_i$ to average over all possible choices of a single $p$. Now because "mixing" is like a weighted average, it cannot possibly do any better than using the single best $p$. So if the $p_i$ are sufficiently spread out, there can be no single $p$ that could provide a good approximation to all $p_i$. One thing I did say in my other answer was that it may be better to use a mixture of beta distributions over a restricted range - but this still won't help here because this is still mixing over a single $p$. What makes more sense is to split the interval $(0,1)$ up into pieces and have a binomial within each piece. For example, we could choose $(0,0.1,0.2,\dots,0.9,1)$ as our splits and fit a binomial within each $0.1$ range of probability. Basically, within each split, we would fit a simple approximation, such as using a binomial with probability equal to the average of the $p_i$ in that range. If we make the intervals small enough, the approximation becomes arbitrarily good. But note that all this does is leave us with having to deal with a sum of independent binomial trials with different probabilities, instead of Bernoulli trials. However, the previous part of this answer showed that we can do the exact calculations provided that the number of binomials is sufficiently small, say 10-15 or so. To extend the Bernoulli-based answer to a binomial-based one, we simply "re-interpret" what the $Z_i$ variables are. We simply state that $Z_i=I(X_i>0)$ - this reduces to the original Bernoulli-based $Z_i$ but now says which binomials the successes are coming from. So the case $(Z_1=0,Z_2=0,Z_3=1)$ now means that all the "successes" come from the third binomial, and none from the first two. Note that this is still "exponential" in that the number of calculations is something like $k^g$ where $g$ is the number of binomials, and $k$ is the group size - so you have $Y\approx\sum_{j=1}^{g}X_j$ where $X_j\sim Bin(k,p_j)$. But this is better than the $2^{gk}$ that you'd be dealing with by using Bernoulli random variables. For example, suppose we split the $n=16$ probabilities into $g=4$ groups with $k=4$ probabilities in each group. This gives $4^4=256$ calculations, compared to $2^{16}=65536$. By choosing $g=10$ groups, and noting that the limit was about $n=20$ which is about $10^7$ cells, we can effectively use this method to increase the maximum $n$ to $n=50$. 
If we make a cruder approximation, by lowering $g$, we will increase the "feasible" size for $n$. $g=5$ means that you can have an effective $n$ of about $125$. Beyond this the normal approximation should be extremely accurate.
Probability distribution for different probabilities @wolfies comment, and my attempt at a response to it revealed an important problem with my other answer, which I will discuss later. Specific Case (n=16) There is a fairly efficient way to code up the
4,627
How do we decide when a small sample is statistically significant or not?
I will describe how a statistician interprets count data. With a tiny bit of practice you can do it, too. The basic analysis When cases arise randomly and independently, the times of their occurrences are reasonably accurately modeled with a Poisson process. This implies that the number of cases appearing in any predetermined interval has a Poisson distribution. The only thing we need to remember about that is that its variance equals its expectation. In less technical jargon, this means that the amount by which the value is likely to differ from the average (its standard error) is proportional to the square root of the average. (See Why is the square root transformation recommended for count data? for an explanation and discussion of the square root and some related transformations of count data.) In practice, we estimate the average by using the observed value. Thus, The standard error of a count of independent events with equal expected rates of occurrence is the square root of the count. (Various modifications of this rule exist for really small counts, especially counts of zero, but that shouldn't be an issue in the present application.) In the case of Vatican City, a rate of 33,666 cases per million corresponds to $$\frac{33666}{10^6} \times 802 = 27$$ cases. The square root of $27$ is $5$ (we usually don't need to worry about additional significant figures for this kind of analysis, which is usually done mentally and approximately). Equivalently, this standard error is $\sqrt{27}$ cases out of $802$ people, equivalent to $6500$ per million. We are therefore justified in stating The Vatican City case rate is $33666\pm 6500$ per million. This shows how silly it is to quote five significant figures for the rate. It is better to acknowledge the large standard error by limiting the sig figs, as in The observed Vatican City case rate is $34000 \pm 6500$ per million. (Do not make the mistake of just taking the square root of the rate! In this example, the square root of 33,666 is only 183, which is far too small. For estimating standard errors square roots apply to counts, not rates.) A good rule of thumb is to use one additional significant digit when reporting the standard error, as I did here (the case rate was rounded to the nearest thousand and its SE was rounded to the nearest 100). A slightly more nuanced analysis Cases are not independent: people catch them from other people and because human beings do not dart about the world like atoms in a vial of hot gas, cases occur in clusters. This violates the independence assumption. What really happens, then, is that the effective count should be somewhere between the number of cases and the number of distinct clusters. We cannot know the latter: but surely it is smaller (perhaps far smaller) than the number of cases. Thus, The square root rule gives a lower bound on the standard error when the events are (positively) correlated. You can sometimes estimate how to adjust the standard error. For instance, if you guess that cases occur in clusters of ten or so, then you should multiply the standard error by the square root of ten. Generally, The standard error of a count of positively correlated events is, very roughly, the square root of the count times the square root of a typical cluster size. This approximation arises by assuming all cases in a cluster are perfectly correlated and otherwise the cases in any two different clusters are independent. 
If we suspect the Vatican City cases are clustered, then in the most extreme case it is a single cluster: the count is $1,$ its square root is $1,$ and the standard error therefore is one whole cluster: namely, about $27$ people. If you want to be cautious about not exaggerating the reliability of the numbers, then, you might think of this Vatican City rate as being somewhere between just above zero and likely less than 70,000 per million ($1\pm 1$ clusters of $27$ out of a population of $802$).
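As a rough illustration of the arithmetic above (my addition, not part of the original answer), here is a minimal Python sketch of the square-root rule applied to the quoted Vatican City figures; the cluster size of 10 is just the guess used in the text.

# Minimal sketch of the square-root rule for the figures quoted above
# (802 residents, 27 observed cases); numbers are illustrative only.
from math import sqrt

population = 802
cases = 27                       # roughly 33,666 per million * 802 / 1e6

se_cases = sqrt(cases)           # standard error of the count (independence assumed)
rate = cases / population * 1e6
se_rate = se_cases / population * 1e6
print(f"rate ~ {rate:.0f} +/- {se_rate:.0f} per million")

# Rough adjustment for clustering: multiply the SE by sqrt(typical cluster size)
cluster_size = 10                # a guess, as in the text
print(f"clustered SE ~ {se_rate * sqrt(cluster_size):.0f} per million")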
How do we decide when a small sample is statistically significant or not?
I will describe how a statistician interprets count data. With a tiny bit of practice you can do it, too. The basic analysis When cases arise randomly and independently, the times of their occurrence
How do we decide when a small sample is statistically significant or not? I will describe how a statistician interprets count data. With a tiny bit of practice you can do it, too. The basic analysis When cases arise randomly and independently, the times of their occurrences are reasonably accurately modeled with a Poisson process. This implies that the number of cases appearing in any predetermined interval has a Poisson distribution. The only thing we need to remember about that is that its variance equals its expectation. In less technical jargon, this means that the amount by which the value is likely to differ from the average (its standard error) is proportional to the square root of the average. (See Why is the square root transformation recommended for count data? for an explanation and discussion of the square root and some related transformations of count data.) In practice, we estimate the average by using the observed value. Thus, The standard error of a count of independent events with equal expected rates of occurrence is the square root of the count. (Various modifications of this rule exist for really small counts, especially counts of zero, but that shouldn't be an issue in the present application.) In the case of Vatican City, a rate of 33,666 cases per million corresponds to $$\frac{33666}{10^6} \times 802 = 27$$ cases. The square root of $27$ is $5$ (we usually don't need to worry about additional significant figures for this kind of analysis, which is usually done mentally and approximately). Equivalently, this standard error is $\sqrt{27}$ cases out of $802$ people, equivalent to $6500$ per million. We are therefore justified in stating The Vatican City case rate is $33666\pm 6500$ per million. This shows how silly it is to quote five significant figures for the rate. It is better to acknowledge the large standard error by limiting the sig figs, as in The observed Vatican City case rate is $34000 \pm 6500$ per million. (Do not make the mistake of just taking the square root of the rate! In this example, the square root of 33,666 is only 183, which is far too small. For estimating standard errors square roots apply to counts, not rates.) A good rule of thumb is to use one additional significant digit when reporting the standard error, as I did here (the case rate was rounded to the nearest thousand and its SE was rounded to the nearest 100). A slightly more nuanced analysis Cases are not independent: people catch them from other people and because human beings do not dart about the world like atoms in a vial of hot gas, cases occur in clusters. This violates the independence assumption. What really happens, then, is that the effective count should be somewhere between the number of cases and the number of distinct clusters. We cannot know the latter: but surely it is smaller (perhaps far smaller) than the number of cases. Thus, The square root rule gives a lower bound on the standard error when the events are (positively) correlated. You can sometimes estimate how to adjust the standard error. For instance, if you guess that cases occur in clusters of ten or so, then you should multiply the standard error by the square root of ten. Generally, The standard error of a count of positively correlated events is, very roughly, the square root of the count times the square root of a typical cluster size. This approximation arises by assuming all cases in a cluster are perfectly correlated and otherwise the cases in any two different clusters are independent. 
If we suspect the Vatican City cases are clustered, then in the most extreme case it is a single cluster: the count is $1,$ its square root is $1,$ and the standard error therefore is one whole cluster: namely, about $27$ people. If you want to be cautious about not exaggerating the reliability of the numbers, then, you might think of this Vatican City rate as being somewhere between just above zero and likely less than 70,000 per million ($1\pm 1$ clusters of $27$ out of a population of $802$).
How do we decide when a small sample is statistically significant or not? I will describe how a statistician interprets count data. With a tiny bit of practice you can do it, too. The basic analysis When cases arise randomly and independently, the times of their occurrence
4,628
How do we decide when a small sample is statistically significant or not?
Quoting Wikipedia: In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis. The result of a statistical test can be significant, or not. The size of the sample is not a test. Significant in what sense? The prevalence of COVID-19 is a characteristic of a particular country at a particular point in time; the fact that one country has a smaller (or larger) prevalence than another country does not make it more, or less, "significant". It is as if you said that taller people are more significant than shorter ones: the statement doesn't make sense. You are correct that a smaller sample can vary more than a larger one, but you need to consider this relative to the size of the population. A sample of 802 people would be small for saying something about the population of China, but in the case of Vatican City this would be the whole population, so there would be no sampling uncertainty. Finally, if you mean that the prevalence in Vatican City is not "significant" because it does not add many cases to the total prevalence of COVID-19 around the world, then you are correct. However, if that is what you are interested in, then rather than looking at relative prevalence (per 100 000 inhabitants) you should look at raw counts, which would obviously be larger for larger countries.
How do we decide when a small sample is statistically significant or not?
Quoting Wikipedia: In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis. Result of a statistical test can be s
How do we decide when a small sample is statistically significant or not? Quoting Wikipedia: In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis. The result of a statistical test can be significant, or not. The size of the sample is not a test. Significant in what sense? The prevalence of COVID-19 is a characteristic of a particular country at a particular point in time; the fact that one country has a smaller (or larger) prevalence than another country does not make it more, or less, "significant". It is as if you said that taller people are more significant than shorter ones: the statement doesn't make sense. You are correct that a smaller sample can vary more than a larger one, but you need to consider this relative to the size of the population. A sample of 802 people would be small for saying something about the population of China, but in the case of Vatican City this would be the whole population, so there would be no sampling uncertainty. Finally, if you mean that the prevalence in Vatican City is not "significant" because it does not add many cases to the total prevalence of COVID-19 around the world, then you are correct. However, if that is what you are interested in, then rather than looking at relative prevalence (per 100 000 inhabitants) you should look at raw counts, which would obviously be larger for larger countries.
How do we decide when a small sample is statistically significant or not? Quoting Wikipedia: In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis. Result of a statistical test can be s
4,629
How do we decide when a small sample is statistically significant or not?
@Avroham. I think the word "significant" is so ambiguous that you shouldn't use it in your question. It has a very definite technical meaning in statistics, but has many other meanings more generally. I think the phrase "statistically convincing" would be better. It is even more ambiguous in one sense, but it doesn't have a technical meaning that can be confused with an everyday meaning. @whuber's excellent reply is still totally relevant with this rewording.
How do we decide when a small sample is statistically significant or not?
@Avroham. I think the word "significant" is so ambiguous that you shouldn't use it in your question. It has a very definite technical meaning in statistics, but has many other meanings more generally. I th
How do we decide when a small sample is statistically significant or not? @Avroham. I think the word "significant" is so ambiguous that you shouldn't use it in your question. It has a very definite technical meaning in statistics, but has many other meanings more generally. I think the phrase "statistically convincing" would be better. It is even more ambiguous in one sense, but it doesn't have a technical meaning that can be confused with an everyday meaning. @whuber's excellent reply is still totally relevant with this rewording.
How do we decide when a small sample is statistically significant or not? @Avroham. I think the word "significant" is so ambiguous that you shouldn't use it in your question. It has a very definite technical meaning in statistics, but has many other meanings more generally. I th
4,630
How do we decide when a small sample is statistically significant or not?
I think what you're asking is whether there is some predetermined minimal sample size that needs to be taken in order to have statistical significance. In the case of looking at the World vs the Vatican in terms of cases/million, it's obvious that a ratio of 7.8 billion to 807 makes any comparison insignificant; i.e., neither is predictive of the other. You want to know what minimal sample size is significant. Is it 780? 7,800? 78,000? 780,000? 7.8 million? 78 M? 780 M? I think you can use small sample sizes when polling voters and get significant results, but with something like COVID-19 it really does come down to factors such as location, population density, technological advancement, whether they have a modern medical system, etc. On its own, the Vatican sample would probably be a good comparison with a 5-block area of New York City in terms of whether they "could" see a contraction rate of 33,000+/million. But is it an indication the world will eventually see a contraction rate of 33,000/million? The Vatican sample then is insignificant in a predictive sense. Without the Vatican sample we already know the virus can spread to a whole household and kill everyone in that house. It can also infect everyone in a house with none of them even showing symptoms.
How do we decide when a small sample is statistically significant or not?
I think what you're asking is if there is some predetermined minimal sample size that needs to be taken in order to have statistical significance. In the case of looking at the World vs the Vatican in
How do we decide when a small sample is statistically significant or not? I think what you're asking is if there is some predetermined minimal sample size that needs to be taken in order to have statistical significance. In the case of looking at the World vs the Vatican in terms of cases/million its obvious with a ratio of 7.8 billion to 807 makes any comparison insignificant. ie, neither is predictive of the other. You want to know what minimal sample size is significant. Is it 780? 7,800? 78,000? 780,000? 7.8 million? 78 M? 780 M? I think you can do small sample sizes when polling voters and get significant results, but with something like covid19 it really does come down to factors such as where, population densities, technological advancement, do they have a modern medical system, etc. On its own, the Vatican sample would probably be a good comparison with a 5 block area of New York City in terms of "could" they see a contraction rate of 33,000+/million? But is it an indication the world will eventually see a contraction rate of 33,000/million? The Vatican sample then is insignificant in a predictive sense. Without the Vatican sample we already know the virus can spread to a whole household and kill everyone in that house. It can also infect everyone in a house and none even show symptoms.
How do we decide when a small sample is statistically significant or not? I think what you're asking is if there is some predetermined minimal sample size that needs to be taken in order to have statistical significance. In the case of looking at the World vs the Vatican in
4,631
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost
I found this introduction which provides some intuitive explanations: In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients. In AdaBoost, ‘shortcomings’ are identified by high-weight data points. By means of an exponential loss function, AdaBoost gives more weights to those samples fitted worse in previous steps. Today, AdaBoost is regarded as a special case of Gradient Boosting in terms of loss function. Historically it preceded Gradient Boosting to which it was later generalized, as shown in the history provided in the introduction: Invent AdaBoost, the first successful boosting algorithm [Freund et al., 1996, Freund and Schapire, 1997] Formulate AdaBoost as gradient descent with a special loss function [Breiman et al., 1998, Breiman, 1999] Generalize AdaBoost to Gradient Boosting in order to handle a variety of loss functions [Friedman et al., 2000, Friedman, 2001]
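As a hedged illustration of the "shortcomings are identified by gradients" idea (my own sketch, not code from the quoted introduction): with squared loss the negative gradient is simply the residual, so a bare-bones gradient-boosting loop can fit each new stump to the current residuals. The data and settings below are invented for illustration.

# Minimal gradient-boosting sketch with squared loss and decision stumps.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

learning_rate, n_rounds = 0.1, 100
F = np.full_like(y, y.mean())          # initial constant prediction
trees = []
for _ in range(n_rounds):
    residuals = y - F                  # negative gradient of the squared loss
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    F += learning_rate * tree.predict(X)
    trees.append(tree)

print("training MSE:", np.mean((y - F) ** 2))

AdaBoost, by contrast, never fits residuals: it reweights the training points (see the next answer for an illustration of that weight update).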
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost
I found this introduction which provides some intuitive explanations: In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients. In AdaBoost, ‘shortcomings’ are id
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost I found this introduction which provides some intuitive explanations: In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients. In AdaBoost, ‘shortcomings’ are identified by high-weight data points. By means of an exponential loss function, AdaBoost gives more weights to those samples fitted worse in previous steps. Today, AdaBoost is regarded as a special case of Gradient Boosting in terms of loss function. Historically it preceded Gradient Boosting to which it was later generalized, as shown in the history provided in the introduction: Invent AdaBoost, the first successful boosting algorithm [Freund et al., 1996, Freund and Schapire, 1997] Formulate AdaBoost as gradient descent with a special loss function [Breiman et al., 1998, Breiman, 1999] Generalize AdaBoost to Gradient Boosting in order to handle a variety of loss functions [Friedman et al., 2000, Friedman, 2001]
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost I found this introduction which provides some intuitive explanations: In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients. In AdaBoost, ‘shortcomings’ are id
4,632
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost
An intuitive explanation of the AdaBoost algorithm Let me build upon @Randel's excellent answer with an illustration of the following point In AdaBoost, ‘shortcomings’ are identified by high-weight data points AdaBoost recap Let $G_m(x) \ m = 1,2,...,M$ be the sequence of weak classifiers, our objective is to build the following: $$G(x) = \text{sign} \left( \alpha_1 G_1(x) + \alpha_2 G_2(x) + ... \alpha_M G_M(x)\right) = \text{sign} \left( \sum_{m = 1}^M \alpha_m G_m(x)\right)$$ The final prediction is a combination of the predictions from all classifiers through a weighted majority vote The coefficients $\alpha_m$ are computed by the boosting algorithm, and weight the contribution of each respective $G_m(x)$. The effect is to give higher influence to the more accurate classifiers in the sequence. At each boosting step, the data is modified by applying weights $w_1, w_2, ..., w_N$ to each training observation. At step $m$ the observations that were misclassified previously have their weights increased Note that at the first step $m=1$ the weights are initialized uniformly $w_i = 1 / N$ AdaBoost on a toy example Consider the toy data set on which I have applied AdaBoost with the following settings: Number of iterations $M = 10$, weak classifier = Decision Tree of depth 1 with 2 leaf nodes. The boundary between red and blue data points is clearly non-linear, yet the algorithm does pretty well. Visualizing the sequence of weak learners and the sample weights The first 6 weak learners $m = 1,2,...,6$ are shown below. The scatter points are scaled according to their respective sample weight at each iteration First iteration: The decision boundary is very simple (linear) since these are weak learners All points are of the same size, as expected 6 blue points are in the red region and are misclassified Second iteration: The linear decision boundary has changed The previously misclassified blue points are now larger (greater sample weight) and have influenced the decision boundary 9 blue points are now misclassified Final result after 10 iterations All classifiers have a linear decision boundary, at different positions. The resulting coefficients of the first 6 iterations $\alpha_m$ are: 1.041, 0.875, 0.837, 0.781, 1.04, 0.938... As expected, the first iteration has the largest coefficient as it is the one with the fewest misclassifications. Next steps An intuitive explanation of gradient boosting - to be completed Sources and further reading: python code and original figures here https://www.cs.cmu.edu/~aarti/Class/10701/slides/Lecture10.pdf
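Below is a rough Python sketch of the weight-update loop described above (my own code, not the author's linked notebook), using one common formulation, $\alpha_m = \tfrac12\ln\frac{1-\mathrm{err}_m}{\mathrm{err}_m}$, with depth-1 stumps; the toy data set is invented for illustration.

# Rough re-implementation of discrete AdaBoost with decision stumps.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5, 1, -1)   # non-linear boundary

M = 10
w = np.full(len(y), 1 / len(y))        # uniform initial weights
stumps, alphas = [], []
for m in range(M):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / err)
    w *= np.exp(-alpha * y * pred)     # misclassified points get larger weights
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

F = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("alphas:", np.round(alphas, 3), " train accuracy:", np.mean(np.sign(F) == y))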
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost
An intuitive explanation of the AdaBoost algorithm Let me build upon @Randel's excellent answer with an illustration of the following point In AdaBoost, ‘shortcomings’ are identified by high-weight dat
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost An intuitive explanation of AdaBoost algorithn Let me build upon @Randel's excellent answer with an illustration of the following point In AdaBoost, ‘shortcomings’ are identified by high-weight data points AdaBoost recap Let $G_m(x) \ m = 1,2,...,M$ be the sequence of weak classifiers, our objective is to build the following: $$G(x) = \text{sign} \left( \alpha_1 G_1(x) + \alpha_2 G_2(x) + ... \alpha_M G_M(x)\right) = \text{sign} \left( \sum_{m = 1}^M \alpha_m G_m(x)\right)$$ The final prediction is a combination of the predictions from all classifiers through a weighted majority vote The coefficients $\alpha_m$ are computed by the boosting algorithm, and weight the contribution of each respective $G_m(x)$. The effect is to give higher influence to the more accurate classifiers in the sequence. At each boosting step, the data is modified by applying weights $w_1, w_2, ..., w_N$ to each training observation. At step $m$ the observations that were misclassified previously have their weights increased Note that at the first step $m=1$ the weights are initialized uniformly $w_i = 1 / N$ AdaBoost on a toy example Consider the toy data set on which I have applied AdaBoost with the following settings: Number of iterations $M = 10$, weak classifier = Decision Tree of depth 1 with 2 leaf nodes. The boundary between red and blue data points is clearly non linear, yet the algorithm does pretty well. Visualizing the sequence of weak learners and the sample weights The first 6 weak learners $m = 1,2,...,6$ are shown below. The scatter points are scaled according to their respective sample weight at each iteration First iteration: The decision boundary is very simple (linear) since these are weak learners All points are of the same size, as expected 6 blue points are in the red region and are misclassified Second iteration: The linear decision boundary has changed The previously misclassified blue points are now larger (greater sample weight) and have influenced the decision boundary 9 blue points are now misclassified Final result after 10 iterations All classifiers have a linear decision boundary, at different positions. The resulting coefficients of the first 6 iterations $\alpha_m$ are : 1.041, 0.875, 0.837, 0.781, 1.04, 0.938... As expected, the first iteration has largest coefficient as it is the one with the fewest misclassifications. Next steps An intuitive explanation of gradient boosting - to be completed Sources and further reading: python code and original figures here https://www.cs.cmu.edu/~aarti/Class/10701/slides/Lecture10.pdf
Intuitive explanations of differences between Gradient Boosting Trees (GBM) & Adaboost An intuitive explanation of the AdaBoost algorithm Let me build upon @Randel's excellent answer with an illustration of the following point In AdaBoost, ‘shortcomings’ are identified by high-weight dat
4,633
How does linear regression use the normal distribution?
Linear regression by itself does not need the normal (Gaussian) assumption: the estimators can be calculated (by linear least squares) without any need of such an assumption, and they make perfect sense without it. But then, as statisticians, we want to understand some of the properties of this method, answers to questions such as: are the least squares estimators optimal in some sense? Or can we do better with some alternative estimators? Then, under the normal distribution of the error terms, we can show that these estimators are, indeed, optimal, for instance they are "minimum-variance unbiased", or maximum likelihood. No such thing can be proved without the normal assumption. Also, if we want to construct (and analyze properties of) confidence intervals or hypothesis tests, then we use the normal assumption. But, we could instead construct confidence intervals by some other means, such as bootstrapping. Then, we do not use the normal assumption, but, alas, without it, it could be that we should use some estimators other than the least squares ones, maybe some robust estimators? In practice, of course, the normal distribution is at most a convenient fiction. So, the really important question is, how close to normality do we need to be to justify using the results referred to above? That is a much trickier question! Optimality results are not robust, so even a very small deviation from normality might destroy optimality. That is an argument in favour of robust methods. For another tack at that question, see my answer to Why should we use t errors instead of normal errors? Another relevant question is Why is the normality of residuals "barely important at all" for the purpose of estimating the regression line? EDIT This answer led to a large discussion-in-comments, which again led to my new question: Linear regression: any non-normal distribution giving identity of OLS and MLE? which now finally got (three) answers, giving examples where non-normal distributions lead to least squares estimators.
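To make the "maybe some robust estimators?" remark concrete, here is a small illustrative sketch (my addition, with simulated data) comparing ordinary least squares with the robust Theil-Sen estimator when a few observations are grossly contaminated.

# OLS versus a robust alternative on data with a few gross outliers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=x.size)
y[:5] += 15                      # contaminate a few observations

ols_slope, ols_intercept = np.polyfit(x, y, 1)
ts_slope, ts_intercept, lo, hi = stats.theilslopes(y, x)

print(f"OLS slope:       {ols_slope:.3f}")
print(f"Theil-Sen slope: {ts_slope:.3f}  (95% CI {lo:.3f} to {hi:.3f})")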
How does linear regression use the normal distribution?
Linear regression by itself does not need the normal (gaussian) assumption, the estimators can be calculated (by linear least squares) without any need of such assumption, and makes perfect sense with
How does linear regression use the normal distribution? Linear regression by itself does not need the normal (gaussian) assumption, the estimators can be calculated (by linear least squares) without any need of such assumption, and makes perfect sense without it. But then, as statisticians we want to understand some of the properties of this method, answers to questions such as: are the least squares estimators optimal in some sense? or can we do better with some alternative estimators? Then, under the normal distribution of error terms, we can show that this estimators are, indeed, optimal, for instance they are "unbiased of minimum variance", or maximum likelihood. No such thing can be proved without the normal assumption. Also, if we want to construct (and analyze properties of) confidence intervals or hypothesis tests, then we use the normal assumption. But, we could instead construct confidence intervals by some other means, such as bootstrapping. Then, we do not use the normal assumption, but, alas, without that, it could be we should use some other estimators than the least squares ones, maybe some robust estimators? In practice, of course, the normal distribution is at most a convenient fiction. So, the really important question is, how close to normality do we need to be to claim to use the results referred to above? That is a much trickier question! Optimality results are not robust, so even a very small deviation from normality might destroy optimality. That is an argument in favour of robust methods. For another tack at that question, see my answer to Why should we use t errors instead of normal errors? Another relevant question is Why is the normality of residuals "barely important at all" for the purpose of estimating the regression line? EDIT This answer led to a large discussion-in-comments, which again led to my new question: Linear regression: any non-normal distribution giving identity of OLS and MLE? which now finally got (three) answers, giving examples where non-normal distributions lead to least squares estimators.
How does linear regression use the normal distribution? Linear regression by itself does not need the normal (gaussian) assumption, the estimators can be calculated (by linear least squares) without any need of such assumption, and makes perfect sense with
4,634
How does linear regression use the normal distribution?
But why is each predicted value assumed to have come from a normal distribution? There is no deep reason for it, and you are free to change the distributional assumptions, moving to GLMs, or to robust regression. The LM (normal distribution) is popular because it's easy to calculate, quite stable, and the residuals are in practice often more or less normal. How does linear regression use this assumption? As with any regression, the linear model (= regression with normal errors) searches for the parameters that optimize the likelihood for the given distributional assumption. See here for an example of an explicit calculation of the likelihood for a linear model. If you take the log likelihood of a linear model, it turns out to be proportional to the sum of squares, and the optimization of that can be calculated quite conveniently. What if possible values are not normally distributed? If you want to fit a model with different distributions, the next textbook steps would be generalized linear models (GLM), which offer different distributions, or general linear models, which are still normal, but relax independence. Many other options are possible. If you just want to reduce the effect of outliers, you could for example consider robust regression.
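As a sketch of the "log likelihood is proportional to the sum of squares" point (my own illustration, with simulated data): minimizing the Gaussian negative log-likelihood numerically recovers the ordinary least-squares coefficients.

# Gaussian MLE versus OLS on the same simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1.0 + 2.5 * x + rng.normal(scale=1.5, size=x.size)
X = np.column_stack([np.ones_like(x), x])

def neg_log_lik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - (b0 + b1 * x)
    # Gaussian negative log-likelihood up to an additive constant
    return 0.5 * np.sum(resid**2) / sigma**2 + x.size * np.log(sigma)

mle = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0]).x
ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("MLE (b0, b1):", np.round(mle[:2], 4), " OLS:", np.round(ols, 4))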
How does linear regression use the normal distribution?
But why is each predicted value assumed to have come from a normal distribution? There is no deep reason for it, and you are free to change the distributional assumptions, moving to GLMs, or to robus
How does linear regression use the normal distribution? But why is each predicted value assumed to have come from a normal distribution? There is no deep reason for it, and you are free to change the distributional assumptions, moving to GLMs, or to robust regression. The LM (normal distribution) is popular because its easy to calculate, quite stable and residuals are in practice often more or less normal. How does linear regression use this assumption? As any regression, the linear model (=regression with normal error) searches for the parameters that optimize the likelihood for the given distributional assumption. See here for an example of an explicit calculation of the likelihood for a linear model. If you take the log likelihood of a linear model, it turns out to be proportional to the sum of squares, and the optimization of that can be calculated quite conveniently. What if possible values are not normally distributed? If you want to fit a model with different distributions, the next textbook steps would be generalized linear models (GLM), which offer different distributions, or general linear models, which are still normal, but relax independence. Many other options are possible. If you just want to reduce the effect of outliers, you could for example consider robust regression.
How does linear regression use the normal distribution? But why is each predicted value assumed to have come from a normal distribution? There is no deep reason for it, and you are free to change the distributional assumptions, moving to GLMs, or to robus
4,635
How does linear regression use the normal distribution?
This discussion, What if residuals are normally distributed, but y is not?, has addressed this question well. In short, for a regression problem, we only assume that the response is normal conditioned on the value of x. It is not necessary that the predictor variables, or the marginal distribution of the response, be normally distributed.
How does linear regression use the normal distribution?
This discussion, What if residuals are normally distributed, but y is not?, has addressed this question well. In short, for a regression problem, we only assume that the response is normal conditioned on
How does linear regression use the normal distribution? This discussion, What if residuals are normally distributed, but y is not?, has addressed this question well. In short, for a regression problem, we only assume that the response is normal conditioned on the value of x. It is not necessary that the predictor variables, or the marginal distribution of the response, be normally distributed.
How does linear regression use the normal distribution? This discussion, What if residuals are normally distributed, but y is not?, has addressed this question well. In short, for a regression problem, we only assume that the response is normal conditioned on
4,636
How does linear regression use the normal distribution?
Let me stick to the case of a one-variable regression. The details are the same, but the notation is more cumbersome in the case of a multivariate regression. Given any data set $(x_i,y_i)$ one can find the 'least squares line' $y = \beta x + c$, that is, find $\beta$ and $c$ so that $\sum_i (y_i - \beta x_i - c)^2$ is minimized. That is pure mathematics. However, under the assumption that the residuals $\eta_i = y_i - (\beta x_i + c)$ are independent, identically distributed Gaussian variables with a common variance, one can get statistical estimates of how accurate the point estimate $\beta$ is. In particular, one can construct a 95% confidence interval for $\beta$. After all, we are assuming that we are sampling from the underlying (true) distribution, and hence if we sampled again, we should expect to get a, possibly just slightly, different answer. In particular, the p-value is the probability of observing a value of $\beta$ at least as extreme as the one obtained, under the hypothesis that the true value of $\beta$ is zero. So the statistics come about as information about how accurate the point estimate $\beta$ is. What to do in the case one doesn't have statistical properties of the error term? With apologies to "The Graduate" - one word: bootstrap.
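To make the closing suggestion concrete, here is a small illustrative pairs-bootstrap for the slope of a one-variable regression (my addition; the data are simulated with heavy-tailed errors, so no Gaussian assumption is used for the interval).

# Pairs bootstrap for the slope of a simple regression.
import numpy as np

rng = np.random.default_rng(42)
n = 80
x = rng.uniform(0, 5, n)
y = 1.0 + 0.8 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)          # resample (x_i, y_i) pairs with replacement
    boot[b] = slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope estimate {slope(x, y):.3f}, 95% bootstrap percentile CI ({lo:.3f}, {hi:.3f})")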
How does linear regression use the normal distribution?
Let me stick to the case of a one variable regression. The details are the same, but the notation is more cumbersome in the case of a multivariate regression. Given any data set $(x_i,y_i)$ one can f
How does linear regression use the normal distribution? Let me stick to the case of a one variable regression. The details are the same, but the notation is more cumbersome in the case of a multivariate regression. Given any data set $(x_i,y_i)$ one can find the 'least squares line' $ y = \beta x +c$ , that is find $\beta$ so that $\sum_i (y_i - \sum_i \beta x_i - c)^2$ is minimized. That is pure mathematics. However under the assumption that the residuals $ \eta_i = y_i - (\beta x_i +c) $ are independent identically distributed gaussian variables with a common variance, then one can get statistical estimates of how accurate the point estimate $\beta$. In particular, one can construct the 95% confidence interval for $\beta$. After all we are assuming that we are sampling from the underlying (true) distribuion and hence if we sampled again, we should expect to get a, possibly just slightly, different answer. In particular, the p-value is the probability of observing the given $\beta$ under the hypothesis that the true value of $\beta$ is zero. So the statistics comes about as information about how accurate is the point estimate $\beta$ . What to do in the case one doesn't have statistical properties of the error term ? With apologies to "The Graduate" - one word bootstrap.
How does linear regression use the normal distribution? Let me stick to the case of a one variable regression. The details are the same, but the notation is more cumbersome in the case of a multivariate regression. Given any data set $(x_i,y_i)$ one can f
4,637
Using deep learning for time series prediction
There has been some work on adapting deep learning methods for sequential data. A lot of this work has focused on developing "modules" which can be stacked in a way analogous to stacking restricted Boltzmann machines (RBMs) or autoencoders to form a deep neural network. I'll highlight a few below: Conditional RBMs: Probably one of the most successful applications of deep learning for time series. Taylor develops an RBM-like model that adds temporal interactions between visible units and applies it to modeling motion capture data. Essentially you end up with something like a linear dynamical system with some non-linearity added by the hidden units. Temporal RBMs: In his thesis (section 3) Ilya Sutskever develops several RBM-like models with temporal interactions between units. He also presents some interesting results showing that training recurrent neural networks with SGD can perform as well as or better than more complex methods, like Martens' Hessian-free algorithm, using good initialization and a slightly modified equation for momentum. Recursive Autoencoders: Lastly I'll mention the work of Richard Socher on using recursive autoencoders for parsing. Although this isn't time series, it is definitely related.
Using deep learning for time series prediction
There has been some work on adapting deep learning methods for sequential data. A lot of this work has focused on developing "modules" which can be stacked in a way analogous to stacking restricted bo
Using deep learning for time series prediction There has been some work on adapting deep learning methods for sequential data. A lot of this work has focused on developing "modules" which can be stacked in a way analogous to stacking restricted boltzmann machines (RBMs) or autoencoders to form a deep neural network. I'll highlight a few below: Conditional RBMs: Probably one of the most successful applications of deep learning for time series. Taylor develops a RBM like model that adds temporal interactions between visible units and apply it to modeling motion capture data. Essentially you end up with something like a linear dynamical system with some non-linearity added by the hidden units. Temporal RBMs: In his thesis (section 3) Ilya Sutskever develops several RBM like models with temporal interactions between units. He also presents some interesting results showing training recurrent neural networks with SGD can perform as well or better than more complex methods, like Martens' Hessian-free algorithm, using good initialization and a slightly modified equation for momentum. Recursive Autoencoders: Lastly I'll mention the work of Richard Socher on using recursive autoencoders for parsing. Although this isn't time series, it is definitely related.
Using deep learning for time series prediction There has been some work on adapting deep learning methods for sequential data. A lot of this work has focused on developing "modules" which can be stacked in a way analogous to stacking restricted bo
4,638
Using deep learning for time series prediction
Yes, deep learning can be applied to time series prediction. In fact, it has been done many times already, for example: http://cs229.stanford.edu/proj2012/BussetiOsbandWong-DeepLearningForTimeSeriesModeling.pdf http://link.springer.com/article/10.1007/s00134-013-2964-2#page-1 This is not really a "special case": deep learning is mostly a preprocessing method (based on a generative model), so you have to focus on exactly the same things that you focus on when you do deep learning in the "traditional" sense on one hand, and on the same things you focus on when performing time series prediction without deep learning on the other.
Using deep learning for time series prediction
Yes, deep learning can be applied for time series predictions. In fact, it has been done many times already, for example: http://cs229.stanford.edu/proj2012/BussetiOsbandWong-DeepLearningForTimeSerie
Using deep learning for time series prediction Yes, deep learning can be applied to time series prediction. In fact, it has been done many times already, for example: http://cs229.stanford.edu/proj2012/BussetiOsbandWong-DeepLearningForTimeSeriesModeling.pdf http://link.springer.com/article/10.1007/s00134-013-2964-2#page-1 This is not really a "special case": deep learning is mostly a preprocessing method (based on a generative model), so you have to focus on exactly the same things that you focus on when you do deep learning in the "traditional" sense on one hand, and on the same things you focus on when performing time series prediction without deep learning on the other.
Using deep learning for time series prediction Yes, deep learning can be applied for time series predictions. In fact, it has been done many times already, for example: http://cs229.stanford.edu/proj2012/BussetiOsbandWong-DeepLearningForTimeSerie
4,639
Using deep learning for time series prediction
Recurrent Neural Networks are considered a type of Deep Learning (DL). I think they are the most popular DL tool for (1d) sequence-to-sequence learning. They are currently the basis of Neural Machine Translation (NMT) approaches (pioneered 2014 at LISA (UdeM), Google, and probably a couple others I'm not remembering).
Using deep learning for time series prediction
Recurrent Neural Networks are considered a type of Deep Learning (DL). I think they are the most popular DL tool for (1d) sequence-to-sequence learning. They are currently the basis of Neural Machin
Using deep learning for time series prediction Recurrent Neural Networks are considered a type of Deep Learning (DL). I think they are the most popular DL tool for (1d) sequence-to-sequence learning. They are currently the basis of Neural Machine Translation (NMT) approaches (pioneered 2014 at LISA (UdeM), Google, and probably a couple others I'm not remembering).
Using deep learning for time series prediction Recurrent Neural Networks are considered a type of Deep Learning (DL). I think they are the most popular DL tool for (1d) sequence-to-sequence learning. They are currently the basis of Neural Machin
4,640
Using deep learning for time series prediction
Alex Graves' Generating sequences with Recurrent Neural Networks uses recurrent networks and Long short term memory Cells to predict text and do handwriting synthesis. Andrej Karpathy has written a blog about generating character level sequences from scratch. He uses RNNs in his tutorial. For more examples, you should look at- Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
Using deep learning for time series prediction
Alex Graves' Generating sequences with Recurrent Neural Networks uses recurrent networks and Long short term memory Cells to predict text and do handwriting synthesis. Andrej Karpathy has written a bl
Using deep learning for time series prediction Alex Graves' Generating sequences with Recurrent Neural Networks uses recurrent networks and Long short term memory Cells to predict text and do handwriting synthesis. Andrej Karpathy has written a blog about generating character level sequences from scratch. He uses RNNs in his tutorial. For more examples, you should look at- Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
Using deep learning for time series prediction Alex Graves' Generating sequences with Recurrent Neural Networks uses recurrent networks and Long short term memory Cells to predict text and do handwriting synthesis. Andrej Karpathy has written a bl
4,641
Using deep learning for time series prediction
Maybe this will help: I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112. If you have a definition of your exact time window on the data, like the sentences or paragraphs in this paper, then you will be fine using an LSTM, but I am not sure how to find time windows that are not obvious and are more context aware. An example of that is working out how many of the log entries you are seeing are related, and that is not something obvious.
Using deep learning for time series prediction
Maybe this will help: I. Sutskever, O. Vinyals, and Q. V. V Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112. If yo
Using deep learning for time series prediction Maybe this will help: I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112. If you have a definition of your exact time window on the data, like the sentences or paragraphs in this paper, then you will be fine using an LSTM, but I am not sure how to find time windows that are not obvious and are more context aware. An example of that is working out how many of the log entries you are seeing are related, and that is not something obvious.
Using deep learning for time series prediction Maybe this will help: I. Sutskever, O. Vinyals, and Q. V. V Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112. If yo
4,642
Understanding LSTM units vs. cells
The terminology is unfortunately inconsistent. num_units in TensorFlow is the number of hidden units, i.e. the dimension of $h_t$ in the equations you gave. Also, from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.rnn_cell.RNNCell.md : The definition of cell in this package differs from the definition used in the literature. In the literature, cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units. "LSTM layer" is probably more explicit, example: def lstm_layer(tparams, state_below, options, prefix='lstm', mask=None): nsteps = state_below.shape[0] if state_below.ndim == 3: n_samples = state_below.shape[1] else: n_samples = 1 assert mask is not None […]
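A quick way to see this in code (my addition, assuming a recent tf.keras API): units fixes the dimension of $h_t$, independently of the input size and the number of timesteps.

# Shape check for the "units = dimension of h_t" interpretation.
import numpy as np
import tensorflow as tf

batch, timesteps, features, units = 4, 7, 3, 5
x = np.random.rand(batch, timesteps, features).astype("float32")

lstm = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)
outputs, h, c = lstm(x)

print(outputs.shape)      # (4, 7, 5) -> one 5-dimensional h_t per timestep
print(h.shape, c.shape)   # (4, 5) (4, 5) -> final hidden and cell states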
Understanding LSTM units vs. cells
The terminology is unfortunately inconsistent. num_units in TensorFlow is the number of hidden states, i.e. the dimension of $h_t$ in the equations you gave. Also, from https://github.com/tensorflow/t
Understanding LSTM units vs. cells The terminology is unfortunately inconsistent. num_units in TensorFlow is the number of hidden states, i.e. the dimension of $h_t$ in the equations you gave. Also, from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.rnn_cell.RNNCell.md : The definition of cell in this package differs from the definition used in the literature. In the literature, cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units. "LSTM layer" is probably more explicit, example: def lstm_layer(tparams, state_below, options, prefix='lstm', mask=None): nsteps = state_below.shape[0] if state_below.ndim == 3: n_samples = state_below.shape[1] else: n_samples = 1 assert mask is not None […]
Understanding LSTM units vs. cells The terminology is unfortunately inconsistent. num_units in TensorFlow is the number of hidden states, i.e. the dimension of $h_t$ in the equations you gave. Also, from https://github.com/tensorflow/t
4,643
Understanding LSTM units vs. cells
Most LSTM/RNN diagrams just show the hidden cells but never the units of those cells. Hence, the confusion. Each hidden layer has hidden cells, as many as the number of time steps. And further, each hidden cell is made up of multiple hidden units, like in the diagram below. Therefore, the dimensionality of a hidden layer matrix in RNN is (number of time steps, number of hidden units).
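As an illustrative sketch of the unrolled view described above (my addition, using a plain vanilla-RNN recurrence rather than an LSTM to keep it short): collecting one hidden vector per time step gives exactly a (number of time steps, number of hidden units) matrix.

# Manual unrolling of a vanilla RNN, stacking the hidden states row by row.
import numpy as np

rng = np.random.default_rng(0)
time_steps, n_features, hidden_units = 6, 3, 4

x = rng.normal(size=(time_steps, n_features))
Wx = rng.normal(size=(n_features, hidden_units))
Wh = rng.normal(size=(hidden_units, hidden_units))
b = np.zeros(hidden_units)

h = np.zeros(hidden_units)
H = np.zeros((time_steps, hidden_units))
for t in range(time_steps):
    h = np.tanh(x[t] @ Wx + h @ Wh + b)   # the same weights are reused at every step
    H[t] = h

print(H.shape)   # (6, 4) = (number of time steps, number of hidden units)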
Understanding LSTM units vs. cells
Most LSTM/RNN diagrams just show the hidden cells but never the units of those cells. Hence, the confusion. Each hidden layer has hidden cells, as many as the number of time steps. And further, each
Understanding LSTM units vs. cells Most LSTM/RNN diagrams just show the hidden cells but never the units of those cells. Hence, the confusion. Each hidden layer has hidden cells, as many as the number of time steps. And further, each hidden cell is made up of multiple hidden units, like in the diagram below. Therefore, the dimensionality of a hidden layer matrix in RNN is (number of time steps, number of hidden units).
Understanding LSTM units vs. cells Most LSTM/RNN diagrams just show the hidden cells but never the units of those cells. Hence, the confusion. Each hidden layer has hidden cells, as many as the number of time steps. And further, each
4,644
Understanding LSTM units vs. cells
Although the issue is almost the same as the one I answered in this answer, I'd like to illustrate it, since it also confused me a bit today in the seq2seq model (thanks to @Franck Dernoncourt's answer), with a graph. In this simple encoder diagram: Each $h_i$ above is the same cell at a different time-step (the cell being either a GRU or an LSTM, as in your question), and the weight vectors (not the biases) in the cell are all of the same size (num_units/num_hidden, state_size or output_size). An RNN is a special type of graphical model where the nodes form a directed list, as explained in section 4 of this paper: Supervised Neural Networks for the Classification of Structures. We can think of num_units as the number of tags in a CRF (although a CRF is undirected), and the matrices ($W$'s in the graph in the question) are all shared across all time steps, like the transition matrix in a CRF.
Understanding LSTM units vs. cells
Although the issue is almost the same as I answered in this answer, I'd like to illustrate this issue, which also confused me a bit today in the seq2seq model (thanks to @Franck Dernoncourt's answer),
Understanding LSTM units vs. cells Although the issue is almost the same as I answered in this answer, I'd like to illustrate this issue, which also confused me a bit today in the seq2seq model (thanks to @Franck Dernoncourt's answer), in the graph. In this simple encoder diagram: Each $h_i$ above is the same cell in different time-step (cell either GRU or LSTM as that in your question) and the weight vectors(not bias) in the cell are of the same size of (num_units/num_hidden or state_size or output_size). RNN is a special type of graphical model where nodes form a directed list as explained in section 4 of this paper: Supervised Neural Networks for the Classication of Structures. We can think of num_units as the number of tags in CRF(although CRF is undirected), and the matrices($W$'s in graph in the question) are all shared across all time steps like the transition matrix in CRF.
Understanding LSTM units vs. cells Although the issue is almost the same as I answered in this answer, I'd like to illustrate this issue, which also confused me a bit today in the seq2seq model (thanks to @Franck Dernoncourt's answer),
4,645
Understanding LSTM units vs. cells
In keras.layers.LSTM(units, activation='tanh', ....), units refers to the dimensionality or length of the hidden state, i.e. the length of the activation vector passed on to the next LSTM cell/unit - the next LSTM cell/unit being the "green box" with the gates etc. from http://colah.github.io/posts/2015-08-Understanding-LSTMs/ The next LSTM cell/unit (i.e. the green box with gates etc. from http://colah.github.io/posts/2015-08-Understanding-LSTMs/) is NOT the same as the units in keras.layers.LSTM(units, activation='tanh', ....) The units are also sometimes called the latent dimensions. Here is a detailed explanation of the units LSTM parameter: https://zhuanlan.zhihu.com/p/58854907
Understanding LSTM units vs. cells
In keras.layers.LSTM(units, activation='tanh', ....), the units refers to the dimensionality or length of the hidden state or the length of the activation vector passed on the next LSTM cell/unit - th
Understanding LSTM units vs. cells In keras.layers.LSTM(units, activation='tanh', ....), the units refers to the dimensionality or length of the hidden state or the length of the activation vector passed on the next LSTM cell/unit - the next LSTM cell/unit is the "green picture above with the gates etc from http://colah.github.io/posts/2015-08-Understanding-LSTMs/ The next LSTM cell/unit (i.e. the green box with gates etc from http://colah.github.io/posts/2015-08-Understanding-LSTMs/) is NOT the same as the units in keras.layers.LSTM(units, activation='tanh', ....) The units are also sometimes called the latent dimensions. Here is a detailed explanation of the units LSTM parameter: https://zhuanlan.zhihu.com/p/58854907
Understanding LSTM units vs. cells In keras.layers.LSTM(units, activation='tanh', ....), the units refers to the dimensionality or length of the hidden state or the length of the activation vector passed on the next LSTM cell/unit - th
4,646
Understanding LSTM units vs. cells
In my opinion, a cell means a node such as a hidden cell, which is also called a hidden node. For a multilayer LSTM model, the number of cells can be computed as time_steps*num_layers, while num_units is equal to time_steps.
Understanding LSTM units vs. cells
In my opinion, cell means a node such as hidden cell which is also called hidden node, for multilayer LSTM model,the number of cell can be computed by time_steps*num_layers, and the num_units is equal
Understanding LSTM units vs. cells In my opinion, cell means a node such as hidden cell which is also called hidden node, for multilayer LSTM model,the number of cell can be computed by time_steps*num_layers, and the num_units is equal to time_steps
Understanding LSTM units vs. cells In my opinion, cell means a node such as hidden cell which is also called hidden node, for multilayer LSTM model,the number of cell can be computed by time_steps*num_layers, and the num_units is equal
4,647
Understanding LSTM units vs. cells
Quoting from TF's tutorial on RNNs: In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only processes a single timestep.
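A minimal sketch of that distinction (my addition, assuming a recent tf.keras API): the cell advances a single timestep given explicit states, while the layer loops over the whole sequence internally.

# LSTMCell (one step at a time) versus the LSTM layer (whole sequence).
import numpy as np
import tensorflow as tf

batch, timesteps, features, units = 2, 5, 3, 4
x = np.random.rand(batch, timesteps, features).astype("float32")

cell = tf.keras.layers.LSTMCell(units)
h = tf.zeros((batch, units))
c = tf.zeros((batch, units))
for t in range(timesteps):                  # manual unrolling, one timestep per call
    out, state = cell(x[:, t, :], states=[h, c])
    h, c = state

layer = tf.keras.layers.LSTM(units)         # processes the whole sequence internally
print(out.shape, layer(x).shape)            # (2, 4) (2, 4)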
Understanding LSTM units vs. cells
Quoting from TF's tutorial on RNNs: In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cel
Understanding LSTM units vs. cells Quoting from TF's tutorial on RNNs: In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only processes a single timestep.
Understanding LSTM units vs. cells Quoting from TF's tutorial on RNNs: In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cel
4,648
Best PCA algorithm for huge number of features (>10K)?
I've implemented the Randomized SVD as given in "Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). An algorithm for the principal component analysis of large data sets. Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1, 2011, from http://arxiv.org/abs/1007.5510.". If you want to get truncated SVD, it really works much much faster than the svd variations in MATLAB. You can get it here: function [U,S,V] = fsvd(A, k, i, usePowerMethod) % FSVD Fast Singular Value Decomposition % % [U,S,V] = FSVD(A,k,i,usePowerMethod) computes the truncated singular % value decomposition of the input matrix A upto rank k using i levels of % Krylov method as given in [1], p. 3. % % If usePowerMethod is given as true, then only exponent i is used (i.e. % as power method). See [2] p.9, Randomized PCA algorithm for details. % % [1] Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). % An algorithm for the principal component analysis of large data sets. % Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1, 2011, from % http://arxiv.org/abs/1007.5510. % % [2] Halko, N., Martinsson, P. G., & Tropp, J. A. (2009). Finding % structure with randomness: Probabilistic algorithms for constructing % approximate matrix decompositions. Arxiv preprint arXiv:0909.4061. % Retrieved April 1, 2011, from http://arxiv.org/abs/0909.4061. % % See also SVD. % % Copyright 2011 Ismail Ari, http://ismailari.com. if nargin < 3 i = 1; end % Take (conjugate) transpose if necessary. It makes H smaller thus % leading the computations to be faster if size(A,1) < size(A,2) A = A'; isTransposed = true; else isTransposed = false; end n = size(A,2); l = k + 2; % Form a real n×l matrix G whose entries are iid Gaussian r.v.s of zero % mean and unit variance G = randn(n,l); if nargin >= 4 && usePowerMethod % Use only the given exponent H = A*G; for j = 2:i+1 H = A * (A'*H); end else % Compute the m×l matrices H^{(0)}, ..., H^{(i)} % Note that this is done implicitly in each iteration below. H = cell(1,i+1); H{1} = A*G; for j = 2:i+1 H{j} = A * (A'*H{j-1}); end % Form the m×((i+1)l) matrix H H = cell2mat(H); end % Using the pivoted QR-decomposiion, form a real m×((i+1)l) matrix Q % whose columns are orthonormal, s.t. there exists a real % ((i+1)l)×((i+1)l) matrix R for which H = QR. % XXX: Buradaki column pivoting ile yapılmayan hali. [Q,~] = qr(H,0); % Compute the n×((i+1)l) product matrix T = A^T Q T = A'*Q; % Form an SVD of T [Vt, St, W] = svd(T,'econ'); % Compute the m×((i+1)l) product matrix Ut = Q*W; % Retrieve the leftmost m×k block U of Ut, the leftmost n×k block V of % Vt, and the leftmost uppermost k×k block S of St. The product U S V^T % then approxiamtes A. if isTransposed V = Ut(:,1:k); U = Vt(:,1:k); else U = Ut(:,1:k); V = Vt(:,1:k); end S = St(1:k,1:k); end To test it, just create an image in the same folder (just as a big matrix,you can create the matrix yourself) % Example code for fast SVD. clc, clear %% TRY ME k = 10; % # dims i = 2; % # power COMPUTE_SVD0 = true; % Comment out if you do not want to spend time with builtin SVD. 
% A is the m×n matrix we want to decompose A = im2double(rgb2gray(imread('test_image.jpg')))'; %% DO NOT MODIFY if COMPUTE_SVD0 tic % Compute SVD of A directly [U0, S0, V0] = svd(A,'econ'); A0 = U0(:,1:k) * S0(1:k,1:k) * V0(:,1:k)'; toc display(['SVD Error: ' num2str(compute_error(A,A0))]) clear U0 S0 V0 end % FSVD without power method tic [U1, S1, V1] = fsvd(A, k, i); toc A1 = U1 * S1 * V1'; display(['FSVD HYBRID Error: ' num2str(compute_error(A,A1))]) clear U1 S1 V1 % FSVD with power method tic [U2, S2, V2] = fsvd(A, k, i, true); toc A2 = U2 * S2 * V2'; display(['FSVD POWER Error: ' num2str(compute_error(A,A2))]) clear U2 S2 V2 subplot(2,2,1), imshow(A'), title('A (orig)') if COMPUTE_SVD0, subplot(2,2,2), imshow(A0'), title('A0 (svd)'), end subplot(2,2,3), imshow(A1'), title('A1 (fsvd hybrid)') subplot(2,2,4), imshow(A2'), title('A2 (fsvd power)') When I run it on my desktop for an image of size 635*483, I get Elapsed time is 0.110510 seconds. SVD Error: 0.19132 Elapsed time is 0.017286 seconds. FSVD HYBRID Error: 0.19142 Elapsed time is 0.006496 seconds. FSVD POWER Error: 0.19206 As you can see, for low values of k, it is more than 10 times faster than using Matlab SVD. By the way, you may need the following simple function for the test function: function e = compute_error(A, B) % COMPUTE_ERROR Compute relative error between two arrays e = norm(A(:)-B(:)) / norm(A(:)); end I didn't add the PCA method since it is straightforward to implement using SVD. You may check this link to see their relationship.
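For readers who prefer Python to the MATLAB code above, a similar Halko-style randomized algorithm is available as scikit-learn's randomized_svd; the sketch below (my addition, with made-up data) shows how a truncated randomized SVD of the column-centered matrix gives the leading principal components.

# Randomized truncated SVD / PCA sketch in Python.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
A = rng.normal(size=(600, 12000))        # n samples x p features, p >> n

k = 10
A_centered = A - A.mean(axis=0)          # PCA = SVD of the column-centered matrix
U, S, Vt = randomized_svd(A_centered, n_components=k, n_iter=2, random_state=0)

scores = U * S                           # principal component scores (n x k)
components = Vt                          # principal axes (k x p)
print(scores.shape, components.shape)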
Best PCA algorithm for huge number of features (>10K)?
I've implemented the Randomized SVD as given in "Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). An algorithm for the principal component analysis of large data sets. Arxiv preprint
Best PCA algorithm for huge number of features (>10K)? I've implemented the Randomized SVD as given in "Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). An algorithm for the principal component analysis of large data sets. Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1, 2011, from http://arxiv.org/abs/1007.5510.". If you want to get truncated SVD, it really works much much faster than the svd variations in MATLAB. You can get it here: function [U,S,V] = fsvd(A, k, i, usePowerMethod) % FSVD Fast Singular Value Decomposition % % [U,S,V] = FSVD(A,k,i,usePowerMethod) computes the truncated singular % value decomposition of the input matrix A upto rank k using i levels of % Krylov method as given in [1], p. 3. % % If usePowerMethod is given as true, then only exponent i is used (i.e. % as power method). See [2] p.9, Randomized PCA algorithm for details. % % [1] Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). % An algorithm for the principal component analysis of large data sets. % Arxiv preprint arXiv:1007.5510, 0526. Retrieved April 1, 2011, from % http://arxiv.org/abs/1007.5510. % % [2] Halko, N., Martinsson, P. G., & Tropp, J. A. (2009). Finding % structure with randomness: Probabilistic algorithms for constructing % approximate matrix decompositions. Arxiv preprint arXiv:0909.4061. % Retrieved April 1, 2011, from http://arxiv.org/abs/0909.4061. % % See also SVD. % % Copyright 2011 Ismail Ari, http://ismailari.com. if nargin < 3 i = 1; end % Take (conjugate) transpose if necessary. It makes H smaller thus % leading the computations to be faster if size(A,1) < size(A,2) A = A'; isTransposed = true; else isTransposed = false; end n = size(A,2); l = k + 2; % Form a real n×l matrix G whose entries are iid Gaussian r.v.s of zero % mean and unit variance G = randn(n,l); if nargin >= 4 && usePowerMethod % Use only the given exponent H = A*G; for j = 2:i+1 H = A * (A'*H); end else % Compute the m×l matrices H^{(0)}, ..., H^{(i)} % Note that this is done implicitly in each iteration below. H = cell(1,i+1); H{1} = A*G; for j = 2:i+1 H{j} = A * (A'*H{j-1}); end % Form the m×((i+1)l) matrix H H = cell2mat(H); end % Using the pivoted QR-decomposiion, form a real m×((i+1)l) matrix Q % whose columns are orthonormal, s.t. there exists a real % ((i+1)l)×((i+1)l) matrix R for which H = QR. % XXX: Buradaki column pivoting ile yapılmayan hali. [Q,~] = qr(H,0); % Compute the n×((i+1)l) product matrix T = A^T Q T = A'*Q; % Form an SVD of T [Vt, St, W] = svd(T,'econ'); % Compute the m×((i+1)l) product matrix Ut = Q*W; % Retrieve the leftmost m×k block U of Ut, the leftmost n×k block V of % Vt, and the leftmost uppermost k×k block S of St. The product U S V^T % then approxiamtes A. if isTransposed V = Ut(:,1:k); U = Vt(:,1:k); else U = Ut(:,1:k); V = Vt(:,1:k); end S = St(1:k,1:k); end To test it, just create an image in the same folder (just as a big matrix,you can create the matrix yourself) % Example code for fast SVD. clc, clear %% TRY ME k = 10; % # dims i = 2; % # power COMPUTE_SVD0 = true; % Comment out if you do not want to spend time with builtin SVD. 
% A is the m×n matrix we want to decompose A = im2double(rgb2gray(imread('test_image.jpg')))'; %% DO NOT MODIFY if COMPUTE_SVD0 tic % Compute SVD of A directly [U0, S0, V0] = svd(A,'econ'); A0 = U0(:,1:k) * S0(1:k,1:k) * V0(:,1:k)'; toc display(['SVD Error: ' num2str(compute_error(A,A0))]) clear U0 S0 V0 end % FSVD without power method tic [U1, S1, V1] = fsvd(A, k, i); toc A1 = U1 * S1 * V1'; display(['FSVD HYBRID Error: ' num2str(compute_error(A,A1))]) clear U1 S1 V1 % FSVD with power method tic [U2, S2, V2] = fsvd(A, k, i, true); toc A2 = U2 * S2 * V2'; display(['FSVD POWER Error: ' num2str(compute_error(A,A2))]) clear U2 S2 V2 subplot(2,2,1), imshow(A'), title('A (orig)') if COMPUTE_SVD0, subplot(2,2,2), imshow(A0'), title('A0 (svd)'), end subplot(2,2,3), imshow(A1'), title('A1 (fsvd hybrid)') subplot(2,2,4), imshow(A2'), title('A2 (fsvd power)') When I run it on my desktop for an image of size 635*483, I get Elapsed time is 0.110510 seconds. SVD Error: 0.19132 Elapsed time is 0.017286 seconds. FSVD HYBRID Error: 0.19142 Elapsed time is 0.006496 seconds. FSVD POWER Error: 0.19206 As you can see, for low values of k, it is more than 10 times faster than using Matlab SVD. By the way, you may need the following simple function for the test function: function e = compute_error(A, B) % COMPUTE_ERROR Compute relative error between two arrays e = norm(A(:)-B(:)) / norm(A(:)); end I didn't add the PCA method since it is straightforward to implement using SVD. You may check this link to see their relationship.
Best PCA algorithm for huge number of features (>10K)? I've implemented the Randomized SVD as given in "Halko, N., Martinsson, P. G., Shkolnisky, Y., & Tygert, M. (2010). An algorithm for the principal component analysis of large data sets. Arxiv preprint
4,649
Best PCA algorithm for huge number of features (>10K)?
you could trying using a couple of options. 1- Penalized Matrix Decomposition. You apply some penalty constraints on the u's and v's to get some sparsity. Quick algorithm that has been used on genomics data See Whitten Tibshirani. They also have an R-pkg. " A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis." 2- Randomized SVD. Since SVD is a master algorithm, find a very quick approximation might be desirable, especially for exploratory analysis. Using randomized SVD, you can do PCA on huge datasets. See Martinsson, Rokhlin, and Tygert "A randomized algorithm for the decomposition of matrices". Tygert has code for a very fast implementation of PCA. Below is a simple implementation of randomized SVD in R. ransvd = function(A, k=10, p=5) { n = nrow(A) y = A %*% matrix(rnorm(n * (k+p)), nrow=n) q = qr.Q(qr(y)) b = t(q) %*% A svd = svd(b) list(u=q %*% svd$u, d=svd$d, v=svd$v) }
Best PCA algorithm for huge number of features (>10K)?
you could trying using a couple of options. 1- Penalized Matrix Decomposition. You apply some penalty constraints on the u's and v's to get some sparsity. Quick algorithm that has been used on genomic
Best PCA algorithm for huge number of features (>10K)? you could trying using a couple of options. 1- Penalized Matrix Decomposition. You apply some penalty constraints on the u's and v's to get some sparsity. Quick algorithm that has been used on genomics data See Whitten Tibshirani. They also have an R-pkg. " A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis." 2- Randomized SVD. Since SVD is a master algorithm, find a very quick approximation might be desirable, especially for exploratory analysis. Using randomized SVD, you can do PCA on huge datasets. See Martinsson, Rokhlin, and Tygert "A randomized algorithm for the decomposition of matrices". Tygert has code for a very fast implementation of PCA. Below is a simple implementation of randomized SVD in R. ransvd = function(A, k=10, p=5) { n = nrow(A) y = A %*% matrix(rnorm(n * (k+p)), nrow=n) q = qr.Q(qr(y)) b = t(q) %*% A svd = svd(b) list(u=q %*% svd$u, d=svd$d, v=svd$v) }
Best PCA algorithm for huge number of features (>10K)? you could trying using a couple of options. 1- Penalized Matrix Decomposition. You apply some penalty constraints on the u's and v's to get some sparsity. Quick algorithm that has been used on genomic
4,650
Best PCA algorithm for huge number of features (>10K)?
It sounds like maybe you want to use the Lanczos Algorithm. Failing that, you might want to consult Golub & Van Loan. I once coded a SVD algorithm (in SML, of all languages) from their text, and it worked reasonably well.
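In R, a Lanczos-type truncated SVD is available through the irlba package (augmented, implicitly restarted Lanczos bidiagonalization); assuming that package is installed, a minimal sketch looks like this (matrix sizes are arbitrary illustrative choices):

# install.packages("irlba")
library(irlba)

set.seed(1)
A <- matrix(rnorm(500 * 5000), nrow = 500)   # 500 samples, 5000 features
k <- 10

fit <- irlba(A, nv = k)                      # leading k singular triplets only
str(fit[c("d", "u", "v")])                   # d: k values, u: 500 x k, v: 5000 x k

# For PCA, work with column-centered data (recent irlba versions can also
# center for you, but explicit centering keeps the sketch self-contained).
Ac     <- sweep(A, 2, colMeans(A))
pcs    <- irlba(Ac, nv = k)
scores <- pcs$u %*% diag(pcs$d)              # principal component scores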
Best PCA algorithm for huge number of features (>10K)?
It sounds like maybe you want to use the Lanczos Algorithm. Failing that, you might want to consult Golub & Van Loan. I once coded a SVD algorithm (in SML, of all languages) from their text, and it wo
Best PCA algorithm for huge number of features (>10K)? It sounds like maybe you want to use the Lanczos Algorithm. Failing that, you might want to consult Golub & Van Loan. I once coded a SVD algorithm (in SML, of all languages) from their text, and it worked reasonably well.
Best PCA algorithm for huge number of features (>10K)? It sounds like maybe you want to use the Lanczos Algorithm. Failing that, you might want to consult Golub & Van Loan. I once coded a SVD algorithm (in SML, of all languages) from their text, and it wo
4,651
Best PCA algorithm for huge number of features (>10K)?
I'd suggest trying kernel PCA which has a time/space complexity dependent on the number of examples (N) rather than the number of features (P), which I think would be more suitable in your setting (P>>N). Kernel PCA basically works with the NxN kernel matrix (matrix of similarities between the data points), rather than the PxP covariance matrix, which can be hard to deal with for large P. Another good thing about kernel PCA is that it can learn non-linear projections as well if you use it with a suitable kernel. See this paper on kernel PCA.
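As a rough sketch of the mechanics (the helper name kernel_pca and the RBF bandwidth are illustrative choices, not a tuned implementation): build the NxN kernel, double-center it, and eigen-decompose that small matrix.

kernel_pca <- function(X, k = 2, sigma = 1) {
  n  <- nrow(X)
  D2 <- as.matrix(dist(X))^2                   # squared Euclidean distances
  K  <- exp(-D2 / (2 * sigma^2))               # N x N RBF kernel matrix
  H  <- diag(n) - matrix(1 / n, n, n)          # centering matrix
  Kc <- H %*% K %*% H                          # double-centered kernel
  e  <- eigen(Kc, symmetric = TRUE)
  alpha  <- e$vectors[, 1:k, drop = FALSE]     # unit-norm eigenvectors
  lambda <- pmax(e$values[1:k], 0)
  list(scores = alpha %*% diag(sqrt(lambda), k, k),   # projections of the training points
       eigenvalues = lambda)
}

set.seed(42)
X  <- matrix(rnorm(100 * 5000), 100, 5000)     # N = 100 examples, P = 5000 features
kp <- kernel_pca(X, k = 3, sigma = sqrt(5000)) # bandwidth is an arbitrary illustrative choice
dim(kp$scores)                                 # 100 x 3

With a linear kernel this reduces to ordinary PCA computed through the NxN Gram matrix.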
Best PCA algorithm for huge number of features (>10K)?
I'd suggest trying kernel PCA which has a time/space complexity dependent on the number of examples (N) rather than number of features (P), which I think would be more suitable in your setting (P>>N))
Best PCA algorithm for huge number of features (>10K)? I'd suggest trying kernel PCA which has a time/space complexity dependent on the number of examples (N) rather than number of features (P), which I think would be more suitable in your setting (P>>N)). Kernel PCA basically works with NxN kernel matrix (matrix of similarities between the data points), rather than the PxP covariance matrix which can be hard to deal with for large P. Another good thing about kernel PCA is that it can learn non-linear projections as well if you use it with a suitable kernel. See this paper on kernel PCA.
Best PCA algorithm for huge number of features (>10K)? I'd suggest trying kernel PCA which has a time/space complexity dependent on the number of examples (N) rather than number of features (P), which I think would be more suitable in your setting (P>>N))
4,652
Best PCA algorithm for huge number of features (>10K)?
I seem to recall that it is possible to perform PCA by computing the eigen-decomposition of $X^TX$ rather than $XX^T$ and then transform to get the PCs. However, I can't remember the details off-hand; it is in Jolliffe's (excellent) book and I'll look it up when I am next at work. I'd transliterate the linear algebra routines from e.g. Numerical Recipes in C, rather than use any other algorithm.
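The trick being recalled here is to eigen-decompose whichever of the two matrices is the small n×n one; with observations in rows and P >> N that is the Gram matrix of the centered data. A rough sketch in R (sizes are arbitrary, and the comparison with prcomp is only to show the scores agree up to sign):

set.seed(1)
n <- 100; p <- 20000
X  <- matrix(rnorm(n * p), n, p)
Xc <- sweep(X, 2, colMeans(X))                      # centered data, observations in rows

G <- Xc %*% t(Xc)                                   # n x n matrix instead of p x p
e <- eigen(G, symmetric = TRUE)

k <- 5
scores   <- e$vectors[, 1:k] %*% diag(sqrt(e$values[1:k]))                # PC scores (n x k)
loadings <- t(Xc) %*% e$vectors[, 1:k] %*% diag(1 / sqrt(e$values[1:k]))  # directions (p x k)

max(abs(abs(scores) - abs(prcomp(X)$x[, 1:k])))     # ~0, up to sign flips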
Best PCA algorithm for huge number of features (>10K)?
I seem to recall that it is possible to perform PCA by computing the eigen-decomposition of X^TX rather than XX^T and then transform to get the PCs. However I can't remember the details off-hand, but
Best PCA algorithm for huge number of features (>10K)? I seem to recall that it is possible to perform PCA by computing the eigen-decomposition of X^TX rather than XX^T and then transform to get the PCs. However I can't remember the details off-hand, but it is in Jolliffe's (excellent) book and I'll look it up when I am next at work. I'd transliterate the linear algebra routines from e.g. Numerical Methods in C, rather than use any other algorithm.
Best PCA algorithm for huge number of features (>10K)? I seem to recall that it is possible to perform PCA by computing the eigen-decomposition of X^TX rather than XX^T and then transform to get the PCs. However I can't remember the details off-hand, but
4,653
Best PCA algorithm for huge number of features (>10K)?
See Sam Roweis' paper, EM Algorithms for PCA and SPCA.
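A rough sketch of the noise-free EM iteration from that paper, with rows as observations (em_pca is just an illustrative name; each iteration costs O(npk) and never forms a p×p covariance matrix):

em_pca <- function(X, k, iters = 50) {
  Xc <- sweep(X, 2, colMeans(X))                 # center the data (n x p)
  W  <- matrix(rnorm(ncol(Xc) * k), ncol(Xc), k) # random initial basis (p x k)
  for (it in seq_len(iters)) {
    Z <- Xc %*% W %*% solve(crossprod(W))        # E-step: latent coordinates (n x k)
    W <- t(Xc) %*% Z %*% solve(crossprod(Z))     # M-step: updated basis (p x k)
  }
  Q <- qr.Q(qr(W))                               # orthonormalize; spans the leading subspace
  list(basis = Q, scores = Xc %*% Q)             # prcomp(scores) would order the components
}

set.seed(7)
X   <- matrix(rnorm(300 * 2000), 300, 2000)
fit <- em_pca(X, k = 3)
dim(fit$basis)                                   # 2000 x 3; no p x p covariance ever formed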
Best PCA algorithm for huge number of features (>10K)?
See Sam Roweis' paper, EM Algorithms for PCA and SPCA.
Best PCA algorithm for huge number of features (>10K)? See Sam Roweis' paper, EM Algorithms for PCA and SPCA.
Best PCA algorithm for huge number of features (>10K)? See Sam Roweis' paper, EM Algorithms for PCA and SPCA.
4,654
Best PCA algorithm for huge number of features (>10K)?
There is also the bootstrap method by Fisher et al, designed for several hundred samples of high dimension. The main idea of the method is formulated as "resampling is a low-dimension transformation". So, if you have a small (several hundred) number of high-dimensional samples, then you can't get more principal components than the number of your samples. It thus makes sense to consider the samples as a parsimonious basis, project the data on the linear subspace spanned by these vectors, and calculate PCA within this smaller subspace. They also provide more details how to deal with the case when not all samples may be stored in the memory.
Best PCA algorithm for huge number of features (>10K)?
There is also the bootstrap method by Fisher et al, designed for several hundred samples of high dimension. The main idea of the method is formulated as "resampling is a low-dimension transformation"
Best PCA algorithm for huge number of features (>10K)? There is also the bootstrap method by Fisher et al, designed for several hundred samples of high dimension. The main idea of the method is formulated as "resampling is a low-dimension transformation". So, if you have a small (several hundred) number of high-dimensional samples, then you can't get more principal components than the number of your samples. It thus makes sense to consider the samples as a parsimonious basis, project the data on the linear subspace spanned by these vectors, and calculate PCA within this smaller subspace. They also provide more details how to deal with the case when not all samples may be stored in the memory.
Best PCA algorithm for huge number of features (>10K)? There is also the bootstrap method by Fisher et al, designed for several hundred samples of high dimension. The main idea of the method is formu
4,655
Multivariate linear regression vs neural network?
Neural networks can in principle model nonlinearities automatically (see the universal approximation theorem), which you would need to explicitly model using transformations (splines etc.) in linear regression. The caveat: the temptation to overfit can be (even) stronger in neural networks than in regression, since adding hidden layers or neurons looks harmless. So be extra careful to look at out-of-sample prediction performance.
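A small illustration of both points, assuming the nnet package is available (the data-generating function, network size and weight decay are arbitrary choices for the sketch): fit a nonlinear signal with (a) linear regression plus a hand-chosen polynomial expansion and (b) a one-hidden-layer network, and compare them on held-out data rather than on the training fit.

library(nnet)

set.seed(1)
n <- 400
x <- runif(n, -3, 3)
y <- sin(2 * x) + rnorm(n, sd = 0.3)
train <- sample(n, 200)
dtr <- data.frame(x = x[train],  y = y[train])
dte <- data.frame(x = x[-train], y = y[-train])

lin <- lm(y ~ poly(x, 5), data = dtr)                   # nonlinearity supplied by hand
nn  <- nnet(y ~ x, data = dtr, size = 8, linout = TRUE,
            decay = 1e-3, maxit = 500, trace = FALSE)   # nonlinearity learned

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
rmse(dte$y, predict(lin, dte))                   # judge both on held-out data,
rmse(dte$y, predict(nn,  dte))                   # not on the training error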
Multivariate linear regression vs neural network?
Neural networks can in principle model nonlinearities automatically (see the universal approximation theorem), which you would need to explicitly model using transformations (splines etc.) in linear r
Multivariate linear regression vs neural network? Neural networks can in principle model nonlinearities automatically (see the universal approximation theorem), which you would need to explicitly model using transformations (splines etc.) in linear regression. The caveat: the temptation to overfit can be (even) stronger in neural networks than in regression, since adding hidden layers or neurons looks harmless. So be extra careful to look at out-of-sample prediction performance.
Multivariate linear regression vs neural network? Neural networks can in principle model nonlinearities automatically (see the universal approximation theorem), which you would need to explicitly model using transformations (splines etc.) in linear r
4,656
Multivariate linear regression vs neural network?
You mention linear regression. This is related to logistic regression, which has a similar fast optimization algorithm. If you have bounds on the target values, such as with a classification problem, you can view logistic regression as a generalization of linear regression. Neural networks are strictly more general than logistic regression on the original inputs, since that corresponds to a skip-layer network (with connections directly connecting the inputs with the outputs) with $0$ hidden nodes. When you add features like $x^3$, this is similar to choosing weights for a few hidden nodes in a single hidden layer. There isn't exactly a $1-1$ correspondence, since to model a function like $x^3$ with sigmoids may take more than one hidden neuron. When you train a neural network, you let it find its own input-to-hidden weights, which has the potential to be better. It may also take more time and it may be inconsistent. You can start with an approximation to logistic regression with extra features, and train the input-to-hidden weights slowly, and this should do better than logistic regression with extra features eventually. Depending on the problem, the training time may be negligible or prohibitive. One intermediate strategy is to choose a large number of random nodes, similar to what happens when you initialize a neural network, and fix the input-to-hidden weights. The optimization over the *-to-output weights stays linear. This is called an extreme learning machine. It works at least as well as the original logistic regression.
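The extreme learning machine in the last paragraph is simple enough to sketch in a few lines of R (elm_fit and elm_predict are names I made up, and the ridge penalty is just one common way to keep the linear solve stable):

elm_fit <- function(X, y, n_hidden = 50, lambda = 1e-2) {
  W <- matrix(rnorm(ncol(X) * n_hidden), ncol(X), n_hidden)  # random, then frozen
  b <- rnorm(n_hidden)
  H <- 1 / (1 + exp(-(X %*% W + matrix(b, nrow(X), n_hidden, byrow = TRUE))))
  beta <- solve(crossprod(H) + lambda * diag(n_hidden), crossprod(H, y))  # linear solve only
  list(W = W, b = b, beta = beta)
}
elm_predict <- function(fit, X) {
  H <- 1 / (1 + exp(-(X %*% fit$W + matrix(fit$b, nrow(X), length(fit$b), byrow = TRUE))))
  as.numeric(H %*% fit$beta > 0.5)
}

set.seed(2)
n <- 500
X <- matrix(rnorm(n * 2), n, 2)
y <- as.numeric(X[, 1]^2 + X[, 2]^2 > 1)       # not linearly separable in the raw inputs
fit <- elm_fit(X, y)
mean(elm_predict(fit, X) == y)                 # typically around 0.9+, versus roughly 0.6
                                               # for a purely linear fit on these inputs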
Multivariate linear regression vs neural network?
You mention linear regression. This is related to logistic regression, which has a similar fast optimization algorithm. If you have bounds on the target values, such as with a classification problem,
Multivariate linear regression vs neural network? You mention linear regression. This is related to logistic regression, which has a similar fast optimization algorithm. If you have bounds on the target values, such as with a classification problem, you can view logistic regression as a generalization of linear regression. Neural networks are strictly more general than logistic regression on the original inputs, since that corresponds to a skip-layer network (with connections directly connecting the inputs with the outputs) with $0$ hidden nodes. When you add features like $x^3$, this is similar to choosing weights to a few hidden nodes in a single hidden layer. There isn't exactly a $1-1$ correspondence, since to model a function like $x^3$ with sigmoids may take more than one hidden neuron. When you train a neural network, you let it find its own input-to-hidden hidden weights, which has the potential to be better. It may also take more time and it may be inconsistent. You can start with an approximation to logistic regression with extra features, and train the input-to-hidden weights slowly, and this should do better than logistic regression with extra features eventually. Depending on the problem, the training time may be negligible or prohibitive. One intermediate strategy is to choose a large number of random nodes, similar to what happens when you initialize a neural network, and fix the input-to-hidden weights. The optimization over the *-to-output weights stays linear. This is called an extreme learning machine. It works at least as well as the original logistic regression.
Multivariate linear regression vs neural network? You mention linear regression. This is related to logistic regression, which has a similar fast optimization algorithm. If you have bounds on the target values, such as with a classification problem,
4,657
Multivariate linear regression vs neural network?
Linear regression aims to separate data that is linearly separable. Yes, you may use additional features such as third-degree polynomials, but in doing so you have again imposed assumptions about the data you have, since you define the structure of the objective function. In a neural network, the input layer generally creates the linear separators for your data, a hidden layer ANDs the regions that bound some classes, and the last layer ORs all these regions. That way, all the data you have can be classified in a non-linear way, and this whole process runs on internally learned weights and predefined functions. In addition, increasing the number of features for linear regression runs up against the "curse of dimensionality". Also, some applications need probabilistic outputs rather than constant numbers. Thus a NN with a logistic output function will be more suitable for such purposes (of course logistic regression also suffers from the issues I mentioned).
Multivariate linear regression vs neural network?
Linear Regression aims to separate the data that is linearly separable, yes you may use additional third> degree polynomials but in that way you indicated again some assumptions about the data you hav
Multivariate linear regression vs neural network? Linear Regression aims to separate the data that is linearly separable, yes you may use additional third> degree polynomials but in that way you indicated again some assumptions about the data you have since you define the objective function's structure. In Neural Net. generally you have input layer that creates the linear separators for the data you have and hidden layer ANDs the regions that bounds some classes and last layer ORs all these regions. In that way all the data you have is able to be classified with non linear way, also all these process is going with internally learned weights and defined functions. In addition increasing the feature number for Linear Regression is opposed to "Curse of dimensionality". In addition some applications need more probabilistic results than constant numbers as output. Thus a NN with logistic function will be more suitable for such purposes (Of course there is also logistic regression suffers form the facts I told).
Multivariate linear regression vs neural network? Linear Regression aims to separate the data that is linearly separable, yes you may use additional third> degree polynomials but in that way you indicated again some assumptions about the data you hav
4,658
Empirical justification for the one standard error rule when using cross-validation
For an empirical justification, have a look at page 12 on these Tibshirani data-mining course notes, which shows the CV error as a function of lambda for a particular modeling problem. The suggestion seems to be that, below a certain value, all lambdas give about the same CV error. This makes sense because, unlike ridge regression, LASSO is not typically used only, or even primarily, to improve prediction accuracy. Its main selling point is that it makes models simpler and more interpretable by eliminating the least relevant/valuable predictors. Now, to understand the one standard error rule, let's think about the family of models we get from varying $\lambda$. Tibshirani's figure is telling us that we have a bunch of medium-to-high complexity models that are about the same in predictive accuracy, and a bunch of low-complexity models that are not good at prediction. What should we choose? Well, if we're using $L_1$, we're probably interested in a parsimonious model, so we'd probably prefer the simplest model that explains our data reasonably well (as Einstein supposedly said, "as simple as possible but no simpler"). So how about the lowest complexity model that is "about as good" as all those high complexity models? And what's a good way of measuring "about as good"? One standard error.
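For readers who want to see the rule in action, here is a short sketch assuming the glmnet package is available: cv.glmnet reports both lambda.min (the CV-error minimizer) and lambda.1se (the largest lambda whose CV error is within one standard error of that minimum, i.e. the one-standard-error pick). The simulated design with five true predictors is purely illustrative.

library(glmnet)

set.seed(1)
n <- 100; p <- 200
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(1, 5), rep(0, p - 5))            # only 5 predictors truly matter
y <- X %*% beta + rnorm(n)

cvfit <- cv.glmnet(X, y)
c(lambda.min = cvfit$lambda.min, lambda.1se = cvfit$lambda.1se)
plot(cvfit)                                    # CV curve with +/- 1 SE bands

sum(as.matrix(coef(cvfit, s = "lambda.min"))[-1] != 0)   # more variables kept
sum(as.matrix(coef(cvfit, s = "lambda.1se"))[-1] != 0)   # simpler model, yet its CV error
                                                         # is within 1 SE of the minimum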
Empirical justification for the one standard error rule when using cross-validation
For an empirical justification, have a look at page 12 on these Tibshirani data-mining course notes, which shows the CV error as a function of lambda for a particular modeling problem. The suggestion
Empirical justification for the one standard error rule when using cross-validation For an empirical justification, have a look at page 12 on these Tibshirani data-mining course notes, which shows the CV error as a function of lambda for a particular modeling problem. The suggestion seems to be that, below a certain value, all lambdas give about the same CV error. This makes sense because, unlike ridge regression, LASSO is not typically used only, or even primarily, to improve prediction accuracy. Its main selling point is that it makes models simpler and more interpretable by eliminating the least relevant/valuable predictors. Now, to understand the one standard error rule, let's think about the family of models we get from varying $\lambda$. Tibshirani's figure is telling us that we have a bunch of medium-to-high complexity models that are about the same in predictive accuracy, and a bunch of low-complexity models that are not good at prediction. What should we choose? Well, if we're using $L_1$, we're probably interested in a parsimonious model, so we'd probably prefer the simplest model that explains our data reasonably well (as Einstein supposedly said, "as simple as possible but no simpler"). So how about the lowest complexity model that is "about as good" as all those high complexity models? And what's a good way of measuring "about as good"? One standard error.
Empirical justification for the one standard error rule when using cross-validation For an empirical justification, have a look at page 12 on these Tibshirani data-mining course notes, which shows the CV error as a function of lambda for a particular modeling problem. The suggestion
4,659
Empirical justification for the one standard error rule when using cross-validation
The following is not an empirical study, which is why I originally wanted to post it as a comment, not an answer - but it really turns out to be too long for a comment. Cawley & Talbot (J of Machine Learning Research, 2010) draw attention to the difference between overfitting during the model selection phase and overfitting during the model fitting phase. The second kind of overfitting is the one most people are familiar with: given a particular model, we don't want to overfit it, i.e., to fit it too closely to the particular idiosyncrasies of the single data set we typically have. (This is where shrinkage/regularization can help, by trading a small increase in bias against a large decrease in variance.) However, Cawley & Talbot argue that we can overfit just as well during the model selection stage. After all, we still have typically only a single data set, and we are deciding between different models of varying complexity. Evaluating each candidate model in order to select one usually involves fitting that model, which can be done using regularization or not. But this evaluation in itself is again a random variable, because it depends on the specific data set we have. So our choice of an "optimal" model can in itself exhibit a bias, and will exhibit a variance, as depending on the specific data set from all data sets we could have drawn from the population. Cawley & Talbot therefore argue that simply choosing the model that performs best in this evaluation may well be a selection rule with small bias - but it may exhibit large variance. That is, given different training datasets from the same data generating process (DGP), this rule may select very different models, which would then be fitted and used for predicting in new datasets that again follow the same DGP. In this light, restricting the variance of the model selection procedure but incurring a small bias towards simpler models may yield smaller out-of-sample errors. Cawley & Talbot don't connect this explicitly to the one standard error rule, and their section on "regularizing model selection" is very short. However, the one standard error rule would perform exactly this regularization, and take the relationship between the variance in model selection and the variance of the out-of-bag cross-validation error into account. For instance, below is Figure 2.3 from Statistical Learning with Sparsity by Hastie, Tibshirani & Wainwright (2015). Model selection variance is given by the convexity of the black line at its minimum. Here, the minimum is not very pronounced, and the line is rather weakly convex, so model selection is probably rather uncertain with a high variance. And the variance of the OOB CV error estimate is of course given by the multiple light blue lines indicating standard errors.
Empirical justification for the one standard error rule when using cross-validation
The following is not an empirical study, which is why I originally wanted to post it as a comment, not an answer - but it really turns out to be too long for a comment. Cawley & Talbot (J of Machine L
Empirical justification for the one standard error rule when using cross-validation The following is not an empirical study, which is why I originally wanted to post it as a comment, not an answer - but it really turns out to be too long for a comment. Cawley & Talbot (J of Machine Learning Research, 2010) draw attention to the difference between overfitting during the model selection phase and overfitting during the model fitting phase. The second kind of overfitting is the one most people are familiar with: given a particular model, we don't want to overfit it, i.e., to fit it too closely to the particular idiosyncrasies of the single data set we typically have. (This is where shrinkage/regularization can help, by trading a small increase in bias against a large decrease in variance.) However, Cawley & Talbot argue that we can overfit just as well during the model selection stage. After all, we still have typically only a single data set, and we are deciding between different models of varying complexity. Evaluating each candidate model in order to select one usually involves fitting that model, which can be done using regularization or not. But this evaluation in itself is again a random variable, because it depends on the specific data set we have. So our choice of an "optimal" model can in itself exhibit a bias, and will exhibit a variance, as depending on the specific data set from all data sets we could have drawn from the population. Cawley & Talbot therefore argue that simply choosing the model that performs best in this evaluation may well be a selection rule with small bias - but it may exhibit large variance. That is, given different training datasets from the same data generating process (DGP), this rule may select very different models, which would then be fitted and used for predicting in new datasets that again follow the same DGP. In this light, restricting the variance of the model selection procedure but incurring a small bias towards simpler models may yield smaller out-of-sample errors. Cawley & Talbot don't connect this explicitly to the one standard error rule, and their section on "regularizing model selection" is very short. However, the one standard error rule would perform exactly this regularization, and take the relationship between the variance in model selection and the variance of the out-of-bag cross-validation error into account. For instance, below is Figure 2.3 from Statistical Learning with Sparsity by Hastie, Tibshirani & Wainwright (2015). Model selection variance is given by the convexity of the black line at its minimum. Here, the minimum is not very pronounced, and the line is rather weakly convex, so model selection is probably rather uncertain with a high variance. And the variance of the OOB CV error estimate is of course given by the multiple light blue lines indicating standard errors.
Empirical justification for the one standard error rule when using cross-validation The following is not an empirical study, which is why I originally wanted to post it as a comment, not an answer - but it really turns out to be too long for a comment. Cawley & Talbot (J of Machine L
4,660
Empirical justification for the one standard error rule when using cross-validation
The number of variables selected by the Lasso estimator is decided by a penalty value $\lambda$. The larger $\lambda$ is, the smaller the set of selected variables. Let $\hat S(\lambda)$ be the set of selected variables using $\lambda$ as the penalty. Let $\lambda^\star$ be the penalty selected using the minimum of the cross-validation function. It can be proved that $P(S_0 \subset \hat S(\lambda^\star))\rightarrow 1$, where $S_0$ is the set of variables whose true coefficients are nonzero. (The set of true variables is contained, typically strictly, in the set estimated using the cross-validation minimum as the penalty.) This result should be in Statistics for High-Dimensional Data by Bühlmann and van de Geer. The penalty value $\lambda$ is often chosen through cross-validation; this means that with high probability too many variables are selected. To reduce the number of selected variables the penalty is increased a little bit using the one standard error rule.
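A small simulation of this point (glmnet assumed available; the design, signal strength and number of repetitions are arbitrary illustrative choices): at the CV minimum the selected set tends to contain the true support plus extra noise variables, and the one-standard-error lambda trims many of those false positives.

library(glmnet)

set.seed(3)
p <- 100; n <- 80; S0 <- 1:5                    # true support: the first five variables
one_run <- function() {
  X <- matrix(rnorm(n * p), n, p)
  y <- rowSums(X[, S0]) + rnorm(n)
  cvfit <- cv.glmnet(X, y)
  sel <- function(s) which(as.matrix(coef(cvfit, s = s))[-1, 1] != 0)
  c(covers_min = all(S0 %in% sel("lambda.min")),
    covers_1se = all(S0 %in% sel("lambda.1se")),
    extras_min = length(setdiff(sel("lambda.min"), S0)),
    extras_1se = length(setdiff(sel("lambda.1se"), S0)))
}
rowMeans(replicate(20, one_run()))
# Typically both choices keep the true variables, but lambda.1se picks up far
# fewer spurious ones than lambda.min.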
Empirical justification for the one standard error rule when using cross-validation
The number of variables selected by the Lasso estimator is decided by a penalty value $\lambda$. The larger is $\lambda$, the smaller is the set of selected variables. Let $\hat S(\lambda)$ be the
Empirical justification for the one standard error rule when using cross-validation The number of variables selected by the Lasso estimator is decided by a penalty value $\lambda$. The larger is $\lambda$, the smaller is the set of selected variables. Let $\hat S(\lambda)$ be the set of selected variables using as penalty $\lambda$. Let $\lambda^ \star$ be the penalty selected using the minimum of the cross validation function. It can be proved that $P(S_0 \subset \hat S(\lambda^\star))\rightarrow 1$. Where $S_0$ is the set of the variables that are really non 0. (The set of true variable is content strictly in the set estimated using as penalty the minimum of the cross-validation.) This should be reported in Statistics for high dimensional data by Bühlmann and van de Geer. The penalty value $\lambda$ is often chosen through cross-validation; this means that with high probability too many variables are selected. To reduce the number of selected variables the penalty is increased a little bit using the one standard error rule.
Empirical justification for the one standard error rule when using cross-validation The number of variables selected by the Lasso estimator is decided by a penalty value $\lambda$. The larger is $\lambda$, the smaller is the set of selected variables. Let $\hat S(\lambda)$ be the
4,661
What do "endogeneity" and "exogeneity" mean substantively?
JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong: the model you wrote down and estimated does not properly capture the way causation works in the real world. When you write: \begin{equation} Y_i=\beta_0+\beta_1X_i+\epsilon_i \end{equation} you can think of this equation in a number of ways. You could think of it as a convenient way of predicting $Y$ based on $X$'s values. You could think of it as a convenient way of modeling $E\{Y|X\}$. In either of these cases, there is no such thing as endogeneity, and you don't need to worry about it. However, you can also think of the equation as embodying causation. You can think of $\beta_1$ as the answer to the question: "What would happen to $Y$ if I reached into this system and experimentally increased $X$ by 1?" If you want to think about it that way, using OLS to estimate it amounts to assuming that: (1) $X$ causes $Y$; (2) $\epsilon$ causes $Y$; (3) $\epsilon$ does not cause $X$; (4) $Y$ does not cause $X$; (5) nothing which causes $\epsilon$ also causes $X$. Failure of any one of 3-5 will generally result in $E\{\epsilon|X\}\ne0$, or, not quite equivalently, ${\rm Cov}(X,\epsilon)\ne0$. Instrumental variables is a way of correcting for the fact that you got the causation wrong (by making another, different, causal assumption). A perfectly conducted randomized controlled trial is a way of forcing 3-5 to be true. If you pick $X$ randomly, then it sure ain't caused by $Y$, $\epsilon$, or anything else. So-called "natural experiment" methods are attempts to find special circumstances out in the world where 3-5 are true even when we don't think 3-5 are usually true. In JohnRos's example, to calculate the wage value of education, you need a causal interpretation of $\beta_1$, but there are good reasons to believe that 3 or 5 is false. Your confusion is understandable, though. It is very typical in courses on the linear model for the instructor to use the causal interpretation of $\beta_1$ I gave above while pretending not to be introducing causation, pretending that "it's all just statistics." It's a cowardly lie, but it's also very common. In fact, it is part of a larger phenomenon in biomedicine and the social sciences. It is almost always the case that we are trying to determine the causal effect of $X$ on $Y$---that's what science is about after all. On the other hand, it is also almost always the case that there is some story you can tell leading to a conclusion that one of 3-5 is false. So, there is a kind of practiced, fluid, equivocating dishonesty in which we swat away objections by saying that we're just doing associational work and then sneak the causal interpretation back elsewhere (normally in the introduction and conclusion sections of the paper). If you are really interested, the guy to read is Judea Pearl. James Heckman is also good.
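A toy simulation of assumption 5 failing and being repaired by an instrument may help fix ideas (all numbers here are made up for illustration): u is an unobserved common cause of X and Y, while z is randomly assigned and affects Y only through X. Plain OLS is biased; a manually coded two-stage least squares (regress X on z, then Y on the fitted X) recovers the causal coefficient.

set.seed(1)
n <- 10000
z <- rnorm(n)                    # instrument: assigned "from outside"
u <- rnorm(n)                    # unobserved common cause of X and Y (breaks assumption 5)
x <- z + u + rnorm(n)
y <- 2 * x + 3 * u + rnorm(n)    # true causal effect of X on Y is 2

coef(lm(y ~ x))["x"]             # OLS: around 3, biased upward because u causes both

xhat <- fitted(lm(x ~ z))        # first stage: the part of X driven by the instrument
coef(lm(y ~ xhat))["xhat"]       # second stage: close to the true value 2
# (the naive second-stage standard errors are not valid; use a proper IV routine in practice)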
What do "endogeneity" and "exogeneity" mean substantively?
JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong. That the model you wrote down and estimated does not properly capture the way causation works in the r
What do "endogeneity" and "exogeneity" mean substantively? JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong. That the model you wrote down and estimated does not properly capture the way causation works in the real world. When you write: \begin{equation} Y_i=\beta_0+\beta_1X_i+\epsilon_i \end{equation} you can think of this equation in a number of ways. You could think of it as a convenient way of predicting $Y$ based on $X$'s values. You could think of it as a convenient way of modeling $E\{Y|X\}$. In either of these cases, there is no such thing as endogeneity, and you don't need to worry about it. However, you can also think of the equation as embodying causation. You can think of $\beta_1$ as the answer to the question: "What would happen to $Y$ if I reached in to this system and experimentally increased $X$ by 1?" If you want to think about it that way, using OLS to estimate it amounts to assuming that: $X$ causes $Y$ $\epsilon$ causes $Y$ $\epsilon$ does not cause $X$ $Y$ does not cause $X$ Nothing which causes $\epsilon$ also causes $X$ Failure of any one of 3-5 will generally result in $E\{\epsilon|X\}\ne0$, or, not quite equivalently, ${\rm Cov}(X,\epsilon)\ne0$. Instrumental variables is a way of correcting for the fact that you got the causation wrong (by making another, different, causal assumption). A perfectly conducted randomized controlled trial is a way of forcing 3-5 to be true. If you pick $X$ randomly, then it sure ain't caused by $Y$, $\epsilon$, or anything else. So-called "natural experiment" methods are attempts to find special circumstances out in the world where 3-5 are true even when we don't think 3-5 are usually true. In JohnRos's example, to calculate the wage value of education, you need a causal interpretation of $\beta_1$, but there are good reasons to believe that 3 or 5 is false. Your confusion is understandable, though. It is very typical in courses on the linear model for the instructor to use the causal interpretation of $\beta_1$ I gave above while pretending not to be introducing causation, pretending that "it's all just statistics." It's a cowardly lie, but it's also very common. In fact, it is part of a larger phenomenon in biomedicine and the social sciences. It is almost always the case that we are trying to determine the causal effect of $X$ on $Y$---that's what science is about after all. On the other hand, it is also almost always the case that there is some story you can tell leading to a conclusion that one of 3-5 is false. So, there is a kind of practiced, fluid, equivocating dishonesty in which we swat away objections by saying that we're just doing associational work and then sneak the causal interpretation back elsewhere (normally in the introduction and conclusion sections of the paper). If you are really interested, the guy to read is Judea Perl. James Heckman is also good.
What do "endogeneity" and "exogeneity" mean substantively? JohnRos's answer is very good. In plain English, endogeneity means you got the causation wrong. That the model you wrote down and estimated does not properly capture the way causation works in the r
4,662
What do "endogeneity" and "exogeneity" mean substantively?
Let me use an example: Say you want to quantify the (causal) effect of education on income. You take education years and income data and regress one against the other. Did you recover what you wanted? Probably not! This is because the income is also caused by things other than education, but which are correlated to education. Let's call them "skill": We can safely assume that education years are affected by "skill", as the more skilled you are, the easier it is to gain education. So, if you regress education years on income, the estimator for the education effect absorbs the effect of "skill" and you get an overly optimistic estimate of return to education. This is to say, education's effect on income is (upward) biased because education is not exogenous to income. Endogeneity is only a problem if you want to recover causal effects (unlike mere correlations). Also- if you can design an experiment, you can guarantee that ${\rm Cov}(X,\epsilon)=0$ by random assignment. Sadly, this is typically impossible in social sciences.
What do "endogeneity" and "exogeneity" mean substantively?
Let me use an example: Say you want to quantify the (causal) effect of education on income. You take education years and income data and regress one against the other. Did you recover what you wanted?
What do "endogeneity" and "exogeneity" mean substantively? Let me use an example: Say you want to quantify the (causal) effect of education on income. You take education years and income data and regress one against the other. Did you recover what you wanted? Probably not! This is because the income is also caused by things other than education, but which are correlated to education. Let's call them "skill": We can safely assume that education years are affected by "skill", as the more skilled you are, the easier it is to gain education. So, if you regress education years on income, the estimator for the education effect absorbs the effect of "skill" and you get an overly optimistic estimate of return to education. This is to say, education's effect on income is (upward) biased because education is not exogenous to income. Endogeneity is only a problem if you want to recover causal effects (unlike mere correlations). Also- if you can design an experiment, you can guarantee that ${\rm Cov}(X,\epsilon)=0$ by random assignment. Sadly, this is typically impossible in social sciences.
What do "endogeneity" and "exogeneity" mean substantively? Let me use an example: Say you want to quantify the (causal) effect of education on income. You take education years and income data and regress one against the other. Did you recover what you wanted?
4,663
What do "endogeneity" and "exogeneity" mean substantively?
User25901 is looking for a straight-forward, simple, real-world explanation of what the terms exogenous and endogenous mean. To respond with arcane examples or mathematical definitions is to not really answer the question that was asked. How do I 'get a gut understanding of these two terms?' Here's what I came up with: Exo - external, outside; Endo - internal, inside; -genous - originating in. Exogenous: A variable is exogenous to a model if it is not determined by the other parameters and variables in the model, but is set externally, and any changes to it come from external forces. Endogenous: A variable is endogenous in a model if it is at least partly a function of other parameters and variables in the model. So exogeneity is a property of a right-hand-side cause: it acts on the other variables in the model, rather than being derived from them.
What do "endogeneity" and "exogeneity" mean substantively?
User25901 is looking for a straight-forward, simple, real-world explanation of what the terms exogenous and endogenous mean. To respond with arcane examples or mathematical definitions is to not real
What do "endogeneity" and "exogeneity" mean substantively? User25901 is looking for a straight-forward, simple, real-world explanation of what the terms exogenous and endogenous mean. To respond with arcane examples or mathematical definitions is to not really answer the question that was asked. How do I, 'get a gut understanding of these two terms?' Here's what I came up with: Exo - external, outside Endo - internal, inside -genous - originating in Exogenous: A variable is exogenous to a model if it is not determined by other parameters and variables in the model, but is set externally and any changes to it come from external forces. Endogenous: A variable is endogenous in a model if it is at least partly function of other parameters and variables in a model. Therefore, exo-geneity and exo-geneocity is the reflexive adjective of the right-hand-side cause, describing its effect on others, rather than its own integrality or derivations.
What do "endogeneity" and "exogeneity" mean substantively? User25901 is looking for a straight-forward, simple, real-world explanation of what the terms exogenous and endogenous mean. To respond with arcane examples or mathematical definitions is to not real
4,664
What do "endogeneity" and "exogeneity" mean substantively?
The OLS regression, by construction, gives $X'\epsilon=0$. Actually that is not correct. It gives $X'\hat\epsilon=0$ by construction. Your estimated residuals are uncorrelated with your regressors, but your estimated residuals are "wrong" in a sense. If the true data-generating-process operates by $Y=\alpha +\beta X + \gamma Z + {\rm noise}$, and $Z$ is correlated with $X$, then $X'{\rm noise} \neq 0$ if you fit a regression leaving out $Z$. Of course, the estimated residuals will be uncorrelated with $X$. They always are, the same way that $\log(e^x)=x$. It is just a mathematical fact. This is the omitted variable bias. Say that $I$ is randomly assigned. Maybe it is the day of week that people are born. Maybe it is an actual experiment. It is anything uncorrelated with $Y$ that predicts $X$. You can then use the randomness of $I$ to predict $X$, and then use that predicted $X$ to fit a model to $Y$. That is two stage least squares, which is almost the same as IV.
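A tiny numerical illustration of this distinction (the coefficients are arbitrary): the estimated residuals are orthogonal to $X$ by construction even though omitting $Z$ biases the coefficient, because the true noise, which contains $Z$'s effect, is correlated with $X$.

set.seed(2)
n <- 50000
z <- rnorm(n)
x <- 0.8 * z + rnorm(n)
y <- x + z + rnorm(n)            # true coefficient on x is 1; z will be omitted

fit <- lm(y ~ x)
coef(fit)["x"]                   # about 1.5: omitted-variable bias
cor(x, resid(fit))               # essentially 0 -- always true, by construction
cor(x, y - x)                    # the *true* noise (which contains z) is clearly
                                 # correlated with x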
What do "endogeneity" and "exogeneity" mean substantively?
The OLS regression, by construction, gives $X'\epsilon=0$. Actually that is not correct. It gives $X'\hat\epsilon=0$ by construction. Your estimated residuals are uncorrelated with your regressors,
What do "endogeneity" and "exogeneity" mean substantively? The OLS regression, by construction, gives $X'\epsilon=0$. Actually that is not correct. It gives $X'\hat\epsilon=0$ by construction. Your estimated residuals are uncorrelated with your regressors, but your estimated residuals are "wrong" in a sense. If the true data-generating-process operates by $Y=\alpha +\beta X + \gamma Z + {\rm noise}$, and $Z$ is correlated with $X$, then $X'{\rm noise} \neq 0$ if you fit a regression leaving out $Z$. Of course, the estimated residuals will be uncorrelated with $X$. They always are, the same way that $\log(e^x)=x$. It is just a mathematical fact. This is the omitted variable bias. Say that $I$ is randomly assigned. Maybe it is the day of week that people are born. Maybe it is an actual experiment. It is anything uncorrelated with $Y$ that predicts $X$. You can then use the randomness of $I$ to predict $X$, and then use that predicted $X$ to fit a model to $Y$. That is two stage least squares, which is almost the same as IV.
What do "endogeneity" and "exogeneity" mean substantively? The OLS regression, by construction, gives $X'\epsilon=0$. Actually that is not correct. It gives $X'\hat\epsilon=0$ by construction. Your estimated residuals are uncorrelated with your regressors,
4,665
What do "endogeneity" and "exogeneity" mean substantively?
Think of a system as $x,y$. When we're trying to explain it by a model $y=f(x)+\varepsilon$, is the error $\varepsilon$ a part of the system or not? When the error is not part of the system, we call it exogenous, i.e. it's added to $f(x)$ after $x$ had its input into the system. When the error is a part of the system, we call it endogenous, i.e. not only does it enter $y$ after $f(x)$, it also enters $x$ itself somehow before $f(.)$ is applied to it. This makes endogenous models troublesome, for they interfere with our attempts to estimate the function $f(.)$.
What do "endogeneity" and "exogeneity" mean substantively?
Think of a system as $x,y$. When we're trying to explain it by a model $y=f(x)+\varepsilon$, is the error $\varepsilon$ a part of the system or not? When the error is not part of the system, we call i
What do "endogeneity" and "exogeneity" mean substantively? Think of a system as $x,y$. When we're trying to explain it by a model $y=f(x)+\varepsilon$, is the error $\varepsilon$ a part of the system or not? When the error is not part of the system, we call it exogenous, i.e. it's added to $f(x)$ after $x$ had its input into the system. When the error is a part of the system, we call it endogenous, i.e. not only it enters $y$ after $f(x)$, it also enters $x$ itself somehow before $f(.)$ is applied to it. This makes $endogenous$ models troublesome, for they interfere with our attempts to estimate the function $f(.)$.
What do "endogeneity" and "exogeneity" mean substantively? Think of a system as $x,y$. When we're trying to explain it by a model $y=f(x)+\varepsilon$, is the error $\varepsilon$ a part of the system or not? When the error is not part of the system, we call i
4,666
What do "endogeneity" and "exogeneity" mean substantively?
In regression we want to capture the quantitative impact of an independent variable (which we assume is exogenous and not being itself dependent on something else) on an identified dependent variable. We want to know what net effect an exogenous variable has on a dependent variable- meaning the independent variable should be free of any influence from another variable. A quick way to see if the regression is suffering from the problem of endogeneity is to check the correlation between the independent variable and the residuals. But this is just a rough check otherwise formal tests of endogeneity need to be undertaken.
What do "endogeneity" and "exogeneity" mean substantively?
In regression we want to capture the quantitative impact of an independent variable (which we assume is exogenous and not being itself dependent on something else) on an identified dependent variable.
What do "endogeneity" and "exogeneity" mean substantively? In regression we want to capture the quantitative impact of an independent variable (which we assume is exogenous and not being itself dependent on something else) on an identified dependent variable. We want to know what net effect an exogenous variable has on a dependent variable- meaning the independent variable should be free of any influence from another variable. A quick way to see if the regression is suffering from the problem of endogeneity is to check the correlation between the independent variable and the residuals. But this is just a rough check otherwise formal tests of endogeneity need to be undertaken.
What do "endogeneity" and "exogeneity" mean substantively? In regression we want to capture the quantitative impact of an independent variable (which we assume is exogenous and not being itself dependent on something else) on an identified dependent variable.
4,667
Debunking wrong CLT statement
This is quite a ubiquitous misunderstanding of the central limit theorem, which I have also encountered in my statistical teaching. Over the years I have come across this problem so often that I have developed a Socratic method to deal with it. I identify a student who has accepted this idea and then engage the student to tease out what it would logically imply. It is fairly simple to get to the reductio ad absurdum of the false version of the theorem, which is that every sequence of IID random variables has a normal distribution. A typical conversation would go something like this.

Teacher: I noticed in this assignment question that you said that because $n$ is large, the data are approximately normally distributed. Can you take me through your reasoning for that bit?
Student: Is that wrong?
Teacher: I don't know. Let's have a look at it.
Student: Well, I used that theorem you talked about in class; that main one you mentioned a bunch of times. I forget the name.
Teacher: The central limit theorem?
Student: Yeah, the central limit theorem.
Teacher: Great, and when does that theorem apply?
Student: I think if the variables are IID.
Teacher: And have finite variance.
Student: Yeah, and finite variance.
Teacher: Okay, so the random variables have some fixed distribution with finite variance, is that right?
Student: Yeah.
Teacher: And the distribution isn't changing or anything?
Student: No, they're IID with a fixed distribution.
Teacher: Okay great, so let me see if I can state the theorem. The central limit theorem says that if you have an IID sequence of random variables with finite variance, and you take a sample of $n$ of them, then as that sample size $n$ gets large the distribution of the random variables converges to a normal distribution. Is that right?
Student: Yeah, I think so.
Teacher: Okay great, so let's think about what that would mean. Suppose I have a sequence like that. If I take, say, a thousand sample values, what is the distribution of those random variables?
Student: It's approximately a normal distribution.
Teacher: How close?
Student: Pretty close I think.
Teacher: Okay, what if I take a billion sample values. How close now?
Student: Really close I'd say.
Teacher: And if we have a sequence of these things, then in theory we can take $n$ as high as we want, can't we? So we can make the distribution as close to a normal distribution as we want.
Student: Yeah.
Teacher: So let's say we take $n$ big enough that we're happy to say that the random variables basically have a normal distribution. And that's a fixed distribution, right?
Student: Yeah.
Teacher: And they're IID, right? These random variables are IID?
Student: Yeah, they're IID.
Teacher: Okay, so they all have the same distribution.
Student: Yeah.
Teacher: Okay, so that means the first value in the sequence, it also has a normal distribution. Is that right?
Student: Yeah. I mean, it's an approximation, but yeah, if $n$ is really large then it effectively has a normal distribution.
Teacher: Okay great. And so does the second value in the sequence, and so on, right?
Student: Yeah.
Teacher: Okay, so really, as soon as we started sampling, we were already getting values that are essentially normally distributed. We didn't really need to wait until $n$ gets large before that started happening.
Student: Hmmm. I'm not sure. That sounds wrong. The theorem says you need a large $n$, so I guess I think you can't apply it if you only sampled a small number of values.
Teacher: Okay, so let's say we are sampling a billion values. Then we have large $n$. And we've established that this means that the first few random variables in the sequence are normally distributed, to a very close approximation. If that's true, can't we just stop sampling early? Say we were going to sample a billion values, but then we stop sampling after the first value. Was that random variable still normally distributed?
Student: I think maybe it isn't.
Teacher: Okay, so at some point its distribution changes?
Student: I'm not sure. I'm a bit confused about it now.
Teacher: Hmmm, well it seems we have something strange going on here. Why don't you have another read of the material on the central limit theorem and see if you can figure out how to resolve that contradiction. Let's talk more about it then.

That is one possible approach, which seeks to reduce the false theorem down to the reductio which says that every IID sequence (with finite variance) must be composed of normal random variables. Either the student will get to this conclusion, and realise something is wrong, or they will defend against this conclusion by saying that the distribution changes as $n$ gets large (or they may handwave a bit, and you might have to lawyer them to a conclusion). Either way, this usually provokes some further thinking that can lead them to re-read the theorem. Here is another approach:

Teacher: Let's look at this another way. Suppose we have an IID sequence of random variables from some other distribution; one that is not a normal distribution. Is that possible? For example, could we have a sequence of random variables representing the outcome of a coin flip, from the Bernoulli distribution?
Student: Yeah, we can have that.
Teacher: Okay, great. And these are all IID values, so again, they all have the same distribution. So every random variable in that sequence is going to have a distribution that is not a normal distribution, right?
Student: Yeah.
Teacher: In fact, in this case, every value in the sequence will be the outcome of a coin flip, which we set as zero or one. Is that right?
Student: Yeah, as long as we label them that way.
Teacher: Okay, great. So if all the values in the sequence are zeroes or ones, no matter how many of them we sample, we are always going to get a histogram showing values at zero and one, right?
Student: Yeah.
Teacher: Okay. And do you think if we sample more and more values, we will get closer and closer to the true distribution? Like, if it is a fair coin, does the histogram eventually converge to where the relative frequency bars are the same height?
Student: I guess so. I think it does.
Teacher: I think you're right. In fact, we call that result the "law of large numbers". Anyway, it seems like we have a bit of a problem here, doesn't it? If we sample a large number of the values then the central limit theorem says we converge to a normal distribution, but it sounds like the "law of large numbers" says we actually converge to the true distribution, which isn't a normal distribution. In fact, it's a distribution that just puts probabilities on the zero value and the one value, which looks nothing like the normal distribution. So which is it?
Student: I think when $n$ is large it looks like a normal distribution.
Teacher: So describe it to me. Let's say we have flipped the coin a billion times. Describe the distribution of the outcomes and explain why that looks like a normal distribution.
Student: I'm not really sure how to do that.
Teacher: Okay. Well, do you agree that if we have a billion coin flips, all those outcomes are zeroes and ones?
Student: Yeah.
Teacher: Okay, so describe what its histogram looks like.
Student: It's just two bars on those values.
Teacher: Okay, so not "bell curve" shaped?
Student: Yeah, I guess not.
Teacher: Hmmm, so perhaps the central limit theorem doesn't say what we thought. Why don't you read the material on the central limit theorem again and see if you can figure out what it says. Let's talk more about it then.
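As a small illustrative sketch in R (my own addition, not part of the original dialogue; a fair coin is assumed), the point of the second conversation can be checked directly: the raw coin-flip data never look normal no matter how many flips we take, while the sample mean does.
set.seed(1)
flips <- rbinom(1e6, size = 1, prob = 0.5)   # a large IID Bernoulli sample
hist(flips)                                  # still just two bars, at 0 and 1
means <- replicate(2000, mean(rbinom(1000, 1, 0.5)))
hist(means)                                  # approximately bell-shaped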
4,668
Debunking wrong CLT statement
As whuber notes, you can always point your collaborators to a binary discrete distribution. But they might consider that "cheating" and retreat to the weaker claim that the proposed statement only applied to continuous distributions. So use the uniform distribution on the unit interval $[0,1]$. It has a mean of $\mu=0.5$, a variance of $\frac{1}{12}$, thus a standard deviation of $\sigma=\frac{1}{\sqrt{12}}\approx 0.289$. But of course the interval $[\mu-\sigma,\mu+\sigma]\approx[0.211,0.789]$ of length $2\sigma\approx 0.577$ only contains $57.7\%$ of your data (more specifically: as the sample size increases, the proportion approaches $0.577$), not $68\%$, no matter how many data points you sample.
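A quick numerical check of that figure (my own sketch in R, not part of the answer above):
set.seed(1)
x <- runif(1e6)
mean(abs(x - 0.5) <= 1/sqrt(12))   # approximately 0.577, not 0.68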
4,669
Debunking wrong CLT statement
The central limit theorem states that the mean of the data will become normally distributed as the sample size increases; it says nothing about the data itself. Another way to put it is that the distribution of the parameter (the mean) is normal, but that is entirely separate from the distribution of the underlying data. Most of the value from the CLT comes from the fact that you can compare samples that are not normally distributed to one another (based solely on the fact that, due to the CLT, you know how their means should behave). I think where this gets confusing is that just because you can compare two sample means to each other based on some test that assumes normality (e.g., a t-test) doesn't mean that you should. (Comparing the means of two exponential distributions might not tell you what you think it does; the same goes for two bimodal distributions, or a bimodal with a unimodal distribution, etc.) The question most people should ask is, "Is the mean (or a difference in means) a useful metric given the distribution of my data?" Only if the answer to this question is yes should one proceed to compare means (thus relying on the CLT). By not asking this question, many people fall into the following (roughly stated) logical fallacy: The CLT applies, so I can compare means. And I can compare means because they are normally distributed. This comparison must be meaningful, because the CLT says I can do it (and the CLT is very powerful). The comparison/test I am using most intuitively (or only) makes sense when the data are normally distributed, and after all, the mean is normally distributed, so my data must be normally distributed too!
To directly answer the question, you can:
Show them the definition, point out that the CLT only makes a claim about the distribution of the mean approaching normality, and emphasize that the distribution of a parameter can be very different from the distribution of the data from which it is derived.
Show them this video, which provides a nice visual representation of how the CLT works using several different distributions for the underlying data. (It's a bit quirky, but it communicates the idea very clearly.)
Addendum: I glossed over some technical details in my explanation in order to make it more understandable to someone who is less familiar with statistics. Several commenters have pointed this out, so I thought I would include their feedback here:
A more accurate statement of the CLT would be: "The central limit theorem states that the mean of the data will become normally distributed (more specifically, the difference between the mean of the data/sample and the true mean, multiplied by the square root of the sample size $\sqrt{n}$, is normally distributed)." I have also seen this explained as "the properly normalized sum tends toward a normal distribution."
It is also worth pointing out that the data must be composed of independent and identically distributed random variables with finite variance in order for the CLT to apply.
A more accurate and/or less Bayesian way to say "the distribution of the parameter (mean)" would be "the distribution of the parameter estimate given by the regular sample mean".
4,670
Debunking wrong CLT statement
The CLT is about convergence of a sum of random variables. If we have an iid sample $X_1,...,X_n$, where $EX_i=\mu$ and $Var(X_i)<\infty$, then $$ \frac{1}{\sqrt{n}}\left(X_1+...+X_n-n\mu\right) \to N(0, Var(X_i)). $$ This statement is solely about the closeness of the distribution of the suitably normalized sum $(X_1+...+X_n)$ to the normal distribution. It says nothing about the convergence of the distribution of the $X_i$ themselves. Since the $X_i$ do not depend on $n$, why should they converge anywhere? The empirical distribution of a sample of the $X_i$ will actually converge (as the sample size increases) to the actual distribution of the $X_i$ according to the Donsker theorem, so unless the actual distribution is itself close to normal, the empirical distribution will not be close to normal either.
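To make the distinction concrete, here is a small R sketch (mine; the exponential distribution is just an arbitrary non-normal choice): the normalized sum behaves like $N(0, Var(X_i))$, while the raw data remain clearly non-normal.
set.seed(1)
n <- 1000
sums <- replicate(5000, (sum(rexp(n, rate = 1)) - n * 1) / sqrt(n))
c(mean(sums), var(sums))    # roughly 0 and Var(X_i) = 1
qqnorm(sums); qqline(sums)  # close to a straight line
hist(rexp(5000, rate = 1))  # the X_i themselves are still exponential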
4,671
Debunking wrong CLT statement
This is how I like to visualize the CLT. I'm not 100% sure the argument is correct though, please check. Start with a population of values whose distribution is nowhere near normal. E.g., a uniform distribution:
X <- runif(n = 50000)
hist(X)
Now, take many samples of size $n$ from this population, calculate the mean of each sample, shift each sample mean by the mean of the population and scale it by $\sqrt{n}$, then plot a histogram of these sample means. That histogram is (close to) normal:
mu <- 1/2            # mean of the population X
x <- rep(NA, 1000)   # one scaled sample mean per replicate
size <- 10           # the sample size n
for(i in 1:length(x)) {
  x[i] <- sqrt(size) * (mean(sample(X, size = size)) - mu)
}
hist(x)              # approximately bell-shaped
4,672
Debunking wrong CLT statement
The point of confusion here is what is actually converging to a normal distribution. I think the easiest way to overcome this is to explain examples of the extremes of a sampling distribution, one with one measurement per sample (just as if taking measurements straight from the population as you describe) and one where each sample is the entire population. From there it is easier to understand what happens in the middle ground.
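A minimal R sketch of those two extremes (my own illustration, using a skewed population): with samples of size one, the sampling distribution of the mean is just the population distribution, while samples equal to the whole population give a sampling distribution that collapses onto the population mean.
set.seed(1)
pop <- rexp(10000)   # a skewed "population"
means_n1  <- replicate(2000, mean(sample(pop, 1)))
means_all <- replicate(2000, mean(sample(pop, length(pop))))
hist(means_n1)   # looks like the population itself (skewed)
sd(means_all)    # zero: every "sample" is the whole population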
4,673
R - QQPlot: how to see whether data are normally distributed
"The test showed that it is likely that the population is normally distributed." No; it didn't show that. Hypothesis tests don't tell you how likely the null is. In fact you can bet this null is false. The Q-Q plot doesn't give a strong indication of non-normality (the plot is fairly straight); there's perhaps a slightly shorter left tail than you'd expect but that really won't matter much. The histogram as-is probably doesn't say a lot either; it does also hint at a slightly shorter left tail. But see here The population distribution your data are from isn't going to be exactly normal. However, the Q-Q plot shows that normality is probably a reasonably good approximation. If the sample size was not too small, a lack of rejection of the Shapiro-Wilk would probably be saying much the same. Update: your edit to include the actual Shapiro-Wilk p-value is important because in fact that would indicate you would reject the null at typical significant levels. That test indicates the population your data were sampled from (assuming a simple random sample of that population) is not normally distributed and the mild skewness indicated by the plots is probably what is being picked up by the test. For typical procedures that might assume normality of the variable itself (the one-sample t-test is one that comes to mind), at what appears to be a fairly large sample size, this mild non-normality will be of almost no consequence at all -- one of the problems with goodness of fit tests is they're more likely to reject just when it doesn't matter (when the sample size is large enough to detect some modest non-normality); similarly they're more likely to fail to reject when it matters most (when the sample size is small).
4,674
R - QQPlot: how to see whether data are normally distributed
If the data are normally distributed, the points in the QQ-normal plot lie on a straight diagonal line. You can add this line to your QQ plot with the command qqline(x), where x is the vector of values. Examples of normal and non-normal distributions:
Normal distribution
set.seed(42)
x <- rnorm(100)
The QQ-normal plot with the line:
qqnorm(x); qqline(x)
The deviations from the straight line are minimal. This indicates a normal distribution. The histogram:
hist(x)
Non-normal (Gamma) distribution
y <- rgamma(100, 1)
The QQ-normal plot:
qqnorm(y); qqline(y)
The points clearly follow another shape than the straight line. The histogram confirms the non-normality. The distribution is not bell-shaped but positively skewed (i.e., most data points are in the lower half). Histograms of normal distributions show the highest frequency in the center of the distribution.
hist(y)
4,675
R - QQPlot: how to see whether data are normally distributed
Some tools for checking the validity of the assumption of normality in R:
library(moments)
library(nortest)
library(e1071)
set.seed(777)
x <- rnorm(250, 10, 1)
# skewness and kurtosis, they should be around (0, 3)
skewness(x)
kurtosis(x)
# Shapiro-Wilk test
shapiro.test(x)
# Kolmogorov-Smirnov test
ks.test(x, "pnorm", mean(x), sqrt(var(x)))
# Anderson-Darling test
ad.test(x)
# qq-plot: you should observe a good fit of the straight line
qqnorm(x)
qqline(x)
# p-plot: you should observe a good fit of the straight line
probplot(x, qdist=qnorm)
# fitted normal density
f.den <- function(t) dnorm(t, mean(x), sqrt(var(x)))
curve(f.den, xlim=c(6,14))
hist(x, prob=T, add=T)
4,676
R - QQPlot: how to see whether data are normally distributed
While it's a good idea to check visually whether your intuition matches the result of some test, you cannot expect this to be easy every time. If the people trying to detect the Higgs Boson would only trust their results if they could visually assess them, they would need a very sharp eye. Especially with big datasets (and thus, typically with increasing power), statistics tend to pick up the smallest of differences, even when they are hardly discernable with the naked eye. That being said: for normality, your QQ-plot should show a straight line: I would say it does not. There are clear bends in the tails, and even near the middle there is some commotion. Visually, I still might be willing to say (depending on the goal of checking normality) this data is "reasonably" normal, though. Note however: for most purposes where you want to check normality, you only need normality of the means instead of normality of the observations, so the central limit theorem may be enough to rescue you. In addition: while normality is often an assumption that you need to check "officially", many tests have been shown to be pretty insensitive to having this assumption not fulfilled.
4,677
R - QQPlot: how to see whether data are normally distributed
I like the version from the R library car because it provides not only the central tendency but also the confidence intervals. It gives visual guidance to help confirm whether the behavior of the data is consistent with the hypothetical distribution.
library(car)
qqPlot(lm(prestige ~ income + education + type, data=Duncan), envelope=.99)
Some links:
Q-Q plot interpretation
http://exploringdatablog.blogspot.com/2011/03/many-uses-of-q-q-plots.html
https://stackoverflow.com/questions/19392066/simultaneous-null-band-for-uniform-qq-plot-in-r
https://philmikejones.wordpress.com/2014/05/12/regression-diagnostics-r/
4,678
How to tell the probability of failure if there were no failures?
The probability that a product will fail is surely a function of time and use. We don't have any data on use, and with only one year there are no failures (congratulations!). Thus, this aspect (called the survival function), cannot be estimated from your data. You can think of failures within one year as draws from a binomial distribution, however. You still have no failures, but this is now a common problem. A simple solution is to use the rule of 3, which is accurate with large $N$ (which you certainly have). Specifically, you can get the upper bound of a one-sided 95% confidence interval (i.e., the lower bound is $0$) on the true probability of failure within one year as $3/N$. In your case, you are 95% confident that the rate is less than $0.00003$. You also asked how to compute the probability that one or more of the next 10k fails. A quick and simple (albeit extreme) way to extend the above analysis is to just use the upper bound as the underlying probability and use the corresponding binomial CDF to get the probability that there won't be $0$ failures. Using R code, we could do: 1-pbinom(0, size=10000, prob=0.00003), which yields a 0.2591851 chance of seeing one or more failures in the next 10k products. By having used the upper bound, this is not the optimal point estimate of the probability of having at least one failure, rather you can say it is very unlikely that the probability of $\ge 1$ failure is more than $\approx 26\%$ (recognizing that this is a somewhat 'hand-wavy' framing). Another possibility is to use @amoeba's suggestion of the estimate from Laplace's rule of succession. The rule of succession states that the estimated probability of failure is $(F+1)/(N+2)$, where $F$ is the number of failures. In that case, $\hat p = 9.9998\times 10^{-06}$, and the calculation for the predicted probability of $1^+$ failures in the next 10,000 is 1-pbinom(0, size=10000, prob=9.9998e-06), yielding 0.09516122, or $\approx 10\%$.
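Putting those numbers together in R (a sketch; the quantities are exactly the ones described above):
N <- 1e5; m <- 1e4
p_upper <- 3 / N                               # rule-of-3 upper bound: 3e-05
p_succession <- (0 + 1) / (N + 2)              # Laplace's rule of succession
1 - pbinom(0, size = m, prob = p_upper)        # about 0.26
1 - pbinom(0, size = m, prob = p_succession)   # about 0.095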
4,679
How to tell the probability of failure if there were no failures?
You can take a Bayesian approach. Denote the probability of failure by $\Theta$ and think of it as a random variable. A priori, before you see the results of the experiments, you might believe that $\Theta \sim U(0,1)$. If you trust the engineers to make this product reliable, maybe you can take $\Theta \sim U(0,0.1)$ or so. This is up to you. Then, you can use Bayes' theorem to calculate the posterior distribution of $\theta$. Denote by $A$ the event that you've observed ($n$ experiments with zero failures). $$ p(\Theta = \theta | A) = \frac{p (A | \Theta = \theta) p(\Theta = \theta )}{p(A)} = \frac{p (A |\theta) p(\theta )}{\int p (A |\theta) p(\theta )d\theta}. $$ Everything is simple: $\Theta$ is uniform, so $p(\theta)$ is some constant. Since you run $n$ experiments, $p(A | \theta)$ is just the probability of no failures in $n$ Bernoulli trials with probability of failure $\theta$. Once you have $p(\theta | A)$ you're golden: you can calculate the probability of any event $B$ by integration: $\mathbb{P}(B) = \int p(B |\theta) p(\theta |A) d\theta$ Below, I work through a detailed solution, following the above approach. I'll take a few standard shortcuts. Let the prior be $U(0,1)$. Then: $$ p(\theta |A)\propto p(A|\theta) \cdot 1 = (1-\theta)^n. $$ The normalization constant $p(A) = \int p(A|\theta)p(\theta) d\theta$ is found to be $B(1,n+1)$ - see the Wikipedia pages on the beta function and the beta distribution. So, $p(\theta |A) = \frac{(1-\theta)^n}{B(1,n+1)}$, which is a beta distribution with parameters $1, n+1$. Denote by $B$ the event that there are no failures among $m$ products in the next year. The probability of at least one failure is $1 -\mathbb{P}( B )$. Then $$ 1- \mathbb{P}(B) =1 - \int (1-\theta)^m\frac{(1-\theta)^n}{B(1,n+1)}d\theta = 1 - \frac{B(1,n+m+1)}{B(1,n+1)}, $$ which is roughly $0.1$, using $n= 100,000, m = 10,000$. Not very impressive? I took a uniform distribution on the probability of failure. Perhaps you have better prior faith in your engineers.
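A quick check of the final number in R (my own sketch; beta() here is the Beta function):
n <- 1e5; m <- 1e4
1 - beta(1, n + m + 1) / beta(1, n + 1)   # about 0.09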
4,680
How to tell the probability of failure if there were no failures?
Rather than computing a probability, why not predict how many products might fail? Modeling the Observations There are $n=100000$ products in the field and another $m=10000$ under consideration. Assume their failures are all independent and constant with probability $p$. We may model this situation by means of a Binomial experiment: out of a box of tickets with an unknown proportion $p$ of "failure" tickets and $1-p$ "success" tickets, draw $m+n=110000$ tickets (with replacement, so that the chance of failure stays the same). Count the failures among the first $n$ tickets--let that be $X$--and count the failures among the remaining $m$ tickets, calling that $Y$. Framing the Question In principle, $0\le X \le n$ and $0 \le Y\le m$ could be anything. What we are interested in is the chance that $Y = u$ given that $X+Y=u$ (with $u$ any number in $\{0,1,\ldots, m\}$). Since the failures could occur anywhere among all $n+m$ tickets, with every possible configuration having the same chance, it is found by dividing the number of $u$-subsets of $m$ things by the number of $u$-subsets of all $n+m$ things: $$p(u;n,m) = \Pr(Y = u\,|\, X+Y=u) = \frac{\binom{m}{u}}{\binom{n+m}{u}} \\= \frac{m(m-1)\cdots(m-u+1)}{(n+m)(n+m-1)\cdots(n+m-u+1)}.$$ Comparable formulas can be used for the calculation when $X=1, 2, \ldots.$ An upper $1-\alpha$ prediction limit (UPL) for the number of failures in those last $m$ tickets, $t_\alpha(X;n,m)$, is given by the smallest $u$ (depending on $X$) for which $p(u;n,m) \le \alpha$. Interpretation The UPL should be interpreted in terms of the risk of using $t_\alpha$, as evaluated before either $X$ or $Y$ is observed. In other words, suppose it is one year ago and you are being asked to recommend a procedure to predict the number of failures in the next $m$ products once the first $n$ have been observed. Your client asks What is the chance that your procedure will underpredict $Y$? I don't mean in the future after you have more data; I mean right now, because I have to make decisions right now and the only chances I will have available to me are the ones that can be computed at this moment." Your response can be, Right now the chance is no greater than $\alpha$, but if you plan to use a smaller prediction, the chance will exceed $\alpha$. Results For $n=10^5$, $m=10^4$, and $X=0$ we may compute that $$p(0,n,m)=1;\ p(1,n,m)=\frac{1}{11}\approx 0.091;\ p(2,n,m)=\frac{909}{109999}\approx 0.0083; \ldots$$ Thus, upon having observed $X=0$, For up to $1-\alpha=90.9\%$ confidence (that is, when $9.1\%\le \alpha$), predict there is at most $t_\alpha(0;n,m)=1$ failure in the next $10,000$ products. For up to $99.2\%$ confidence (that is, when $0.8\%\le \alpha \lt 9.1\%$), predict there are at most $t_\alpha(0;n,m)=2$ failures in the next $10,000$ products. Etc. Comments When and why would this approach apply? Suppose your company makes lots of different products. After observing the performance of $n$ of each one in the field, it likes to produce guarantees, such as "complete no-cost replacement of any failure within one year." By having prediction limits for the number of failures you can control the total costs of having to back those guarantees. Because you make many products, and expect failures to be due to random circumstances beyond your control, the experience of each product will be independent. It makes sense to control your risk in the long run. Every once in a while you might have to pay more claims than expected, but most of the time you will pay fewer. 
If paying more than announced could be ruinous, you will set $\alpha$ to be extremely small (and you likely would use a more sophisticated failure model, too!). Otherwise, if the costs are minor, then you can live with low confidence (high $\alpha$). These calculations show how to balance confidence and risks. Note that we don't have to compute the full procedure $t$. We wait until $X$ is observed and then just carry out the calculations for that particular $X$ (here, $X=0$), as shown above. In principle, though, we could have carried out the calculations for all possible values of $X$ at the outset. A Bayesian approach (described in other answers) is attractive and will work well provided the results do not depend heavily on the prior. Unfortunately, when the failure rate is so low that very few (or no failures) are observed, the results are sensitive to the choice of prior.
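As a sketch of the calculation in R (function names are mine, not part of the original answer):
p_u <- function(u, n, m) exp(lchoose(m, u) - lchoose(n + m, u))
upl <- function(alpha, n, m) min(which(p_u(0:m, n, m) <= alpha)) - 1
n <- 1e5; m <- 1e4
p_u(0:2, n, m)    # 1, 1/11, 909/109999
upl(0.10, n, m)   # 1 failure at about 90.9% confidence
upl(0.01, n, m)   # 2 failures at about 99.2% confidence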
4,681
How to tell the probability of failure if there were no failures?
The following is a Bayesian answer to "Out of 10,000 new products, how many are expected to fail if all the former 100,000 produced didn't fail?", but you should consider the sensitivity to different priors. Suppose that $X_1,\dots,X_n$ are conditionally independent and identically distributed, given $\Theta=\theta$, such that $X_1\mid\Theta=\theta\sim\mathrm{Bernoulli}(\theta)$, and use the conjugate prior $\Theta\sim\mathrm{Beta}(a,b)$, with $a,b>0$. For $m<n$, we have $$ \mathrm{E}\left[\sum_{i=m+1}^n X_i\;\Bigg\vert\; X_1=0,\dots X_m=0 \right] = \sum_{i=m+1}^n \mathrm{E}\left[ X_i\mid X_1=0,\dots X_m=0 \right] \, . $$ For $m+1\leq i\leq n$, we have $$ \begin{align} \mathrm{E}\left[X_i\mid X_1=0,\dots X_m=0\right] &= \Pr(X_i=1\mid X_1=0,\dots X_m=0) \\ &= \int_0^1 \Pr(X_i=1\mid \Theta=\theta) \,f_{\Theta\mid X_1,\dots,X_m}(\theta\mid 0,\dots,0) \,d\theta \\ &= \frac{\Gamma(m+a+b)}{\Gamma(m+a+b+1)} \frac{\Gamma(a+1)}{\Gamma(a)} = \frac{a}{m+a+b}\, , \end{align} $$ in which we used $\Theta\mid X_1=0,\dots,X_m=0\sim \mathrm{Beta}(a,m+b)$. Plugging in your numbers, with a uniform prior ($a=1,b=1$) the expected number of failures among the 10,000 new products is about $0.1$, while a Jeffreys-like prior ($a=1/2,b=1/2$) gives an expected number of failures close to $0.05$. This predictive expectation doesn't look like a good summary, because the predictive distribution is highly skewed. We can go further and compute the predictive distribution. Since $$ \sum_{i=m+1}^n X_i \;\Bigg\vert\; \Theta=\theta \sim \mathrm{Bin}(n-m,\theta) \, , $$ conditioning as we did before we have $$ \begin{align} \Pr&\left(\sum_{i=m+1}^n X_i=t \;\Bigg\vert\; X_1=0,\dots X_m=0\right) = \\ &\qquad\qquad\qquad\qquad\binom{n-m}{t} \frac{\Gamma(m+a+b)}{\Gamma(a)\Gamma(m+b)} \frac{\Gamma(t+a)\Gamma(n-t+b)}{\Gamma(n+a+b)} \, , \end{align} $$ for $t=0,1,\dots,n-m$. I'll finish it later computing a $95\%$ predictive interval.
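In R, the predictive expectation above works out to (labels are mine):
m <- 1e5       # products observed without failure
new <- 1e4     # products to be sold next year
new * 1   / (m + 1 + 1)       # uniform prior: about 0.1 expected failures
new * 0.5 / (m + 0.5 + 0.5)   # Jeffreys prior: about 0.05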
4,682
How to tell the probability of failure if there were no failures?
Using Laplace's sunrise problem approach (the rule of succession), the probability that a product would fail within a year is $$p=\frac{0+1}{100000+2}=\frac{1}{100002}\,.$$ Next, the probability that none of $n$ new products fails within a year is $$(1-p)^n\,.$$ Hence, the probability that at least one product of $n$ will fail in the next year is $$1-\left(1-\frac{1}{100002}\right)^{n}\,.$$ For $n=10000$ the value is $P_{10000}\approx 0.095$. In whuber's case $P_{200000}\approx 0.87$, quite high, in fact. Of course, you should keep updating your data as more products are sold; eventually one will fail.
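A short check of these numbers in R (a sketch; the helper name is mine, and it uses the $(k+1)/(n+2)$ form of the rule of succession):
n0 <- 100000                          # trials observed with zero failures
p  <- (0 + 1) / (n0 + 2)              # rule of succession
p_fail <- function(n) 1 - (1 - p)^n   # Pr(at least one failure among n new units)
p_fail(10000)                         # ~0.095
p_fail(200000)                        # ~0.86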
How to tell the probability of failure if there were no failures?
Using Laplace's sunrise problem approach, we get the probability that a product would fail within a year $$p=\frac{1}{100000+1}$$. Next, the probability that of $n$ new products none fails within a ye
How to tell the probability of failure if there were no failures? Using Laplace's sunrise problem approach, we get the probability that a product would fail within a year $$p=\frac{1}{100000+1}$$. Next, the probability that of $n$ new products none fails within a year is $$(1-p)^n$$ Hence, the probability that at least one product of $n$ will fail in next year is $$1-\left(1-\frac{1}{100001}\right)^{n}$$ For $n=10000$ the value is $P_{10000}\approx 0.095$. In whuber's case $P_{200000}\approx 0.87$, quite high, in fact. Of course, you should keep updating your data while more products are sold, eventually one will fail.
How to tell the probability of failure if there were no failures? Using Laplace's sunrise problem approach, we get the probability that a product would fail within a year $$p=\frac{1}{100000+1}$$. Next, the probability that of $n$ new products none fails within a ye
4,683
How to tell the probability of failure if there were no failures?
Several good answers were provided for this question, but recently I had a chance to review a few resources on this topic and so I decided to share the results. There are multiple possible estimators for zero-failures data. Let's denote $k=0$ as the number of failures and $n$ as the sample size. The maximum likelihood estimator of the probability of failure given this data is $$ \hat p = \frac{k}{n} = 0 \tag{1} $$ Such an estimate is rather unsatisfactory, since the fact that we observed no failures in our sample hardly proves that they are impossible in general. Our out-of-data knowledge suggests that there is some probability of failure even if none were observed (yet). Having a priori knowledge leads us to using Bayesian methods reviewed by Bailey (1997), Razzaghi (2002), Basu et al (1996), and Ludbrook and Lew (2009). Among the simple estimators, the "upper bound" estimator can be mentioned: assuming (Bailey, 1997) that it would not be logical for an estimator of $P$ in the zero-failure case to yield a probability in excess of that predicted by the maximum likelihood estimator in the one-failure case, a reasonable upper bound is $$ \frac{1}{n} \tag{2} $$ As reviewed by Ludbrook and Lew (2009), other possibilities are the "rule of threes" (cf. here, Wikipedia, or Eypasch et al, 1995) $$ \frac{3}{n} \tag{3} $$ or other variations: $$ \frac{3}{n+1} \tag{4} $$ the "rule of 3.7" by Newcombe and Altman (or of 3.6): $$ \frac{3.7}{n} \tag{5} $$ and the "new rule of four": $$ \frac{4}{n+4} \tag{6} $$ but as concluded by Ludbrook and Lew (2009), the "rule of threes" is "next to useless" and the rules of 3.6 and 3.7 "have serious limitations – they are grossly inaccurate if the initial sample size is less than 50", so they do not recommend methods (3)-(6), suggesting rather the use of proper Bayesian estimators (see below). Among Bayesian estimators, several different ones can be mentioned. The first such estimator, suggested by Bailey (1997), is $$ 1 - 0.5^\frac{1}{n} \tag{7} $$ for estimating the median under a uniform prior we have $$ 1 - 0.5^\frac{1}{n+1} \tag{8} $$ and for estimating the mean under such a prior $$ \frac{1}{n+2} \tag{9} $$ Yet another approach, assuming an exponential failure pattern with constant failure rate (Poisson distribution), yields $$ \frac{1/3}{n} \tag{10} $$ If we use a beta prior with parameters $a$ and $b$, we can use the formula (see Razzaghi, 2002): $$ \frac{a}{a+b+n} \tag{11} $$ which under the uniform prior $a = b = 1$ reduces to (9). Assuming a Jeffreys prior with $a = b = 0.5$, it leads to $$ \frac{1}{2(n+1)} \tag{12} $$ Generally, the Bayesian formulas (7)-(12) are recommended. Basu et al (1996) recommend (11) with an informative prior when some a priori knowledge is available. Since no single best method exists, I would suggest reviewing the literature prior to your analysis, especially when $n$ is small. Bailey, R.T. (1997). Estimation from zero-failure data. Risk Analysis, 17, 375-380. Razzaghi, M. (2002). On the estimation of binomial success probability with zero occurrence in sample. Journal of Modern Applied Statistical Methods, 1(2), 41. Ludbrook, J., & Lew, M. J. (2009). Estimating the risk of rare complications: is the ‘rule of three’ good enough?. ANZ Journal of Surgery, 79(7‐8), 565-570. Eypasch, E., Lefering, R., Kum, C.K., and Troidl, H. (1995). Probability of adverse events that have not yet occurred: A statistical reminder. BMJ 311(7005): 619–620. Basu, A.P., Gaylor, D.W., & Chen, J.J. (1996). Estimating the probability of occurrence of tumor for a rare cancer with zero occurrence in a sample. 
Regulatory Toxicology and Pharmacology, 23(2), 139-144.
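A compact sketch that evaluates the estimators (2)-(12) listed above for a given sample size (the function name and the n = 100,000 example are mine):
zero_failure_estimates <- function(n) {
  c(upper_bound    = 1 / n,                   # (2)
    rule_of_three  = 3 / n,                   # (3)
    rule_3_n_plus1 = 3 / (n + 1),             # (4)
    rule_of_3.7    = 3.7 / n,                 # (5)
    new_rule_of_4  = 4 / (n + 4),             # (6)
    bayes_median   = 1 - 0.5^(1 / (n + 1)),   # (8), uniform prior
    bayes_mean     = 1 / (n + 2),             # (9), uniform prior
    jeffreys_mean  = 1 / (2 * (n + 1)))       # (12), Jeffreys prior
}
round(zero_failure_estimates(100000), 7)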
How to tell the probability of failure if there were no failures?
Several good answers were provided for this question, but recently I had a chance to review few resources on this topic and so I decided to share the results. There are multiple possible estimators fo
How to tell the probability of failure if there were no failures? Several good answers were provided for this question, but recently I had a chance to review few resources on this topic and so I decided to share the results. There are multiple possible estimators for zero-failures data. Let's denote $k=0$ as number of failures and $n$ as sample size. Maximum likelihood estimator for probability of failure given this data is $$ P(K = k) = \frac{k}{n} = 0 \tag{1} $$ Such estimate is rather unsatisfactory since the fact that we observed no failures in our sample hardly proves that they are impossible in general. Out out-of-data knowledge suggests that there is some probability of failure even if non were observed (yet). Having a priori knowledge leads us to using Bayesian methods reviewed by Bailey (1997), Razzaghi (2002), Basu et al (1996), and Ludbrook and Lew (2009). Among simple estimators "upper bound" estimator that assumes (Bailey, 1997) that it would not be logical for an estimator for P in the zero-failure case to yield a probability in excess of that predicted by the maximum likelihood estimator in the one-failure case, a reasonable upper bound defined as $$ \frac{1}{n} \tag{2} $$ can be mentioned. As reviewed by Ludbrook and Lew (2009), other possibilities are "rule of threes" (cf. here, Wikipedia, or Eypasch et al, 1995) $$ \frac{3}{n} \tag{3} $$ or other variations: $$ \frac{3}{n+1} \tag{4} $$ "rule of 3.7" by Newcombe and Altman (or by 3.6): $$ \frac{3.7}{n} \tag{5} $$ "new rule of four": $$ \frac{4}{n+4} \tag{6} $$ but as concluded by Ludbrook and Lew (2009) "rule of threes" is "next to useless" and "rule of 3.6" (and 3.7) "have serious limitations – they are grossly inaccurate if the initial sample size is less than 50" and they do not recommend methods (3)-(6), suggesting rather to use proper Bayesian estimators (see below). Among Bayesian estimators several different can be mentioned. First such estimator suggested by Bailey (1997) is $$ 1 - 0.5^\frac{1}{n} \tag{7} $$ for estimating median under uniform prior $$ 1 - 0.5^\frac{1}{n+1} \tag{8} $$ or for estimating mean under such prior $$ \frac{1}{n+2} \tag{9} $$ yet another approach assuming exponential failure pattern with constant failure rate (Poisson distributions) yields $$ \frac{1/3}{n} \tag{10} $$ if we use beta prior with parameters $a$ and $b$ we can use formula (see Razzaghi, 2002): $$ \frac{a}{a+b+n} \tag{11} $$ that under $a = b = 1$ leads to uniform prior (9). Assuming Jeffreys prior with $a = b = 0.5$ it leads to $$ \frac{1}{2(n+1)} \tag{12} $$ Generally, Bayesian formulas (7)-(12) are recommended. Basu et al (1996) recommends (11) with informative prior, when some a priori knowledge is available. Since no single best method exists I would suggest reviewing the literature prior to your analysis, especially when $n$ is small. Bailey, R.T. (1997). Estimation from zero-failure data. Risk Analysis, 17, 375-380. Razzaghi, M. (2002). On the estimation of binomial success probability with zero occurrence in sample. Journal of Modern Applied Statistical Methods, 1(2), 41. Ludbrook, J., & Lew, M. J. (2009). Estimating the risk of rare complications: is the ‘rule of three’good enough?. ANZ journal of surgery, 79(7‐8), 565-570. Eypasch, E., Lefering, R., Kum, C.K., and Troidl, H. (1995). Probability of adverse events that have not yet occurred: A statistical reminder. BMJ 311(7005): 619–620. Basu, A.P., Gaylor, D.W., & Chen, J.J. (1996). 
Estimating the probability of occurrence of tumor for a rare cancer with zero occurrence in a sample. Regulatory Toxicology and Pharmacology, 23(2), 139-144.
How to tell the probability of failure if there were no failures? Several good answers were provided for this question, but recently I had a chance to review few resources on this topic and so I decided to share the results. There are multiple possible estimators fo
4,684
How to tell the probability of failure if there were no failures?
You really need to go back to the designers of your products. It is a fundamental engineering problem, not an observational statistical one. They will have an idea of the failure probability of each component and, from that, the net failure probability of the total assembled product. They can give you the expected number of failures over the whole design life of the product. A civil engineer designs a bridge to have a design life of 120 years. Each component of the bridge has a slight chance of failure. Each loading has a slight chance of being exceeded. To make the bridge economic to build, total collapse would only occur once in 2400 years, which is far longer than the bridge will be maintained for. It is not surprising that the bridge does not fail in year 1, nor in years 2 to 120. That it has not collapsed tells you very little. Its various chances of failure over time can only be estimated by the original designers.
How to tell the probability of failure if there were no failures?
You really need to go back to the designers of your products. It is a fundamental engineering problem not an observational statistical one. They will have an idea of the failure probability of each co
How to tell the probability of failure if there were no failures? You really need to go back to the designers of your products. It is a fundamental engineering problem not an observational statistical one. They will have an idea of the failure probability of each component and from that the net failure probabilty of the total assembled product. They can give you the expected number of failures over the whole design life of the product. A civil engineer designs a bridge to have a design life of 120 years. Each component of the bridge has a slight chance of failure. Each loading has a slight chance of being exceeded. To make the bridge economic to build, total collapse would only occur once in 2400 years which is far longer than the bridge will be maintained for. It is not surprising that the bridge does not fail in year 1, nor year 2 to year 120. That is has not collapsed tells you very little. Its various chances of failure with time can only be estimated by the original designers.
How to tell the probability of failure if there were no failures? You really need to go back to the designers of your products. It is a fundamental engineering problem not an observational statistical one. They will have an idea of the failure probability of each co
4,685
How to tell the probability of failure if there were no failures?
This is similar to a problem I faced when we introduced a new manufacturing process to eliminate a failure in production. The new system produced no failures, so people were asking the same question: how do we predict the failure rate? In your case, because you have stipulated a period over which the failure can occur, with no concern for when the failure occurs within that period, the temporal effects have been removed. It is simply a case of whether something failed or not. With that stipulated - on with my answer. Intuitively, it seems we need at least one failure to be able to calculate the failure rate. However, this assumption has an implicit mistake in it: we will never calculate the failure rate, because we are dealing with a sample. Thus we can only estimate a range of probable failure rates. The way to do this is to find a distribution for the failure rate. The distribution that does the job in this instance is a Beta distribution with parameters α = n + 1 and β = N - n + 1, where N is the sample size and n is the number of failures (in your case 0). For your scenario, the distribution of the failure rate is the Beta(1, 100001) density shown below (figure not reproduced here). You would then feed that distribution into the respective binomial probability formula to get a distribution for the probability of one unit failing (this could be done analytically or using Monte Carlo). I suspect those numbers will be very low. Note that this process is applicable no matter the number of failures in your first set.
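A short Monte Carlo sketch of this recipe in base R (the simulation size and seed are arbitrary choices of mine):
set.seed(1)
N <- 100000; n_fail <- 0
theta <- rbeta(1e5, n_fail + 1, N - n_fail + 1)   # the Beta(1, 100001) failure-rate distribution
pred  <- rbinom(1e5, size = 10000, prob = theta)  # predictive failure counts for 10,000 new units
mean(pred)                                        # ~0.1 expected failures
mean(pred == 0)                                   # ~0.91 probability of seeing no failure at all
quantile(pred, 0.95)                              # 95th percentile of the predictive distribution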
How to tell the probability of failure if there were no failures?
This is similar to a problem I faced when we introduced a new manufacturing process to eliminate a failure in production. The new system produced no failures so people were asking the same question:
How to tell the probability of failure if there were no failures? This is similar to a problem I faced when we introduced a new manufacturing process to eliminate a failure in production. The new system produced no failures so people were asking the same question: how do we predict the failure rate? In your case, because you have stipulated a period over which the failure can occur with no concern for when the failure occurs within that period, the temporal effects have been removed. And it is simply a case of whether something failed or not. With that stipulated - on with my answer. Intuitively, it seems we need at least one failure to be able to calculate the failure rate. However, this assumption has an implicit mistake within it. We will never calculate the failure rate. That is because we are dealing with a sample. Thus we can only estimate a range of probable failure rates. The way to do this is to find a distribution for the failure rate. The distribution that does the job in this instance is a Beta distribution where the parameters are: α = n + 1 and β = N - n + 1 Note: N is the sample size and n is the number of failures (in your case 0) For your scenario, the distribution of the of the failure rate is shown below. . You would then feed that distribution into the respective binomial probability formula to get a distribution for the probability of one unit failing (could be done analytically or using Monte Carlo). I suspect that numbers will be very low. Note that this process is applicable no matter the number of failures in your fist set.
How to tell the probability of failure if there were no failures? This is similar to a problem I faced when we introduced a new manufacturing process to eliminate a failure in production. The new system produced no failures so people were asking the same question:
4,686
How to tell the probability of failure if there were no failures?
Median unbiased estimates can be used to estimate sample proportions and (non-degenerate) 95% CIs in Bernoulli samples with no variability. In a sample with no positive cases, you can estimate the upper bound of a 95% confidence interval from the condition $$ p_{1-\alpha/2} :\quad \tfrac{1}{2}P(Y=y) + P(Y>y) = 0.975 \, ,$$ that is, we seek a value $p_{1-\alpha/2}$ to serve as the upper bound of the CI such that a binomial count $Y$ with success probability $p=p_{1-\alpha/2}$ satisfies the above equation at the observed $y=0$. In R this is solved with a root-finding (uniroot) application. set.seed(12345) y <- rbinom(100, 1, 0.01) ## all 0 cil <- 0 mupfun <- function(p) { 0.5*dbinom(0, 100, p) + pbinom(0, 100, p, lower.tail = FALSE) - 0.975 } ## for y=0 successes out of n=100 trials ciu <- uniroot(mupfun, c(0, 1))$root c(cil, ciu) ## roughly 0.000 and 0.0295 -- the interval includes the 0.01 actual probability
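As a cross-check (a sketch that assumes the corrected pbinom(0, ...) form used above), the bound has a closed form and can be compared with the exact Clopper-Pearson limit:
1 - 0.05^(1 / 100)            # closed form of the bound above, ~0.0295
qbeta(0.975, 0 + 1, 100 - 0)  # exact Clopper-Pearson upper limit, ~0.0362
binom.test(0, 100)$conf.int   # the same exact interval via binom.test()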
How to tell the probability of failure if there were no failures?
Median unbiased estimates can be used to estimate sample proportions and (non-singular) 95% CIs in Bernoulli samples with no variability. In a sample with no positive cases, you can estimate the upper
How to tell the probability of failure if there were no failures? Median unbiased estimates can be used to estimate sample proportions and (non-singular) 95% CIs in Bernoulli samples with no variability. In a sample with no positive cases, you can estimate the upper bound of a 95% confidence interval with the following formula: $$ p_{1-\alpha/2} : P(Y=0)/2 + P(Y>y) > 0.975$$ that is we seek a value $p_{1-\alpha/2}$ as the upper bound of the CI so that the Bernoulli process with probability $p=p_{1-\alpha/2}$ gives the above probability inequality. In R this is solved with a NR-like uniroot application. set.seed(12345) y <- rbinom(100, 1, 0.01) ## all 0 cil <- 0 mupfun <- function(p) { 0.5*dbinom(0, 100, p) + pbinom(1, 100, p, lower.tail = F) - 0.975 } ## for y=0 successes out of n=100 trials ciu <- uniroot(mupfun, c(0, 1))$root c(cil, ciu) [1] 0.00000000 0.05357998 ## includes the 0.01 actual probability
How to tell the probability of failure if there were no failures? Median unbiased estimates can be used to estimate sample proportions and (non-singular) 95% CIs in Bernoulli samples with no variability. In a sample with no positive cases, you can estimate the upper
4,687
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite?
The variance of a weighted sum $\sum_i a_i X_i$ of random variables must be nonnegative for all choices of real numbers $a_i$. Since the variance can be expressed as $$\operatorname{var}\left(\sum_i a_i X_i\right) = \sum_i \sum_j a_ia_j \operatorname{cov}(X_i,X_j) = \sum_i \sum_j a_ia_j \Sigma_{i,j},$$ we have that the covariance matrix $\Sigma = [\Sigma_{i,j}]$ must be positive semidefinite (which is sometimes called nonnegative definite). Recall that a matrix $C$ is called positive semidefinite if and only if $$\sum_i \sum_j a_ia_j C_{i,j} \geq 0 \;\; \forall a_i, a_j \in \mathbb R.$$
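A small numerical illustration of this quadratic-form argument (a sketch; the data and weights are simulated):
set.seed(42)
X <- matrix(rnorm(200 * 4), ncol = 4)    # 200 observations of 4 variables
S <- cov(X)
a <- rnorm(4)                            # arbitrary weights
drop(t(a) %*% S %*% a)                   # the quadratic form a' S a ...
var(drop(X %*% a))                       # ... equals var(sum_i a_i X_i), hence >= 0
min(eigen(S, symmetric = TRUE)$values)   # equivalently, no negative eigenvalues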
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to
The variance of a weighted sum $\sum_i a_i X_i$ of random variables must be nonnegative for all choices of real numbers $a_i$. Since the variance can be expressed as $$\operatorname{var}\left(\sum_i a
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite? The variance of a weighted sum $\sum_i a_i X_i$ of random variables must be nonnegative for all choices of real numbers $a_i$. Since the variance can be expressed as $$\operatorname{var}\left(\sum_i a_i X_i\right) = \sum_i \sum_j a_ia_j \operatorname{cov}(X_i,X_j) = \sum_i \sum_j a_ia_j \Sigma_{i,j},$$ we have that the covariance matrix $\Sigma = [\Sigma_{i,j}]$ must be positive semidefinite (which is sometimes called nonnegative definite). Recall that a matrix $C$ is called positive semidefinite if and only if $$\sum_i \sum_j a_ia_j C_{i,j} \geq 0 \;\; \forall a_i, a_j \in \mathbb R.$$
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to The variance of a weighted sum $\sum_i a_i X_i$ of random variables must be nonnegative for all choices of real numbers $a_i$. Since the variance can be expressed as $$\operatorname{var}\left(\sum_i a
4,688
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite?
The answer is quite simple. The correlation matrix is defined thus: Let $X = [x_1, x_2, ..., x_n]$ be the $m\times n$ data matrix: $m$ observations, $n$ variables. Define $X_b= [\frac{(x_1-\mu_1 e)}{s_1}, \frac{(x_2-\mu_2 e)}{s_2}, \frac{(x_3-\mu_3 e)}{s_3}, ...]$ as the matrix of normalized data, with $\mu_1$ being the mean of variable 1, $\mu_2$ the mean of variable 2, etc., $s_1$ the standard deviation of variable 1, etc., and $e$ a vector of all 1s. The correlation matrix is then $$C=X_b' X_b$$ divided by $m-1$. A matrix $A$ is positive semi-definite if there is no vector $z$ such that $z' A z <0$. Suppose $C$ is not positive semi-definite. Then there exists a vector $w$ such that $w' C w<0$. However, $(m-1)\,w' C w = w' X_b' X_b w = (X_b w)'(X_b w) = z_1^2+z_2^2+\dots$, where $z=X_b w$, so $w' C w$ is a sum of squares divided by the positive constant $m-1$ and therefore cannot be less than zero, a contradiction. So not only the correlation matrix but any matrix $U$ which can be written in the form $V' V$ is positive semi-definite.
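A quick numerical confirmation (a sketch with simulated data):
set.seed(1)
X  <- matrix(rnorm(50 * 3), ncol = 3)
Xb <- scale(X)                        # center each column and divide by its sd
C  <- crossprod(Xb) / (nrow(X) - 1)   # X_b' X_b / (m - 1)
all.equal(C, cor(X), check.attributes = FALSE)  # TRUE: this is the correlation matrix
w  <- rnorm(3)
drop(t(w) %*% C %*% w)                # a (scaled) sum of squares, so never negative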
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to
The answer is quite simple. The correlation matrix is defined thus: Let $X = [x_1, x_2, ..., x_n]$ be the $m\times n$ data matrix: $m$ observations, $n$ variables. Define $X_b= [\frac{(x_1-\mu_1 e)}{s
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite? The answer is quite simple. The correlation matrix is defined thus: Let $X = [x_1, x_2, ..., x_n]$ be the $m\times n$ data matrix: $m$ observations, $n$ variables. Define $X_b= [\frac{(x_1-\mu_1 e)}{s_1}, \frac{(x_2-\mu_2 e)}{s_2}, \frac{(x_3-\mu_3 e)}{s_3}, ...]$ as the matrix of normalized data, with $\mu_1$ being mean for the variable 1, $\mu_2$ the mean for variable 2, etc., and $s_1$ the standard deviation of variable 1, etc., and $e$ is a vector of all 1s. The correlation matrix is then $$C=X_b' X_b$$ divided by $m-1$. A matrix $A$ is positive semi-definite if there is no vector $z$ such that $z' A z <0$. Suppose $C$ is not positive definite. Then there exists a vector w such that $w' C w<0$. However $(w' C w)=(w' X_b' X_b w)=(X_b w)'(X_b w) = {z_1^2+z_2^2...}$, where $z=X_b w$, and thus $w' C w$ is a sum of squares and therefore cannot be less than zero. So not only the correlation matrix but any matrix $U$ which can be written in the form $V' V$ is positive semi-definite.
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to The answer is quite simple. The correlation matrix is defined thus: Let $X = [x_1, x_2, ..., x_n]$ be the $m\times n$ data matrix: $m$ observations, $n$ variables. Define $X_b= [\frac{(x_1-\mu_1 e)}{s
4,689
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite?
(Possible looseness in reasoning would be mine. I'm not a mathematician: this is a depiction, not proof, and is from my numeric experimenting, not from books.) A positive semidefinite (psd) matrix, also called a Gramian matrix, is a matrix with no negative eigenvalues. A matrix with negative eigenvalues is not positive semidefinite, or non-Gramian. Both of these can be definite (no zero eigenvalues) or singular (with at least one zero eigenvalue). [The word "Gramian" is used in several different meanings in math, so perhaps it should be avoided.] In statistics, we usually apply these terms to an SSCP-type matrix, also called a scalar product matrix. Correlation or covariance matrices are particular cases of such a matrix. Any scalar product matrix is a summary characteristic of some multivariate data (a cloud). For example, given $n$ cases X $p$ variables data, we could compute the $p$X$p$ covariance matrix between the variables or the $n$X$n$ covariance matrix between the cases. When you compute it from real data, the matrix will always be Gramian. You may get a non-Gramian (non-psd) matrix if (1) it is a similarity matrix measured directly (i.e. not computed from the data) or the similarity measure isn't SSCP-type; (2) the matrix values were incorrectly entered; (3) the matrix is in fact Gramian but is (or is so close to being) singular that sometimes the spectral method of computing eigenvalues produces tiny negative ones in place of true zeros or tiny positive ones. An alternative and equivalent summary for the cloud is the matrix of euclidean distances. A scalar product (such as covariance) between a pair of items and the corresponding squared euclidean distance between them are tied by the law of cosines (cosine theorem, look at the picture there): $d_{12}^2 = h_1^2+h_2^2-2s_{12}$, where the $s$ is the scalar product and the $h$'s are the distances of the two items from the origin. In the case of a covariance matrix between variables $X$ and $Y$ this formula reads $d_{xy}^2 = \sigma_x^2+\sigma_y^2-2cov_{xy}$. As an interim conclusion: a covariance (or correlation or other scalar product) matrix between some $m$ items is a configuration of points embedded in Euclidean space, so euclidean distances are defined between all these $m$ points. Now, if [point 5] holds exactly, then the configuration of points is a truly Euclidean configuration, which entails that the scalar product matrix at hand (e.g. the covariance one) is Gramian. Otherwise it is non-Gramian. Thus, to say "the $m$X$m$ covariance matrix is positively semi-definite" is to say "the $m$ points plus the origin fit in Euclidean space perfectly". What are possible causes or versions of a non-Gramian (non-Euclidean) configuration? The answers follow upon contemplating [point 4]. Cause 1. Evil is among the points themselves: the $m$X$m$ distance matrix isn't fully euclidean. Some of the pairwise distances $d$ are such that they cannot agree with the rest of the points in Euclidean space. See Fig1. Cause 2. There is a general (matrix-level) mismatch between the $h$'s and the $d$'s. For example, with fixed $d$'s and some $h$'s given, the other $h$'s must only vary within some bounds in order to stay consistent with Euclidean space. See Fig2. Cause 3. There is a localized (pair-level) mismatch between a $d$ and the pair of corresponding $h$'s connected to those two points. Namely, the triangle inequality is violated; that rule demands $h_1+h_2 \ge d_{12} \ge |h_1-h_2|$. See Fig3. 
To diagnose the cause, convert the non-Gramian covariance matrix into a distance matrix using the above law of cosines. Do double-centering on it. If the resultant matrix has negative eigenvalues, cause 1 is present. Else, if any $|cov_{ij}| \gt \sigma_i \sigma_j$, cause 3 is present. Else, cause 2 is present. Sometimes more than one cause can be present in the same matrix. [Fig1, Fig2 and Fig3 are not reproduced here.]
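A small numerical sketch of this diagnostic recipe (the 3x3 "covariance" matrix below is made up so that it is non-Gramian; variable names are mine):
S  <- matrix(c(1, .9, .9,  .9, 1, -.9,  .9, -.9, 1), 3, 3)
h2 <- diag(S)                        # squared distances h_i^2 of the items from the origin
D2 <- outer(h2, h2, "+") - 2 * S     # law of cosines: d_ij^2 = h_i^2 + h_j^2 - 2 s_ij
J  <- diag(3) - 1/3                  # centering matrix
B  <- -0.5 * J %*% D2 %*% J          # double-centered matrix
eigen(S, symmetric = TRUE)$values    # one negative eigenvalue: S is non-Gramian
eigen(B, symmetric = TRUE)$values    # also has a negative eigenvalue, so cause 1 is present
any(abs(S[row(S) != col(S)]) > 1)    # FALSE: no |cov_ij| > sigma_i*sigma_j, so cause 3 is absent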
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to
(Possible looseness in reasoning would be mine. I'm not a mathematician: this is a depiction, not proof, and is from my numeric experimenting, not from books.) A positive semidefinite (psd) matrix, a
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite? (Possible looseness in reasoning would be mine. I'm not a mathematician: this is a depiction, not proof, and is from my numeric experimenting, not from books.) A positive semidefinite (psd) matrix, also called Gramian matrix, is a matrix with no negative eigenvalues. Matrix with negative eigenvalues is not positive semidefinite, or non-Gramian. Both of these can be definite (no zero eigenvalues) or singular (with at least one zero eigenvalue). [Word "Gramian" is used in several different meanings in math, so perhaps should be avoided.] In statistics, we usually apply these terms to a SSCP-type matrix, also called scalar product matrix. Correlation or covariance matrices are particular cases of such matrix. Any scalar product matrix is a summary characteristic of some multivariate data (a cloud). For example, given $n$ cases X $p$ variables data, we could compute $p$X$p$ covariance matrix between the variables or $n$X$n$ covariance matrix between the cases. When you compute it from real data, the matrix will always be Gramian. You may get non-Gramian (non-psd) matrix if (1) it is similarity matrix measured directly (i.e. not computed from the data) or the similarity measure isn't SSCP-type; (2) the matrix values was incorrectly entered; (3) the matrix is in fact Gramian but is (or so close to be) singular that sometimes the spectral method of computing eigenvalues produces tiny negative ones in place of true zero or tiny positive ones. An alternative and equivalent summary for the cloud is the matrix of euclidean distances. A scalar product (such as covariance) between a pair of items and the corresponding squared euclidean distance between them are tied by the law of cosines (cosine theorem, look at the picture there): $d_{12}^2 = h_1^2+h_2^2-2s_{12}$, where the $s$ is the scalar product and the $h$'s are the distances of the two items from the origin. In case of covariance matrix between variables $X$ and $Y$ this formula looks as $d_{xy}^2 = \sigma_x^2+\sigma_y^2-2cov_{xy}$. As interim conclusion: a covariance (or correlation or other scalar product) matrix between some $m$ items is a configuration of points embedded in Euclidean space, so euclidean distances are defined between all these $m$ points. Now, if [point 5] holds exactly, then the configuration of points is truly euclidean configuration which entails that the scalar product matrix at hand (e.g. the covariance one) is Gramian. Otherwise it is non-Gramian. Thus, to say "$m$X$m$ covariance matrix is positively semi-definite" is to say "the $m$ points plus the origin fit in Euclidean space perfectly". What are possible causes or versions of non-Gramian (non-Euclidean) configuration? The answers follow upon contemplating [point 4]. Cause 1. Evil is among the points themselves: $m$X$m$ distance matrix isn't fully euclidean. Some of the pairwise distances $d$ are such that they cannot agree with the rest of the points in Euclidean space. See Fig1. Cause 2. There is general (matrix-level) mismatch between $h$'s and $d$'s. For example, with fixed $d$'s and some $h$'s given, the other $h$'s must only vary within some bounds in order to stay in consent with Euclidean space. See Fig2. Cause 3. There is localized (pair-level) mismatch between a $d$ and the pair of corresponding $h$'s connected to those two points. 
Namely, the rule of triangular inequality is violated; that rule demands $h_1+h_2 \ge d_{12} \ge |h_1-h_2|$. See Fig3. To diagnose the cause, convert the non-Gramian covariance matrix into distance matrix using the above law of cosines. Do double-centering on it. If the resultant matrix has negative eigenvalues, cause 1 is present. Else if any $|cov_{ij}| \gt \sigma_i \sigma_j$, cause 3 is present. Else cause 2 is present. Sometimes more than one cause get along in one matrix. Fig1. Fig2. Fig3.
Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to (Possible looseness in reasoning would be mine. I'm not a mathematician: this is a depiction, not proof, and is from my numeric experimenting, not from books.) A positive semidefinite (psd) matrix, a
4,690
Why do my p-values differ between logistic regression output, chi-squared test, and the confidence interval for the OR?
With generalized linear models, there are three different types of statistical tests that can be run. These are: Wald tests, likelihood ratio tests, and score tests. The excellent UCLA statistics help site has a discussion of them here. The following figure (copied from their site) helps to illustrate them: The Wald test assumes that the likelihood is normally distributed, and on that basis, uses the degree of curvature to estimate the standard error. Then, the parameter estimate divided by the SE yields a $z$-score. This holds under large $N$, but isn't quite true with smaller $N$s. It is hard to say when your $N$ is large enough for this property to hold, so this test can be slightly risky. Likelihood ratio tests look at the ratio of the likelihoods (or difference in log likelihoods) at its maximum and at the null. This is often considered the best test. The score test is based on the slope of the likelihood at the null value. This is typically less powerful, but there are times when the full likelihood cannot be computed and so this is a nice fallback option. The tests that come with summary.glm() are Wald tests. You don't say how you got your confidence intervals, but I assume you used confint(), which in turn calls profile(). More specifically, those confidence intervals are calculated by profiling the likelihood (which is a better approach than multiplying the SE by $1.96$). That is, they are analogous to the likelihood ratio test, not the Wald test. The $\chi^2$-test, in turn, is a score test. As your $N$ becomes indefinitely large, the three different $p$'s should converge on the same value, but they can differ slightly when you don't have infinite data. It is worth noting that the (Wald) $p$-value in your initial output is just barely significant and there is little real difference between just over and just under $\alpha=.05$ (quote). That line isn't 'magic'. Given that the two more reliable tests are just over $.05$, I would say that your data are not quite 'significant' by conventional criteria. Below I profile the coefficients on the scale of the linear predictor and run the likelihood ratio test explicitly (via anova.glm()). I get the same results as you: library(MASS) x = matrix(c(343-268,268,73-49,49), nrow=2, byrow=T); x # [,1] [,2] # [1,] 75 268 # [2,] 24 49 D = factor(c("N","Diabetes"), levels=c("N","Diabetes")) m = glm(x~D, family=binomial) summary(m) # ... # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) -1.2735 0.1306 -9.749 <2e-16 *** # DDiabetes 0.5597 0.2813 1.990 0.0466 * # ... confint(m) # Waiting for profiling to be done... # 2.5 % 97.5 % # (Intercept) -1.536085360 -1.023243 # DDiabetes -0.003161693 1.103671 anova(m, test="LRT") # ... # Df Deviance Resid. Df Resid. Dev Pr(>Chi) # NULL 1 3.7997 # D 1 3.7997 0 0.0000 0.05126 . chisq.test(x) # Pearson's Chi-squared test with Yates' continuity correction # # X-squared = 3.4397, df = 1, p-value = 0.06365 As @JWilliman pointed out in a comment (now deleted), in R, you can also get a score-based p-value using anova.glm(model, test="Rao"). In the example below, note that the p-value isn't quite the same as in the chi-squared test above, because by default, R's chisq.test() applies a continuity correction. If we change that setting, the p-values match: anova(m, test="Rao") # ... # Df Deviance Resid. Df Resid. Dev Rao Pr(>Chi) # NULL 1 3.7997 # D 1 3.7997 0 0.0000 4.024 0.04486 * chisq.test(x, correct=FALSE) # Pearson's Chi-squared test # # data: x # X-squared = 4.024, df = 1, p-value = 0.04486
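A small addendum (a sketch reusing the model m fitted in the code above): the difference between multiplying the SE by 1.96 and profiling the likelihood can be seen directly on the DDiabetes coefficient.
est <- coef(summary(m))["DDiabetes", "Estimate"]
se  <- coef(summary(m))["DDiabetes", "Std. Error"]
est + c(-1, 1) * qnorm(0.975) * se   # Wald interval: just excludes 0
confint(m)["DDiabetes", ]            # profile-likelihood interval: just includes 0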
Why do my p-values differ between logistic regression output, chi-squared test, and the confidence i
With generalized linear models, there are three different types of statistical tests that can be run. These are: Wald tests, likelihood ratio tests, and score tests. The excellent UCLA statistics he
Why do my p-values differ between logistic regression output, chi-squared test, and the confidence interval for the OR? With generalized linear models, there are three different types of statistical tests that can be run. These are: Wald tests, likelihood ratio tests, and score tests. The excellent UCLA statistics help site has a discussion of them here. The following figure (copied from their site) helps to illustrate them: The Wald test assumes that the likelihood is normally distributed, and on that basis, uses the degree of curvature to estimate the standard error. Then, the parameter estimate divided by the SE yields a $z$-score. This holds under large $N$, but isn't quite true with smaller $N$s. It is hard to say when your $N$ is large enough for this property to hold, so this test can be slightly risky. Likelihood ratio tests look at the ratio of the likelihoods (or difference in log likelihoods) at its maximum and at the null. This is often considered the best test. The score test is based on the slope of the likelihood at the null value. This is typically less powerful, but there are times when the full likelihood cannot be computed and so this is a nice fallback option. The tests that come with summary.glm() are Wald tests. You don't say how you got your confidence intervals, but I assume you used confint(), which in turn calls profile(). More specifically, those confidence intervals are calculated by profiling the likelihood (which is a better approach than multiplying the SE by $1.96$). That is, they are analogous to the likelihood ratio test, not the Wald test. The $\chi^2$-test, in turn, is a score test. As your $N$ becomes indefinitely large, the three different $p$'s should converge on the same value, but they can differ slightly when you don't have infinite data. It is worth noting that the (Wald) $p$-value in your initial output is just barely significant and there is little real difference between just over and just under $\alpha=.05$ (quote). That line isn't 'magic'. Given that the two more reliable tests are just over $.05$, I would say that your data are not quite 'significant' by conventional criteria. Below I profile the coefficients on the scale of the linear predictor and run the likelihood ratio test explicitly (via anova.glm()). I get the same results as you: library(MASS) x = matrix(c(343-268,268,73-49,49), nrow=2, byrow=T); x # [,1] [,2] # [1,] 75 268 # [2,] 24 49 D = factor(c("N","Diabetes"), levels=c("N","Diabetes")) m = glm(x~D, family=binomial) summary(m) # ... # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) -1.2735 0.1306 -9.749 <2e-16 *** # DDiabetes 0.5597 0.2813 1.990 0.0466 * # ... confint(m) # Waiting for profiling to be done... # 2.5 % 97.5 % # (Intercept) -1.536085360 -1.023243 # DDiabetes -0.003161693 1.103671 anova(m, test="LRT") # ... # Df Deviance Resid. Df Resid. Dev Pr(>Chi) # NULL 1 3.7997 # D 1 3.7997 0 0.0000 0.05126 . chisq.test(x) # Pearson's Chi-squared test with Yates' continuity correction # # X-squared = 3.4397, df = 1, p-value = 0.06365 As @JWilliman pointed out in a comment (now deleted), in R, you can also get a score-based p-value using anova.glm(model, test="Rao"). In the example below, note that the p-value isn't quite the same as in the chi-squared test above, because by default, R's chisq.test() applies a continuity correction. If we change that setting, the p-values match: anova(m, test="Rao") # ... # Df Deviance Resid. Df Resid. 
Dev Rao Pr(>Chi) # NULL 1 3.7997 # D 1 3.7997 0 0.0000 4.024 0.04486 * chisq.test(x, correct=FALSE) # Pearson's Chi-squared test # # data: x # X-squared = 4.024, df = 1, p-value = 0.04486
Why do my p-values differ between logistic regression output, chi-squared test, and the confidence i With generalized linear models, there are three different types of statistical tests that can be run. These are: Wald tests, likelihood ratio tests, and score tests. The excellent UCLA statistics he
4,691
Fast linear regression robust to outliers
If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this is Cook, R. Dennis (1979). Influential Observations in Linear Regression. Journal of the American Statistical Association (American Statistical Association) 74 (365): 169–174. For finding more than one outlier, for many years, the leading method was the so-called $M$-estimation family of approaches. This is a rather broad family of estimators that includes Huber's $M$ estimator of regression and Koenker's L1 regression, as well as the approach proposed by Procastinator in his comment to your question. The $M$ estimators with convex $\rho$ functions have the advantage that they have about the same numerical complexity as a regular regression estimation. The big disadvantage is that they can only reliably find the outliers if: the contamination rate of your sample is smaller than $\frac{1}{1+p}$, where $p$ is the number of design variables, or if the outliers are not outlying in the design space (Ellis and Morgenthaler (1992)). You can find good implementations of $M$ and $l_1$ estimates of regression in the robustbase and quantreg R packages, respectively. If your data contains more than $\lfloor\frac{n}{p+1}\rfloor$ outliers, potentially also outlying in the design space, then finding them amounts to solving a combinatorial problem (equivalently, the solution to an $M$ estimator with a re-descending/non-convex $\rho$ function). In the last 20 years (and especially the last 10) a large body of fast and reliable outlier detection algorithms has been designed to approximately solve this combinatorial problem. These are now widely implemented in the most popular statistical packages (R, Matlab, SAS, STATA,...). Nonetheless, the numerical complexity of finding outliers with these approaches is typically of order $O(2^p)$. Most algorithms can be used in practice for values of $p$ in the mid teens. Typically these algorithms are linear in $n$ (the number of observations), so the number of observations isn't an issue. A big advantage is that most of these algorithms are embarrassingly parallel. More recently, many approaches specifically designed for higher-dimensional data have been proposed. Given that you did not specify $p$ in your question, I will list some references for the case $p<20$. Here are some review articles that explain this in greater detail: Rousseeuw, P. J. and van Zomeren B.C. (1990). Unmasking Multivariate Outliers and Leverage Points. Journal of the American Statistical Association, Vol. 85, No. 411, pp. 633-639. Rousseeuw, P.J. and Van Driessen, K. (2006). Computing LTS Regression for Large Data Sets. Data Mining and Knowledge Discovery, Volume 12, Issue 1, pp. 29-45. Hubert, M., Rousseeuw, P.J. and Van Aelst, S. (2008). High-Breakdown Robust Multivariate Methods. Statistical Science, Vol. 23, No. 1, 92–119. Ellis S. P. and Morgenthaler S. (1992). Leverage and Breakdown in L1 Regression. Journal of the American Statistical Association, Vol. 87, No. 417, pp. 143-148. A recent reference book on the problem of outlier identification is: Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York. These (and many other variations of these) methods are implemented (among others) in the robustbase R package.
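A small sketch comparing an ordinary least-squares fit with two robust alternatives on contaminated data (this assumes the MASS and robustbase packages are installed; MASS::rlm is my choice for a Huber-type M-estimator and is not mentioned in the answer above):
set.seed(7)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
y[1:20] <- y[1:20] + 15                 # 10% contamination in the response
fit_ols <- lm(y ~ x)
fit_m   <- MASS::rlm(y ~ x)             # convex-rho M-estimator (Huber)
fit_mm  <- robustbase::lmrob(y ~ x)     # high-breakdown MM-estimator
rbind(ols = coef(fit_ols), huber = coef(fit_m), mm = coef(fit_mm))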
Fast linear regression robust to outliers
If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this is Cook, R. Dennis (1979). Influential
Fast linear regression robust to outliers If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this is Cook, R. Dennis (1979). Influential Observations in Linear Regression. Journal of the American Statistical Association (American Statistical Association) 74 (365): 169–174. For finding more than one outlier, for many years, the leading method was the so-called $M$-estimation family of approach. This is a rather broad family of estimators that includes Huber's $M$ estimator of regression, Koenker's L1 regression as well as the approach proposed by Procastinator in his comment to your question. The $M$ estimators with convex $\rho$ functions have the advantage that they have about the same numerical complexity as a regular regression estimation. The big disadvantage is that they can only reliably find the outliers if: the contamination rate of your sample is smaller than $\frac{1}{1+p}$ where $p$ is the number of design variables, or if the outliers are not outlying in the design space (Ellis and Morgenthaler (1992)). You can find good implementation of $M$ ($l_1$) estimates of regression in the robustbase (quantreg) R package. If your data contains more than $\lfloor\frac{n}{p+1}\rfloor$ outlier potentially also outlying on the design space, then, finding them amounts to solving a combinatorial problem (equivalently the solution to an $M$ estimator with re-decending/non-convex $\rho$ function). In the last 20 years (and specially last 10) a large body of fast and reliable outlier detection algorithms have been designed to approximately solve this combinatorial problem. These are now widely implemented in the most popular statistical packages (R, Matlab, SAS, STATA,...). Nonetheless, the numerical complexity of finding outliers with these approaches is typically of order $O(2^p)$. Most algorithm can be used in practice for values of $p$ in the mid teens. Typically these algorithms are linear in $n$ (the number of observations) so the number of observation isn't an issue. A big advantage is that most of these algorithms are embarrassingly parallel. More recently, many approaches specifically designed for higher dimensional data have been proposed. Given that you did not specify $p$ in your question, I will list some references for the case $p<20$. Here are some papers that explain this in greater details in these series of review articles: Rousseeuw, P. J. and van Zomeren B.C. (1990). Unmasking Multivariate Outliers and Leverage Points. Journal of the American Statistical Association, Vol. 85, No. 411, pp. 633-639. Rousseeuw, P.J. and Van Driessen, K. (2006). Computing LTS Regression for Large Data Sets. Data Mining and Knowledge Discovery archive Volume 12 Issue 1, Pages 29 - 45. Hubert, M., Rousseeuw, P.J. and Van Aelst, S. (2008). High-Breakdown Robust Multivariate Methods. Statistical Science, Vol. 23, No. 1, 92–119 Ellis S. P. and Morgenthaler S. (1992). Leverage and Breakdown in L1 Regression. Journal of the American Statistical Association, Vol. 87, No. 417, pp. 143-148 A recent reference book on the problem of outlier identification is: Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York. These (and many other variations of these) methods are implemented (among other) in the robustbase R package.
Fast linear regression robust to outliers If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this is Cook, R. Dennis (1979). Influential
4,692
Fast linear regression robust to outliers
For simple regression (single x), there's something to be said for the Theil-Sen line in terms of robustness to y-outliers and to influential points as well as generally good efficiency (at the normal) compared to LS for the slope. The breakdown point for the slope is nearly 30%; as long as the intercept (there are a variety of possible intercepts people have used) doesn't have a lower breakdown, the whole procedure copes with a moderate fraction of contamination quite well. Its speed might sound like it would be bad - median of $\binom{n}{2}$ slopes looks to be $O(n^2)$ even with an $O(n)$ median - but my recollection is that it can be done more quickly if speed is really an issue ($O(n \log n)$, I believe) Edit: user603 asked for an advantage of Theil regression over L1 regression. The answer is the other thing I mentioned - influential points: The red line is the $L_1$ fit (from the function rq in the quantreg package). The green is a fit with a Theil slope. All it takes is a single typo in the x-value - like typing 533 instead of 53 - and this sort of thing can happen. So the $L_1$ fit isn't robust to a single typo in the x-space.
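A direct O(n^2) sketch of the Theil-Sen slope described above, plus the single-typo scenario (the function and the simulated data are mine):
theil_sen <- function(x, y) {
  ij <- combn(length(x), 2)                                     # all pairs of points
  slopes <- (y[ij[2, ]] - y[ij[1, ]]) / (x[ij[2, ]] - x[ij[1, ]])
  b <- median(slopes, na.rm = TRUE)                             # na.rm guards against 0/0 from tied x's
  c(intercept = median(y - b * x), slope = b)
}
set.seed(3)
x <- rnorm(50); y <- 2 + 0.5 * x + rnorm(50)
x[1] <- 533                     # a single gross typo in an x-value
theil_sen(x, y)                 # slope stays in the right ballpark (~0.5)
coef(lm(y ~ x))                 # the least-squares slope is dragged toward zero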
Fast linear regression robust to outliers
For simple regression (single x), there's something to be said for the Theil-Sen line in terms of robustness to y-outliers and to influential points as well as generally good efficiency (at the normal
Fast linear regression robust to outliers For simple regression (single x), there's something to be said for the Theil-Sen line in terms of robustness to y-outliers and to influential points as well as generally good efficiency (at the normal) compared to LS for the slope. The breakdown point for the slope is nearly 30%; as long as the intercept (there are a variety of possible intercepts people have used) doesn't have a lower breakdown, the whole procedure copes with a moderate fraction of contamination quite well. Its speed might sound like it would be bad - median of $\binom{n}{2}$ slopes looks to be $O(n^2)$ even with an $O(n)$ median - but my recollection is that it can be done more quickly if speed is really an issue ($O(n \log n)$, I believe) Edit: user603 asked for an advantage of Theil regression over L1 regression. The answer is the other thing I mentioned - influential points: The red line is the $L_1$ fit (from the function rq in the quantreg package). The green is a fit with a Theil slope. All it takes is a single typo in the x-value - like typing 533 instead of 53 - and this sort of thing can happen. So the $L_1$ fit isn't robust to a single typo in the x-space.
Fast linear regression robust to outliers For simple regression (single x), there's something to be said for the Theil-Sen line in terms of robustness to y-outliers and to influential points as well as generally good efficiency (at the normal
4,693
Fast linear regression robust to outliers
Have you looked at RANSAC (Wikipedia)? This should be good at computing a reasonable linear model even when there are a lot of outliers and noise, as it is built on the assumption that only part of the data will actually belong to the mechanism.
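A minimal RANSAC-style sketch for a straight line (the helper is hypothetical, not from any package): sample a minimal subset, fit, count inliers within a tolerance, and keep the fit with the largest consensus set.
ransac_line <- function(x, y, n_iter = 500, tol = 0.5) {
  best_fit <- NULL; best_n <- -1
  for (i in seq_len(n_iter)) {
    s <- sample(length(x), 2)                 # minimal subset for a line
    b <- coef(lm(y[s] ~ x[s]))
    if (anyNA(b)) next                        # degenerate sample (tied x), skip
    inl <- abs(y - (b[1] + b[2] * x)) < tol   # inliers over the full data set
    if (sum(inl) > best_n) { best_n <- sum(inl); best_fit <- lm(y ~ x, subset = inl) }
  }
  best_fit                                    # consensus refit on the best inlier set
}
set.seed(9)
x <- runif(100); y <- 1 + 3 * x + rnorm(100, sd = 0.2)
y[1:25] <- rnorm(25, mean = 8)                # 25% gross outliers
coef(ransac_line(x, y))                       # close to c(1, 3)
coef(lm(y ~ x))                               # pulled toward the contaminated points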
Fast linear regression robust to outliers
Have you looked at RANSAC (Wikipedia)? This should be good at computing a reasonable linear model even when there are a lot of outliers and noise, as it is built on the assumption that only part of th
Fast linear regression robust to outliers Have you looked at RANSAC (Wikipedia)? This should be good at computing a reasonable linear model even when there are a lot of outliers and noise, as it is built on the assumption that only part of the data will actually belong to the mechanism.
Fast linear regression robust to outliers Have you looked at RANSAC (Wikipedia)? This should be good at computing a reasonable linear model even when there are a lot of outliers and noise, as it is built on the assumption that only part of th
4,694
Fast linear regression robust to outliers
I found that $l_1$-penalized error regression works best. You can also use it iteratively, reweighting the samples that are not very consistent with the solution. The basic idea is to augment your model with an error term: $$y=Ax+e$$ where $e$ is the unknown error vector. Now you minimize $$\parallel y-Ax-e \parallel_2^2+ \lambda \parallel e \parallel_1 \, .$$ Interestingly, you can of course use the "fused lasso" for this when you can estimate the certainty of your measurements in advance and put it as a weighting $$W=\mathrm{diag}(w_i)$$ into the new, slightly different task $$\parallel y-Ax-e \parallel_2^2 + \lambda \parallel W e \parallel_1 \, .$$ More information can be found here: http://statweb.stanford.edu/~candes/papers/GrossErrorsSmallErrors.pdf
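A sketch of one way to solve min ||y - A x - e||_2^2 + lambda*||e||_1 by simple block coordinate descent (not the paper's algorithm; lambda, the helper names and the simulated data are mine): for fixed e, x is ordinary least squares on y - e; for fixed x, e is a soft-thresholding of the residuals.
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)
l1_error_reg <- function(A, y, lambda = 4, n_iter = 50) {
  e <- numeric(length(y))
  for (k in seq_len(n_iter)) {
    x <- lm.fit(A, y - e)$coefficients          # LS step for x
    e <- soft(drop(y - A %*% x), lambda / 2)    # closed-form step for e
  }
  list(x = x, e = e)
}
set.seed(11)
n <- 100; A <- cbind(1, rnorm(n))
y <- drop(A %*% c(1, 2)) + rnorm(n, sd = 0.5)
y[1:5] <- y[1:5] + 20                           # gross errors on five cases
l1_error_reg(A, y)$x                            # close to c(1, 2) despite the gross errors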
Fast linear regression robust to outliers
I found the $l_1$ penalized error regression best. You can also use it iteratively and reweight samples, which are not very consistent with the solution. The basic idea is to augment your model with e
Fast linear regression robust to outliers I found the $l_1$ penalized error regression best. You can also use it iteratively and reweight samples, which are not very consistent with the solution. The basic idea is to augment your model with errors: $$y=Ax+e$$ where $e$ is the unknown error vector. Now you perform the regression on $$\parallel y-Ax-e \parallel_2^2+ \lambda \parallel e \parallel_1$$. Interestingly you can of course use "fused lasso" for this when you can estimate the certainty of your measurements in advance and put this as weighting into $$W=diag(w_i)$$ and to solve the new slighty different task $$\parallel y-Ax-e \parallel_2^2 + \lambda \parallel W e \parallel_1$$ More information can be found here: http://statweb.stanford.edu/~candes/papers/GrossErrorsSmallErrors.pdf
Fast linear regression robust to outliers I found the $l_1$ penalized error regression best. You can also use it iteratively and reweight samples, which are not very consistent with the solution. The basic idea is to augment your model with e
4,695
How to read Cook's distance plots?
Some texts tell you that points for which Cook's distance is higher than 1 are to be considered influential. Other texts give you a threshold of $4/N$ or $4/(N - k - 1)$, where $N$ is the number of observations and $k$ the number of explanatory variables. In your case the latter formula should yield a threshold around 0.1. John Fox (1), in his booklet on regression diagnostics, is rather cautious when it comes to giving numerical thresholds. He advises the use of graphics and examining in closer detail the points with "values of D that are substantially larger than the rest". According to Fox, thresholds should just be used to enhance graphical displays. In your case observations 7 and 16 could be considered influential. Well, I would at least have a closer look at them. Observation 29 is not substantially different from a couple of other observations. (1) Fox, John. (1991). Regression Diagnostics: An Introduction. Sage Publications.
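A short sketch of how these thresholds are typically applied in R (the built-in cars data set is used only for illustration; the thresholds are the ones quoted above):
fit <- lm(dist ~ speed, data = cars)
D <- cooks.distance(fit)
N <- nobs(fit); k <- length(coef(fit)) - 1
which(D > 1)                 # the "D > 1" rule
which(D > 4 / (N - k - 1))   # the size-adjusted rule (about 0.083 here)
plot(fit, which = 4)         # R's built-in Cook's distance plot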
How to read Cook's distance plots?
Some texts tell you that points for which Cook's distance is higher than 1 are to be considered as influential. Other texts give you a threshold of $4/N$ or $4/(N - k - 1)$, where $N$ is the number of
How to read Cook's distance plots? Some texts tell you that points for which Cook's distance is higher than 1 are to be considered as influential. Other texts give you a threshold of $4/N$ or $4/(N - k - 1)$, where $N$ is the number of observations and $k$ the number of explanatory variables. In your case the latter formula should yield a threshold around 0.1 . John Fox (1), in his booklet on regression diagnostics is rather cautious when it comes to giving numerical thresholds. He advises the use of graphics and to examine in closer details the points with "values of D that are substantially larger than the rest". According to Fox, thresholds should just be used to enhance graphical displays. In your case the observations 7 and 16 could be considered as influential. Well, I would at least have a closer look at them. The observation 29 is not substantially different from a couple of other observations. (1) Fox, John. (1991). Regression Diagnostics: An Introduction. Sage Publications.
How to read Cook's distance plots? Some texts tell you that points for which Cook's distance is higher than 1 are to be considered as influential. Other texts give you a threshold of $4/N$ or $4/(N - k - 1)$, where $N$ is the number of
4,696
How to read Cook's distance plots?
+1 to both @lejohn and @whuber. I wanted to expand a little on @whuber's comment. Cook's distance can be contrasted with dfbeta. Cook's distance refers to how far, on average, predicted y-values will move if the observation in question is dropped from the data set. dfbeta refers to how much a parameter estimate changes if the observation in question is dropped from the data set. Note that with $k$ covariates, there will be $k+1$ dfbetas (the intercept, $\beta_0$, and 1 $\beta$ for each covariate). Cook's distance is presumably more important to you if you are doing predictive modeling, whereas dfbeta is more important in explanatory modeling. There is one other point worth making here. In observational research, it is often difficult to sample uniformly across the predictor space, and you might have just a few points in a given area. Such points can diverge from the rest. Having a few distinct cases can be discomfiting, but they merit considerable thought before being relegated to outlier status. There may legitimately be an interaction amongst the predictors, or the system may shift to behave differently when predictor values become extreme. In addition, they may be able to help you untangle the effects of collinear predictors. Influential points could be a blessing in disguise.
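A brief illustration of the contrast (a sketch; any fitted model would do, the built-in cars data set is used here):
fit <- lm(dist ~ speed, data = cars)
head(cooks.distance(fit))   # one value per observation: overall shift in predictions
head(dfbeta(fit))           # change in each coefficient when that observation is dropped
head(dfbetas(fit))          # the same, scaled by each coefficient's standard error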
How to read Cook's distance plots?
+1 to both @lejohn and @whuber. I wanted to expand a little on @whuber's comment. Cook's distance can be contrasted with dfbeta. Cook's distance refers to how far, on average, predicted y-values wi
How to read Cook's distance plots? +1 to both @lejohn and @whuber. I wanted to expand a little on @whuber's comment. Cook's distance can be contrasted with dfbeta. Cook's distance refers to how far, on average, predicted y-values will move if the observation in question is dropped from the data set. dfbeta refers to how much a parameter estimate changes if the observation in question is dropped from the data set. Note that with $k$ covariates, there will be $k+1$ dfbetas (the intercept, $\beta_0$, and 1 $\beta$ for each covariate). Cook's distance is presumably more important to you if you are doing predictive modeling, whereas dfbeta is more important in explanatory modeling. There is one other point worth making here. In observational research, it is often difficult to sample uniformly across the predictor space, and you might have just a few points in a given area. Such points can diverge from the rest. Having a few, distinct cases can be discomfiting, but merit considerable thought before being relegated outliers. There may legitimately be an interaction amongst the predictors, or the system may shift to behave differently when predictor values become extreme. In addition, they may be able to help you untangle the effects of colinear predictors. Influential points could be a blessing in disguise.
How to read Cook's distance plots? +1 to both @lejohn and @whuber. I wanted to expand a little on @whuber's comment. Cook's distance can be contrasted with dfbeta. Cook's distance refers to how far, on average, predicted y-values wi
4,697
Logistic regression: anova chi-square test vs. significance of coefficients (anova() vs summary() in R)
In addition to @gung's answer, I'll try to provide an example of what the anova function actually tests. I hope this enables you to decide what tests are appropriate for the hypotheses you are interested in testing. Let's assume that you have an outcome $y$ and 3 predictor variables: $x_{1}$, $x_{2}$, and $x_{3}$. Now, if your logistic regression model would be my.mod <- glm(y~x1+x2+x3, family="binomial"). When you run anova(my.mod, test="Chisq"), the function compares the following models in sequential order. This type is also called Type I ANOVA or Type I sum of squares (see this post for a comparison of the different types): glm(y~1, family="binomial") vs. glm(y~x1, family="binomial") glm(y~x1, family="binomial") vs. glm(y~x1+x2, family="binomial") glm(y~x1+x2, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") So it sequentially compares the smaller model with the next more complex model by adding one variable in each step. Each of those comparisons is done via a likelihood ratio test (LR test; see example below). To my knowledge, these hypotheses are rarely of interest, but this has to be decided by you. Here is an example in R: mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv") mydata$rank <- factor(mydata$rank) my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") summary(my.mod) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.989979 1.139951 -3.500 0.000465 *** gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank2 -0.675443 0.316490 -2.134 0.032829 * rank3 -1.340204 0.345306 -3.881 0.000104 *** rank4 -1.551464 0.417832 -3.713 0.000205 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # The sequential analysis anova(my.mod, test="Chisq") Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev Pr(>Chi) NULL 399 499.98 gre 1 13.9204 398 486.06 0.0001907 *** gpa 1 5.7122 397 480.34 0.0168478 * rank 3 21.8265 394 458.52 7.088e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # We can make the comparisons by hand (adding a variable in each step) # model only the intercept mod1 <- glm(admit ~ 1, data = mydata, family = "binomial") # model with intercept + gre mod2 <- glm(admit ~ gre, data = mydata, family = "binomial") # model with intercept + gre + gpa mod3 <- glm(admit ~ gre + gpa, data = mydata, family = "binomial") # model containing all variables (full model) mod4 <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") anova(mod1, mod2, test="LRT") Model 1: admit ~ 1 Model 2: admit ~ gre Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 399 499.98 2 398 486.06 1 13.92 0.0001907 *** anova(mod2, mod3, test="LRT") Model 1: admit ~ gre Model 2: admit ~ gre + gpa Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 398 486.06 2 397 480.34 1 5.7122 0.01685 * anova(mod3, mod4, test="LRT") Model 1: admit ~ gre + gpa Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 397 480.34 2 394 458.52 3 21.826 7.088e-05 *** The $p$-values in the output of summary(my.mod) are Wald tests which test the following hypotheses (note that they're interchangeable and the order of the tests does not matter): For coefficient of x1: glm(y~x2+x3, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") For coefficient of x2: glm(y~x1+x3, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") For coefficient of x3: glm(y~x1+x2, family="binomial") vs. 
glm(y~x1+x2+x3, family="binomial") So each coefficient is tested against the full model containing all coefficients. Wald tests are an approximation of the likelihood ratio test. We could also do the likelihood ratio tests (LR tests) directly. Here is how: mod1.2 <- glm(admit ~ gre + gpa, data = mydata, family = "binomial") mod2.2 <- glm(admit ~ gre + rank, data = mydata, family = "binomial") mod3.2 <- glm(admit ~ gpa + rank, data = mydata, family = "binomial") anova(mod1.2, my.mod, test="LRT") # joint LR test for rank Model 1: admit ~ gre + gpa Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 397 480.34 2 394 458.52 3 21.826 7.088e-05 *** anova(mod2.2, my.mod, test="LRT") # LR test for gpa Model 1: admit ~ gre + rank Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 395 464.53 2 394 458.52 1 6.0143 0.01419 * anova(mod3.2, my.mod, test="LRT") # LR test for gre Model 1: admit ~ gpa + rank Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 395 462.88 2 394 458.52 1 4.3578 0.03684 * The $p$-values from the likelihood ratio tests are very similar to those obtained by the Wald tests from summary(my.mod) above. Note: the third model comparison of anova(my.mod, test="Chisq"), the one for rank, is the same as the comparison for rank shown above (anova(mod1.2, my.mod, test="LRT")). Both give the same $p$-value, $7.088\cdot 10^{-5}$, because each compares the model without rank to the model containing it.
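As a supplementary sketch: the three hand-made per-term LR tests above can also be obtained in a single call with drop1(), which removes each term in turn from the full model. The resulting p-values should match the hand-made comparisons, but the exact output of course depends on the data.
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank)
my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")
drop1(my.mod, test = "LRT")
# One row per term (gre, gpa, rank), each dropped in turn from the full model,
# unlike the sequential comparisons made by anova(my.mod, test = "Chisq").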
Logistic regression: anova chi-square test vs. significance of coefficients (anova() vs summary() in
In addition to @gung's answer, I'll try to provide an example of what the anova function actually tests. I hope this enables you to decide what tests are appropriate for the hypotheses you are interes
Logistic regression: anova chi-square test vs. significance of coefficients (anova() vs summary() in R) In addition to @gung's answer, I'll try to provide an example of what the anova function actually tests. I hope this enables you to decide what tests are appropriate for the hypotheses you are interested in testing. Let's assume that you have an outcome $y$ and 3 predictor variables: $x_{1}$, $x_{2}$, and $x_{3}$. Now, if your logistic regression model would be my.mod <- glm(y~x1+x2+x3, family="binomial"). When you run anova(my.mod, test="Chisq"), the function compares the following models in sequential order. This type is also called Type I ANOVA or Type I sum of squares (see this post for a comparison of the different types): glm(y~1, family="binomial") vs. glm(y~x1, family="binomial") glm(y~x1, family="binomial") vs. glm(y~x1+x2, family="binomial") glm(y~x1+x2, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") So it sequentially compares the smaller model with the next more complex model by adding one variable in each step. Each of those comparisons is done via a likelihood ratio test (LR test; see example below). To my knowledge, these hypotheses are rarely of interest, but this has to be decided by you. Here is an example in R: mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv") mydata$rank <- factor(mydata$rank) my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") summary(my.mod) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.989979 1.139951 -3.500 0.000465 *** gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank2 -0.675443 0.316490 -2.134 0.032829 * rank3 -1.340204 0.345306 -3.881 0.000104 *** rank4 -1.551464 0.417832 -3.713 0.000205 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # The sequential analysis anova(my.mod, test="Chisq") Terms added sequentially (first to last) Df Deviance Resid. Df Resid. Dev Pr(>Chi) NULL 399 499.98 gre 1 13.9204 398 486.06 0.0001907 *** gpa 1 5.7122 397 480.34 0.0168478 * rank 3 21.8265 394 458.52 7.088e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # We can make the comparisons by hand (adding a variable in each step) # model only the intercept mod1 <- glm(admit ~ 1, data = mydata, family = "binomial") # model with intercept + gre mod2 <- glm(admit ~ gre, data = mydata, family = "binomial") # model with intercept + gre + gpa mod3 <- glm(admit ~ gre + gpa, data = mydata, family = "binomial") # model containing all variables (full model) mod4 <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") anova(mod1, mod2, test="LRT") Model 1: admit ~ 1 Model 2: admit ~ gre Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 399 499.98 2 398 486.06 1 13.92 0.0001907 *** anova(mod2, mod3, test="LRT") Model 1: admit ~ gre Model 2: admit ~ gre + gpa Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 398 486.06 2 397 480.34 1 5.7122 0.01685 * anova(mod3, mod4, test="LRT") Model 1: admit ~ gre + gpa Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 397 480.34 2 394 458.52 3 21.826 7.088e-05 *** The $p$-values in the output of summary(my.mod) are Wald tests which test the following hypotheses (note that they're interchangeable and the order of the tests does not matter): For coefficient of x1: glm(y~x2+x3, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") For coefficient of x2: glm(y~x1+x3, family="binomial") vs. 
glm(y~x1+x2+x3, family="binomial") For coefficient of x3: glm(y~x1+x2, family="binomial") vs. glm(y~x1+x2+x3, family="binomial") So each coefficient against the full model containing all coefficients. Wald tests are an approximation of the likelihood ratio test. We could also do the likelihood ratio tests (LR test). Here is how: mod1.2 <- glm(admit ~ gre + gpa, data = mydata, family = "binomial") mod2.2 <- glm(admit ~ gre + rank, data = mydata, family = "binomial") mod3.2 <- glm(admit ~ gpa + rank, data = mydata, family = "binomial") anova(mod1.2, my.mod, test="LRT") # joint LR test for rank Model 1: admit ~ gre + gpa Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 397 480.34 2 394 458.52 3 21.826 7.088e-05 *** anova(mod2.2, my.mod, test="LRT") # LR test for gpa Model 1: admit ~ gre + rank Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 395 464.53 2 394 458.52 1 6.0143 0.01419 * anova(mod3.2, my.mod, test="LRT") # LR test for gre Model 1: admit ~ gpa + rank Model 2: admit ~ gre + gpa + rank Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 395 462.88 2 394 458.52 1 4.3578 0.03684 * The $p$-values from the likelihood ratio tests are very similar to those obtained by the Wald tests by summary(my.mod) above. Note: The third model comparison for rank of anova(my.mod, test="Chisq") is the same as the comparison for rank in the example below (anova(mod1.2, my.mod, test="Chisq")). Each time, the $p$-value is the same, $7.088\cdot 10^{-5}$. It is each time the comparison between the model without rank vs. the model containing it.
Logistic regression: anova chi-square test vs. significance of coefficients (anova() vs summary() in In addition to @gung's answer, I'll try to provide an example of what the anova function actually tests. I hope this enables you to decide what tests are appropriate for the hypotheses you are interes
4,698
Suppression effect in regression: definition and visual explanation/depiction
There exist a number of frequently mentioned regression effects which are conceptually different but share much in common when seen purely statistically (see e.g. the paper "Equivalence of the Mediation, Confounding and Suppression Effect" by David MacKinnon et al., or Wikipedia articles): Mediator: IV which conveys the effect (totally or partly) of another IV to the DV. Confounder: IV which constitutes or precludes, totally or partly, the effect of another IV on the DV. Moderator: IV which, as it varies, governs the strength of the effect of another IV on the DV. Statistically, it is known as an interaction between the two IVs. Suppressor: IV (conceptually a mediator or a moderator) whose inclusion strengthens the effect of another IV on the DV. I'm not going to discuss to what extent some or all of them are technically similar (for that, read the paper linked above). My aim is to show graphically what a suppressor is. The above definition, that "a suppressor is a variable whose inclusion strengthens the effect of another IV on the DV", seems to me potentially too broad because it says nothing about the mechanism of such enhancement. Below I discuss one mechanism - the only one I consider to be suppression. If there are other mechanisms as well (as of right now, I haven't tried to think of any), then either the above "broad" definition should be considered imprecise or my definition of suppression should be considered too narrow. Definition (in my understanding) A suppressor is an independent variable which, when added to the model, raises the observed R-square mostly because it accounts for the residuals left by the model without it, and not because of its own association with the DV (which is comparatively weak). We know that the increase in R-square in response to adding an IV is the squared part correlation of that IV in the new model. This way, if the part correlation of the IV with the DV is greater (in absolute value) than the zero-order $r$ between them, that IV is a suppressor. So, a suppressor mostly "suppresses" the error of the reduced model while being weak as a predictor itself. The error term is the complement to the prediction. The prediction is "projected on" or "shared between" the IVs (regression coefficients), and so is the error term ("complements" to the coefficients). The suppressor suppresses such error components unevenly: more for some IVs, less for others. For those IVs "whose" components it suppresses greatly, it lends considerable facilitating aid by actually raising their regression coefficients. Weak suppression effects occur often and widely (an example on this site). Strong suppression is typically introduced deliberately: a researcher seeks a characteristic which correlates with the DV as weakly as possible and at the same time correlates with something in the IV of interest which is considered irrelevant - prediction-void - with respect to the DV. He enters it into the model and gets a considerable increase in that IV's predictive power. The suppressor's coefficient is typically not interpreted. I could summarize my definition as follows [drawing on @Jake's answer and @gung's comments]: Formal (statistical) definition: a suppressor is an IV whose part correlation is larger than its zero-order correlation (with the dependent). Conceptual (practical) definition: the above formal definition + the zero-order correlation is small, so that the suppressor is not a sound predictor itself.
"Suppessor" is a role of a IV in a specific model only, not the characteristic of the separate variable. When other IVs are added or removed, the suppressor can suddenly stop suppressing or resume suppressing or change the focus of its suppressing activity. Normal regression situation The first picture below shows a typical regression with two predictors (we'll speak of linear regression). The picture is copied from here where it is explained in more details. In short, moderately correlated (= having acute angle between them) predictors $X_1$ and $X_2$ span 2-dimesional space "plane X". The dependent variable $Y$ is projected onto it orthogonally, leaving the predicted variable $Y'$ and the residuals with st. deviation equal to the length of $e$. R-square of the regression is the angle between $Y$ and $Y'$, and the two regression coefficients are directly related to the skew coordinates $b_1$ and $b_2$, respectively. This situation I've called normal or typical because both $X_1$ and $X_2$ correlate with $Y$ (oblique angle exists between each of the independents and the dependent) and the predictors compete for the prediction because they are correlated. Suppression situation It is shown on the next picture. This one is like the previous; however $Y$ vector now directs somewhat away from the viewer and $X_2$ changed its direction considerably. $X_2$ acts as a suppressor. Note first of all that it hardly correlates with $Y$. Hence it cannot be a valuable predictor itself. Second. Imagine $X_2$ is absent and you predict only by $X_1$; the prediction of this one-variable regression is depicted as $Y^*$ red vector, the error as $e^*$ vector, and the coefficient is given by $b^*$ coordinate (which is the endpoint of $Y^*$). Now bring yourself back to the full model and notice that $X_2$ is fairly correlated with $e^*$. Thus, $X_2$ when introduced in the model, can explain a considerable portion of that error of the reduced model, cutting down $e^*$ to $e$. This constellation: (1) $X_2$ is not a rival to $X_1$ as a predictor; and (2) $X_2$ is a dustman to pick up unpredictedness left by $X_1$, - makes $X_2$ a suppressor. As a result of its effect, predictive strength of $X_1$ has grown to some extent: $b_1$ is larger than $b^*$. Well, why is $X_2$ called a suppressor to $X_1$ and how can it reinforce it when "suppressing" it? Look at the next picture. It is exactly the same as the previous. Think again of the model with the single predictor $X_1$. This predictor could of course be decomposed in two parts or components (shown in grey): the part which is "responsible" for prediction of $Y$ (and thus coinciding with that vector) and the part which is "responsible" for the unpredictedness (and thus parallel to $e^*$). It is this second part of $X_1$ - the part irrelevant to $Y$ - is suppressed by $X_2$ when that suppressor is added to the model. The irrelevant part is suppressed and thus, given that the suppressor doesn't itself predict $Y$ any much, the relevant part looks stronger. A suppressor is not a predictor but rather a facilitator for another/other predictor/s. Because it competes with what impedes them to predict. Sign of the suppressor's regression coefficient It is the sign of the correlation between the suppressor and the error variable $e^*$ left by the reduced (without-the-suppressor) model. In the depiction above, it is positive. In other settings (for example, revert the direction of $X_2$) it could be negative. 
Suppression example Example data: y x1 x2 1.64454000 .35118800 1.06384500 1.78520400 .20000000 -1.2031500 -1.3635700 -.96106900 -.46651400 .31454900 .80000000 1.17505400 .31795500 .85859700 -.10061200 .97009700 1.00000000 1.43890400 .66438800 .29267000 1.20404800 -.87025200 -1.8901800 -.99385700 1.96219200 -.27535200 -.58754000 1.03638100 -.24644800 -.11083400 .00741500 1.44742200 -.06923400 1.63435300 .46709500 .96537000 .21981300 .34809500 .55326800 -.28577400 .16670800 .35862100 1.49875800 -1.1375700 -2.8797100 1.67153800 .39603400 -.81070800 1.46203600 1.40152200 -.05767700 -.56326600 -.74452200 .90471600 .29787400 -.92970900 .56189800 -1.5489800 -.83829500 -1.2610800 Linear regression results: Observe that $X_2$ served as a suppressor. Its zero-order correlation with $Y$ is practically zero, but its part correlation is much larger in magnitude, $-.224$. It strengthened to some extent the predictive force of $X_1$ (from r $.419$, a would-be beta in a simple regression with it, to beta $.538$ in the multiple regression). According to the formal definition, $X_1$ appears to be a suppressor too, because its part correlation is greater than its zero-order correlation. But that is because we have only two IVs in this simple example. Conceptually, $X_1$ isn't a suppressor because its $r$ with $Y$ is not near $0$. By the way, the sum of squared part correlations exceeded the R-square: .4750^2+(-.2241)^2 = .2758 > .2256, which would not occur in a normal regression situation (see the Venn diagram below). Suppression and coefficient's sign change Adding a variable that will serve as a suppressor may or may not change the sign of some other variables' coefficients. "Suppression" and "sign change" effects are not the same thing. Moreover, I believe that a suppressor can never change the sign of the predictors for which it serves as a suppressor. (It would be a shocking discovery to add a suppressor on purpose to facilitate a variable and then find that it has indeed become stronger, but in the opposite direction! I'd be thankful if somebody could show me it is possible.) Suppression and coefficient strengthening To cite an earlier passage: "For those IVs "whose" components [error components] it suppresses greatly, it lends considerable facilitating aid by actually raising their regression coefficients". Indeed, in our example above, $X_2$, the suppressor, raised the coefficient for $X_1$. Such enhancement of the unique predictive power of another regressor is often the reason a suppressor is added to a model, but it is not the definition of a suppressor or of the suppression effect. This is because the aforementioned enhancement of another predictor's capacity via adding more regressors can easily occur in a normal regression situation without those regressors being suppressors. Here is an example. y x1 x2 x3 1 1 1 1 3 2 2 6 2 3 3 5 3 2 4 2 4 3 5 9 3 4 4 2 2 5 3 3 3 6 4 4 4 7 5 5 5 6 6 6 4 5 7 5 3 4 5 5 4 5 3 5 5 6 4 6 6 7 5 4 5 8 6 6 4 2 7 7 5 3 8 8 6 4 9 4 5 5 3 3 4 6 4 2 3 2 1 1 4 3 5 4 5 4 6 5 6 9 5 4 5 8 3 3 3 5 5 2 2 6 6 1 3 7 7 5 5 8 8 8 Regression results without and with $X_3$: Inclusion of $X_3$ in the model raised the beta of $X_1$ from $.381$ to $.399$ (and its corresponding partial correlation with $Y$ from $.420$ to $.451$). Still, we find no suppressor in the model. $X_3$'s part correlation ($.229$) is not greater than its zero-order correlation ($.427$). The same is true for the other regressors.
Definition of a suppessor is different from just strenghtening/facilitation; and it is about picking up mostly errors, due to which the part correlation exceeds the zero-order one. Suppression and Venn diagram Normal regressional situation is often explained with the help of Venn diagram. A+B+C+D = 1, all $Y$ variability. B+C+D area is the variability accounted by the two IV ($X_1$ and $X_2$), the R-square; the remaining area A is the error variability. B+C = $r_{YX_1}^2$; D+C = $r_{YX_2}^2$, Pearson zero-order correlations. B and D are the squared part (semipartial) correlations: B = $r_{Y(X_1.X_2)}^2$; D = $r_{Y(X_2.X_1)}^2$. B/(A+B) = $r_{YX_1.X_2}^2$ and D/(A+D) = $r_{YX_2.X_1}^2$ are the squared partial correlations which have the same basic meaning as the standardized regression coefficients betas. According to the above definition (which I stick to) that a suppressor is the IV with part correlation greater than zero-order correlation, $X_2$ is the suppressor if D area > D+C area. That cannot be displayed on Venn diagram. (It would imply that C from the view of $X_2$ is not "here" and is not the same entity than C from the view of $X_1$. One must invent perhaps something like multilayered Venn diagram to wriggle oneself to show it.) P.S. Upon finishing my answer I found this answer (by @gung) with a nice simple (schematic) diagram, which seems to be in agreement with what I showed above by vectors.
Suppression effect in regression: definition and visual explanation/depiction
There exist a number of frequenly mentioned regressional effects which conceptually are different but share much in common when seen purely statistically (see e.g. this paper "Equivalence of the Media
Suppression effect in regression: definition and visual explanation/depiction There exist a number of frequenly mentioned regressional effects which conceptually are different but share much in common when seen purely statistically (see e.g. this paper "Equivalence of the Mediation, Confounding and Suppression Effect" by David MacKinnon et al., or Wikipedia articles): Mediator: IV which conveys effect (totally of partly) of another IV to the DV. Confounder: IV which constitutes or precludes, totally or partly, effect of another IV to the DV. Moderator: IV which, varying, manages the strength of the effect of another IV on the DV. Statistically, it is known as interaction between the two IVs. Suppressor: IV (a mediator or a moderator conceptually) which inclusion strengthens the effect of another IV on the DV. I'm not going to discuss to what extent some or all of them are technically similar (for that, read the paper linked above). My aim is to try to show graphically what suppressor is. The above definition that "suppressor is a variable which inclusion strengthens the effect of another IV on the DV" seems to me potentially broad because it does not tell anything about mechanisms of such enhancement. Below I'm discussing one mechanism - the only one I consider to be suppression. If there are other mechanisms as well (as for right now, I haven't tried to meditate of any such other) then either the above "broad" definition should be considered imprecise or my definition of suppression should be considered too narrow. Definition (in my understanding) Suppressor is the independent variable which, when added to the model, raises observed R-square mostly due to its accounting for the residuals left by the model without it, and not due to its own association with the DV (which is comparatively weak). We know that the increase in R-square in response to adding a IV is the squared part correlation of that IV in that new model. This way, if the part correlation of the IV with the DV is greater (by absolute value) than the zero-order $r$ between them, that IV is a suppressor. So, a suppressor mostly "suppresses" the error of the reduced model, being weak as a predictor itself. The error term is the complement to the prediction. The prediction is "projected on" or "shared between" the IVs (regression coefficients), and so is the error term ("complements" to the coefficients). The suppressor suppresses such error components unevenly: greater for some IVs, lesser for other IVs. For those IVs "whose" such components it suppresses greatly it lends considerable facilitating aid by actually raising their regression coefficients. Not strong suppressing effects occurs often and wildly (an example on this site). Strong suppression is typically introduced consciously. A researcher seeks for a characteristic which must correlate with the DV as weak as possible and at the same time would correlate with something in the IV of interest which is considered irrelevant, prediction-void, in respect to the DV. He enters it to the model and gets considerable increase in that IV's predictive power. The suppressor's coefficient is typically not interpreted. I could summarize my definition as follows [up on @Jake's answer and @gung's comments]: Formal (statistical) definition: suppressor is IV with part correlation larger than zero-order correlation (with the dependent). 
Conceptual (practical) definition: the above formal definition + the zero-order correlation is small, so that the suppressor is not a sound predictor itself. "Suppessor" is a role of a IV in a specific model only, not the characteristic of the separate variable. When other IVs are added or removed, the suppressor can suddenly stop suppressing or resume suppressing or change the focus of its suppressing activity. Normal regression situation The first picture below shows a typical regression with two predictors (we'll speak of linear regression). The picture is copied from here where it is explained in more details. In short, moderately correlated (= having acute angle between them) predictors $X_1$ and $X_2$ span 2-dimesional space "plane X". The dependent variable $Y$ is projected onto it orthogonally, leaving the predicted variable $Y'$ and the residuals with st. deviation equal to the length of $e$. R-square of the regression is the angle between $Y$ and $Y'$, and the two regression coefficients are directly related to the skew coordinates $b_1$ and $b_2$, respectively. This situation I've called normal or typical because both $X_1$ and $X_2$ correlate with $Y$ (oblique angle exists between each of the independents and the dependent) and the predictors compete for the prediction because they are correlated. Suppression situation It is shown on the next picture. This one is like the previous; however $Y$ vector now directs somewhat away from the viewer and $X_2$ changed its direction considerably. $X_2$ acts as a suppressor. Note first of all that it hardly correlates with $Y$. Hence it cannot be a valuable predictor itself. Second. Imagine $X_2$ is absent and you predict only by $X_1$; the prediction of this one-variable regression is depicted as $Y^*$ red vector, the error as $e^*$ vector, and the coefficient is given by $b^*$ coordinate (which is the endpoint of $Y^*$). Now bring yourself back to the full model and notice that $X_2$ is fairly correlated with $e^*$. Thus, $X_2$ when introduced in the model, can explain a considerable portion of that error of the reduced model, cutting down $e^*$ to $e$. This constellation: (1) $X_2$ is not a rival to $X_1$ as a predictor; and (2) $X_2$ is a dustman to pick up unpredictedness left by $X_1$, - makes $X_2$ a suppressor. As a result of its effect, predictive strength of $X_1$ has grown to some extent: $b_1$ is larger than $b^*$. Well, why is $X_2$ called a suppressor to $X_1$ and how can it reinforce it when "suppressing" it? Look at the next picture. It is exactly the same as the previous. Think again of the model with the single predictor $X_1$. This predictor could of course be decomposed in two parts or components (shown in grey): the part which is "responsible" for prediction of $Y$ (and thus coinciding with that vector) and the part which is "responsible" for the unpredictedness (and thus parallel to $e^*$). It is this second part of $X_1$ - the part irrelevant to $Y$ - is suppressed by $X_2$ when that suppressor is added to the model. The irrelevant part is suppressed and thus, given that the suppressor doesn't itself predict $Y$ any much, the relevant part looks stronger. A suppressor is not a predictor but rather a facilitator for another/other predictor/s. Because it competes with what impedes them to predict. Sign of the suppressor's regression coefficient It is the sign of the correlation between the suppressor and the error variable $e^*$ left by the reduced (without-the-suppressor) model. 
In the depiction above, it is positive. In other settings (for example, revert the direction of $X_2$) it could be negative. Suppression example Example data: y x1 x2 1.64454000 .35118800 1.06384500 1.78520400 .20000000 -1.2031500 -1.3635700 -.96106900 -.46651400 .31454900 .80000000 1.17505400 .31795500 .85859700 -.10061200 .97009700 1.00000000 1.43890400 .66438800 .29267000 1.20404800 -.87025200 -1.8901800 -.99385700 1.96219200 -.27535200 -.58754000 1.03638100 -.24644800 -.11083400 .00741500 1.44742200 -.06923400 1.63435300 .46709500 .96537000 .21981300 .34809500 .55326800 -.28577400 .16670800 .35862100 1.49875800 -1.1375700 -2.8797100 1.67153800 .39603400 -.81070800 1.46203600 1.40152200 -.05767700 -.56326600 -.74452200 .90471600 .29787400 -.92970900 .56189800 -1.5489800 -.83829500 -1.2610800 Linear regression results: Observe that $X_2$ served as suppressor. Its zero-order correlation with $Y$ is practically zero but its part correlation is much larger by magnitude, $-.224$. It strengthened to some extent the predictive force of $X_1$ (from r $.419$, a would-be beta in simple regression with it, to beta $.538$ in the multiple regression). According to the formal definition, $X_1$ appeared a suppressor too, because its part correlation is greater than its zero-order correlation. But that is because we have only two IV in the simple example. Conceptually, $X_1$ isn't a suppressor because its $r$ with $Y$ is not about $0$. By way, sum of squared part correlations exceeded R-square: .4750^2+(-.2241)^2 = .2758 > .2256, which would not occur in normal regressional situation (see the Venn diagram below). Suppression and coefficient's sign change Adding a variable that will serve a supressor may as well as may not change the sign of some other variables' coefficients. "Suppression" and "change sign" effects are not the same thing. Moreover, I believe that a suppressor can never change sign of those predictors whom they serve suppressor. (It would be a shocking discovery to add the suppressor on purpose to facilitate a variable and then to find it having become indeed stronger but in the opposite direction! I'd be thankful if somebody could show me it is possible.) Suppression and coefficient strengthening To cite an earlier passage: "For those IVs "whose" such components [error components] it suppresses greatly the suppressor lends considerable facilitating aid by actually raising their regression coefficients". Indeed, in our Example above, $X_2$, the suppressor, raised the coefficient for $X_1$. Such enhancement of the unique predictive power of another regressor is often the aim of a suppressor to a model but it is not the definition of suppressor or of suppression effect. For, the aforementioned enhancement of another predictor's capacity via adding more regressors can easily occure in a normal regressional situation without those regressors being suppressors. Here is an example. y x1 x2 x3 1 1 1 1 3 2 2 6 2 3 3 5 3 2 4 2 4 3 5 9 3 4 4 2 2 5 3 3 3 6 4 4 4 7 5 5 5 6 6 6 4 5 7 5 3 4 5 5 4 5 3 5 5 6 4 6 6 7 5 4 5 8 6 6 4 2 7 7 5 3 8 8 6 4 9 4 5 5 3 3 4 6 4 2 3 2 1 1 4 3 5 4 5 4 6 5 6 9 5 4 5 8 3 3 3 5 5 2 2 6 6 1 3 7 7 5 5 8 8 8 Regressions results without and with $X_3$: Inclusion of $X_3$ in the model raised the beta of $X_1$ from $.381$ to $.399$ (and its corresponding partial correlation with $Y$ from $.420$ to $.451$). Still, we find no suppressor in the model. $X_3$'s part correlation ($.229$) is not greater than its zero-order correlation ($.427$). Same is for the other regressors. 
"Facilitation" effect was there, but not due to "suppression" effect. Definition of a suppessor is different from just strenghtening/facilitation; and it is about picking up mostly errors, due to which the part correlation exceeds the zero-order one. Suppression and Venn diagram Normal regressional situation is often explained with the help of Venn diagram. A+B+C+D = 1, all $Y$ variability. B+C+D area is the variability accounted by the two IV ($X_1$ and $X_2$), the R-square; the remaining area A is the error variability. B+C = $r_{YX_1}^2$; D+C = $r_{YX_2}^2$, Pearson zero-order correlations. B and D are the squared part (semipartial) correlations: B = $r_{Y(X_1.X_2)}^2$; D = $r_{Y(X_2.X_1)}^2$. B/(A+B) = $r_{YX_1.X_2}^2$ and D/(A+D) = $r_{YX_2.X_1}^2$ are the squared partial correlations which have the same basic meaning as the standardized regression coefficients betas. According to the above definition (which I stick to) that a suppressor is the IV with part correlation greater than zero-order correlation, $X_2$ is the suppressor if D area > D+C area. That cannot be displayed on Venn diagram. (It would imply that C from the view of $X_2$ is not "here" and is not the same entity than C from the view of $X_1$. One must invent perhaps something like multilayered Venn diagram to wriggle oneself to show it.) P.S. Upon finishing my answer I found this answer (by @gung) with a nice simple (schematic) diagram, which seems to be in agreement with what I showed above by vectors.
Suppression effect in regression: definition and visual explanation/depiction There exist a number of frequenly mentioned regressional effects which conceptually are different but share much in common when seen purely statistically (see e.g. this paper "Equivalence of the Media
4,699
Suppression effect in regression: definition and visual explanation/depiction
Here is another geometric view of suppression, but rather than being in the observation space as @ttnphns's example is, this one is in the variable space, the space where everyday scatterplots live. Consider a regression $\hat{y}_i=x_i+z_i$, that is, the intercept is 0 and both predictors have a partial slope of 1. Now, the predictors $x$ and $z$ may themselves be correlated. We will consider two cases: first the case where $x$ and $z$ are positively correlated, which I will call the "confounding" case (characterized by the secondary regression $\hat{x}_i=\frac{1}{2}z_i$), and second the case where $x$ and $z$ are negatively correlated, which I will call the "suppression" case (with secondary regression $\hat{x}_i=-\frac{1}{2}z_i$). We can plot our regression equation as a plane in the variable space that looks like this: Confounding case Let's consider the slope for the $x$ predictor in the confounding case. To say that the other predictor $z$ is serving as a confounding variable is to say that when we look at a simple regression of $y$ on $x$, the effect of $x$ here is stronger than is the effect of x in a multiple regression of $y$ on $x$ and $z$, where we partial out the effect of $z$. The effect of $x$ that we observe in the simple regression is, in some sense (not necessarily causal), partially due to the effect of $z$, which is positively associated with both $y$ and $x$, but not included in the regression. (For the purposes of this answer I will use "the effect of $x$" to refer to the slope of $x$.) We will call the slope of $x$ in the simple linear regression the "simple slope" of $x$ and the slope of $x$ in the multiple regression the "partial slope" of $x$. Here is what the simple and partial slopes of $x$ look like as vectors on the regression plane: The partial slope of x is perhaps easier to understand. It is shown in red above. It is the slope of a vector that moves along the plane in such a way that $x$ is increasing, but $z$ is held constant. This is what it means to "control for" $z$. The simple slope of $x$ is slightly more complicated because it implicitly also includes part of the effect of the $z$ predictor. It is shown in blue above. The simple slope of $x$ is the slope of a vector that moves along the plane in such a way that $x$ is increasing, and $z$ also is increasing (or decreasing) to whatever extent $x$ and $z$ are associated in the dataset. In the confounding case, we set things up so that the relationship between $x$ and $z$ was such that when we move up one unit on $x$, we also move up half a unit on $z$ (this comes from the secondary regression $\hat{x}_i=\frac{1}{2}z_i$). And since one-unit changes in both $x$ and $z$ are separately associated with one-unit changes in $y$, this means that the simple slope of $x$ in this case will be $\Delta x + \Delta z = 1 + \frac{1}{2} = 1.5$. So when we control for $z$ in the multiple regression, the effect of $x$ appears to be smaller than it was in the simple regression. We can see this visually above in the fact that the red vector (representing the partial slope) is less steep than the blue vector (representing the simple slope). The blue vector is really the result of adding two vectors, the red vector and another vector (not shown) representing the half the partial slope of $z$. Okay, now we turn to the slope for the $x$ predictor in the suppression case. If you followed all of the above, this is a really easy extension. 
Suppression case To say that the other predictor $z$ is serving as a suppressor variable is to say that when we look at a simple regression of $y$ on $x$, the effect of $x$ here is weaker than the effect of $x$ in a multiple regression of $y$ on $x$ and $z$, where we partial out the effect of $z$. (Note that in extreme cases, the effect of $x$ in the multiple regression might even flip directions! But I am not considering that extreme case here.) The intuition behind the terminology is that it appears that in the simple regression case, the effect of $x$ was being "suppressed" by the omitted $z$ variable. And when we include $z$ in the regression, the effect of $x$ emerges clearly for us to see, where we couldn't see it as clearly before. Here is what the simple and partial slopes of $x$ look like as vectors on the regression plane in the suppression case: So when we control for $z$ in the multiple regression, the effect of $x$ appears to increase relative to what it was in the simple regression. We can see this visually above in the fact that the red vector (representing the partial slope) is steeper than the blue vector (representing the simple slope). In this case the secondary regression was $\hat{x}_i=-\frac{1}{2}z_i$, so a one-unit increase in $x$ is associated with a half-unit decrease in $z$, which in turn leads to a half-unit decrease in $y$. So ultimately the simple slope of $x$ in this case will be $\Delta x + \Delta z = 1 - \frac{1}{2} = 0.5$. As before, the blue vector is really the result of adding two vectors, the red vector and another vector (not shown) representing half of the reverse of the partial slope of $z$. Illustrative datasets In case you want to play around with these examples, here is some R code for generating data conforming to the example values and running the various regressions.
library(MASS) # for mvrnorm() set.seed(7310383) # confounding case -------------------------------------------------------- mat <- rbind(c(5,1.5,1.5), c(1.5,1,.5), c(1.5,.5,1)) dat <- data.frame(mvrnorm(n=50, mu=numeric(3), empirical=T, Sigma=mat)) names(dat) <- c("y","x","z") cor(dat) # y x z # y 1.0000000 0.6708204 0.6708204 # x 0.6708204 1.0000000 0.5000000 # z 0.6708204 0.5000000 1.0000000 lm(y ~ x, data=dat) # # Call: # lm(formula = y ~ x, data = dat) # # Coefficients: # (Intercept) x # -1.57e-17 1.50e+00 lm(y ~ x + z, data=dat) # # Call: # lm(formula = y ~ x + z, data = dat) # # Coefficients: # (Intercept) x z # 3.14e-17 1.00e+00 1.00e+00 # @ttnphns comment: for x, zero-order r = .671 > part r = .387 # for z, zero-order r = .671 > part r = .387 lm(x ~ z, data=dat) # # Call: # lm(formula = x ~ z, data = dat) # # Coefficients: # (Intercept) z # 6.973e-33 5.000e-01 # suppression case -------------------------------------------------------- mat <- rbind(c(2,.5,.5), c(.5,1,-.5), c(.5,-.5,1)) dat <- data.frame(mvrnorm(n=50, mu=numeric(3), empirical=T, Sigma=mat)) names(dat) <- c("y","x","z") cor(dat) # y x z # y 1.0000000 0.3535534 0.3535534 # x 0.3535534 1.0000000 -0.5000000 # z 0.3535534 -0.5000000 1.0000000 lm(y ~ x, data=dat) # # Call: # lm(formula = y ~ x, data = dat) # # Coefficients: # (Intercept) x # -4.318e-17 5.000e-01 lm(y ~ x + z, data=dat) # # Call: # lm(formula = y ~ x + z, data = dat) # # Coefficients: # (Intercept) x z # -3.925e-17 1.000e+00 1.000e+00 # @ttnphns comment: for x, zero-order r = .354 < part r = .612 # for z, zero-order r = .354 < part r = .612 lm(x ~ z, data=dat) # # Call: # lm(formula = x ~ z, data = dat) # # Coefficients: # (Intercept) z # 1.57e-17 -5.00e-01
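A small numeric check of the slope decomposition described above, run after the code block so that `dat` is whichever data set (confounding or suppression) was generated last. The identity is exact for least squares: simple slope of x = partial slope of x + partial slope of z * slope of z on x.
b_simple <- coef(lm(y ~ x,     data = dat))["x"]
b_full   <- coef(lm(y ~ x + z, data = dat))
b_z_on_x <- coef(lm(z ~ x,     data = dat))["x"]
unname(b_simple)                              # 1.5 for the confounding data, 0.5 for suppression
unname(b_full["x"] + b_full["z"] * b_z_on_x)  # reproduces the simple slope exactly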
Suppression effect in regression: definition and visual explanation/depiction
Here is another geometric view of suppression, but rather than being in the observation space as @ttnphns's example is, this one is in the variable space, the space where everyday scatterplots live. C
Suppression effect in regression: definition and visual explanation/depiction Here is another geometric view of suppression, but rather than being in the observation space as @ttnphns's example is, this one is in the variable space, the space where everyday scatterplots live. Consider a regression $\hat{y}_i=x_i+z_i$, that is, the intercept is 0 and both predictors have a partial slope of 1. Now, the predictors $x$ and $z$ may themselves be correlated. We will consider two cases: first the case where $x$ and $z$ are positively correlated, which I will call the "confounding" case (characterized by the secondary regression $\hat{x}_i=\frac{1}{2}z_i$), and second the case where $x$ and $z$ are negatively correlated, which I will call the "suppression" case (with secondary regression $\hat{x}_i=-\frac{1}{2}z_i$). We can plot our regression equation as a plane in the variable space that looks like this: Confounding case Let's consider the slope for the $x$ predictor in the confounding case. To say that the other predictor $z$ is serving as a confounding variable is to say that when we look at a simple regression of $y$ on $x$, the effect of $x$ here is stronger than is the effect of x in a multiple regression of $y$ on $x$ and $z$, where we partial out the effect of $z$. The effect of $x$ that we observe in the simple regression is, in some sense (not necessarily causal), partially due to the effect of $z$, which is positively associated with both $y$ and $x$, but not included in the regression. (For the purposes of this answer I will use "the effect of $x$" to refer to the slope of $x$.) We will call the slope of $x$ in the simple linear regression the "simple slope" of $x$ and the slope of $x$ in the multiple regression the "partial slope" of $x$. Here is what the simple and partial slopes of $x$ look like as vectors on the regression plane: The partial slope of x is perhaps easier to understand. It is shown in red above. It is the slope of a vector that moves along the plane in such a way that $x$ is increasing, but $z$ is held constant. This is what it means to "control for" $z$. The simple slope of $x$ is slightly more complicated because it implicitly also includes part of the effect of the $z$ predictor. It is shown in blue above. The simple slope of $x$ is the slope of a vector that moves along the plane in such a way that $x$ is increasing, and $z$ also is increasing (or decreasing) to whatever extent $x$ and $z$ are associated in the dataset. In the confounding case, we set things up so that the relationship between $x$ and $z$ was such that when we move up one unit on $x$, we also move up half a unit on $z$ (this comes from the secondary regression $\hat{x}_i=\frac{1}{2}z_i$). And since one-unit changes in both $x$ and $z$ are separately associated with one-unit changes in $y$, this means that the simple slope of $x$ in this case will be $\Delta x + \Delta z = 1 + \frac{1}{2} = 1.5$. So when we control for $z$ in the multiple regression, the effect of $x$ appears to be smaller than it was in the simple regression. We can see this visually above in the fact that the red vector (representing the partial slope) is less steep than the blue vector (representing the simple slope). The blue vector is really the result of adding two vectors, the red vector and another vector (not shown) representing the half the partial slope of $z$. Okay, now we turn to the slope for the $x$ predictor in the suppression case. If you followed all of the above, this is a really easy extension. 
Suppression case To say that the other predictor $z$ is serving as a supressor variable is to say that when we look at a simple regression of $y$ on $x$, the effect of $x$ here is weaker than is the effect of x in a multiple regression of $y$ on $x$ and $z$, where we partial out the effect of $z$. (Note that in extreme cases, the effect of $x$ in the multiple regression might even flip directions! But I am not considering that extreme case here.) The intution behind the terminology is that it appears that in the simple regression case, the effect of $x$ was being "suppressed" by the omitted $z$ variable. And when we include $z$ in the regression, the effect of $x$ emerges clearly for us to see, where we couldn't see it as clearly before. Here is what the simple and partial slopes of $x$ look like as vectors on the regression plane in the suppression case: So when we control for $z$ in the multiple regression, the effect of $x$ appears to increase relative to what it was in the simple regression. We can see this visually above in the fact that the red vector (representing the partial slope) is steeper than the blue vector (representing the simple slope). In this case the secondary regression was $\hat{x}_i=-\frac{1}{2}z_i$, so a one-unit increase in $x$ is associated with a half-unit decrease in $z$, which in turn leads to a half-unit decrease in $y$. So ultimately the simple slope of $x$ in this case will be $\Delta x + \Delta z = 1 + -\frac{1}{2} = 0.5$. As before, the blue vector is really the result of adding two vectors, the red vector and another vector (not shown) representing half of the reverse of the partial slope of $z$. Illustrative datasets In case you want to play around with these examples, here is some R code for generating data conforming to the example values and running the various regressions. 
library(MASS) # for mvrnorm() set.seed(7310383) # confounding case -------------------------------------------------------- mat <- rbind(c(5,1.5,1.5), c(1.5,1,.5), c(1.5,.5,1)) dat <- data.frame(mvrnorm(n=50, mu=numeric(3), empirical=T, Sigma=mat)) names(dat) <- c("y","x","z") cor(dat) # y x z # y 1.0000000 0.6708204 0.6708204 # x 0.6708204 1.0000000 0.5000000 # z 0.6708204 0.5000000 1.0000000 lm(y ~ x, data=dat) # # Call: # lm(formula = y ~ x, data = dat) # # Coefficients: # (Intercept) x # -1.57e-17 1.50e+00 lm(y ~ x + z, data=dat) # # Call: # lm(formula = y ~ x + z, data = dat) # # Coefficients: # (Intercept) x z # 3.14e-17 1.00e+00 1.00e+00 # @ttnphns comment: for x, zero-order r = .671 > part r = .387 # for z, zero-order r = .671 > part r = .387 lm(x ~ z, data=dat) # # Call: # lm(formula = x ~ z, data = dat) # # Coefficients: # (Intercept) z # 6.973e-33 5.000e-01 # suppression case -------------------------------------------------------- mat <- rbind(c(2,.5,.5), c(.5,1,-.5), c(.5,-.5,1)) dat <- data.frame(mvrnorm(n=50, mu=numeric(3), empirical=T, Sigma=mat)) names(dat) <- c("y","x","z") cor(dat) # y x z # y 1.0000000 0.3535534 0.3535534 # x 0.3535534 1.0000000 -0.5000000 # z 0.3535534 -0.5000000 1.0000000 lm(y ~ x, data=dat) # # Call: # lm(formula = y ~ x, data = dat) # # Coefficients: # (Intercept) x # -4.318e-17 5.000e-01 lm(y ~ x + z, data=dat) # # Call: # lm(formula = y ~ x + z, data = dat) # # Coefficients: # (Intercept) x z # -3.925e-17 1.000e+00 1.000e+00 # @ttnphns comment: for x, zero-order r = .354 < part r = .612 # for z, zero-order r = .354 < part r = .612 lm(x ~ z, data=dat) # # Call: # lm(formula = x ~ z, data = dat) # # Coefficients: # (Intercept) z # 1.57e-17 -5.00e-01
Suppression effect in regression: definition and visual explanation/depiction Here is another geometric view of suppression, but rather than being in the observation space as @ttnphns's example is, this one is in the variable space, the space where everyday scatterplots live. C
4,700
Suppression effect in regression: definition and visual explanation/depiction
Here is how I think about the suppressor effect, but please let me know if I am wrong. Here is an example with a binary outcome (classification, logistic regression). We can see that there is no significant difference in X1 and none in X2, but put X1 and X2 together (i.e., correct X1 for X2 or vice versa) and the samples can be classified almost perfectly, so the variables are now highly significant.
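Since the figure itself is not reproduced here, a hypothetical R simulation in the same spirit may help; the variable names and coefficients are invented, and the exact p-values will vary with the random draw.
set.seed(7)
n  <- 150
u  <- rnorm(n)                              # shared component carrying no class information
x1 <- u + rnorm(n, sd = 0.3)
x2 <- u + rnorm(n, sd = 0.3)
y  <- rbinom(n, 1, plogis(6 * (x1 - x2)))   # the class depends only on the difference
summary(glm(y ~ x1,      family = binomial))$coefficients  # weak at best on its own
summary(glm(y ~ x2,      family = binomial))$coefficients  # weak at best on its own
summary(glm(y ~ x1 + x2, family = binomial))$coefficients  # together: both clearly significant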
Suppression effect in regression: definition and visual explanation/depiction
Here is how I think about the suppressor effect. But please let me know if I am wrong. Here is an example of a binary outcome (classification, logistic regression). We can see that there is no signif
Suppression effect in regression: definition and visual explanation/depiction Here is how I think about the suppressor effect. But please let me know if I am wrong. Here is an example of a binary outcome (classification, logistic regression). We can see that there is no significant difference in X1, there is no difference in X2, but put X1 and X2 together (i.e. correct x1 for x2 or vice versa) and samples can be classified almost perfectly and thus the variables are now highly significant.
Suppression effect in regression: definition and visual explanation/depiction Here is how I think about the suppressor effect. But please let me know if I am wrong. Here is an example of a binary outcome (classification, logistic regression). We can see that there is no signif