idx | question | answer
---|---|---
3,801 | Two-tailed tests... I'm just not convinced. What's the point? | Often a significance test is performed for the null hypothesis against an alternative hypothesis. This is where one-tailed versus two-tailed makes a difference.
For p-values this (two-sided or one-sided) does not matter! The point is that you select a criterion that is met only a fraction $\alpha$ of the time when the null hypothesis is true. This rejection region is either two small pieces, one in each tail, or one big piece in a single tail, or something else.
The Type I error rate is not different for one- or two-sided tests.
On the other hand, it matters for power.
If your alternative hypothesis is asymmetric, then you would want to concentrate the criterion for rejecting the null hypothesis on that tail, so that when the alternative hypothesis is true you are less likely to fail to reject ("accept") the null hypothesis.
If your alternative hypothesis is symmetric (you don't care to place more or less power on one specific side) and an effect in either direction is equally expected (or simply unknown), then it is more powerful to use a two-sided test (you are not losing 50% power for the tail that you are not testing, where you would make many Type II errors).
The Type II error rate does differ between one- and two-sided tests, and it depends on the alternative hypothesis as well.
It becomes a bit more like a Bayesian concept once we start involving preconceptions about whether we expect an effect to fall on one side or on both sides, and once we wish to use a test (to see whether we can falsify a null hypothesis) to 'confirm', or make more probable, something like an effect.
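As a hedged illustration (not part of the original answer; the sample size, effect size, and simulation settings are assumptions of mine), a small R simulation makes both points: under the null both tests hold the nominal Type I error rate, while under a one-sided alternative the one-sided test has more power.
# Sketch only: compare one- and two-sided one-sample t-tests by simulation.
set.seed(1)
alpha <- 0.05
rejection_rates <- function(mu, n = 20, reps = 5000) {
  p2 <- p1 <- numeric(reps)
  for (i in seq_len(reps)) {
    x <- rnorm(n, mean = mu)                       # data with true mean mu
    p2[i] <- t.test(x, alternative = "two.sided")$p.value
    p1[i] <- t.test(x, alternative = "greater")$p.value
  }
  c(two.sided = mean(p2 < alpha), one.sided = mean(p1 < alpha))
}
rejection_rates(mu = 0)    # both close to alpha: same Type I error rate
rejection_rates(mu = 0.5)  # one-sided rejects more often: higher power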
3,802 | Two-tailed tests... I'm just not convinced. What's the point? | So one more answer attempt:
I guess whether to use a one-tailed or a two-tailed test depends completely on the alternative hypothesis.
Consider the following example of testing mean in a t-test:
$H_0: \mu=0$
$H_a: \mu \neq 0$
Now if you observe a very negative sample mean or a very positive sample mean, your null hypothesis is unlikely to be true.
On the other hand, you will be willing to accept your null hypothesis if your sample mean is close to $0$, whether negative or positive. Now you need to choose the interval such that, if your sample mean falls in it, you don't reject your null hypothesis. Obviously you'd choose an interval that has both a negative and a positive side around $0$. So you choose the two-sided test.
But what if you don't want to test $\mu=0$, but rather $\mu\geq 0$? Intuitively, what we want here is that if the sample mean comes out very negative, then we can definitely reject our null. So we would want to reject the null only for far negative values of the sample mean.
But wait! If that's my null hypothesis, how would I set up my null distribution? The null distribution of the sample mean is known for some assumed value of the population parameter (here $0$). But under the current null it can take many values.
Let's say we could run infinitely many null hypotheses, each assuming a different positive value of $\mu$. But think of this: for our first hypothesis, $H_0: \mu=0$, if we only reject the null on observing a very far negative sample mean, then every further hypothesis with $H_0: \mu>0$ would also be rejected, because for those the sample mean is even farther from the hypothesized population parameter. So all we really need to do is test one hypothesis, but one-tailed.
So your solution becomes:
$H_0: \mu=0$
$H_a: \mu <0$
The best example is the Dickey-Fuller test for stationarity.
Hope this helps. (I wanted to include diagrams, but I'm replying from mobile.)
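A hedged numerical sketch (not part of the original answer; the sample size, effect sizes, and simulation settings are my own assumptions) of the point about the composite null $\mu \ge 0$: if we reject when the one-sided p-value (alternative "less") falls below $\alpha$, the boundary case $\mu = 0$ is the worst case, and for any $\mu > 0$ the test rejects even less often.
# Sketch only: rejection rate of the one-sided test under H0: mu >= 0.
set.seed(2)
reject_rate <- function(mu, n = 30, reps = 5000, alpha = 0.05) {
  mean(replicate(reps,
    t.test(rnorm(n, mean = mu), alternative = "less")$p.value < alpha))
}
reject_rate(0)    # roughly alpha: the boundary of the composite null
reject_rate(0.3)  # well below alpha: conservative for any mu > 0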
3,803 | Two-tailed tests... I'm just not convinced. What's the point? | It's easy to see where the confusion comes from if you liberate yourself from a single dimension, while remembering that the particular hypothesis you're testing concerns the significance of a difference from 0. Whether your estimate $x$ is different from zero or not is your question here.
Ask yourself what the test would be if your variable weren't a scalar but a vector. Imagine that you are looking at a multidimensional variable $(x_1,x_2,\dots,x_n)$, i.e. a vector $\mathbf x$. You want to know whether it is far from the origin or not. How would you proceed?
I have no doubt that some sort of norm, such as the Euclidean one, would be your first thought: $r=||\mathbf x||$. Next, you'd want to assess whether it is so far from the origin that you have little doubt it is not at the origin, i.e. whether it is far enough away to reject $r=0$.
Now, let's get back to one dimension and see what the norm is: $r=|x|$ — it's just the absolute value. Hence, for a symmetric distribution such as the Gaussian (normal), you'll end up considering quantities such as the $\alpha/2$ significance in each tail.
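A brief hedged illustration (not in the original answer; the test statistic value is arbitrary and the standard normal reference is an assumption): rejecting when the "norm" $|z|$ is large is exactly the two-tailed rule with $\alpha/2$ in each tail.
# Sketch only: two-sided rejection expressed through the norm |z|.
alpha <- 0.05
z <- 1.8                        # some observed standardized statistic
crit <- qnorm(1 - alpha / 2)    # two-sided critical value (about 1.96)
abs(z) > crit                   # reject H0: is the "norm" too large?
2 * pnorm(-abs(z))              # the corresponding two-sided p-value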
3,804 | Are residuals "predicted minus actual" or "actual minus predicted" | The residuals are always actual minus predicted. The models are:
$$y=f(x;\beta)+\varepsilon$$
Hence, the residuals $\hat\varepsilon$, which are estimates of errors $\varepsilon$:
$$\hat\varepsilon=y-\hat y\\\hat y=f(x;\hat\beta)$$
I agree with @whuber that the sign doesn't really matter mathematically. It's just good to have a convention though. And the current convention is as in my answer.
Since OP challenged my authority on this subject, I'm adding some references:
"(2008) Residual. In: The Concise Encyclopedia of Statistics. Springer, New York, NY, which gives the same definition.
Fisher's "Statistical Methods for Research Workers" 1925, has the same definition too, see Section 26 in this 1934 version. Despite unassuming title, this is an important work in historical context | Are residuals "predicted minus actual" or "actual minus predicted" | The residuals are always actual minus predicted. The models are:
3,805 | Are residuals "predicted minus actual" or "actual minus predicted" | I just came across a compelling reason for one answer to be the correct one.
Regression (and most statistical models of any sort) concerns how the conditional distributions of a response depend on explanatory variables. An important element of characterizing those distributions is a measure usually called "skewness" (even though various different formulas have been offered): it refers to the most basic way in which the distributional shape departs from symmetry. Here is an example of bivariate data (a response $y$ and a single explanatory variable $x$) with positively skewed conditional responses:
The blue curve is the ordinary least squares fit. It plots the fitted values.
When we compute the difference between a response $y$ and its fitted value $\hat y$, we shift the location of the conditional distribution, but do not otherwise change its shape. In particular, its skewness will be unaltered.
This is a standard diagnostic plot showing how the shifted conditional distributions vary with the predicted values. Geometrically, it's almost the same as "untilting" the previous scatterplot.
If instead we compute the difference in the other order, $\hat y - y,$ this will shift and then reverse the shape of the conditional distribution. Its skewness will be the negative of the original conditional distribution.
This shows the same quantities as the previous figure, but the residuals have been computed by subtracting the data from their fits--which of course is the same as negating the previous residuals.
Although both the preceding figures are mathematically equivalent in every respect--one is converted into the other simply by flipping the points across the blue horizon--one of them bears a much more direct visual relationship to the original plot.
Consequently, if our goal is to relate distributional characteristics of the residuals to the characteristics of the original data--and that almost always is the case--then it is better simply to shift the responses rather than to shift and reverse them.
The right answer is clear: compute your residuals as $y - \hat y.$
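A hedged numerical sketch (simulated data; the original answer's figures are not reproduced here): with positively skewed conditional responses, $y-\hat y$ keeps the sign of the skewness while $\hat y - y$ reverses it.
# Sketch only: sign of residual skewness under the two conventions.
set.seed(4)
x <- runif(300)
y <- 2 + 3 * x + (rexp(300) - 1)       # positively skewed errors
fit <- lm(y ~ x)
skew <- function(e) mean((e - mean(e))^3) / sd(e)^3
skew(y - fitted(fit))                  # positive, like the data
skew(fitted(fit) - y)                  # the negative of the above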
3,806 | Are residuals "predicted minus actual" or "actual minus predicted" | Green & Tashman (2008, Foresight) report on a small survey on the analogous question for forecast errors. I'll summarize arguments for either convention as reported by them:
Arguments for "actual-predicted"
The statistical convention is $y=\hat{y}+\epsilon$.
At least one respondent from seismology wrote that this is also the convention for modeling seismic wave traveling time. "When actual seismic wave arrives before the time predicted by model we have negative travel time residual (error)." (sic)
This convention makes sense if we interpret $\hat{y}$ as a budget, plan or target. Here, a positive error means that the budget/plan/target has been exceeded.
This convention makes the formulas for exponential smoothing somewhat more intuitive: we can use a $+$ sign, whereas with the other convention we would need a $-$ sign (see the short sketch after this answer).
Arguments for "predicted-actual"
If $y=\hat{y}-\epsilon$, then a positive error indicates that the forecast was too high. This is more intuitive than the converse.
Relatedly, if a positive bias is defined as positive expected errors, it would mean that forecasts are on average too high with this convention.
And this is pretty much the only argument given for this convention. Then again, given the misunderstandings the other convention can lead to (positive errors = forecast too low), it's a strong one.
In the end, I would argue that it comes down to who you need to communicate your residuals to. And given that there are certainly two sides to this discussion, it makes sense to explicitly note which convention you follow.
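Picking up the exponential-smoothing point above, here is a minimal hedged sketch (the series, smoothing constant, and initialization are my own assumptions, not from the surveyed paper): with errors defined as actual minus predicted, the simple exponential smoothing update uses a plus sign.
# Sketch only: simple exponential smoothing with e_t = actual - predicted.
ses_fitted <- function(y, alpha, init = y[1]) {
  yhat <- numeric(length(y))
  yhat[1] <- init
  for (t in seq_len(length(y) - 1)) {
    e <- y[t] - yhat[t]              # actual minus predicted
    yhat[t + 1] <- yhat[t] + alpha * e
  }
  yhat
}
ses_fitted(c(10, 12, 11, 13, 15), alpha = 0.3)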
Arguments for "actual-pred | Are residuals "predicted minus actual" or "actual minus predicted"
Green & Tashman (2008, Foresight) report on a small survey on the analogous question for forecast errors. I'll summarize arguments for either convention as reported by them:
Arguments for "actual-predicted"
The statistical convention is $y=\hat{y}+\epsilon$.
At least one respondent from seismology wrote that this is also the convention for modeling seismic wave traveling time. "When actual seismic wave arrives before the time predicted by model we have negative travel time residual (error)." (sic)
This convention makes sense if we interpret $\hat{y}$ as a budget, plan or target. Here, a positive error means that the budget/plan/target has been exceeded.
This convention makes the formulas for exponential smoothing somewhat more intuitive. We can use a $+$ sign. With the other convention, we would need to use a $-$ sign.
Arguments for "predicted-actual"
If $y=\hat{y}-\epsilon$, then a positive error indicates that the forecast was too high. This is more intuitive than the converse.
Relatedly, if a positive bias is defined as positive expected errors, it would mean that forecasts are on average too high with this convention.
And this is pretty much the only argument given for this convention. Then again, given the misunderstandings the other convention can lead to (positive errors = forecast too low), it's a strong one.
In the end, I would argue that it comes down to who you need to communicate your residuals to. And given that there are certainly two sides to this discussion, it makes sense to explicitly note which convention you follow. | Are residuals "predicted minus actual" or "actual minus predicted"
Green & Tashman (2008, Foresight) report on a small survey on the analogous question for forecast errors. I'll summarize arguments for either convention as reported by them:
Arguments for "actual-pred |
3,807 | Are residuals "predicted minus actual" or "actual minus predicted" | Different terminology suggests different conventions. The term "residual" implies that it's what's left over after all the explanatory variables have been taken into account, i.e. actual-predicted. "Prediction error" implies that it's how much the prediction deviates from actual, i.e. prediction-actual.
One's conception of modeling also influences which convention is more natural. Suppose you have a dataframe with one or more feature columns $X = x_1,x_2...$, response column $y$, and prediction column $\hat y$.
One conception is that $y$ is the "real" value, and $\hat y$ is simply a transformed version of $X$. In this conception, $y$ and $\hat y$ are both random variables ($\hat y$ being a derived one). Although $y$ is the one we're actually interested in, $\hat y$ is the one we can observe, so $\hat y$ is used as a proxy for $y$. The "error" is how much $\hat y$ deviates from this "true" value $y$. This suggests defining the error as following the direction of this deviation, i.e. $e = \hat y -y$.
However, there's another conception that thinks of $\hat y$ as the "real" value. That is, y depends on $X$ through some deterministic process; a particular state of $X$ gives rise to a particular deterministic value. This value is then perturbed by some random process. So we have $x \rightarrow f(X)\rightarrow f(X)+error()$. In this conception, $\hat y$ is the "real" value of y. For example, suppose you're trying to calculate the value of g, the acceleration due to gravity. You drop a bunch of objects, you measure how far they fell ($X$) and how long it took them to fall ($y$). You then analyze the data with the model y = $\sqrt{\frac{2x}{g}}$. You find that there's no value of g that makes this equation work exactly. So you then model this as
$\hat y = \sqrt{\frac{2x}{g}}$, $y = \hat y + error$.
That is, you take the variable y and consider there to be a "real" value $\hat y$ that is actually being generated by physical laws, and then some other value $y$ that is $\hat y$ modified by something independent of $X$, such as measurement errors or wind gusts or whatever.
In this conception, you're taking $y = \sqrt{\frac{2x}{g}}$ to be what reality "should" be doing, and if you get answers that don't agree with that, well, reality got the wrong answer. Now of course this can seem rather silly and arrogant when put this way, but there are good reasons for proceeding with this conception, and it can be useful to think this way. And ultimately, it's just a model; statisticians don't necessarily think this is actually how the world works (although there probably are some who do). And given the equation $y = \hat y + error$, it follows that errors are actual minus predicted.
Also, note that if you don't like the "reality got it wrong" aspect of the second conception, you can view it as being "We've identified some process f through which y depends on $X$, but we're not getting exactly the right answers, so there must be some other process g that's also influencing y." In this variation,
$\hat y = f(X)$, $y = \hat y + g(?)$, $g = y - \hat y$.
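A hedged sketch of the gravity example (simulated measurements; the true value 9.81, the noise level, and the use of nls() are my assumptions, not the answer's): fit $y=\sqrt{2x/g}$ and take residuals as actual minus predicted.
# Sketch only: estimate g from simulated fall times and form residuals.
set.seed(5)
x <- runif(40, 1, 20)                            # drop heights (m)
y <- sqrt(2 * x / 9.81) + rnorm(40, sd = 0.05)   # measured times (s), with noise
fit <- nls(y ~ sqrt(2 * x / g), start = list(g = 5))
coef(fit)                                        # estimate of g, near 9.81
head(y - predict(fit))                           # residuals: actual minus predicted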
3,808 | Are residuals "predicted minus actual" or "actual minus predicted" | The answer by @Aksakal is completely correct, but I'll just add one additional element that I find helps me (and my students).
The motto: Statistics is "perfect". As in, I can always provide the perfect prediction (I know some eyebrows are being raised right about now... so hear me out).
I'm going to predict my observed values $y_i$. With some form of model, I'll generate a predicted value for each observed value, which I'll call $\hat{y}_i$. The only problem is that usually (always)
$$y_i \ne \hat{y}_i$$
So, we'll add a new variable $\epsilon_i$ so that equality holds...but it seems to me the better option is to add it to our "predicted" ("made-up") value instead of adding it to the actual value (as adding or subtracting from an actual value may not be physically possible...see comments below):
$$y_i = \hat{y}_i + \epsilon_i$$
Now, we have "perfect" prediction...our "final" value matches our observed value.
Obviously, this glosses over a tremendous amount of the statistical theory underlying what is going on... but it stresses the idea that the observed value is the sum of two distinct parts (a systematic part and a random part). If you remember it in this form, you will always have that the residual, $\epsilon_i$, is the observed minus the predicted.
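A hedged check in R (simulated data of my own): the observed value decomposes exactly into the fitted ("systematic") part plus the residual ("random") part, which is the "perfect prediction" identity above.
# Sketch only: y_i = yhat_i + epsilon_i holds exactly for lm().
set.seed(6)
x <- rnorm(30)
y <- 5 - x + rnorm(30)
fit <- lm(y ~ x)
all.equal(y, unname(fitted(fit) + residuals(fit)))  # TRUE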
The motto: Statistics is "perfect". As in, I can always provide the per | Are residuals "predicted minus actual" or "actual minus predicted"
The answer by @Aksakal is completely correct, but I'll just add one additional element that I find helps me (and my students).
The motto: Statistics is "perfect". As in, I can always provide the perfect prediction (I know some eye-brows are raising right about now...so hear me out).
I'm going to predict my observed values $y_i$. With some form of model, I'll generate a predicted value for each observed value, I'll call this $\hat{y}_i$. The only problem, is that usually (always)
$$y_i \ne \hat{y}_i$$
So, we'll add a new variable $\epsilon_i$ so that equality holds...but it seems to me the better option is to add it to our "predicted" ("made-up") value instead of adding it to the actual value (as adding or subtracting from an actual value may not be physically possible...see comments below):
$$y_i = \hat{y}_i + \epsilon_i$$
Now, we have "perfect" prediction...our "final" value matches our observed value.
Obviously, this glosses over a tremendous amount of the statistical theory underlying what is going on...but it stress the idea that the observed value is the sum of two distinct parts (a systematic part and a random part). If you remember it in this form, you will always have that the residual, $\epsilon_i$, is the observed minus the predicted. | Are residuals "predicted minus actual" or "actual minus predicted"
The answer by @Aksakal is completely correct, but I'll just add one additional element that I find helps me (and my students).
The motto: Statistics is "perfect". As in, I can always provide the per |
3,809 | Are residuals "predicted minus actual" or "actual minus predicted" | $\newcommand{\e}{\varepsilon}$I'm going to use the particular case of least squares linear regression. If we take our model to be $Y = X\beta + \e$ then as @Aksakal points out we naturally end up with $\e = Y - X\beta$ so $\hat \e = Y - \hat Y$. If instead we take $Y = X\beta - \e$ as our model, which we are certainly free to do, then we get $\e = X\beta - Y \implies \hat \e = \hat Y - Y$. At this point there's really no reason to prefer one over the other aside from a vague preference for $1$ over $-1$.
But if $\hat \e = Y - \hat Y$ then we obtain our residuals via $(I - P_X)Y$, where $I - P_X$ is an idempotent matrix projecting into the space orthogonal to the column space of the design matrix $X$. If we instead used $Y = X\beta - \e$ then we end up with $\hat \e = (P_X - I)Y$. But $P_X - I$ is not itself idempotent as $(P_X - I)^2 = P_X^2 - 2P_X + I = -(P_X - I)$. So really $P_X - I$ is the negative of a projection matrix, namely $I - P_X$. So I view this as undoing the negative introduced by using $Y = X\beta - \e$, so for the sake of parsimony it's better to just use $Y = X\beta + \e$ which in turn gives us $Y - \hat Y$ as the residuals.
As mentioned elsewhere, it's not like anything breaks if we use $\hat Y - Y$, but we end up with this double-negative situation, which I think is a good enough reason to just use $Y - \hat Y$.
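A hedged numerical check (random design matrix of my own choosing) of the projection algebra above: $I-P_X$ is idempotent, while $P_X-I$ squares to its own negative.
# Sketch only: verify (I - P)^2 = I - P and (P - I)^2 = -(P - I) numerically.
set.seed(7)
X <- cbind(1, rnorm(10))                 # small design matrix with intercept
P <- X %*% solve(t(X) %*% X) %*% t(X)    # hat matrix P_X
I10 <- diag(10)
all.equal(unname((I10 - P) %*% (I10 - P)), unname(I10 - P))     # TRUE
all.equal(unname((P - I10) %*% (P - I10)), unname(-(P - I10)))  # TRUE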
3,810 | Are residuals "predicted minus actual" or "actual minus predicted" | For practical purposes it's better to calculate residuals as "actual minus predicted": When trying to guess what the physical cause of an outlier data point might be, it gives a better intuitive sense to have an unusually large positive observed value have a large positive residual, and this is precisely what you get when you calculate the residual in the usual way as "actual minus predicted". See also Stephan Kolassa's summary above of four arguments for "actual minus predicted".
The textbooks from which I learnt always taught that residuals were (Observed - Expected), in line with Fisher historically as noted by @Aksakal, and contrary to Gauss as noted by @NickCox. Philosophically this corresponds to assuming our model is perfect and that observed deviations from it are random errors. Although the reverse (predicted minus actual) is mathematically equivalent, in philosophical terms it would amount to assuming the data are perfect and the model inadequate... which is true but not helpful, and might lead to boring discussions over the scientific method, statistical testing and Bayesian thinking.
In terms of popularity, references to "Observed minus Expected" residuals are listed more than twelve times as often as the reverse in a Google search today, so O-E ("actual minus predicted") is clearly still the "standard".
3,811 | Who created the first standard normal table? | Laplace was the first to recognize the need for tabulation, coming up with the approximation:
$$\begin{align}G(x)&=\int_x^\infty e^{-t^2}dt\\[2ex]&=\small \frac{e^{-x^2}}{2}\left(\frac1 x- \frac{1}{2x^3}+\frac{1\cdot3}{4x^5} -\frac{1\cdot 3\cdot5}{8x^7}+\frac{1\cdot 3\cdot 5\cdot 7}{16x^9}-\cdots\right)\tag{1}
\end{align}$$
The first modern table of the normal distribution was later built by the French astronomer Christian Kramp in Analyse des Réfractions Astronomiques et Terrestres (Par le citoyen Kramp, Professeur de Chymie et de Physique expérimentale à l'école centrale du Département de la Roer, 1799). From Tables Related to the Normal Distribution: A Short History Author(s): Herbert A. David Source: The American Statistician, Vol. 59, No. 4 (Nov., 2005), pp. 309-311:
Ambitiously, Kramp gave eight-decimal ($8$ D) tables up to $x = 1.24,$ $9$ D to $1.50,$ $10$ D to $1.99,$ and $11$ D to $3.00$ together with the
differences needed for interpolation. Writing down the first six derivatives of $G(x),$ he simply uses a Taylor series expansion of $G(x + h)$ about $G(x),$ with $h = .01,$ up to the term in $h^3.$ This enables him to proceed step by step from $x = 0$ to $x = h, 2h, 3h,\dots,$ upon multiplying $h\,e^{-x^2}$ by $$1-hx+ \frac 1 3 \left(2x^2 - 1\right)h^2 - \frac 1 6 \left(2x^3 - 3x\right)h^3.$$
Thus, at $x = 0$ this product reduces to
$$.01 \left(1 - \frac 1 3 \times .0001 \right) = .00999967,$$
so that at $G(.01) = .88622692 - .00999967 = .87622725.$
$$\vdots$$
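As a hedged aside (not part of the original answer; the grid and step size mirror the quoted description, but the code itself is my own sketch), Kramp's stepwise Taylor recursion is easy to replay in R:
# Sketch only: replay Kramp's Taylor-step recursion for G(x) = integral from x to Inf of exp(-t^2) dt.
h <- 0.01
x <- seq(0, 3, by = h)
G <- numeric(length(x))
G[1] <- sqrt(pi) / 2                                  # G(0) = 0.88622692...
for (i in seq_along(x)[-1]) {
  xi <- x[i - 1]
  step <- h * exp(-xi^2) *
    (1 - h * xi + (2 * xi^2 - 1) * h^2 / 3 - (2 * xi^3 - 3 * xi) * h^3 / 6)
  G[i] <- G[i - 1] - step
}
G[2]     # G(0.01): about 0.87622725, matching the quoted first step
G[298]   # G(2.97): compare with the pnorm() check further down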
But... how accurate could he be? OK, let's take $2.97$ as an example:
Amazing!
Let's move on to the modern (normalized) expression of the Gaussian pdf:
The pdf of $\mathscr N(0,1)$ is:
$$f_X(X=x)=\large \frac{1}{\sqrt{2\pi}}\,e^{-\frac {x^2}{2}}= \frac{1}{\sqrt{2\pi}}\,e^{-\left(\frac {x}{\sqrt{2}}\right)^2}= \frac{1}{\sqrt{2\pi}}\,e^{-\left(z\right)^2}$$
where $z = \frac{x}{\sqrt{2}}$. And hence, $x = z \times \sqrt{2}$.
So let's go to R, and look up $P_Z(Z>z=2.97)$... OK, not so fast. First we have to remember that when there is a constant multiplying the argument of an exponential function $e^{ax}$, the integral will be divided by that constant: $1/a$. Since we are aiming at replicating the results in the old tables, we are actually multiplying the value of $x$ by $\sqrt{2}$, which will have to appear in the denominator.
Further, Christian Kramp did not normalize, so we have to correct the results given by R accordingly, multiplying by $\sqrt{2\pi}$. The final correction will look like this:
$$\frac{\sqrt{2\pi}}{\sqrt{2}}\,\mathbb P(X>x)=\sqrt{\pi}\,\,\mathbb P(X>x)$$
In the case above, $z=2.97$ and $x=z\times \sqrt{2}=4.200214$. Now let's go to R:
x = 2.97 * sqrt(2)  # z = 2.97, so x = 4.200214
(R = sqrt(pi) * pnorm(x, lower.tail = F))
[1] 2.363235e-05
Fantastic!
Let's go to the top of the table for fun, say $0.06$...
z = 0.06
(x = z * sqrt(2))
(R = sqrt(pi) * pnorm(x, lower.tail = F))
[1] 0.8262988
What says Kramp? $0.82629882$.
So close...
The thing is... how close, exactly? After all the up-votes received, I couldn't leave the actual answer hanging. The problem was that all the optical character recognition (OCR) applications I tried were incredibly off - not surprising if you have taken a look at the original. So, I learned to appreciate Christian Kramp for the tenacity of his work as I personally typed each digit in the first column of his Table Première.
After some valuable help from @Glen_b, now it may very well be accurate, and it's ready to copy and paste on the R console in this GitHub link.
Here is an analysis of the accuracy of his calculations. Brace yourself...
Absolute cumulative difference between [R] values and Kramp's approximation:
$0.000001200764$ - in the course of $301$ calculations, he managed to accumulate an error of approximately $1$ millionth!
Mean absolute error (MAE), or mean(abs(difference)) with difference = R - kramp:
$0.000000003989249$ - he managed to make an outrageously ridiculous $3$ one-billionth error on average!
On the entry where his calculations diverged most from [R], the first differing decimal place was in the eighth position (hundred-millionth). On average (median) his first "mistake" was in the tenth decimal digit (ten-billionth!). And although he didn't fully agree with [R] in any instance, the closest entry doesn't diverge until the thirteenth decimal digit.
Mean relative difference or mean(abs(R - kramp)) / mean(R) (same as all.equal(R[,2], kramp[,2], tolerance = 0)):
$0.00000002380406$
Root mean squared error (RMSE) or deviation (gives more weight to large mistakes), calculated as sqrt(mean(difference^2)):
$0.000000007283493$
If you find a picture or portrait of Christian Kramp, please edit this post and place it here.
3,812 | Who created the first standard normal table? | According to H.A. David [1] Laplace recognized the need for tables of the normal distribution "as early as 1783" and the first normal table was produced by Kramp in 1799.
Laplace suggested two series approximations, one for the integral from $0$ to $x$ of $e^{-t^2}$ (which is proportional to a normal distribution with variance $\frac{1}{2}$) and one for the upper tail.
However, Kramp didn't use these series of Laplace, since there was a gap in the intervals for which they could be usefully applied.
In effect he starts with the integral for the tail area from 0 and then applies a Taylor expansion about the last calculated integral -- that is, as he calculates new values in the table he shifts the $x$ of his Taylor expansion of $G(x+h)$ (where $G$ is the integral giving the upper tail area).
To be specific, quoting the relevant couple of sentences:
he simply uses a Taylor series expansion of $G(x + h)$ about $G(x)$, with $h = .01$, up to the term in $h^3$. This enables him to proceed step by step from $x = 0$ to $x = h, 2h, 3h,...$, upon multiplying $he^{-x^2}$ by $$1-hx+ \frac13(2x^2 - 1)h^2 - \frac16(2x^3 - 3x)h^3.$$ Thus, at $x = 0$ this product reduces to $$.01 (1 - \frac13 \times .0001 ) = .00999967,\qquad\qquad (4)$$ so that at $G(.01) = .88622692 - .00999967 = .87622725$. The next term on the left of (4) can be shown to be $10^{-9}$, so that its omission is justified.
David indicates that the tables were widely used.
So rather than thousands of Riemann sums it was hundreds of Taylor expansions.
On a smaller note, in a pinch (stuck with only a calculator and a few remembered values from the normal table) I have quite successfully applied Simpson's rule (and related rules for numerical integration) to get a good approximation at other values; it's not all that tedious to produce an abbreviated table* to a few figures of accuracy. [To produce tables of the scale and accuracy of Kramp's would be a fairly large task, though, even using a cleverer method, as he did.]
* By an abbreviated table, I mean one where you can basically get away with interpolation between tabulated values without losing too much accuracy. If you only want, say, around 3-figure accuracy, you really don't need to compute all that many values. I have effectively used polynomial interpolation (more precisely, applied finite difference techniques), which allows for a table with fewer values than linear interpolation -- at the cost of somewhat more effort at the interpolation step -- and have also done interpolation with a logit transformation, which makes linear interpolation considerably more effective, but it is only of much use if you have a good calculator.
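Picking up the Simpson's-rule remark above, a minimal hedged sketch (the integration cutoff, number of subintervals, and example quantile are arbitrary choices of mine):
# Sketch only: approximate a normal upper-tail area with Simpson's rule.
simpson <- function(f, a, b, n = 40) {   # n must be even
  h <- (b - a) / n
  xs <- a + h * (0:n)
  h / 3 * sum(f(xs) * c(1, rep(c(4, 2), length.out = n - 1), 1))
}
simpson(dnorm, 1.96, 8)                  # P(Z > 1.96), integrating out to a far cutoff
pnorm(1.96, lower.tail = FALSE)          # reference: about 0.0249979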
[1] Herbert A. David (2005),
"Tables Related to the Normal Distribution: A Short History"
The American Statistician, Vol. 59, No. 4 (Nov.), pp. 309-311
[2] Kramp (1799),
Analyse des Réfractions Astronomiques et Terrestres,
Leipzig: Schwikkert
3,813 | Who created the first standard normal table? | Interesting issue! I think the first idea did not come through the integration of a complex formula; rather, it was the result of applying asymptotics in combinatorics. A pen-and-paper method may take several weeks; not so tough for Carl Gauss compared to the calculation of pi by his predecessors. I think Gauss's idea was courageous; the calculation was easy for him.
Example of creating a standard z table from scratch:
1. Take a population of n (say n is 20) numbers and list all possible samples of size r (say r is 5) from it.
2. Calculate the sample means. You get nCr sample means (here, 20C5 = 15504 means).
3. Their mean is the same as the population mean. Find the stdev of the sample means.
4. Find z-scores of the sample means using that population mean and the stdev of the sample means.
5. Sort the z-scores in ascending order and find the probability of z lying in a given range among your nCr z values.
6. Compare the values with normal tables. A smaller n is good for hand calculations; a larger n will produce closer approximations to the normal table values.
The following code is in r:
n <- 20
r <- 5
p <- sample(1:40, n) # Don't be misled!! Here, 'sample' is an R function
                     # used to produce n random numbers between 1 and 40.
                     # You can take any 20 numbers, possibly all different.
c <- combn(p, r) # all the nCr samples listed
cmean <- array(0)
for(i in 1:choose(n,r)) {
cmean[i] <- mean(c[,i])
}
z <- array(0)
for(i in 1:choose(n,r)) {
z[i] <- (cmean[i]-mean(c))/sd(cmean)
}
ascend <- sort(z, decreasing = FALSE)
# Probability of z falling between 0 and a positive value q; compare with a known
# table. Vary q below between 0 and 3.5 to compare.
q <- 1
probability <- (length(ascend[ascend<q])-length(ascend[ascend<0]))/choose(n,r)
probability # For example, if you use n=30 and r=5, then for q=1 you will get a
            # probability of about 0.3413, and for q=2 about 0.4773
3,814 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | It shouldn't matter that much since the test statistic will always be the difference in means (or something equivalent). Small differences can come from the implementation of Monte-Carlo methods. Trying the three packages with your data with a one-sided test for two independent variables:
DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
library(coin) # for oneway_test(), pvalue()
pvalue(oneway_test(DV ~ IV, alternative="greater",
distribution=approximate(B=9999)))
[1] 0.00330033
library(perm) # for permTS()
permTS(DV ~ IV, alternative="greater", method="exact.mc",
control=permControl(nmc=10^4-1))$p.value
[1] 0.003
library(exactRankTests) # for perm.test()
perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value
[1] 0.003171822
To check the exact p-value with a manual calculation of all permutations, I'll restrict the data to the first 9 values.
x1 <- x1[1:9]
y1 <- y1[1:9]
DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
pvalue(oneway_test(DV ~ IV, alternative="greater", distribution="exact"))
[1] 0.0945907
permTS(DV ~ IV, alternative="greater", exact=TRUE)$p.value
[1] 0.0945907
# perm.test() gives different result due to rounding of input values
perm.test(DV ~ IV, paired=FALSE, alternative="greater", exact=TRUE)$p.value
[1] 0.1029412
# manual exact permutation test
idx <- seq(along=DV) # indices to permute
idxA <- combn(idx, length(x1)) # all possibilities for different groups
# function to calculate difference in group means given index vector for group A
getDiffM <- function(x) { mean(DV[x]) - mean(DV[!(idx %in% x)]) }
resDM <- apply(idxA, 2, getDiffM) # difference in means for all permutations
diffM <- mean(x1) - mean(y1) # empirical difference in group means
# p-value: proportion of group means at least as extreme as observed one
(pVal <- sum(resDM >= diffM) / length(resDM))
[1] 0.0945907
coin and exactRankTests are both from the same author, but coin seems to be more general and extensive - also in terms of documentation. exactRankTests is not actively developed anymore. I'd therefore choose coin (also because of informative functions like support()), unless you don't like to deal with S4 objects.
EDIT: for two dependent variables, the syntax is
id <- factor(rep(1:length(x1), 2)) # factor for participant
pvalue(oneway_test(DV ~ IV | id, alternative="greater",
distribution=approximate(B=9999)))
[1] 0.00810081
3,815 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | A few comments are, I believe, in order.
1) I would encourage you to try multiple visual displays of your data, because they can capture things that are lost by (graphs like) histograms, and I also strongly recommend that you plot on side-by-side axes. In this case, I do not believe the histograms do a very good job of communicating the salient features of your data. For example, take a look at side-by-side boxplots:
boxplot(x1, y1, names = c("x1", "y1"))
Or even side-by-side stripcharts:
stripchart(c(x1,y1) ~ rep(1:2, each = 20), method = "jitter", group.names = c("x1","y1"), xlab = "")
Look at the centers, spreads, and shapes of these! About three-quarters of the $x1$ data fall well above the median of the $y1$ data. The spread of $x1$ is tiny, while the spread of $y1$ is huge. Both $x1$ and $y1$ are highly left-skewed, but in different ways. For example, $y1$ has five (!) repeated values of zero.
2) You didn't explain in much detail where your data come from, nor how they were measured, but this information is very important when it comes time to select a statistical procedure. Are your two samples above independent? Are there any reasons to believe that the marginal distributions of the two samples should be the same (except for a difference in location, for example)? What were the considerations prior to the study that led you to look for evidence of a difference between the two groups?
3) The t-test is not appropriate for these data because the marginal distributions are markedly non-normal, with extreme values in both samples. If you like, you could appeal to the CLT (due to your moderately-sized sample) to use a $z$-test (which for large samples would behave much like a $t$-test), but given the skewness (in both variables) of your data I would not judge such an appeal very convincing. Sure, you can use it anyway to calculate a $p$-value, but what does that do for you? If the assumptions aren't satisfied then a $p$-value is just a statistic; it doesn't tell you what you (presumably) want to know: whether there is evidence that the two samples come from different distributions.
4) A permutation test would also be inappropriate for these data. The single and often-overlooked assumption for permutation tests is that the two samples are exchangeable under the null hypothesis. That would mean that they have identical marginal distributions (under the null). But you are in trouble, because the graphs suggest that the distributions differ both in location and scale (and shape, too). So, you can't (validly) test for a difference in location because the scales are different, and you can't (validly) test for a difference in scale because the locations are different. Oops. Again, you can do the test anyway and get a $p$-value, but so what? What have you really accomplished?
5) In my opinion, these data are a perfect (?) example that a well chosen picture is worth 1000 hypothesis tests. We don't need statistics to tell the difference between a pencil and a barn. The appropriate statement in my view for these data would be "These data exhibit marked differences with respect to location, scale, and shape." You could follow up with (robust) descriptive statistics for each of those to quantify the differences, and explain what the differences mean in the context of your original study.
6) Your reviewer is probably (and sadly) going to insist on some sort of $p$-value as a precondition to publication. Sigh! If it were me, given the differences with respect to everything I would probably use a nonparametric Kolmogorov-Smirnov test to spit out a $p$-value that demonstrates that the distributions are different, and then proceed with descriptive statistics as above. You would need to add some noise to the two samples to get rid of ties. (And of course, this all assumes that your samples are independent which you didn't state explicitly.)
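For what it's worth, a minimal sketch of the suggestion in point 6 (the jitter amount and the seed are arbitrary choices of mine, and x1/y1 are the score vectors given elsewhere in this thread):
# Kolmogorov-Smirnov test after breaking the ties (e.g. the repeated zeros) with a little noise
set.seed(1)
ks.test(jitter(x1, amount = 0.01), jitter(y1, amount = 0.01))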
This answer is a lot longer than I originally intended it to be. Sorry about that.
3,816 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | My comments are not about implementation of the permutation test but about the more general issues raised by these data and the discussion of it, in particular the post by G. Jay Kerns.
The two distributions actually look quite similar to me EXCEPT for the group of 0s in Y1, which are much different from the other observations in that sample (next smallest is about 50 on the 0-100 scale) as well as all those in X1. I would first investigate whether there was anything different about those observations.
Second, assuming those 0s do belong in the analysis, saying the permutation test isn't valid because the distributions appear to differ begs the question. If the null were true (distributions are identical), could you (with reasonable probability) get distributions looking as different as these two? Answering that is the whole point of the test, isn't it? Maybe in this case some will consider the answer obvious without running the test, but with these smallish, peculiar distributions, I don't think I would.
3,817 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | As this question did pop up again, I may add another answer inspired by a recent blog post via R-Bloggers from Robert Kabacoff, the author of Quick-R and R in Action using the lmPerm package.
However, this method produces sharply contrasting (and very unstable) results compared to the one produced by the coin package in the answer of @caracal (where the p-value of the within-subjects analysis is 0.008). The analysis takes the data preparation from @caracal's answer as well:
x1 <- c(99, 99.5, 65, 100, 99, 99.5, 99, 99.5, 99.5, 57, 100, 99.5,
99.5, 99, 99, 99.5, 89.5, 99.5, 100, 99.5)
y1 <- c(99, 99.5, 99.5, 0, 50, 100, 99.5, 99.5, 0, 99.5, 99.5, 90,
80, 0, 99, 0, 74.5, 0, 100, 49.5)
DV <- c(x1, y1)
IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
id <- factor(rep(1:length(x1), 2))
library(lmPerm)
summary(aovp( DV ~ IV + Error(id)))
produces:
> summary(aovp( DV ~ IV + Error(id)))
[1] "Settings: unique SS "
Error: id
Component 1 :
Df R Sum Sq R Mean Sq
Residuals 19 15946 839
Error: Within
Component 1 :
Df R Sum Sq R Mean Sq Iter Pr(Prob)
IV 1 7924 7924 1004 0.091 .
Residuals 19 21124 1112
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
If you run this multiple times, the p-value jumps around between ~.05 and ~.1.
Although it is an answer to the question, allow me to pose a question of my own at the end (I can move this to a new question if desired):
Any ideas why this analysis is so unstable and produces p-values so divergent from the coin analysis? Did I do something wrong?
3,818 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | Are these scores proportions? If so, you certainly shouldn't be using a gaussian parametric test, and while you could go ahead with a non-parametric approach like a permutation test or bootstrap of the means, I'd suggest that you'll get more statistical power by employing a suitable non-gaussian parametric approach. Specifically, any time you can compute a proportion measure within a unit of interest (ex. participant in an experiment), you can and probably should use a mixed effects model that specifies observations with binomially distributed error. See Dixon 2004.
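As a hedged sketch of what such a model could look like (not code from the original answer): the variable names are mine, and treating each score as a count of successes out of 100 trials per participant is purely an assumption for illustration.
# Binomial mixed-effects model for paired proportion-like scores
# (sketch; assumes each score is successes out of 100 trials -- an assumption)
library(lme4)
dat <- data.frame(
  id        = factor(rep(1:20, 2)),
  condition = rep(c("A", "B"), each = 20),
  successes = round(c(x1, y1)),          # x1, y1 as given earlier in the thread
  failures  = 100 - round(c(x1, y1))
)
fit <- glmer(cbind(successes, failures) ~ condition + (1 | id),
             data = dat, family = binomial)
summary(fit)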
3,819 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | Just adding another approach, ezPerm of ez package:
> # preparing the data
> DV <- c(x1, y1)
> IV <- factor(rep(c("A", "B"), c(length(x1), length(y1))))
> id <- factor(rep(1:length(x1), 2))
> df <- data.frame(id=id,DV=DV,IV=IV)
>
> library(ez)
> ezPerm( data = df, dv = DV, wid = id, within = IV, perms = 1000)
|=========================|100% Completed after 17 s
Effect p p<.05
1 IV 0.016 *
This seems to be consistent with the oneway_test of the coin package:
> library(coin)
> pvalue(oneway_test(DV ~ IV | id, distribution=approximate(B=999999)))
[1] 0.01608002
99 percent confidence interval:
0.01575782 0.01640682
However, notice that this is not the same example provided by @caracal. In his example, he includes alternative="greater", therefore the difference in p-values ~0.008 vs ~0.016.
The aovp function (from the lmPerm package) suggested in one of the answers produces suspiciously low p-values, and runs suspiciously fast even when I try high values for the Iter, Ca and maxIter arguments:
library(lmPerm)
summary(aovp(DV ~ IV + Error(id/IV), data=df, maxIter = 1000000000))
summary(aovp(DV ~ IV + Error(id/IV), data=df, Iter = 1000000000))
summary(aovp(DV ~ IV + Error(id/IV), data=df, Ca = 0.00000000001))
That said, the arguments seem to slightly reduce the variation of the p-values, from between ~.03 and ~.1 (a bigger range than the one reported by @Henrik) to between 0.03 and 0.07.
3,820 | Which permutation test implementation in R to use instead of t-tests (paired and non-paired)? | One more example: MKinfer::perm.t.test(). It's quite fast.
I don't know which one of the above it matches, because none of the other answers set a seed, so the results are not directly comparable. I use set.seed(1000).
> set.seed(1000)
> MKinfer::perm.t.test(x1, y1)
Permutation Welch Two Sample t-test
data: x1 and y1
(Monte-Carlo) permutation p-value = 0.007
permutation difference of means (SE) = 28.1 (10.7)
95 percent (Monte-Carlo) permutation percentile confidence interval:
7.35 49.15
Results without permutation:
t = 3, df = 22, p-value = 0.009
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
7.67 48.63
sample estimates:
mean of x mean of y
95.1 67.0
and for the second data:
> set.seed(1000)
> MKinfer::perm.t.test(DV~IV, alternative="greater")
Permutation Welch Two Sample t-test
data: DV by IV
(Monte-Carlo) permutation p-value = 0.004
permutation difference of means (SE) = 28.1 (10.7)
95 percent (Monte-Carlo) permutation percentile confidence interval:
10.5 Inf
Results without permutation:
t = 3, df = 22, p-value = 0.005
alternative hypothesis: true difference in means is greater than 0
95 percent confidence interval:
11.2 Inf
sample estimates:
mean in group A mean in group B
95.1 67.0
3,821 | Is a sample covariance matrix always symmetric and positive definite? | For a sample of vectors $x_i=(x_{i1},\dots,x_{ik})^\top$, with $i=1,\dots,n$, the sample mean vector is
$$
\bar{x}=\frac{1}{n} \sum_{i=1}^n x_i \, ,
$$ and the sample covariance matrix is
$$
Q = \frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^\top \, .
$$
For a nonzero vector $y\in\mathbb{R}^k$, we have
$$
y^\top Qy = y^\top\left(\frac{1}{n} \sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^\top\right) y
$$
$$
= \frac{1}{n} \sum_{i=1}^n y^\top (x_i-\bar{x})(x_i-\bar{x})^\top y
$$
$$
= \frac{1}{n} \sum_{i=1}^n \left( (x_i-\bar{x})^\top y \right)^2 \geq 0 \, . \quad (*)
$$
Therefore, $Q$ is always positive semi-definite.
The additional condition for $Q$ to be positive definite was given in whuber's comment below. It goes as follows.
Define $z_i=(x_i-\bar{x})$, for $i=1,\dots,n$. For any nonzero $y\in\mathbb{R}^k$, $(*)$ is zero if and only if $z_i^\top y=0$, for each $i=1,\dots,n$. Suppose the set $\{z_1,\dots,z_n\}$ spans $\mathbb{R}^k$. Then, there are real numbers $\alpha_1,\dots,\alpha_n$ such that $y=\alpha_1 z_1 +\dots+\alpha_n z_n$. But then we have $y^\top y=\alpha_1 z_1^\top y + \dots +\alpha_n z_n^\top y=0$, yielding that $y=0$, a contradiction. Hence, if the $z_i$'s span $\mathbb{R}^k$, then $Q$ is positive definite. This condition is equivalent to $\mathrm{rank} [z_1 \dots z_n] = k$.
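A quick numerical illustration of both claims (my own sketch; note that R's cov() uses the $n-1$ denominator rather than $n$, which affects neither symmetry nor definiteness):
# n = 10 observations of k = 4 variables
set.seed(42)
X <- matrix(rnorm(10 * 4), nrow = 10, ncol = 4)
S <- cov(X)
all.equal(S, t(S))                 # symmetric
eigen(S, symmetric = TRUE)$values  # all non-negative, and strictly positive here
Z <- scale(X, center = TRUE, scale = FALSE)   # the centered vectors z_i as rows
qr(Z)$rank                         # 4 = k, i.e. the rank condition of the answer holds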
3,822 | Is a sample covariance matrix always symmetric and positive definite? | A correct covariance matrix is always symmetric and positive *semi*definite.
The covariance between two variables is defined as $\sigma(x,y) = E [(x-E(x))(y-E(y))]$.
This equation doesn't change if you switch the positions of $x$ and $y$. Hence the matrix has to be symmetric.
It also has to be positive *semi-*definite because:
You can always find a transformation of your variables such that the covariance matrix becomes diagonal. On the diagonal you find the variances of the transformed variables, which are either zero or positive, so the transformed matrix is clearly positive semidefinite. Since definiteness is invariant under such a change of variables, it follows that the covariance matrix is positive semidefinite in any chosen coordinate system.
When you estimate your covariance matrix (that is, when you calculate your sample covariance) with the formula you stated above, it will obviously still be symmetric.
It also has to be positive semidefinite (I think), because for each sample, the pdf that gives each sample point equal probability has the sample covariance as its covariance (somebody please verify this), so everything stated above still applies.
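A small numerical sketch of the diagonalization argument above (my own illustration, using R's cov()): rotating the data into the eigenvector basis of its covariance matrix makes the covariance of the rotated data diagonal, with non-negative entries.
set.seed(1)
X <- matrix(rnorm(50 * 3), ncol = 3)
S <- cov(X)
V <- eigen(S, symmetric = TRUE)$vectors
round(cov(X %*% V), 10)   # diagonal; the diagonal entries are the eigenvalues of S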
3,823 | Is a sample covariance matrix always symmetric and positive definite? | @Zen's answer plus @whuber's comment to @Konstantin's answer provide a complete proof. Nevertheless, I'll rephrase the proof by trying to place more statistical emphasis.
Indeed, one can say that the sample covariance matrix $S$ is always positive semi-definite because any quadratic form in it can be seen as the variance of a suitable univariate variable, which is always non-negative.
In detail, let $x_1,\ldots,x_n$ be the observed sample, with $x_i = (x_{i1},\ldots,x_{ik})^\top$, $i=1,\ldots,n$. The sample covariance matrix is
$$
S = n^{-1}\sum_{i=1}^n(x_i-\bar x)(x_i-\bar x)^\top,
$$
where $\bar x=n^{-1}\sum_{i}x_i$ is the sample average.
Consider now any vector $a = (a_1,\ldots,a_k)^\top$ and take the $y_i$, linear combination of $x_i$ with coefficients $a_i$, i.e.
$$
y_i = a^\top x_i = a_1x_{i1}+\cdots+a_{k}x_{ik},\quad\text{for all } i.
$$
Let $\bar y$ be the sample average of $y_i$'s and note that $\bar y = a^\top \bar x$. The variance of $y_i$ is
\begin{align*}
0\leq s_{y}^2 &= n^{-1}\sum_i(y_i-\bar y)^2 = n^{-1}\sum_{i}(y_i-\bar y)(y_i-\bar y)^\top\\
& = n^{-1}\sum_{i} \left(a^\top (x_i - \bar x)\right)\left((x_i - \bar x)^\top a\right)\\
& = a^\top\left(n^{-1}\sum_{i} (x_i - \bar x)(x_i -\bar x)^\top\right)a\\
& = a^\top S a.
\end{align*}
Since $a$ was arbitrary, this completes the proof.
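The same identity is easy to check numerically (my own sketch; R's cov() and var() both use the $n-1$ divisor, so the two numbers below agree exactly):
# a' S a equals the sample variance of the projected data y = X a
set.seed(2)
X <- matrix(rnorm(30 * 4), ncol = 4)
a <- rnorm(4)
S <- cov(X)
c(quadratic_form = drop(t(a) %*% S %*% a),
  var_of_projection = var(drop(X %*% a)))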
3,824 | Is a sample covariance matrix always symmetric and positive definite? | Let
$$
X=
\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k} \\
x_{21} & x_{22} & \cdots & x_{2k} \\
\vdots & \vdots & \ddots & \vdots\\
x_{n1} & x_{n2} & \cdots & x_{nk}
\end{pmatrix}
$$
denote the data matrix whose $\left(i,j\right)$-th entry is the $i$-th measurement of the $j$-th variable (with $i \in \{1,\ldots, n\}, j \in \{1,\ldots,k \}$).
The sample covariance matrix $\mathcal S$ can be written as
$\mathcal S=n^{-1}X^\top C_n X,$
where $C_n=I_n-n^{-1}\mathbb{1}_n\mathbb{1}_n^\top$ is the centering matrix.
Since $C_n$ is symmetric and idempotent, we also have $\mathcal S=n^{-1}X^\top C_n^\top C_n X$.[1] But with $Y\mathrel{:=}C_n X$ this becomes $\mathcal S=n^{-1}Y^\top Y$, which is always positive semi-definite, and positive definite if and only if the columns of $Y$ are linearly independent.
This means that $\mathcal S$ is positive definite iff the centered measurement vectors of the $k$ variables, i.e. the vectors $\left(x_{1j}-\bar{x}_{.j},\ldots,x_{nj}-\bar{x}_{.j}\right)^\top$ indexed by $j$, are linearly independent.
[1] Another way to see that $\mathcal S$ can be written as $n^{-1}X^\top C_n^\top C_n X$ is to interpret $X^\top C_n^\top C_n X = \left(C_n X\right)^\top \left(C_n X\right)$ as a sum of outer products of the rows of the column-wise centered $X$ with themselves.
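A short numerical check of this representation (my own sketch; R's cov() divides by $n-1$ rather than $n$, so the comparison rescales accordingly):
# Verify S = n^{-1} X' C_n X with C_n the centering matrix
set.seed(3)
n <- 8; k <- 3
X  <- matrix(rnorm(n * k), nrow = n)
Cn <- diag(n) - matrix(1, n, n) / n
S  <- t(X) %*% Cn %*% X / n
all.equal(S, cov(X) * (n - 1) / n)   # TRUE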
3,825 | Is a sample covariance matrix always symmetric and positive definite? | I would add to the nice argument of Zen the following which explains why we often say that the covariance matrix is positive definite if $n-1\geq k$.
If $x_1,x_2,...,x_n$ are a random sample of a continuous probability distribution then $x_1,x_2,...,x_n$ are almost surely (in the probability theory sense) linearly independent.
Now, $z_1,z_2,...,z_n$ are not linearly independent because $\sum_{i=1}^n z_i = 0$, but since $x_1,x_2,...,x_n$ are a.s. linearly independent, $z_1,z_2,...,z_n$ a.s. span a subspace of dimension $\min(n-1,k)$. If $n-1\geq k$, they therefore span all of $\mathbb{R}^k$.
To conclude, if $x_1,x_2,...,x_n$ are a random sample of a continuous probability distribution and $n-1\geq k$, the covariance matrix is positive definite.
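A tiny numerical illustration of why the condition matters (my own sketch): with $n-1 < k$ the sample covariance matrix is singular, hence only positive semi-definite.
# k = 5 variables but only n = 4 observations: rank(S) <= n - 1 = 3 < k
set.seed(4)
X <- matrix(rnorm(4 * 5), nrow = 4, ncol = 5)
S <- cov(X)
round(eigen(S, symmetric = TRUE)$values, 12)  # the smallest eigenvalues are essentially zero
qr(S)$rank                                    # at most n - 1 = 3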
3,826 | Recommended books on experiment design? | For me, the best book around is by George Box:
Statistics for Experimenters: Design, Innovation, and Discovery
of course the book by Maxwell and Delaney is also pretty good:
Designing Experiments and Analyzing Data: A Model Comparison Perspective, Second Edition
I personally prefer the first, but they are both top quality. They are a little bit expensive, but you can definitely find a cheap earlier edition for sale.
3,827 | Recommended books on experiment design? | Montgomery's Design and Analysis of Experiments is a classic and highly regarded text:
If you are interested in experimental design in a particular field (e.g. clinical trials), other more specialised texts may be appropriate.
3,828 | Recommended books on experiment design? | Ronald Fisher's The Design of Experiments (link is Wikipedia rather than Amazon since it is long out of print) is interesting for historical context. The book is often credited as founding the whole field, and certainly did a lot to promote things like blocking, randomisation and factorial design, though things have moved on a bit since.
As a period document it's quite fascinating, but it's also maddening. In the absence of a common terminology and notation, a lot of time is spent painstakingly explaining things in what now seems comically-stilted English. If you had to use it as a reference to look up how to calculate something you'd probably gnaw your own leg off. But the terribly polite hatchet job on some of Galton's analysis is entertaining.
(I know, I know -- how the readers of tomorrow will laugh at the archaisms of today's scientific literature...)
3,829 | Recommended books on experiment design? | I am surprised no one mentioned: Statistical Design by George Casella
Google Books Link
3,830 | Recommended books on experiment design? | There are many excellent books on design of experiments. These procedures apply generally and I do not think there are special designs specific to bakery applications. Here are a few of my favorites.
Statistics for Experimenters: Design, Innovation, and Discovery , 2nd Edition [Hardcover] George E. P. Box (Author) J. Stuart Hunter (Author), William G. Hunter (Author)
Design and Analysis of Experiments [Hardcover] Douglas C. Montgomery (Author)
Design of Experiments: An Introduction Based on Linear Models (Chapman & Hall/CRC Texts in Statistical Science) [Hardcover] Max Morris (Author)
Design and Analysis of Experiments (Springer Texts in Statistics) [Hardcover]
Angela M. Dean (Author), Daniel Voss (Author)
Experiments: Planning, Analysis, and Optimization (Wiley Series in Probability and Statistics) [Hardcover] C. F. Jeff Wu (Author), Michael S. Hamada (Author)
Statistical Design and Analysis of Experiments, with Applications to Engineering and Science [Hardcover] Robert L. Mason (Author), Richard F. Gunst (Author), James L. Hess (Author)
Statistical Design and Analysis of Experiments (Classics in Applied Mathematics No. 22) [Paperback] Peter W. M. John (Author)
3,831 | Recommended books on experiment design? | Not published yet, but I'm impatient for Design and analysis of experiments with R
There are not enough books on DoE with R. I'm very reluctant to use proprietary software, and R documentation is not always the best.
3,832 | Recommended books on experiment design? | Experiments: Planning, Analysis and Optimization by Wu & Hamada.
I'm only a couple of chapters in, so not yet in a position to recommend confidently, but so far it looks like a good graduate text, reasonably detailed, comprehensive and up-to-date. Has more of a "no nonsense" feel than the Montgomery. | Recommended books on experiment design? | Experiments: Planning, Analysis and Optimization by Wu & Hamada.
I'm only a couple of chapters in, so not yet in a position to recommend confidently, but so far it looks like a good graduate text, rea | Recommended books on experiment design?
Experiments: Planning, Analysis and Optimization by Wu & Hamada.
I'm only a couple of chapters in, so not yet in a position to recommend confidently, but so far it looks like a good graduate text, reasonably detailed, comprehensive and up-to-date. Has more of a "no nonsense" feel than the Montgomery. | Recommended books on experiment design?
Experiments: Planning, Analysis and Optimization by Wu & Hamada.
I'm only a couple of chapters in, so not yet in a position to recommend confidently, but so far it looks like a good graduate text, rea |
3,833 | Recommended books on experiment design? | Experimental Design for the Life Sciences, by Ruxton & Colegrave. Aimed primarily at undergraduates. | Recommended books on experiment design? | Experimental Design for the Life Sciences, by Ruxton & Colegrave. Aimed primarily at undergraduates. | Recommended books on experiment design?
Experimental Design for the Life Sciences, by Ruxton & Colegrave. Aimed primarily at undergraduates. | Recommended books on experiment design?
Experimental Design for the Life Sciences, by Ruxton & Colegrave. Aimed primarily at undergraduates. |
3,834 | Recommended books on experiment design? | If you're interested in pharmaceutical trials, two books I recommend:
Statistical Issues in Drug Development by Stephen Senn (Amazon link)
Cross-over Trials in Clinical Research by Stephen Senn (Amazon link) | Recommended books on experiment design? | If you're interested in pharmaceutical trials, two books I recommend:
Statistical Issues in Drug Development by Stephen Senn (Amazon link)
Cross-over Trials in Clinical Research by Stephen Senn (Amaz | Recommended books on experiment design?
If you're interested in pharmaceutical trials, two books I recommend:
Statistical Issues in Drug Development by Stephen Senn (Amazon link)
Cross-over Trials in Clinical Research by Stephen Senn (Amazon link) | Recommended books on experiment design?
If you're interested in pharmaceutical trials, two books I recommend:
Statistical Issues in Drug Development by Stephen Senn (Amazon link)
Cross-over Trials in Clinical Research by Stephen Senn (Amaz |
3,835 | Recommended books on experiment design? | Not really a book but a gentle introduction on DoE in R: An R companion to Experimental Design. | Recommended books on experiment design? | Not really a book but a gentle introduction on DoE in R: An R companion to Experimental Design. | Recommended books on experiment design?
Not really a book but a gentle introduction on DoE in R: An R companion to Experimental Design. | Recommended books on experiment design?
Not really a book but a gentle introduction on DoE in R: An R companion to Experimental Design. |
3,836 | Recommended books on experiment design? | If your field is biology/ecology, a nice and well-written text is "Experimental Design and Data Analysis for Biologists" by Quinn and Keough (amazon)
The work done by Underwood is also very interesting to read:
Experiments in Ecology: Their Logical Design and Interpretation Using Analysis of Variance (amazon) | Recommended books on experiment design? | If your field is biology/ecology, a nice and well written text is "Experimental Design and Data Analysis for Biologists" of Quinn and Keough (amazon
the work done by Underwood is also very interestin | Recommended books on experiment design?
If your field is biology/ecology, a nice and well-written text is "Experimental Design and Data Analysis for Biologists" by Quinn and Keough (amazon)
The work done by Underwood is also very interesting to read:
Experiments in Ecology: Their Logical Design and Interpretation Using Analysis of Variance (amazon) | Recommended books on experiment design?
If your field is biology/ecology, a nice and well written text is "Experimental Design and Data Analysis for Biologists" of Quinn and Keough (amazon
the work done by Underwood is also very interestin |
3,837 | Recommended books on experiment design? | The Design of Experiments: Statistical Principles for Practical Applications by Roger Mead. Examples are drawn from agriculture and biology, so probably most appropriate if you're interested in one of those fields. Rather expensive for a 600-page paperback but you can probably find it second-hand. | Recommended books on experiment design? | The Design of Experiments: Statistical Principles for Practical Applications by Roger Mead. Examples are drawn from agriculture and biology, so probably most appropriate if you're interested in one of | Recommended books on experiment design?
The Design of Experiments: Statistical Principles for Practical Applications by Roger Mead. Examples are drawn from agriculture and biology, so probably most appropriate if you're interested in one of those fields. Rather expensive for a 600-page paperback but you can probably find it second-hand. | Recommended books on experiment design?
The Design of Experiments: Statistical Principles for Practical Applications by Roger Mead. Examples are drawn from agriculture and biology, so probably most appropriate if you're interested in one of |
3,838 | Recommended books on experiment design? | Experimental Design in Biotechnology by Perry D. Haaland, ed Marcel Dekker. | Recommended books on experiment design? | Experimental Design in Biotechnology by Perry D. Haaland, ed Marcel Dekker. | Recommended books on experiment design?
Experimental Design in Biotechnology by Perry D. Haaland, ed Marcel Dekker. | Recommended books on experiment design?
Experimental Design in Biotechnology by Perry D. Haaland, ed Marcel Dekker. |
3,839 | Recommended books on experiment design? | If you're in the social sciences:
Using Randomization in Development Economics Research: A Toolkit | Recommended books on experiment design? | If you're in the social sciences:
Using Randomization in Development Economics Research: A Toolkit | Recommended books on experiment design?
If you're in the social sciences:
Using Randomization in Development Economics Research: A Toolkit | Recommended books on experiment design?
If you're in the social sciences:
Using Randomization in Development Economics Research: A Toolkit |
3,840 | Recommended books on experiment design? | I have recently reviewed a large collection of DoE books (17), with the following requirements:
Not a cookbook approach but geared towards understanding (hard requirement)
Decently in-depth (hard requirement)
Written with an understanding of the New Causal Revolution (nice-to-have)
Uses Hasse diagrams to simplify understanding of variance structure (nice-to-have)
Has problems or exercises, preferably with solutions, for self-study (must have problems, nice to have solutions)
Does not use SAS (I hate SAS) - or at least has an alternative to SAS (hard requirement)
Utilizes a depth-first approach rather than breadth-first (suits my learning style better - hard requirement)
Geared towards upper-level undergraduate with prerequisites of mathematical statistics and linear models (hard requirement)
The books I reviewed were the following (only first author listed):
Kaltenbach, Statistical Design and Analysis of Biological Experiments
Casella, Statistical Design
Montgomery, Design and Analysis of Experiments, 4th Ed.
Montgomery, Design and Analysis of Experiments, 10th Ed.
Montgomery, Design of Experiments: A Modern Approach
Oehlert, A First Course in Design and Analysis of Experiments
Lawson, Design and Analysis of Experiments with R
Fisher, The Design of Experiments
Morris, Design of Experiments: An Introduction Based on Linear Models
Bailey, Design of Comparative Experiments
Wu, Experiments: Planning, Analysis, and Optimization
Dean, Design and Analysis of Experiments
Box, Statistics for Experimenters
Maxwell, Designing Experiments and Analyzing Data: A Model Comparison Perspective
Mead, Statistical Principles for the Design of Experiments
Cobb, Introduction to Design and Analysis of Experiments
Kuehl, Design of Experiments: Statistical Principles of Research Design and Analysis, 2nd Ed.
I was unable to find any book that had all of the desired characteristics. Indeed, I have come to think that Requirement #3 is not satisfied anywhere. Requirement #4 is only true for three books in the list (Oehlert, Kaltenbach, and Bailey). Two books were too advanced (Casella and Morris), though Casella would likely be a terrific graduate-level text. Very few books had answers to problems, and quite a few had no problems at all (Fisher and Kaltenbach, with Bailey and Mead having too few problems). A number of books just presented a cookbook approach: here's the design and how you analyze it, with a very limited attempt at getting to the understanding (all the Montgomery books, Oehlert, Lawson, Kuehl). Maxwell was far too wordy, Cobb was too low-level and also very wordy. Wu was breadth-first.
I narrowed it down to two possibilities that looked promising: Dean and Box. I had to read the first few chapters of both to determine that Dean uses a depth-first approach, while Box has a breadth-first approach.
So I have landed on Dean, Voss, and Draguljic Design and Analysis of Experiments as my favored book for self-study, based on my requirements. It fails Requirements #3 and #4 and does not have solutions for problems. But it does well enough on the other requirements to be the best option.
Fisher is worth reading, but is not sufficient on its own to train you to be proficient, as there are no problems to work. He has very good explanations, and it would be a great supplement. | Recommended books on experiment design? | I have recently reviewed a large collection of DoE books (17), with the following requirements:
Not a cookbook approach but geared towards understanding (hard requirement)
Decently in-depth (hard req | Recommended books on experiment design?
I have recently reviewed a large collection of DoE books (17), with the following requirements:
Not a cookbook approach but geared towards understanding (hard requirement)
Decently in-depth (hard requirement)
Written with an understanding of the New Causal Revolution (nice-to-have)
Uses Hasse diagrams to simplify understanding of variance structure (nice-to-have)
Has problems or exercises, preferably with solutions, for self-study (must have problems, nice to have solutions)
Does not use SAS (I hate SAS) - or at least has an alternative to SAS (hard requirement)
Utilizes a depth-first approach rather than breadth-first (suits my learning style better - hard requirement)
Geared towards upper-level undergraduate with prerequisites of mathematical statistics and linear models (hard requirement)
The books I reviewed were the following (only first author listed):
Kaltenbach, Statistical Design and Analysis of Biological Experiments
Casella, Statistical Design
Montgomery, Design and Analysis of Experiments, 4th Ed.
Montgomery, Design and Analysis of Experiments, 10th Ed.
Montgomery, Design of Experiments: A Modern Approach
Oehlert, A First Course in Design and Analysis of Experiments
Lawson, Design and Analysis of Experiments with R
Fisher, The Design of Experiments
Morris, Design of Experiments: An Introduction Based on Linear Models
Bailey, Design of Comparative Experiments
Wu, Experiments: Planning, Analysis, and Optimization
Dean, Design and Analysis of Experiments
Box, Statistics for Experimenters
Maxwell, Designing Experiments and Analyzing Data: A Model Comparison Perspective
Mead, Statistical Principles for the Design of Experiments
Cobb, Introduction to Design and Analysis of Experiments
Kuehl, Design of Experiments: Statistical Principles of Research Design and Analysis, 2nd Ed.
I was unable to find any book that had all of the desired characteristics. Indeed, I have come to think that Requirement #3 is not satisfied anywhere. Requirement #4 is only true for three books in the list (Oehlert, Kaltenbach, and Bailey). Two books were too advanced (Casella and Morris), though Casella would likely be a terrific graduate-level text. Very few books had answers to problems, and quite a few had no problems at all (Fisher and Kaltenbach, with Bailey and Mead having too few problems). A number of books just presented a cookbook approach: here's the design and how you analyze it, with a very limited attempt at getting to the understanding (all the Montgomery books, Oehlert, Lawson, Kuehl). Maxwell was far too wordy, Cobb was too low-level and also very wordy. Wu was breadth-first.
I narrowed it down to two possibilities that looked promising: Dean and Box. I had to read the first few chapters of both to determine that Dean uses a depth-first approach, while Box has a breadth-first approach.
So I have landed on Dean, Voss, and Draguljic Design and Analysis of Experiments as my favored book for self-study, based on my requirements. It fails Requirements #3 and #4 and does not have solutions for problems. But it does well enough on the other requirements to be the best option.
Fisher is worth reading, but is not sufficient on its own to train you to be proficient, as there are no problems to work. He has very good explanations, and it would be a great supplement. | Recommended books on experiment design?
I have recently reviewed a large collection of DoE books (17), with the following requirements:
Not a cookbook approach but geared towards understanding (hard requirement)
Decently in-depth (hard req |
3,841 | Recommended books on experiment design? | This book gives you a statistical perspective on experimental design:
Casella, G. (2008). Statistical Design. Springer. | Recommended books on experiment design? | This book gives you a statistical perspective on experimental design:
Casella, G. (2008). Statistical Design. Springer. | Recommended books on experiment design?
This book gives you a statistical perspective on experimental design:
Casella, G. (2008). Statistical Design. Springer. | Recommended books on experiment design?
This book gives you a statistical perspective on experimental design:
Casella, G. (2008). Statistical Design. Springer. |
3,842 | Recommended books on experiment design? | Hands on DOE book
John Lawson has written two books.
Design and Analysis of Experiments with SAS
Design and Analysis of Experiments with R
One is for SAS users and the other for R users. Both versions are the same in content and context; the only difference is the software used in the book. The second one, for R users, is more useful since R is open source, so it is more of a hands-on DOE book. He has in fact developed an accompanying R package named daewr | Recommended books on experiment design? | Hands on DOE book
John Lawson has written two books.
Design and Analysis of Experiments with SAS
Design and Analysis of Experiments with R
One is for SAS users and another one for R users. Both the | Recommended books on experiment design?
Hands on DOE book
John Lawson has written two books.
Design and Analysis of Experiments with SAS
Design and Analysis of Experiments with R
One is for SAS users and the other for R users. Both versions are the same in content and context; the only difference is the software used in the book. The second one, for R users, is more useful since R is open source, so it is more of a hands-on DOE book. He has in fact developed an accompanying R package named daewr | Recommended books on experiment design?
Hands on DOE book
John Lawson has written two books.
Design and Analysis of Experiments with SAS
Design and Analysis of Experiments with R
One is for SAS users and another one for R users. Both the |
3,843 | Recommended books on experiment design? | A contemporary reference that I've found really useful is
"Randomization in Clinical Trials" by Rosenberger and Lachin
While the focus is on randomized trials and in-human studies, it covers many topics not previously covered in a nice, codified reference (group sequential designs, covariate adaptive designs, causality, etc.).
Lachin has been a trusted reference with a great deal of influence on FDA decision making over the years. The book has some very interesting applied examples, particularly the ECMO study, to demonstrate contemporary issues in trials (blinding, bias, crossover, etc.) | Recommended books on experiment design? | A contemporary reference that I've found really useful is
"Randomization in Clinical Trials" by Rosenberger and Lachin
While the focus is on randomized trials and in-human studies, it covers many topi | Recommended books on experiment design?
A contemporary reference that I've found really useful is
"Randomization in Clinical Trials" by Rosenberger and Lachin
While the focus is on randomized trials and in-human studies, it covers many topics not previously covered in a nice, codified reference (group sequential designs, covariate adaptive designs, causality, etc.).
Lachin has been a trusted reference with a great deal of influence on FDA decision making over the years. The book has some very interesting applied examples, particularly the ECMO study, to demonstrate contemporary issues in trials (blinding, bias, crossover, etc.) | Recommended books on experiment design?
A contemporary reference that I've found really useful is
"Randomization in Clinical Trials" by Rosenberger and Lachin
While the focus is on randomized trials and in-human studies, it covers many topi |
3,844 | Is every covariance matrix positive definite? | No.
Consider three variables, $X$, $Y$ and $Z = X+Y$. Their covariance matrix, $M$, is not positive definite, since there's a vector $z$ ($= (1, 1, -1)'$) for which $z'Mz$ is not positive.
Population covariance matrices are positive semi-definite.
(See property 2 here.)
The same should generally apply to covariance matrices of complete samples (no missing values), since they can also be seen as a form of discrete population covariance.
However due to inexactness of floating point numerical computations, even algebraically positive definite cases might occasionally be computed to not be even positive semi-definite; good choice of algorithms can help with this.
More generally, sample covariance matrices - depending on how they deal with missing values in some variables - may or may not be positive semi-definite, even in theory. If pairwise deletion is used, for example, then there's no guarantee of positive semi-definiteness. Further, accumulated numerical error can cause sample covariance matrices that should be notionally positive semi-definite to fail to be.
Like so:
x <- rnorm(30)
y <- rnorm(30) - x/10 # it doesn't matter for this if x and y are correlated or not
z <- x+y
M <- cov(data.frame(x=x,y=y,z=z))
z <- rbind(1,1,-1)
t(z)%*%M%*%z
[,1]
[1,] -1.110223e-16
This happened on the first example I tried (I probably should supply a seed but it's not so rare that you should have to try a lot of examples before you get one).
The result came out negative, even though it should be algebraically zero. A different set of numbers might yield a positive number or an "exact" zero.
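If that tiny negative value matters downstream (for instance for a Cholesky factorization), one common workaround is to project to the nearest positive semi-definite matrix; a minimal sketch, assuming the Matrix package is installed:
library(Matrix)                    # provides nearPD()
min(eigen(M)$values)               # slightly negative, purely from rounding error
M.psd <- as.matrix(nearPD(M)$mat)  # nearest positive (semi-)definite matrix
min(eigen(M.psd)$values)           # now non-negative (up to tolerance)
t(z)%*%M.psd%*%z                   # the quadratic form is no longer negative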
--
Example of moderate missingness leading to loss of positive semidefiniteness via pairwise deletion:
z <- x + y + rnorm(30)/50 # same x and y as before.
xyz1 <- data.frame(x=x,y=y,z=z) # high correlation but definitely of full rank
xyz1$x[sample(1:30,5)] <- NA # make 5 x's missing
xyz1$y[sample(1:30,5)] <- NA # make 5 y's missing
xyz1$z[sample(1:30,5)] <- NA # make 5 z's missing
cov(xyz1,use="pairwise") # the individual pairwise covars are fine ...
x y z
x 1.2107760 -0.2552947 1.255868
y -0.2552947 1.2728156 1.037446
z 1.2558683 1.0374456 2.367978
chol(cov(xyz1,use="pairwise")) # ... but leave the matrix not positive semi-definite
Error in chol.default(cov(xyz1, use = "pairwise")) :
the leading minor of order 3 is not positive definite
chol(cov(xyz1,use="complete")) # but deleting even more rows leaves it PSD
x y z
x 0.8760209 -0.2253484 0.64303448
y 0.0000000 1.1088741 1.11270078
z 0.0000000 0.0000000 0.01345364 | Is every covariance matrix positive definite? | No.
Consider three variables, $X$, $Y$ and $Z = X+Y$. Their covariance matrix, $M$, is not positive definite, since there's a vector $z$ ($= (1, 1, -1)'$) for which $z'Mz$ is not positive.
Population | Is every covariance matrix positive definite?
No.
Consider three variables, $X$, $Y$ and $Z = X+Y$. Their covariance matrix, $M$, is not positive definite, since there's a vector $z$ ($= (1, 1, -1)'$) for which $z'Mz$ is not positive.
Population covariance matrices are positive semi-definite.
(See property 2 here.)
The same should generally apply to covariance matrices of complete samples (no missing values), since they can also be seen as a form of discrete population covariance.
However due to inexactness of floating point numerical computations, even algebraically positive definite cases might occasionally be computed to not be even positive semi-definite; good choice of algorithms can help with this.
More generally, sample covariance matrices - depending on how they deal with missing values in some variables - may or may not be positive semi-definite, even in theory. If pairwise deletion is used, for example, then there's no guarantee of positive semi-definiteness. Further, accumulated numerical error can cause sample covariance matrices that should be notionally positive semi-definite to fail to be.
Like so:
x <- rnorm(30)
y <- rnorm(30) - x/10 # it doesn't matter for this if x and y are correlated or not
z <- x+y
M <- cov(data.frame(x=x,y=y,z=z))
z <- rbind(1,1,-1)
t(z)%*%M%*%z
[,1]
[1,] -1.110223e-16
This happened on the first example I tried (I probably should supply a seed but it's not so rare that you should have to try a lot of examples before you get one).
The result came out negative, even though it should be algebraically zero. A different set of numbers might yield a positive number or an "exact" zero.
--
Example of moderate missingness leading to loss of positive semidefiniteness via pairwise deletion:
z <- x + y + rnorm(30)/50 # same x and y as before.
xyz1 <- data.frame(x=x,y=y,z=z) # high correlation but definitely of full rank
xyz1$x[sample(1:30,5)] <- NA # make 5 x's missing
xyz1$y[sample(1:30,5)] <- NA # make 5 y's missing
xyz1$z[sample(1:30,5)] <- NA # make 5 z's missing
cov(xyz1,use="pairwise") # the individual pairwise covars are fine ...
x y z
x 1.2107760 -0.2552947 1.255868
y -0.2552947 1.2728156 1.037446
z 1.2558683 1.0374456 2.367978
chol(cov(xyz1,use="pairwise")) # ... but leave the matrix not positive semi-definite
Error in chol.default(cov(xyz1, use = "pairwise")) :
the leading minor of order 3 is not positive definite
chol(cov(xyz1,use="complete")) # but deleting even more rows leaves it PSD
x y z
x 0.8760209 -0.2253484 0.64303448
y 0.0000000 1.1088741 1.11270078
z 0.0000000 0.0000000 0.01345364 | Is every covariance matrix positive definite?
No.
Consider three variables, $X$, $Y$ and $Z = X+Y$. Their covariance matrix, $M$, is not positive definite, since there's a vector $z$ ($= (1, 1, -1)'$) for which $z'Mz$ is not positive.
Population |
3,845 | Is every covariance matrix positive definite? | Well, to understand why the covariance matrix of a population is always positive semi-definite, notice that:
$$
\sum_{i,j =1}^{n} y_i \cdot y_j \cdot Cov(X_i, X_j) = Var(\sum_{i=1}^n y_iX_i) \geq 0
$$
where $y_i$ are some real numbers, and $X_i$ are some real valued random variables.
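A quick numerical check of this identity in R (a sketch with simulated variables, using the weights $(1, 1, -1)$ from Glen_b's example):
set.seed(123)
x1 <- rnorm(1e4); x2 <- rnorm(1e4)
X  <- cbind(x1, x2, x3 = x1 + x2)   # third variable is an exact linear combination
y  <- c(1, 1, -1)
drop(t(y) %*% cov(X) %*% y)         # the quadratic form y' Cov(X) y, ~ 0 here
var(drop(X %*% y))                  # Var(sum_i y_i X_i), exactly 0 here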
This also explains why in the example given by Glen_b the covariance matrix was not positive definite . We had $y_1 =1 , y_2 = 1, y_3 = -1$, and $X_1 = X, X_2 = Y, X_3 = Z = X+Y$, so $\sum_{i=1}^{3} y_iX_i = 0$, and the variance of a random variable which is constant is $0$. | Is every covariance matrix positive definite? | Well, to understand why the covariance matrix of a population is always positive semi-definite, notice that:
$$
\sum_{i,j =1}^{n} y_i \cdot y_j \cdot Cov(X_i, X_j) = Var(\sum_{i=1}^n y_iX_i) \geq 0
$$ | Is every covariance matrix positive definite?
Well, to understand why the covariance matrix of a population is always positive semi-definite, notice that:
$$
\sum_{i,j =1}^{n} y_i \cdot y_j \cdot Cov(X_i, X_j) = Var(\sum_{i=1}^n y_iX_i) \geq 0
$$
where $y_i$ are some real numbers, and $X_i$ are some real valued random variables.
This also explains why in the example given by Glen_b the covariance matrix was not positive definite . We had $y_1 =1 , y_2 = 1, y_3 = -1$, and $X_1 = X, X_2 = Y, X_3 = Z = X+Y$, so $\sum_{i=1}^{3} y_iX_i = 0$, and the variance of a random variable which is constant is $0$. | Is every covariance matrix positive definite?
Well, to understand why the covariance matrix of a population is always positive semi-definite, notice that:
$$
\sum_{i,j =1}^{n} y_i \cdot y_j \cdot Cov(X_i, X_j) = Var(\sum_{i=1}^n y_iX_i) \geq 0
$$ |
3,846 | Is every covariance matrix positive definite? | As the other answers note, the covariance matrix is positive semi-definite (which I prefer to call non-negative definite), but not necessarily positive definite. We can show that the covariance matrix is positive semi-definite from first principles using its definition. To do this, suppose we consider a random vector $\mathbf{X}$ with mean vector $\boldsymbol{\mu}$ and covariance matrix $\mathbf{\Sigma}_\mathbf{X}$. For any conformable vector $\mathbf{z}$ we can define the corresponding vector:
$$\mathbf{Y} = (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}.$$
Since $||\mathbf{Y}|| \geqslant 0$ we then have:
$$\begin{aligned}
\mathbf{z}^\text{T} \mathbf{\Sigma}_\mathbf{X} \mathbf{z}
&= \mathbf{z}^\text{T} \mathbb{E}((\mathbf{X} - \boldsymbol{\mu}_\mathbf{X}) (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T}) \mathbf{z} \\[6pt]
&= \mathbb{E}(\mathbf{z}^\text{T} (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X}) (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}) \\[6pt]
&= \mathbb{E}(\mathbf{Y}^\text{T} \mathbf{Y}) \\[6pt]
&= \mathbb{E}(||\mathbf{Y}||^2) \geqslant 0. \\[6pt]
\end{aligned}$$
This establishes that the covariance matrix $\mathbf{\Sigma}_\mathbf{X}$ is positive semi-definite. Moreover, we can see that $\mathbf{z}^\text{T} \mathbf{\Sigma}_\mathbf{X} \mathbf{z} = 0$ if and only if $\mathbf{Y}=(\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}=\mathbf{0}$ almost surely. | Is every covariance matrix positive definite? | As the other answer note, the covariance matrix is positive semi-definite (which I prefer to call non-negative definite), but not necessarily positive definite. We can show that the covariance matrix | Is every covariance matrix positive definite?
As the other answers note, the covariance matrix is positive semi-definite (which I prefer to call non-negative definite), but not necessarily positive definite. We can show that the covariance matrix is positive semi-definite from first principles using its definition. To do this, suppose we consider a random vector $\mathbf{X}$ with mean vector $\boldsymbol{\mu}$ and covariance matrix $\mathbf{\Sigma}_\mathbf{X}$. For any conformable vector $\mathbf{z}$ we can define the corresponding vector:
$$\mathbf{Y} = (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}.$$
Since $||\mathbf{Y}|| \geqslant 0$ we then have:
$$\begin{aligned}
\mathbf{z}^\text{T} \mathbf{\Sigma}_\mathbf{X} \mathbf{z}
&= \mathbf{z}^\text{T} \mathbb{E}((\mathbf{X} - \boldsymbol{\mu}_\mathbf{X}) (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T}) \mathbf{z} \\[6pt]
&= \mathbb{E}(\mathbf{z}^\text{T} (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X}) (\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}) \\[6pt]
&= \mathbb{E}(\mathbf{Y}^\text{T} \mathbf{Y}) \\[6pt]
&= \mathbb{E}(||\mathbf{Y}||^2) \geqslant 0. \\[6pt]
\end{aligned}$$
This establishes that the covariance matrix $\mathbf{\Sigma}_\mathbf{X}$ is positive semi-definite. Moreover, we can see that $\mathbf{z}^\text{T} \mathbf{\Sigma}_\mathbf{X} \mathbf{z} = 0$ if and only if $\mathbf{Y}=(\mathbf{X} - \boldsymbol{\mu}_\mathbf{X})^\text{T} \mathbf{z}=\mathbf{0}$ almost surely. | Is every covariance matrix positive definite?
As the other answer note, the covariance matrix is positive semi-definite (which I prefer to call non-negative definite), but not necessarily positive definite. We can show that the covariance matrix |
3,847 | Is every covariance matrix positive definite? | As the other answers already make clear, a covariance matrix is not necessarily positive definite, but only positive semi-definite.
However, a covariance matrix is generally positive definite unless the space spanned by the variables is actually a linear subspace of lower dimension. This is exactly why in the example with X, Y and Z=X+Y the result is only positive semi-definite, but not positive definite. Although the variables span a three-dimensional space, they actually describe only a two-dimensional linear subspace (because they are not linearly independent). | Is every covariance matrix positive definite? | As the other answers already make clear, a covariance matrix is not necessarily positive definite, but only positive semi-definite.
However, a covariance matrix is generally positive definite unless t | Is every covariance matrix positive definite?
As the other answers already make clear, a covariance matrix is not necessarily positive definite, but only positive semi-definite.
However, a covariance matrix is generally positive definite unless the space spanned by the variables is actually a linear subspace of lower dimension. This is exactly why in the example with X, Y and Z=X+Y the result is only positive semi-definite, but not positive definite. Although the variables span a three-dimensional space, they actually describe only a two-dimensional linear subspace (because they are not linearly independent). | Is every covariance matrix positive definite?
As the other answers already make clear, a covariance matrix is not necessarily positive definite, but only positive semi-definite.
However, a covariance matrix is generally positive definite unless t |
3,848 | Is every covariance matrix positive definite? | $$\begin{array}{l}theory:\left\{ {{{\bf{\Sigma }}_{\bf{X}}}{\rm{ is positive semi - definite}}} \right.\\proof::\\set:\left\{ {{\bf{a}} = {\rm{vector }}\left( {p \times 1} \right){\rm{ }}\left( {{\mathop{\rm const}\nolimits} } \right) \ne \vec 0} \right.\\{{\bf{a}}^T}\Sigma {\bf{a}} = {\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]^T}\left[ {\begin{array}{*{20}{c}}{{\sigma _{11}}}&{{\sigma _{12}}}& \cdots &{{\sigma _{1p}}}\\{{\sigma _{21}}}&{{\sigma _{22}}}& \cdots &{{\sigma _{2p}}}\\ \vdots & \vdots & \ddots & \vdots \\{{\sigma _{p1}}}&{{\sigma _{p2}}}& \cdots &{{\sigma _{pp}}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{{a_1}{\sigma _{11}} + {a_2}{\sigma _{21}} + \cdots + {a_p}{\sigma _{p1}}}& \cdots & \cdots &{{a_1}{\sigma _{1p}} + {a_2}{\sigma _{2p}} + \cdots + {a_p}{\sigma _{pp}}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{\sum\limits_{i = 1}^p {{a_i}{\sigma _{i1}}} }& \cdots & \cdots &{\sum\limits_{i = 1}^p {{a_i}{\sigma _{ip}}} }\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_1}} \right)} }& \cdots & \cdots &{\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_p}} \right)} }\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \sum\limits_{j = 1}^p {{a_j}\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } \\\left[ {rule:\left\{ {{\mathop{\rm var}} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm cov}} \left( {{X_i},{X_j}} \right)} } } \right.{\rm{ }}\left[ {see{\rm{ }}below} \right]} \right.\\ = {\mathop{\rm Var}\nolimits} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = {\mathop{\rm Var}\nolimits} \left( {{{\bf{a}}^T}{\bf{X}}} \right) \ge 0\end{array}
$$
Following is followed by the answer from sjm.majewski
$$\begin{array}{l}rule:\left\{ {{\mathop{\rm var}} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm cov}} \left( {{X_i},{X_j}} \right)} } } \right.\\proof,eg:\\\left\{ \begin{array}{l}{\mathop{\rm var}} \left( {{a_1}{X_1} + {a_2}{X_2} + {a_3}{X_3}} \right) = {\mathop{\rm var}} \left( {\sum\limits_{i = 1}^3 {{a_i}{X_i}} } \right)\\ = {\mathop{\rm var}} \left( {{a_1}{X_1} + {a_2}{X_2}} \right) + 2{\mathop{\rm cov}} \left( {{a_1}{X_1} + {a_2}{X_2},{a_3}{X_3}} \right) + {\mathop{\rm var}} \left( {{a_3}{X_3}} \right)\\ = \left( {{\mathop{\rm var}} \left( {{a_1}{X_1}} \right) + 2{\mathop{\rm cov}} \left( {{a_1}{X_1},{a_2}{X_2}} \right) + {\mathop{\rm var}} \left( {{a_2}{X_2}} \right)} \right) + 2\left( {{\mathop{\rm cov}} \left( {{a_1}{X_1},{a_3}{X_3}} \right) + {\mathop{\rm cov}} \left( {{a_2}{X_2},{a_3}{X_3}} \right)} \right) + {\mathop{\rm var}} \left( {{a_3}{X_3}} \right)\\ = \left( {{a_1}^2{\mathop{\rm var}} \left( {{X_1}} \right) + 2{a_1}{a_2}{\mathop{\rm cov}} \left( {{X_1},{X_2}} \right) + {a_2}^2{\mathop{\rm var}} \left( {{X_2}} \right)} \right) + 2\left( {{a_1}{a_3}{\mathop{\rm cov}} \left( {{X_1},{X_3}} \right) + {a_2}{a_3}{\mathop{\rm cov}} \left( {{X_2},{X_3}} \right)} \right) + {a_3}^2{\mathop{\rm var}} \left( {{X_3}} \right)\\ = {a_1}^2{\mathop{\rm var}} \left( {{X_1}} \right) + 2{a_1}{a_2}{\mathop{\rm cov}} \left( {{X_1},{X_2}} \right) + {a_2}^2{\mathop{\rm var}} \left( {{X_2}} \right) + 2{a_1}{a_3}{\mathop{\rm cov}} \left( {{X_1},{X_3}} \right) + 2{a_2}{a_3}{\mathop{\rm cov}} \left( {{X_2},{X_3}} \right) + {a_3}^2{\mathop{\rm var}} \left( {{X_3}} \right)\\ = \sum\limits_{j = 1}^3 {\sum\limits_{i = 1}^3 {{a_i}{a_j}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } \end{array} \right.\end{array}%
$$ | Is every covariance matrix positive definite? | $$\begin{array}{l}theory:\left\{ {{{\bf{\Sigma }}_{\bf{X}}}{\rm{ is positive semi - definite}}} \right.\\proof::\\set:\left\{ {{\bf{a}} = {\rm{vector }}\left( {p \times 1} \right){\rm{ }}\left( {{\mat | Is every covariance matrix positive definite?
$$\begin{array}{l}theory:\left\{ {{{\bf{\Sigma }}_{\bf{X}}}{\rm{ is positive semi - definite}}} \right.\\proof::\\set:\left\{ {{\bf{a}} = {\rm{vector }}\left( {p \times 1} \right){\rm{ }}\left( {{\mathop{\rm const}\nolimits} } \right) \ne \vec 0} \right.\\{{\bf{a}}^T}\Sigma {\bf{a}} = {\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]^T}\left[ {\begin{array}{*{20}{c}}{{\sigma _{11}}}&{{\sigma _{12}}}& \cdots &{{\sigma _{1p}}}\\{{\sigma _{21}}}&{{\sigma _{22}}}& \cdots &{{\sigma _{2p}}}\\ \vdots & \vdots & \ddots & \vdots \\{{\sigma _{p1}}}&{{\sigma _{p2}}}& \cdots &{{\sigma _{pp}}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{{a_1}{\sigma _{11}} + {a_2}{\sigma _{21}} + \cdots + {a_p}{\sigma _{p1}}}& \cdots & \cdots &{{a_1}{\sigma _{1p}} + {a_2}{\sigma _{2p}} + \cdots + {a_p}{\sigma _{pp}}}\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{\sum\limits_{i = 1}^p {{a_i}{\sigma _{i1}}} }& \cdots & \cdots &{\sum\limits_{i = 1}^p {{a_i}{\sigma _{ip}}} }\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \left[ {\begin{array}{*{20}{c}}{\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_1}} \right)} }& \cdots & \cdots &{\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_p}} \right)} }\end{array}} \right]\left[ {\begin{array}{*{20}{c}}{{a_1}}\\{{a_2}}\\ \vdots \\{{a_p}}\end{array}} \right]\\ = \sum\limits_{j = 1}^p {{a_j}\sum\limits_{i = 1}^p {{a_i}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } \\\left[ {rule:\left\{ {{\mathop{\rm var}} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm cov}} \left( {{X_i},{X_j}} \right)} } } \right.{\rm{ }}\left[ {see{\rm{ }}below} \right]} \right.\\ = {\mathop{\rm Var}\nolimits} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = {\mathop{\rm Var}\nolimits} \left( {{{\bf{a}}^T}{\bf{X}}} \right) \ge 0\end{array}
$$
Following is followed by the answer from sjm.majewski
$$\begin{array}{l}rule:\left\{ {{\mathop{\rm var}} \left( {\sum\limits_{i = 1}^p {{a_i}{X_i}} } \right) = \sum\limits_{j = 1}^p {\sum\limits_{i = 1}^p {{a_i}{a_j}{\mathop{\rm cov}} \left( {{X_i},{X_j}} \right)} } } \right.\\proof,eg:\\\left\{ \begin{array}{l}{\mathop{\rm var}} \left( {{a_1}{X_1} + {a_2}{X_2} + {a_3}{X_3}} \right) = {\mathop{\rm var}} \left( {\sum\limits_{i = 1}^3 {{a_i}{X_i}} } \right)\\ = {\mathop{\rm var}} \left( {{a_1}{X_1} + {a_2}{X_2}} \right) + 2{\mathop{\rm cov}} \left( {{a_1}{X_1} + {a_2}{X_2},{a_3}{X_3}} \right) + {\mathop{\rm var}} \left( {{a_3}{X_3}} \right)\\ = \left( {{\mathop{\rm var}} \left( {{a_1}{X_1}} \right) + 2{\mathop{\rm cov}} \left( {{a_1}{X_1},{a_2}{X_2}} \right) + {\mathop{\rm var}} \left( {{a_2}{X_2}} \right)} \right) + 2\left( {{\mathop{\rm cov}} \left( {{a_1}{X_1},{a_3}{X_3}} \right) + {\mathop{\rm cov}} \left( {{a_2}{X_2},{a_3}{X_3}} \right)} \right) + {\mathop{\rm var}} \left( {{a_3}{X_3}} \right)\\ = \left( {{a_1}^2{\mathop{\rm var}} \left( {{X_1}} \right) + 2{a_1}{a_2}{\mathop{\rm cov}} \left( {{X_1},{X_2}} \right) + {a_2}^2{\mathop{\rm var}} \left( {{X_2}} \right)} \right) + 2\left( {{a_1}{a_3}{\mathop{\rm cov}} \left( {{X_1},{X_3}} \right) + {a_2}{a_3}{\mathop{\rm cov}} \left( {{X_2},{X_3}} \right)} \right) + {a_3}^2{\mathop{\rm var}} \left( {{X_3}} \right)\\ = {a_1}^2{\mathop{\rm var}} \left( {{X_1}} \right) + 2{a_1}{a_2}{\mathop{\rm cov}} \left( {{X_1},{X_2}} \right) + {a_2}^2{\mathop{\rm var}} \left( {{X_2}} \right) + 2{a_1}{a_3}{\mathop{\rm cov}} \left( {{X_1},{X_3}} \right) + 2{a_2}{a_3}{\mathop{\rm cov}} \left( {{X_2},{X_3}} \right) + {a_3}^2{\mathop{\rm var}} \left( {{X_3}} \right)\\ = \sum\limits_{j = 1}^3 {\sum\limits_{i = 1}^3 {{a_i}{a_j}{\mathop{\rm Cov}\nolimits} \left( {{X_i},{X_j}} \right)} } \end{array} \right.\end{array}%
$$ | Is every covariance matrix positive definite?
$$\begin{array}{l}theory:\left\{ {{{\bf{\Sigma }}_{\bf{X}}}{\rm{ is positive semi - definite}}} \right.\\proof::\\set:\left\{ {{\bf{a}} = {\rm{vector }}\left( {p \times 1} \right){\rm{ }}\left( {{\mat |
3,849 | How to calculate pseudo-$R^2$ from R's logistic regression? | Don't forget the rms package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs.
Here is a toy example (with only one predictor):
set.seed(101)
n <- 200
x <- rnorm(n)
a <- 1
b <- -2
p <- exp(a+b*x)/(1+exp(a+b*x))
y <- factor(ifelse(runif(n)<p, 1, 0), levels=0:1)
mod1 <- glm(y ~ x, family=binomial)
summary(mod1)
This yields:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.8959 0.1969 4.55 5.36e-06 ***
x -1.8720 0.2807 -6.67 2.56e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 258.98 on 199 degrees of freedom
Residual deviance: 181.02 on 198 degrees of freedom
AIC: 185.02
Now, using the lrm function,
require(rms)
mod1b <- lrm(y ~ x)
You soon get a lot of model fit indices, including Nagelkerke $R^2$, with print(mod1b):
Logistic Regression Model
lrm(formula = y ~ x)
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 200 LR chi2 77.96 R2 0.445 C 0.852
0 70 d.f. 1 g 2.054 Dxy 0.705
1 130 Pr(> chi2) <0.0001 gr 7.801 gamma 0.705
max |deriv| 2e-08 gp 0.319 tau-a 0.322
Brier 0.150
Coef S.E. Wald Z Pr(>|Z|)
Intercept 0.8959 0.1969 4.55 <0.0001
x -1.8720 0.2807 -6.67 <0.0001
Here, $R^2=0.445$ and it is computed as $\left(1-\exp(-\text{LR}/n)\right)/\left(1-\exp(-(-2L_0)/n)\right)$, where LR is the $\chi^2$ stat (comparing the two nested models you described), whereas the denominator is just the max value for $R^2$. For a perfect model, we would expect $\text{LR}=2L_0$, that is $R^2=1$.
By hand,
> mod0 <- update(mod1, .~.-x)
> lr.stat <- lrtest(mod0, mod1)
> (1-exp(-as.numeric(lr.stat$stats[1])/n))/(1-exp(2*as.numeric(logLik(mod0)/n)))
[1] 0.4445742
> mod1b$stats["R2"]
R2
0.4445742
Ewout W. Steyerberg discussed the use of $R^2$ with GLM, in his book Clinical Prediction Models (Springer, 2009, § 4.2.2 pp. 58-60). Basically, the relationship between the LR statistic and Nagelkerke's $R^2$ is approximately linear (it will be more linear with low incidence). Now, as discussed on the earlier thread I linked to in my comment, you can use other measures like the $c$ statistic which is equivalent to the AUC statistic (there's also a nice illustration in the above reference, see Figure 4.6). | How to calculate pseudo-$R^2$ from R's logistic regression? | Don't forget the rms package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs.
Here is a toy example (with only one predictor):
set.seed(101)
n <- 200
x <- rnorm(n)
| How to calculate pseudo-$R^2$ from R's logistic regression?
Don't forget the rms package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs.
Here is a toy example (with only one predictor):
set.seed(101)
n <- 200
x <- rnorm(n)
a <- 1
b <- -2
p <- exp(a+b*x)/(1+exp(a+b*x))
y <- factor(ifelse(runif(n)<p, 1, 0), levels=0:1)
mod1 <- glm(y ~ x, family=binomial)
summary(mod1)
This yields:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.8959 0.1969 4.55 5.36e-06 ***
x -1.8720 0.2807 -6.67 2.56e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 258.98 on 199 degrees of freedom
Residual deviance: 181.02 on 198 degrees of freedom
AIC: 185.02
Now, using the lrm function,
require(rms)
mod1b <- lrm(y ~ x)
You soon get a lot of model fit indices, including Nagelkerke $R^2$, with print(mod1b):
Logistic Regression Model
lrm(formula = y ~ x)
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 200 LR chi2 77.96 R2 0.445 C 0.852
0 70 d.f. 1 g 2.054 Dxy 0.705
1 130 Pr(> chi2) <0.0001 gr 7.801 gamma 0.705
max |deriv| 2e-08 gp 0.319 tau-a 0.322
Brier 0.150
Coef S.E. Wald Z Pr(>|Z|)
Intercept 0.8959 0.1969 4.55 <0.0001
x -1.8720 0.2807 -6.67 <0.0001
Here, $R^2=0.445$ and it is computed as $\left(1-\exp(-\text{LR}/n)\right)/\left(1-\exp(-(-2L_0)/n)\right)$, where LR is the $\chi^2$ stat (comparing the two nested models you described), whereas the denominator is just the max value for $R^2$. For a perfect model, we would expect $\text{LR}=2L_0$, that is $R^2=1$.
By hand,
> mod0 <- update(mod1, .~.-x)
> lr.stat <- lrtest(mod0, mod1)
> (1-exp(-as.numeric(lr.stat$stats[1])/n))/(1-exp(2*as.numeric(logLik(mod0)/n)))
[1] 0.4445742
> mod1b$stats["R2"]
R2
0.4445742
Ewout W. Steyerberg discussed the use of $R^2$ with GLM, in his book Clinical Prediction Models (Springer, 2009, § 4.2.2 pp. 58-60). Basically, the relationship between the LR statistic and Nagelkerke's $R^2$ is approximately linear (it will be more linear with low incidence). Now, as discussed on the earlier thread I linked to in my comment, you can use other measures like the $c$ statistic which is equivalent to the AUC statistic (there's also a nice illustration in the above reference, see Figure 4.6). | How to calculate pseudo-$R^2$ from R's logistic regression?
Don't forget the rms package, by Frank Harrell. You'll find everything you need for fitting and validating GLMs.
Here is a toy example (with only one predictor):
set.seed(101)
n <- 200
x <- rnorm(n)
|
3,850 | How to calculate pseudo-$R^2$ from R's logistic regression? | To easily get a McFadden's pseudo $R^2$ for a fitted model in R, use the "pscl" package by Simon Jackman and use the pR2 command. http://cran.r-project.org/web/packages/pscl/index.html | How to calculate pseudo-$R^2$ from R's logistic regression? | To easily get a McFadden's pseudo $R^2$ for a fitted model in R, use the "pscl" package by Simon Jackman and use the pR2 command. http://cran.r-project.org/web/packages/pscl/index.html | How to calculate pseudo-$R^2$ from R's logistic regression?
To easily get a McFadden's pseudo $R^2$ for a fitted model in R, use the "pscl" package by Simon Jackman and use the pR2 command. http://cran.r-project.org/web/packages/pscl/index.html | How to calculate pseudo-$R^2$ from R's logistic regression?
To easily get a McFadden's pseudo $R^2$ for a fitted model in R, use the "pscl" package by Simon Jackman and use the pR2 command. http://cran.r-project.org/web/packages/pscl/index.html |
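A usage sketch (assuming a fitted binomial glm, e.g. the mod1 from the rms example above, and that pscl is installed):
library(pscl)
pR2(mod1)   # reports McFadden's pseudo-R^2 along with the Cox-Snell and Nagelkerke variants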
3,851 | How to calculate pseudo-$R^2$ from R's logistic regression? | Before calculating the pseudo-$R^2$ for your logistic regression, I want to ask you, do you think McFadden’s or McKelvey-Zavoina’s pseudo-$R^2$ measures good enough? The paper has been published. Surrogate R-squared measure
Cannot find a suitable R-squared measure for binary or ordinal data regression models? Here comes our new research product: the surrogate R-squared.
An R package will be made available. Stay tuned.
Old answer:
Firstly, the McFadden’s Pseudo-$R^2$ of the logistic model does not imply the proportion of the variance of the response explained by explanatory variables at all. But one of the purposes of developing a goodness-of-fit is this property. So I want to go deeper and broader for you to think about what could be a good goodness-of-fit measure for the logit/probit model.
Before answering this question, I want to emphasize “comparability” first. This is the reason why the OLS $R^2$ is such an extensively used measure of goodness of fit. It is useful because we can compare the $R^2$ of different models to get an idea about how each model performs to fit the data and what their adequacy of them is.
Another aspect of the “comparability” is when we compare models across different samples! consider the study of job satisfaction where data may be collected in three different fashions using (a) a quantitative score on a scale of 0-100; (b) a dichotomous indicator Yes/No, or (c) a five-category rating ranging from extremely unsatisfied to extremely satisfied. Although neither the samples nor the empirical models used to draw inferences are the same, “most empirical researchers are explicitly or implicitly making rough comparisons of ‘goodness of fit’ across” these models and samples, because they address similar domain questions (Veall and Zimmermann, 1996). In social studies, “the research experience in the area is far more important than any statistical criteria” such as what specific method is used to select variables (Veall and Zimmermann, 1996).
We believe it is vital to have a goodness-of-fit measure for logit/probit models that is analogous to the OLS R2, as it will ensure the comparability of different models across different samples for similar research questions. There is no existing pseudo-$R^2$ known so far that can meet all the needs.
To solve this issue, we have developed a new goodness-of-fit measure to resemble the OLS R-squared that can also imply the proportion of the variance of the surrogate response explained by explanatory variables. We have had the paper "A new goodness-of-fit measure for probit models: surrogate $R^2$" published with all the details. Please check the package webpage as well: https://xiaoruizhu.github.io/SurrogateRsq/.
Old answer:
Be careful with the calculation of Pseudo-$R^2$:
McFadden’s Pseudo-$R^2$ is calculated as $R^2_M=1- \frac{\ln\hat{L}_{full}}{\ln\hat{L}_{null}}$, where $\ln\hat{L}_{full}$ is the log-likelihood of the full model, and $\ln\hat{L}_{null}$ is the log-likelihood of the intercept-only model.
Two approaches to calculate Pseudo-$R^2$:
Use deviance: since $deviance = -2*ln(L_{full})$, $null.deviance = -2*ln(L_{null})$
pR2 = 1 - mod$deviance / mod$null.deviance # works for glm
But the above approach doesn't work for out-of-sample Pseudo $R^2$
Use the "logLik" function in R and the definition (also works in-sample)
mod_null <- glm(y ~ 1, family = binomial, data = insample)  # intercept-only model on the training data
1 - logLik(mod) / logLik(mod_null)                          # McFadden's pseudo-R^2
This can be slightly modified to compute out-of-sample Pseudo $R^2$
Example:
Out-of-sample pseudo-$R^2$
Usually, the out-of-sample pseudo-$R^2$ is calculated as $$R_p^2=1-\frac{L_{est.out}}{L_{null.out}},$$ where $L_{est.out}$ is the log-likelihood for the out-of-sample period based on the coefficients estimated in the in-sample period, while $L_{null.out}$ is the log-likelihood for an intercept-only model for the out-of-sample period.
Codes:
pred.out.link <- predict(mod, outSample, type = "link")
mod.out.null <- glm(y ~ 1, family = binomial, data = outSample)  # intercept-only model fitted on the out-of-sample data
pR2.out <- 1 - sum(outSample$y * pred.out.link - log(1 + exp(pred.out.link))) / logLik(mod.out.null) | How to calculate pseudo-$R^2$ from R's logistic regression? | Before calculating the pseudo-$R^2$ for your logistic regression, I want to ask you, do you think McFadden’s or McKelvey-Zavoina’s pseudo-$R^2$ measures good enough? The paper has been published. Surr | How to calculate pseudo-$R^2$ from R's logistic regression?
Before calculating the pseudo-$R^2$ for your logistic regression, I want to ask you, do you think McFadden’s or McKelvey-Zavoina’s pseudo-$R^2$ measures good enough? The paper has been published. Surrogate R-squared measure
Cannot find a suitable R-squared measure for binary or ordinal data regression models? Here comes our new research product: the surrogate R-squared.
An R package will be made available. Stay tuned.
Old answer:
Firstly, the McFadden’s Pseudo-$R^2$ of the logistic model does not imply the proportion of the variance of the response explained by explanatory variables at all. But one of the purposes of developing a goodness-of-fit is this property. So I want to go deeper and broader for you to think about what could be a good goodness-of-fit measure for the logit/probit model.
Before answering this question, I want to emphasize “comparability” first. This is the reason why the OLS $R^2$ is such an extensively used measure of goodness of fit. It is useful because we can compare the $R^2$ of different models to get an idea about how each model performs to fit the data and what their adequacy of them is.
Another aspect of the “comparability” is when we compare models across different samples! consider the study of job satisfaction where data may be collected in three different fashions using (a) a quantitative score on a scale of 0-100; (b) a dichotomous indicator Yes/No, or (c) a five-category rating ranging from extremely unsatisfied to extremely satisfied. Although neither the samples nor the empirical models used to draw inferences are the same, “most empirical researchers are explicitly or implicitly making rough comparisons of ‘goodness of fit’ across” these models and samples, because they address similar domain questions (Veall and Zimmermann, 1996). In social studies, “the research experience in the area is far more important than any statistical criteria” such as what specific method is used to select variables (Veall and Zimmermann, 1996).
We believe it is vital to have a goodness-of-fit measure for logit/probit models that is analogous to the OLS R2, as it will ensure the comparability of different models across different samples for similar research questions. There is no existing pseudo-$R^2$ known so far that can meet all the needs.
To solve this issue, we have developed a new goodness-of-fit measure to resemble the OLS R-squared that can also imply the proportion of the variance of the surrogate response explained by explanatory variables. We have had the paper "A new goodness-of-fit measure for probit models: surrogate $R^2$" published with all the details. Please check the package webpage as well: https://xiaoruizhu.github.io/SurrogateRsq/.
Old answer:
Be careful with the calculation of Pseudo-$R^2$:
McFadden’s Pseudo-$R^2$ is calculated as $R^2_M=1- \frac{\ln\hat{L}_{full}}{\ln\hat{L}_{null}}$, where $\ln\hat{L}_{full}$ is the log-likelihood of the full model, and $\ln\hat{L}_{null}$ is the log-likelihood of the intercept-only model.
Two approaches to calculate Pseudo-$R^2$:
Use deviance: since $deviance = -2*ln(L_{full})$, $null.deviance = -2*ln(L_{null})$
pR2 = 1 - mod$deviance / mod$null.deviance # works for glm
But the above approach doesn't work for out-of-sample Pseudo $R^2$
Use the "logLik" function in R and the definition (also works in-sample)
mod_null <- glm(y ~ 1, family = binomial, data = insample)  # intercept-only model on the training data
1 - logLik(mod) / logLik(mod_null)                          # McFadden's pseudo-R^2
This can be slightly modified to compute out-of-sample Pseudo $R^2$
Example:
Out-of-sample pseudo-$R^2$
Usually, the out-of-sample pseudo-$R^2$ is calculated as $$R_p^2=1-\frac{L_{est.out}}{L_{null.out}},$$ where $L_{est.out}$ is the log-likelihood for the out-of-sample period based on the coefficients estimated in the in-sample period, while $L_{null.out}$ is the log-likelihood for an intercept-only model for the out-of-sample period.
Codes:
pred.out.link <- predict(mod, outSample, type = "link")
mod.out.null <- glm(y ~ 1, family = binomial, data = outSample)  # intercept-only model fitted on the out-of-sample data
pR2.out <- 1 - sum(outSample$y * pred.out.link - log(1 + exp(pred.out.link))) / logLik(mod.out.null) | How to calculate pseudo-$R^2$ from R's logistic regression?
Before calculating the pseudo-$R^2$ for your logistic regression, I want to ask you, do you think McFadden’s or McKelvey-Zavoina’s pseudo-$R^2$ measures good enough? The paper has been published. Surr |
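Putting the in-sample and out-of-sample calculations above together in one self-contained sketch (simulated data; the split and variable names are only illustrative):
set.seed(42)
n <- 500
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 2 * x))
d <- data.frame(x = x, y = y)
inSample  <- d[1:400, ]     # "training" period
outSample <- d[401:500, ]   # "out-of-sample" period
mod      <- glm(y ~ x, family = binomial, data = inSample)
mod_null <- glm(y ~ 1, family = binomial, data = inSample)
1 - as.numeric(logLik(mod)) / as.numeric(logLik(mod_null))   # in-sample McFadden pseudo-R^2
pred.out.link <- predict(mod, outSample, type = "link")
ll.est.out    <- sum(outSample$y * pred.out.link - log(1 + exp(pred.out.link)))
mod.out.null  <- glm(y ~ 1, family = binomial, data = outSample)
1 - ll.est.out / as.numeric(logLik(mod.out.null))            # out-of-sample pseudo-R^2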
3,852 | How to calculate pseudo-$R^2$ from R's logistic regression? | if deviance were proportional to log likelihood, and one uses the definition (see for example McFadden's here)
pseudo R^2 = 1 - L(model) / L(intercept)
then the pseudo-$R^2$ above would be $1 - \frac{198.63}{958.66}$ = 0.7928
The question is: is reported deviance proportional to log likelihood? | How to calculate pseudo-$R^2$ from R's logistic regression? | if deviance were proportional to log likelihood, and one uses the definition (see for example McFadden's here)
pseudo R^2 = 1 - L(model) / L(intercept)
then the pseudo-$R^2$ above would be $1 - \frac | How to calculate pseudo-$R^2$ from R's logistic regression?
if deviance were proportional to log likelihood, and one uses the definition (see for example McFadden's here)
pseudo R^2 = 1 - L(model) / L(intercept)
then the pseudo-$R^2$ above would be $1 - \frac{198.63}{958.66}$ = 0.7928
The question is: is reported deviance proportional to log likelihood? | How to calculate pseudo-$R^2$ from R's logistic regression?
if deviance were proportional to log likelihood, and one uses the definition (see for example McFadden's here)
pseudo R^2 = 1 - L(model) / L(intercept)
then the pseudo-$R^2$ above would be $1 - \frac |
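For a Bernoulli (0/1) response the reported deviance is exactly $-2$ times the log-likelihood (the saturated-model term is zero), so the deviance-based and likelihood-based computations agree; a minimal sketch, assuming a 0/1 response y and a predictor x:
mod  <- glm(y ~ x, family = binomial)
mod0 <- glm(y ~ 1, family = binomial)
c(deviance(mod), -2 * as.numeric(logLik(mod)))           # identical for 0/1 data
1 - deviance(mod) / deviance(mod0)                       # McFadden's pseudo-R^2
1 - as.numeric(logLik(mod)) / as.numeric(logLik(mod0))   # the same value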
3,853 | How to calculate pseudo-$R^2$ from R's logistic regression? | If it's out of sample, then I believe the $R^2$ must be computed with the corresponding log-likelihoods as $R^2=1-\frac{ll_{full}}{ll_{constant}}$, where $ll_{full}$ is the log-likelihood of the test data with the predictive model calibrated on the training set, and $ll_{constant}$ is the log-likelihood of the test data with a model with just a constant fitted on the training set, and then use the fitted constant to predict on the test set, computing the probabilities and therefore the log-likelihood.
Note that linear regression is analogous: the out-of-sample $R^2$ is computed as $R^2=1-\frac{\sum_{i}(y_{i}-\hat{y}_i)^2}{\sum_{i}(y_{i}-\overline{y}_{train})^2}$, where in particular, if we look at the denominator term $\sum_{i}(y_{i}-\overline{y}_{train})^2$, the prediction uses the average over the training set, $\overline{y}_{train}$. This is as if we fit a model on the training data with just a constant, so we have to minimize $\sum_{i}(y_i-\beta_0)^2$, which results in $\hat{\beta}_0=\overline{y}_{train}$; this plain constant predictive model is then the one used as the benchmark (i.e. in the denominator of the out-of-sample $R^2$ term) for the computation of the out-of-sample $R^2$. | How to calculate pseudo-$R^2$ from R's logistic regression? | If its out of sample, then I believe the $R^2$ must be computed with the according log-likelihoods as $R^2=1-\frac{ll_{full}}{ll_{constant}}$, where $ll_{full}$ is the log-likelihood of the test data | How to calculate pseudo-$R^2$ from R's logistic regression?
If it's out of sample, then I believe the $R^2$ must be computed with the corresponding log-likelihoods as $R^2=1-\frac{ll_{full}}{ll_{constant}}$, where $ll_{full}$ is the log-likelihood of the test data with the predictive model calibrated on the training set, and $ll_{constant}$ is the log-likelihood of the test data with a model containing just a constant, fitted on the training set; the fitted constant is then used to predict probabilities on the testing set, from which that log-likelihood is computed.
Note that in a linear regression, is analogous, the out of sample $R^2$ is computed as $R^2=1-\frac{\sum_{i}(y_{i}-\hat{y}_i)^2}{\sum_{i}(y_{i}-\overline{y}_{train})^2}$, where in particular if we look at the denominator term $\sum_{i}(y_{i}-\overline{y}_{train})^2$, the prediction uses the average over the training set, $\overline{y}_{train}$. This is like if we fit a model in the training data with just a constant, so we have to minimize $\sum_{i}(y_i-\beta_0)^2$, which results in $\hat{\beta}_0=\overline{y}_{train}$, then, this plain constant predictive model is the one used as benchamrk (i.e. in the denominator of the oos $R^2$ term) for the computation of the out of sample $R^2$. | How to calculate pseudo-$R^2$ from R's logistic regression?
If its out of sample, then I believe the $R^2$ must be computed with the according log-likelihoods as $R^2=1-\frac{ll_{full}}{ll_{constant}}$, where $ll_{full}$ is the log-likelihood of the test data |
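A small base-R sketch of the linear-regression analogy in the answer above, with simulated data and a hypothetical train/test split; the point is that the benchmark in the denominator is the training-set mean.
set.seed(1)
d     <- data.frame(x = rnorm(200))
d$y   <- 2 + 3 * d$x + rnorm(200)
train <- d[1:150, ]
test  <- d[151:200, ]
fit  <- lm(y ~ x, data = train)
pred <- predict(fit, newdata = test)
1 - sum((test$y - pred)^2) / sum((test$y - mean(train$y))^2)   # out-of-sample R^2, benchmarked against mean(train$y)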
3,854 | Comparing SVM and logistic regression | Linear SVMs and logistic regression generally perform comparably in practice. Use SVM with a nonlinear kernel if you have reason to believe your data won't be linearly separable (or you need to be more robust to outliers than LR will normally tolerate). Otherwise, just try logistic regression first and see how you do with that simpler model. If logistic regression fails you, try an SVM with a non-linear kernel like an RBF.
EDIT:
Ok, let's talk about where the objective functions come from.
Logistic regression comes from the generalized linear model framework. A good discussion of the logistic regression objective function in this context can be found here: https://stats.stackexchange.com/a/29326/8451
The Support Vector Machines algorithm is much more geometrically motivated. Instead of assuming a probabilistic model, we're trying to find a particular optimal separating hyperplane, where we define "optimality" in the context of the support vectors. We don't have anything resembling the statistical model we use in logistic regression here, even though the linear case will give us similar results: really this just means that logistic regression does a pretty good job of producing "wide margin" classifiers, since that's all SVM is trying to do (specifically, SVM is trying to "maximize" the margin between the classes).
I'll try to come back to this later and get a bit deeper into the weeds, I'm just sort of in the middle of something :p | Comparing SVM and logistic regression | Linear SVMs and logistic regression generally perform comparably in practice. Use SVM with a nonlinear kernel if you have reason to believe your data won't be linearly separable (or you need to be mor | Comparing SVM and logistic regression
Linear SVMs and logistic regression generally perform comparably in practice. Use SVM with a nonlinear kernel if you have reason to believe your data won't be linearly separable (or you need to be more robust to outliers than LR will normally tolerate). Otherwise, just try logistic regression first and see how you do with that simpler model. If logistic regression fails you, try an SVM with a non-linear kernel like a RBF.
EDIT:
Ok, let's talk about where the objective functions come from.
The logistic regression comes from generalized linear regression. A good discussion of the logistic regression objective function in this context can be found here: https://stats.stackexchange.com/a/29326/8451
The Support Vector Machines algorithm is much more geometrically motivated. Instead of assuming a probabilistic model, we're trying to find a particular optimal separating hyperplane, where we define "optimality" in the context of the support vectors. We don't have anything resembling the statistical model we use in logistic regression here, even though the linear case will give us similar results: really this just means that logistic regression does a pretty good job of producing "wide margin" classifiers, since that's all SVM is trying to do (specifically, SVM is trying to "maximize" the margin between the classes).
I'll try to come back to this later and get a bit deeper into the weeds, I'm just sort of in the middle of something :p | Comparing SVM and logistic regression
Linear SVMs and logistic regression generally perform comparably in practice. Use SVM with a nonlinear kernel if you have reason to believe your data won't be linearly separable (or you need to be mor |
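A rough illustration of the trade-off described in the answer above (this assumes the e1071 package is installed; the data are simulated): a linear logistic regression struggles on a problem that is not linearly separable, while an RBF-kernel SVM handles it, at the cost of losing calibrated probabilities.
library(e1071)                                              # provides svm(); assumed to be installed
set.seed(1)
d   <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- factor(ifelse(d$x1^2 + d$x2^2 > 1.5, "out", "in"))   # circular, non-linear class boundary
lr      <- glm(y ~ x1 + x2, family = binomial, data = d)    # linear decision boundary
svm_rbf <- svm(y ~ x1 + x2, data = d, kernel = "radial")    # non-linear boundary via RBF kernel
p_lr <- predict(lr, type = "response")                      # calibrated P(y == "out") from LR
mean(ifelse(p_lr > 0.5, "out", "in") == d$y)                # LR training accuracy (poor here)
mean(predict(svm_rbf, d) == d$y)                            # SVM training accuracy (much better here)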
3,855 | Comparing SVM and logistic regression | This image illustrates the difference between SVM and logistic regression and where to use which method
this picture comes from the coursera course : "machine learning" by Andrew NG. It can be found in week 7 at the end of: "Support vector machines - using an SVM" | Comparing SVM and logistic regression | Image signifies the difference between SVM and Logistic Regression and where to use which method
this picture comes from the coursera course : "machine learning" by Andrew NG. It can be found in week | Comparing SVM and logistic regression
Image signifies the difference between SVM and Logistic Regression and where to use which method
this picture comes from the coursera course : "machine learning" by Andrew NG. It can be found in week 7 at the end of: "Support vector machines - using an SVM" | Comparing SVM and logistic regression
Image signifies the difference between SVM and Logistic Regression and where to use which method
this picture comes from the coursera course : "machine learning" by Andrew NG. It can be found in week |
3,856 | Comparing SVM and logistic regression | LR gives calibrated probabilities that can be interpreted as confidence in a decision.
LR gives us an unconstrained, smooth objective.
LR can be (straightforwardly) used within Bayesian models.
SVMs don’t penalize examples for which the correct decision is made with sufficient confidence. This may be good for generalization.
SVMs have a nice dual form, giving sparse solutions when using the kernel trick (better scalability).
Check out Support Vector Machines vs Logistic Regression, University of Toronto CSC2515, by Kevin Swersky. | Comparing SVM and logistic regression | LR gives calibrated probabilities that can be interpreted as
confidence in a decision.
LR gives us an unconstrained, smooth objective.
LR can be (straightforwardly) used within Bayesian models.
SVMs d | Comparing SVM and logistic regression
LR gives calibrated probabilities that can be interpreted as confidence in a decision.
LR gives us an unconstrained, smooth objective.
LR can be (straightforwardly) used within Bayesian models.
SVMs don’t penalize examples for which the correct decision is made with sufficient confidence. This may be good for generalization.
SVMs have a nice dual form, giving sparse solutions when using the kernel trick (better scalability).
Check out Support Vector Machines vs Logistic Regression, University of Toronto CSC2515, by Kevin Swersky. | Comparing SVM and logistic regression
LR gives calibrated probabilities that can be interpreted as
confidence in a decision.
LR gives us an unconstrained, smooth objective.
LR can be (straightforwardly) used within Bayesian models.
SVMs d |
3,857 | Comparing SVM and logistic regression | I think another advantage of LR is that it's actually optimising the weights of an interpretable function (e.g. Y = B0 + B1X1 +B2X2, where X1 and X2 are your predictor variables/features). This means that you could use the model with pen, paper and a basic scientific calculator and get a probability output if you wanted to.
All you have to do is calculate Y with the above optimised function, and plug Y into the sigmoid function to get a class probability between 0 and 1.
This might be useful in some fields/applications, although less and less as we move forward and can just plug numbers into an app and get a result from the model. | Comparing SVM and logistic regression | I think another advantage of LR is that it's actually optimising the weights of an interpretable function (e.g. Y = B0 + B1X1 +B2X2, where X1 and X2 are your predictor variables/features). This means | Comparing SVM and logistic regression
I think another advantage of LR is that it's actually optimising the weights of an interpretable function (e.g. Y = B0 + B1X1 +B2X2, where X1 and X2 are your predictor variables/features). This means that you could use the model with pen, paper and a basic scientific calculator and get a probability output if you wanted to.
All you have to do is calculate Y with the above optimised function, and plug Y into the sigmoid function to get a class probability between 0 and 1.
This might be useful in some fields/applications, although less and less as we move forward and can just plug numbers into an app and get a result from the model. | Comparing SVM and logistic regression
I think another advantage of LR is that it's actually optimising the weights of an interpretable function (e.g. Y = B0 + B1X1 +B2X2, where X1 and X2 are your predictor variables/features). This means |
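A tiny sketch of the "pen and paper" point in the answer above, using hypothetical (made-up, not fitted) coefficient values:
b0 <- -1.2; b1 <- 0.8; b2 <- 0.3   # hypothetical fitted coefficients
x1 <- 2.0;  x2 <- 1.5              # a new observation
eta <- b0 + b1 * x1 + b2 * x2      # the linear predictor ("Y" in the notation above)
1 / (1 + exp(-eta))                # sigmoid -> class probability in (0, 1); same as plogis(eta)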
3,858 | Best practice when analysing pre-post treatment-control designs | There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g.
Senn, S. (2007). Statistical issues in drug development. Wiley (chap. 7, pp. 96-112).
In biomedical research, interesting work has also been done in the study of cross-over trials (esp. in relation to carry-over effects, although I don't know how applicable it is to your study).
From Gain Score t to ANCOVA F (and vice versa), from Knapp & Schaffer, provides an interesting review of ANCOVA vs. t approach (the so-called Lord's Paradox). The simple analysis of change scores is not the recommended way for pre/post design according to Senn in his article Change from baseline and analysis of covariance revisited (Stat. Med. 2006 25(24)). Moreover, using a mixed-effects model (e.g. to account for the correlation between the two time points) is not better because you really need to use the "pre" measurement as a covariate to increase precision (through adjustment). Very briefly:
The use of change scores (post $-$ pre, or outcome $-$ baseline) does not solve the problem of imbalance; the correlation between pre and post measurement is < 1, and the correlation between pre and (post $-$ pre) is generally negative -- it follows that if the treatment (your group allocation) as measured by raw scores happens to be at an unfair disadvantage compared to control, it will have an unfair advantage with change scores.
The variance of the estimator used in ANCOVA is generally lower than that for raw or change scores (unless correlation between pre and post equals 1).
If the pre/post relationships differ between the two groups (slope), it is not as much of a problem as it is for the other methods (the change scores approach also assumes that the relationship is identical between the two groups -- the parallel slope hypothesis).
Under the null hypothesis of equality of treatment (on the outcome), no interaction treatment x baseline is expected; it is dangerous to fit such a model, but in this case one must use centered baselines (otherwise, the treatment effect is estimated at the covariate origin).
I also like Ten Difference Score Myths from Edwards, although it focuses on difference scores in a different context; but here is an annotated bibliography on the analysis of pre-post change (unfortunately, it doesn't cover very recent work). Van Breukelen also compared ANOVA vs. ANCOVA in randomized and non-randomized settings, and his conclusions support the idea that ANCOVA is to be preferred, at least in randomized studies (which protect against regression to the mean). | Best practice when analysing pre-post treatment-control designs | There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g.
Senn, S (2007). Statistical issues in
drug development. Wiley | Best practice when analysing pre-post treatment-control designs
There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g.
Senn, S. (2007). Statistical issues in drug development. Wiley (chap. 7, pp. 96-112).
In biomedical research, interesting work has also been done in the study of cross-over trials (esp. in relation to carry-over effects, although I don't know how applicable it is to your study).
From Gain Score t to ANCOVA F (and vice versa), from Knapp & Schaffer, provides an interesting review of ANCOVA vs. t approach (the so-called Lord's Paradox). The simple analysis of change scores is not the recommended way for pre/post design according to Senn in his article Change from baseline and analysis of covariance revisited (Stat. Med. 2006 25(24)). Moreover, using a mixed-effects model (e.g. to account for the correlation between the two time points) is not better because you really need to use the "pre" measurement as a covariate to increase precision (through adjustment). Very briefly:
The use of change scores (post $-$ pre, or outcome $-$ baseline) does not solve the problem of imbalance; the correlation between pre and post measurement is < 1, and the correlation between pre and (post $-$ pre) is generally negative -- it follows that if the treatment (your group allocation) as measured by raw scores happens to be an unfair disadvantage compared to control, it will have an unfair advantage with change scores.
The variance of the estimator used in ANCOVA is generally lower than that for raw or change scores (unless correlation between pre and post equals 1).
If the pre/post relationships differ between the two groups (slope), it is not as much of a problem than for any other methods (the change scores approach also assumes that the relationship is identical between the two groups -- the parallel slope hypothesis).
Under the null hypothesis of equality of treatment (on the outcome), no interaction treatment x baseline is expected; it is dangerous to fit such a model, but in this case one must use centered baselines (otherwise, the treatment effect is estimated at the covariate origin).
I also like Ten Difference Score Myths from Edwards, although it focuses on difference scores in a different context; but here is an annotated bibliography on the analysis of pre-post change (unfortunately, it doesn't cover very recent work). Van Breukelen also compared ANOVA vs. ANCOVA in randomized and non-randomized setting, and his conclusions support the idea that ANCOVA is to be preferred, at least in randomized studies (which prevent from regression to the mean effect). | Best practice when analysing pre-post treatment-control designs
There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g.
Senn, S (2007). Statistical issues in
drug development. Wiley |
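A base-R sketch of the two analyses contrasted in the answer above, on simulated pre/post data (all variable names are made up): ANCOVA with the baseline as covariate, versus a t-test on change scores.
set.seed(42)
n     <- 50
group <- factor(rep(c("treatment", "control"), each = n))
pre   <- rnorm(2 * n, mean = 100, sd = 15)
post  <- 0.7 * pre + 8 * (group == "treatment") + rnorm(2 * n, sd = 10)
summary(lm(post ~ pre + group))    # ANCOVA: baseline-adjusted treatment effect (generally more precise)
change <- post - pre               # change-score analysis, for comparison
t.test(change ~ group)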
3,859 | Best practice when analysing pre-post treatment-control designs | Daniel B. Wright discusses this in section 5 of his article Making Friends with your Data.
He suggests (p.130):
The only procedure that is always correct in this situation is a scatterplot comparing the scores at time 2 with those at time 1 for the different groups. In most cases you should analyse the data in several ways. If the approaches give different results ... think more carefully about the model implied by each.
He recommends the following articles as further reading:
Hand, D. J. (1994). Deconstructing statistical questions. Journal of the Royal Statistical Society: A, 157, 317–356.
Lord, F. M. (1967). A paradox in the interpretation of group comparisons. Psychological Bulletin, 72, 304–305. Free PDF
Wainer, H. (1991). Adjusting for differential base rates: Lord’s paradox again. Psychological Bulletin, 109, 147–151. Free PDF | Best practice when analysing pre-post treatment-control designs | Daniel B. Wright discusses this in section 5 of his article Making Friends with your Data.
He suggests (p.130):
The only procedure that
is always correct in this situation is
a scatterplot compar | Best practice when analysing pre-post treatment-control designs
Daniel B. Wright discusses this in section 5 of his article Making Friends with your Data.
He suggests (p.130):
The only procedure that is always correct in this situation is a scatterplot comparing the scores at time 2 with those at time 1 for the different groups. In most cases you should analyse the data in several ways. If the approaches give different results ... think more carefully about the model implied by each.
He recommends the following articles as further reading:
Hand, D. J. (1994). Deconstructing statistical questions. Journal of the Royal Statistical Society: A, 157, 317–356.
Lord, F. M. (1967). A paradox in the interpretation of group comparisons. Psychological Bulletin, 72, 304–305. Free PDF
Wainer, H. (1991). Adjusting for differential base rates: Lord’s paradox again. Psychological Bulletin, 109, 147–151. Free PDF | Best practice when analysing pre-post treatment-control designs
Daniel B. Wright discusses this in section 5 of his article Making Friends with your Data.
He suggests (p.130):
The only procedure that
is always correct in this situation is
a scatterplot compar |
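A minimal base-R version of the scatterplot Wright recommends in the answer above, using made-up data:
set.seed(2)
group <- factor(rep(c("control", "treatment"), each = 30))
t1    <- rnorm(60, mean = 50, sd = 10)
t2    <- t1 + 4 * (group == "treatment") + rnorm(60, sd = 6)
plot(t1, t2, col = as.integer(group), pch = 19,
     xlab = "Score at time 1", ylab = "Score at time 2")
abline(a = 0, b = 1, lty = 2)                                   # reference line: no change
legend("topleft", legend = levels(group), col = 1:2, pch = 19)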
3,860 | Best practice when analysing pre-post treatment-control designs | The most common strategies would be:
Repeated measures ANOVA with one within-subject factor (pre vs. post-test) and one between-subject factor (treatment vs. control).
ANCOVA on the post-treatment scores, with pre-treatment score as a covariate and treatment as an independent variable. Intuitively, the idea is that a test of the differences between both groups is really what you are after and including pre-test scores as a covariate can increase power compared to a simple t-test or ANOVA.
There are many discussions on the interpretation, assumptions, and apparently paradoxical differences between these two approaches and on more sophisticated alternatives (especially when participants cannot be randomly assigned to treatment) but they remain pretty standard, I think.
One important source of confusion is that for the ANOVA, the effect of interest is most likely the interaction between time and treatment and not the treatment main effect. Incidentally, the F-test for this interaction term will yield exactly the same result as an independent-sample t-test on gain scores (i.e. scores obtained by subtracting the pre-test score from the post-test score for each participant) so you might also go for that.
If all this is too much, you don't have time to figure it out, and cannot obtain some help from a statistician, a quick and dirty but by no means entirely absurd approach would be to simply compare the post-test scores with an independent sample t-test, ignoring pre-test values. This only makes sense if participants were in fact randomly assigned to the treatment or control group.
Finally, that's not in itself a very good reason to choose it but I suspect approach 2 above (ANCOVA) is what currently passes for the right approach in psychology so if you choose something else you might have to explain the technique in detail or to justify yourself to someone who is convinced, e.g. that “gain scores are known to be bad”. | Best practice when analysing pre-post treatment-control designs | The most common strategies would be:
Repeated measures ANOVA with one within-subject factor (pre vs. post-test) and one between-subject factor (treatment vs. control).
ANCOVA on the post-treatment sc | Best practice when analysing pre-post treatment-control designs
The most common strategies would be:
Repeated measures ANOVA with one within-subject factor (pre vs. post-test) and one between-subject factor (treatment vs. control).
ANCOVA on the post-treatment scores, with pre-treatment score as a covariate and treatment as an independent variable. Intuitively, the idea is that a test of the differences between both groups is really what you are after and including pre-test scores as a covariate can increase power compared to a simple t-test or ANOVA.
There are many discussions on the interpretation, assumptions, and apparently paradoxical differences between these two approaches and on more sophisticated alternatives (especially when participants cannot be randomly assigned to treatment) but they remain pretty standard, I think.
One important source of confusion is that for the ANOVA, the effect of interest is most likely the interaction between time and treatment and not the treatment main effect. Incidentally, the F-test for this interaction term will yield exactly the same result than an independent sample t-test on gain scores (i.e. scores obtained by subtracting the pre-test score from the post-test score for each participant) so you might also go for that.
If all this is too much, you don't have time to figure it out, and cannot obtain some help from a statistician, a quick and dirty but by no means entirely absurd approach would be to simply compare the post-test scores with an independent sample t-test, ignoring pre-test values. This only makes sense if participants were in fact randomly assigned to the treatment or control group.
Finally, that's not in itself a very good reason to choose it but I suspect approach 2 above (ANCOVA) is what currently passes for the right approach in psychology so if you choose something else you might have to explain the technique in detail or to justify yourself to someone who is convinced, e.g. that “gain scores are known to be bad”. | Best practice when analysing pre-post treatment-control designs
The most common strategies would be:
Repeated measures ANOVA with one within-subject factor (pre vs. post-test) and one between-subject factor (treatment vs. control).
ANCOVA on the post-treatment sc |
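A sketch of the equivalence mentioned in the answer above, on simulated data: the time-by-group interaction F from the 2x2 mixed ANOVA should match the squared t from an equal-variance t-test on gain scores (up to rounding).
set.seed(3)
n     <- 40
group <- factor(rep(c("control", "treatment"), each = n))
pre   <- rnorm(2 * n, 50, 10)
post  <- pre + 5 + 3 * (group == "treatment") + rnorm(2 * n, sd = 5)
long <- data.frame(id    = factor(rep(1:(2 * n), times = 2)),
                   group = rep(group, times = 2),
                   time  = factor(rep(c("pre", "post"), each = 2 * n)),
                   score = c(pre, post))
summary(aov(score ~ group * time + Error(id/time), data = long))  # interaction F in the id:time stratum
gain <- post - pre
t.test(gain ~ group, var.equal = TRUE)                            # t^2 should equal the interaction F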
3,861 | Best practice when analysing pre-post treatment-control designs | ANCOVA and repeated measures/mixed model for interaction term are testing two different hypotheses.
Refer to this article: article 1 and this article: article 2 | Best practice when analysing pre-post treatment-control designs | ANCOVA and repeated measures/mixed model for interaction term are testing two different hypotheses.
Refer to this article: ariticle 1 and this article: article 2 | Best practice when analysing pre-post treatment-control designs
ANCOVA and repeated measures/mixed model for interaction term are testing two different hypothesis.
Refer to this article: ariticle 1 and this article: article 2 | Best practice when analysing pre-post treatment-control designs
ANCOVA and repeated measures/mixed model for interaction term are testing two different hypothesis.
Refer to this article: ariticle 1 and this article: article 2 |
3,862 | Best practice when analysing pre-post treatment-control designs | Since you have two means (either of a specific item, or of the sum of the inventory), there's no reason to consider an ANOVA. A paired t-test is probably appropriate; this may help you choose which t-test you need.
Do you want to look at item-specific results, or at overall scores? If you want to do an item analysis, this might be a useful starting place. | Best practice when analysing pre-post treatment-control designs | Since you have two means (either of a specific item, or of the sum of the inventory), there's no reason to consider an ANOVA. A paired t-test is probably appropriate; this may help you choose which t- | Best practice when analysing pre-post treatment-control designs
Since you have two means (either of a specific item, or of the sum of the inventory), there's no reason to consider an ANOVA. A paired t-test is probably appropriate; this may help you choose which t-test you need.
Do you want to look at item-specific results, or at overall scores? If you want to do an item analysis, this might be a useful starting place. | Best practice when analysing pre-post treatment-control designs
Since you have two means (either of a specific item, or of the sum of the inventory), there's no reason to consider an ANOVA. A paired t-test is probably appropriate; this may help you choose which t- |
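For completeness, the paired t-test suggested in the answer above is a one-liner in R (the score vectors here are made up):
pre  <- c(12, 15, 9, 14, 10, 13, 11, 16)   # hypothetical pre-test totals
post <- c(14, 18, 10, 15, 13, 15, 12, 17)  # hypothetical post-test totals
t.test(post, pre, paired = TRUE)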
3,863 | Can someone explain the concept of 'exchangeability'? | Exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Formally, a sequence is exchangeable if its joint probability distribution is a symmetric function of its $n$ arguments. Intuitively it means we can swap around, or reorder, variables in the sequence without changing their joint distribution. For example, every IID (independent, identically distributed) sequence is exchangeable - but not the other way around. Every exchangeable sequence is identically distributed, though.
Imagine a table with a bunch of urns on top, each containing different proportions of red and green balls. We choose an urn at random (according to some prior distribution), and then take a sample (without replacement) from the selected urn.
Note that the reds and greens that we observe are NOT independent. And it is maybe not a surprise to learn that the sequence of reds and greens we observe is an exchangeable sequence. What is maybe surprising is that EVERY exchangeable sequence can be imagined this way, for a suitable choice of urns and prior distribution. (see Diaconis/Freedman (1980) "Finite Exchangeable Sequences", Ann. Prob.).
The concept is invoked in all sorts of places, and it is especially useful in Bayesian contexts because in those settings we have a prior distribution (our knowledge of the distribution of urns on the table) and we have a likelihood running around (a model which loosely represents the sampling procedure from a given, fixed, urn). We observe the sequence of reds and greens (the data) and use that information to update our beliefs about the particular urn in our hand (i.e., our posterior), or more generally, the urns on the table.
Exchangeable random variables are especially wonderful because if we have infinitely many of them then we have tomes of mathematical machinery at our fingertips not the least of which being de Finetti's Theorem; see Wikipedia for an introduction. | Can someone explain the concept of 'exchangeability'? | Exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Formally, a sequence is exchangeable if its joint probability distribution is a symme | Can someone explain the concept of 'exchangeability'?
Exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Formally, a sequence is exchangeable if its joint probability distribution is a symmetric function of its $n$ arguments. Intuitively it means we can swap around, or reorder, variables in the sequence without changing their joint distribution. For example, every IID (independent, identically distributed) sequence is exchangeable - but not the other way around. Every exchangeable sequence is identically distributed, though.
Imagine a table with a bunch of urns on top, each containing different proportions of red and green balls. We choose an urn at random (according to some prior distribution), and then take a sample (without replacement) from the selected urn.
Note that the reds and greens that we observe are NOT independent. And it is maybe not a surprise to learn that the sequence of reds and greens we observe is an exchangeable sequence. What is maybe surprising is that EVERY exchangeable sequence can be imagined this way, for a suitable choice of urns and prior distribution. (see Diaconis/Freedman (1980) "Finite Exchangeable Sequences", Ann. Prob.).
The concept is invoked in all sorts of places, and it is especially useful in Bayesian contexts because in those settings we have a prior distribution (our knowledge of the distribution of urns on the table) and we have a likelihood running around (a model which loosely represents the sampling procedure from a given, fixed, urn). We observe the sequence of reds and greens (the data) and use that information to update our beliefs about the particular urn in our hand (i.e., our posterior), or more generally, the urns on the table.
Exchangeable random variables are especially wonderful because if we have infinitely many of them then we have tomes of mathematical machinery at our fingertips not the least of which being de Finetti's Theorem; see Wikipedia for an introduction. | Can someone explain the concept of 'exchangeability'?
Exchangeability is meant to capture symmetry in a problem, symmetry in a sense that does not require independence. Formally, a sequence is exchangeable if its joint probability distribution is a symme |
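A small simulation sketch of the urn story above (drawing with replacement for simplicity, which matches the infinite-sequence / de Finetti picture): the draws are identically distributed and exchangeable, but not independent once the urn choice is averaged over.
set.seed(4)
draw_sequence <- function(n = 5) {
  theta <- sample(c(0.1, 0.5, 0.9), size = 1)   # pick an urn, i.e. a red-ball proportion, from the prior
  rbinom(n, size = 1, prob = theta)             # 1 = red, 0 = green
}
sims <- t(replicate(20000, draw_sequence()))
mean(sims[, 1]); mean(sims[, 5])                # identically distributed (both near 0.5)
cor(sims[, 1], sims[, 2])                       # clearly positive: draws are NOT independent
mean(sims[, 1] == 1 & sims[, 2] == 0)           # P(red, green) ...
mean(sims[, 1] == 0 & sims[, 2] == 1)           # ... equals P(green, red): the order can be swapped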
3,864 | Under what conditions should Likert scales be used as ordinal or interval data? | Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (gender, country, etc.), you may treat your scores as numeric values, provided they fulfill usual assumptions about variance (or shape) and sample size. If you are rather interested in highlighting how response patterns vary across subgroups, then you should consider item scores as discrete choice among a set of answer options and look for log-linear modeling, ordinal logistic regression, item-response models or any other statistical model that allows to cope with polytomous items.
As a rule of thumb, one generally considers that having 11 distinct points on a scale is sufficient to approximate an interval scale (for interpretation purposes, see @xmjx's comment). Likert items may be regarded as a true ordinal scale, but they are often used as numeric and we can compute their mean or SD. This is often done in attitude surveys, although it is wise to report both mean/SD and % of response in, e.g. the two highest categories.
When using summated scale scores (i.e., we add up the scores on each item to compute a "total score"), usual statistics may be applied, but you have to keep in mind that you are now working with a latent variable, so the underlying construct should make sense! In psychometrics, we generally check that (1) unidimensionality of the scale holds, and (2) scale reliability is sufficient. When comparing two such scale scores (for two different instruments), we might even consider using attenuated correlation measures instead of the classical Pearson correlation coefficient.
Classical textbooks include:
1. Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill Series in Psychology.
2. Streiner, D.L. and Norman, G.R. (2008). Health Measurement Scales. A practical guide to their development and use (4th ed.). Oxford.
3. Rao, C.R. and Sinharay, S., Eds. (2007). Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V.
4. Dunn, G. (2000). Statistics in Psychiatry. Hodder Arnold.
You may also have a look at Applications of latent trait and latent class models in the social sciences, from Rost & Langeheine, and W. Revelle's website on personality research.
When validating a psychometric scale, it is important to look at so-called ceiling/floor effects (large asymmetry resulting from participants scoring at the lowest/highest response category), which may seriously impact on any statistics computed when treating them as numeric variable (e.g., country aggregation, t-test). This raises specific issues in cross-cultural studies since it is known that overall response distribution in attitude or health surveys differ from one country to the other (e.g. chinese people vs. those coming from western countries tend to highlight specific response pattern, the former having generally more extreme scores at the item level, see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland).
More generally, you should look at the psychometrics literature, which makes extensive use of Likert items, if you are interested in measurement issues. Various statistical models have been developed and are currently grouped under the Item Response Theory framework. | Under what conditions should Likert scales be used as ordinal or interval data? | Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (g | Under what conditions should Likert scales be used as ordinal or interval data?
Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (gender, country, etc.), you may treat your scores as numeric values, provided they fulfill usual assumptions about variance (or shape) and sample size. If you are rather interested in highlighting how response patterns vary across subgroups, then you should consider item scores as discrete choice among a set of answer options and look for log-linear modeling, ordinal logistic regression, item-response models or any other statistical model that allows to cope with polytomous items.
As a rule of thumb, one generally considers that having 11 distinct points on a scale is sufficient to approximate an interval scale (for interpretation purpose, see @xmjx's comment)). Likert items may be regarded as true ordinal scale, but they are often used as numeric and we can compute their mean or SD. This is often done in attitude surveys, although it is wise to report both mean/SD and % of response in, e.g. the two highest categories.
When using summated scale scores (i.e., we add up score on each item to compute a "total score"), usual statistics may be applied, but you have to keep in mind that you are now working with a latent variable so the underlying construct should make sense! In psychometrics, we generally check that (1) unidimensionnality of the scale holds, (2) scale reliability is sufficient. When comparing two such scale scores (for two different instruments), we might even consider using attenuated correlation measures instead of classical Pearson correlation coefficient.
Classical textbooks include:
1. Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill Series in Psychology.
2. Streiner, D.L. and Norman, G.R. (2008). Health Measurement Scales. A practical guide to their development and use (4th ed.). Oxford.
3. Rao, C.R. and Sinharay, S., Eds. (2007). Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V.
4. Dunn, G. (2000). Statistics in Psychiatry. Hodder Arnold.
You may also have a look at Applications of latent trait and latent class models in the social sciences, from Rost & Langeheine, and W. Revelle's website on personality research.
When validating a psychometric scale, it is important to look at so-called ceiling/floor effects (large asymmetry resulting from participants scoring at the lowest/highest response category), which may seriously impact on any statistics computed when treating them as numeric variable (e.g., country aggregation, t-test). This raises specific issues in cross-cultural studies since it is known that overall response distribution in attitude or health surveys differ from one country to the other (e.g. chinese people vs. those coming from western countries tend to highlight specific response pattern, the former having generally more extreme scores at the item level, see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland).
More generally, you should look at the psychometric-related literature which makes extensive use of Likert items if you are interested with measurement issue. Various statistical models have been developed and are currently headed under the Item Response Theory framework. | Under what conditions should Likert scales be used as ordinal or interval data?
Maybe too late but I add my answer anyway...
It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (g |
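A sketch of the two modelling routes mentioned in the answer above, on simulated data (this assumes the MASS package for polr, a proportional-odds ordinal regression): treat the item as an ordered factor, or take the common shortcut of treating the 1-5 codes as numeric.
library(MASS)                                              # polr(); assumed to be available
set.seed(5)
n      <- 300
gender <- factor(sample(c("f", "m"), n, replace = TRUE))
latent <- 0.6 * (gender == "f") + rnorm(n)
item   <- cut(latent, breaks = c(-Inf, -1, 0, 0.5, 1.2, Inf),
              labels = 1:5, ordered_result = TRUE)         # a simulated 5-point Likert item
d <- data.frame(gender, item)
summary(polr(item ~ gender, data = d, Hess = TRUE))        # ordinal route: proportional-odds model
t.test(as.numeric(item) ~ gender, data = d)                # common shortcut: scores treated as numeric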
3,865 | Under what conditions should Likert scales be used as ordinal or interval data? | The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the distinction between ordinal and interval is based on the specific demands of the analysis being performed. Under special circumstances, you may be able to treat the responses as if they fell on an interval scale. To do this, typically the respondents need to be in close agreement regarding the meaning of the scale responses and the analysis (or the decisions made based on the analysis) should be relatively insensitive to problems that may arise. | Under what conditions should Likert scales be used as ordinal or interval data? | The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the | Under what conditions should Likert scales be used as ordinal or interval data?
The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the distinction between ordinal and interval is based on the specific demands of the analysis being performed. Under special circumstances, you may be able to treat the responses as if they fell on an interval scale. To do this, typically the respondents need to be in close agreement regarding the meaning of the scale responses and the analysis (or the decisions made based on the analysis) should be relatively insensitive to problems that may arise. | Under what conditions should Likert scales be used as ordinal or interval data?
The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the |
3,866 | Under what conditions should Likert scales be used as ordinal or interval data? | In addition to what has already been said above about summated scales, I'd also mention that the issue can change when analysing data at the group-level. For example, if you were examining
life satisfaction of states or countries,
job satisfaction of organisations or departments,
student satisfaction in subjects.
In all these cases each aggregate measure (perhaps the mean) is based on many individual responses (e.g., n=50, 100, 1000, etc.). In these cases the original Likert item begins to take on properties that resemble an interval scale at the aggregate level. | Under what conditions should Likert scales be used as ordinal or interval data? | In addition to what has already been said above about summated scales, I'd also mention that the issue can change when analysing data at the group-level. For example, if you were examining
life satis | Under what conditions should Likert scales be used as ordinal or interval data?
In addition to what has already been said above about summated scales, I'd also mention that the issue can change when analysing data at the group-level. For example, if you were examining
life satisfaction of states or countries,
job satisfaction of organisations or departments,
student satisfaction in subjects.
In all these cases each aggregate measure (perhaps the mean) is based on many individual responses (e.g., n=50, 100, 1000, etc.). In these cases the original Likert item begins to take on properties that resemble an interval scale at the aggregate level. | Under what conditions should Likert scales be used as ordinal or interval data?
In addition to what has already been said above about summated scales, I'd also mention that the issue can change when analysing data at the group-level. For example, if you were examining
life satis |
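A quick simulation of the aggregation point above (made-up data): individual responses are coarse 1-5 codes, but group means based on many respondents take finely graded values and behave much more like interval data.
set.seed(6)
d <- data.frame(
  dept = factor(rep(paste0("dept", 1:8), each = 200)),
  item = sample(1:5, 8 * 200, replace = TRUE,
                prob = c(0.10, 0.20, 0.30, 0.25, 0.15))    # coarse ordinal responses
)
aggregate(item ~ dept, data = d, FUN = mean)               # department-level means vary almost continuously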
3,867 | Under what conditions should Likert scales be used as ordinal or interval data? | A Likert scale is always ordinal in form
: A method of ascribing quantitativevalue to qualitative data, to make it amenable to statistical analysis. A numerical value is assigned to each potential choice and a mean figure for all the responses is computed at the end of the evaluation or survey. | Under what conditions should Likert scales be used as ordinal or interval data? | likert scale always in ordinal form
: A method of ascribing quantitativevalue to qualitative data, to make it amenable to statistical analysis. A numerical value is assigned to each potential choice a | Under what conditions should Likert scales be used as ordinal or interval data?
likert scale always in ordinal form
: A method of ascribing quantitativevalue to qualitative data, to make it amenable to statistical analysis. A numerical value is assigned to each potential choice and a mean figure for all the responses is computed at the end of the evaluation or survey. | Under what conditions should Likert scales be used as ordinal or interval data?
likert scale always in ordinal form
: A method of ascribing quantitativevalue to qualitative data, to make it amenable to statistical analysis. A numerical value is assigned to each potential choice a |
3,868 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | I believe the papers, articles, posts e.t.c. that you diligently gathered, contain enough information and analysis as to where and why the two approaches differ. But being different does not mean being incompatible.
The problem with the "hybrid" is that it is a hybrid and not a synthesis, and this is why it is treated by many as a hybris, if you excuse the word-play.
Not being a synthesis, it does not attempt to combine the differences of the two approaches, and either create one unified and internally consistent approach, or keep both approaches in the scientific arsenal as complementary alternatives, in order to deal more effectively with the very complex world we try to analyze through Statistics (thankfully, this last thing is what appears to be happening with the other great civil war of the field, the frequentist-bayesian one).
The dissatisfaction with it, I believe, comes from the fact that it has indeed created misunderstandings in applying the statistical tools and interpreting the statistical results, mainly by scientists who are not statisticians, misunderstandings that can have very serious and damaging effects (thinking about the field of medicine helps give the issue its appropriate dramatic tone). This misapplication is, I believe, widely accepted as a fact, and in that sense the "anti-hybrid" point of view can be considered widespread (at least due to the consequences it had, if not for its methodological issues).
I see the evolution of the matter so far as a historical accident (but I don't have a $p$-value or a rejection region for my hypothesis), due to the unfortunate battle between the founders. Fisher and Neyman/Pearson have fought bitterly and publicly for decades over their approaches. This created the impression that here is a dichotomous matter: the one approach must be "right", and the other must be "wrong".
The hybrid emerged, I believe, out of the realization that no such easy answer existed, and that there were real-world phenomena to which the one approach is better suited than the other (see this post for such an example, according to me at least, where the Fisherian approach seems more suitable). But instead of keeping the two "separate and ready to act", they were rather superfluously patched together.
I offer a source which summarizes this "complementary alternative" approach:
Spanos, A. (1999). Probability theory and statistical inference: econometric modeling with observational data. Cambridge University Press., ch. 14, especially Section 14.5, where after presenting formally and distinctly the two approaches, the author is in a position to point to their differences clearly, and also argue that they can be seen as complementary alternatives. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | I believe the papers, articles, posts e.t.c. that you diligently gathered, contain enough information and analysis as to where and why the two approaches differ. But being different does not mean bein | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
I believe the papers, articles, posts e.t.c. that you diligently gathered, contain enough information and analysis as to where and why the two approaches differ. But being different does not mean being incompatible.
The problem with the "hybrid" is that it is a hybrid and not a synthesis, and this is why it is treated by many as a hybris, if you excuse the word-play.
Not being a synthesis, it does not attempt to combine the differences of the two approaches, and either create one unified and internally consistent approach, or keep both approaches in the scientific arsenal as complementary alternatives, in order to deal more effectively with the very complex world we try to analyze through Statistics (thankfully, this last thing is what appears to be happening with the other great civil war of the field, the frequentist-bayesian one).
The dissatisfaction with it I believe comes from the fact that it has indeed created misunderstandings in applying the statistical tools and interpreting the statistical results, mainly by scientists that are not statisticians, misunderstandings that can have possibly very serious and damaging effects (thinking about the field of medicine helps giving the issue its appropriate dramatic tone). This misapplication, is I believe, accepted widely as a fact-and in that sense, the "anti-hybrid" point of view can be considered as widespread (at least due to the consequences it had, if not for its methodological issues).
I see the evolution of the matter so far as a historical accident (but I don't have a $p$-value or a rejection region for my hypothesis), due to the unfortunate battle between the founders. Fisher and Neyman/Pearson have fought bitterly and publicly for decades over their approaches. This created the impression that here is a dichotomous matter: the one approach must be "right", and the other must be "wrong".
The hybrid emerged, I believe, out of the realization that no such easy answer existed, and that there were real-world phenomena to which the one approach is better suited than the other (see this post for such an example, according to me at least, where the Fisherian approach seems more suitable). But instead of keeping the two "separate and ready to act", they were rather superfluously patched together.
I offer a source which summarizes this "complementary alternative" approach:
Spanos, A. (1999). Probability theory and statistical inference: econometric modeling with observational data. Cambridge University Press., ch. 14, especially Section 14.5, where after presenting formally and distinctly the two approaches, the author is in a position to point to their differences clearly, and also argue that they can be seen as complementary alternatives. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
I believe the papers, articles, posts e.t.c. that you diligently gathered, contain enough information and analysis as to where and why the two approaches differ. But being different does not mean bein |
3,869 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | My own take on my question is that there is nothing particularly incoherent in the hybrid (i.e. accepted) approach. But as I was not sure if I am maybe failing to comprehend the validity of the arguments presented in the anti-hybrid papers, I was happy to find the discussion published together with this paper:
Hubbard & Bayarri, 2003, Confusion over measures of evidence (p's) versus errors (α's) in classical statistical testing
Unfortunately, two replies published as a discussion were not formatted as separate articles and so cannot be properly cited. Still, I would like to quote from both of them:
Berk: The theme of Sections 2 and 3 seems to be that Fisher did not like what Neyman and Pearson did, and Neyman did not like what Fisher did, and therefore we should not do anything that combines the two approaches. There is no escaping the premise here, but the reasoning escapes me.
Carlton: the authors adamantly insist that most confusion stems from the marriage of Fisherian and Neyman-Pearsonian ideas, that such a marriage is a catastrophic error on the part of modern statisticians [...] [T]hey seem intent on establishing that P values and Type I errors cannot coexist in the same universe. It is unclear whether the authors have given any substantive reason why we cannot utter "p value" and "Type I error" in the same sentence. [...] The "fact" of their [F and NP] incompatibility comes as surprising news to me, as I'm sure it does to the thousands of qualified statisticians reading the article. The authors even seem to suggest that among the reasons statisticians should now divorce these two ideas is that Fisher and Neyman were not terribly fond of each other (or each other's philosophies on testing). I have always viewed our current practice, which integrates Fisher's and Neyman's philosophies and permits discussion of both P values and Type I errors -- though certainly not in parallel -- as one of our discipline's greater triumphs.
Both responses are very worth reading. There is also a rejoinder by the original authors, which does not sound convincing to me at all. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | My own take on my question is that there is nothing particularly incoherent in the hybrid (i.e. accepted) approach. But as I was not sure if I am maybe failing to comprehend the validity of the argume | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
My own take on my question is that there is nothing particularly incoherent in the hybrid (i.e. accepted) approach. But as I was not sure if I am maybe failing to comprehend the validity of the arguments presented in the anti-hybrid papers, I was happy to find the discussion published together with this paper:
Hubbard & Bayarri, 2003, Confusion over measures of evidence (p's) versus errors (α's) in classical statistical testing
Unfortunately, two replies published as a discussion were not formatted as separate articles and so cannot be properly cited. Still, I would like to quote from both of them:
Berk: The theme of Sections 2 and 3 seems to be that Fisher did not like what Neyman and Pearson did, and Neyman did not like what Fisher did, and therefore we should not do anything that combines the two approaches. There is no escaping the premise here, but the reasoning escapes me.
Carlton: the authors adamantly insist that most confusion stems from the marriage of Fisherian and Neyman-Pearsonian ideas, that such a marriage is a catastrophic error on the part of modern statisticians [...] [T]hey seem intent on establishing that P values and Type I errors cannot coexist in the same universe. It is unclear whether the authors have given any substantive reason why we cannot utter "p value" and "Type I error" in the same sentence. [...] The "fact" of their [F and NP] incompatibility comes as surprising news to me, as I'm sure it does to the thousands of qualified statisticians reading the article. The authors even seem to suggest that among the reasons statisticians should now divorce these two ideas is that Fisher and Neyman were not terribly fond of each other (or each other's philosophies on testing). I have always viewed our current practice, which integrates Fisher's and Neyman's philosophies and permits discussion of both P values and Type I errors -- though certainly not in parallel -- as one of our discipline's greater triumphs.
Both responses are very worth reading. There is also a rejoinder by the original authors, which does not sound convincing to me at all. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
My own take on my question is that there is nothing particularly incoherent in the hybrid (i.e. accepted) approach. But as I was not sure if I am maybe failing to comprehend the validity of the argume |
3,870 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | I fear that a real response to this excellent question would require a full-length paper. However, here are a couple of points that are not present in either the question or the current answers.
The error rate 'belongs' to the procedure but the evidence 'belongs' to the experimental results. Thus it is possible, with multi-stage procedures that have sequential stopping rules, to obtain a result with very strong evidence against the null hypothesis but a non-significant hypothesis test result. That can be thought of as a strong incompatibility.
If you are interested in the incompatibilities, you should be interested in the underlying philosophies. The philosophical difficulty comes from a choice between compliance with the Likelihood Principle and compliance with the Repeated Sampling Principle. The LP says roughly that, given a statistical model, the evidence in a dataset relevant to the parameter of interest is completely contained in the relevant likelihood function. The RSP says that one should prefer tests that give error rates in the long run that equal their nominal values. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | I fear that a real response to this excellent question would require a full-length paper. However, here are a couple of points that are not present in either the question or the current answers.
The | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
I fear that a real response to this excellent question would require a full-length paper. However, here are a couple of points that are not present in either the question or the current answers.
The error rate 'belongs' to the procedure but the evidence 'belongs' to the experimental results. Thus it is possible with multi-stage procedures with sequential stopping rules to have a result with very strong evidence against the null hypothesis but a not significant hypothesis test result. That can be thought of as a strong incompatibility.
If you are interested in the incompatibilities, you should be interested in the underlying philosophies. The philosophical difficulty comes from a choice between compliance with the Likelihood Principle and compliance with the Repeated Sampling Principle. The LP says roughly that, given a statistical model, the evidence in a dataset relevant to the parameter of interest is completely contained in the relevant likelihood function. The RSP says that one should prefer tests that give error rates in the long run that equal their nominal values. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
I fear that a real response to this excellent question would require a full-length paper. However, here are a couple of points that are not present in either the question or the current answers.
The |
3,871 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | An often seen (and supposedly accepted) union (or better: "hybrid") between the two approaches is as follows:
Set a prespecified level $\alpha$ (0.05 say)
Then test your hypothesis, e.g. $H_o: \mu = 0$ vs. $H_1: \mu \ne 0$
State the p value and formulate your decision based on the level $\alpha$:
If the resulting p value is below $\alpha$, you could say
"I reject $H_o$" or
"I reject $H_o$" in favor of $H_1$" or
"I am $100\% \cdot (1-\alpha)$ certain that $H_1$ holds"
If the p value is not small enough, you would say
"I cannot reject $H_o$" or
"I cannot reject $H_o$ in favor of $H_1$"
Here, aspects from Neyman-Pearson are:
You decide something
You have an alternative hypothesis at hand (although it is just the contrary of $H_o$)
You know the type I error rate
Fisherian aspects are:
You state the p value. Any reader thus has the possibility to use their own level (e.g. strictly correcting for multiple testing) for their decision
Basically, only the null hypothesis is required since the alternative is just the contrary
You don't know the type II error rate. (But you could immediately get it for specific values of $\mu \ne 0$.)
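For instance (a small illustration with arbitrary numbers): if the data are a sample of size 30 from a normal distribution with standard deviation 1, the type II error rate at the specific alternative $\mu = 0.5$ can be read off a standard power calculation:
pw <- power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05,
                   type = "one.sample", alternative = "two.sided")
1 - pw$power   # beta, the type II error rate at this particular alternative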
ADD-ON
While it is good to be aware of the discussion about the philosophical problems of Fisher's, NP's or this hybrid approach (as taught in almost religious frenzy by some), there are much more relevant issues in statistics to fight against:
Asking uninformative questions (like binary yes/no questions instead of quantitative "how much" questions, i.e. using tests instead of confidence intervals)
Data driven analysis methods that lead to biased results (stepwise regression, testing assumptions etc.)
Choosing wrong tests or methods
Misinterpreting results
Using classic statistics for non-random samples | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | An often seen (and supposedly accepted) union (or better: "hybrid") between the two approaches is as follows:
Set a prespecified level $\alpha$ (0.05 say)
Then test your hypothesis, e.g. $H_o: \mu = | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
An often seen (and supposedly accepted) union (or better: "hybrid") between the two approaches is as follows:
Set a prespecified level $\alpha$ (0.05 say)
Then test your hypothesis, e.g. $H_o: \mu = 0$ vs. $H_1: \mu \ne 0$
State the p value and formulate your decision based on the level $\alpha$:
If the resulting p value is below $\alpha$, you could say
"I reject $H_o$" or
"I reject $H_o$" in favor of $H_1$" or
"I am $100\% \cdot (1-\alpha)$ certain that $H_1$ holds"
If the p value is not small enough, you would say
"I cannot reject $H_o$" or
"I cannot reject $H_o$ in favor of $H_1$"
Here, aspects from Neyman-Pearson are:
You decide something
You have an alternative hypothesis at hand (although it is just the contrary of $H_o$)
You know the type I error rate
Fisherian aspects are:
You state the p value. Any reader has thus the possibility to use its own level (e.g. strictly correcting for multiple testing) for decision
Basically, only the null hypothesis is required since the alternative is just the contrary
You don't know the type II error rate. (But you could immediately get it for specific values of $\mu \ne 0$.)
ADD-ON
While it is good to be aware of the discussion about the philosophical problems of Fisher's, NP's or this hybrid approach (as taught in almost religious frenzy by some), there are much more relevant issues in statistics to fight against:
Asking uninformative questions (like binary yes/no questions instead of quantitative "how much" questions, i.e. using tests instead of confidence intervals)
Data driven analysis methods that lead to biased results (stepwise regression, testing assumptions etc.)
Choosing wrong tests or methods
Misinterpreting results
Using classic statistics for non-random samples | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
An often seen (and supposedly accepted) union (or better: "hybrid") between the two approaches is as follows:
Set a prespecified level $\alpha$ (0.05 say)
Then test your hypothesis, e.g. $H_o: \mu = |
3,872 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | accepting that both F and N-P are valid and meaningful approaches,
what is so bad about their hybrid?
Short answer: the use of a nil (no difference, no correlation) null hypothesis regardless of the context. Everything else is a "misuse" by people who have created myths for themselves about what the process can achieve. The myths arise from people attempting to reconcile their (sometimes appropriate) use of trust in authority and consensus heuristics with the inapplicability of the procedure to their problem.
As far as I know, Gerd Gigerenzer came up with the term "hybrid":
I asked the author [a distinguished statistical textbook author, whose book went through many
editions, and whose name does not matter] why he removed the chapter on Bayes as well as the
innocent sentence from all subsequent editions. “What made you present
statistics as if it had only a single hammer, rather than a toolbox?
Why did you mix Fisher’s and Neyman–Pearson’s theories into an
inconsistent hybrid that every decent statistician would reject?”
To
his credit, I should say that the author did not attempt to deny that
he had produced the illusion that there is only one tool. But he let
me know who was to blame for this. There were three culprits: his
fellow researchers, the university administration, and his publisher.
Most researchers, he argued, are not really interested in statistical
thinking, but only in how to get their papers published [...]
The null ritual:
Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research
hypothesis or of any alternative substantive hypotheses.
Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as $p < 0.05$, $p <
0.01$ , or $p < 0.001$ (whichever comes next to the obtained $p$-value).
Always perform this procedure.
Gigerenzer, G (November 2004). "Mindless statistics". The Journal of Socio-Economics 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033.
Edit:
And it should always be mentioned, because the "hybrid" is so slippery and ill-defined, that using the nil null to get a p-value is perfectly fine as a way to compare effect sizes given different sample sizes. It is the "test" aspect that introduces the problem.
Edit 2:
@amoeba A p-value can be fine as a summary statistic, in this case the nil null hypothesis is just an arbitrary landmark: http://arxiv.org/abs/1311.0081. However, as soon as you start trying to draw a conclusion or make a decision (ie "test" the null hypothesis) it stops making sense. In the comparing two groups example, we want to know how different two groups are and the various possible explanations there may be for differences of that magnitude and type.
The p value can be used as a summary statistic telling us the magnitude of the difference. However, using it to "disprove/reject" zero difference serves no purpose that I can tell. Also, I think many of these study designs that compare average measurements of living things at a single timepoint are misguided. We should want to observe how individual instances of the system change over time, then come up with a process that explains the pattern observed (including any group differences). | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | accepting that both F and N-P are valid and meaningful approaches,
what is so bad about their hybrid?
Short answer: the use of a nil (no difference, no correlation) null hypothesis irregardless of | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
accepting that both F and N-P are valid and meaningful approaches,
what is so bad about their hybrid?
Short answer: the use of a nil (no difference, no correlation) null hypothesis regardless of the context. Everything else is a "misuse" by people who have created myths for themselves about what the process can achieve. The myths arise from people attempting to reconcile their (sometimes appropriate) use of trust in authority and consensus heuristics with the inapplicability of the procedure to their problem.
As far as I know Gerd Gigerenzer came up with term "hybrid":
I asked the author [a distinguished statistical textbook author, whose book went through many
editions, and whose name does not matter] why he removed the chapter on Bayes as well as the
innocent sentence from all subsequent editions. “What made you present
statistics as if it had only a single hammer, rather than a toolbox?
Why did you mix Fisher’s and Neyman–Pearson’s theories into an
inconsistent hybrid that every decent statistician would reject?”
To
his credit, I should say that the author did not attempt to deny that
he had produced the illusion that there is only one tool. But he let
me know who was to blame for this. There were three culprits: his
fellow researchers, the university administration, and his publisher.
Most researchers, he argued, are not really interested in statistical
thinking, but only in how to get their papers published [...]
The null ritual:
Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research
hypothesis or of any alternative substantive hypotheses.
Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as $p < 0.05$, $p <
0.01$ , or $p < 0.001$ (whichever comes next to the obtained $p$-value).
Always perform this procedure.
Gigerenzer, G (November 2004). "Mindless statistics". The Journal of Socio-Economics 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033.
Edit:
And we should always need to mention, because the "hybrid" is so slippery and ill-defined, that using the nil null to get a p-value is perfectly fine as a way to compare effect sizes given different sample sizes. It is the "test" aspect that introduces the problem.
Edit 2:
@amoeba A p-value can be fine as a summary statistic, in this case the nil null hypothesis is just an arbitrary landmark: http://arxiv.org/abs/1311.0081. However, as soon as you start trying to draw a conclusion or make a decision (ie "test" the null hypothesis) it stops making sense. In the comparing two groups example, we want to know how different two groups are and the various possible explanations there may be for differences of that magnitude and type.
The p value can be used as a summary statistic telling us the magnitude of the difference. However, using it to "disprove/reject" zero difference serves no purpose that I can tell. Also, I think many of these study designs that compare average measurements of living things at a single timepoint are misguided. We should want to observe how individual instances of the system change over time, then come up with a process that explains the pattern observed (including any group differences). | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
accepting that both F and N-P are valid and meaningful approaches,
what is so bad about their hybrid?
Short answer: the use of a nil (no difference, no correlation) null hypothesis irregardless of |
3,873 | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? | I see that those with more expertise than myself have provided answers, but I think my answer has the potential to add something additional, so I'll offer this as one other layman's perspective.
Is the hybrid approach incoherent? I'd say it depends on whether or not the researcher ends up acting inconsistently with the rules that they started out with: specifically the yes/no rule that comes into play with the setting of an alpha value.
Incoherent
Start with Neyman-Pearson. Researcher sets alpha=0.05, runs the experiment, calculates p=0.052. Researcher looks at that p-value and, using Fisherian inference (often implicitly), considers the result to be sufficiently incompatible with the test hypothesis that they will still claim "something" is going on. The result is somehow "good enough" even though the p-value was greater than the alpha value. Often this is paired with language such as "nearly significant" or "trending towards significance" or some wording along those lines.
However, setting an alpha value before running the experiment means that one has chosen the approach of Neyman-Pearson inductive behavior. Choosing to ignore that alpha value after calculating the p-value, and thus claiming something is still somehow interesting, undermines the entire approach that one started with. If a researcher starts down Path A (Neyman-Pearson), but then jumps across to another path (Fisher) once they don't like the path they are on, I consider that incoherent. They are not being consistent with the (implied) rules that they started with.
Coherent (possibly)
Start with N-P. Researcher sets alpha=0.05, runs the experiment, calculates p=0.0014. Researcher observes that p < alpha, and thus rejects the test hypothesis (typically no effect null) and accepts the alternative hypothesis (the effect is real). At this point the researcher, in addition to deciding to treat the outcome as a real effect (N-P), decides to infer (Fisher) that the experiment provides very strong evidence that the effect is real. They have added nuance to the approach they started with, but have not contradicted the rules set in place by choosing an alpha value at the beginning.
Summary
If one starts by choosing an alpha value, then one has decided to take the Neyman-Pearson path and follow the rules for that approach. If they, at some point, violate those rules using Fisherian inference as the justification, then they have acted inconsistently/incoherently.
I suppose one could go a step further and declare that because it is possible to use the hybrid incoherently, therefore the approach is inherently incoherent, but that seems to be getting deeper into the philosophical aspects, which I don't consider myself qualified to even offer an opinion on.
Hat tip to Michael Lew. His 2006 article helped me understand these issues better than any other resource. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh | I see that those with more expertise than myself have provided answers, but I think my answer has the potential to add something additional, so I'll offer this as one other layman's perspective.
Is th | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?
I see that those with more expertise than myself have provided answers, but I think my answer has the potential to add something additional, so I'll offer this as one other layman's perspective.
Is the hybrid approach incoherent? I'd say it depends on whether or not the researcher ends up acting inconsistently with the rules that they started out with: specifically the yes/no rule that comes into play with the setting of an alpha value.
Incoherent
Start with Neyman-Pearson. Researcher sets alpha=0.05, runs the experiment, calculates p=0.052. Researcher looks at that p-value and, using Fisherian inference (often implicitly), considers the result to be sufficiently incompatible with the test hypothesis that they will still claim "something" is going on. The result is somehow "good enough" even though the p-value was greater than the alpha value. Often this is paired with language such as "nearly significant" or "trending towards significance" or some wording along those lines.
However, setting an alpha value before running the experiment means that one has chosen the approach of Neyman-Pearson inductive behavior. Choosing to ignore that alpha value after calculating the p-value, and thus claiming something is still somehow interesting, undermines the entire approach that one started with. If a researcher starts down Path A (Neyman-Pearson), but then jumps across to another path (Fisher) once they don't like the path they are on, I consider that incoherent. They are not being consistent with the (implied) rules that they started with.
Coherent (possibly)
Start with N-P. Researcher sets alpha=0.05, runs the experiment, calculates p=0.0014. Researcher observes that p < alpha, and thus rejects the test hypothesis (typically no effect null) and accepts the alternative hypothesis (the effect is real). At this point the researcher, in addition to deciding to treat the outcome as a real effect (N-P), decides to infer (Fisher) that the experiment provides very strong evidence that the effect is real. They have added nuance to the approach they started with, but have not contradicted the rules set in place by choosing an alpha value at the beginning.
Summary
If one starts by choosing an alpha value, then one has decided to take the Neyman-Pearson path and follow the rules for that approach. If they, at some point, violate those rules using Fisherian inference as the justification, then they have acted inconsistently/incoherently.
I suppose one could go a step further and declare that because it is possible to use the hybrid incoherently, therefore the approach is inherently incoherent, but that seems to be getting deeper into the philosophical aspects, which I don't consider myself qualified to even offer an opinion on.
Hat tip to Michael Lew. His 2006 article helped me understand these issues better than any other resource. | Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoh
I see that those with more expertise than myself have provided answers, but I think my answer has the potential to add something additional, so I'll offer this as one other layman's perspective.
Is th |
3,874 | Logistic regression in R resulted in perfect separation (Hauck-Donner phenomenon). Now what? [duplicate] | With such a large design space ($\mathbb{R}^{50}$!) it is possible to get perfect separation without having separation in any of the variables taken individually. I would even second David J. Harris's comment in saying that this is likely.
You can easily test whether your classes are perfectly separated in your design space. This boils down to solving a linear programming problem. An R implementation of this 'test' (not a test in the statistical sense of the term) is implemented in the safeBinaryRegression package.
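As an illustration of the first point (a toy example with simulated data, not the OP's data): the two classes below are perfectly separated in the $(x_1, x_2)$ plane even though neither variable separates them on its own, and glm shows the usual symptoms (a "fitted probabilities numerically 0 or 1 occurred" warning, enormous coefficients and standard errors):
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- as.integer(x1 + x2 > 0)        # separating hyperplane x1 + x2 = 0
fit <- glm(y ~ x1 + x2, family = binomial)
summary(fit)$coefficients            # huge estimates and standard errors
sapply(split(x1, y), range)          # the classes overlap on x1 alone...
sapply(split(x2, y), range)          # ...and on x2 alone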
If it turns out that separation is indeed the issue, and if you are only interested in a plain vanilla use of glm (e.g. glm is not called by a higher level function but by you), then there is an R implementation of an algorithm that slightly modifies the classical one to make it 'robust' against separation. It is implemented in the hlr package. | Logistic regression in R resulted in perfect separation (Hauck-Donner phenomenon). Now what? [duplic | With such a large design space ($\mathbb{R}^{50}$!) it is possible to get perfect separation without having separation in any of the variable taken individually. I would even second David J. Harris's
With such a large design space ($\mathbb{R}^{50}$!) it is possible to get perfect separation without having separation in any of the variables taken individually. I would even second David J. Harris's comment in saying that this is likely.
You can easily test whether your classes are perfectly separated in your design space. This boils down to solving a linear programming problem. An R implementation of this 'test' (not a test in the statistical sense of the term) is implemented in the safeBinaryRegression package.
If it turns out that separation is indeed the issue, and if you are only interested in a plain vanilla use of glm (e.g. glm is not called by a higher level function but by you), then there is an R implementation of an algorithms that slightly modifies the classical one to make it 'robust' against separation. It is implemented in the hlr package | Logistic regression in R resulted in perfect separation (Hauck-Donner phenomenon). Now what? [duplic
With such a large design space ($\mathbb{R}^{50}$!) it is possible to get perfect separation without having separation in any of the variable taken individually. I would even second David J. Harris's |
3,875 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | It is decidedly out of the ordinary.
The reason is that counts like these tend to have Poisson distributions. This implies their inherent variance equals the count. For counts near $100,$ that variance of $100$ means the standard deviations are nearly $10.$ Unless there is extreme serial correlation of the results (which is not biologically or medically plausible), this means the majority of individual values ought to deviate randomly from the underlying hypothesized "true" rate by up to $10$ (above and below) and, in an appreciable number of cases (around a third of them all) should deviate by more than that.
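A quick back-of-the-envelope check makes the point (using the counts from the chart, which appear as y in the code below, restricted to the plateau, i.e. the last 16 values, so that the rising trend of the first few days does not inflate the variance): the variance-to-mean ratio is about 0.05, far below the value of about 1 expected for Poisson counts.
y_plateau <- c(96, 97, 97, 99, 99, 98, 99, 98, 99, 95, 97, 99, 92, 95, 94, 93)
mean(y_plateau)                      # about 97
var(y_plateau)                       # about 5
var(y_plateau) / mean(y_plateau)     # about 0.05, versus about 1 for Poisson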
This is difficult to test in a truly robust manner, but one way would be to overfit the data, attempting to describe them very accurately, and see how large the residuals tend to be. Here, for instance, are two such fits, a lowess smooth and an overfit Poisson GLM:
The variance of the residuals (deviance residuals) for this Generalized Linear Model (GLM) fit is only $0.07.$ For other models with (visually) close fits the variance tends to be from $0.05$ to $0.10.$ This is too small.
How can you know? Bootstrap it. I chose a parametric bootstrap in which the data are replaced by independent Poisson values drawn from distributions whose parameters equal the predicted values. Here is one such bootstrapped dataset:
You can see how much more the individual values fluctuate than before, and by how much.
Doing this $2000$ times produced $2001$ variances (in two or three seconds of computation). Here is their histogram:
The vertical red line marks the value of the variance for the data.
(In a well-fit model, the mean of this histogram should be close to $1.$ The mean is $0.75,$ a little less than $1,$ giving an indication of the degree of overfitting.)
The p-value for this test is the fraction of those $2001$ variances that are equal to or less than the observed variance. Since every bootstrapped variance was larger, the p-value is only $1/2001,$ essentially zero.
I repeated this calculation for other models. In the R code below, the models vary according to the number of knots k and degree d of the spline. In every case the p-value remained at $1/2001.$
This confirms the suspicious look of the data. Indeed, if you hadn't stated that these are counts of cases, I would have guessed they were percentages of something. For percentages near $100$ the variation will be very much less than in this Poisson model and the data would not look so suspicious.
This is the code that produced the first and third figures. (A slight variant produced the second, replacing X by X0 at the beginning.)
y <- c(63, 66, 66, 79, 82, 96, 97, 97, 99, 99, 98, 99, 98,
99, 95, 97, 99, 92, 95, 94, 93)
X <- data.frame(x=seq_along(y), y=y)
library(splines)
k <- 6
d <- 4
form <- y ~ bs(x, knots=k, degree=d)
fit <- glm(form, data=X, family="poisson")
X$y.hat <- predict(fit, type="response")
library(ggplot2)
ggplot(X, aes(x,y)) +
geom_point() +
geom_smooth(span=0.4) +
geom_line(aes(x, y.hat), size=1.25) +
xlab("Day") + ylab("Count") +
ggtitle("Data with Smooth (Blue) and GLM Fit (Black)",
paste(k, "knots of degree", d))
stat <- function(fit) var(residuals(fit))
X0 <- X
set.seed(17)
sim <- replicate(2e3, {
X0$y <- rpois(nrow(X0), X0$y.hat)
stat(glm(form, data=X0, family="poisson"))
})
z <- stat(fit)
p <- mean(c(1, sim <= z))
hist(c(z, sim), breaks=25, col="#f0f0f0",
xlab = "Residual Variance",
main=paste("Bootstrapped variances; p =", round(p, log10(length(sim)))))
abline(v = z, col='Red', lwd=2) | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f | It is decidedly out of the ordinary.
The reason is that counts like these tend to have Poisson distributions. This implies their inherent variance equals the count. For counts near $100,$ that varia | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
It is decidedly out of the ordinary.
The reason is that counts like these tend to have Poisson distributions. This implies their inherent variance equals the count. For counts near $100,$ that variance of $100$ means the standard deviations are nearly $10.$ Unless there is extreme serial correlation of the results (which is not biologically or medically plausible), this means the majority of individual values ought to deviate randomly from the underlying hypothesized "true" rate by up to $10$ (above and below) and, in an appreciable number of cases (around a third of them all) should deviate by more than that.
This is difficult to test in a truly robust manner, but one way would be to overfit the data, attempting to describe them very accurately, and see how large the residuals tend to be. Here, for instance, are two such fits, a lowess smooth and an overfit Poisson GLM:
The variance of the residuals for this Generalized Linear Model (GLM) fit (on a logit scale) is only $0.07.$ For other models with (visually) close fits the variance tends to be from $0.05$ to $0.10.$ This is too small.
How can you know? Bootstrap it. I chose a parametric bootstrap in which the data are replaced by independent Poisson values drawn from distributions whose parameters equal the predicted values. Here is one such bootstrapped dataset:
You can see how much more the individual values fluctuate than before, and by how much.
Doing this $2000$ times produced $2001$ variances (in two or three seconds of computation). Here is their histogram:
The vertical red line marks the value of the variance for the data.
(In a well-fit model, the mean of this histogram should be close to $1.$ The mean is $0.75,$ a little less than $1,$ giving an indication of the degree of overfitting.)
The p-value for this test is the fraction of those $2001$ variances that are equal to or less than the observed variance. Since every bootstrapped variance was larger, the p-value is only $1/2001,$ essentially zero.
I repeated this calculation for other models. In the R code below, the models vary according to the number of knots k and degree d of the spline. In every case the p-value remained at $1/2001.$
This confirms the suspicious look of the data. Indeed, if you hadn't stated that these are counts of cases, I would have guessed they were percentages of something. For percentages near $100$ the variation will be very much less than in this Poisson model and the data would not look so suspicious.
This is the code that produced the first and third figures. (A slight variant produced the second, replacing X by X0 at the beginning.)
y <- c(63, 66, 66, 79, 82, 96, 97, 97, 99, 99, 98, 99, 98,
99, 95, 97, 99, 92, 95, 94, 93)
X <- data.frame(x=seq_along(y), y=y)
library(splines)
k <- 6
d <- 4
form <- y ~ bs(x, knots=k, degree=d)
fit <- glm(form, data=X, family="poisson")
X$y.hat <- predict(fit, type="response")
library(ggplot2)
ggplot(X, aes(x,y)) +
geom_point() +
geom_smooth(span=0.4) +
geom_line(aes(x, y.hat), size=1.25) +
xlab("Day") + ylab("Count") +
ggtitle("Data with Smooth (Blue) and GLM Fit (Black)",
paste(k, "knots of degree", d))
stat <- function(fit) var(residuals(fit))
X0 <- X
set.seed(17)
sim <- replicate(2e3, {
X0$y <- rpois(nrow(X0), X0$y.hat)
stat(glm(form, data=X0, family="poisson"))
})
z <- stat(fit)
p <- mean(c(1, sim <= z))
hist(c(z, sim), breaks=25, col="#f0f0f0",
xlab = "Residual Variance",
main=paste("Bootstrapped variances; p =", round(p, log10(length(sim)))))
abline(v = z, col='Red', lwd=2) | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
It is decidedly out of the ordinary.
The reason is that counts like these tend to have Poisson distributions. This implies their inherent variance equals the count. For counts near $100,$ that varia |
3,876 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | The Krasnodar Krai case is not the only one. Below is a plot for the data from 36 regions (I selected the best examples out of 84) where we either see
a similar underdispersion
or at least the numbers seem to be reaching a plateau around a 'nice' number (I have drawn lines at 10, 25, 50 and 100, where several regions find their plateau)
About the scale of this plot: It looks like a logarithmic scale for the y-axis, but it is not. It is a square root scale. I have done this such that a dispersion like for Poisson distributed data $\sigma^2 = \mu$ will look the same for all means. See also: Why is the square root transformation recommended for count data?
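A quick way to see why this works (a small simulation, not part of the original analysis): for Poisson counts the standard deviation of $\sqrt{X}$ is roughly $1/2$ regardless of the mean, so equal vertical spread on this scale corresponds to Poisson-like dispersion at every level.
sapply(c(10, 25, 50, 100), function(lambda) sd(sqrt(rpois(1e5, lambda))))
# all four values come out close to 0.5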
In some of these cases the data look clearly underdispersed, if they were Poisson distributed. (Whuber showed how to derive a significance value, but I guess that it already passes the inter-ocular trauma test. I still shared this plot because I found it interesting that there are cases without the underdispersion, but which still seem to stick to a plateau. There may be more to it than just underdispersion. Or there are cases like nr 15 and nr 22, lower left of the image, which show underdispersion but not the fixed plateau value.)
The underdispersion is indeed odd. But we do not know what sort of process has generated these numbers. It is probably not a natural process, and there are humans involved. For some reason, there seems to be some plateau or upper limit. We can only guess what it could be (these data tell us little about it, and it is highly speculative to use them to guess what could be going on). It could be falsified data, but it could also be some intricate process that generates the data and has some upper limit (e.g. these data are reported/registered cases and possibly the reporting/registration is limited to some fixed number).
### using the following JSON file
### https://github.com/mediazona/data-corona-Russia/blob/master/data.json
library(rjson)
#data <- fromJSON(file = "~/Downloads/data.json")
data <- fromJSON(file = "https://raw.githubusercontent.com/mediazona/data-corona-Russia/master/data.json")
layout(matrix(1:36,4, byrow = TRUE))
par(mar = c(3,3,1,1), mgp = c(1.5,0.5,0))
## computing means and dispersion for last 9 days
means <- rep(0,84)
disp <- rep(0,84)
for (i in 1:84) {
x <- c(-4:4)
y <- data[[2]][[i]]$confirmed[73:81]
means[i] <- mean(y)
mod <- glm(y ~ x + I(x^2) + I(x^3), family = poisson(link = identity), start = c(2,0,0,0))
disp[i] <- mod$deviance/mod$df.residual
}
### choosing some interesting cases and ordering them
cases <- c(4,5,11,12,14,15,21,22,23,24,
26,29,30,31,34,35,37,41,
42,43,47,48,50,51,53,56,
58,67,68,71,72,75,77,79,82,83)
cases <- cases[order(means[cases])]
for (i in cases) {
col = 1
if (i == 24) {
col = 2
bg = "red"
}
plot(-100,-100, xlim = c(0,85), ylim = c(0,11), yaxt = "n", xaxt = "n",
xlab = "", ylab = "counts", col = col)
axis(2, at = c(1:10), labels = c(1:10)^2, las = 2)
axis(1, at = c(1:85), labels = rep("",85), tck = -0.04)
axis(1, at = c(1,1+31,1+31+30)-1, labels = c("Mar 1", "Apr 1", "May 1"), tck = -0.08)
for (lev in c(10,25,50,100)) {
#polygon(c(-10,200,200,-10), sqrt(c(lev-sqrt(lev),lev-sqrt(lev),lev+sqrt(lev),lev+sqrt(lev))),
# col = "gray")
lines(c(-10,200), sqrt(c(lev,lev)), lty = 2)
}
lines(sqrt(data[[2]][[i]]$confirmed), col = col)
points(sqrt(data[[2]][[i]]$confirmed), bg = "white", col = col, pch = 21, cex=0.7)
title(paste0(i,": ", data[[2]][[i]]$name), cex.main = 1, col.main = col)
}
### an interesting plot of under/overdispersion and mean of last 9 data points
### one might recognize a cluster with low deviance and mean just below 100
plot(means,disp, log= "xy",
yaxt = "n", xaxt = "n")
axis(1,las=1,tck=-0.01,cex.axis=1,
at=c(100*c(1:9),10*c(1:9),1*c(1:9)),labels=rep("",27))
axis(1,las=1,tck=-0.02,cex.axis=1,
labels=c(1,10,100,1000), at=c(1,10,100,1000))
axis(2,las=1,tck=-0.01,cex.axis=1,
at=c(10*c(1:9),1*c(1:9),0.1*c(1:9)),labels=rep("",27))
axis(2,las=1,tck=-0.02,cex.axis=1,
labels=c(1,10,100,1000)/10, at=c(1,10,100,1000)/10)
Maybe this is overinterpreting the data a bit, but anyway here is another interesting graph (also in the code above). The graph below compares all the 84 regions (except the largest three that do not fit on the plot) based on the mean value of the last 9 days and a dispersion factor based on a GLM with the Poisson family and a cubic fit. It looks like the cases with underdispersion are often close to 100 cases per day.
It seems that whatever is causing these suspiciously level values in Krasnodar Krai occurs in multiple regions, and it could be related to some boundary of 100 cases/day. Possibly there is some censoring occurring in the process that generates the data, which limits the values to some upper bound. Whatever this process is that causes the censored data, it seems to occur in multiple regions in a similar way and likely has some artificial (human) cause (e.g. some sort of limitation of the laboratory testing in smaller regions).
a similar underdispersion
or at least the numbe | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
The Krasnodar Krai case is not the only one. Below is a plot for the data from 36 regions (I selected the best examples out of 84) where we either see
a similar underdispersion
or at least the numbers seem to be reaching a plateau around a 'nice' number (I have drawn lines at 10, 25, 50 and 100, where several regions find their plateau)
About the scale of this plot: It looks like a logarithmic scale for the y-axis, but it is not. It is a square root scale. I have done this such that a dispersion like for Poisson distributed data $\sigma^2 = \mu$ will look the same for all means. See also: Why is the square root transformation recommended for count data?
This data looks for some cases clearly underdispersed, if it would be Poisson distributed. (Whuber showed how to derive a significance value, but I guess that it already passes the inter-ocular trauma test. I still shared this plot because I found it interesting that there are cases without the underdispersion, but still they seem to stick to a plateau. There may be more to it than just underdispersion. Or there are cases like nr 15 and nr 22, lower left of the image, which show underdispersion, but not the fixed plateau value.).
The underdispersion is indeed odd. But, we do not know what sort of process has generated these numbers. It is probably not a natural process, and there are humans involved. For some reason, there seems some plateau or an upper limit. We can only guess what it could be (this data tells us not much about it and it is highly speculative to use it to guess what could be going on). It could be falsified data, but it could also be some intricate process that generates the data and has some upper limit (e.g. these data are reported/registered cases and possibly the reporting/registration is limited to some fixed number).
### using the following JSON file
### https://github.com/mediazona/data-corona-Russia/blob/master/data.json
library(rjson)
#data <- fromJSON(file = "~/Downloads/data.json")
data <- fromJSON(file = "https://raw.githubusercontent.com/mediazona/data-corona-Russia/master/data.json")
layout(matrix(1:36,4, byrow = TRUE))
par(mar = c(3,3,1,1), mgp = c(1.5,0.5,0))
## computing means and dispersion for last 9 days
means <- rep(0,84)
disp <- rep(0,84)
for (i in 1:84) {
x <- c(-4:4)
y <- data[[2]][[i]]$confirmed[73:81]
means[i] <- mean(y)
mod <- glm(y ~ x + I(x^2) + I(x^3), family = poisson(link = identity), start = c(2,0,0,0))
disp[i] <- mod$deviance/mod$df.residual
}
### choosing some interresting cases and ordering them
cases <- c(4,5,11,12,14,15,21,22,23,24,
26,29,30,31,34,35,37,41,
42,43,47,48,50,51,53,56,
58,67,68,71,72,75,77,79,82,83)
cases <- cases[order(means[cases])]
for (i in cases) {
col = 1
if (i == 24) {
col = 2
bg = "red"
}
plot(-100,-100, xlim = c(0,85), ylim = c(0,11), yaxt = "n", xaxt = "n",
xlab = "", ylab = "counts", col = col)
axis(2, at = c(1:10), labels = c(1:10)^2, las = 2)
axis(1, at = c(1:85), labels = rep("",85), tck = -0.04)
axis(1, at = c(1,1+31,1+31+30)-1, labels = c("Mar 1", "Apr 1", "May 1"), tck = -0.08)
for (lev in c(10,25,50,100)) {
#polygon(c(-10,200,200,-10), sqrt(c(lev-sqrt(lev),lev-sqrt(lev),lev+sqrt(lev),lev+sqrt(lev))),
# col = "gray")
lines(c(-10,200), sqrt(c(lev,lev)), lty = 2)
}
lines(sqrt(data[[2]][[i]]$confirmed), col = col)
points(sqrt(data[[2]][[i]]$confirmed), bg = "white", col = col, pch = 21, cex=0.7)
title(paste0(i,": ", data[[2]][[i]]$name), cex.main = 1, col.main = col)
}
### an interesting plot of under/overdispersion and mean of last 9 data points
### one might recognize a cluster with low deviance and mean just below 100
plot(means,disp, log= "xy",
yaxt = "n", xaxt = "n")
axis(1,las=1,tck=-0.01,cex.axis=1,
at=c(100*c(1:9),10*c(1:9),1*c(1:9)),labels=rep("",27))
axis(1,las=1,tck=-0.02,cex.axis=1,
labels=c(1,10,100,1000), at=c(1,10,100,1000))
axis(2,las=1,tck=-0.01,cex.axis=1,
at=c(10*c(1:9),1*c(1:9),0.1*c(1:9)),labels=rep("",27))
axis(2,las=1,tck=-0.02,cex.axis=1,
labels=c(1,10,100,1000)/10, at=c(1,10,100,1000)/10)
Maybe this is overinterpreting the data a bit, but anyway here is another interesting graph (also in the code above). The graph below compares all the 84 regions (except the largest three that do not fit on the plot) based on the mean value of the last 13 days and a dispersion-factor based on a GLM model with the Poisson family and a cubic fit. It looks like the cases with underdispersion are often close to 100 cases per day.
It seems to be that whatever is causing these suspiciously level values in Krasnodar Krai, it occurs in multiple regions, and it could be related to some boundary of 100 cases/day. Possibly there is some censoring occurring in the process that generates the data, and that limits the values to some upper limit. Whatever this process is that causes the censored data, it seems to occur in multiple regions in a similar way and has likely some artificial(human) cause (e.g. some sort of limitation of the laboratory testing in smaller regions). | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
The Krasnodar Krai case is not the only one. Below is a plot for the data from 36 regions (I selected the best examples out of 84) where we either see
a similar underdispersion
or at least the numbe |
3,877 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | I will just mention one aspect that I haven't seen mentioned in the other answers. The problem with any analysis that states that this is significantly out of the ordinary is that it doesn't take into account that the data have been selected based on looking strange. At least I'd assume that the thread opener has not only seen these data but also other data sets of a similar type (maybe not even consciously, but in the media, without noticing, because they didn't seem special; but I would expect somebody who writes a posting like this to have looked more consciously). The question to address is therefore not whether the data, seen in isolation, are significantly different from what could be expected, but rather whether, if everything were normal (not meant as in "normally distributed", you know what I mean), any data set like this, or with a different pattern that would also prompt the thread opener to post here, could be expected to be among all those they see. As we don't know what they have seen, that's pretty hard to assess, unless we come up with a p-value of $10^{-10}$, which would still be significant after adjusting for almost any number of multiple tests.
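To put rough numbers on that last remark (an illustration only; the 84 is the number of regions in the dataset used in another answer here): the chance that at least one of 84 regional curves shows a pattern with a nominal p-value below 0.001 purely by chance is already noticeable, whereas $10^{-10}$ survives even a crude Bonferroni correction.
1 - (1 - 0.001)^84   # about 0.08
84 * 1e-10           # still essentially zero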
Another way of testing this would be to make predictions for the future based on what the data show, and then test whether the strange trend goes on with observations that were not part of those that led to picking this data set.
Of course also the other answer that states that this kind of dodgy pattern also occurs in other regions can contribute some reassurance that something meaningful is going on because it isn't then such a special thing to pick. However the point I want to make is that for whatever analysis, selection bias should not be forgotten. | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f | I will just mention one aspect that I haven't seen mentioned in the other answers. The problem with any analysis that states that this is significantly out of the ordinary is that it doesn't take into | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
I will just mention one aspect that I haven't seen mentioned in the other answers. The problem with any analysis that states that this is significantly out of the ordinary is that it doesn't take into account that the data have been selected based on looking strange. At least I'd assume that the thread opener has not only seen these data but also other data sets of similar type (maybe not even consciously, but in the media without noticing because they didn't seem any special - but I would expect somebody who writes a posting like this to have seen more consciously). The question to address is therefore not whether the data, seen as isolated, are significantly different from what could be expected, but rather whether, if everything's normal (not meant as in "normally distributed", you know what I mean), any data set like this or with a different pattern that would also prompt the thread opener to post here could be expected to be among all those they see. As we don't know what they have seen, that's pretty hard to assess, unless we come up with a p-value of $10^{-10}$ which would still be significant adjusting for almost any number of multiple tests.
Another way of testing this would be to make predictions for the future based on what the data show, and then test whether the strange trend goes on with observations that were not part of those that led to picking this data set.
Of course also the other answer that states that this kind of dodgy pattern also occurs in other regions can contribute some reassurance that something meaningful is going on because it isn't then such a special thing to pick. However the point I want to make is that for whatever analysis, selection bias should not be forgotten. | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
I will just mention one aspect that I haven't seen mentioned in the other answers. The problem with any analysis that states that this is significantly out of the ordinary is that it doesn't take into |
3,878 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | Krasnodar
The data for the region is clearly not realistic in terms of its dispersion. Here's the data on Krasnodar town. The sample average is 34 in May, and the dispersion (standard deviation) is 8.7.
This is more than a Poisson distribution would suggest, where the standard deviation is the square root of the average, i.e. 5.9. This is overdispersed, but the sample size is quite small, so it's hard to simply reject the Poisson distribution.
The town has a population near 1M people.
However, when we jump to Krasnodar Krai with a population of 5.5M, all of a sudden the dispersion collapses. In your plot the new cases average around 100, but the dispersion is 1-2. Under Poisson you'd expect a dispersion of about 10. Why would the capital be overdispersed but the whole region be severely underdispersed? It doesn't make sense to me.
Also where did all the dispersion from the capital of the region go? "It's inconceivable!" (c) to think that the regional incidence is very strongly negatively correlated with its capital. Here's a scatter plot of the cases outside Krasnodar in the region vs Krasnodar town.
Source
chart: source: https://www.yuga.ru/media/d7/69/photo_2020-05-21_10-54-10__cr75et3.jpg
scraped data:
14
45
37
37
32
25
33
40
47
40
33
38
47
25
37
35
20
25
30
37
43
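A quick check of the scraped numbers above reproduces the figures quoted earlier (the variable name is mine):
kr <- c(14, 45, 37, 37, 32, 25, 33, 40, 47, 40, 33,
        38, 47, 25, 37, 35, 20, 25, 30, 37, 43)
mean(kr)             # about 34
sd(kr)               # about 8.7, versus sqrt(34) = 5.9 for a Poisson count
var(kr) / mean(kr)   # dispersion index about 2.2 (overdispersed)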
Russia
@AlexeyBurnakov pulled the chart for entire Russia:
I scraped the data for May, and it's severely overdispersed. The average is 10K but the variance is 756K, i.e. a standard deviation of about 870, much higher than the roughly 100 a Poisson process would suggest. Hence, the overall Russia data support my claim that the Krasnodar Krai data are abnormal.
9623
10633
10581
10102
10559
11231
10699
10817
11012
11656
10899
10028
9974
10598
9200
9709
8926
9263
8764
8849
8894
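The same quick check on the Russia numbers above (again, the variable name is mine):
ru <- c(9623, 10633, 10581, 10102, 10559, 11231, 10699, 10817, 11012, 11656, 10899,
        10028, 9974, 10598, 9200, 9709, 8926, 9263, 8764, 8849, 8894)
mean(ru)   # about 10,100
var(ru)    # about 756,000
sd(ru)     # about 870, versus sqrt(10100) = 100 for a Poisson count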
Source
https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f | Krasnodar
The data for a region is clearly not realistic in terms of its dispersion. Here's a data on Krasnodar town. The sample average is 34 in May, and the dispersion is 8.7.
This is more than Poi | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
Krasnodar
The data for a region is clearly not realistic in terms of its dispersion. Here's a data on Krasnodar town. The sample average is 34 in May, and the dispersion is 8.7.
This is more than Poisson distribution would suggest, where the dispersion is the square root of average, i.e. 5.9. This is overdispersed but the sample size is quite small so it's hard to simply reject Poisson distribution.
The town has a population near 1M people.
However, when we jump to Krasnodar Krai with a population of 5.5M, all of a sudden the dispersion collapses. In your plot the new cases average around 100, but the dispersion is 1-2. Under Poisson you'd expect a dispersion of about 10. Why would the capital be overdispersed but the whole region be severely underdispersed? It doesn't make sense to me.
Also where did all the dispersion from the capital of the region go? "It's inconceivable!" (c) to think that the regional incidence is very strongly negatively correlated with its capital. Here's a scatter plot of the cases outside Krasnodar in the region vs Krasnodar town.
Source
chart: source: https://www.yuga.ru/media/d7/69/photo_2020-05-21_10-54-10__cr75et3.jpg
scraped data:
14
45
37
37
32
25
33
40
47
40
33
38
47
25
37
35
20
25
30
37
43
Russia
@AlexeyBurnakov pulled the chart for entire Russia:
I scraped the data for May, and it's severely overdispersed. The average is 10K but the variance is 756K, with dispersion 870 much higher than Poisson process would suggest. Hence, the overall Russia data supports my claim that Krasnodar Krai data is abnormal.
9623
10633
10581
10102
10559
11231
10699
10817
11012
11656
10899
10028
9974
10598
9200
9709
8926
9263
8764
8849
8894
Source
https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
Krasnodar
The data for a region is clearly not realistic in terms of its dispersion. Here's a data on Krasnodar town. The sample average is 34 in May, and the dispersion is 8.7.
This is more than Poi |
3,879 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | So I think these are the data:
month day new delta tens ones
4 29 63 NA 6 3
4 30 66 3 6 6
5 1 65 -1 6 5
5 2 79 14 7 9
5 3 82 3 8 2
5 4 96 14 9 6
5 5 97 1 9 7
5 6 97 0 9 7
5 7 99 2 9 9
5 8 99 0 9 9
5 9 98 -1 9 8
5 10 99 1 9 9
5 11 98 -1 9 8
5 12 99 1 9 9
5 13 96 -3 9 6
5 14 97 1 9 7
5 15 99 2 9 9
5 16 92 -7 9 2
5 17 95 3 9 5
5 18 94 -1 9 4
5 19 93 -1 9 3
One of the fun, introductory elements of forensic accounting is Benford's law.
When I look at the frequencies of the ones-digits and the tens digits I get this:
Ones count rate
1 0 0.0
2 2 9.5
3 2 9.5
4 1 4.8
5 2 9.5
6 3 14.3
7 3 14.3
8 2 9.5
9 6 28.6
Tens count rate
1 0 0.0
2 0 0.0
3 0 0.0
4 0 0.0
5 0 0.0
6 3 14.3
7 1 4.8
8 1 4.8
9 16 76.2
I notice a very strong preponderance of "6" and "9" in the data.
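For comparison, Benford's second-digit distribution and the observed ones-digit frequencies can be computed directly (a quick sketch using the new column from the table above):
## Benford's second-digit law: P(D2 = d) = sum over j = 1..9 of log10(1 + 1/(10*j + d))
benford2 <- sapply(0:9, function(d) sum(log10(1 + 1/(10 * (1:9) + d))))
round(100 * benford2, 1)   # digit 6: about 9.3%, digit 9: about 8.5%
new <- c(63, 66, 65, 79, 82, 96, 97, 97, 99, 99, 98, 99, 98, 99, 96, 97, 99, 92, 95, 94, 93)
round(100 * table(factor(new %% 10, levels = 0:9)) / length(new), 1)   # observed rates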
If the ones-place (second) digits were distributed according to Benford's rules, the digits 6 and 9 should occur roughly 9.3% and 8.5% of the time, respectively, instead of the observed 14.3% and 28.6%. | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f | So I think these are the data:
month day new delta tens ones
4 29 63 NA 6 3
4 30 66 3 6 6
5 1 65 -1 6 5
5 2 79 14 7 9
5 3 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
So I think these are the data:
month day new delta tens ones
4 29 63 NA 6 3
4 30 66 3 6 6
5 1 65 -1 6 5
5 2 79 14 7 9
5 3 82 3 8 2
5 4 96 14 9 6
5 5 97 1 9 7
5 6 97 0 9 7
5 7 99 2 9 9
5 8 99 0 9 9
5 9 98 -1 9 8
5 10 99 1 9 9
5 11 98 -1 9 8
5 12 99 1 9 9
5 13 96 -3 9 6
5 14 97 1 9 7
5 15 99 2 9 9
5 16 92 -7 9 2
5 17 95 3 9 5
5 18 94 -1 9 4
5 19 93 -1 9 3
One of the fun, introductory, elements of forensic accounting is Benford's law.
When I look at the frequencies of the ones-digits and the tens digits I get this:
Ones count rate
1 0 0.0
2 2 9.5
3 2 9.5
4 1 4.8
5 2 9.5
6 3 14.3
7 3 14.3
8 2 9.5
9 6 28.6
Tens count rate
1 0 0.0
2 0 0.0
3 0 0.0
4 0 0.0
5 0 0.0
6 3 14.3
7 1 4.8
8 1 4.8
9 16 76.2
I notice a very strong preponderance of "6" and "9" in the data.
If the ones-place (second) digits were distributed according to Benford's rules they should happen something near 9.7% and 8.5% of the time, respectively, instead of better than 20% of the time. | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
So I think these are the data:
month day new delta tens ones
4 29 63 NA 6 3
4 30 66 3 6 6
5 1 65 -1 6 5
5 2 79 14 7 9
5 3 |
3,880 | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint? | Interesting points from everyone. Let me contradict some.
1) Why Poisson? The case-generation process is intrinsically interdependent, as a pandemic is an interaction between the ill and the healthy, so case occurrence in a time interval may be affected by occurrences in previous intervals. The dependency may be complicated but strong.
UDPATE (as of May 23rd)
1.1) Imagine the physics of the process.
a) A person is healthy ->
b) They get infected from a covid-positive one ->
c) they feel sick and go to a hospital ->
d) they get screened after waiting, very likely, in line or for a timetable slot ->
e) the lab processes tests and determines new positives ->
f) a report goes to a ministry and gets summarized for a daily
report.
I would like to insist again, after the long discussion and the downvotes I got, that when you see the stage F reports, you should understand that the events occurred as a function of a lot of human interactions and, importantly, that they were accumulated while passing a "bottleneck": the person's own time to visit a doctor, the doctor's appointment timetable, or laboratory test processing limits. All of these make it non-Poissonian, as we don't use the Poisson distribution for events that wait in a line. I think it is mostly about lab tests, which are made by humans who work at an average capacity and cannot process too many per day. It is also possible that the final reporting stage accumulates information in buckets of a sort.
My point is that it is not Poisson, or a generalization of it. It is "Poisson with waiting in line and data accumulation over time periods". I don't see 100% evidence of "Soviet-style data manipulations". It could just be batches of pre-processed data accumulated up to the report.
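A toy simulation of this "bottleneck" idea (purely illustrative; all numbers are made up): if what limits the daily count is a nearly fixed laboratory throughput rather than the epidemic itself, the reported series inherits the small variance of the capacity, not the Poisson variance of the infections.
set.seed(1)
days     <- 60
arrivals <- rpois(days, 150)              # hypothetical true daily infections
capacity <- 100 - rbinom(days, 8, 0.5)    # nearly fixed daily lab throughput, about 96 +/- 1.4
backlog  <- 0
reported <- numeric(days)
for (t in seq_len(days)) {
  backlog     <- backlog + arrivals[t]
  reported[t] <- min(backlog, capacity[t])
  backlog     <- backlog - reported[t]
}
mean(reported)                            # close to the capacity, about 96
var(reported) / mean(reported)            # far below 1: strong underdispersion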
2) For the Krasnodar region the daily mean seems to be non-stationary. It is not good at all to approach these data from a Poisson view, or at least one should use only the stationary part of the series.
These points concern violations of two major Poisson-distribution assumptions.
3) Why 100 tests per day? It is official information that in Russia (and I am in Russia, reading the news constantly) 7.5 million tests have been made so far, with about 330,000 cases confirmed (as of May 22nd). The proportion of positives is less than 5%. With this, you should expect at least 2,000 tests per day to be needed for about 100 confirmed cases. This could be real, as tests are scarce and expensive items, and not only in Krasnodar, Russia, or Europe. It is the same everywhere. @Aksakal
(source: https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225)
4) Why ever would you think these are "Soviet data"? Look at the World data for new covid cases. It is extremely low-variance if you think it must be Poisson (a sum of Poissons is a Poisson). Is the World "Soviet" (I guess you mean lying?) then? @Ben - Reinstate Monica
(source: https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225)
So, it seems to me that applying statistics in the case of a pandemic is a dangerous thing. Lots of assumptions of all kinds must be true to conclude what has been concluded.
UPDATE
To address the point about the world data under/overdispersion,
library(data.table)
library(magrittr)
dat <- read.csv(url('https://covid.ourworldindata.org/data/owid-covid-data.csv'))
setDT(dat)
dt <-
dat[location == 'World', sum(new_cases), date] %>%
.[, date:= as.Date(date)] %>%
.[date >= '2020-04-01'] %>%
setorder(date)
min(dt$V1)
max(dt$V1)
mean(dt$V1)
var(dt$V1)
var(dt$V1) / mean(dt$V1) # huge overdispersion, indeed
plot(dt$V1,type='l')
acf(dt$V1)
I got data from April 1st till today (as a more stationary, plateau phase).
The calculation showed that the variance-to-mean ratio is 1083. This is huge overdispersion. My naked-eye analysis was wrong.
There is significant weekly autocorrelation present.
This can be one of the reasons for the higher variance, but is it enough? And why is there a daily pattern? Is it still a Poisson process, or lying statistics worldwide?
1) Why Poisson? Cases generation process is intristically interdependent as a pandemic interaction between ill and healthy, so case occurence | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so from the statistics viewpoint?
Interesting points from everyone. Let me contradict some.
1) Why Poisson? The case-generation process is intrinsically interdependent, as a pandemic is an interaction between the ill and the healthy, so case occurrence in a time interval may be affected by occurrences in previous intervals. The dependency may be complicated but strong.
UDPATE (as of May 23rd)
1.1) Imagine the physics of the process.
a) A person is healthy ->
b) They get infected from a covid-positive one ->
c) they feel sick and go to a hospital ->
d) they get screened after - and very likely - waiting in line, or
time table slot ->
e) the lab processes tests and determines new positives ->
f) a report goes to a ministry and gets summarized for a daily
report.
I would like to insist again, after long discussion and downvotings I got, that when you see the stage F reports, you should understand that events occurred as a function of a lot of human interactions, and it is important they were accumulated to pass a "bottleneck" of either: their own time to visit a doctor, the doctor appointment time table, or laboratory test processing limits. All of these make it non-Poissonian, as we don't use the Poisson for events that wait in a line. I think that it is mostly about lab tests that are made by humans who work with average capacity and cannot process too many per day. It is also possible that the final reporting stage accumulates information in a sort of buckets.
My point is that it is not Poisson, or generalization. It is the "Poisson with waiting in line and data accumulation in time periods". I don't see 100% evidence of "Soviet-style data manipulations". It could be just bulks of pre-processed data up to report.
2) For the Krasnodar region the daily mean seems to be non-stationary. It is not good at all to approach these data from a Poisson viewpoint, or at least one should take only the stationary part of the series.
These points are about violations of two major Poisson distribution assumptions.
3) Why 100 tests per day? It is official information that in Russia (and I am in Russia, reading the news constantly) 7.5 million tests have been made so far, with about 330,000 cases confirmed (as of May 22nd). The proportion of positives is less than 5%. With this, you should expect at least 2,000 tests per day to be allowed. This could be real, as tests are scarce and expensive items, and not only in Krasnodar, Russia, or Europe. It is everywhere the same. @Aksakal
(source: https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225)
4) Why ever would you think these are "Soviet data"? Look at the World data for new covid cases. It is extremely low-variance if you think it must be Poisson (a sum of Poissons is a Poisson). Is the World "Soviet" (I guess you mean lying?) then? @Ben - Reinstate Monica
(source: https://yandex.ru/covid19/stat?utm_source=main_title&geoId=225)
So, it seems to me that applying statistics in the case of a pandemic is a dangerous thing. Lots of assumptions of all kinds must be true to conclude what has been concluded.
UPDATE
To address the point about the world data under/overdispersion,
library(data.table)
library(magrittr)
dat <- read.csv(url('https://covid.ourworldindata.org/data/owid-covid-data.csv'))
setDT(dat)
dt <-
dat[location == 'World', sum(new_cases), date] %>%
.[, date:= as.Date(date)] %>%
.[date >= '2020-04-01'] %>%
setorder(date)
min(dt$V1)
max(dt$V1)
mean(dt$V1)
var(dt$V1)
var(dt$V1) / mean(dt$V1) # huge overdispersion, indeed
plot(dt$V1,type='l')
acf(dt$V1)
I got data from April 1st till today (as a more stationary, plateau phase).
The calculation showed that the variance-to-mean ratio is 1083. This is huge overdispersion. My naked-eye analysis was wrong.
There is significant weekly autocorrelation present.
This can be one of the reasons for higher variance, but is it enough? And why is there a daily pattern? Is it still the Poisson process or lying statistics worldwide? | A chart of daily cases of COVID-19 in a Russian region looks suspiciously level to me - is this so f
Interesting points from everyone. Let me contradict some.
1) Why Poisson? Cases generation process is intristically interdependent as a pandemic interaction between ill and healthy, so case occurence |
3,881 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | It has quite a nice intuition in the Bayesian framework. Consider that the regularized cost function $J$ plays a similar role to the probability of a parameter configuration $\theta$ given the observations $X, y$. Applying Bayes' theorem, we get:
$$P(\theta|X,y) = \frac{P(X,y|\theta)P(\theta)}{P(X,y)}.$$
Taking the log of the expression gives us:
$$\log P(\theta|X,y) = \log P(X,y|\theta) + \log P(\theta) - \log P(X,y).$$
Now, let's say $J(\theta)$ is the negative1 log-posterior, $-\log P(\theta|X,y)$. Since the last term does not depend on $\theta$, we can omit it without changing the minimum. You are left with two terms: 1) the likelihood term $\log P(X,y|\theta)$ depending on $X$ and $y$, and 2) the prior term $ \log P(\theta)$ depending on $\theta$ only. These two terms correspond exactly to the data term and the regularization term in your formula.
You can go even further and show that the loss function which you posted corresponds exactly to the following model:
$$P(X,y|\theta) = \mathcal{N}(y|\theta X, \sigma_1^2),$$
$$P(\theta) = \mathcal{N}(\theta | 0, \sigma_2^2),$$
where parameters $\theta$ come from a zero-mean Gaussian distribution and the observations $y$ have zero-mean Gaussian noise. For more details see this answer.
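As a quick numerical illustration of this correspondence (my own sketch, not part of the original answer; here $\lambda$ plays the role of $\sigma_1^2/\sigma_2^2$, and I write the linear model as $X\theta$): directly minimizing the regularized least-squares cost gives the same $\theta$ as the closed-form ridge/MAP solution.
set.seed(0)
n <- 100; p <- 3
X <- matrix(rnorm(n * p), n, p)
theta_true <- c(2, -1, 0.5)
y <- X %*% theta_true + rnorm(n)
lambda <- 5
# the regularized cost: data term plus prior term, as in the decomposition above
J <- function(theta) 0.5 * sum((y - X %*% theta)^2) + 0.5 * lambda * sum(theta^2)
theta_opt <- optim(rep(0, p), J, method = "BFGS")$par
# closed-form ridge solution, i.e. the MAP estimate under the Gaussian model above
theta_map <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
round(cbind(theta_opt, theta_map), 3)  # the two columns agree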
1 Negative since you want to maximize the probability but minimize the cost. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | It has quite a nice intuition in the Bayesian framework. Consider that the regularized cost function $J$ has a similar role as the probability of a parameter configuration $\theta$ given the observati | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
It has quite a nice intuition in the Bayesian framework. Consider that the regularized cost function $J$ plays a similar role to the probability of a parameter configuration $\theta$ given the observations $X, y$. Applying Bayes' theorem, we get:
$$P(\theta|X,y) = \frac{P(X,y|\theta)P(\theta)}{P(X,y)}.$$
Taking the log of the expression gives us:
$$\log P(\theta|X,y) = \log P(X,y|\theta) + \log P(\theta) - \log P(X,y).$$
Now, let's say $J(\theta)$ is the negative1 log-posterior, $-\log P(\theta|X,y)$. Since the last term does not depend on $\theta$, we can omit it without changing the minimum. You are left with two terms: 1) the likelihood term $\log P(X,y|\theta)$ depending on $X$ and $y$, and 2) the prior term $ \log P(\theta)$ depending on $\theta$ only. These two terms correspond exactly to the data term and the regularization term in your formula.
You can go even further and show that the loss function which you posted corresponds exactly to the following model:
$$P(X,y|\theta) = \mathcal{N}(y|\theta X, \sigma_1^2),$$
$$P(\theta) = \mathcal{N}(\theta | 0, \sigma_2^2),$$
where parameters $\theta$ come from a zero-mean Gaussian distribution and the observations $y$ have zero-mean Gaussian noise. For more details see this answer.
1 Negative since you want to maximize the probability but minimize the cost. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
It has quite a nice intuition in the Bayesian framework. Consider that the regularized cost function $J$ has a similar role as the probability of a parameter configuration $\theta$ given the observati |
3,882 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | Jan and Cagdas give a good Bayesian reason, interpreting the regularizer as a prior. Here are some non-Bayesian ones:
If your unregularized objective is convex, and you add a convex regularizer, then your total objective will still be convex. This won't be true if you multiply it, or most other methods of combining. Convex optimization is really, really nice compared to non-convex optimization; if the convex formulation works, it's nicer to do that.
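For a tiny concrete example of the multiplicative failure (my own, not from the answer): $f(\theta) = (\theta - 1)^2 \cdot \theta^2$ is a product of two convex functions, yet it is not convex — it has minima at both $\theta = 0$ and $\theta = 1$ with a bump in between, so even in one dimension the product already creates multiple separated minima.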
Sometimes it leads to a very simple closed form, as wpof mentions is the case for ridge regression.
If you think of the problem you "really" want to solve as a problem with a hard constraint
$$
\min_{\theta : c(\theta) \le 0} J(\theta)
,$$
then its Lagrange dual is the problem
$$
\min_\theta J(\theta) + \lambda c(\theta)
.$$
Though you don't have to use Lagrange duality, a lot is understood about it.
As ogogmad mentioned, the representer theorem applies to the case of an additive penalty: if you want to optimize $f$ over a whole reproducing kernel Hilbert space of functions $\mathcal H$, then we know that the solution to optimization over the whole space
$$
\min_{f \in \mathcal H} J(f) + \lambda \lVert f \rVert_{\mathcal H}^2
$$
lies in a simple finite-dimensional subspace for many losses $J$; I don't know if this would hold for a multiplicative regularizer (though it might). This is the underpinning of kernel SVMs.
If you're doing deep learning or something non-convex anyway: additive losses give simple additive gradients. For the simple $L_2$ regularizer you gave, it becomes very simple weight decay. But even for a more complicated regularizer, say the WGAN-GP's loss
$$
\sum_{x,y}
\underbrace{f_\theta(x) - f_\theta(y)}_\text{the loss} + \lambda
\underbrace{\mathbb{\hat E}_{\alpha \sim \mathrm{Uniform}(0, 1)} \left( \lVert \nabla f_\theta(\alpha x + (1 - \alpha) y) \rVert - 1\right)^2}_\text{the regularizer},
$$ it's easier for backpropagation to compute gradients when it only has to consider the sum of the loss and the complicated regularizer (considering things separately), instead of having to do the product rule.
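To make the weight-decay remark concrete (my own one-line derivation, not part of the original answer, writing the penalty as $\frac{\alpha}{2}\lVert\theta\rVert_2^2$):
$$\nabla_\theta\Big[J(\theta) + \tfrac{\alpha}{2}\lVert\theta\rVert_2^2\Big] = \nabla_\theta J(\theta) + \alpha\theta \quad\Longrightarrow\quad \theta \leftarrow (1-\eta\alpha)\,\theta - \eta\,\nabla_\theta J(\theta),$$
i.e. each gradient step is the usual step plus a multiplicative shrink of the weights toward zero.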
Additive losses are also amenable to the popular ADMM optimization algorithm, and other "decomposition"-based algorithms.
None of these are hard-and-fast rules, and indeed sometimes a multiplicative (or some other) regularizer might work better (as ogogmad points out). (In fact, I just the other day submitted a paper about how something you could interpret as a multiplicative regularizer does better than the WGAN-GP additive one above!) But hopefully this helps explain why additive regularizers are "the default." | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | Jan and Cagdas give a good Bayesian reason, interpreting the regularizer as a prior. Here are some non-Bayesian ones:
If your unregularized objective is convex, and you add a convex regularizer, then | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
Jan and Cagdas give a good Bayesian reason, interpreting the regularizer as a prior. Here are some non-Bayesian ones:
If your unregularized objective is convex, and you add a convex regularizer, then your total objective will still be convex. This won't be true if you multiply it, or most other methods of combining. Convex optimization is really, really nice compared to non-convex optimization; if the convex formulation works, it's nicer to do that.
Sometimes it leads to a very simple closed form, as wpof mentions is the case for ridge regression.
If you think of the problem you "really" want to solve as a problem with a hard constraint
$$
\min_{\theta : c(\theta) \le 0} J(\theta)
,$$
then its Lagrange dual is the problem
$$
\min_\theta J(\theta) + \lambda c(\theta)
.$$
Though you don't have to use Lagrange duality, a lot is understood about it.
As ogogmad mentioned, the representer theorem applies to the case of an additive penalty: if you want to optimize $f$ over a whole reproducing kernel Hilbert space of functions $\mathcal H$, then we know that the solution to optimization over the whole space
$$
\min_{f \in \mathcal H} J(f) + \lambda \lVert f \rVert_{\mathcal H}^2
$$
lies in a simple finite-dimensional subspace for many losses $J$; I don't know if this would hold for a multiplicative regularizer (though it might). This is the underpinning of kernel SVMs.
If you're doing deep learning or something non-convex anyway: additive losses give simple additive gradients. For the simple $L_2$ regularizer you gave, it becomes very simple weight decay. But even for a more complicated regularizer, say the WGAN-GP's loss
$$
\sum_{x,y}
\underbrace{f_\theta(x) - f_\theta(y)}_\text{the loss} + \lambda
\underbrace{\mathbb{\hat E}_{\alpha \sim \mathrm{Uniform}(0, 1)} \left( \lVert \nabla f_\theta(\alpha x + (1 - \alpha) y) \rVert - 1\right)^2}_\text{the regularizer},
$$ it's easier for backpropagation to compute gradients when it only has to consider the sum of the loss and the complicated regularizer (considering things separately), instead of having to do the product rule.
Additive losses are also amenable to the popular ADMM optimization algorithm, and other "decomposition"-based algorithms.
None of these are hard-and-fast rules, and indeed sometimes a multiplicative (or some other) regularizer might work better (as ogogmad points out). (In fact, I just the other day submitted a paper about how something you could interpret as a multiplicative regularizer does better than the WGAN-GP additive one above!) But hopefully this helps explain why additive regularizers are "the default." | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
Jan and Cagdas give a good Bayesian reason, interpreting the regularizer as a prior. Here are some non-Bayesian ones:
If your unregularized objective is convex, and you add a convex regularizer, then |
3,883 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | You want to minimize both terms in the objective function. Therefore, you need to decouple the terms. If you multiply the terms you can have one term large and the other very low. So, you still end up with a low value of the objective function, but with an undesirable result.
You may end up with a model that has most variables close to zero with no predictive power.
The objective function, which is the function that is to be minimized, can be constructed as the sum of cost function and regularization terms.
In case both are independent on each other, you get the values illustrated in the first figure for the objective. You see in case of the sum, there is only one minimum at (0, 0). In case of the product you have ambiguity. You have a whole hyper-surface equal to zero at (x=0 or y=0). So, the optimization algorithm can end up anywhere depending on your initialization. And it cannot decide which solution is better. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | You want to minimize both terms in the objective function. Therefore, you need to decouple the terms. If you multiply the terms you can have one term large and the other very low. So, you still end up | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
You want to minimize both terms in the objective function. Therefore, you need to decouple the terms. If you multiply the terms you can have one term large and the other very low. So, you still end up with a low value of the objective function, but with an undesirable result.
You may end up with a model that has most variables close to zero with no predictive power.
The objective function, which is the function that is to be minimized, can be constructed as the sum of cost function and regularization terms.
In case both are independent on each other, you get the values illustrated in the first figure for the objective. You see in case of the sum, there is only one minimum at (0, 0). In case of the product you have ambiguity. You have a whole hyper-surface equal to zero at (x=0 or y=0). So, the optimization algorithm can end up anywhere depending on your initialization. And it cannot decide which solution is better. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
You want to minimize both terms in the objective function. Therefore, you need to decouple the terms. If you multiply the terms you can have one term large and the other very low. So, you still end up |
3,884 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | You can try other binary operations ($\max,\min,\times$) and see how they compare.
The problem with $\min$ and $\times$ is that if the error is $0$, then the regularized penalty will end up being $0$. This allows the model to overfit.
The problem with $\max$ is that you end up minimizing the "harder" of the two penalties (training error or regularization) but not the other.
In contrast, $+$ is simple and it works.
You might ask why not other binary operations? There's no argument that could rule them out, so why not indeed? | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | You can try other binary operations ($\max,\min,\times$) and see how they compare.
The problem with $\min$ and $\times$ is that if the error is $0$, then the regularized penalty will end up being $0$. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
You can try other binary operations ($\max,\min,\times$) and see how they compare.
The problem with $\min$ and $\times$ is that if the error is $0$, then the regularized penalty will end up being $0$. This allows the model to overfit.
The problem with $\max$ is that you end up minimizing the "harder" of the two penalties (training error or regularization) but not the other.
In contrast, $+$ is simple and it works.
You might ask why not other binary operations? There's no argument that could rule them out, so why not indeed? | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
You can try other binary operations ($\max,\min,\times$) and see how they compare.
The problem with $\min$ and $\times$ is that if the error is $0$, then the regularized penalty will end up being $0$. |
3,885 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | I think you have a valid question. To give you a proper answer you will have to understand the probabilistic nature of the problem.
In general, the problem we are trying to solve is the following: given data $D$, what is the distribution of hypotheses that explains this data? When we say hypothesis we mean a PDF (at least in this context). And a distribution of hypotheses is a PDF of PDFs, i.e., $p(H | D)$.
$p(H | D)$ is a distribution over hypotheses given $D$. If we can find this then we can select one among these hypotheses, for example the one with the highest probability, or we may choose to average over all of them. A somewhat easier approach is to attack the problem from a different direction using the Bayes' Theorem.
$$p(H|D) = \frac{p(D|H)\times p(H)}{p(D)}$$
$p(D|H)$ is one of the hypotheses; it is also called the likelihood. $p(H)$ is the distribution of the hypotheses in our universe of hypotheses before observing the data. After we observe the data we update our beliefs.
$p(D)$ is the average of the likelihood over the hypotheses, weighted by our beliefs before we updated them.
Now if we take the $-\log$ of both sides of Bayes' equation we get:
$$-\log [p(H|D)] = -\log [p(D|H)] -\log [p(H)] + \log [p(D)]$$
Usually $p(D)$ is difficult to calculate. The good thing is it doesn't affect the result. It is simply a normalization constant.
Now for example if our set of hypotheses $p(D|H)$ is a bunch of Gaussians with $p(y|X,\theta)\sim N(\theta X,\sigma)$ where we don't know $\theta$, but assume to know $\sigma$ (or at least assume that it is a constant), and moreover hypotheses themselves are distributed as a Gaussian with $p(H) = p(\theta) \sim N(0,\alpha^{-1} I)$ then plugging everything above looks something like:
$$-\log [p(H|D)] = \text{bunch of constants} + \frac{1}{2}(y-\theta X)^2 + \frac{1}{2}\alpha||\theta||^2 + {\rm constant}$$
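Spelling out the step from the two Gaussians to the quadratic terms (a small filler of mine, using the densities just defined and dropping additive constants):
$$-\log p(y|X,\theta) = \frac{1}{2\sigma^2}(y-\theta X)^2 + \text{const}, \qquad -\log p(\theta) = \frac{\alpha}{2}\|\theta\|^2 + \text{const},$$
and summing them (taking $\sigma = 1$, or absorbing it into $\alpha$) gives exactly the expression above.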
Now if we minimize this expression we find the hypothesis with the highest probability. Constants don't affect the minimization. This is the expression in your question.
The fact that we used Gaussians doesn't change the fact the regularization term is additional. It must be additive (in log terms or multiplicative in probabilities), there is no other choice. What will change if we use other distributions is the components of the addition. The cost/loss function you have provided is optimal for a specific scenario of Gaussians. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | I think you have a valid question. To give you a proper answer you will have to understand the probabilistic nature of the problem.
In general the problem we are trying to solve is the following: Give | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
I think you have a valid question. To give you a proper answer you will have to understand the probabilistic nature of the problem.
In general, the problem we are trying to solve is the following: given data $D$, what is the distribution of hypotheses that explains this data? When we say hypothesis we mean a PDF (at least in this context). And a distribution of hypotheses is a PDF of PDFs, i.e., $p(H | D)$.
$p(H | D)$ is a distribution over hypotheses given $D$. If we can find this then we can select one among these hypotheses, for example the one with the highest probability, or we may choose to average over all of them. A somewhat easier approach is to attack the problem from a different direction using the Bayes' Theorem.
$$p(H|D) = \frac{p(D|H)\times p(H)}{p(D)}$$
$p(D|H)$ is one of the hypotheses; it is also called the likelihood. $p(H)$ is the distribution of the hypotheses in our universe of hypotheses before observing the data. After we observe the data we update our beliefs.
$p(D)$ is the average of the likelihood over the hypotheses, weighted by our beliefs before we updated them.
Now if we take the $-\log$ of both sides of Bayes' equation we get:
$$-\log [p(H|D)] = -\log [p(D|H)] -\log [p(H)] + \log [p(D)]$$
Usually $p(D)$ is difficult to calculate. The good thing is it doesn't affect the result. It is simply a normalization constant.
Now for example if our set of hypotheses $p(D|H)$ is a bunch of Gaussians with $p(y|X,\theta)\sim N(\theta X,\sigma)$ where we don't know $\theta$, but assume to know $\sigma$ (or at least assume that it is a constant), and moreover hypotheses themselves are distributed as a Gaussian with $p(H) = p(\theta) \sim N(0,\alpha^{-1} I)$ then plugging everything above looks something like:
$$-\log [p(H|D)] = \text{bunch of constants} + \frac{1}{2}(y-\theta X)^2 + \frac{1}{2}\alpha||\theta||^2 + {\rm constant}$$
Now if we minimize this expression we find the hypothesis with the highest probability. Constants don't affect the minimization. This is the expression in your question.
The fact that we used Gaussians doesn't change the fact the regularization term is additional. It must be additive (in log terms or multiplicative in probabilities), there is no other choice. What will change if we use other distributions is the components of the addition. The cost/loss function you have provided is optimal for a specific scenario of Gaussians. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
I think you have a valid question. To give you a proper answer you will have to understand the probabilistic nature of the problem.
In general the problem we are trying to solve is the following: Give |
3,886 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | Ridge is a very convenient formulation. In contrast to the probabilistic answers, this answer does not give any interpretation of the estimate but instead explains why ridge is an old and obvious formulation.
In linear regression, the normal equations give
$\hat{\theta} = (X^TX)^{-1} X^T y$
But, the matrix $X^TX$ is sometimes not invertible; one way to adjust it is by adding a small element to the diagonal: $X^TX + \alpha I$.
This gives the solution: $\tilde{\theta} = (X^TX + \alpha I)^{-1} X^T y$;
then $\tilde{\theta}$ does not solve the original problem but instead the ridge problem. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | Ridge is a very convenient formulation. In contrast to the probabilistic answers, this answers does not give any interpretation of the estimate but instead explains why ridge is an old and obvious for | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
Ridge is a very convenient formulation. In contrast to the probabilistic answers, this answer does not give any interpretation of the estimate but instead explains why ridge is an old and obvious formulation.
In linear regression, the normal equations give
$\hat{\theta} = (X^TX)^{-1} X^T y$
But, the matrix $X^TX$ is sometimes not invertible; one way to adjust it is by adding a small element to the diagonal: $X^TX + \alpha I$.
This gives the solution: $\tilde{\theta} = (X^TX + \alpha I)^{-1} X^T y$;
then $\tilde{\theta}$ does not solve the original problem but instead the ridge problem. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
Ridge is a very convenient formulation. In contrast to the probabilistic answers, this answers does not give any interpretation of the estimate but instead explains why ridge is an old and obvious for |
3,887 | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | I think there is a more intuitive reason as to why we can't multiply by the regularisation term.
Let's take our penalty function to be the regular penalty function multiplied by a regularisation term, as you suggest.
$$J(θ)=(\frac{1}{2}(y−θX^T)(y−θX^T)^T)α‖θ‖^2_2$$
Here we create a global minimum of the penalty function where $α‖θ‖^2_2=0$.
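A toy numerical check of this (my own sketch, not part of the original answer):
set.seed(42)
x <- rnorm(50); y <- 3 * x + rnorm(50)  # data that clearly favour theta near 3
alpha <- 0.1
J_mult <- function(theta) sum((y - theta * x)^2) / 2 * alpha * theta^2  # the multiplied objective
J_mult(3)   # near the least-squares fit: a positive value
J_mult(0)   # ignore the data entirely: exactly 0, the global minimum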
In this case our model can produce high errors between the prediction and the data, but it doesn't matter: if the model parameter weights are all zero, our penalty function is zero, $J(θ=0)=0$.
Since, unless our model is completely perfect, the term $(\frac{1}{2}(y−θX^T)(y−θX^T)^T)$ can never be zero (the probability that there exists a set θ making our model 'perfect' is negligible for real data), our model should always tend to train towards the solution θ=0.
This is what it will return unless it gets stuck in a local minimum somewhere. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)? | I think there is a more intuitive reason as to why we can't multiply by the regularisation term.
Lets take our penalty function to the regular penalty function multiplied by a regularisation term like | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
I think there is a more intuitive reason as to why we can't multiply by the regularisation term.
Let's take our penalty function to be the regular penalty function multiplied by a regularisation term, as you suggest.
$$J(θ)=(\frac{1}{2}(y−θX^T)(y−θX^T)^T)α‖θ‖^2_2$$
Here we create a global minimum of the penalty function where $α‖θ‖^2_2=0$.
In this case our model can produce high errors between the prediction and the data, but it doesn't matter: if the model parameter weights are all zero, our penalty function is zero, $J(θ=0)=0$.
Since, unless our model is completely perfect, the term $(\frac{1}{2}(y−θX^T)(y−θX^T)^T)$ can never be zero (the probability that there exists a set θ making our model 'perfect' is negligible for real data), our model should always tend to train towards the solution θ=0.
This is what it will return unless it gets stuck in a local minimum somewhere. | Why is the regularization term *added* to the cost function (instead of multiplied etc.)?
I think there is a more intuitive reason as to why we can't multiply by the regularisation term.
Lets take our penalty function to the regular penalty function multiplied by a regularisation term like |
3,888 | Software needed to scrape data from graph [closed] | Check out the digitize package for R. It's designed to solve exactly this sort of problem. | Software needed to scrape data from graph [closed] | Check out the digitize package for R. It's designed to solve exactly this sort of problem. | Software needed to scrape data from graph [closed]
Check out the digitize package for R. It's designed to solve exactly this sort of problem. | Software needed to scrape data from graph [closed]
Check out the digitize package for R. It's designed to solve exactly this sort of problem.
3,889 | Software needed to scrape data from graph [closed] | graph digitizing software
There are many different options, but all basically use the same workflow:
upload an image
set the x and y scales by indicating the values at two points on each axis
indicate if the scale is linear, log, etc,
click on the points.
Some of the programs automatically recognize lines or points. I am usually after points, and I find them too inconsistent to be helpful even with 100s of points. I have not found one that recognizes different symbols. This feature could be worth the trouble for digitizing lines, but I have never had to do this.
The program returns each point as an x-y matrix.
Often it helps selecting points if the image is zoomed, either by uploading a zoomed version of the image or using the zooming feature available in some of the programs.
There are many programs, and they vary in extra features, usability, licensing, and cost. I have listed them below.
All of the ones I have used work fine. Except in contexts where measurement error is very small, error from graph scraping is insignificant (e.g. error from digitization << size of error bars or uncertainty in the estimate). I have not tested the accuracy of any of these programs, but it would be interesting to compare among users, among programs, and against the results of reproduced statistical analyses.
Programs I have used:
Digitizer (free software, GPL) auto point / line recognition. Available in Ubuntu repository (engauge-digitizer)
Get Data (shareware) has zoom window, auto point / line recognition
DigitizeIt (shareware) auto point / line recognition
ImageJ (open source, most extensible after R digitize)
R digitize (free, open source), because it simplifies the process of getting data from the graph into an analysis by keeping all of the steps in R. See the tutorial in R-Journal
GrabIt! (free demo, $69) Excel plug-in
WebPlotDigitizer (free, online). Browser based, extracts data from images. Reviewed here.
Programs I have not used:
GraphClick (Mac, $8)
g3data (open source - GNU GPL) Has zoom window, no auto-recognition. Available in Ubuntu repository.
GRABIT OpenSource (BSD) plugin that runs in a proprietary platform, Matlab
TL;DR: WebPlotDigitizer is available as a web application as well as a chrome plugin | Software needed to scrape data from graph [closed] | graph digitizing software
There are many different options, but all basically use the same workflow:
upload an image
set the x and y scales by indicating the values at two points on each axis
indic | Software needed to scrape data from graph [closed]
graph digitizing software
There are many different options, but all basically use the same workflow:
upload an image
set the x and y scales by indicating the values at two points on each axis
indicate if the scale is linear, log, etc,
click on the points.
Some of the programs automatically recognize lines or points. I am usually after points, and I find them too inconsistent to be helpful even with 100s of points. I have not found one that recognizes different symbols. This feature could be worth the trouble for digitizing lines, but I have never had to do this.
The program returns each point as an x-y matrix.
Often it helps selecting points if the image is zoomed, either by uploading a zoomed version of the image or using the zooming feature available in some of the programs.
There are many programs, and they vary in extra features, usability, licensing, and cost. I have listed them below.
All of the ones I have used work fine. Except in contexts where measurement error is very small, error from graph scraping is insignificant (e.g. error from digitization << size of error bars or uncertainty in the estimate). I have not tested the accuracy of any of these programs, but it would be interesting to compare among users, among programs, and against the results of reproduced statistical analyses.
Programs I have used:
Digitizer (free software, GPL) auto point / line recognition. Available in Ubuntu repository (engauge-digitizer)
Get Data (shareware) has zoom window, auto point / line recognition
DigitizeIt (shareware) auto point / line recognition
ImageJ (open source, most extensible after R digitize)
R digitize (free, open source), because it simplifies the process of getting data from the graph into an analysis by keeping all of the steps in R. See the tutorial in R-Journal
GrabIt! (free demo, $69) Excel plug-in
WebPlotDigitizer (free, online). Browser based, extracts data from images. Reviewed here.
Programs I have not used:
GraphClick (Mac, $8)
g3data (open source - GNU GPL) Has zoom window, no auto-recognition. Available in Ubuntu repository.
GRABIT OpenSource (BSD) plugin that runs in a proprietary platform, Matlab
TL;DR: WebPlotDigitizer is available as a web application as well as a chrome plugin | Software needed to scrape data from graph [closed]
graph digitizing software
There are many different options, but all basically use the same workflow:
upload an image
set the x and y scales by indicating the values at two points on each axis
indic |
3,890 | Software needed to scrape data from graph [closed] | Other answerers assume that you are dealing with a raster image of a graph. But nowadays the good practice is to publish graphs in vector form. In this case you can achieve much higher accuracy of the recovered data, and even estimate the recovery error, if you work with the code of the vector graph directly, without converting it to a raster image.
Since the papers are published online as PDF files, I assume that you have a PDF file which contains vector plot with data you wish to recover from it (get in numerical form) and estimate introduced recovery error.
First of all, PDF is a vector format which is basically textual (it can be read by a text editor). The problem is that it can (and almost always does) contain compressed data streams, which need to be uncompressed before they can be read in a text editor. These compressed data streams usually contain the information we need.
There are several ways to uncompress data streams in order to convert PDF file to a textual document with readable PDF code. Probably the simplest way is to use free QPDF utility with --stream-data=uncompress option:
qpdf infile.pdf --stream-data=uncompress -- outfile.pdf
Some other ways are described here and here.
The generated outfile.pdf can be opened by a text editor. Now you need the PDF Reference Manual 1.7 to understand what you see. Do not panic at this moment! You need to know only a few operators, described in the "TABLE 4.9 Path construction operators" on pages 226 - 227. The most important operators are (the first column contains the coordinate specification for an operator, the second contains the operator and the third is the operator name):
x y m moveto
x y l lineto
x y width height re rectangle
h closepath
In most cases it is sufficient to know these four operators for recovering the data.
Now you need to import the outfile.pdf file as text into some program where you can manipulate the data. I'll show how to do it with Mathematica.
Importing the file:
pdfCode = Import["outfile.pdf", "Text"];
Now I assume the simplest case: the graph contains a line which consists of many two-point segments. In this case each segment of the line is encoded like this:
268.79999 408.92975 m
272.39999 408.92975 l
Extracting all such segments from the PDF code:
lines = StringCases[pdfCode,
StartOfLine ~~ x1 : NumberString ~~ " " ~~ y1 : NumberString ~~ " m\n" ~~
x2 : NumberString ~~ " " ~~ y2 : NumberString ~~ " l\n"
:> ToExpression@{{x1, y1}, {x2, y2}}];
Visualizing them:
Graphics[{Line[lines]}]
You get something like this (the paper I am working with contains four graphs):
Each two adjacent segments share one point. So in this case you can turn the sequences of adjacent segments into paths:
paths = Split[lines, #1[[2]] == #2[[1]] &];
Now you can visualize all the paths separately:
Graphics[{Line /@ paths}]
From this figure you can select (by double-clicking) the path you are looking for, copy the graphics selection and paste it as a new Graphics. For converting it back to a list of points you take the element {1, 1, 1}. Now we have the points not in the coordinate system of the graph but in the coordinate system of the PDF file. We need to establish the relationship between them.
From the above plot you select ticks by hand (holding Shift for multiple selection), then copy them and paste as new Graphics. Here is how you can extract coordinates of horizontal ticks:
Now check the differences between ticks:
Differences[reHorTicks]
From these differences you can see how precise is positioning of the ticks in the PDF file. It gives an estimate of error introduced by converting original datapoints into vector graph included in the PDF file. If there are appreciable errors in ticks positioning you can reduce the error by fitting the coordinates of ticks to a linear model. This linear function now can be used to get original coordinates of points of the path (that is in the coordinate system of the plot). | Software needed to scrape data from graph [closed] | Other answerers assume that you deal with raster image of a graph. But nowadays the good practice is to publish graphs in vector form. In this case you can achieve much higher exactness of the recover | Software needed to scrape data from graph [closed]
Other answerers assume that you are dealing with a raster image of a graph. But nowadays the good practice is to publish graphs in vector form. In this case you can achieve much higher accuracy of the recovered data, and even estimate the recovery error, if you work with the code of the vector graph directly, without converting it to a raster image.
Since the papers are published online as PDF files, I assume that you have a PDF file which contains vector plot with data you wish to recover from it (get in numerical form) and estimate introduced recovery error.
First of all, PDF is a vector format which is basically textual (it can be read by a text editor). The problem is that it can (and almost always does) contain compressed data streams, which need to be uncompressed before they can be read in a text editor. These compressed data streams usually contain the information we need.
There are several ways to uncompress data streams in order to convert PDF file to a textual document with readable PDF code. Probably the simplest way is to use free QPDF utility with --stream-data=uncompress option:
qpdf infile.pdf --stream-data=uncompress -- outfile.pdf
Some other ways are described here and here.
The generated outfile.pdf can be opened by a text editor. Now you need the PDF Reference Manual 1.7 to understand what you see. Do not panic at this moment! You need to know only a few operators, described in the "TABLE 4.9 Path construction operators" on pages 226 - 227. The most important operators are (the first column contains the coordinate specification for an operator, the second contains the operator and the third is the operator name):
x y m moveto
x y l lineto
x y width height re rectangle
h closepath
In most cases it is sufficient to know these four operators for recovering the data.
Now you need to import the outfile.pdf file as text into some program where you can manipulate the data. I'll show how to do it with Mathematica.
Importing the file:
pdfCode = Import["outfile.pdf", "Text"];
Now I assume the simplest case: the graph contains a line which consists of many two-point segments. In this case each segment of the line is encoded like this:
268.79999 408.92975 m
272.39999 408.92975 l
Extracting all such segments from the PDF code:
lines = StringCases[pdfCode,
StartOfLine ~~ x1 : NumberString ~~ " " ~~ y1 : NumberString ~~ " m\n" ~~
x2 : NumberString ~~ " " ~~ y2 : NumberString ~~ " l\n"
:> ToExpression@{{x1, y1}, {x2, y2}}];
Visualizing them:
Graphics[{Line[lines]}]
You get something like this (the paper I am working with contains four graphs):
Each two adjacent segments share one point. So in this case you can turn the sequences of adjacent segments into paths:
paths = Split[lines, #1[[2]] == #2[[1]] &];
Now you can visualize all the paths separately:
Graphics[{Line /@ paths}]
From this figure you can select (by double-clicking) the path you are looking for, copy the graphics selection and paste it as a new Graphics. For converting it back to a list of points you take the element {1, 1, 1}. Now we have the points not in the coordinate system of the graph but in the coordinate system of the PDF file. We need to establish the relationship between them.
From the above plot you select ticks by hand (holding Shift for multiple selection), then copy them and paste as new Graphics. Here is how you can extract coordinates of horizontal ticks:
Now check the differences between ticks:
Differences[reHorTicks]
From these differences you can see how precise is positioning of the ticks in the PDF file. It gives an estimate of error introduced by converting original datapoints into vector graph included in the PDF file. If there are appreciable errors in ticks positioning you can reduce the error by fitting the coordinates of ticks to a linear model. This linear function now can be used to get original coordinates of points of the path (that is in the coordinate system of the plot). | Software needed to scrape data from graph [closed]
Other answerers assume that you deal with raster image of a graph. But nowadays the good practice is to publish graphs in vector form. In this case you can achieve much higher exactness of the recover |
3,891 | Software needed to scrape data from graph [closed] | I haven't used it, but UWA CogSci lab recommend DataThief (shareware). | Software needed to scrape data from graph [closed] | I haven't used it, but UWA CogSci lab recommend DataThief (shareware). | Software needed to scrape data from graph [closed]
I haven't used it, but UWA CogSci lab recommend DataThief (shareware). | Software needed to scrape data from graph [closed]
I haven't used it, but UWA CogSci lab recommend DataThief (shareware). |
3,892 | Software needed to scrape data from graph [closed] | Check out engauge. It's free and open source
http://digitizer.sourceforge.net/ | Software needed to scrape data from graph [closed] | Check out engauge. Its free and open source
http://digitizer.sourceforge.net/ | Software needed to scrape data from graph [closed]
Check out engauge. It's free and open source
http://digitizer.sourceforge.net/ | Software needed to scrape data from graph [closed]
Check out engauge. It's free and open source
http://digitizer.sourceforge.net/ |
3,893 | Software needed to scrape data from graph [closed] | Un-Scan-It
http://www.silkscientific.com/graph-digitizer.htm | Software needed to scrape data from graph [closed] | Un-Scan-It
http://www.silkscientific.com/graph-digitizer.htm | Software needed to scrape data from graph [closed]
Un-Scan-It
http://www.silkscientific.com/graph-digitizer.htm | Software needed to scrape data from graph [closed]
Un-Scan-It
http://www.silkscientific.com/graph-digitizer.htm |
3,894 | Software needed to scrape data from graph [closed] | Try scanit: http://amsterchem.com/scanit.html
It is free of charge, runs on Windows | Software needed to scrape data from graph [closed] | Try scanit: http://amsterchem.com/scanit.html
It is free of charge, runs on Windows | Software needed to scrape data from graph [closed]
Try scanit: http://amsterchem.com/scanit.html
It is free of charge, runs on Windows | Software needed to scrape data from graph [closed]
Try scanit: http://amsterchem.com/scanit.html
It is free of charge, runs on Windows |
3,895 | Software needed to scrape data from graph [closed] | You can also try im2graph (http://www.im2graph.co.il) to convert graphs to data. Works in Linux and Windows. | Software needed to scrape data from graph [closed] | You can also try im2graph (http://www.im2graph.co.il) to convert graphs to data. Works in Linux and Windows. | Software needed to scrape data from graph [closed]
You can also try im2graph (http://www.im2graph.co.il) to convert graphs to data. Works in Linux and Windows. | Software needed to scrape data from graph [closed]
You can also try im2graph (http://www.im2graph.co.il) to convert graphs to data. Works in Linux and Windows. |
3,896 | Software needed to scrape data from graph [closed] | 'g3data' is a software which can be used to serve your purpose. It's a free software and I have used it. You can download it from here: http://www.frantz.fi/software/g3data.php | Software needed to scrape data from graph [closed] | 'g3data' is a software which can be used to serve your purpose. It's a free software and I have used it. You can download it from here: http://www.frantz.fi/software/g3data.php | Software needed to scrape data from graph [closed]
'g3data' is a software which can be used to serve your purpose. It's a free software and I have used it. You can download it from here: http://www.frantz.fi/software/g3data.php | Software needed to scrape data from graph [closed]
'g3data' is a software which can be used to serve your purpose. It's a free software and I have used it. You can download it from here: http://www.frantz.fi/software/g3data.php |
3,897 | Software needed to scrape data from graph [closed] | I had to do this so many times in my career I eventually put together a javascript program which is available here:
http://kdusling.github.io/projects/DataGrab/index.html
Sorry, but you will still need to click on every single point. Though you can use the arrow keys which does save some wrist strain. | Software needed to scrape data from graph [closed] | I had to do this so many times in my career I eventually put together a javascript program which is available here:
http://kdusling.github.io/projects/DataGrab/index.html
Sorry, but you will still nee | Software needed to scrape data from graph [closed]
I had to do this so many times in my career I eventually put together a javascript program which is available here:
http://kdusling.github.io/projects/DataGrab/index.html
Sorry, but you will still need to click on every single point. Though you can use the arrow keys which does save some wrist strain. | Software needed to scrape data from graph [closed]
I had to do this so many times in my career I eventually put together a javascript program which is available here:
http://kdusling.github.io/projects/DataGrab/index.html
Sorry, but you will still nee |
3,898 | Software needed to scrape data from graph [closed] | STIPlotDigitizer has been newly released.
http://stiwww.com/product/software-techniques-plot-digitizer | Software needed to scrape data from graph [closed] | STIPlotDigitizer has been newly released.
http://stiwww.com/product/software-techniques-plot-digitizer | Software needed to scrape data from graph [closed]
STIPlotDigitizer has been newly released.
http://stiwww.com/product/software-techniques-plot-digitizer | Software needed to scrape data from graph [closed]
STIPlotDigitizer has been newly released.
http://stiwww.com/product/software-techniques-plot-digitizer |
3,899 | Software needed to scrape data from graph [closed] | For R users, the package grImport (on CRAN) can import vector graphics and convert them into objects that R can interpret. It assumes that one can convert PDF (or other vector format of interest) to PostScript format. This can be done for example with Inkscape: import (File > Import) your PDF page with your figure into Inkscape and File > Save As > Save as type: > PostScript *.ps. Once you have your *.ps file, follow the grImport vignette Importing Vector Graphics, the most relevant part being section '4.1. Scraping data from images'.
You will need ghostscript on your Operating System - try to download it from here.
Note, if you somehow run into the ghostscript error 'status 127' when you call grImport::PostScriptTrace, then follow the recommendation from here, which says to manually set the path to ghostscript on your machine.
Here is some sample R code to import a PostScript file into R:
install.packages("grImport")
require(grImport)
# if you get the ghostscript error 'status 127' then set the path to ghostscript, e.g.:
Sys.setenv(R_GSCMD = normalizePath("C:/Program Files/gs/gs9.22/bin/gswin64c.exe"))
PostScriptTrace(file = "graph.ps", outfilename = "graph.ps.xml")
my_fig <- readPicture(rgmlFile = "graph.ps.xml")
grid.picture(my_fig)
Note, if your graph is on a page in a multi page PDF file, then you can split the multi-page document with PDFTK builder. Import your one page PDF file in Ikscape and delete any extra elements (extra text, extra graph elements). This wil ease your work in R when trying to catch the coordinates of the graph elements you are interested in. | Software needed to scrape data from graph [closed] | For R users, the package grImport (on CRAN) can import vector graphics and convert them into objects that R can interpret. It assumes that one can convert PDF (or other vector format of interest) to P | Software needed to scrape data from graph [closed]
For R users, the package grImport (on CRAN) can import vector graphics and convert them into objects that R can interpret. It assumes that one can convert PDF (or other vector format of interest) to PostScript format. This can be done for example with Inkscape: import (File > Import) your PDF page with your figure into Inkscape and File > Save As > Save as type: > PostScript *.ps. Once you have your *.ps file, follow the grImport vignette Importing Vector Graphics, the most relevant part being section '4.1. Scraping data from images'.
You will need ghostscript on your Operating System - try to download it from here.
Note, if you somehow run into the ghostscript error 'status 127' when you call grImport::PostScriptTrace, then follow the recommendation from here, which says to manually set the path to ghostscript on your machine.
Here is some sample R code to import a PostScript file into R:
install.packages("grImport")
require(grImport)
# if you get the ghostscript error 'status 127' then set the path to ghostscript, e.g.:
Sys.setenv(R_GSCMD = normalizePath("C:/Program Files/gs/gs9.22/bin/gswin64c.exe"))
PostScriptTrace(file = "graph.ps", outfilename = "graph.ps.xml")
my_fig <- readPicture(rgmlFile = "graph.ps.xml")
grid.picture(my_fig)
Note, if your graph is on a page in a multi-page PDF file, then you can split the multi-page document with PDFTK builder. Import your one-page PDF file in Inkscape and delete any extra elements (extra text, extra graph elements). This will ease your work in R when trying to catch the coordinates of the graph elements you are interested in. | Software needed to scrape data from graph [closed]
For R users, the package grImport (on CRAN) can import vector graphics and convert them into objects that R can interpret. It assumes that one can convert PDF (or other vector format of interest) to P |
3,900 | Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function? | KL divergence is a natural way to measure the difference between two probability distributions. The entropy $H(p)$ of a distribution $p$ gives the minimum possible number of bits per message that would be needed (on average) to losslessly encode events drawn from $p$. Achieving this bound would require using an optimal code designed for $p$, which assigns shorter code words to higher probability events. $D_{KL}(p \parallel q)$ can be interpreted as the expected number of extra bits per message needed to encode events drawn from true distribution $p$, if using an optimal code for distribution $q$ rather than $p$. It has some nice properties for comparing distributions. For example, if $p$ and $q$ are equal, then the KL divergence is 0.
The cross entropy $H(p, q)$ can be interpreted as the number of bits per message needed (on average) to encode events drawn from true distribution $p$, if using an optimal code for distribution $q$. Note the difference: $D_{KL}(p \parallel q)$ measures the average number of extra bits per message, whereas $H(p, q)$ measures the average number of total bits per message. It's true that, for fixed $p$, $H(p, q)$ will grow as $q$ becomes increasingly different from $p$. But, if $p$ isn't held fixed, it's hard to interpret $H(p, q)$ as an absolute measure of the difference, because it grows with the entropy of $p$.
KL divergence and cross entropy are related as:
$$D_{KL}(p \parallel q) = H(p, q) - H(p)$$
We can see from this expression that, when $p$ and $q$ are equal, the cross entropy is not zero; rather, it's equal to the entropy of $p$.
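A small numeric check of this identity (my own sketch, not part of the original answer):
p <- c(0.7, 0.2, 0.1)
q <- c(0.5, 0.3, 0.2)
H_p  <- -sum(p * log2(p))       # entropy of p, in bits
H_pq <- -sum(p * log2(q))       # cross entropy of p and q, in bits
KL   <-  sum(p * log2(p / q))   # KL divergence, in bits
all.equal(KL, H_pq - H_p)       # TRUE, and setting q <- p makes KL zero while H_pq stays at H_p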
Cross entropy commonly shows up in loss functions in machine learning. In many of these situations, $p$ is treated as the 'true' distribution, and $q$ as the model that we're trying to optimize. For example, in classification problems, the commonly used cross entropy loss (aka log loss) measures the cross entropy between the empirical distribution of the labels (given the inputs) and the distribution predicted by the classifier. The empirical distribution for each data point simply assigns probability 1 to the class of that data point, and 0 to all other classes. Side note: The cross entropy in this case turns out to be proportional to the negative log likelihood, so minimizing it is equivalent to maximizing the likelihood.
Note that $p$ (the empirical distribution in this example) is fixed. So, it would be equivalent to say that we're minimizing the KL divergence between the empirical distribution and the predicted distribution. As we can see in the expression above, the two are related by the additive term $H(p)$ (the entropy of the empirical distribution). Because $p$ is fixed, $H(p)$ doesn't change with the parameters of the model, and can be disregarded in the loss function. We might still want to talk about the KL divergence for theoretical/philosophical reasons but, in this case, they're equivalent from the perspective of solving the optimization problem. This may not be true for other uses of cross entropy and KL divergence, where $p$ might vary.
t-SNE fits a distribution $p$ in the input space. Each data point is mapped into the embedding space, where corresponding distribution $q$ is fit. The algorithm attempts to adjust the embedding to minimize $D_{KL}(p \parallel q)$. As above, $p$ is held fixed. So, from the perspective of the optimization problem, minimizing the KL divergence and minimizing the cross entropy are equivalent. Indeed, van der Maaten and Hinton (2008) say in section 2: "A natural measure of the faithfulness with which $q_{j \mid i}$ models $p_{j \mid i}$ is the Kullback-Leibler divergence (which is in this case equal to the cross-entropy up to an additive constant)."
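For reference (not stated explicitly above, but this is the cost function from the cited paper): the symmetric t-SNE objective is $C = D_{KL}(P \parallel Q) = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}}$, and since $\sum_{i,j} p_{ij} \log p_{ij}$ does not depend on the embedding, minimizing $C$ over the embedding is the same as minimizing the cross entropy $-\sum_{i,j} p_{ij} \log q_{ij}$.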
van der Maaten and Hinton (2008). Visualizing data using t-SNE. | Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function? | KL divergence is a natural way to measure the difference between two probability distributions. The entropy $H(p)$ of a distribution $p$ gives the minimum possible number of bits per message that woul | Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function?
KL divergence is a natural way to measure the difference between two probability distributions. The entropy $H(p)$ of a distribution $p$ gives the minimum possible number of bits per message that would be needed (on average) to losslessly encode events drawn from $p$. Achieving this bound would require using an optimal code designed for $p$, which assigns shorter code words to higher probability events. $D_{KL}(p \parallel q)$ can be interpreted as the expected number of extra bits per message needed to encode events drawn from true distribution $p$, if using an optimal code for distribution $q$ rather than $p$. It has some nice properties for comparing distributions. For example, if $p$ and $q$ are equal, then the KL divergence is 0.
The cross entropy $H(p, q)$ can be interpreted as the number of bits per message needed (on average) to encode events drawn from true distribution $p$, if using an optimal code for distribution $q$. Note the difference: $D_{KL}(p \parallel q)$ measures the average number of extra bits per message, whereas $H(p, q)$ measures the average number of total bits per message. It's true that, for fixed $p$, $H(p, q)$ will grow as $q$ becomes increasingly different from $p$. But, if $p$ isn't held fixed, it's hard to interpret $H(p, q)$ as an absolute measure of the difference, because it grows with the entropy of $p$.
KL divergence and cross entropy are related as:
$$D_{KL}(p \parallel q) = H(p, q) - H(p)$$
We can see from this expression that, when $p$ and $q$ are equal, the cross entropy is not zero; rather, it's equal to the entropy of $p$.
Cross entropy commonly shows up in loss functions in machine learning. In many of these situations, $p$ is treated as the 'true' distribution, and $q$ as the model that we're trying to optimize. For example, in classification problems, the commonly used cross entropy loss (aka log loss) measures the cross entropy between the empirical distribution of the labels (given the inputs) and the distribution predicted by the classifier. The empirical distribution for each data point simply assigns probability 1 to the class of that data point, and 0 to all other classes. Side note: The cross entropy in this case turns out to be proportional to the negative log likelihood, so minimizing it is equivalent to maximizing the likelihood.
Note that $p$ (the empirical distribution in this example) is fixed. So, it would be equivalent to say that we're minimizing the KL divergence between the empirical distribution and the predicted distribution. As we can see in the expression above, the two are related by the additive term $H(p)$ (the entropy of the empirical distribution). Because $p$ is fixed, $H(p)$ doesn't change with the parameters of the model, and can be disregarded in the loss function. We might still want to talk about the KL divergence for theoretical/philosophical reasons but, in this case, they're equivalent from the perspective of solving the optimization problem. This may not be true for other uses of cross entropy and KL divergence, where $p$ might vary.
t-SNE fits a distribution $p$ in the input space. Each data point is mapped into the embedding space, where corresponding distribution $q$ is fit. The algorithm attempts to adjust the embedding to minimize $D_{KL}(p \parallel q)$. As above, $p$ is held fixed. So, from the perspective of the optimization problem, minimizing the KL divergence and minimizing the cross entropy are equivalent. Indeed, van der Maaten and Hinton (2008) say in section 2: "A natural measure of the faithfulness with which $q_{j \mid i}$ models $p_{j \mid i}$ is the Kullback-Leibler divergence (which is in this case equal to the cross-entropy up to an additive constant)."
van der Maaten and Hinton (2008). Visualizing data using t-SNE. | Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function?
KL divergence is a natural way to measure the difference between two probability distributions. The entropy $H(p)$ of a distribution $p$ gives the minimum possible number of bits per message that woul |