3,401 | How to visualize what ANOVA does? | Personally, I like introducing linear regression and ANOVA by showing that they are essentially the same thing and that linear models amount to partitioning the total variance: we have some variance in the outcome that can be explained by the factors of interest, plus an unexplained part (called the 'residual'). I generally use an illustration with a gray line for the total variability and black lines for group- or individual-specific variability.
I also like the heplots R package, from Michael Friendly and John Fox, but see also Visual Hypothesis Tests in Multivariate Linear Models: The heplots Package for R.
Standard ways to explain what ANOVA actually does, especially in the Linear Model framework, are really well explained in Plane Answers to Complex Questions, by Christensen, but there are very few illustrations. Saville and Wood's Statistical Methods: The Geometric Approach has some examples, but mainly on regression. Montgomery's Design and Analysis of Experiments, which mostly focuses on DoE, has illustrations that I like, and I have drawn a couple of my own. :-)
But I think you have to look for textbooks on Linear Models if you want to see how sums of squares, errors, etc. translate into a vector space, as shown on Wikipedia. Estimation and Inference in Econometrics, by Davidson and MacKinnon, seems to have nice illustrations (the 1st chapter actually covers OLS geometry), but I have only browsed the French translation (available here). The Geometry of Linear Regression also has some good illustrations.
Edit:
Ah, and I just remembered this article by Robert Pruzek, A new graphic for one-way ANOVA.
Edit 2
And now, the granova package (mentioned by @gd047 and associated with the above paper) has been ported to ggplot2; see granovaGG for a one-way ANOVA illustration.
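For readers who want to try this kind of display themselves, here is a minimal sketch (not the original poster's code; it assumes the granovaGG package is installed and that granovagg.1w() accepts a response vector and a grouping factor, mirroring granova::granova.1w()):

library(granovaGG)
set.seed(1)
score <- c(rnorm(10, 10), rnorm(10, 12), rnorm(10, 15))   # simulated responses for 3 groups
group <- rep(c("A", "B", "C"), each = 10)
# 'Elemental graphic' for one-way ANOVA: group means plotted against their
# deviations from the grand mean, with the within-group spread visible per group.
granovagg.1w(score, group = group)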
3,402 | How to visualize what ANOVA does? | How about something like this?
Following Crawley (2005), Statistics: An Introduction Using R, Wiley.
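Since the figure itself is not reproduced here, a rough base-R sketch of the kind of plot Crawley uses (simulated data; this is not the book's code):

set.seed(1)
g <- gl(3, 8)                                    # 3 groups of 8 observations
y <- rnorm(24, mean = c(5, 7, 10)[g])
op <- par(mfrow = c(1, 2))
plot(y, pch = 21, bg = as.numeric(g), main = "Total SS")
abline(h = mean(y))                              # grand mean
segments(1:24, y, 1:24, mean(y))                 # deviations from the grand mean
plot(y, pch = 21, bg = as.numeric(g), main = "Error SS")
gm <- tapply(y, g, mean)
segments(1:24, y, 1:24, gm[g])                   # deviations from the group means
par(op)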
3,403 | How to visualize what ANOVA does? | Thank you for your great answers so far. While they were very enlightening, I felt that using them for the course I am currently teaching (well, TA'ing) would be too much for my students. (I help teach a Biostatistics course for students in advanced medical-science degree programs.)
Therefore, I ended up creating two images (both simulation based) which I think are useful examples for explaining ANOVA.
I would be happy to read comments or suggestions for improving them.
The first image shows a simulation of 30 data points, separated into 3 plots (showing how MST, the total variance, is decomposed into the parts of the data that produce MSB and MSW); a rough R sketch of such a simulation follows the list below:
The left plot shows a scatter plot of the data per group.
The middle one shows what the data we are going to use for MSB looks like.
The right image shows what the data we are going to use for MSW looks like.
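Here is a minimal base-R sketch of this kind of simulation (not the original poster's code; the group means and spread are arbitrary):

set.seed(42)
g <- gl(3, 10, labels = c("A", "B", "C"))         # 3 groups of 10 points each
y <- rnorm(30, mean = c(10, 12, 15)[g], sd = 2)   # 30 simulated observations
grand.mean <- mean(y)
group.means <- tapply(y, g, mean)
op <- par(mfrow = c(1, 3))
plot(as.numeric(g), y, main = "Data per group", xlab = "group")                  # left plot
abline(h = grand.mean, lty = 2)
plot(1:3, group.means, main = "Between (MSB)", ylim = range(y))                  # middle plot
abline(h = grand.mean, lty = 2)
plot(as.numeric(g), y - group.means[g], main = "Within (MSW)", xlab = "group")   # right plot
abline(h = 0, lty = 2)
par(op)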
The second image shows 4 plots, one for each combination of variance and expectation for the groups, where:
The first row of plots is for low variance, while the second row is for high(er) variance.
The first column of plots is for equal expectations between the groups, while the second column shows groups with (very) different expectations.
3,404 | How to visualize what ANOVA does? | Since we are gathering nice graphs of this type in this post, here is another one that I recently found and that may help you understand how ANOVA works and how the F statistic is generated. The graphic was created using the granova package in R.
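A minimal usage sketch, assuming the granova package is installed (simulated data; the data behind the original graphic are not shown here):

library(granova)
set.seed(7)
scores <- c(rnorm(12, 20), rnorm(12, 23), rnorm(12, 27))   # 3 simulated groups
groups <- rep(c("g1", "g2", "g3"), each = 12)
# One-way 'elemental graphic': group means against contrasts, with the
# within-group spread shown, which is what drives the F statistic.
granova.1w(scores, group = groups)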
3,405 | How to visualize what ANOVA does? | Check out Hadley Wickham's presentation (pdf, mirror) on ggplot.
Starting on pages 23-40 of this document he describes an interesting approach to visualizing ANOVAs.
*Link taken from: http://had.co.nz/ggplot2/
3,406 | How to visualize what ANOVA does? | Great question. You know, I've struggled myself with wrapping my head around ANOVA for a very long time. I always find myself going back to the "between versus within" intuition, and I've always tried to imagine what this would look like in my head. I'm glad this question came up, and I've been amazed by the varied approaches to this in the answers above.
Anyway, for a long time (years, even) I've been wanting to collect several plots in one place where I could see what was happening simultaneously from a lot of different directions: 1) how far apart the populations are, 2) how far apart the data are, 3) how big's the between compared to the within, and 4) how do the central versus noncentral F distributions compare?
In a truly great world, I could even play with sliders to see how sample size changes things.
So I've been playing with the manipulate command in RStudio, and holy cow, it works! Here is one of the plots, a snapshot, really:
If you have RStudio you can get the code for making the above plot (sliders and all!) on GitHub here.
After playing with this for a while, I am surprised at how well the F statistic distinguishes the groups, even for moderately small sample sizes. When I look at the populations, they really aren't that far apart (to my eye), yet the "within" bar is consistently dwarfed by the "between" bar. Learn something every day, I guess.
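The author's GitHub script is not reproduced here; the following is just a minimal sketch of the slider idea, assuming RStudio's manipulate package:

library(manipulate)
manipulate({
  g <- gl(3, n)                                        # 3 groups of n observations
  y <- rnorm(3 * n, mean = c(0, delta, 2 * delta)[g])  # group means 0, delta, 2*delta
  fit <- aov(y ~ g)
  boxplot(y ~ g, main = sprintf("F = %.2f", summary(fit)[[1]]$`F value`[1]))
}, n = slider(5, 100, initial = 20, step = 1),
   delta = slider(0, 2, initial = 0.5, step = 0.1))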
3,407 | How to visualize what ANOVA does? | Here are some representations of situations in which an ANOVA will conclude different levels of fit between $Y$ and $X$.
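Since those figures are not reproduced here, a small simulation sketch of the same idea (the group means are made up for illustration):

set.seed(3)
x <- gl(4, 25)                                            # grouping factor X
y.weak   <- rnorm(100, mean = c(0, 0.2, 0.1, 0.3)[x])     # group means barely differ: small F
y.strong <- rnorm(100, mean = c(0, 2, 4, 6)[x])           # group means differ a lot: large F
summary(aov(y.weak ~ x))
summary(aov(y.strong ~ x))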
3,408 | How to visualize what ANOVA does? | To illustrate what is going on with one-way ANOVA I have sometimes used an applet offered by the authors of "Introduction to the Practice of Statistics", which allows the students to play with within and between variances and observe their effect on the F statistic. Here is the link (the applet is the last one on the page). Sample screen shot:
The user controls the top slider, varying the vertical spreads of the three groups of data. The red dot at the bottom moves along the plot of p-values while the F-statistic shown beneath is updated.
3,409 | How to visualize what ANOVA does? | It seems the ship has already sailed in terms of an answer, but I think that if this is an introductory course, most of the displays offered here are going to be too difficult to grasp for introductory students... or at the very least too difficult to grasp without an introductory display which provides a very simplified explanation of partitioning variance. Show them how SST increases with the number of subjects. Then, after showing it inflate for several subjects (maybe adding one to each group several times), explain that SST = SSB + SSW (though I prefer to call it SSE from the outset because it avoids confusion when you go to the within-subjects test, IMO). Then show them a visual representation of the variance partitioning, e.g. a big square color-coded so that you can see how SST is made up of SSB and SSW. Then graphs similar to Tal's or EDi's may become useful, but I agree with EDi that the scale should be SS rather than MS for pedagogical purposes when first explaining things.
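A quick numeric sketch of the partition described above (simulated data; it simply verifies that SST = SSB + SSW):

set.seed(11)
g <- gl(3, 10)
y <- rnorm(30, mean = c(0, 1, 2)[g])
SST <- sum((y - mean(y))^2)              # total sum of squares
SSW <- sum((y - ave(y, g))^2)            # within-group (error) sum of squares
SSB <- sum((ave(y, g) - mean(y))^2)      # between-group sum of squares
c(SST = SST, SSB.plus.SSW = SSB + SSW)   # the two numbers agree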
3,410 | Interpreting Residual and Null Deviance in GLM R | Let LL = loglikelihood
Here is a quick summary of what you see from the summary(glm.fit) output,
Null Deviance = 2(LL(Saturated Model) - LL(Null Model)) on df = df_Sat - df_Null
Residual Deviance = 2(LL(Saturated Model) - LL(Proposed Model)) df = df_Sat - df_Proposed
The Saturated Model is a model that assumes each data point has its own parameters (which means you have n parameters to estimate.)
The Null Model assumes the exact "opposite", in that it assumes one parameter for all of the data points, which means you only estimate 1 parameter.
The Proposed Model assumes you can explain your data points with p parameters + an intercept term, so you have p+1 parameters.
If your Null Deviance is really small, it means that the Null Model explains the data pretty well. Likewise with your Residual Deviance.
What does really small mean? If your model is "good" then your Deviance is approx Chi^2 with (df_sat - df_model) degrees of freedom.
If you want to compare your Null model with your Proposed model, then you can look at
(Null Deviance - Residual Deviance) approx Chi^2 with df = df_Null - df_Proposed = (n-1) - (n-(p+1)) = p
Are the results you gave directly from R? They seem a little bit odd, because generally you should see that the degrees of freedom reported on the Null are always higher than the degrees of freedom reported on the Residual. That is because again, Null Deviance df = Saturated df - Null df = n-1
Residual Deviance df = Saturated df - Proposed df = n-(p+1)
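A minimal sketch showing where these quantities appear in R and how the null-vs-proposed comparison is done (simulated data, not the original poster's model):

set.seed(5)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- rbinom(200, 1, plogis(0.5 * x1 - 0.8 * x2))
fit <- glm(y ~ x1 + x2, family = binomial)
summary(fit)   # reports Null and Residual deviance, each with its degrees of freedom
# Chi-squared test of the proposed model against the null (intercept-only) model:
pchisq(fit$null.deviance - fit$deviance, fit$df.null - fit$df.residual, lower.tail = FALSE)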
3,411 | Interpreting Residual and Null Deviance in GLM R | The null deviance shows how well the response is predicted by the model with nothing but an intercept.
The residual deviance shows how well the response is predicted by the model when the predictors are included. From your example, it can be seen that the deviance drops by 3443.3 when 22 predictor variables are added (note: residual degrees of freedom = no. of observations – no. of parameters), so the predictors explain a substantial part of the variation. Still, the residual deviance that remains is large relative to its degrees of freedom, which is evidence of a significant lack of fit.
We can also use the residual deviance to test whether the null hypothesis is true (i.e. that the logistic regression model provides an adequate fit for the data). This is possible because, under that hypothesis, the deviance approximately follows a chi-squared distribution with the residual degrees of freedom. In order to test for significance, we can find the associated p-value using the formula below in R:
p-value = 1 - pchisq(deviance, degrees of freedom)
Using the above values of residual deviance and DF, you get a p-value of approximately zero, i.e. strong evidence against the null hypothesis of adequate fit (a significant lack of fit).
> 1 - pchisq(4589.4, 1099)
[1] 0
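As a complementary check (a sketch using only the numbers quoted in this answer: residual deviance 4589.4 on 1099 df, a drop of 3443.3 after adding 22 predictors), the null model can be compared with the proposed model as follows:

resid.dev <- 4589.4; resid.df <- 1099
null.dev  <- resid.dev + 3443.3          # null deviance implied by the quoted drop
null.df   <- resid.df + 22               # the intercept-only model has 22 more residual df
1 - pchisq(null.dev - resid.dev, null.df - resid.df)   # ~0: the predictors improve fit significantly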
3,412 | Interpreting Residual and Null Deviance in GLM R | While both answers given here are correct (and really useful resources), page 432 of Introduction to Linear Regression Analysis (Montgomery, Peck, Vining, 5E) gives a general rule of thumb: if
$$
\frac{D}{n-p} \gg 1,
$$
where $p$ is the number of regressors, $n$ is the number of observations and $D$ is the residual deviance, then the fit can be considered inadequate.
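In R this ratio can be read off directly from a fitted glm object; a small self-contained sketch (simulated Poisson data, not from the book):

set.seed(8)
x <- rnorm(100)
y <- rpois(100, exp(0.3 * x))
fit <- glm(y ~ x, family = poisson)
deviance(fit) / df.residual(fit)   # values much larger than 1 suggest an inadequate fit (or overdispersion)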
3,413 | Questions about how random effects are specified in lmer | I'm going to describe what model each of your calls to lmer() fits and how they are different and then answer your final question about selecting random effects.
Each of your three models contains fixed effects for practice, context and the interaction between the two. The random effects differ between the models.
lmer(ERPindex ~ practice*context + (1|participants), data=base)
contains a random intercept shared by individuals that have the same value for participants. That is, each participant's regression line is shifted up/down by a random amount with mean $0$.
lmer(ERPindex ~ practice*context + (1+practice|participants), data=base)
This model, in addition to a random intercept, also contains a random slope in practice. This means that the rate at which individuals learn from practice is different from person to person. If an individual has a positive random effect, then they increase more quickly with practice than the average, while a negative random effect indicates they learn less quickly with practice than the average, or possibly get worse with practice, depending on the variance of the random effect (this is assuming the fixed effect of practice is positive).
lmer(ERPindex ~ practice*context + (practice|participants) +
(practice|participants:context), data=base)
This model fits a random slope and intercept in practice (you have to do (practice-1|...) to suppress the intercept), just as the previous model did, but now you've also added a random slope and intercept in the factor participants:context, which is a new factor whose levels are every combination of the levels present in participants and context, and the corresponding random effects are shared by observations that have the same value of both participants and context. To fit this model you will need to have multiple observations that have the same values for both participants and context, or else the model is not estimable. In many situations, the groups created by this interaction variable are very sparse and result in very noisy/difficult-to-fit random effects models, so you want to be careful when using an interaction factor as a grouping variable.
Basically (read: without getting too complicated) random effects should be used when you think that the grouping variables define "pockets" of inhomogeneity in the data set or that individuals which share the level of the grouping factor should be correlated with each other (while individuals that do not should not be correlated) - the random effects accomplish this. If you think observations which share levels of both participants and context are more similar than the sum of the two parts then including the "interaction" random effect may be appropriate.
Edit: As @Henrik mentions in the comments, the models you fit, e.g.:
lmer(ERPindex ~ practice*context + (1+practice|participants), data=base)
make it so that the random slope and random intercept are correlated with each other, and that correlation is estimated by the model. To constrain the model so that the random slope and random intercept are uncorrelated (and therefore independent, since they are normally distributed), you'd instead fit the model:
lmer(ERPindex ~ practice*context + (1|participants) + (practice-1|participants),
data=base)
The choice between these two should be based on whether you think, for example, participants with a higher baseline than average (i.e. a positive random intercept) are also likely to have a higher rate of change than average (i.e. positive random slope). If so, you'd allow the two to be correlated whereas if not, you'd constrain them to be independent. (Again, this example assumes the fixed effect slope is positive).
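A sketch of how one might inspect and compare the two specifications, assuming the lme4 package and the data/variable names from the question:

library(lme4)
m.corr   <- lmer(ERPindex ~ practice * context + (1 + practice | participants), data = base)
m.uncorr <- lmer(ERPindex ~ practice * context + (1 | participants) +
                   (practice - 1 | participants), data = base)
VarCorr(m.corr)            # shows the estimated slope-intercept correlation
anova(m.uncorr, m.corr)    # likelihood-ratio test of whether that correlation is needed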
3,414 | Questions about how random effects are specified in lmer | @Macro has given a good answer here, I just want to add one small point. If some people in your situation are using:
lmer(ERPindex ~ practice*context + (practice|participants) +
(practice|participants:context), data=base)
I suspect they are making a mistake. Consider: (practice|participants) means that there is a random slope (and intercept) for the effect of practice for each participant, whereas (practice|participants:context) means that there is a random slope (and intercept) for the effect of practice for each participant by context combination. This is fine, if that's what they want, but I suspect they want (practice:context|participants), which means that there is a random slope (and intercept) for the interaction effect of practice by context for each participant.
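Spelled out as a full call (a sketch using the variable names from the question), that suggested specification would read:

lmer(ERPindex ~ practice*context + (practice:context|participants), data=base)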
3,415 | Questions about how random effects are specified in lmer | In a random effects or mixed effects model, a random effect is used when you want to treat the effect that you observed as if it were drawn from some probability distribution of effects.
One of the best examples I can give is when modeling clinical trial data from a multicentered clinical trial. A site (or center) effect is often modeled as a random effect. This is done because the 20 or so sites that were actually used in the trial were drawn from a much larger group of potential sites. In practice, the selection may not have been at random, but it still may be useful to treat it as if it were.
While the site effect could have been modeled as a fixed effect, it would be hard to generalize results to a larger population if we didn't take into account the fact that the effect for a different selected set of 20 sites would be different. Treating it as a random effect allows for us to account for it that way.
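In lmer syntax, such a site effect would be specified roughly as follows (a sketch; outcome, treatment, site and trial are hypothetical names, not from the question):

library(lme4)
# random intercept for each site, fixed effect of treatment
fit <- lmer(outcome ~ treatment + (1 | site), data = trial)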
3,416 | How does one interpret SVM feature weights? | For a general kernel it is difficult to interpret the SVM weights; however, for the linear SVM there actually is a useful interpretation:
1) Recall that in linear SVM, the result is a hyperplane that separates the classes as best as possible. The weights represent this hyperplane, by giving you the coordinates of a vector which is orthogonal to the hyperplane - these are the coefficients given by svm.coef_. Let's call this vector w.
2) What can we do with this vector? Its direction gives us the predicted class, so if you take the dot product of any point with the vector, you can tell on which side it is: if the dot product is positive, it belongs to the positive class; if it is negative, it belongs to the negative class.
3) Finally, you can even learn something about the importance of each feature. This is my own interpretation, so convince yourself first. Let's say the SVM found only one feature useful for separating the data; then the hyperplane would be orthogonal to that axis. So, you could say that the absolute size of the coefficient relative to the other ones gives an indication of how important the feature was for the separation. For example, if only the first coordinate is used for separation, w will be of the form (x,0) where x is some non-zero number and then |x|>0.
3,417 | How does one interpret SVM feature weights? | I am trying to interpret the variable weights given by fitting a linear SVM.
A good way to understand how the weights are calculated and how to interpret them in the case of linear SVM is to perform the calculations by hand on a very simple example.
Example
Consider the following dataset which is linearly separable
import numpy as np
X = np.array([[3,4],[1,4],[2,3],[6,-1],[7,-1],[5,-3]] )
y = np.array([-1,-1, -1, 1, 1 , 1 ])
Solving the SVM problem by inspection
By inspection we can see that the boundary line that separates the points with the largest "margin" is the line $x_2 = x_1 - 3$. Since the weights of the SVM are proportional to the equation of this decision line (hyperplane in higher dimensions) using $w^T x + b = 0$ a first guess of the parameters would be
$$ w = [1,-1] \ \ b = -3$$
SVM theory tells us that the "width" of the margin is given by $ \frac{2}{||w||}$.
Using the above guess we would obtain a width of $\frac{2}{\sqrt{2}} = \sqrt{2}$, which, by inspection, is incorrect: the width is $4 \sqrt{2}$.
Recall that scaling the boundary by a factor of $c$ does not change the boundary line, hence we can generalize the equation as
$$ cx_1 - cx_2 - 3c = 0$$
$$ w = [c,-c] \ \ b = -3c$$
Plugging back into the equation for the width we get
$$
\begin{aligned}
\frac{2}{||w||} & = 4 \sqrt{2} \\
\frac{2}{\sqrt{2}c} & = 4 \sqrt{2} \\
c & = \frac{1}{4}
\end{aligned}
$$
Hence the parameters (or coefficients) are in fact
$$ w = [\frac{1}{4},-\frac{1}{4}] \ \ b = -\frac{3}{4}$$
(I'm using scikit-learn)
So am I, here's some code to check our manual calculations
from sklearn.svm import SVC
clf = SVC(C = 1e5, kernel = 'linear')
clf.fit(X, y)
print('w = ',clf.coef_)
print('b = ',clf.intercept_)
print('Indices of support vectors = ', clf.support_)
print('Support vectors = ', clf.support_vectors_)
print('Number of support vectors for each class = ', clf.n_support_)
print('Coefficients of the support vector in the decision function = ', np.abs(clf.dual_coef_))
w = [[ 0.25 -0.25]] b = [-0.75]
Indices of support vectors = [2 3]
Support vectors = [[ 2. 3.] [ 6. -1.]]
Number of support vectors for each class = [1 1]
Coefficients of the support vector in the decision function = [[0.0625 0.0625]]
Does the sign of the weight have anything to do with class?
Not really, the sign of the weights has to do with the equation of the boundary plane.
Source
https://ai6034.mit.edu/wiki/images/SVM_and_Boosting.pdf
3,418 | How does one interpret SVM feature weights? | The documentation is pretty complete: for the multiclass case, SVC, which is based on the libsvm library, uses the one-vs-one setting. In the case of a linear kernel, n_classes * (n_classes - 1) / 2 individual linear binary models are fitted, one for each possible class pair. Hence the aggregate shape of all the primal parameters concatenated together is [n_classes * (n_classes - 1) / 2, n_features] (plus n_classes * (n_classes - 1) / 2 intercepts in the intercept_ attribute).
For the binary linear problem, plotting the separating hyperplane from the coef_ attribute is done in this example.
If you want the details on the meaning of the fitted parameters, especially for the non-linear kernel case, have a look at the mathematical formulation and the references mentioned in the documentation.
3,419 | How does one interpret SVM feature weights? | Check this paper on feature selection. The authors use the square of the weights (of attributes), as assigned by a linear kernel SVM, as a ranking metric for deciding the relevance of a particular attribute. This is one of the most highly cited ways of selecting genes from microarray data.
3,420 | How does one interpret SVM feature weights? | A great paper by Guyon and Elisseeff (2003), "An introduction to variable and feature selection", Journal of Machine Learning Research, 1157-1182, says:
"Constructing and selecting subsets of features that are useful to build a good predictor contrasts with the problem of finding or ranking all potentially relevant variables. Selecting the most relevant variables is usually suboptimal for building a predictor, particularly if the variables are redundant. Conversely, a subset of useful variables may exclude many redundant, but relevant, variables."
Therefore I recommend caution when interpreting weights of linear models in general (including logistic regression, linear regression and linear-kernel SVM). The SVM weights might compensate if the input data were not normalized. The SVM weight for a specific feature also depends on the other features, especially if the features are correlated. To determine the importance of individual features, feature ranking methods are a better choice.
"Constructing and selecting subsets of features th | How does one interpret SVM feature weights?
A great paper by Guyon and Elisseeff (2003). An introduction to variable and feature selection. Journal of machine learning research, 1157-1182 says:
"Constructing and selecting subsets of features that are useful to build a good predictor contrasts with the problem of finding or ranking all potentially relevant variables. Selecting the most relevant variables is usually suboptimal for building a predictor, particularly if the variables are redundant. Conversely, a subset of useful variables may exclude many redundant, but relevant, variables."
Therefore I recommend caution when interpreting weights of linear models in general (including logistic regression, linear regression and linear kernel SVM). The SVM weights might compensate if the input data was not normalized. The SVM weight for a specific feature depends also on the other features, especially if the features are correlated. To determine the importance of individual features, feature ranking methods are a better choice. | How does one interpret SVM feature weights?
A great paper by Guyon and Elisseeff (2003). An introduction to variable and feature selection. Journal of machine learning research, 1157-1182 says:
"Constructing and selecting subsets of features th |
3,421 | Why doesn't Random Forest handle missing values in predictors? | Gradient Boosting Trees uses CART trees (in a standard setup, as proposed by its authors). CART trees are also used in Random Forests. What @user777 said is true: RF trees handle missing values either by imputation with the average, by a rough average/mode, or by an averaging/mode based on proximities. These methods were proposed by Breiman and Cutler and are used for RF. This is a reference from the authors: Missing values in training set.
However, one can build a GBM or RF with other types of decision trees. The usual replacement for CART is C4.5, proposed by Quinlan. In C4.5 the missing values are not replaced in the data set. Instead, the impurity function takes the missing values into account by penalizing the impurity score with the ratio of missing values. At test time, when evaluation reaches a node whose test involves a variable with a missing value, a prediction is built for each child node and they are aggregated later (by weighting).
Now, in many implementations C4.5 is used instead of CART. The main reason is to avoid expensive computation (CART has more rigorous statistical approaches, which require more computation); the results seem to be similar, and the resulting trees are often smaller (since CART is binary and C4.5 is not). I know that Weka uses this approach. I do not know other libraries, but I expect it not to be a singular situation. If that is the case with your GBM implementation, then this would be an answer.
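For reference, the two strategies Breiman and Cutler describe are exposed in R's randomForest package roughly as follows (a sketch with simulated data and some values set to NA):

library(randomForest)
set.seed(9)
d <- data.frame(y = factor(rbinom(100, 1, 0.5)), x1 = rnorm(100), x2 = rnorm(100))
d$x1[sample(100, 10)] <- NA                      # introduce some missing predictor values
# 1) rough fix: impute NAs with the column median (numeric) or mode (factor)
rf1 <- randomForest(y ~ ., data = d, na.action = na.roughfix)
# 2) proximity-based imputation: iteratively refine imputations using RF proximities
d.imputed <- rfImpute(y ~ ., data = d)
rf2 <- randomForest(y ~ ., data = d.imputed)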
3,422 | Why doesn't Random Forest handle missing values in predictors? | "What are [the] theoretical reasons [for RF] to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?"
RF does handle missing values, just not in the same way that CART and other similar decision tree algorithms do. User777 correctly describes the two methods used by RF to handle missing data (median imputation and/or proximity-based measure), whereas Frank Harrell correctly describes how missing values are handled in CART (surrogate splits). For more info, see links on missing data handling for CART (or its FOSS cousin: RPART) and RF.
An answer to your actual question is covered clearly, IMHO, in Ishwaran et al.'s 2008 paper entitled Random Survival Forests. They provide the following plausible explanation for why RF does not handle missing data in the same way as CART or similar single decision tree classifiers:
"Although surrogate splitting works well for trees, the method may not
be well suited for forests. Speed is one issue. Finding a surrogate
split is computationally intensive and may become infeasible when
growing a large number of trees, especially for fully saturated trees
used by forests. Further, surrogate splits may not even be meaningful
in a forest paradigm. RF randomly selects variables when splitting a
node and, as such, variables within a node may be uncorrelated, and a
reasonable surrogate split may not exist. Another concern is that
surrogate splitting alters the interpretation of a variable, which
affects measures such as [Variable Importance].
For these reasons, a different strategy is required for RF."
This is an aside, but for me, this calls into question those who claim that RF uses an ensemble of CART models. I've seen this claim made in many articles, but I've never seen such statements sourced to any authoritative text on RF. For one, the trees in a RF are grown without pruning, which is usually not the standard approach when building a CART model. Another reason would be the one you allude to in your question: CART and other ensembles of decision trees handle missing values, whereas [the original] RF does not, at least not internally like CART does.
With those caveats in mind, I think you could say that RF uses an ensemble of CART-like decision trees (i.e., a bunch of unpruned trees, grown to their maximum extent, without the ability to handle missing data through surrogate splitting). Perhaps this is one of those punctilious semantic differences, but it's one I think worth noting.
EDIT: On my side note, which is unrelated to the actual question asked, I stated that "I've never seen such statements sourced to any authoritative text on RF". Turns out Breiman DID specifically state that CART decision trees are used in the original RF algorithm:
"The simplest random forest with random features is formed by selecting
at random, at each node, a small group of input variables to split on.
Grow the tree using CART methodology to maximum size and do not prune." [My emphasis]
Source: p.9 of Random Forests. Breiman (2001)
However, I still stand (albeit more precariously) on the notion that these are CART-like decision trees in that they are grown without pruning, whereas a CART is normally never run in this configuration as it will almost certainly over-fit your data (hence the pruning in the first place). | Why doesn't Random Forest handle missing values in predictors? | "What are [the] theoretical reasons [for RF] to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?"
RF does handle missi | Why doesn't Random Forest handle missing values in predictors?
"What are [the] theoretical reasons [for RF] to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?"
RF does handle missing values, just not in the same way that CART and other similar decision tree algorithms do. User777 correctly describes the two methods used by RF to handle missing data (median imputation and/or a proximity-based measure), whereas Frank Harrell correctly describes how missing values are handled in CART (surrogate splits). For more info, see links on missing data handling for CART (or its FOSS cousin: RPART) and RF.
An answer to your actual question is covered clearly, IMHO, in Ishwaran et al.'s 2008 paper entitled Random Survival Forests. They provide the following plausible explanation for why RF does not handle missing data in the same way as CART or similar single decision tree classifiers:
"Although surrogate splitting works well for trees, the method may not
be well suited for forests. Speed is one issue. Finding a surrogate
split is computationally intensive and may become infeasible when
growing a large number of trees, especially for fully saturated trees
used by forests. Further, surrogate splits may not even be meaningful
in a forest paradigm. RF randomly selects variables when splitting a
node and, as such, variables within a node may be uncorrelated, and a
reasonable surrogate split may not exist. Another concern is that
surrogate splitting alters the interpretation of a variable, which
affects measures such as [Variable Importance].
For these reasons, a different strategy is required for RF."
This is an aside, but for me, this calls into question those who claim that RF uses an ensemble of CART models. I've seen this claim made in many articles, but I've never seen such statements sourced to any authoritative text on RF. For one, the trees in a RF are grown without pruning, which is usually not the standard approach when building a CART model. Another reason would be the one you allude to in your question: CART and other ensembles of decision trees handle missing values, whereas [the original] RF does not, at least not internally like CART does.
With those caveats in mind, I think you could say that RF uses an ensemble of CART-like decision trees (i.e., a bunch of unpruned trees, grown to their maximum extent, without the ability to handle missing data through surrogate splitting). Perhaps this is one of those punctilious semantic differences, but it's one I think worth noting.
EDIT: On my side note, which is unrelated to the actual question asked, I stated that "I've never seen such statements sourced to any authoritative text on RF". Turns out Breiman DID specifically state that CART decision trees are used in the original RF algorithm:
"The simplest random forest with random features is formed by selecting
at random, at each node, a small group of input variables to split on.
Grow the tree using CART methodology to maximum size and do not prune." [My emphasis]
Source: p.9 of Random Forests. Breiman (2001)
However, I still stand (albeit more precariously) on the notion that these are CART-like decision trees in that they are grown without pruning, whereas a CART is normally never run in this configuration as it will almost certainly over-fit your data (hence the pruning in the first place). | Why doesn't Random Forest handle missing values in predictors?
"What are [the] theoretical reasons [for RF] to not handle missing values? Gradient boosting machines, regression trees handle missing values. Why doesn't Random Forest do that?"
RF does handle missi |
3,423 | Why doesn't Random Forest handle missing values in predictors? | Random forest does handle missing data and there are two distinct ways
it does so:
1) Without imputation of missing data, but providing inference.
2) Imputing the data. Imputed data is then used for inference.
Both methods are implemented in my R-package randomForestSRC
(co-written with Udaya Kogalur). First, it is important to remember
that because random forests employ random feature selection,
traditional missing data methods used by single trees (CART and the
like) do not apply. This point was made in Ishwaran et al. (2008), "Random Survival Forests", Annals of Applied Statistics, 2, 3, and nicely articulated by one of the
commenters.
Method (1) is an "on the fly imputation" (OTFI) method. Prior to
splitting a node, missing data for a variable is imputed by randomly
drawing values from non-missing in-bag data. The purpose of this
imputed data is to make it possible to assign cases to daughter nodes
in the event the node is split on a variable with missing data.
Imputed data is however not used to calculate the split-statistic
which uses non-missing data only. Following a node split, imputed
data are reset to missing and the process is repeated until terminal
nodes are reached. OTFI preserves the integrity of out-of-bag data
and therefore performance values such as variable importance (VIMP)
remain unbiased. The OTFI algorithm was described in Ishwaran et al
(2008) and implemented in the retired randomSurvivalForest package,
and has now been extended to randomForestSRC to apply to all families
(i.e. not just survival).
Method (2) is implemented using the "impute" function in
randomForestSRC. Unsupervised, randomized, and multivariate splitting
methods for imputing data are available. For example, multivariate
splitting generalizes the highly successful missForest imputation
method (Stekhoven & Bühlmann (2012), "MissForest—non-parametric missing value imputation for mixed-type data", Bioinformatics, 28, 1). Calling the impute function
with missing data will return an imputed data frame which can be fit
using the primary forest function "rfsrc".
A detailed comparison of the different forest missing data algorithms
implemented using "impute" was described in a recent paper with Fei
Tang "Random forest missing data algorithms", 2017. I recommend
consulting the help files of "rfsrc" and "impute" from randomForestSRC
for more details about imputation and OTFI. | Why doesn't Random Forest handle missing values in predictors? | Random forest does handle missing data and there are two distinct ways
it does so:
1) Without imputation of missing data, but providing inference.
2) Imputing the data. Imputed data is then used for | Why doesn't Random Forest handle missing values in predictors?
Random forest does handle missing data and there are two distinct ways
it does so:
1) Without imputation of missing data, but providing inference.
2) Imputing the data. Imputed data is then used for inference.
Both methods are implemented in my R-package randomForestSRC
(co-written with Udaya Kogalur). First, it is important to remember
that because random forests employ random feature selection,
traditional missing data methods used by single trees (CART and the
like) do not apply. This point was made in Ishwaran et al. (2008), "Random Survival Forests", Annals of Applied Statistics, 2, 3, and nicely articulated by one of the
commenters.
Method (1) is an "on the fly imputation" (OTFI) method. Prior to
splitting a node, missing data for a variable is imputed by randomly
drawing values from non-missing in-bag data. The purpose of this
imputed data is to make it possible to assign cases to daughter nodes
in the event the node is split on a variable with missing data.
Imputed data is however not used to calculate the split-statistic
which uses non-missing data only. Following a node split, imputed
data are reset to missing and the process is repeated until terminal
nodes are reached. OTFI preserves the integrity of out-of-bag data
and therefore performance values such as variable importance (VIMP)
remain unbiased. The OTFI algorithm was described in Ishwaran et al
(2008) and implemented in the retired randomSurvivalForest package,
and has now been extended to randomForestSRC to apply to all families
(i.e. not just survival).
Method (2) is implemented using the "impute" function in
randomForestSRC. Unsupervised, randomized, and multivariate splitting
methods for imputing data are available. For example, multivariate
splitting generalizes the highly successful missForest imputation
method (Stekhoven & Bühlmann (2012), "MissForest—non-parametric missing value imputation for mixed-type data", Bioinformatics, 28, 1). Calling the impute function
with missing data will return an imputed data frame which can be fit
using the primary forest function "rfsrc".
A detailed comparison of the different forest missing data algorithms
implemented using "impute" was described in a recent paper with Fei
Tang "Random forest missing data algorithms", 2017. I recommend
consulting the help files of "rfsrc" and "impute" from randomForestSRC
for more details about imputation and OTFI. | Why doesn't Random Forest handle missing values in predictors?
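As a minimal sketch of the two approaches, using R's built-in airquality data, which has missing values in the predictors Ozone and Solar.R; the argument names follow the randomForestSRC documentation as I recall it, so please check ?rfsrc and ?impute before relying on them:
library(randomForestSRC)
data(airquality)
# Method (1): on-the-fly imputation while growing the forest
fit.otfi <- rfsrc(Temp ~ ., data = airquality, na.action = "na.impute")
# Method (2): impute the data first, then fit the forest on the completed frame
aq.complete <- impute(Temp ~ ., data = airquality)
fit.imputed <- rfsrc(Temp ~ ., data = aq.complete)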
Random forest does handle missing data and there are two distinct ways
it does so:
1) Without imputation of missing data, but providing inference.
2) Imputing the data. Imputed data is then used for |
3,424 | Why doesn't Random Forest handle missing values in predictors? | Recursive partitioning uses surrogate splits based on non-missing predictors that are correlated with the predictor possessing the missing value for an observation. It would seem possible in theory for random forests to be implemented that use the same idea. I don't know if any random forest software has done so. | Why doesn't Random Forest handle missing values in predictors? | Recursive partitioning uses surrogate splits based on non-missing predictors that are correlated with the predictor possessing the missing value for an observation. It would seem possible in theory f | Why doesn't Random Forest handle missing values in predictors?
Recursive partitioning uses surrogate splits based on non-missing predictors that are correlated with the predictor possessing the missing value for an observation. It would seem possible in theory for random forests to be implemented that use the same idea. I don't know if any random forest software has done so. | Why doesn't Random Forest handle missing values in predictors?
Recursive partitioning uses surrogate splits based on non-missing predictors that are correlated with the predictor possessing the missing value for an observation. It would seem possible in theory f |
3,425 | Why doesn't Random Forest handle missing values in predictors? | Random Forest has two methods for handling missing values, according to Leo Breiman and Adele Cutler, who invented it.
The first is quick and dirty: it just fills in the median value for continuous variables, or the most common non-missing value by class.
The second method fills in missing values, then runs RF, then for missing continuous values, RF replaces them with the proximity-weighted average of the non-missing values. Then this process is repeated several times. Then the model is trained a final time using the RF-imputed data set. | Why doesn't Random Forest handle missing values in predictors? | Random Forest has two methods for handling missing values, according to Leo Breiman and Adele Cutler, who invented it.
The first is quick and dirty: it just fills in the median value for continuous va | Why doesn't Random Forest handle missing values in predictors?
Random Forest has two methods for handling missing values, according to Leo Breiman and Adele Cutler, who invented it.
The first is quick and dirty: it just fills in the median value for continuous variables, or the most common non-missing value by class.
The second method fills in missing values, then runs RF, then for missing continuous values, RF replaces them with the proximity-weighted average of the non-missing values. Then this process is repeated several times. Then the model is trained a final time using the RF-imputed data set. | Why doesn't Random Forest handle missing values in predictors?
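In the R randomForest package these two methods correspond, as far as I know, to na.roughfix() (the quick median/mode fill) and rfImpute() (the iterative proximity-weighted imputation); a minimal sketch using the airquality data, which has missing predictor values:
library(randomForest)
data(airquality)
# Method 1: quick and dirty median/mode fill, then fit the forest
aq.rough <- na.roughfix(airquality)
fit1 <- randomForest(Temp ~ ., data = aq.rough)
# Method 2: iterative proximity-weighted imputation, then a final fit
aq.imp <- rfImpute(Temp ~ ., data = airquality)
fit2 <- randomForest(Temp ~ ., data = aq.imp)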
Random Forest has two methods for handling missing values, according to Leo Breiman and Adele Cutler, who invented it.
The first is quick and dirty: it just fills in the median value for continuous va |
3,426 | Why doesn't Random Forest handle missing values in predictors? | For CART, you can apply the missing-in-attributes (MIA) approach. That is, for categorical predictors, you code missing as a separate category. For numerical predictors, you create two new variables for every variable with missings: one where you code missings as -Inf and one where you code missings as +Inf. Then you apply a random forest function as usual to your data.
Advantages of MIA:
Computationally cheap
Does not yield multiple datasets and thereby models, as multiple imputation does (the imputation-of-missing-data literature generally agrees that one imputed dataset is not enough)
Does not require you to choose a statistical method and/or model for imputing the data.
If values are MNAR, and missingness is in fact predictive of the outcome, MIA may outperform multiple imputation
Functions ctree() and cforest() from package partykit allow for applying MIA by passing ctree_control(MIA = TRUE) to their control arguments.
Jerome Friedman's RuleFit program appears to use MIA for dealing with missings, see https://statweb.stanford.edu/~jhf/r-rulefit/rulefit3/RuleFit_help.html#xmiss.
A description of the MIA approach can be found in Twala et al. (2008):
Twala, B.E.T.H., Jones, M.C., and Hand, D.J. (2008). Good methods for coping with missing data in decision trees. Pattern Recognition Letters, 29(7), 950-956. | Why doesn't Random Forest handle missing values in predictors? | For CART, you can apply the missing-in-attributes (MIA) approach. That is, for categorical predictors, you code missing as a separate category. For numerical predictors, you create two new variables f | Why doesn't Random Forest handle missing values in predictors?
For CART, you can apply the missing-in-attributes (MIA) approach. That is, for categorical predictors, you code missing as a separate category. For numerical predictors, you create two new variables for every variable with missings: one where you code missings as -Inf and one where you code missings as +Inf. Then you apply a random forest function as usual to your data.
Advantages of MIA:
Computationally cheap
Does not yield multiple datasets and thereby models, as multiple imputation does (the imputation-of-missing-data literature generally agrees that one imputed dataset is not enough)
Does not require you to choose a statistical method and/or model for imputing the data.
If values are MNAR, and missingness is in fact predictive of the outcome, MIA may outperform multiple imputation
Functions ctree() and cforest() from package partykit allow for applying MIA by passing ctree_control(MIA = TRUE) to their control arguments.
Jerome Friedman's RuleFit program appears to use MIA for dealing with missings, see https://statweb.stanford.edu/~jhf/r-rulefit/rulefit3/RuleFit_help.html#xmiss.
A description of the MIA approach can be found in Twala et al. (2008):
Twala, B.E.T.H., Jones, M.C., and Hand, D.J. (2008). Good methods for coping with missing data in decision trees. Pattern Recognition Letters, 29(7), 950-956. | Why doesn't Random Forest handle missing values in predictors?
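As an illustration of the recoding step (my own sketch with made-up variable names, not code from the Twala et al. paper), MIA for a single numeric predictor can be done like this before handing the data to any random forest implementation:
mia_encode <- function(x) {
  # Two copies of the variable: missings sent to -Inf in one and to +Inf in the other
  data.frame(x_low  = ifelse(is.na(x), -Inf, x),
             x_high = ifelse(is.na(x),  Inf, x))
}
mia_encode(c(1.2, NA, 3.4))   # toy example with one missing value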
For CART, you can apply the missing-in-attributes (MIA) approach. That is, for categorical predictors, you code missing as a separate category. For numerical predictors, you create two new variables f |
3,427 | Why doesn't Random Forest handle missing values in predictors? | Instead of using median values, etc., I would highly recommend looking at the missRanger package (currently in development on Github) or the R package missForest. Both of these packages use random forests to first impute your data using a method similar to multiple imputation via chained equations (MICE). This would be the appropriate imputation method to use as it corresponds closely to your actual analysis model. You can then use all your data without having to worry about dropping individual rows due to missing observations. In addition, the imputed values will be far more realistic than simply selecting medians or modes.
You can use just one filled-in imputed data set for your analyses, but the best way to incorporate uncertainty over missing values is to run multiple runs of these imputation methods, and then estimate your model on each of the resulting datasets (i.e., multiple imputation) and then combine the estimates using Rubin's rules (see R package mitools). | Why doesn't Random Forest handle missing values in predictors? | Instead of using median values, etc., I would highly recommend looking at the missRanger package (currently in development on Github) or the R package missForest). Both of these packages use random fo | Why doesn't Random Forest handle missing values in predictors?
Instead of using median values, etc., I would highly recommend looking at the missRanger package (currently in development on Github) or the R package missForest. Both of these packages use random forests to first impute your data using a method similar to multiple imputation via chained equations (MICE). This would be the appropriate imputation method to use as it corresponds closely to your actual analysis model. You can then use all your data without having to worry about dropping individual rows due to missing observations. In addition, the imputed values will be far more realistic than simply selecting medians or modes.
You can use just one filled-in imputed data set for your analyses, but the best way to incorporate uncertainty over missing values is to run multiple runs of these imputation methods, and then estimate your model on each of the resulting datasets (i.e., multiple imputation) and then combine the estimates using Rubin's rules (see R package mitools). | Why doesn't Random Forest handle missing values in predictors?
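A minimal sketch of the single-imputation route with missForest, using the airquality data with its missing values (see the package documentation for the full argument list):
library(missForest)
data(airquality)
imp <- missForest(airquality)   # random-forest-based imputation
aq.complete <- imp$ximp         # the completed data frame
imp$OOBerror                    # out-of-bag estimate of the imputation error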
Instead of using median values, etc., I would highly recommend looking at the missRanger package (currently in development on Github) or the R package missForest). Both of these packages use random fo |
3,428 | Why on average does each bootstrap sample contain roughly two thirds of observations? | Essentially, the issue is to show that $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
(and of course, $e^{-1} =1/e \approx 1/3$, at least very roughly).
It doesn't work at very small $n$ -- e.g. at $n=2$, $(1- 1/n)^n=\frac{1}{4}$. It passes $\frac{1}{3}$ at $n=6$, passes $0.35$ at $n=11$, and $0.366$ by $n=99$. Once you go beyond $n=11$, $\frac{1}{e}$ is a better approximation than $\frac{1}{3}$.
The grey dashed line is at $\frac{1}{3}$; the red and grey line is at $\frac{1}{e}$.
Rather than show a formal derivation (which can easily be found), I'm going to give an outline (that is an intuitive, handwavy argument) of why a (slightly) more general result holds:
$$e^x = \lim_{n\to \infty} \left(1 + x/n \right)^n$$
(Many people take this to be the definition of $\exp(x)$, but you can prove it from simpler results such as defining $e$ as $\lim_{n\to \infty} \left(1 + 1/n \right)^n$.)
Fact 1: $\exp(x/n)^n=\exp(x)\quad$ This follows from basic results about powers and exponentiation
Fact 2: When $n$ is large, $\exp(x/n) \approx 1+x/n\quad$ This follows from the series expansion for $e^x$.
(I can give fuller arguments for each of these but I assume you already know them)
Substitute (2) in (1). Done. (Making this into a more formal argument would take some work, because you'd have to show that the remaining terms in Fact 2 don't become large enough to cause a problem when taken to the power $n$. But this is intuition rather than formal proof.)
[Alternatively, just take the Taylor series for $\exp(x/n)$ to first order. A second easy approach is to take the binomial expansion of $\left(1 + x/n \right) ^n$ and take the limit term-by-term, showing it gives the terms in the series for $\exp(x/n)$.]
So if $e^x = \lim_{n\to \infty} \left(1 + x/n \right) ^n$, just substitute $x=-1$.
Immediately, we have the result at the top of this answer, $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
As gung points out in comments, the result in your question is the origin of the 632 bootstrap rule
e.g. see
Efron, B. and R. Tibshirani (1997),
"Improvements on Cross-Validation: The .632+ Bootstrap Method,"
Journal of the American Statistical Association Vol. 92, No. 438. (Jun), pp. 548-560 | Why on average does each bootstrap sample contain roughly two thirds of observations? | Essentially, the issue is to show that $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
(and of course, $e^{-1} =1/e \approx 1/3$, at least very roughly).
It doesn't work at very small $n$ -- e.g. at $n=2$, $(1- | Why on average does each bootstrap sample contain roughly two thirds of observations?
Essentially, the issue is to show that $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
(and of course, $e^{-1} =1/e \approx 1/3$, at least very roughly).
It doesn't work at very small $n$ -- e.g. at $n=2$, $(1- 1/n)^n=\frac{1}{4}$. It passes $\frac{1}{3}$ at $n=6$, passes $0.35$ at $n=11$, and $0.366$ by $n=99$. Once you go beyond $n=11$, $\frac{1}{e}$ is a better approximation than $\frac{1}{3}$.
The grey dashed line is at $\frac{1}{3}$; the red and grey line is at $\frac{1}{e}$.
Rather than show a formal derivation (which can easily be found), I'm going to give an outline (that is an intuitive, handwavy argument) of why a (slightly) more general result holds:
$$e^x = \lim_{n\to \infty} \left(1 + x/n \right)^n$$
(Many people take this to be the definition of $\exp(x)$, but you can prove it from simpler results such as defining $e$ as $\lim_{n\to \infty} \left(1 + 1/n \right)^n$.)
Fact 1: $\exp(x/n)^n=\exp(x)\quad$ This follows from basic results about powers and exponentiation
Fact 2: When $n$ is large, $\exp(x/n) \approx 1+x/n\quad$ This follows from the series expansion for $e^x$.
(I can give fuller arguments for each of these but I assume you already know them)
Substitute (2) in (1). Done. (Making this into a more formal argument would take some work, because you'd have to show that the remaining terms in Fact 2 don't become large enough to cause a problem when taken to the power $n$. But this is intuition rather than formal proof.)
[Alternatively, just take the Taylor series for $\exp(x/n)$ to first order. A second easy approach is to take the binomial expansion of $\left(1 + x/n \right) ^n$ and take the limit term-by-term, showing it gives the terms in the series for $\exp(x/n)$.]
So if $e^x = \lim_{n\to \infty} \left(1 + x/n \right) ^n$, just substitute $x=-1$.
Immediately, we have the result at the top of this answer, $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
As gung points out in comments, the result in your question is the origin of the 632 bootstrap rule
e.g. see
Efron, B. and R. Tibshirani (1997),
"Improvements on Cross-Validation: The .632+ Bootstrap Method,"
Journal of the American Statistical Association Vol. 92, No. 438. (Jun), pp. 548-560 | Why on average does each bootstrap sample contain roughly two thirds of observations?
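A quick numerical check of the values quoted above:
n <- c(2, 6, 11, 99, 1e6)
round((1 - 1/n)^n, 4)   # 0.2500 0.3349 0.3505 0.3660 0.3679
round(exp(-1), 4)       # 0.3679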
Essentially, the issue is to show that $\lim_{n\to\infty}(1- 1/n)^n=e^{-1}$
(and of course, $e^{-1} =1/e \approx 1/3$, at least very roughly).
It doesn't work at very small $n$ -- e.g. at $n=2$, $(1- |
3,429 | Why on average does each bootstrap sample contain roughly two thirds of observations? | More precisely, each bootstrap sample (or bagged tree) will contain $1-\frac{1}{e} \approx 0.632$ of the sample.
Let's go over how the bootstrap works. We have an original sample $x_1, x_2, \ldots x_n$ with $n$ items in it. We draw items with replacement from this original set until we have another set of size $n$.
From that, it follows that the probability of choosing any one item (say, $x_1$) on the first draw is $\frac{1}{n}$. Therefore, the probability of not choosing that item is $1 - \frac{1}{n}$. That's just for the first draw; there are a total of $n$ draws, all of which are independent, so the probability of never choosing this item on any of the draws is $(1-\frac{1}{n})^n$.
Now, let's think about what happens when $n$ gets larger and larger. We can take the limit as $n$ goes towards infinity, using the usual calculus tricks (or Wolfram Alpha):
$$ \lim_{n \rightarrow \infty} \big(1-\frac{1}{n}\big)^n = \frac{1}{e} \approx 0.368$$
That's the probability of an item not being chosen. Subtract it from one to find the probability of the item being chosen, which gives you 0.632. | Why on average does each bootstrap sample contain roughly two thirds of observations? | More precisely, each bootstrap sample (or bagged tree) will contain $1-\frac{1}{e} \approx 0.632$ of the sample.
Let's go over how the bootstrap works. We have an original sample $x_1, x_2, \ldots x_ | Why on average does each bootstrap sample contain roughly two thirds of observations?
More precisely, each bootstrap sample (or bagged tree) will contain $1-\frac{1}{e} \approx 0.632$ of the sample.
Let's go over how the bootstrap works. We have an original sample $x_1, x_2, \ldots x_n$ with $n$ items in it. We draw items with replacement from this original set until we have another set of size $n$.
From that, it follows that the probability of choosing any one item (say, $x_1$) on the first draw is $\frac{1}{n}$. Therefore, the probability of not choosing that item is $1 - \frac{1}{n}$. That's just for the first draw; there are a total of $n$ draws, all of which are independent, so the probability of never choosing this item on any of the draws is $(1-\frac{1}{n})^n$.
Now, let's think about what happens when $n$ gets larger and larger. We can take the limit as $n$ goes towards infinity, using the usual calculus tricks (or Wolfram Alpha):
$$ \lim_{n \rightarrow \infty} \big(1-\frac{1}{n}\big)^n = \frac{1}{e} \approx 0.368$$
That's the probability of an item not being chosen. Subtract it from one to find the probability of the item being chosen, which gives you 0.632. | Why on average does each bootstrap sample contain roughly two thirds of observations?
More precisely, each bootstrap sample (or bagged tree) will contain $1-\frac{1}{e} \approx 0.632$ of the sample.
Let's go over how the bootstrap works. We have an original sample $x_1, x_2, \ldots x_ |
3,430 | Why on average does each bootstrap sample contain roughly two thirds of observations? | Sampling with replacement can be modeled as a sequence of binomial trials where "success" is an instance being selected. For an original dataset of $n$ instances, the probability of "success" is $1/n$, and the probability of "failure" is $(n-1)/n$. For a sample size of $b$, the probability of selecting an instance exactly $x$ times is given by the binomial distribution:
$$ P(x,b,n) = \bigl(\frac{1}{n}\bigr)^{x} \bigl(\frac{n-1}{n}\bigr)^{b-x} {b \choose x}$$
In the specific case of a bootstrap sample, the sample size $b$ equals the number of instances $n$. Letting $n$ approach infinity, we get:
$$ \lim_{n \rightarrow \infty} \bigl(\frac{1}{n}\bigr)^{x} \bigl(\frac{n-1}{n}\bigr)^{n-x} {n \choose x} = \frac{1}{ex!}$$
If our original dataset is big, we can use this formula to compute the probability that an instance is selected exactly $x$ times in a bootstrap sample. For $x = 0$, the probability is $1/e$, or roughly $0.368$. The probability of an instance being sampled at least once is thus $1 - 0.368 = 0.632$.
Needless to say, I painstakingly derived this using pen and paper, and did not even consider using Wolfram Alpha. | Why on average does each bootstrap sample contain roughly two thirds of observations? | Sampling with replacement can be modeled as a sequence of binomial trials where "success" is an instance being selected. For an original dataset of $n$ instances, the probability of "success" is $1/n$ | Why on average does each bootstrap sample contain roughly two thirds of observations?
Sampling with replacement can be modeled as a sequence of binomial trials where "success" is an instance being selected. For an original dataset of $n$ instances, the probability of "success" is $1/n$, and the probability of "failure" is $(n-1)/n$. For a sample size of $b$, the probability of selecting an instance exactly $x$ times is given by the binomial distribution:
$$ P(x,b,n) = \bigl(\frac{1}{n}\bigr)^{x} \bigl(\frac{n-1}{n}\bigr)^{b-x} {b \choose x}$$
In the specific case of a bootstrap sample, the sample size $b$ equals the number of instances $n$. Letting $n$ approach infinity, we get:
$$ \lim_{n \rightarrow \infty} \bigl(\frac{1}{n}\bigr)^{x} \bigl(\frac{n-1}{n}\bigr)^{n-x} {n \choose x} = \frac{1}{ex!}$$
If our original dataset is big, we can use this formula to compute the probability that an instance is selected exactly $x$ times in a bootstrap sample. For $x = 0$, the probability is $1/e$, or roughly $0.368$. The probability of an instance being sampled at least once is thus $1 - 0.368 = 0.632$.
Needless to say, I painstakingly derived this using pen and paper, and did not even consider using Wolfram Alpha. | Why on average does each bootstrap sample contain roughly two thirds of observations?
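A small numerical check of the limiting formula (for large $n$ the binomial probabilities are close to $1/(e\,x!)$):
n <- 1e6
x <- 0:4
round(dbinom(x, size = n, prob = 1/n), 5)    # binomial probabilities
round(1 / (exp(1) * factorial(x)), 5)        # 1/(e * x!)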
Sampling with replacement can be modeled as a sequence of binomial trials where "success" is an instance being selected. For an original dataset of $n$ instances, the probability of "success" is $1/n$ |
3,431 | Why on average does each bootstrap sample contain roughly two thirds of observations? | Just adding to @retsreg's answer this can also be demonstrated quite easily via numerical simulation in R:
N <- 1e7 # number of instances and sample size
bootstrap <- sample(c(1:N), N, replace = TRUE)
round((length(unique(bootstrap))) / N, 3)
## [1] 0.632 | Why on average does each bootstrap sample contain roughly two thirds of observations? | Just adding to @retsreg's answer this can also be demonstrated quite easily via numerical simulation in R:
N <- 1e7 # number of instances and sample size
bootstrap <- sample(c(1:N), N, replace = TRUE) | Why on average does each bootstrap sample contain roughly two thirds of observations?
Just adding to @retsreg's answer this can also be demonstrated quite easily via numerical simulation in R:
N <- 1e7 # number of instances and sample size
bootstrap <- sample(c(1:N), N, replace = TRUE)
round((length(unique(bootstrap))) / N, 3)
## [1] 0.632 | Why on average does each bootstrap sample contain roughly two thirds of observations?
Just adding to @retsreg's answer this can also be demonstrated quite easily via numerical simulation in R:
N <- 1e7 # number of instances and sample size
bootstrap <- sample(c(1:N), N, replace = TRUE) |
3,432 | Why on average does each bootstrap sample contain roughly two thirds of observations? | If you want to look deeper into the sample coverage of the bootstrap, it is worth noting that simple-random-sampling with replacement gives an "occupancy number" that follows the classical occupancy distribution (see e.g., O'Neill 2019). Suppose we have an original sample containing $n$ data points and we take a bootstrap resample, also with $n$ points. Let $K_n$ denote the number of data points in the original sample that appear in the resample. It is well-known that this quantity follows the classical occupancy distribution, with mass function:
$$\mathbb{P}(K_n=k) = \text{Occ}(k|n,n) = \frac{(n)_k \cdot S(n,k)}{n^n}.$$
(The values $(n)_k = \prod_{i=1}^k (n-i+1)$ are the falling factorials and the values $S(n,k)$ are the Stirling numbers of the second kind.) The mean and variance of this occupancy number are:
$$\begin{align}
\mathbb{E}(K_n) &= n \bigg[ 1 - \bigg( 1-\frac{1}{n} \bigg)^n \bigg], \\[6pt]
\mathbb{V}(K_n) &= n \bigg[ (n-1) \bigg(1-\frac{2}{n} \bigg)^n + \bigg(1-\frac{1}{n} \bigg)^n - n \bigg(1-\frac{1}{n} \bigg)^{2n} \bigg]. \\[6pt]
\end{align}$$
Taking $n \rightarrow \infty$ we get the asymptotic equivalence:
$$\mathbb{E}(K_n) \sim n \bigg( 1-\frac{1}{e} \bigg)
\quad \quad \quad
\mathbb{V}(K_n) \sim \frac{n}{e} \bigg( 1-\frac{2}{e} \bigg).$$
Consequently, as $n \rightarrow \infty$ the proportion of the original data points that are covered by the resample converges to $K_n/n \rightarrow 1-1/e \approx 0.632$. Whilst this is a slightly more complicated presentation of the issue, examination of the classical occupancy distribution allows you to fully describe the stochastic behaviour of the sample coverage. | Why on average does each bootstrap sample contain roughly two thirds of observations? | If you want to look deeper into the sample coverage of the bootstrap, it is worth noting that simple-random-sampling with replacement gives an "occupancy number" that follows the classical occupancy d | Why on average does each bootstrap sample contain roughly two thirds of observations?
If you want to look deeper into the sample coverage of the bootstrap, it is worth noting that simple-random-sampling with replacement gives an "occupancy number" that follows the classical occupancy distribution (see e.g., O'Neill 2019). Suppose we have an original sample containing $n$ data points and we take a bootstrap resample, also with $n$ points. Let $K_n$ denote the number of data points in the original sample that appear in the resample. It is well-known that this quantity follows the classical occupancy distribution, with mass function:
$$\mathbb{P}(K_n=k) = \text{Occ}(k|n,n) = \frac{(n)_k \cdot S(n,k)}{n^n}.$$
(The values $(n)_k = \prod_{i=1}^k (n-i+1)$ are the falling factorials and the values $S(n,k)$ are the Stirling numbers of the second kind.) The mean and variance of this occupancy number are:
$$\begin{align}
\mathbb{E}(K_n) &= n \bigg[ 1 - \bigg( 1-\frac{1}{n} \bigg)^n \bigg], \\[6pt]
\mathbb{V}(K_n) &= n \bigg[ (n-1) \bigg(1-\frac{2}{n} \bigg)^n + \bigg(1-\frac{1}{n} \bigg)^n - n \bigg(1-\frac{1}{n} \bigg)^{2n} \bigg]. \\[6pt]
\end{align}$$
Taking $n \rightarrow \infty$ we get the asymptotic equivalence:
$$\mathbb{E}(K_n) \sim n \bigg( 1-\frac{1}{e} \bigg)
\quad \quad \quad
\mathbb{V}(K_n) \sim \frac{n}{e} \bigg( 1-\frac{2}{e} \bigg).$$
Consequently, as $n \rightarrow \infty$ the proportion of the original data points that are covered by the resample converges to $K_n/n \rightarrow 1-1/e \approx 0.632$. Whilst this is a slightly more complicated presentation of the issue, examination of the classical occupancy distribution allows you to fully describe the stochastic behaviour of the sample coverage. | Why on average does each bootstrap sample contain roughly two thirds of observations?
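A short simulation agreeing with the exact mean and variance formulas above (here with $n = 100$):
n <- 100
K <- replicate(1e4, length(unique(sample(n, n, replace = TRUE))))
c(mean(K), n * (1 - (1 - 1/n)^n))   # simulated vs. exact mean, about 63.4
c(var(K), n * ((n - 1) * (1 - 2/n)^n + (1 - 1/n)^n - n * (1 - 1/n)^(2 * n)))   # about 9.7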
If you want to look deeper into the sample coverage of the bootstrap, it is worth noting that simple-random-sampling with replacement gives an "occupancy number" that follows the classical occupancy d |
3,433 | Why on average does each bootstrap sample contain roughly two thirds of observations? | This can be easily seen by counting. How many total possible samples? n^n. How many NOT containing a specific value? (n-1)^n. Probability of a sample not having a specific value - (1-1/n)^n, which is about 1/3 in the limit. | Why on average does each bootstrap sample contain roughly two thirds of observations? | This can be easily seen by counting. How many total possible samples? n^n. How many NOT containing a specific value? (n-1)^n. Probability of a sample not having a specific value - (1-1/n)^n, which is | Why on average does each bootstrap sample contain roughly two thirds of observations?
This can be easily seen by counting. How many total possible samples? n^n. How many NOT containing a specific value? (n-1)^n. Probability of a sample not having a specific value - (1-1/n)^n, which is about 1/3 in the limit. | Why on average does each bootstrap sample contain roughly two thirds of observations?
This can be easily seen by counting. How many total possible samples? n^n. How many NOT containing a specific value? (n-1)^n. Probability of a sample not having a specific value - (1-1/n)^n, which is |
3,434 | What is the difference in Bayesian estimate and maximum likelihood estimate? | It is a very broad question and my answer here only begins to scratch the surface a bit. I will use Bayes' rule to explain the concepts.
Let’s assume that a set of probability distribution parameters, $\theta$, best explains the dataset $D$. We may wish to estimate the parameters $\theta$ with the help of the Bayes’ Rule:
$$p(\theta|D)=\frac{p(D|\theta) * p(\theta)}{p(D)}$$
$$posterior = \frac{likelihood * prior}{evidence}$$
The explanations follow:
Maximum Likelihood Estimate
With MLE, we seek a point value for $\theta$ which maximizes the likelihood, $p(D|\theta)$, shown in the equation(s) above. We can denote this value as $\hat{\theta}$. In MLE, $\hat{\theta}$ is a point estimate, not a random variable.
In other words, in the equation above, MLE treats the term $\frac{p(\theta)}{p(D)}$ as a constant and does NOT allow us to inject our prior beliefs, $p(\theta)$, about the likely values for $\theta$ in the estimation calculations.
Bayesian Estimate
Bayesian estimation, by contrast, fully calculates (or at times approximates) the posterior distribution $p(\theta|D)$. Bayesian inference treats $\theta$ as a random variable. In Bayesian estimation, we put in probability density functions and get out probability density functions, rather than a single point as in MLE.
Of all the $\theta$ values made possible by the output distribution $p(\theta|D)$, it is our job to select a value that we consider best in some sense. For example, we may choose the expected value of $\theta$ assuming its variance is small enough. The variance that we can calculate for the parameter $\theta$ from its posterior distribution allows us to express our confidence in any specific value we may use as an estimate. If the variance is too large, we may declare that there does not exist a good estimate for $\theta$.
As a trade-off, Bayesian estimation is made complex by the fact that we now have to deal with the denominator in the Bayes' rule, i.e. $evidence$. Here evidence -or probability of evidence- is represented by:
$$p(D) = \int_{\theta} p(D|\theta) * p(\theta) d\theta$$
This leads to the concept of 'conjugate priors' in Bayesian estimation. For a given likelihood function, if we have a choice regarding how we express our prior beliefs, we must use that form which allows us to carry out the integration shown above. The idea of conjugate priors and how they are practically implemented are explained quite well in this post by COOlSerdash. | What is the difference in Bayesian estimate and maximum likelihood estimate? | It is a very broad question and my answer here only begins to scratch the surface a bit. I will use the Bayes's rule to explain the concepts.
Let’s assume that a set of probability distribution param | What is the difference in Bayesian estimate and maximum likelihood estimate?
It is a very broad question and my answer here only begins to scratch the surface a bit. I will use Bayes' rule to explain the concepts.
Let’s assume that a set of probability distribution parameters, $\theta$, best explains the dataset $D$. We may wish to estimate the parameters $\theta$ with the help of the Bayes’ Rule:
$$p(\theta|D)=\frac{p(D|\theta) * p(\theta)}{p(D)}$$
$$posterior = \frac{likelihood * prior}{evidence}$$
The explanations follow:
Maximum Likelihood Estimate
With MLE, we seek a point value for $\theta$ which maximizes the likelihood, $p(D|\theta)$, shown in the equation(s) above. We can denote this value as $\hat{\theta}$. In MLE, $\hat{\theta}$ is a point estimate, not a random variable.
In other words, in the equation above, MLE treats the term $\frac{p(\theta)}{p(D)}$ as a constant and does NOT allow us to inject our prior beliefs, $p(\theta)$, about the likely values for $\theta$ in the estimation calculations.
Bayesian Estimate
Bayesian estimation, by contrast, fully calculates (or at times approximates) the posterior distribution $p(\theta|D)$. Bayesian inference treats $\theta$ as a random variable. In Bayesian estimation, we put in probability density functions and get out probability density functions, rather than a single point as in MLE.
Of all the $\theta$ values made possible by the output distribution $p(\theta|D)$, it is our job to select a value that we consider best in some sense. For example, we may choose the expected value of $\theta$ assuming its variance is small enough. The variance that we can calculate for the parameter $\theta$ from its posterior distribution allows us to express our confidence in any specific value we may use as an estimate. If the variance is too large, we may declare that there does not exist a good estimate for $\theta$.
As a trade-off, Bayesian estimation is made complex by the fact that we now have to deal with the denominator in the Bayes' rule, i.e. $evidence$. Here evidence -or probability of evidence- is represented by:
$$p(D) = \int_{\theta} p(D|\theta) * p(\theta) d\theta$$
This leads to the concept of 'conjugate priors' in Bayesian estimation. For a given likelihood function, if we have a choice regarding how we express our prior beliefs, we must use that form which allows us to carry out the integration shown above. The idea of conjugate priors and how they are practically implemented are explained quite well in this post by COOlSerdash. | What is the difference in Bayesian estimate and maximum likelihood estimate?
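To make the contrast concrete, here is a small conjugate Beta-Binomial sketch (the Beta(2, 2) prior and the data are arbitrary choices for illustration): the MLE is a single number, while the Bayesian output is a whole posterior distribution from which we can report, e.g., a mean and a credible interval.
k <- 7; n <- 10                         # 7 successes in 10 trials
theta_mle <- k / n                      # MLE: a point estimate, 0.7
a <- 2; b <- 2                          # Beta(2, 2) prior on theta
a_post <- a + k; b_post <- b + n - k    # posterior is Beta(a + k, b + n - k)
post_mean <- a_post / (a_post + b_post)              # about 0.643
cred_int  <- qbeta(c(0.025, 0.975), a_post, b_post)  # 95% credible interval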
It is a very broad question and my answer here only begins to scratch the surface a bit. I will use the Bayes's rule to explain the concepts.
Let’s assume that a set of probability distribution param |
3,435 | What is the difference in Bayesian estimate and maximum likelihood estimate? | I think you're talking about point estimation as in parametric inference, so that we can assume a parametric probability model for a data generating mechanism but the actual value of the parameter is unknown.
Maximum likelihood estimation refers to using a probability model for data and optimizing the joint likelihood function of the observed data over one or more parameters. It's therefore seen that the estimated parameters are most consistent with the observed data relative to any other parameter in the parameter space. Note such likelihood functions aren't necessarily viewed as being "conditional" upon the parameters since the parameters aren't random variables, hence it's somewhat more sophisticated to conceive of the likelihood of various outcomes comparing two different parameterizations. It turns out this is a philosophically sound approach.
Bayesian estimation is a bit more general because we're not necessarily maximizing the Bayesian analogue of the likelihood (the posterior density). However, the analogous type of estimation (or posterior mode estimation) is seen as maximizing the probability of the posterior parameter conditional upon the data. Usually, Bayes' estimates obtained in such a manner behave nearly exactly like those of ML. The key difference is that Bayes inference allows for an explicit method to incorporate prior information.
Also 'The Epic History of Maximum Likelihood' makes for an illuminating read
http://arxiv.org/pdf/0804.2996.pdf | What is the difference in Bayesian estimate and maximum likelihood estimate? | I think you're talking about point estimation as in parametric inference, so that we can assume a parametric probability model for a data generating mechanism but the actual value of the parameter is | What is the difference in Bayesian estimate and maximum likelihood estimate?
I think you're talking about point estimation as in parametric inference, so that we can assume a parametric probability model for a data generating mechanism but the actual value of the parameter is unknown.
Maximum likelihood estimation refers to using a probability model for data and optimizing the joint likelihood function of the observed data over one or more parameters. It's therefore seen that the estimated parameters are most consistent with the observed data relative to any other parameter in the parameter space. Note such likelihood functions aren't necessarily viewed as being "conditional" upon the parameters since the parameters aren't random variables, hence it's somewhat more sophisticated to conceive of the likelihood of various outcomes comparing two different parameterizations. It turns out this is a philosophically sound approach.
Bayesian estimation is a bit more general because we're not necessarily maximizing the Bayesian analogue of the likelihood (the posterior density). However, the analogous type of estimation (or posterior mode estimation) is seen as maximizing the probability of the posterior parameter conditional upon the data. Usually, Bayes' estimates obtained in such a manner behave nearly exactly like those of ML. The key difference is that Bayes inference allows for an explicit method to incorporate prior information.
Also 'The Epic History of Maximum Likelihood' makes for an illuminating read
http://arxiv.org/pdf/0804.2996.pdf | What is the difference in Bayesian estimate and maximum likelihood estimate?
I think you're talking about point estimation as in parametric inference, so that we can assume a parametric probability model for a data generating mechanism but the actual value of the parameter is |
3,436 | What is the difference in Bayesian estimate and maximum likelihood estimate? | The Bayesian estimate is Bayesian inference while the MLE is a type of frequentist inference method.
According to Bayesian inference, $f(x_1,...,x_n; \theta) = \frac{f(\theta; x_1,...,x_n) * f(x_1,...,x_n)}{f(\theta)}$ holds, that is $likelihood = \frac{posterior * evidence}{prior}$. Notice that the maximum likelihood estimate treats the ratio of evidence to prior as a constant (setting the prior distribution as a uniform/diffuse/uninformative prior, e.g. $p(\theta) = 1/6$ when rolling a die), which omits the prior beliefs; thus MLE is considered to be a frequentist technique (rather than Bayesian). And the prior need not be uniform in this scenario, because if the size of the sample is large enough, MLE amounts to MAP (for a detailed deduction please refer to this answer).
MLE's alternative in Bayesian inference is called maximum a posteriori estimation (MAP for short), and actually MLE is a special case of MAP where the prior is uniform, as we see above and as stated in Wikipedia:
From the point of view of Bayesian inference, MLE is a special case of
maximum a posteriori estimation (MAP) that assumes a uniform prior
distribution of the parameters.
For details please refer to this awesome article: MLE vs MAP: the connection between Maximum Likelihood and Maximum A Posteriori Estimation.
And one more difference is that maximum likelihood is overfitting-prone, but if you adopt the Bayesian approach the over-fitting problem can be avoided. | What is the difference in Bayesian estimate and maximum likelihood estimate? | The Bayesian estimate is Bayesian inference while the MLE is a type of frequentist inference method.
According to the Bayesian inference, $f(x_1,...,x_n; \theta) = \frac{f(\theta; x_1,...,x_n) * f(x_1 | What is the difference in Bayesian estimate and maximum likelihood estimate?
The Bayesian estimate is Bayesian inference while the MLE is a type of frequentist inference method.
According to Bayesian inference, $f(x_1,...,x_n; \theta) = \frac{f(\theta; x_1,...,x_n) * f(x_1,...,x_n)}{f(\theta)}$ holds, that is $likelihood = \frac{posterior * evidence}{prior}$. Notice that the maximum likelihood estimate treats the ratio of evidence to prior as a constant (setting the prior distribution as a uniform/diffuse/uninformative prior, e.g. $p(\theta) = 1/6$ when rolling a die), which omits the prior beliefs; thus MLE is considered to be a frequentist technique (rather than Bayesian). And the prior need not be uniform in this scenario, because if the size of the sample is large enough, MLE amounts to MAP (for a detailed deduction please refer to this answer).
MLE's alternative in Bayesian inference is called maximum a posteriori estimation (MAP for short), and actually MLE is a special case of MAP where the prior is uniform, as we see above and as stated in Wikipedia:
From the point of view of Bayesian inference, MLE is a special case of
maximum a posteriori estimation (MAP) that assumes a uniform prior
distribution of the parameters.
For details please refer to this awesome article: MLE vs MAP: the connection between Maximum Likelihood and Maximum A Posteriori Estimation.
And one more difference is that maximum likelihood is overfitting-prone, but if you adopt the Bayesian approach the over-fitting problem can be avoided. | What is the difference in Bayesian estimate and maximum likelihood estimate?
The Bayesian estimate is Bayesian inference while the MLE is a type of frequentist inference method.
According to the Bayesian inference, $f(x_1,...,x_n; \theta) = \frac{f(\theta; x_1,...,x_n) * f(x_1 |
3,437 | What is the difference in Bayesian estimate and maximum likelihood estimate? | In principle the difference is precisely 0 - asymptotically speaking :) | What is the difference in Bayesian estimate and maximum likelihood estimate? | In principle the difference is precisely 0 - asymptotically speaking :) | What is the difference in Bayesian estimate and maximum likelihood estimate?
In principle the difference is precisely 0 - asymptotically speaking :) | What is the difference in Bayesian estimate and maximum likelihood estimate?
In principle the difference is precisely 0 - asymptotically speaking :) |
3,438 | Is it true that the percentile bootstrap should never be used? | There are some difficulties that are common to all nonparametric bootstrapping estimates of confidence intervals (CI), some that are more of an issue with both the "empirical" (called "basic" in the boot.ci() function of the R boot package and in Ref. 1) and the "percentile" CI estimates (as described in Ref. 2), and some that can be exacerbated with percentile CIs.
TL;DR: In some cases percentile bootstrap CI estimates might work adequately, but if certain assumptions don't hold then the percentile CI might be the worst choice, with the empirical/basic bootstrap the next worst. Other bootstrap CI estimates can be more reliable, with better coverage. All can be problematic. Looking at diagnostic plots, as always, helps avoid potential errors incurred by just accepting the output of a software routine.
Bootstrap setup
Generally following the terminology and arguments of Ref. 1, we have a sample of data $y_1, ..., y_n$ drawn from independent and identically distributed random variables $Y_i$ sharing a cumulative distribution function $F$. The empirical distribution function (EDF) constructed from the data sample is $\hat F$. We are interested in a characteristic $\theta$ of the population, estimated by a statistic $T$ whose value in the sample is $t$. We would like to know how well $T$ estimates $\theta$, for example, the distribution of $(T - \theta)$.
Nonparametric bootstrap uses sampling from the EDF $\hat F$ to mimic sampling from $F$, taking $R$ samples each of size $n$ with replacement from the $y_i$. Values calculated from the bootstrap samples are denoted with "*". For example, the statistic $T$ calculated on bootstrap sample j provides a value $T_j^*$.
Empirical/basic versus percentile bootstrap CIs
The empirical/basic bootstrap uses the distribution of $(T^*-t)$ among the $R$ bootstrap samples from $\hat F$ to estimate the distribution of $(T-\theta)$ within the population described by $F$ itself. Its CI estimates are thus based on the distribution of $(T^*-t)$, where $t$ is the value of the statistic in the original sample.
This approach is based on the fundamental principle of bootstrapping (Ref. 3):
The population is to the sample as the sample is to the bootstrap samples.
The percentile bootstrap instead uses quantiles of the $T_j^*$ values themselves to determine the CI. These estimates can be quite different if there is skew or bias in the distribution of $(T-\theta)$.
Say that there is an observed bias $B$ such that:
$$\bar T^*=t+B,$$
where $\bar T^*$ is the mean of the $T_j^*$. For concreteness, say that the 5th and 95th percentiles of the $T_j^*$ are expressed as $\bar T^*-\delta_1$ and $\bar T^*+\delta_2$, where $\bar T^*$ is the mean over the bootstrap samples and $\delta_1,\delta_2$ are each positive and potentially different to allow for skew. The 5th and 95th CI percentile-based estimates would directly be given respectively by:
$$\bar T^*-\delta_1=t+B-\delta_1; \bar T^*+\delta_2=t+B+\delta_2.$$
The 5th and 95th percentile CI estimates by the empirical/basic bootstrap method would be respectively (Ref. 1, eq. 5.6, page 194):
$$2t-(\bar T^*+\delta_2) = t-B-\delta_2; 2t-(\bar T^*-\delta_1) = t-B+\delta_1.$$
So percentile-based CIs both get the bias wrong and flip the directions of the potentially asymmetric positions of the confidence limits around a doubly-biased center. The percentile CIs from bootstrapping in such a case do not represent the distribution of $(T-\theta)$.
This behavior is nicely illustrated on this page, for bootstrapping a statistic so negatively biased that the original sample estimate is below the 95% CIs based on the empirical/basic method (which directly includes appropriate bias correction). The 95% CIs based on the percentile method, arranged around a doubly-negatively biased center, are actually both below even the negatively biased point estimate from the original sample!
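A small sketch of how the two interval types are computed from the same set of bootstrap replicates of a (right-skewed) sample mean, showing the 'reflection' about $t$ used by the empirical/basic method; the data and the number of replicates are arbitrary:
set.seed(1)
y <- rexp(30)                                   # a skewed sample
t0 <- mean(y)                                   # statistic on the original sample
tstar <- replicate(4000, mean(sample(y, replace = TRUE)))
q <- quantile(tstar, c(0.05, 0.95))
percentile_ci <- q                              # percentile CI: the quantiles themselves
basic_ci <- c(2 * t0 - q[2], 2 * t0 - q[1])     # empirical/basic CI: reflected about t0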
Should the percentile bootstrap never be used?
That might be an overstatement or an understatement, depending on your perspective. If you can document minimal bias and skew, for example by visualizing the distribution of $(T^*-t)$ with histograms or density plots, the percentile bootstrap should provide essentially the same CI as the empirical/basic CI. These are probably both better than the simple normal approximation to the CI.
Neither approach, however, provides the accuracy in coverage that can be provided by other bootstrap approaches. Efron from the beginning recognized potential limitations of percentile CIs but said: "Mostly we will be content to let the varying degrees of success of the examples speak for themselves." (Ref. 2, page 3)
Subsequent work, summarized for example by DiCiccio and Efron (Ref. 4), developed methods that "improve by an order of magnitude upon the accuracy of the standard intervals" provided by the empirical/basic or percentile methods. Thus one might argue that neither the empirical/basic nor the percentile methods should be used, if you care about accuracy of the intervals.
In extreme cases, for example sampling directly from a lognormal distribution without transformation, no bootstrapped CI estimates might be reliable, as Frank Harrell has noted.
What limits the reliability of these and other bootstrapped CIs?
Several issues can tend to make bootstrapped CIs unreliable. Some apply to all approaches, others can be alleviated by approaches other than the empirical/basic or percentile methods.
The first, general, issue is how well the empirical distribution $\hat F$ represents the population distribution $F$. If it doesn't, then no bootstrapping method will be reliable. In particular, bootstrapping to determine anything close to extreme values of a distribution can be unreliable. This issue is discussed elsewhere on this site, for example here and here. The few, discrete, values available in the tails of $\hat F$ for any particular sample might not represent the tails of a continuous $F$ very well. An extreme but illustrative case is trying to use bootstrapping to estimate the maximum order statistic of a random sample from a uniform $\;\mathcal{U}[0,\theta]$ distribution, as explained nicely here. Note that bootstrapped 95% or 99% CI are themselves at tails of a distribution and thus could suffer from such a problem, particularly with small sample sizes.
Second, there is no assurance that sampling of any quantity from $\hat F$ will have the same distribution as sampling it from $F$. Yet that assumption underlies the fundamental principle of bootstrapping. Quantities with that desirable property are called pivotal. As AdamO explains:
This means that if the underlying parameter changes, the shape of the distribution is only shifted by a constant, and the scale does not necessarily change. This is a strong assumption!
For example, if there is bias it's important to know that sampling from $F$ around $\theta$ is the same as sampling from $\hat F$ around $t$. And this is a particular problem in nonparametric sampling; as Ref. 1 puts it on page 33:
In nonparametric problems the situation is more complicated. It is now unlikely (but not strictly impossible) that any quantity can be exactly pivotal.
So the best that's typically possible is an approximation. This problem, however, can often be addressed adequately. It's possible to estimate how closely a sampled quantity is to pivotal, for example with pivot plots as recommended by Canty et al. These can display how distributions of bootstrapped estimates $(T^*-t)$ vary with $t$, or how well a transformation $h$ provides a quantity $(h(T^*)-h(t))$ that is pivotal. Methods for improved bootstrapped CIs can try to find a transformation $h$ such that $(h(T^*)-h(t))$ is closer to pivotal for estimating CIs in the transformed scale, then transform back to the original scale.
The boot.ci() function provides studentized bootstrap CIs (called "bootstrap-t" by DiCiccio and Efron) and $BC_a$ CIs (bias corrected and accelerated, where the "acceleration" deals with skew) that are "second-order accurate" in that the difference between the desired and achieved coverage $\alpha$ (e.g., 95% CI) is on the order of $n^{-1}$, versus only first-order accurate (order of $n^{-0.5}$) for the empirical/basic and percentile methods (Ref 1, pp. 212-3; Ref. 4). These methods, however, require keeping track of the variances within each of the bootstrapped samples, not just the individual values of the $T_j^*$ used by those simpler methods.
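As a sketch of how this looks with the boot package, continuing the hypothetical sample above (the statistic function returns the estimate plus a rough variance estimate, since the studentized intervals need a variance within each bootstrap sample):

library(boot)
med_fun <- function(d, i) {
  ds <- d[i]
  c(median(ds), var(ds) / length(ds))           # second element: crude variance, for illustration only
}
b <- boot(y, med_fun, R = 2000)
boot.ci(b, conf = 0.95, type = c("basic", "perc", "stud", "bca"))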
In extreme cases, one might need to resort to bootstrapping within the bootstrapped samples themselves to provide adequate adjustment of confidence intervals. This "Double Bootstrap" is described in Section 5.6 of Ref. 1, with other chapters in that book suggesting ways to minimize its extreme computational demands.
Davison, A. C. and Hinkley, D. V. Bootstrap Methods and their Application, Cambridge University Press, 1997.
Efron, B. Bootstrap Methods: Another look at the jackknife, Ann. Statist. 7: 1-26, 1979.
Fox, J. and Weisberg, S. Bootstrapping regression models in R. An Appendix to An R Companion to Applied Regression, Third Edition (Sage, 2019). Revision as of 21 September 2018.
DiCiccio, T. J. and Efron, B. Bootstrap confidence intervals. Stat. Sci. 11: 189-228, 1996.
Canty, A. J., Davison, A. C., Hinkley, D. V., and Ventura, V. Bootstrap diagnostics and remedies. Can. J. Stat. 34: 5-27, 2006. | Is it true that the percentile bootstrap should never be used? | There are some difficulties that are common to all nonparametric bootstrapping estimates of confidence intervals (CI), some that are more of an issue with both the "empirical" (called "basic" in the b | Is it true that the percentile bootstrap should never be used?
There are some difficulties that are common to all nonparametric bootstrapping estimates of confidence intervals (CI), some that are more of an issue with both the "empirical" (called "basic" in the boot.ci() function of the R boot package and in Ref. 1) and the "percentile" CI estimates (as described in Ref. 2), and some that can be exacerbated with percentile CIs.
TL;DR: In some cases percentile bootstrap CI estimates might work adequately, but if certain assumptions don't hold then the percentile CI might be the worst choice, with the empirical/basic bootstrap the next worst. Other bootstrap CI estimates can be more reliable, with better coverage. All can be problematic. Looking at diagnostic plots, as always, helps avoid potential errors incurred by just accepting the output of a software routine.
Bootstrap setup
Generally following the terminology and arguments of Ref. 1, we have a sample of data $y_1, ..., y_n$ drawn from independent and identically distributed random variables $Y_i$ sharing a cumulative distribution function $F$. The empirical distribution function (EDF) constructed from the data sample is $\hat F$. We are interested in a characteristic $\theta$ of the population, estimated by a statistic $T$ whose value in the sample is $t$. We would like to know how well $T$ estimates $\theta$, for example, the distribution of $(T - \theta)$.
Nonparametric bootstrap uses sampling from the EDF $\hat F$ to mimic sampling from $F$, taking $R$ samples each of size $n$ with replacement from the $y_i$. Values calculated from the bootstrap samples are denoted with "*". For example, the statistic $T$ calculated on bootstrap sample j provides a value $T_j^*$.
Empirical/basic versus percentile bootstrap CIs
The empirical/basic bootstrap uses the distribution of $(T^*-t)$ among the $R$ bootstrap samples from $\hat F$ to estimate the distribution of $(T-\theta)$ within the population described by $F$ itself. Its CI estimates are thus based on the distribution of $(T^*-t)$, where $t$ is the value of the statistic in the original sample.
This approach is based on the fundamental principle of bootstrapping (Ref. 3):
The population is to the sample as the sample is to the bootstrap samples.
The percentile bootstrap instead uses quantiles of the $T_j^*$ values themselves to determine the CI. These estimates can be quite different if there is skew or bias in the distribution of $(T-\theta)$.
Say that there is an observed bias $B$ such that:
$$\bar T^*=t+B,$$
where $\bar T^*$ is the mean of the $T_j^*$. For concreteness, say that the 5th and 95th percentiles of the $T_j^*$ are expressed as $\bar T^*-\delta_1$ and $\bar T^*+\delta_2$, where $\bar T^*$ is the mean over the bootstrap samples and $\delta_1,\delta_2$ are each positive and potentially different to allow for skew. The 5th and 95th CI percentile-based estimates would directly be given respectively by:
$$\bar T^*-\delta_1=t+B-\delta_1; \bar T^*+\delta_2=t+B+\delta_2.$$
The 5th and 95th percentile CI estimates by the empirical/basic bootstrap method would be respectively (Ref. 1, eq. 5.6, page 194):
$$2t-(\bar T^*+\delta_2) = t-B-\delta_2; 2t-(\bar T^*-\delta_1) = t-B+\delta_1.$$
So percentile-based CIs both get the bias wrong and flip the directions of the potentially asymmetric positions of the confidence limits around a doubly-biased center. The percentile CIs from bootstrapping in such a case do not represent the distribution of $(T-\theta)$.
This behavior is nicely illustrated on this page, for bootstrapping a statistic so negatively biased that the original sample estimate is below the 95% CIs based on the empirical/basic method (which directly includes appropriate bias correction). The 95% CIs based on the percentile method, arranged around a doubly-negatively biased center, are actually both below even the negatively biased point estimate from the original sample!
Should the percentile bootstrap never be used?
That might be an overstatement or an understatement, depending on your perspective. If you can document minimal bias and skew, for example by visualizing the distribution of $(T^*-t)$ with histograms or density plots, the percentile bootstrap should provide essentially the same CI as the empirical/basic CI. These are probably both better than the simple normal approximation to the CI.
Neither approach, however, provides the accuracy in coverage that can be provided by other bootstrap approaches. Efron from the beginning recognized potential limitations of percentile CIs but said: "Mostly we will be content to let the varying degrees of success of the examples speak for themselves." (Ref. 2, page 3)
Subsequent work, summarized for example by DiCiccio and Efron (Ref. 4), developed methods that "improve by an order of magnitude upon the accuracy of the standard intervals" provided by the empirical/basic or percentile methods. Thus one might argue that neither the empirical/basic nor the percentile methods should be used, if you care about accuracy of the intervals.
In extreme cases, for example sampling directly from a lognormal distribution without transformation, no bootstrapped CI estimates might be reliable, as Frank Harrell has noted.
What limits the reliability of these and other bootstrapped CIs?
Several issues can tend to make bootstrapped CIs unreliable. Some apply to all approaches, others can be alleviated by approaches other than the empirical/basic or percentile methods.
The first, general, issue is how well the empirical distribution $\hat F$ represents the population distribution $F$. If it doesn't, then no bootstrapping method will be reliable. In particular, bootstrapping to determine anything close to extreme values of a distribution can be unreliable. This issue is discussed elsewhere on this site, for example here and here. The few, discrete, values available in the tails of $\hat F$ for any particular sample might not represent the tails of a continuous $F$ very well. An extreme but illustrative case is trying to use bootstrapping to estimate the maximum order statistic of a random sample from a uniform $\;\mathcal{U}[0,\theta]$ distribution, as explained nicely here. Note that bootstrapped 95% or 99% CI are themselves at tails of a distribution and thus could suffer from such a problem, particularly with small sample sizes.
Second, there is no assurance that sampling of any quantity from $\hat F$ will have the same distribution as sampling it from $F$. Yet that assumption underlies the fundamental principle of bootstrapping. Quantities with that desirable property are called pivotal. As AdamO explains:
This means that if the underlying parameter changes, the shape of the distribution is only shifted by a constant, and the scale does not necessarily change. This is a strong assumption!
For example, if there is bias it's important to know that sampling from $F$ around $\theta$ is the same as sampling from $\hat F$ around $t$. And this is a particular problem in nonparametric sampling; as Ref. 1 puts it on page 33:
In nonparametric problems the situation is more complicated. It is now unlikely (but not strictly impossible) that any quantity can be exactly pivotal.
So the best that's typically possible is an approximation. This problem, however, can often be addressed adequately. It's possible to estimate how closely a sampled quantity is to pivotal, for example with pivot plots as recommended by Canty et al. These can display how distributions of bootstrapped estimates $(T^*-t)$ vary with $t$, or how well a transformation $h$ provides a quantity $(h(T^*)-h(t))$ that is pivotal. Methods for improved bootstrapped CIs can try to find a transformation $h$ such that $(h(T^*)-h(t))$ is closer to pivotal for estimating CIs in the transformed scale, then transform back to the original scale.
The boot.ci() function provides studentized bootstrap CIs (called "bootstrap-t" by DiCiccio and Efron) and $BC_a$ CIs (bias corrected and accelerated, where the "acceleration" deals with skew) that are "second-order accurate" in that the difference between the desired and achieved coverage $\alpha$ (e.g., 95% CI) is on the order of $n^{-1}$, versus only first-order accurate (order of $n^{-0.5}$) for the empirical/basic and percentile methods (Ref 1, pp. 212-3; Ref. 4). These methods, however, require keeping track of the variances within each of the bootstrapped samples, not just the individual values of the $T_j^*$ used by those simpler methods.
In extreme cases, one might need to resort to bootstrapping within the bootstrapped samples themselves to provide adequate adjustment of confidence intervals. This "Double Bootstrap" is described in Section 5.6 of Ref. 1, with other chapters in that book suggesting ways to minimize its extreme computational demands.
Davison, A. C. and Hinkley, D. V. Bootstrap Methods and their Application, Cambridge University Press, 1997.
Efron, B. Bootstrap Methods: Another look at the jackknife, Ann. Statist. 7: 1-26, 1979.
Fox, J. and Weisberg, S. Bootstrapping regression models in R. An Appendix to An R Companion to Applied Regression, Third Edition (Sage, 2019). Revision as of 21 September 2018.
DiCiccio, T. J. and Efron, B. Bootstrap confidence intervals. Stat. Sci. 11: 189-228, 1996.
Canty, A. J., Davison, A. C., Hinkley, D. V., and Ventura, V. Bootstrap diagnostics and remedies. Can. J. Stat. 34: 5-27, 2006. | Is it true that the percentile bootstrap should never be used?
There are some difficulties that are common to all nonparametric bootstrapping estimates of confidence intervals (CI), some that are more of an issue with both the "empirical" (called "basic" in the b |
3,439 | Is it true that the percentile bootstrap should never be used? | Some comments on different terminology between MIT / Rice and Efron's book
I think that EdM's answer does a fantastic job in answering the OP's original question, in relation to the MIT lecture notes. However, the OP also quotes the book by Efron (2016), Computer Age Statistical Inference, which uses slightly different definitions that may lead to confusion.
Chapter 11 - Student score sample correlation example
This example uses a sample for which the parameter of interest is the correlation. In the sample it is observed as $\hat \theta = 0.498$. Efron then performs $B = 2000$ nonparametric bootstrap replications $\hat \theta^*$ for the student score sample correlation and plots the histogram of the results (page 186).
Standard interval bootstrap
He then defines the following Standard interval bootstrap :
$$ \hat \theta \pm 1.96 \hat{se}$$
For 95% coverage where $\hat{se}$ is taken to be the bootstrap standard error: $se_{boot}$, also called the empirical standard deviation of the bootstrap values.
Empirical standard deviation of the bootstrap values:
Let the original sample be $\mathbf{x} = (x_1,x_2,...,x_n)$ and the bootstrap sample be $\mathbf{x^*} = (x_1^*,x_2^*,...,x_n^*)$. Each bootstrap sample $b$ provides a bootstrap replication of the statistic of interest:
$$ \hat \theta^{*b} = s(\mathbf{x}^{*b}) \ \text{ for } b = 1,2,...,B $$
The resulting bootstrap estimate of standard error for $\hat \theta$ is
$$\hat{se}_{boot} = \left[ \sum_{b=1}^B (\hat \theta^{*b} - \hat \theta^{*})^2 / (B-1)\right]^{1/2} $$
$$ \hat \theta^{*} = \frac{\sum_{b=1}^B \hat \theta^{*b}}{B}$$
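A small R sketch of these definitions (with a hypothetical two-column score matrix standing in for the student score data):

# se_boot and the standard interval, following the formulas above
set.seed(1)
x <- matrix(rnorm(2 * 22), ncol = 2)            # stand-in for the n = 22 student score pairs
theta_hat <- cor(x[, 1], x[, 2])
B <- 2000
theta_star <- replicate(B, {
  i <- sample(nrow(x), replace = TRUE)
  cor(x[i, 1], x[i, 2])
})
se_boot <- sd(theta_star)                       # empirical sd of the bootstrap values
theta_hat + c(-1, 1) * 1.96 * se_boot           # standard interval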
This definition seems different from the one used in EdM's answer:
The empirical/basic bootstrap uses the distribution of $(T^∗−t)$ among the $R$ bootstrap samples from $\hat F$ to estimate the distribution of $(T−\theta)$ within the population described by $F$ itself.
Percentile bootstrap
Here, both definitions seem aligned. From Efron page 186:
The percentile method uses the shape of the bootstrap distribution to improve upon the standard intervals. Having generated $B$ replications $\hat \theta^{*1}, \hat \theta^{*2},...,\hat \theta^{*B}$ we then use the percentiles of their distribution to define percentile confidence limits.
In this example, these are 0.118 and 0.758 respectively.
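In R these limits are simply the empirical quantiles of the replications (continuing the sketch above; the 0.118 and 0.758 quoted by Efron come from his actual student score data, not from the stand-in sample):

quantile(theta_star, c(0.025, 0.975))           # percentile confidence limits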
Quoting EdM:
The percentile bootstrap instead uses quantiles of the $T^∗_j$ values themselves to determine the CI.
Comparing the standard and percentile method as defined by Efron
Based on his own definitions, Efron goes to considerable length to argue that the percentile method is an improvement. For this example the resulting CI are:
Conclusion
I would argue that the OP's original question is aligned to the definitions provided by EdM. The edits made by the OP to clarify the definitions are aligned to Efron's book and are not exactly the same for Empirical vs Standard bootstrap CI.
Comments are welcome | Is it true that the percentile bootstrap should never be used? | Some comments on different terminology between MIT / Rice and Efron's book
I think that EdM's answer does a fantastic job in answering the OPs original question, in relation to the MIT lecture notes. | Is it true that the percentile bootstrap should never be used?
Some comments on different terminology between MIT / Rice and Efron's book
I think that EdM's answer does a fantastic job in answering the OP's original question, in relation to the MIT lecture notes. However, the OP also quotes the book by Efron (2016), Computer Age Statistical Inference, which uses slightly different definitions that may lead to confusion.
Chapter 11 - Student score sample correlation example
This example uses a sample for which the parameter of interest is the correlation. In the sample it is observed as $\hat \theta = 0.498$. Efron then performs $B = 2000$ nonparametric bootstrap replications $\hat \theta^*$ for the student score sample correlation and plots the histogram of the results (page 186).
Standard interval bootstrap
He then defines the following Standard interval bootstrap :
$$ \hat \theta \pm 1.96 \hat{se}$$
For 95% coverage where $\hat{se}$ is taken to be the bootstrap standard error: $se_{boot}$, also called the empirical standard deviation of the bootstrap values.
Empirical standard deviation of the bootstrap values:
Let the original sample be $\mathbf{x} = (x_1,x_2,...,x_n)$ and the bootstrap sample be $\mathbf{x^*} = (x_1^*,x_2^*,...,x_n^*)$. Each bootstrap sample $b$ provides a bootstrap replication of the statistic of interest:
$$ \hat \theta^{*b} = s(\mathbf{x}^{*b}) \ \text{ for } b = 1,2,...,B $$
The resulting bootstrap estimate of standard error for $\hat \theta$ is
$$\hat{se}_{boot} = \left[ \sum_{b=1}^B (\hat \theta^{*b} - \hat \theta^{*})^2 / (B-1)\right]^{1/2} $$
$$ \hat \theta^{*} = \frac{\sum_{b=1}^B \hat \theta^{*b}}{B}$$
This definition seems different from the one used in EdM's answer:
The empirical/basic bootstrap uses the distribution of $(T^∗−t)$ among the $R$ bootstrap samples from $\hat F$ to estimate the distribution of $(T−\theta)$ within the population described by $F$ itself.
Percentile bootstrap
Here, both definitions seem aligned. From Efron page 186:
The percentile method uses the shape of the bootstrap distribution to improve upon the standard intervals. Having generated $B$ replications $\hat \theta^{*1}, \hat \theta^{*2},...,\hat \theta^{*B}$ we then use the percentiles of their distribution to define percentile confidence limits.
In this example, these are 0.118 and 0.758 respectively.
Quoting EdM:
The percentile bootstrap instead uses quantiles of the $T^∗_j$ values themselves to determine the CI.
Comparing the standard and percentile method as defined by Efron
Based on his own definitions, Efron goes to considerable length to argue that the percentile method is an improvement. For this example the resulting CI are:
Conclusion
I would argue that the OP's original question is aligned to the definitions provided by EdM. The edits made by the OP to clarify the definitions are aligned to Efron's book and are not exactly the same for Empirical vs Standard bootstrap CI.
Comments are welcome | Is it true that the percentile bootstrap should never be used?
Some comments on different terminology between MIT / Rice and Efron's book
I think that EdM's answer does a fantastic job in answering the OPs original question, in relation to the MIT lecture notes. |
3,440 | Is it true that the percentile bootstrap should never be used? | I'm following your guideline: "Looking for an answer drawing from credible and/or official sources."
The bootstrap was invented by Brad Efron. I think it's fair to say that he's a distinguished statistician. It is a fact that he is a professor at Stanford. I think that makes his opinions credible and official.
I believe that Computer Age Statistical Inference by Efron and Hastie is his latest book and so should reflect his current views. From p. 204 (11.7, notes and details),
Bootstrap confidence intervals are neither exact nor optimal, but aim instead for a wide applicability combined with near-exact accuracy.
If you read Chapter 11, "Bootstrap Confidence Intervals", he gives 4 methods of creating bootstrap confidence intervals. The second of these methods is (11.2) The Percentile Method. The third and the fourth methods are variants on the percentile method that attempt to correct for what Efron and Hastie describe as a bias in the confidence interval and for which they give a theoretical explanation.
As an aside, I can't decide if there is any difference between what the MIT people call empirical bootstrap CI and percentile CI. I may be having a brain fart, but I see the empirical method as the percentile method after subtracting off a fixed quantity. That should change nothing. I'm probably mis-reading, but I'd be truly grateful if someone can explain how I am mis-understanding their text.
Regardless, the leading authority doesn't seem to have an issue with percentile CI's. I also think his comment answers criticisms of bootstrap CI that are mentioned by some people.
MAJOR ADD ON
Firstly, after taking the time to digest the MIT chapter and the comments, the most important thing to note is that what MIT calls empirical bootstrap and percentile bootstrap differ - The empirical bootstrap and the percentile bootstrap will be different in that what they call the empirical bootstrap will be the interval $[\bar{x*}-\delta_{.1},\bar{x*}-\delta_{.9}]$ whereas the percentile bootstrap will have the confidence interval $[\bar{x*}-\delta_{.9},\bar{x*}-\delta_{.1}]$.
I would further argue that as per Efron-Hastie the percentile bootstrap is more canonical. The key to what MIT calls the empirical bootstrap is to look at the distribution of $\delta = \bar{x} - \mu$. But why $\bar{x} - \mu$, and why not $\mu-\bar{x}$? Just as reasonable. Further, the deltas for the second set give the defiled percentile bootstrap! Efron uses the percentile and I think that the distribution of the actual means should be most fundamental. I would add that in addition to Efron and Hastie and the 1979 paper of Efron mentioned in another answer, Efron wrote a book on the bootstrap in 1982. In all 3 sources there are mentions of the percentile bootstrap, but I find no mention of what the MIT people call the empirical bootstrap. In addition, I'm pretty sure that they calculate the percentile bootstrap incorrectly. Below is an R notebook I wrote.
Comments on the MIT reference
First let’s get the MIT data into R. I did a simple cut and paste job of their bootstrap samples and saved it to boot.txt.
orig.boot = c(30, 37, 36, 43, 42, 43, 43, 46, 41, 42)
boot = read.table(file = "boot.txt")
means = as.numeric(lapply(boot,mean)) # lapply creates lists, not vectors. I use it ALWAYS for data frames.
mu = mean(orig.boot)
del = sort(means - mu) # the differences
mu
means
del
And further
mu - sort(del)[3]
mu - sort(del)[18]
So we get the same answer they do. In particular I have the same 10th and 90th percentile. I want to point out that the range from the 10th to the 90th percentile is 3. This is the same as MIT has.
What are my means?
means
sort(means)
I’m getting different means. Important point- my 10th and 90th mean 38.9 and 41.9 . This is what I would expect. They are different because I am considering distances from 40.3, so I am reversing the subtraction order. Note that 40.3-38.9 = 1.4 (and 40.3 - 1.6 = 38.7). So what they call the percentile bootstrap gives a distribution that depends on the actual means we get and not the differences.
Key Point
The empirical bootstrap and the percentile bootstrap will be different in that what they call the empirical bootstrap will be the interval $[\bar{x*}-\delta_{.1},\bar{x*}-\delta_{.9}]$ whereas the percentile bootstrap will have the confidence interval $[\bar{x*}-\delta_{.9},\bar{x*}-\delta_{.1}]$. Typically they shouldn't be that different. I have my thoughts as to which I would prefer, but I am not the definitive source that OP requests.
Thought experiment: should the two converge if the sample size increases? Notice that there are $2^{10}$ possible samples of size 10. Let's not go nuts, but what about if we take 2000 samples - a size usually considered sufficient.
set.seed(1234) # reproducible
boot.2k = matrix(NA,10,2000)
for( i in c(1:2000)){
boot.2k[,i] = sample(orig.boot,10,replace = T)
}
mu2k = sort(apply(boot.2k,2,mean))
Let’s look at mu2k
summary(mu2k)
mean(mu2k)-mu2k[200]
mean(mu2k) - mu2k[1801]
And the actual values-
mu2k[200]
mu2k[1801]
So now what MIT calls the empirical bootstrap gives an 80% confidence interval of [40.3 - 1.87, 40.3 + 1.64] or [38.43, 41.94] and their bad percentile distribution gives [38.5, 42]. This of course makes sense because the law of large numbers will say in this case that the distribution should converge to a normal distribution. Incidentally, this is discussed in Efron and Hastie. The first method they give for calculating the bootstrap interval is to use mu +/- 1.96 sd. As they point out, for large enough sample size this will work. They then give an example for which n=2000 is not large enough to get an approximately normal distribution of the data.
Conclusions
Firstly, I want to state the principle I use to decide questions of naming. “It’s my party I can cry if I want to.” While originally enunciated by Petula Clark, I think it also applies naming structures. So with sincere deference to MIT, I think that Bradley Efron deserves to name the various bootstrapping methods as he wishes. What does he do ? I can find no mention in Efron of ‘empirical bootstrap’, just percentile. So I will humbly disagree with Rice, MIT, et al. I would also point out that by the law of large numbers, as used in the MIT lecture, empirical and percentile should converge to the same number. To my taste, percentile bootstrap is intuitive, justified, and what the inventor of bootstrap had in mind. I would add that I took the time to do this just for my own edification, not anything else. In particular, I didn’t write Efron, which probably is what OP should do. I am most willing to stand corrected. | Is it true that the percentile bootstrap should never be used? | I'm following your guideline: "Looking for an answer drawing from credible and/or official sources."
The bootstrap was invented by Brad Efron. I think it's fair to say that he's a distinguished stati | Is it true that the percentile bootstrap should never be used?
I'm following your guideline: "Looking for an answer drawing from credible and/or official sources."
The bootstrap was invented by Brad Efron. I think it's fair to say that he's a distinguished statistician. It is a fact that he is a professor at Stanford. I think that makes his opinions credible and official.
I believe that Computer Age Statistical Inference by Efron and Hastie is his latest book and so should reflect his current views. From p. 204 (11.7, notes and details),
Bootstrap confidence intervals are neither exact nor optimal, but aim instead for a wide applicability combined with near-exact accuracy.
If you read Chapter 11, "Bootstrap Confidence Intervals", he gives 4 methods of creating bootstrap confidence intervals. The second of these methods is (11.2) The Percentile Method. The third and the fourth methods are variants on the percentile method that attempt to correct for what Efron and Hastie describe as a bias in the confidence interval and for which they give a theoretical explanation.
As an aside, I can't decide if there is any difference between what the MIT people call empirical bootstrap CI and percentile CI. I may be having a brain fart, but I see the empirical method as the percentile method after subtracting off a fixed quantity. That should change nothing. I'm probably mis-reading, but I'd be truly grateful if someone can explain how I am mis-understanding their text.
Regardless, the leading authority doesn't seem to have an issue with percentile CI's. I also think his comment answers criticisms of bootstrap CI that are mentioned by some people.
MAJOR ADD ON
Firstly, after taking the time to digest the MIT chapter and the comments, the most important thing to note is that what MIT calls empirical bootstrap and percentile bootstrap differ - The empirical bootstrap and the percentile bootstrap will be different in that what they call the empirical bootstrap will be the interval $[\bar{x*}-\delta_{.1},\bar{x*}-\delta_{.9}]$ whereas the percentile bootstrap will have the confidence interval $[\bar{x*}-\delta_{.9},\bar{x*}-\delta_{.1}]$.
I would further argue that as per Efron-Hastie the percentile bootstrap is more canonical. The key to what MIT calls the empirical bootstrap is to look at the distribution of $\delta = \bar{x} - \mu$ . But why $\bar{x} - \mu$, why not $\mu-\bar{x}$. Just as reasonable. Further, the delta's for the second set is the defiled percentile bootstrap !. Efron uses the percentile and I think that the distribution of the actual means should be most fundamental. I would add that in addition to the Efron and Hastie and the 1979 paper of Efron mentioned in another answer, Efron wrote a book on the bootstrap in 1982. In all 3 sources there are mentions of percentile bootstrap, but I find no mention of what the MIT people call the empirical bootstrap. In addition, I'm pretty sure that they calculate the percentile bootstrap incorrectly. Below is an R notebook I wrote.
Comments on the MIT reference
First let’s get the MIT data into R. I did a simple cut and paste job of their bootstrap samples and saved it to boot.txt.
orig.boot = c(30, 37, 36, 43, 42, 43, 43, 46, 41, 42)
boot = read.table(file = "boot.txt")
means = as.numeric(lapply(boot,mean)) # lapply creates lists, not vectors. I use it ALWAYS for data frames.
mu = mean(orig.boot)
del = sort(means - mu) # the differences
mu
means
del
And further
mu - sort(del)[3]
mu - sort(del)[18]
So we get the same answer they do. In particular I have the same 10th and 90th percentile. I want to point out that the range from the 10th to the 90th percentile is 3. This is the same as MIT has.
What are my means?
means
sort(means)
I’m getting different means. Important point- my 10th and 90th mean 38.9 and 41.9 . This is what I would expect. They are different because I am considering distances from 40.3, so I am reversing the subtraction order. Note that 40.3-38.9 = 1.4 (and 40.3 - 1.6 = 38.7). So what they call the percentile bootstrap gives a distribution that depends on the actual means we get and not the differences.
Key Point
The empirical bootstrap and the percentile bootstrap will be different in that what they call the empirical bootstrap will be the interval $[\bar{x*}-\delta_{.1},\bar{x*}-\delta_{.9}]$ whereas the percentile bootstrap will have the confidence interval $[\bar{x*}-\delta_{.9},\bar{x*}-\delta_{.1}]$. Typically they shouldn't be that different. I have my thoughts as to which I would prefer, but I am not the definitive source that OP requests.
Thought experiment: should the two converge if the sample size increases? Notice that there are $2^{10}$ possible samples of size 10. Let's not go nuts, but what about if we take 2000 samples - a size usually considered sufficient.
set.seed(1234) # reproducible
boot.2k = matrix(NA,10,2000)
for( i in c(1:2000)){
boot.2k[,i] = sample(orig.boot,10,replace = T)
}
mu2k = sort(apply(boot.2k,2,mean))
Let’s look at mu2k
summary(mu2k)
mean(mu2k)-mu2k[200]
mean(mu2k) - mu2k[1801]
And the actual values-
mu2k[200]
mu2k[1801]
So now what MIT calls the empirical bootstrap gives an 80% confidence interval of [40.3 - 1.87, 40.3 + 1.64] or [38.43, 41.94] and their bad percentile distribution gives [38.5, 42]. This of course makes sense because the law of large numbers will say in this case that the distribution should converge to a normal distribution. Incidentally, this is discussed in Efron and Hastie. The first method they give for calculating the bootstrap interval is to use mu +/- 1.96 sd. As they point out, for large enough sample size this will work. They then give an example for which n=2000 is not large enough to get an approximately normal distribution of the data.
Conclusions
Firstly, I want to state the principle I use to decide questions of naming. “It’s my party I can cry if I want to.” While originally enunciated by Petula Clark, I think it also applies naming structures. So with sincere deference to MIT, I think that Bradley Efron deserves to name the various bootstrapping methods as he wishes. What does he do ? I can find no mention in Efron of ‘empirical bootstrap’, just percentile. So I will humbly disagree with Rice, MIT, et al. I would also point out that by the law of large numbers, as used in the MIT lecture, empirical and percentile should converge to the same number. To my taste, percentile bootstrap is intuitive, justified, and what the inventor of bootstrap had in mind. I would add that I took the time to do this just for my own edification, not anything else. In particular, I didn’t write Efron, which probably is what OP should do. I am most willing to stand corrected. | Is it true that the percentile bootstrap should never be used?
I'm following your guideline: "Looking for an answer drawing from credible and/or official sources."
The bootstrap was invented by Brad Efron. I think it's fair to say that he's a distinguished stati |
3,441 | Is it true that the percentile bootstrap should never be used? | As already noted in earlier replies, the "empirical bootstrap" is called "basic bootstrap" in other sources (including the R function boot.ci), which is identical to the "percentile bootstrap" flipped at the point estimate. Venables and Ripley write ("Modern Applied Statstics with S", 4th ed., Springer, 2002, p. 136):
In asymmetric problems the basic and percentile intervals will differ
considerably, and the basic intervals seem more rational.
Out of curiosity, I have done extensive Monte Carlo simulations with two asymmetrically distributed estimators, and found, to my own surprise, exactly the opposite, i.e. that the percentile interval outperformed the basic interval in terms of coverage probability. Here are my results with the coverage probability for each sample size $n$ estimated with one million different samples (taken from this Technical Report, p. 26f):
Mean of an asymmetric distribution with density $f(x)=3x^2$
In this case the classic confidence intervals $\pm t_{1-\alpha/2}\sqrt{s^2/n}$ and $\pm z_{1-\alpha/2}\sqrt{s^2/n}$ are given for comparison.
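A compressed sketch of this kind of coverage experiment for this first case (inverse-CDF sampling of the density $f(x)=3x^2$, whose true mean is 3/4; far smaller scale than the one million samples behind the reported numbers):

set.seed(1)
n <- 20; nsim <- 2000; R <- 999; mu <- 3/4
cover <- matrix(FALSE, nsim, 2, dimnames = list(NULL, c("percentile", "basic")))
for (s in 1:nsim) {
  x <- runif(n)^(1/3)                           # sample from density f(x) = 3x^2 on [0,1]
  m <- mean(x)
  mstar <- replicate(R, mean(sample(x, n, replace = TRUE)))
  q <- quantile(mstar, c(0.025, 0.975))
  cover[s, "percentile"] <- q[[1]] <= mu && mu <= q[[2]]
  cover[s, "basic"]      <- (2 * m - q[[2]]) <= mu && mu <= (2 * m - q[[1]])
}
colMeans(cover)                                 # estimated coverage probabilities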
Maximum Likelihood Estimator for $\lambda$ in the exponential distribution
In this case, two alternative confidence intervals are given for comparison: $\pm z_{1-\alpha/2}$ times the log-likelihood Hessian inverse, and $\pm z_{1-\alpha/2}$ times the Jackknife variance estimator.
In both use cases, the BCa bootstrap has the highest coverage probablity among the bootstrap methods, and the percentile bootstrap has higher coverage probability than the basic/empirical bootstrap. | Is it true that the percentile bootstrap should never be used? | As already noted in earlier replies, the "empirical bootstrap" is called "basic bootstrap" in other sources (including the R function boot.ci), which is identical to the "percentile bootstrap" flipped | Is it true that the percentile bootstrap should never be used?
As already noted in earlier replies, the "empirical bootstrap" is called "basic bootstrap" in other sources (including the R function boot.ci), which is identical to the "percentile bootstrap" flipped at the point estimate. Venables and Ripley write ("Modern Applied Statistics with S", 4th ed., Springer, 2002, p. 136):
In asymmetric problems the basic and percentile intervals will differ
considerably, and the basic intervals seem more rational.
Out of curiosity, I have done extensive Monte Carlo simulations with two asymmetrically distributed estimators, and found, to my own surprise, exactly the opposite, i.e. that the percentile interval outperformed the basic interval in terms of coverage probability. Here are my results with the coverage probability for each sample size $n$ estimated with one million different samples (taken from this Technical Report, p. 26f):
Mean of an asymmetric distribution with density $f(x)=3x^2$
In this case the classic confidence intervals $\pm t_{1-\alpha/2}\sqrt{s^2/n}$ and $\pm z_{1-\alpha/2}\sqrt{s^2/n}$ are given for comparison.
Maximum Likelihood Estimator for $\lambda$ in the exponential distribution
In this case, two alternative confidence intervals are given for comparison: $\pm z_{1-\alpha/2}$ times the log-likelihood Hessian inverse, and $\pm z_{1-\alpha/2}$ times the Jackknife variance estimator.
In both use cases, the BCa bootstrap has the highest coverage probablity among the bootstrap methods, and the percentile bootstrap has higher coverage probability than the basic/empirical bootstrap. | Is it true that the percentile bootstrap should never be used?
As already noted in earlier replies, the "empirical bootstrap" is called "basic bootstrap" in other sources (including the R function boot.ci), which is identical to the "percentile bootstrap" flipped |
3,442 | Is it true that the percentile bootstrap should never be used? | As noted in cdalitz's answer, the percentile bootstrap gives better confidence intervals than the empirical/basic bootstrap quite often. I'd now like to offer a justification as to why this is the case.
I'm unaware of a frequentist justification for the percentile bootstrap. However, the percentile bootstrap (or a close cousin) can easily be derived from a Bayesian point of view. The Bayesian bootstrap assumes that:
Before observing any data, any possible data point is equally probable.
There is no "smoothing" of the data -- any new information or data that increases the probability of $x$ has no bearing on the probability that the next data point will equal $x + .0000...1$.
Let's say that we have a probability of $p$ assigned to the observed values, and a probability $1-p$ for the infinitely many values that were not observed. By a severe and mathematically unjustifiable abuse of infinities, this means all the points not observed have a probability of $(1-p)/\infty$, i.e. 0. So we ignore any data points that did not show up in our data, and instead assume that the probability is spread out equally over the observed data. (This can be made sensible by talking about a limit of Dirichlet processes, which I'll avoid getting into here.)
The maximum entropy distribution to assign to the data is then a uniform distribution (more precisely, a Dirichlet distribution with $\alpha = 1$), so this is the most justifiable way to spread out the probability among the observations.
Having done this, we can simulate our posterior by drawing random frequencies for each observed value from this uniform distribution. One way to think about this is that the Bayesian bootstrap assumes the observed empirical distribution is equal to the actual likelihood function, and updates accordingly. This gives us a posterior that looks very similar to the frequentist percentile bootstrap. Some Bayesians have argued that Haldane's distribution is less informative than the uniform distribution; if you use Haldane's distribution, you get the percentile bootstrap exactly. In practice the two will hardly differ for any reasonable sample size.
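A minimal R sketch of that simulation for the mean (hypothetical data; flat Dirichlet weights generated as normalized standard exponentials):

set.seed(1)
y <- rexp(50)                                   # stand-in data
post <- replicate(4000, {
  w <- rexp(length(y)); w <- w / sum(w)         # one draw of Dirichlet(1,...,1) weights
  sum(w * y)                                    # posterior draw of the mean
})
quantile(post, c(0.025, 0.975))                 # compare with the percentile bootstrap limits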
So if you'd like to interpret your bootstrap distribution as an approximate posterior, the percentile bootstrap does a better job than the basic/empirical bootstrap. | Is it true that the percentile bootstrap should never be used? | As noted in cdalitz's answer, the percentile bootstrap gives better confidence intervals than the empirical/basic bootstrap quite often. I'd now like to offer a justification as to why this is the cas | Is it true that the percentile bootstrap should never be used?
As noted in cdalitz's answer, the percentile bootstrap gives better confidence intervals than the empirical/basic bootstrap quite often. I'd now like to offer a justification as to why this is the case.
I'm unaware of a frequentist justification for the percentile bootstrap. However, the percentile bootstrap (or a close cousin) can easily be derived from a Bayesian point of view. The Bayesian bootstrap assumes that:
Before observing any data, any possible data point is equally probable.
There is no "smoothing" of the data -- any new information or data that increases the probability of $x$ has no bearing on the probability that the next data point will equal $x + .0000...1$.
Let's say that we have a probability of $p$ assigned to the observed values, and a probability $1-p$ for the infinitely many values that were not observed. By a severe and mathematically unjustifiable abuse of infinities, this means all the points not observed have a probability of $(1-p)/\infty$, i.e. 0. So we ignore any data points that did not show up in our data, and instead assume that the probability is spread out equally over the observed data. (This can be made sensible by talking about a limit of Dirichlet processes, which I'll avoid getting into here.)
The maximum entropy distribution to assign to the data is then a uniform distribution (more precisely, a Dirichlet distribution with $\alpha = 1$), so this is the most justifiable way to spread out the probability among the observations.
Having done this, we can simulate our posterior by drawing random frequencies for each observed value from this uniform distribution. One way to think about this is that the Bayesian bootstrap assumes the observed empirical distribution is equal to the actual likelihood function, and updates accordingly. This gives us a posterior that looks very similar to the frequentist percentile bootstrap. Some Bayesians have argued that Haldane's distribution is less informative than the uniform distribution; if you use Haldane's distribution, you get the percentile bootstrap exactly. In practice the two will hardly differ for any reasonable sample size.
So if you'd like to interpret your bootstrap distribution as an approximate posterior, the percentile bootstrap does a better job than the basic/empirical bootstrap. | Is it true that the percentile bootstrap should never be used?
As noted in cdalitz's answer, the percentile bootstrap gives better confidence intervals than the empirical/basic bootstrap quite often. I'd now like to offer a justification as to why this is the cas |
3,443 | Data normalization and standardization in neural networks | A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot.
If you are interested in the tricks of the trade, I can recommend LeCun's efficient backprop paper. | Data normalization and standardization in neural networks | A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot.
If you are interested in the tricks of the trade, I can recommend Le | Data normalization and standardization in neural networks
A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot.
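In R, for a hypothetical numeric input matrix X, that is:

X_scaled <- scale(X)                            # column-wise mean 0, sd 1
pc <- prcomp(X_scaled)                          # PCA for decorrelation/whitening
X_white <- pc$x %*% diag(1 / pc$sdev)           # whitened inputs (unit-variance components)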
If you are interested in the tricks of the trade, I can recommend LeCun's efficient backprop paper. | Data normalization and standardization in neural networks
A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot.
If you are interested in the tricks of the trade, I can recommend Le |
3,444 | Data normalization and standardization in neural networks | 1- Min-max normalization retains the original distribution of scores except for a scaling factor and transforms all the scores into a common range [0, 1]. However, this method is not robust (i.e., the method is highly sensitive to outliers.
2- Standardization (Z-score normalization) The most commonly used technique, which is calculated using the arithmetic mean and standard deviation of the given data. However, both mean and standard deviation are sensitive to outliers, and this technique does not guarantee a common numerical range for the normalized scores. Moreover, if the input scores are not Gaussian distributed, this technique does not retain the input distribution at the output.
3- Median and MAD: The median and median absolute deviation (MAD) are insensitive to outliers and the points in the extreme tails of the distribution; therefore it is robust. However, this technique does not retain the input distribution and does not transform the scores into a common numerical range.
4- tanh-estimators: The tanh-estimators introduced by Hampel et al. are robust and highly efficient. The normalization is given by
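In the biometric score-normalization literature (e.g. Jain, Nandakumar and Ross) the tanh-estimator is usually written as
$$x' = \frac{1}{2}\left\{\tanh\left(0.01\,\frac{x-\mu_{GH}}{\sigma_{GH}}\right)+1\right\},$$
which is assumed here to be the intended expression,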
where $\mu_{GH}$ and $\sigma_{GH}$ are the mean and standard deviation estimates, respectively, of the genuine score distribution as given by Hampel estimators.
Therefore I recommend tanh-estimators.
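A minimal R sketch of the four options, where the Hampel estimates $\mu_{GH}$ and $\sigma_{GH}$ are approximated by the plain median and MAD just to keep the example short:

min_max  <- function(x) (x - min(x)) / (max(x) - min(x))
z_score  <- function(x) (x - mean(x)) / sd(x)
med_mad  <- function(x) (x - median(x)) / mad(x)
tanh_est <- function(x) 0.5 * (tanh(0.01 * (x - median(x)) / mad(x)) + 1)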
reference
https://www.cs.ccu.edu.tw/~wylin/BA/Fusion_of_Biometrics_II.ppt | Data normalization and standardization in neural networks | 1- Min-max normalization retains the original distribution of scores except for a scaling factor and transforms all the scores into a common range [0, 1]. However, this method is not robust (i.e., the | Data normalization and standardization in neural networks
1- Min-max normalization retains the original distribution of scores except for a scaling factor and transforms all the scores into a common range [0, 1]. However, this method is not robust (i.e., the method is highly sensitive to outliers.
2- Standardization (Z-score normalization) The most commonly used technique, which is calculated using the arithmetic mean and standard deviation of the given data. However, both mean and standard deviation are sensitive to outliers, and this technique does not guarantee a common numerical range for the normalized scores. Moreover, if the input scores are not Gaussian distributed, this technique does not retain the input distribution at the output.
3- Median and MAD: The median and median absolute deviation (MAD) are insensitive to outliers and the points in the extreme tails of the distribution; therefore it is robust. However, this technique does not retain the input distribution and does not transform the scores into a common numerical range.
4- tanh-estimators: The tanh-estimators introduced by Hampel et al. are robust and highly efficient. The normalization is given by
where $\mu_{GH}$ and $\sigma_{GH}$ are the mean and standard deviation estimates, respectively, of the genuine score distribution as given by Hampel estimators.
Therefore I recommend tanh-estimators.
reference
https://www.cs.ccu.edu.tw/~wylin/BA/Fusion_of_Biometrics_II.ppt | Data normalization and standardization in neural networks
1- Min-max normalization retains the original distribution of scores except for a scaling factor and transforms all the scores into a common range [0, 1]. However, this method is not robust (i.e., the |
3,445 | Data normalization and standardization in neural networks | You could do
min-max normalization (Normalize inputs/targets to fall in the range [−1,1]), or
mean-standard deviation normalization (Normalize inputs/targets to have zero mean and unity variance/standard deviation) | Data normalization and standardization in neural networks | You could do
min-max normalization (Normalize inputs/targets to fall in the range [−1,1]), or
mean-standard deviation normalization (Normalize inputs/targets to have zero mean and unity variance/st | Data normalization and standardization in neural networks
You could do
min-max normalization (Normalize inputs/targets to fall in the range [−1,1]), or
mean-standard deviation normalization (Normalize inputs/targets to have zero mean and unity variance/standard deviation) | Data normalization and standardization in neural networks
You could do
min-max normalization (Normalize inputs/targets to fall in the range [−1,1]), or
mean-standard deviation normalization (Normalize inputs/targets to have zero mean and unity variance/st |
3,446 | Data normalization and standardization in neural networks | Rank guass scaler is a scikit-learn style transformer that scales numeric variables to normal distributions. Its based on rank transformation. First step is to assign a linspace to the sorted features from 0..1, then apply the inverse of error function ErfInv to shape them like gaussians, then I substract the mean.
Binary features (e.g. one-hot ones) are not touched by this transformation.
This works usually much better than standard mean/std scaler or min/max.
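A rough R sketch of the same transformation (ranks mapped to a grid in (0,1), then through the inverse normal CDF, which equals the inverse error function up to scaling, then centred); this is just the transformation itself, not the scikit-learn style transformer:

rank_gauss <- function(x) {
  p <- (rank(x, ties.method = "average") - 0.5) / length(x)
  g <- qnorm(p)                                 # qnorm(p) = sqrt(2) * erfinv(2p - 1)
  g - mean(g)
}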
Do checkout this link | Data normalization and standardization in neural networks | Rank guass scaler is a scikit-learn style transformer that scales numeric variables to normal distributions. Its based on rank transformation. First step is to assign a linspace to the sorted features | Data normalization and standardization in neural networks
RankGauss scaler is a scikit-learn style transformer that scales numeric variables to normal distributions. It's based on rank transformation. The first step is to assign a linspace to the sorted features from 0..1, then apply the inverse error function ErfInv to shape them like Gaussians, then subtract the mean.
Binary features (e.g. one-hot ones) are not touched by this transformation.
This works usually much better than standard mean/std scaler or min/max.
Do checkout this link | Data normalization and standardization in neural networks
Rank guass scaler is a scikit-learn style transformer that scales numeric variables to normal distributions. Its based on rank transformation. First step is to assign a linspace to the sorted features |
3,447 | Data normalization and standardization in neural networks | If you are working in python, sklearn has a method for doing this using different techniques in their preprocessing module (plus a nifty pipeline feature, with an example in their docs):
import sklearn.preprocessing
# Normalize X, shape (n_samples, n_features)
X_norm = sklearn.preprocessing.normalize(X) | Data normalization and standardization in neural networks | If you are working in python, sklearn has a method for doing this using different techniques in their preprocessing module (plus a nifty pipeline feature, with an example in their docs):
import sklear | Data normalization and standardization in neural networks
If you are working in python, sklearn has a method for doing this using different techniques in their preprocessing module (plus a nifty pipeline feature, with an example in their docs):
import sklearn.preprocessing
# Normalize X, shape (n_samples, n_features)
X_norm = sklearn.preprocessing.normalize(X) | Data normalization and standardization in neural networks
If you are working in python, sklearn has a method for doing this using different techniques in their preprocessing module (plus a nifty pipeline feature, with an example in their docs):
import sklear |
3,448 | Data normalization and standardization in neural networks | Well, [0,1] is the standard approach.
For neural networks, inputs scaled to the range 0-1 generally work best.
Min-Max scaling (or Normalization) is the approach to follow.
Now, on the outliers: in most scenarios we have to clip those. Since outliers are not common, you don't want them to affect your model (unless anomaly detection is the problem that you are solving).
You can clip it based on the Empirical rule of 68-95-99.7 or make a box plot, observe and accordingly clip it.
MinMax formula - (xi - min(x)) / (max(x) - min(x))
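A small R sketch combining the clipping and the min-max formula above (3-standard-deviation clipping as one possible choice):

clip_min_max <- function(x, k = 3) {
  lo <- mean(x) - k * sd(x)
  hi <- mean(x) + k * sd(x)
  x  <- pmin(pmax(x, lo), hi)                   # clip outliers
  (x - min(x)) / (max(x) - min(x))              # scale to [0, 1]
}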
or can use sklearn.preprocessing.MinMaxScaler | Data normalization and standardization in neural networks | Well, [0,1] is the standard approach.
For Neural Networks, works best in the range 0-1.
Min-Max scaling (or Normalization) is the approach to follow.
Now on the outliers, in most scenarios we have to | Data normalization and standardization in neural networks
Well, [0,1] is the standard approach.
For neural networks, inputs scaled to the range 0-1 generally work best.
Min-Max scaling (or Normalization) is the approach to follow.
Now on the outliers, in most scenarios we have to clip those, as outliers are not common, you don't want outliers to affect your model (unless Anomaly detection is the problem that you are solving).
You can clip it based on the Empirical rule of 68-95-99.7 or make a box plot, observe and accordingly clip it.
MinMax formula - (xi - min(x)) / (max(x) - min(x))
or can use sklearn.preprocessing.MinMaxScaler | Data normalization and standardization in neural networks
Well, [0,1] is the standard approach.
For Neural Networks, works best in the range 0-1.
Min-Max scaling (or Normalization) is the approach to follow.
Now on the outliers, in most scenarios we have to |
3,449 | Data normalization and standardization in neural networks | "Accepted" is whatever works best for you -- then you accept it.
In my experience fitting a distribution from the Johnson family of distributions to each of the continuous features works well because the distributions are highly flexible and can transform most uni-modal features into standard normal distributions. It will help with multi-modal features as well, but point is it generally puts the features into the most desirable form possible (standard Gaussian-distributed data is ideal to work with -- it is compatible with, and sometimes optimal for, most every statistical/ML method available).
http://qualityamerica.com/LSS-Knowledge-Center/statisticalinference/johnson_distributions.php | Data normalization and standardization in neural networks | "Accepted" is whatever works best for you -- then you accept it.
In my experience fitting a distribution from the Johnson family of distributions to each of the continuous features works well becaus | Data normalization and standardization in neural networks
"Accepted" is whatever works best for you -- then you accept it.
In my experience fitting a distribution from the Johnson family of distributions to each of the continuous features works well because the distributions are highly flexible and can transform most uni-modal features into standard normal distributions. It will help with multi-modal features as well, but point is it generally puts the features into the most desirable form possible (standard Gaussian-distributed data is ideal to work with -- it is compatible with, and sometimes optimal for, most every statistical/ML method available).
http://qualityamerica.com/LSS-Knowledge-Center/statisticalinference/johnson_distributions.php | Data normalization and standardization in neural networks
"Accepted" is whatever works best for you -- then you accept it.
In my experience fitting a distribution from the Johnson family of distributions to each of the continuous features works well becaus |
3,450 | What is a contrast matrix? | In their nice answer, @Gus_est, undertook a mathematical explanation of the essence of the contrast coefficient matrix L (notated there a C). $\bf Lb=k$ is the fundamental formula for testing hypotheses in univariate general linear modeling (where $\bf b$ are parameters and $\bf k$ are estimable function representing a null hypothesis), and that answer shows some necessary formulas used in modern ANOVA programs.
My answer is styled very differently. It is for a data analyst who sees himself as an "engineer" rather than a "mathematician", so the answer will be a (superficial) "practical" or "didactic" account, and it will focus on just two topics: (1) what the contrast coefficients mean, and (2) how they can help to perform ANOVA via a linear regression program.
ANOVA as regression with dummy variables: introducing contrasts.
Let us imagine ANOVA with dependent variable Y and categorical factor A having 3 levels (groups). Let us glance at the ANOVA from the linear regression point of view, that is - via turning the factor into the set of dummy (aka indicator aka treatment aka one-hot) binary variables. This is our independent set X. (Probably everybody has heard that it is possible to do ANOVA this way - as linear regression with dummy predictors.)
Since one of the three groups is redundant, only two dummy variables will enter the linear model. Let's appoint Group3 to be redundant, or reference. The dummy predictors constituting X are an example of contrast variables, i.e. elementary variables representing categories of a factor. X itself is often called design matrix. We can now input the dataset in a multiple linear regression program which will center the data and find the regression coefficients (parameters) $\bf b= (X'X)^{-1}X'y=X^+y$, where "+" designates pseudoinverse.
An equivalent path is not to do the centering but rather to add the constant term of the model as the first column of 1s in X, and then estimate the coefficients the same way as above, $\bf b= (X'X)^{-1}X'y=X^+y$. So far so good.
Let us define matrix C to be the aggregation (summarization) of the independent variables design matrix X. It simply shows us the coding scheme observed there, - the contrast coding matrix (= basis matrix): $\bf C= {\it{aggr}} X$.
C
Const A1 A2
Gr1 (A=1) 1 1 0
Gr2 (A=2) 1 0 1
Gr3 (A=3,ref) 1 0 0
The columns are the variables (columns) of X - the elementary contrast variables A1 A2, dummy in this instance, and the rows are all the groups/levels of the factor. Such was our coding matrix C for the indicator or dummy contrast coding scheme.
Now, $\bf C^+=L$ is called the contrast coefficient matrix, or L-matrix. Since C is square, $\bf L=C^+=C^{-1}$. The contrast matrix, corresponding to our C - that is for indicator contrasts of our example - is therefore:
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 0 0 1 => Const = Mean_Gr3
A1 1 0 -1 => Param1 = Mean_Gr1-Mean_Gr3
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3
The L-matrix is the matrix showing the contrast coefficients. Note that the sum of the contrast coefficients in every row (except the row Constant) is $0$. Every such row is called a contrast. Rows correspond to the contrast variables and columns correspond to the groups, the factor levels.
The significance of the contrast coefficients is that they help us understand what each effect (each parameter b estimated in the regression with our X, coded as it is) represents in terms of group comparisons. We immediately see, following the coefficients, that the estimated Constant will equal the Y mean in the reference group; that parameter b1 (i.e. of dummy variable A1) will equal the difference: Y mean in group1 minus Y mean in group3; and that parameter b2 is the difference: mean in group2 minus mean in group3.
Note: Saying "mean" right above (and further below) we mean estimated (predicted by the model) mean for a group, not the observed mean in a group.
An instructive remark: When we do a regression with binary predictor variables, the parameter of such a variable tells us about the difference in Y between the variable=1 and variable=0 groups. However, in the situation when the binary variables are the set of k-1 dummy variables representing a k-level factor, the meaning of the parameter gets narrower: it shows the difference in Y between the variable=1 group and (not just the variable=0 group but specifically) the reference_variable=1 group.
Just as $\bf X^+$ (after being multiplied by $\bf y$) brings us the values of b, so $\bf(\it{aggr} \bf X)^+$ brings us the meanings of b.
OK, we've given the definition of the contrast coefficient matrix L. Since $\bf L=C^+=C^{-1}$, and symmetrically $\bf C=L^+=L^{-1}$, this means that if you were given or have constructed a contrast matrix L based on categorical factor(s) - in order to test that L in your analysis - then you have a clue for how to correctly code your contrast predictor variables X so as to test the L via ordinary regression software (i.e. software processing just "continuous" variables the standard OLS way, and not recognizing categorical factors at all). In our present example the coding was of the indicator (dummy) type.
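Here is a minimal R sketch of the above (my illustration, not part of the original derivation): build C for the indicator scheme, invert it to obtain L, and verify on toy data that the regression parameters really are Mean_Gr3, Mean_Gr1-Mean_Gr3 and Mean_Gr2-Mean_Gr3.
set.seed(1)
A <- factor(rep(1:3, each = 10))          # 3-level factor
y <- rnorm(30, mean = c(5, 7, 4)[A])      # toy outcome
C <- cbind(Const = 1, A1 = c(1, 0, 0), A2 = c(0, 1, 0))   # indicator coding matrix C
L <- solve(C)                             # contrast coefficient matrix, as shown above
L
X <- C[as.integer(A), -1]                 # dummy predictors for the regression
b <- coef(lm(y ~ X))
m <- tapply(y, A, mean)
rbind(estimate = b, check = c(m[3], m[1] - m[3], m[2] - m[3]))   # the two rows agree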
ANOVA as regression: other contrast types.
Let us briefly observe other contrast types (= coding schemes, = parameterization styles) for a categorical factor A.
Deviation or effect contrasts. C and L matrices and parameter meaning:
C
Const A1 A2
Gr1 (A=1) 1 1 0
Gr2 (A=2) 1 0 1
Gr3 (A=3,ref) 1 -1 -1
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = 1/3Mean_Gr1+1/3Mean_Gr2+1/3Mean_Gr3 = Mean_GU
A1 2/3 -1/3 -1/3 => Param1 = 2/3Mean_Gr1-1/3(Mean_Gr2+Mean_Gr3) = Mean_Gr1-Mean_GU
A2 -1/3 2/3 -1/3 => Param2 = 2/3Mean_Gr2-1/3(Mean_Gr1+Mean_Gr3) = Mean_Gr2-Mean_GU
Parameter for the reference group3 = -(Param1+Param2) = Mean_Gr3-Mean_GU
Mean_GU is grand unweighted mean = 1/3(Mean_Gr1+Mean_Gr2+Mean_Gr3)
By deviation coding, each group of the factor is being compared with the unweighted grand mean, while Constant is that grand mean. This is what you get in regression with contrast predictors X coded in deviation or effect "manner".
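In R, this deviation/effect scheme is what the built-in contr.sum produces; a short sketch (inverting the coding matrix recovers exactly the L rows shown above):
C_dev <- cbind(Const = 1, contr.sum(3))   # rows = groups; columns = Const, A1, A2
C_dev                                     # same codes as the deviation C above
solve(C_dev)                              # rows: 1/3 1/3 1/3; 2/3 -1/3 -1/3; -1/3 2/3 -1/3
# In a one-way regression, lm(y ~ A, contrasts = list(A = contr.sum)) therefore gives
# b1 = Mean_Gr1 - Mean_GU and b2 = Mean_Gr2 - Mean_GU.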
Simple contrasts. This contrast/coding scheme is a hybrid of the indicator and deviation types: it gives the meaning of Constant as in the deviation type and the meaning of the other parameters as in the indicator type:
C
Const A1 A2
Gr1 (A=1) 1 2/3 -1/3
Gr2 (A=2) 1 -1/3 2/3
Gr3 (A=3,ref) 1 -1/3 -1/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = as in Deviation
A1 1 0 -1 => Param1 = as in Indicator
A2 0 1 -1 => Param2 = as in Indicator
Helmert contrasts. Compares each group (except the reference) with the unweighted mean of the subsequent groups, while Constant is the unweighted grand mean. C and L matrices:
C
Const A1 A2
Gr1 (A=1) 1 2/3 0
Gr2 (A=2) 1 -1/3 1/2
Gr3 (A=3,ref) 1 -1/3 -1/2
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 1 -1/2 -1/2 => Param1 = Mean_Gr1-1/2(Mean_Gr2+Mean_Gr3)
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3
Difference or reverse Helmert contrasts. Compares each group (except reference) with the unweighted mean of the previous groups, and Constant is the unweighted grand mean.
C
Const A1 A2
Gr1 (A=1) 1 -1/2 -1/3
Gr2 (A=2) 1 1/2 -1/3
Gr3 (A=3,ref) 1 0 2/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 -1 1 0 => Param1 = Mean_Gr2-Mean_Gr1
A2 -1/2 -1/2 1 => Param2 = Mean_Gr3-1/2(Mean_Gr2+Mean_Gr1)
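A caution for R users (my addition): base R's contr.helmert implements this reverse-Helmert scheme, not the forward Helmert one above, and its columns are unscaled, so the fitted parameters are scaled versions of the comparisons shown. A quick sketch:
contr.helmert(3)                     # columns: (-1, 1, 0) and (-1, -1, 2)
C_rh <- cbind(Const = 1, contr.helmert(3))
solve(C_rh)                          # rows: 1/3 1/3 1/3; -1/2 1/2 0; -1/6 -1/6 1/3
# i.e. Param1 = (Mean_Gr2 - Mean_Gr1)/2 and Param2 = (Mean_Gr3 - 1/2(Mean_Gr1+Mean_Gr2))/3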
Repeated contrasts. Compares each group (except reference) with the next group, and Constant is the unweighted grand mean.
C
Const A1 A2
Gr1 (A=1) 1 2/3 1/3
Gr2 (A=2) 1 -1/3 1/3
Gr3 (A=3,ref) 1 -1/3 -2/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 1 -1 0 => Param1 = Mean_Gr1-Mean_Gr2
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3
The Question asks: how exactly is the contrast matrix specified? Looking at the types of contrasts outlined so far, it is possible to grasp how. Each type has its own logic for how to "fill in" the values in L. The logic reflects what each parameter means - which two combinations of groups it is planned to compare.
Polynomial contrasts. These are a bit special, being nonlinear. The first effect is linear, the second is quadratic, the next is cubic. I am leaving unaddressed here the question of how their C and L matrices are to be constructed and whether they are the inverse of each other. Please consult @Antoni Parellada's thorough explanations of this type of contrast: 1, 2.
In balanced designs, Helmert, reverse Helmert, and polynomial contrasts are always orthogonal contrasts. The other types considered above are not orthogonal contrasts. A set of contrasts is orthogonal (under balancedness) when, in the contrast matrix L, the sum in each row (except Const) is zero and the sum of the products of the corresponding elements of each pair of rows is zero.
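A quick R check of this orthogonality claim (sketch; contrast rows only, Const row dropped), comparing the Helmert and indicator L matrices shown above:
L_helmert   <- rbind(c(1, -1/2, -1/2), c(0, 1, -1))
L_indicator <- rbind(c(1, 0, -1),      c(0, 1, -1))
L_helmert   %*% t(L_helmert)     # off-diagonal zero: orthogonal contrasts
L_indicator %*% t(L_indicator)   # off-diagonal non-zero: not orthogonal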
Here are the angle similarity measures (cosine and Pearson correlation) under different contrast types, except polynomial, which I didn't test. Let us have a single factor A with k levels, which was then recoded into the set of k-1 contrast variables of a specific type. What are the values in the correlation or cosine matrix between these contrast variables?
Balanced (equal size) groups Unbalanced groups
Contrast type cos corr cos corr
INDICATOR 0 -1/(k-1) 0 varied
DEVIATION .5 .5 varied varied
SIMPLE -1/(k-1) -1/(k-1) varied varied
HELMERT, REVHELMERT 0 0 varied varied
REPEATED varied = varied varied varied
"=" means the two matrices are same while elements in matrix vary
I'm giving the table for information and leaving it uncommented. It is of some importance for a deeper glance into general linear modeling.
User-defined contrasts. This is what we compose to test a custom comparison hypothesis. Normally sum in every but the first row of L should be 0 which means that two groups or two compositions of groups are being compared in that row (i.e. by that parameter).
Where are the model parameters after all?
Are they the rows or the columns of L? Throughout the text above I was saying that parameters correspond to the rows of L, as the rows represent contrast variables, the predictors, while the columns are the levels of a factor, the groups. That may appear to contradict, for example, this theoretical block from @Gus_est's answer, where clearly the columns correspond to the parameters:
$H_0:
\begin{bmatrix}
0 & 1 & -1 & \phantom{-}0 & \phantom{-}0 \\
0 & 0 & \phantom{-}1 & -1 & \phantom{-}0 \\
0 & 0 & \phantom{-}0 & \phantom{-}1 & -1
\end{bmatrix}
\begin{bmatrix}
\beta_0 \\
\beta_1 \\
\beta_2 \\
\beta_3 \\
\beta_4
\end{bmatrix} =
\begin{bmatrix}
0 \\
0 \\
0
\end{bmatrix}$
Actually, there is no contradiction, and the answer to the "problem" is: both the rows and the columns of the contrast coefficient matrix correspond to the parameters! Just recall that the contrasts (contrast variables), the rows, were initially created to represent nothing else than the factor levels: they are the levels except the omitted reference one. Please compare these two equivalent spellings of the L-matrix for the simple contrast:
L
Gr1 Gr2 Gr3
A=1 A=2 A=3(reference)
Const 1/3 1/3 1/3
A1 1 0 -1
A2 0 1 -1
L
b0 b1 b2 b3(redundant)
Const A=1 A=2 A=3(reference)
b0 Const 1 1/3 1/3 1/3
b1 A1 0 1 0 -1
b2 A2 0 0 1 -1
The first one is what I've shown before; the second is a more "theoretical" (for general linear model algebra) layout. Simply, a column corresponding to the Constant term was added. Parameter coefficients b label the rows and the columns. Parameter b3, as redundant, will be set to zero. You may pseudoinverse the second layout to get the coding matrix C, where in its bottom-right part you will still find the correct codes for the contrast variables A1 and A2. That will be so for any contrast type described (except for the indicator type - where the pseudoinverse of such a rectangular layout won't give the correct result; this is probably why the simple contrast type was invented for convenience: contrast coefficients identical to the indicator type, except for the row Constant).
Contrast type and ANOVA table results.
ANOVA table shows effects as combined (aggregated) - for example main effect of factor A, whereas contrasts correspond to elementary effects, of contrast variables - A1, A2, and (omitted, reference) A3. The parameter estimates for the elementary terms depend on the type of the contrast selected, but the combined result - its mean square and significance level - is the same, whatever the type is. Omnibus ANOVA (say, one-way) null hypothesis that all the three means of A are equal may be put out in a number of equivalent statements, and each will correspond to a specific contrast type: $(\mu_1=\mu_2, \mu_2=\mu_3)$ = repeated type; $(\mu_1=\mu_{23}, \mu_2=\mu_3)$ = Helmert type; $(\mu_1=\mu_{123}, \mu_2=\mu_{123})$ = Deviation type; $(\mu_1=\mu_3, \mu_2=\mu_3)$ = indicator or simple types.
ANOVA programs implemented via the general linear model paradigm can display both the ANOVA table (combined effects: main, interactions) and the parameter estimates table (elementary effects b). Some programs may output the latter table corresponding to the contrast type requested by the user, but most will always output the parameters corresponding to one type - often the indicator type - because ANOVA programs based on the general linear model parameterize specifically via dummy variables (the most convenient thing to do) and then switch over to contrasts by special "linking" formulae interpreting the fixed dummy input as an (arbitrary) contrast.
Whereas in my answer - showing ANOVA as regression - the "link" is realized as early as at the level of the input X, which is why I introduced the notion of the appropriate coding scheme for the data.
A few examples showing testing of ANOVA contrasts via usual regression.
Showing in SPSS how to request a contrast type in ANOVA and how to get the same result via linear regression. We have some dataset with Y and factors A (3 levels, reference=last) and B (4 levels, reference=last); the data are listed further below.
Deviation contrasts example under full factorial model (A, B, A*B). Deviation type requested for both A and B (we might choose to demand different type for each factor, for your information).
Contrast coefficient matrix L for A and for B:
A=1 A=2 A=3
Const .3333 .3333 .3333
dev_a1 .6667 -.3333 -.3333
dev_a2 -.3333 .6667 -.3333
B=1 B=2 B=3 B=4
Const .2500 .2500 .2500 .2500
dev_b1 .7500 -.2500 -.2500 -.2500
dev_b2 -.2500 .7500 -.2500 -.2500
dev_b3 -.2500 -.2500 .7500 -.2500
Request ANOVA program (GLM in SPSS) to do analysis of variance and to output explicit results for deviation contrasts:
The deviation contrast type compared A=1 vs the grand unweighted mean and A=2 with that same mean. Red ellipses ink the difference estimates and their p-values. The combined effect over the factor A is inked by the red rectangle. For factor B, everything is analogously inked in blue. Displaying also the ANOVA table. Note there that the combined contrast effects equal the main effects in it.
Let us now physically create the contrast variables dev_a1, dev_a2, dev_b1, dev_b2, dev_b3 and run a regression. Invert the L-matrices to obtain the coding C matrices:
dev_a1 dev_a2
A=1 1.0000 .0000
A=2 .0000 1.0000
A=3 -1.0000 -1.0000
dev_b1 dev_b2 dev_b3
B=1 1.0000 .0000 .0000
B=2 .0000 1.0000 .0000
B=3 .0000 .0000 1.0000
B=4 -1.0000 -1.0000 -1.0000
The column of ones (Constant) is omitted: because we'll use a regular regression program (which internally centers variables, and is also intolerant of singularity), the variable Constant won't be needed. Now create the data X: actually, no manual recoding of the factors into these values is needed; the one-stroke solution is $\bf X=DC$, where $\bf D$ holds the indicator (dummy) variables, all k columns (k being the number of levels in a factor).
Having created the contrast variables, multiply among those from different factors to get variables to represent interactions (our ANOVA model was full factorial): dev_a1b1, dev_a1b2, dev_a1b3, dev_a2b1, dev_a2b2, dev_a2b3. Then run multiple linear regression with all the predictors.
As expected, dev_a1 is the same effect as was the contrast "Level 1 vs Mean"; dev_a2 is the same as was "Level 2 vs Mean", etc. - compare the inked parts with the ANOVA contrast analysis above.
Note that if we were not using the interaction variables dev_a1b1, dev_a1b2... in the regression, the results would coincide with the results of a main-effects-only ANOVA contrast analysis.
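For completeness, a hedged R sketch of the pipeline just described (D, C, X = DC, interaction products, regression); it assumes the two-factor data listed at the end of these examples sit in a data frame called dat with columns Y, A and B:
# Assumes the two-factor data listed further below are in a data frame dat (columns Y, A, B)
dat$A <- factor(dat$A); dat$B <- factor(dat$B)
D_A <- model.matrix(~ A + 0, dat)            # indicator (dummy) variables, all k columns
D_B <- model.matrix(~ B + 0, dat)
C_A <- contr.sum(3)                          # deviation coding matrices (no Const column),
C_B <- contr.sum(4)                          # same codes as printed just above
X_A <- D_A %*% C_A                           # the one-stroke X = DC
X_B <- D_B %*% C_B
colnames(X_A) <- c("dev_a1", "dev_a2")
colnames(X_B) <- c("dev_b1", "dev_b2", "dev_b3")
# Interaction contrast variables: all pairwise products of the main-effect columns
X_AB <- do.call(cbind, lapply(seq_len(ncol(X_A)), function(i) X_A[, i] * X_B))
fit <- lm(dat$Y ~ X_A + X_B + X_AB)
summary(fit)                                 # dev_a1 = "Level 1 vs Mean", etc.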
Simple contrasts example under the same full factorial model (A, B, A*B).
Contrast coefficient matrix L for A and for B:
A=1 A=2 A=3
Const .3333 .3333 .3333
sim_a1 1.0000 .0000 -1.0000
sim_a2 .0000 1.0000 -1.0000
B=1 B=2 B=3 B=4
Const .2500 .2500 .2500 .2500
sim_b1 1.0000 .0000 .0000 -1.0000
sim_b2 .0000 1.0000 .0000 -1.0000
sim_b3 .0000 .0000 1.0000 -1.0000
ANOVA results for simple contrasts:
The overall results (ANOVA table) are the same as with deviation contrasts (not displayed now).
Physically create the contrast variables sim_a1, sim_a2, sim_b1, sim_b2, sim_b3. The coding matrices obtained by inverting the L-matrices are (w/o the Const column):
sim_a1 sim_a2
A=1 .6667 -.3333
A=2 -.3333 .6667
A=3 -.3333 -.3333
sim_b1 sim_b2 sim_b3
B=1 .7500 -.2500 -.2500
B=2 -.2500 .7500 -.2500
B=3 -.2500 -.2500 .7500
B=4 -.2500 -.2500 -.2500
Create the data $\bf X=DC$ and add there the interaction contrast variables sim_a1b1, sim_a1b2, ... etc, as the products of the main effects contrast variables. Perform the regression.
As before, we see that the results of regression and ANOVA match. A regression parameter of a simple contrast variable is the difference (and significance test of it) between that level of the factor and the reference (the last, in our example) level of it.
The two-factor data used in the examples:
Y A B
.2260 1 1
.6836 1 1
-1.772 1 1
-.5085 1 1
1.1836 1 2
.5633 1 2
.8709 1 2
.2858 1 2
.4057 1 2
-1.156 1 3
1.5199 1 3
-.1388 1 3
.4865 1 3
-.7653 1 3
.3418 1 4
-1.273 1 4
1.4042 1 4
-.1622 2 1
.3347 2 1
-.4576 2 1
.7585 2 1
.4084 2 2
1.4165 2 2
-.5138 2 2
.9725 2 2
.2373 2 2
-1.562 2 2
1.3985 2 3
.0397 2 3
-.4689 2 3
-1.499 2 3
-.7654 2 3
.1442 2 3
-1.404 2 3
-.2201 2 4
-1.166 2 4
.7282 2 4
.9524 2 4
-1.462 2 4
-.3478 3 1
.5679 3 1
.5608 3 2
1.0338 3 2
-1.161 3 2
-.1037 3 3
2.0470 3 3
2.3613 3 3
.1222 3 4
User defined contrast example. Let us have single factor F with 5 levels. I will create and test a set of custom orthogonal contrasts, in ANOVA and in regression.
The picture shows the process (one of several possible) of combining/splitting among the 5 groups to obtain 4 orthogonal contrasts, and the L matrix of contrast coefficients resulting from that process is on the right. All the contrasts are orthogonal to each other: $\bf LL'$ is diagonal. (This example schema was copied years ago from D. Howell's book on statistics for psychologists.)
Let us submit the matrix to SPSS' ANOVA procedure to test the contrasts. Well, we might submit even a single row (contrast) from the matrix, but we'll submit the whole matrix because - as in the previous examples - we'll want to receive the same results via regression, and the regression program will need the complete set of contrast variables (to be aware that they belong together to one factor!). We'll add the constant row to L, just as we did before, although if we don't need to test for the intercept we may safely omit it.
UNIANOVA Y BY F
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/CONTRAST (F)= special
(.2 .2 .2 .2 .2
3 3 -2 -2 -2
1 -1 0 0 0
0 0 2 -1 -1
0 0 0 1 -1)
/DESIGN=F.
Equivalently, we might also use this syntax (with a more flexible /LMATRIX subcommand)
if we omit the Constant row from the matrix.
UNIANOVA Y BY F
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/LMATRIX= "User contrasts"
F 3 3 -2 -2 -2;
F 1 -1 0 0 0;
F 0 0 2 -1 -1;
F 0 0 0 1 -1
/DESIGN=F.
The overall contrasts effect (at the bottom of the pic) is not the same as the expected overall ANOVA effect:
but it is simply an artefact of our inserting the Constant term into the L matrix, for SPSS already implies a Constant when user-defined contrasts are specified. Remove the constant row from L and we'll get the same contrast results (matrix K on the pic above) except that the L0 contrast won't be displayed. And the overall contrast effect will then match the overall ANOVA:
OK, now create the contrast variables physically and submit them to regression. $\bf C=L^+$, $\bf X=DC$.
C
use_f1 use_f2 use_f3 use_f4
F=1 .1000 .5000 .0000 .0000
F=2 .1000 -.5000 .0000 .0000
F=3 -.0667 .0000 .3333 .0000
F=4 -.0667 .0000 -.1667 .5000
F=5 -.0667 .0000 -.1667 -.5000
Observe the identity of results. The data used in this example:
Y F
.2260 1
.6836 1
-1.772 1
-.5085 1
1.1836 1
.5633 1
.8709 1
.2858 1
.4057 1
-1.156 1
1.5199 2
-.1388 2
.4865 2
-.7653 2
.3418 2
-1.273 2
1.4042 2
-.1622 3
.3347 3
-.4576 3
.7585 3
.4084 3
1.4165 3
-.5138 3
.9725 3
.2373 3
-1.562 3
1.3985 3
.0397 4
-.4689 4
-1.499 4
-.7654 4
.1442 4
-1.404 4
-.2201 4
-1.166 4
.7282 4
.9524 5
-1.462 5
-.3478 5
.5679 5
.5608 5
1.0338 5
-1.161 5
-.1037 5
2.0470 5
2.3613 5
.1222 5
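An equivalent R sketch of this SPSS run (my own illustration): drop the Const row from L, obtain C = L+ via MASS::ginv, build X = DC and regress. It assumes y and f_ hold the Y and F columns just listed (the name f_ avoids clashing with R's built-in F).
library(MASS)                          # for ginv()
f_ <- factor(f_)
L_user <- rbind(c(3, 3, -2, -2, -2),
                c(1, -1,  0,  0,  0),
                c(0,  0,  2, -1, -1),
                c(0,  0,  0,  1, -1))
C_user <- ginv(L_user)                 # coding matrix C = L+, a 5 x 4 matrix
round(C_user, 4)                       # same values as the C matrix shown above
D <- model.matrix(~ f_ + 0)            # all 5 indicator (dummy) columns
X <- D %*% C_user                      # X = DC
summary(lm(y ~ X))                     # the four slopes are the four user contrasts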
Contrasts in other than (M)ANOVA analyses.
Wherever nominal predictors appear, the question of contrasts (which contrast type to select for which predictor) arises. Some programs solve it internally, behind the scenes, since the overall, omnibus results won't depend on the type selected. If you want a specific type in order to see more "elementary" results, you have to select it. You also select (or, rather, compose) a contrast when you are testing a custom comparison hypothesis.
(M)ANOVA and Loglinear analysis, Mixed and sometimes Generalized linear modeling include options to treat predictors via different types of contrasts. But as I've tried to show, it is possible to create contrasts as contrast variables explicitly and by hand. Then, if you don't have an ANOVA package at hand, you might do it - in many respects with as good luck - with multiple regression.
3,451 | What is a contrast matrix? | I'll use lower-case letters for vectors and upper-case letters for matrices.
In case of a linear model of the form:
$$
\mathbf{y}=\mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}
$$
where $\mathbf{X}$ is an $n \times (k+1)$ matrix of rank $k+1 \leq n$, and we assume $\boldsymbol{\varepsilon} \sim \mathcal N(\mathbf{0},\sigma^2\mathbf{I})$.
We can estimate $\hat{\boldsymbol{\beta}}$ by $(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top \mathbf{y}$, since the inverse of $\mathbf{X}^\top \mathbf{X}$ exists.
Now, take an ANOVA case in which $\mathbf{X}$ is not full-rank anymore. The implication of this is that we don't have $(\mathbf{X}^\top\mathbf{X})^{-1}$ and we have to settle for the generalized inverse $(\mathbf{X}^\top\mathbf{X})^{-}$.
One of the problems of using this generalized inverse is that it's not unique. Another problem is that we cannot find an unbiased estimator for $\boldsymbol{\beta}$, since
$$\hat{\boldsymbol{\beta}}=(\mathbf{X}^\top\mathbf{X})^{-}\mathbf{X}^\top\mathbf{y} \implies E(\hat{\boldsymbol{\beta}})=(\mathbf{X}^\top\mathbf{X})^{-}\mathbf{X}^\top\mathbf{X}\boldsymbol{\beta}.$$
So, we cannot estimate a unique and unbiased $\boldsymbol{\beta}$. There are various approaches to work around the lack of uniqueness of the parameters in an ANOVA case with a non-full-rank $\mathbf{X}$. One of them is to work with the overparameterized model and define linear combinations of the $\boldsymbol{\beta}$'s that are unique and can be estimated.
We have that a linear combination of the $\boldsymbol{\beta}$'s, say $\mathbf{g}^\top \boldsymbol{\beta}$, is estimable if there exists a vector $\mathbf{a}$ such that $E(\mathbf{a}^\top \mathbf{y})=\mathbf{g}^\top \boldsymbol{\beta}$.
The contrasts are a special case of estimable functions in which the sum of the coefficients of $\mathbf{g}$ is equal to zero.
And, contrasts come up in the context of categorical predictors in a linear model. (if you check the manual linked by @amoeba, you see that all their contrast coding are related to categorical variables).
Then, answering @Curious and @amoeba, we see that they arise in ANOVA, but not in a "pure" regression model with only continuous predictors (we can also talk about contrasts in ANCOVA, since we have some categorical variables in it).
Now, in the model $$\mathbf{y}=\mathbf{X} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$$ where $\mathbf{X}$ is not full-rank, and $E(\mathbf{y})=\mathbf{X} \boldsymbol{\beta}$, the linear function $\mathbf{g}^\top \boldsymbol{\beta}$ is estimable iff there exists a vector $\mathbf{a}$ such that $\mathbf{a}^\top \mathbf{X}=\mathbf{g}^\top$.
That is, $\mathbf{g}^\top$ is a linear combination of the rows of $\mathbf{X}$.
Also, there are many choices of the vector $\mathbf{a}$, such that $\mathbf{a}^\top \mathbf{X}=\mathbf{g}^\top$, as we can see in the example below.
Example 1
Consider the one-way model:
$$y_{ij}=\mu + \alpha_i + \varepsilon_{ij}, \quad i=1,2 \, , j=1,2,3.$$
\begin{align}
\mathbf{X} = \begin{bmatrix}
1 & 1 & 0 \\
1 & 1 & 0 \\
1 & 1 & 0 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1
\end{bmatrix} \, , \quad \boldsymbol{\beta}=\begin{bmatrix}
\mu \\
\alpha_1 \\
\alpha_2
\end{bmatrix}
\end{align}
And suppose $\mathbf{g}^\top = [0, 1, -1]$, so we want to estimate $[0, 1, -1] \boldsymbol{\beta}=\alpha_1-\alpha_2$.
We can see that there are different choices of the vector $\mathbf{a}$ that yield $\mathbf{a}^\top \mathbf{X}=\mathbf{g}^\top$: take $\mathbf{a}^\top=[0 , 0,1,-1,0,0]$; or $\mathbf{a}^\top = [1,0,0,0,0,-1]$; or $\mathbf{a}^\top = [2,-1,0,0,1,-2]$.
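A tiny R check of this claim (sketch): each choice of a indeed reproduces g'.
X <- cbind(1, rep(c(1, 0), each = 3), rep(c(0, 1), each = 3))   # the 6 x 3 design above
a1 <- c(0, 0, 1, -1, 0, 0)
a2 <- c(1, 0, 0, 0, 0, -1)
a3 <- c(2, -1, 0, 0, 1, -2)
t(a1) %*% X   # each product equals g' = (0, 1, -1)
t(a2) %*% X
t(a3) %*% X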
Example 2
Take the two-way model:
$$ y_{ij}=\mu+\alpha_i+\beta_j+\varepsilon_{ij}, \quad i=1,2, \; j=1,2.$$
\begin{align}
\mathbf{X} = \begin{bmatrix}
1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 1\\
1 & 0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 1
\end{bmatrix} \, , \quad \boldsymbol{\beta}=\begin{bmatrix}
\mu \\
\alpha_1 \\
\alpha_2 \\
\beta_1 \\
\beta_2
\end{bmatrix}
\end{align}
We can define the estimable functions by taking linear combinations of the rows of $\mathbf{X}$.
Subtracting Row 1 from Rows 2, 3, and 4 (of $\mathbf{X}$):
$$
\begin{bmatrix}
1 & \phantom{-}1 & 0 & \phantom{-}1 & 0 \\
0 & 0 & 0 & -1 & 1\\
0 & -1 & 1 & \phantom{-}0 & 0 \\
0 & -1 & 1 & -1 & 1
\end{bmatrix}
$$
And taking Rows 2 and 3 from the fourth row:
$$
\begin{bmatrix}
1 & \phantom{-}1 & 0 & \phantom{-}1 & 0 \\
0 & 0 & 0 & -1 & 1\\
0 & -1 & 1 & \phantom{-}0 & 0 \\
0 & \phantom{-}0 & 0 & \phantom{-}0 & 0
\end{bmatrix}
$$
Multiplying this by $\boldsymbol{\beta}$ yields:
\begin{align}
\mathbf{g}_1^\top \boldsymbol{\beta} &= \mu + \alpha_1 + \beta_1 \\
\mathbf{g}_2^\top \boldsymbol{\beta} &= \beta_2 - \beta_1 \\
\mathbf{g}_3^\top \boldsymbol{\beta} &= \alpha_2 - \alpha_1
\end{align}
So, we have three linearly independent estimable functions. Now, only $\mathbf{g}_2^\top \boldsymbol{\beta}$ and $\mathbf{g}_3^\top \boldsymbol{\beta}$ can be considered contrasts, since the sum of their coefficients (or, the row sum of the respective vector $\mathbf{g}$) is equal to zero.
Going back to a one-way balanced model
$$y_{ij}=\mu + \alpha_i + \varepsilon_{ij}, \quad i=1,2, \ldots, k \, , j=1,2,\ldots,n.$$
And suppose we want to test the hypothesis $H_0: \alpha_1 = \ldots = \alpha_k$.
In this setting the matrix $\mathbf{X}$ is not full-rank, so $\boldsymbol{\beta}=(\mu,\alpha_1,\ldots,\alpha_k)^\top$ is not unique and not estimable. To make it estimable we can multiply $\boldsymbol{\beta}$ by $\mathbf{g}^\top$, as long as $\sum_{i} g_i = 0$. In other words, $\sum_{i} g_i \alpha_i$ is estimable iff $\sum_{i} g_i = 0$.
Why this is true?
We know that $\mathbf{g}^\top \boldsymbol{\beta}=(0,g_1,\ldots,g_k) \boldsymbol{\beta} = \sum_{i} g_i \alpha_i$ is estimable iff there exists a vector $\mathbf{a}$ such that $\mathbf{g}^\top = \mathbf{a}^\top \mathbf{X}$.
Taking the distinct rows of $\mathbf{X}$ and $\mathbf{a}^\top=[a_1,\ldots,a_k]$, then:
$$[0,g_1,\ldots,g_k]=\mathbf{g}^\top=\mathbf{a}^\top \mathbf{X} = \left(\sum_i a_i,a_1,\ldots,a_k \right)$$
And the result follows.
If we would like to test a specific contrast, our hypothesis is $H_0: \sum g_i \alpha_i = 0$. For instance: $H_0: 2 \alpha_1 = \alpha_2 + \alpha_3$, which can be written as $H_0: \alpha_1 = \frac{\alpha_2+\alpha_3}{2}$, so we are comparing $\alpha_1$ to the average of $\alpha_2$ and $\alpha_3$.
This hypothesis can be expressed as $H_0: \mathbf{g}^\top \boldsymbol{\beta}=0$, where ${\mathbf{g}}^\top = (0,g_1,g_2,\ldots,g_k)$. In this case $q=1$ (we are testing a single contrast), and we test this hypothesis with the following statistic:
$$F=\cfrac{\left[\mathbf{g}^\top \hat{\boldsymbol{\beta}}\right]^\top \left[\mathbf{g}^\top(\mathbf{X}^\top\mathbf{X})^{-}\mathbf{g} \right]^{-1}\mathbf{g}^\top \hat{\boldsymbol{\beta}}}{SSE/k(n-1)}.$$
If $H_0: \alpha_1 = \alpha_2 = \ldots = \alpha_k$ is expressed as $\mathbf{G}\boldsymbol{\beta}=\boldsymbol{0}$ where the rows of the matrix
$$\mathbf{G} = \begin{bmatrix} \mathbf{g}_1^\top \\ \mathbf{g}_2^\top \\ \vdots \\ \mathbf{g}_k^\top \end{bmatrix}$$
are mutually orthogonal contrasts (${\mathbf{g}_i^\top\mathbf{g}}_j = 0$), then we can test $H_0: \mathbf{G}\boldsymbol{\beta}=\boldsymbol{0}$ using the statistic $F=\cfrac{\frac{\mbox{SSH}}{\mbox{rank}(\mathbf{G})}}{\frac{\mbox{SSE}}{k(n-1)}}$, where $\mbox{SSH}=\left[\mathbf{G}\hat{\boldsymbol{\beta}}\right]^\top \left[\mathbf{G}(\mathbf{X}^\top\mathbf{X})^{-} \mathbf{G}^\top \right]^{-1}\mathbf{G}\hat{\boldsymbol{\beta}}$.
Example 3
To understand this better, let's use $k=4$, and suppose we want to test $H_0: \alpha_1 = \alpha_2 = \alpha_3 = \alpha_4,$ which can be expressed as
$$H_0: \begin{bmatrix} \alpha_1 - \alpha_2 \\ \alpha_1 - \alpha_3 \\ \alpha_1 - \alpha_4 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
Or, as $H_0: \mathbf{G}\boldsymbol{\beta}=\boldsymbol{0}$:
$$H_0:
\underbrace{\begin{bmatrix}
0 & 1 & -1 & \phantom{-}0 & \phantom{-}0 \\
0 & 1 & \phantom{-}0 & -1 & \phantom{-}0 \\
0 & 1 & \phantom{-}0 & \phantom{-}0 & -1
\end{bmatrix}}_{{\mathbf{G}}, \mbox{our contrast matrix}}
\begin{bmatrix}
\mu \\
\alpha_1 \\
\alpha_2 \\
\alpha_3 \\
\alpha_4
\end{bmatrix} =
\begin{bmatrix}
0 \\
0 \\
0
\end{bmatrix}$$
So, we see that the three rows of our contrast matrix are defined by the coefficients of the contrasts of interest. And each column gives the factor level that we are using in our comparison.
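Here is a hedged R sketch implementing this test for Example 3 on simulated balanced data (k = 4 groups, n = 10 per group); the data and object names are my illustrative assumptions, not from the original answer.
library(MASS)                              # ginv()
set.seed(42)
k <- 4; n <- 10
grp <- factor(rep(1:k, each = n))
y   <- rnorm(k * n, mean = c(10, 10, 12, 14)[grp])
X <- cbind(1, model.matrix(~ grp + 0))     # overparameterized design, rank k
B <- ginv(t(X) %*% X) %*% t(X) %*% y       # one least-squares solution for beta
G <- rbind(c(0, 1, -1,  0,  0),
           c(0, 1,  0, -1,  0),
           c(0, 1,  0,  0, -1))
SSE <- sum((y - X %*% B)^2)
SSH <- t(G %*% B) %*% solve(G %*% ginv(t(X) %*% X) %*% t(G)) %*% (G %*% B)
Fstat <- drop((SSH / qr(G)$rank) / (SSE / (k * (n - 1))))
c(F = Fstat, p = pf(Fstat, qr(G)$rank, k * (n - 1), lower.tail = FALSE))
anova(lm(y ~ grp))                         # the F value should match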
Pretty much all I've written was taken/copied (shamelessly) from Rencher & Schaalje, "Linear Models in Statistics", chapters 8 and 13 (examples, wording of theorems, some interpretations), but other things like the term "contrast matrix" (which, indeed, doesn't appear in that book) and its definition given here were my own.
Relating OP's contrast matrix to my answer
One of OP's matrices (which can also be found in this manual) is the following:
> contr.treatment(4)
2 3 4
1 0 0 0
2 1 0 0
3 0 1 0
4 0 0 1
In this case, our factor has 4 levels, and we can write the model as follows: $y_{i1}=\mu + a_{i} + \varepsilon_{i1}$, for $i=1,\ldots,4$.
This can be written in matrix form as:
\begin{align}
\begin{bmatrix}
y_{11} \\
y_{21} \\
y_{31} \\
y_{41}
\end{bmatrix}
=
\begin{bmatrix}
\mu \\
\mu \\
\mu \\
\mu
\end{bmatrix}
+
\begin{bmatrix}
a_1 \\
a_2 \\
a_3 \\
a_4
\end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{11} \\
\varepsilon_{21} \\
\varepsilon_{31} \\
\varepsilon_{41}
\end{bmatrix}
\end{align}
Or
\begin{align}
\begin{bmatrix}
y_{11} \\
y_{21} \\
y_{31} \\
y_{41}
\end{bmatrix}
=
\underbrace{\begin{bmatrix}
1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 1\\
\end{bmatrix}}_{\mathbf{X}}
\underbrace{\begin{bmatrix}
\mu \\
a_1 \\
a_2 \\
a_3 \\
a_4
\end{bmatrix}}_{\boldsymbol{\beta}}
+
\begin{bmatrix}
\varepsilon_{11} \\
\varepsilon_{21} \\
\varepsilon_{31} \\
\varepsilon_{41}
\end{bmatrix}
\end{align}
Now, for the dummy coding example in the same manual, they use $a_1$ as the reference group. Thus, we subtract Row 1 from every other row of matrix $\mathbf{X}$, which yields $\widetilde{\mathbf{X}}$:
\begin{align}
\begin{bmatrix}
1 & \phantom{-}1 & 0 & 0 & 0 \\
0 & -1 & 1 & 0 & 0\\
0 & -1 & 0 & 1 & 0\\
0 & -1 & 0 & 0 & 1
\end{bmatrix}
\end{align}
If you observe the numbering of the rows and columns in the contr.treatment(4) matrix, you'll see that they consider all rows and only the columns related to levels 2, 3, and 4. Doing the same in the above matrix yields:
\begin{align}
\begin{bmatrix}
0 & 0 & 0 \\
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}
\end{align}
This way, the contr.treatment(4) matrix is telling us that they are comparing levels 2, 3 and 4 to level 1, and comparing level 1 to the constant (this is my understanding of the above).
And, defining $\mathbf{G}$ (i.e. taking only the rows that sum to 0 in the above matrix):
\begin{align}
\begin{bmatrix}
0 & -1 & 1 & 0 & 0\\
0 & -1 & 0 & 1 & 0\\
0 & -1 & 0 & 0 & 1
\end{bmatrix}
\end{align}
We can test $H_0: \mathbf{G}\boldsymbol{\beta}=0$ and find the estimates of the contrasts.
hsb2 = read.table(
'https://stats.idre.ucla.edu/stat/data/hsb2.csv',
header=T, sep=",")
y <- hsb2$write
dummies <- model.matrix(~factor(hsb2$race) + 0)
X <- cbind(1,dummies)
# Defining G, what I call contrast matrix
G <- matrix(0,3,5)
G[1,] <- c(0,-1,1,0,0)
G[2,] <- c(0,-1,0,1,0)
G[3,] <- c(0,-1,0,0,1)
G
[,1] [,2] [,3] [,4] [,5]
[1,] 0 -1 1 0 0
[2,] 0 -1 0 1 0
[3,] 0 -1 0 0 1
# Estimating Beta
X.X<-t(X)%*%X
X.y<-t(X)%*%y
library(MASS)
Betas<-ginv(X.X)%*%X.y
# Final estimators:
G%*%Betas
[,1]
[1,] 11.541667
[2,] 1.741667
[3,] 7.596839
And the estimates are the same.
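As a further check (my addition), the same three numbers are simply differences of group means of write, which is what these estimable functions reduce to in a one-way model:
means <- tapply(hsb2$write, factor(hsb2$race), mean)
c(means[2] - means[1], means[3] - means[1], means[4] - means[1])
# should reproduce 11.541667, 1.741667 and 7.596839 from G %*% Betas above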
Relating @ttnphns' answer to mine.
In their first example, the setup has a categorical factor A with three levels. We can write this as the model (supposing, for simplicity, that $j=1$):
$$y_{ij}=\mu+a_i+\varepsilon_{ij}\, , \quad \mbox{for } i=1,2,3$$
And suppose we want to test $H_0: a_1 = a_2 = a_3$, or $H_0: a_1 - a_3 = a_2 - a_3=0$, with $a_3$ as our reference group/factor.
This can be written in matrix form as:
\begin{align}
\begin{bmatrix}
y_{11} \\
y_{21} \\
y_{31}
\end{bmatrix}
=
\begin{bmatrix}
\mu \\
\mu \\
\mu
\end{bmatrix}
+
\begin{bmatrix}
a_1 \\
a_2 \\
a_3
\end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{11} \\
\varepsilon_{21} \\
\varepsilon_{31}
\end{bmatrix}
\end{align}
Or
\begin{align}
\begin{bmatrix}
y_{11} \\
y_{21} \\
y_{31}
\end{bmatrix}
=
\underbrace{\begin{bmatrix}
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 \\
\end{bmatrix}}_{\mathbf{X}}
\underbrace{\begin{bmatrix}
\mu \\
a_1 \\
a_2 \\
a_3
\end{bmatrix}}_{\boldsymbol{\beta}}
+
\begin{bmatrix}
\varepsilon_{11} \\
\varepsilon_{21} \\
\varepsilon_{31}
\end{bmatrix}
\end{align}
Now, if we subtract Row 3 from Row 1 and Row 2, we have that $\mathbf{X}$ becomes (I will call it $\widetilde{\mathbf{X}}$):
\begin{align}
\widetilde{\mathbf{X}} =\begin{bmatrix}
0 & 1 & 0 & -1 \\
0 & 0 & 1 & -1 \\
1 & 0 & 0 & \phantom{-}1 \\
\end{bmatrix}
\end{align}
Compare the last 3 columns of the above matrix with @ttnphns' matrix $\mathbf{L}$. Despite the different order, they are quite similar.
Indeed, if we multiply $\widetilde{\mathbf{X}} \boldsymbol{\beta}$, we get:
\begin{align}
\begin{bmatrix}
0 & 1 & 0 & -1 \\
0 & 0 & 1 & -1 \\
1 & 0 & 0 & \phantom{-}1 \\
\end{bmatrix}
\begin{bmatrix}
\mu \\
a_1 \\
a_2 \\
a_3
\end{bmatrix}
=
\begin{bmatrix}
a_1 - a_3 \\
a_2 - a_3 \\
\mu + a_3
\end{bmatrix}
\end{align}
So, we have the estimable functions: $\mathbf{c}_1^\top \boldsymbol{\beta} = a_1-a_3$; $\mathbf{c}_2^\top \boldsymbol{\beta} = a_2-a_3$; $\mathbf{c}_3^\top \boldsymbol{\beta} = \mu + a_3$.
Since $H_0: \mathbf{c}_i^\top \boldsymbol{\beta} = 0$, we see from the above that we are comparing our constant to the coefficient of the reference group ($a_3$); the coefficient of group1 to the coefficient of group3; and the coefficient of group2 to that of group3. Or, as @ttnphns said:
"We immediately see, following the coefficients, that the estimated Constant will equal the Y mean in the reference group; that parameter b1 (i.e. of dummy variable A1) will equal the difference: Y mean in group1 minus Y mean in group3; and parameter b2 is the difference: mean in group2 minus mean in group3."
Moreover, observe (following the definition of a contrast: an estimable function whose coefficients sum to 0) that the vectors $\mathbf{c}_1$ and $\mathbf{c}_2$ are contrasts. And, if we create a matrix $\mathbf{G}$ of contrasts, we have:
\begin{align}
\mathbf{G} =
\begin{bmatrix}
0 & 1 & 0 & -1 \\
0 & 0 & 1 & -1
\end{bmatrix}
\end{align}
Our contrast matrix to test $H_0: \mathbf{G}\boldsymbol{\beta}=0$
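Before moving on, here is a small sketch confirming this interpretation in R (the data below are made up, not taken from the posts), with group 3 as the reference level:
# Treatment coding with group 3 as reference: the intercept is the mean of group 3,
# and the slopes are mean(group1) - mean(group3) and mean(group2) - mean(group3)
set.seed(1)
A <- factor(rep(1:3, each = 10))
y <- rnorm(30) + c(1, 2, 3)[A]          # hypothetical responses
coef(lm(y ~ relevel(A, ref = "3")))
tapply(y, A, mean)                      # compare the group means by hand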
Example
We will use the same data as @ttnphns' "User defined contrast example" (I'd like to mention that the theory that I've written here requires a few modifications to consider models with interactions, that's why I chose this example. However, the definitions of contrasts and - what I call - contrast matrix remain the same).
Y <- c(0.226, 0.6836, -1.772, -0.5085, 1.1836, 0.5633,
0.8709, 0.2858, 0.4057, -1.156, 1.5199, -0.1388,
0.4865, -0.7653, 0.3418, -1.273, 1.4042, -0.1622,
0.3347, -0.4576, 0.7585, 0.4084, 1.4165, -0.5138,
0.9725, 0.2373, -1.562, 1.3985, 0.0397, -0.4689,
-1.499, -0.7654, 0.1442, -1.404,-0.2201, -1.166,
0.7282, 0.9524, -1.462, -0.3478, 0.5679, 0.5608,
1.0338, -1.161, -0.1037, 2.047, 2.3613, 0.1222)
F_ <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5)
dummies.F <- model.matrix(~as.factor(F_)+0)
X_F<-cbind(1,dummies.F)
G_F<-matrix(0,4,6)
G_F[1,]<-c(0,3,3,-2,-2,-2)
G_F[2,]<-c(0,1,-1,0,0,0)
G_F[3,]<-c(0,0,0,2,-1,-1)
G_F[4,]<-c(0,0,0,0,1,-1)
G_F
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0 3 3 -2 -2 -2
[2,] 0 1 -1 0 0 0
[3,] 0 0 0 2 -1 -1
[4,] 0 0 0 0 1 -1
# Estimating Beta
X_F.X_F<-t(X_F)%*%X_F
X_F.Y<-t(X_F)%*%Y
Betas_F<-ginv(X_F.X_F)%*%X_F.Y
# Final estimators:
G_F%*%Betas_F
[,1]
[1,] 0.5888183
[2,] -0.1468029
[3,] 0.6115212
[4,] -0.9279030
So, we have the same results.
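As another check (a sketch assuming the Y and F_ vectors above), each estimated contrast is just the corresponding linear combination of the five group means:
# Each row of G_F applied to the group means reproduces G_F %*% Betas_F
m <- tapply(Y, F_, mean)
rbind(3*(m[1] + m[2]) - 2*(m[3] + m[4] + m[5]),
      m[1] - m[2],
      2*m[3] - m[4] - m[5],
      m[4] - m[5])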
Conclusion
It seems to me that there isn't one defining concept of what a contrast matrix is.
If you take the definition of contrast, given by Scheffe ("The Analysis of Variance", page 66), you'll see that it's an estimable function whose coefficients sum to zero. So, if we wish to test different linear combinations of the coefficients of our categorical variables, we use the matrix $\mathbf{G}$. This is a matrix in which the rows sum to zero, that we use to multiply our matrix of coefficients by in order to make those coefficients estimable. Its rows indicate the different linear combinations of contrasts that we are testing and its columns indicate which factors (coefficients) are being compared.
As the matrix $\mathbf{G}$ above is constructed in a way that each of its rows is composed by a contrast vector (which sum to 0), for me it makes sense to call $\mathbf{G}$ a "contrast matrix" (Monahan - "A primer on linear models" - also uses this terminology).
However, as beautifully explained by @ttnphns, software packages call something else a "contrast matrix", and I couldn't find a direct relationship between the matrix $\mathbf{G}$ and the built-in commands/matrices from SPSS (@ttnphns) or R (OP's question), only similarities. But I believe that the nice discussion/collaboration presented here will help clarify such concepts and definitions.
3,452 | What is a contrast matrix? | "Contrast matrix" is not a standard term in the statistical literature. It can have [at least] two related but distinct meanings:
A matrix specifying a particular null hypothesis in an ANOVA regression (unrelated to the coding scheme), where each row is a contrast. This is not a standard usage of the term. I used full text search in Christensen Plane Answers to Complex Questions, Rutherford Introducing ANOVA and ANCOVA; GLM Approach, and Rencher & Schaalje Linear Models in Statistics. They all talk a lot about "contrasts" but never ever mention the term "contrast matrix". However, as @Gus_est found, this term is used in Monahan's A Primer on Linear Models.
A matrix specifying the coding scheme for the design matrix in an ANOVA regression. This is how the term "contrast matrix" is used in the R community (see e.g. this manual or this help page).
The answer by @Gus_est explores the first meaning. The answer by @ttnphns explores the second meaning (he calls it "contrast coding matrix" and also discusses "contrast coefficient matrix" which is a standard term in SPSS literature).
My understanding is that you were asking about meaning #2, so here goes the definition:
"Contrast matrix" in the R sense is $k\times k$ matrix $\mathbf C$ where $k$ is the number of groups, specifying how group membership is encoded in the design matrix $\mathbf X$. Specifically, if a $m$-th observation belongs to the group $i$ then $X_{mj}=C_{ij}$.
Note: usually the first column of $\mathbf C$ is the column of all ones (corresponding to the intercept column in the design matrix). When you call R commands like contr.treatment(4), you get matrix $\mathbf C$ without this first column.
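A minimal sketch of this definition (the group labels below are hypothetical, not taken from the question):
# Row m of the design matrix is the row of C that belongs to observation m's group
C <- cbind(1, contr.treatment(4))   # k x k coding matrix, first column of ones
g <- factor(c(1, 3, 2, 4, 1))       # hypothetical group memberships
C[g, ]                              # the design matrix X built from C
model.matrix(~ g)                   # same columns, up to names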
I am planning to extend this answer to make an extended comment on how the answers by @ttnphns and @Gus_est fit together.
3,453 | What is a contrast matrix? | A contrast compares two groups by comparing their difference with zero.
In a contrast matrix the rows are the contrasts and must add to zero, the columns are the groups. For example:
Let's say you have 4 groups A,B,C,D that you want to compare, then the contrast matrix would be:
Group: A B C D
A vs B: 1 -1 0 0
C vs D: 0 0 -1 1
A,B vs D,C: 1 1 -1 -1
Paraphrasing from Understanding Industrial Experimentation:
If there's a group of k objects to be compared, with k subgroup averages, a contrast is defined on this set of k objects by any set of k coefficients,
[c1, c2, c3, ... cj, ..., ck] that sum to zero.
Let C be a contrast then,
$$
C = c_{1}\mu_{1} + c_{2}\mu_{2} + \ldots + c_{j}\mu_{j} + \ldots + c_{k}\mu_{k}
$$
$$
C = \sum_{j=1}^{k} c_{j}\mu_{j}
$$
with the constraint
$$
\sum_{j=1}^{k} c_{j} = 0
$$
Those subgroups that are assigned a coefficient of zero will be excluded from the comparison.(*)
It is the signs of the coefficients that actually define the comparison, not the values chosen. The absolute values of the coefficients can be anything as long as the sum of the coefficients is zero.
(*)Each statistical software has a different way of indicating which subgroups will be excluded/included.
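To make the arithmetic concrete, here is a minimal sketch that evaluates the 'A,B vs D,C' contrast from the table above with some hypothetical group means:
# The coefficients sum to zero; the contrast is their weighted sum of the means
group.means <- c(A = 10, B = 12, C = 15, D = 17)   # hypothetical means
cc <- c(1, 1, -1, -1)                              # A,B vs D,C
sum(cc)                                            # 0, so this is a valid contrast
sum(cc * group.means)                              # the value of the contrast C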
3,454 | What is a contrast matrix? | I wanted to add some more basic information to the previous (great) responses, and clarify a little (also for myself) how contrast coding works in R, and why we need to calculate the inverse of the contrast coding matrix to understand which comparisons are performed.
I'll start with the description of the linear model and contrasts in terms of matrix algebra, and then go through an example in R.
The cell means model for ANOVA is:
\begin{equation}
y = X\mu + \epsilon = X\begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \\ \mu_4 \end{pmatrix} + \epsilon
\end{equation}
With $X$ as the design matrix and $\mu$ as the vector of means. An example is this, where we have 4 groups coded in each column:
\begin{equation}
X=\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}
\end{equation}
In this case, we can estimate the means by the least square method, using the equations:
\begin{equation}
\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y\\
\end{equation}
This is all good, but let's imagine we want specific comparisons rather than the means, like differences in means compared to a reference group.
In the case of 4 groups, we could express this as a matrix C of comparisons, multiplied by the vector of means:
\begin{equation}
C\mu = \begin{pmatrix}
\phantom{..} 1 & 0 & 0 &0 \\
-1 & 1 & 0 & 0\\
-1 & 0 & 1 & 0\\
-1 & 0 & 0 & 1\\
\end{pmatrix}\
\begin{pmatrix}\mu_1 \\ \mu_2 \\ \mu_3 \\ \mu_4 \end{pmatrix}
= \begin{pmatrix} \mu_1 \\ \mu_2-\mu_1 \\ \mu_3-\mu_1 \\ \mu_4-\mu_1 \end{pmatrix}
\end{equation}
The first group serves as reference, and we calculate the deviations from it. The matrix C serves to describe the comparisons, it is the contrast matrix.
Technically here these are not contrasts, because the sum in each row should be zero by definition, but that will serve our purpose, and this is the matrix referred to in the contr.treatment() function in R (its inverse, see below).
The matrix C defines the contrasts.
We want to evaluate contrasts from the data, in the context of the same model.
We note that:
\begin{equation}
y \ =\ X\mu \ +\ \epsilon \ =\ XI\mu \ +\ \epsilon \ =\ X \ (C^{-1}C)\ \ \mu \ +\ \epsilon = \ (X C^{-1}) \ (C \mu) \ + \epsilon
\end{equation}
Therefore we can use the first term in parentheses to evaluate the second term (our comparisons), using the least squares method, just as we did for the original equation above.
This is why we use the inverse of the contrast matrix C, and it needs to be square and full rank in this case.
We use the least square method to evaluate the contrasts, with the same equation as above, using the modified design matrix:
\begin{equation}
(X C^{-1})
\end{equation}
And we evaluate:
\begin{equation}
C\mu
\end{equation}
using the method of least squares.
The coefficients for this model can be evaluated as before using least squares, replacing the original design matrix by the new one.
Or naming $X_{1} = (X C^{-1})$ the modified design matrix:
\begin{equation}
\widehat{C\mu} = (X_{1}^{\prime}X_{1})^{-1}X_{1}^{\prime}y = C\hat{\mu} =
\begin{pmatrix} \hat{\mu}_1 \\ \hat{\mu}_2-\hat{\mu}_1 \\ \hat{\mu}_3-\hat{\mu}_1 \\ \hat{\mu}_4-\hat{\mu}_1 \end{pmatrix}
\end{equation}
Using the modified design matrix (with the inverse of the contrast matrix) and the least squares method, we evaluate the desired constrasts.
Of course, to get the original contrast matrix, we need to invert the contrast coding matrix used in R.
Let's try and make it work on an example in R:
x <- rnorm(20,7,2) + 7
y <- rnorm(20,7,2)
z <- rnorm(20,7,2) + 15
t <- rnorm(20,7,2) + 10
df <- data.frame(Score=c(x,y,z,t), Group = c(rep("A",20),rep("B",20),rep("C",20),rep("D",20)))
df$Group <- as.factor(df$Group)
head(df)
Score Group
1 12.83886 A
2 11.49714 A
3 16.27147 A
4 11.84989 A
5 16.00455 A
6 13.78611 A
We have four teams A, B, C, D and the scores of each individual.
Let's make the design matrix X for the cell means model:
X <- model.matrix(~Group + 0, data= df)
colnames(X) <- c("A", "B", "C", "D")
head(X)
A B C D
[1,] 1 0 0 0
[2,] 1 0 0 0
[3,] 1 0 0 0
[4,] 1 0 0 0
[5,] 1 0 0 0
[6,] 1 0 0 0
We can find the means of each group by the least squares equation
\begin{equation}
\hat{\mu} =(X^{\prime }X)^{-1}\ X^{\prime }y\\
\end{equation}
in R:
solve( t(X) %*% X) %*% t(X) %*% df$Score
[,1]
A 14.189628
B 7.021692
C 21.668745
D 17.595326
with(df, tapply(X= Score, FUN = mean, INDEX = Group))
A B C D
14.189628 7.021692 21.668745 17.595326
But we want comparisons of means to the first group (treatment contrasts). We use the matrix C of contrasts defined earlier.
Based on what was said before, what we really want is the inverse of C, to evaluate the contrasts.
R has a built-in function for this, called contr.treatment(), where we specify the number of factor levels.
We build the inverse of C, the contrast coding matrix, this way:
cbind(1, contr.treatment(4) )
2 3 4
1 1 0 0 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
if we invert this matrix, we get C, the comparisons we want:
solve(cbind(1, contr.treatment(4)))
1 0 0 0
-1 1 0 0
-1 0 1 0
-1 0 0 1
Now we construct the modified design matrix for the model:
X1 <- X %*% cbind(1, contr.treatment(4) )
colnames(X1) <- unique(levels(df$Group))
And we solve for the contrasts, either by plugging the modified design matrix into the least squares equation, or using the lm() function:
# least square equation
solve(t(X1) %*% X1) %*% t(X1) %*% df$Score
[,1]
A 14.189628
B -7.167936
C 7.479117
D 3.405698
# lm with modified design matrix
summary( lm(formula = Score ~ 0 + X1 , data = df) )
Call:
lm(formula = Score ~ 0 + X1, data = df)
Residuals:
Min 1Q Median 3Q Max
-3.5834 -1.2433 -0.1077 1.3763 4.5317
Coefficients:
Estimate Std. Error t value Pr(>|t|)
X1A 14.1896 0.3851 36.845 < 2e-16 ***
X1B -7.1679 0.5446 -13.161 < 2e-16 ***
X1C 7.4791 0.5446 13.732 < 2e-16 ***
X1D 3.4057 0.5446 6.253 2.16e-08 ***
# lm with built-in treatment contrasts
summary( lm(formula = Score ~ Group , data = df, contrasts = list(Group = "contr.treatment")) )
Call:
lm(formula = Score ~ Group, data = df, contrasts = list(Group = "contr.treatment"))
Residuals:
Min 1Q Median 3Q Max
-3.5834 -1.2433 -0.1077 1.3763 4.5317
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 14.1896 0.3851 36.845 < 2e-16 ***
GroupB -7.1679 0.5446 -13.161 < 2e-16 ***
GroupC 7.4791 0.5446 13.732 < 2e-16 ***
GroupD 3.4057 0.5446 6.253 2.16e-08 ***
We get the mean of the first group and the deviations for the others, as defined in the contrast matrix C.
We can define any type of contrast in this way, either using the built-in functions contr.treatment(), contr.sum() etc or by specifying which comparisons we want. For its contrasts argument, lm() expects the inverse of C without the intercept column, solve(C)[,-1]; it adds the intercept column back to obtain $C^{-1}$ and uses it to build the modified design matrix.
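A quick sketch of that last point, reusing the treatment comparisons C from above:
# The coding matrix handed to lm() is the inverse of C without its first column
C <- solve(cbind(1, contr.treatment(4)))   # the comparison matrix defined earlier
solve(C)[ , -1]                            # drop the intercept column ...
contr.treatment(4)                         # ... and we recover the coding matrix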
There are many refinements on this scheme (orthogonal contrasts, more complex contrasts, not full rank design matrix etc), but this is the gist of it (cf also here for reference: https://cran.r-project.org/web/packages/codingMatrices/vignettes/codingMatrices.pdf).
3,455 | What is the difference between a partial likelihood, profile likelihood and marginal likelihood? | The likelihood function usually depends on many parameters. Depending on the application, we are usually interested in only a subset of these parameters. For example, in linear regression, interest typically lies in the slope coefficients and not on the error variance.
Denote the parameters we are interested in as $\beta$ and the parameters that are not of primary interest as $\theta$. The standard way to approach the estimation problem is to maximize the likelihood function so that we obtain estimates of $\beta$ and $\theta$. However, since the primary interest lies in $\beta$, partial, profile and marginal likelihood offer alternative ways to estimate $\beta$ without estimating $\theta$.
In order to see the difference denote the standard likelihood by $L(\beta, \theta|\mathrm{data})$.
Maximum Likelihood
Find $\beta$ and $\theta$ that maximizes $L(\beta, \theta|\mathrm{data})$.
Partial Likelihood
If we can write the likelihood function as:
$$L(\beta, \theta|\mathrm{data}) = L_1(\beta|\mathrm{data}) L_2(\theta|\mathrm{data})$$
Then we simply maximize $L_1(\beta|\mathrm{data})$.
Profile Likelihood
If we can express $\theta$ as a function of $\beta$ then we replace $\theta$ with the corresponding function.
Say, $\theta = g(\beta)$. Then, we maximize:
$$L(\beta, g(\beta)|\mathrm{data})$$
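A minimal numerical sketch (not from the original question): normal data with the variance as the nuisance parameter, where the usual choice of $g(\beta)$ is the conditional MLE of the nuisance parameter at each fixed value of the parameter of interest.
# Profile log-likelihood for the mean mu, with sigma^2 replaced by its MLE given mu
set.seed(1)
x <- rnorm(30, mean = 5, sd = 2)
prof.loglik <- function(mu) {
  s2 <- mean((x - mu)^2)                          # MLE of sigma^2 for this fixed mu
  sum(dnorm(x, mean = mu, sd = sqrt(s2), log = TRUE))
}
mu.grid <- seq(3, 7, by = 0.01)
mu.grid[which.max(sapply(mu.grid, prof.loglik))]  # close to mean(x)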
Marginal Likelihood
We integrate out $\theta$ from the likelihood equation by exploiting the fact that we can identify the probability distribution of $\theta$ conditional on $\beta$.
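In symbols, with the notation above, the marginal likelihood of $\beta$ is
$$L(\beta|\mathrm{data}) = \int L(\beta, \theta|\mathrm{data}) \, p(\theta|\beta) \, d\theta$$
and we then maximize this function of $\beta$ alone.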
3,456 | What is the difference between a partial likelihood, profile likelihood and marginal likelihood? | All three are used when dealing with nuisance parameters in the completely specified likelihood function.
The marginal likelihood is the primary method to eliminate nuisance parameters in theory. It's a true likelihood function (i.e. it's proportional to the (marginal) probability of the observed data).
The partial likelihood is not a true likelihood in general. However, in some cases it can be treated as a likelihood for asymptotic inference. For example in Cox proportional hazards models, where it originated, we're interested in the observed rankings in the data (T1 > T2 > ..) without specifying the baseline hazard. Efron showed that the partial likelihood loses little to no information for a variety of hazard functions.
The profile likelihood is convenient when we have a multidimensional likelihood function and a single parameter of interest. It's specified by replacing the nuisance S by its MLE at each fixed T (the parameter of interest), i.e. L(T) = L(T, S(T)). This can work well in practice, though there is a potential bias in the MLE obtained in this way; the marginal likelihood corrects for this bias.
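As a concrete sketch (assuming the survival package and its built-in lung data), the Cox model estimates its coefficients by maximizing exactly this partial likelihood, with the baseline hazard left unspecified:
# Cox proportional hazards: coefficients come from the partial likelihood
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
fit$loglik    # partial log-likelihood at the null model and at the estimate
coef(fit)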
3,457 | What does orthogonal mean in the context of statistics? | It means they [the random variables X,Y] are 'independent' to each other. Independent random variables are often considered to be at 'right angles' to each other, where by 'right angles' is meant that the inner product of the two is 0 (an equivalent condition from linear algebra).
For example, on the X-Y plane the X and Y axes are said to be orthogonal because if a given point's x value changes, say going from (2,3) to (5,3), its y value remains the same (3), and vice versa. Hence the two variables are 'independent'.
See also Wikipedia's entries for Independence and Orthogonality
3,458 | What does orthogonal mean in the context of statistics? | I can't make a comment because I don't have enough points, so I'm forced to speak my mind as an answer, please forgive me. From the little I know, I disagree with the selected answer by @crazyjoe because orthogonality is defined as
$$E[XY^{\star}] = 0$$
So:
If $Y=X^2$ with a symmetric pdf, then they are dependent yet orthogonal.
If $Y=X^2$ but the pdf is zero for negative values, then they are dependent but not orthogonal.
Therefore, orthogonality does not imply independence.
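A small simulation sketch of the first case (X symmetric around zero, $Y = X^2$):
# Orthogonal (E[XY] = E[X^3] = 0) yet clearly dependent
set.seed(42)
X <- rnorm(1e5)
Y <- X^2
mean(X * Y)   # approximately 0
cor(X, Y)     # approximately 0, although Y is a deterministic function of X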
3,459 | What does orthogonal mean in the context of statistics? | If X and Y are independent then they are Orthogonal. But the converse is not true as pointed out by the clever example of user497804. For the exact definitions refer to
Orthogonal :
Complex-valued random variables $C_1$ and $C_2$ are called orthogonal if they satisfy ${\rm cov}(C_1,C_2)=0$
(Pg 376, Probability and Random Processes by Geoffrey Grimmett and David Stirzaker)
Independent:
The random variables $X$ and $Y$ are independent if and only if
$F(x,y) = F_X(x)F_Y(y)$ for all $x,y \in \mathbb{R}$
which, for continuous random variables, is equivalent to requiring that
$f(x,y) = f_X(x)f_Y(y)$
(Page 99, Probability and Random Processes by Geoffrey Grimmett and David Stirzaker)
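A quick simulation sketch of the first direction (independently generated variables have covariance close to zero):
# Independent draws: the sample covariance is approximately zero
set.seed(3)
X <- rnorm(1e5)
Y <- rnorm(1e5)   # generated independently of X
cov(X, Y)         # approximately 0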
3,460 | What does orthogonal mean in the context of statistics? | @Mien already provided an answer, and, as pointed out by @whuber, orthogonal means uncorrelated. However, I really wish people would provide some references. You might consider the following links helpful since they explain the concept of correlation from a geometric perspective.
The Geometry of Vectors (see p. 7)
Linearly Independent, Orthogonal, and Uncorrelated Variables
Graphical representation of two-dimensional correlation in vector space (may not be free to you)
3,461 | What does orthogonal mean in the context of statistics? | A NIST website (ref below) defines orthogonal as follows, "An experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors."
In statistical design, I understand orthogonal to mean "not confounded" or "not aliased". This is important when designing and analyzing your experiment if you want to make sure you can clearly identify different factors/treatments. If your designed experiment is not orthogonal, then it means you will not be able to completely separate the effects of different treatments. Thus you will need to conduct a follow-up experiment to deconfound the effect. This would be called an augmented design or comparative design.
Independence seems to be a poor word choice since it's used in so many other aspects of design and analysis.
NIST Ref
http://www.itl.nist.gov/div898/handbook/pri/section7/pri7.htm | What does orthogonal mean in the context of statistics? | A NIST website (ref below) defines orthogonal as follows, "An experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors."
In sta | What does orthogonal mean in the context of statistics?
A NIST website (ref below) defines orthogonal as follows, "An experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors."
In statistical design, I understand orthogonal to mean "not confounded" or "not aliased". This is important when designing and analyzing your experiment if you want to make sure you can clearly identify different factors/treatments. If your designed experiment is not orthogonal, then it means you will not be able to completely separate the effects of different treatments. Thus you will need to conduct a follow-up experiment to deconfound the effect. This would be called an augmented design or comparative design.
Independence seems to be a poor word choice since it's used in so many other aspects of design and analysis.
NIST Ref
http://www.itl.nist.gov/div898/handbook/pri/section7/pri7.htm | What does orthogonal mean in the context of statistics?
A NIST website (ref below) defines orthogonal as follows, "An experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors."
In sta |
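To make the design-of-experiments sense of "orthogonal" concrete, here is a small illustrative R check (my own example, not from the NIST handbook): in a full factorial with ±1 coding, the factor columns are mutually balanced, so the factor effects can be estimated separately.
d <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))  # full 2^3 factorial
crossprod(as.matrix(d))   # off-diagonal entries are all 0: the columns are orthogonal
colSums(d)                # each factor also sums to zero across the runs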
3,462 | What does orthogonal mean in the context of statistics? | It's most likely they mean 'unrelated' if they say 'orthogonal'; if two factors are orthogonal (e.g. in factor analysis), they are unrelated, their correlation is zero. | What does orthogonal mean in the context of statistics? | It's most likely they mean 'unrelated' if they say 'orthogonal'; if two factors are orthogonal (e.g. in factor analysis), they are unrelated, their correlation is zero. | What does orthogonal mean in the context of statistics?
It's most likely they mean 'unrelated' if they say 'orthogonal'; if two factors are orthogonal (e.g. in factor analysis), they are unrelated, their correlation is zero. | What does orthogonal mean in the context of statistics?
It's most likely they mean 'unrelated' if they say 'orthogonal'; if two factors are orthogonal (e.g. in factor analysis), they are unrelated, their correlation is zero. |
3,463 | What does orthogonal mean in the context of statistics? | I asked a similar question What is the relationship between orthogonality and the expectation of the product of RVs, and I reproduce the answer here. Although orthogonality is a concept from Linear Algebra, and it means that the dot-product of two vectors is zero, the term is sometimes loosely used in statistics and means non-correlation. If two random vectors are orthogonal, then their centralized counterpart are uncorrelated, because orthogonality (dot-product zero) implies non-correlation of the centralized random vectors (sometimes people say that orthogonality implies that the cross-moment is zero). Whenever we have two Random Vectors $(X,Y)$, we can always centralize them around their means to make their expectation to be zero. Assume ortogonality ($X\cdot Y=0$), then the correlation of the centralized random variables are
$$Cov(X-E[X],Y-E[Y]) = E[X\cdot Y]= E[0]=0\implies \\Corr(X-E[X],Y-E[Y])=0$$ | What does orthogonal mean in the context of statistics? | I asked a similar question What is the relationship between orthogonality and the expectation of the product of RVs, and I reproduce the answer here. Although orthogonality is a concept from Linear Al | What does orthogonal mean in the context of statistics?
I asked a similar question What is the relationship between orthogonality and the expectation of the product of RVs, and I reproduce the answer here. Although orthogonality is a concept from Linear Algebra, and it means that the dot-product of two vectors is zero, the term is sometimes loosely used in statistics and means non-correlation. If two random vectors are orthogonal, then their centralized counterpart are uncorrelated, because orthogonality (dot-product zero) implies non-correlation of the centralized random vectors (sometimes people say that orthogonality implies that the cross-moment is zero). Whenever we have two Random Vectors $(X,Y)$, we can always centralize them around their means to make their expectation to be zero. Assume ortogonality ($X\cdot Y=0$), then the correlation of the centralized random variables are
$$Cov(X-E[X],Y-E[Y]) = E[X\cdot Y]= E[0]=0\implies \\Corr(X-E[X],Y-E[Y])=0$$ | What does orthogonal mean in the context of statistics?
I asked a similar question What is the relationship between orthogonality and the expectation of the product of RVs, and I reproduce the answer here. Although orthogonality is a concept from Linear Al |
3,464 | What does orthogonal mean in the context of statistics? | According to https://web.archive.org/web/20160705135417/http://terpconnect.umd.edu/~bmomen/BIOM621/LineardepCorrOrthogonal.pdf, linear independency is a necessary condition for orthogonality or uncorrelatedness. But there are finer distinctions, in particular, orthogonality is not uncorrelatedness. | What does orthogonal mean in the context of statistics? | According to https://web.archive.org/web/20160705135417/http://terpconnect.umd.edu/~bmomen/BIOM621/LineardepCorrOrthogonal.pdf, linear independency is a necessary condition for orthogonality or uncorr | What does orthogonal mean in the context of statistics?
According to https://web.archive.org/web/20160705135417/http://terpconnect.umd.edu/~bmomen/BIOM621/LineardepCorrOrthogonal.pdf, linear independency is a necessary condition for orthogonality or uncorrelatedness. But there are finer distinctions, in particular, orthogonality is not uncorrelatedness. | What does orthogonal mean in the context of statistics?
According to https://web.archive.org/web/20160705135417/http://terpconnect.umd.edu/~bmomen/BIOM621/LineardepCorrOrthogonal.pdf, linear independency is a necessary condition for orthogonality or uncorr |
3,465 | What does orthogonal mean in the context of statistics? | In econometrics, the orthogonality assumption means the expected value of the sum of all errors is 0. All variables of a regressor is orthogonal to their current error terms.
Mathematically, the orthogonality assumption is $E(x_{i}·ε_{i}) = 0$.
In simpler terms, it means a regressor is "perpendicular" to the error term. | What does orthogonal mean in the context of statistics? | In econometrics, the orthogonality assumption means the expected value of the sum of all errors is 0. All variables of a regressor is orthogonal to their current error terms.
Mathematically, the ortho | What does orthogonal mean in the context of statistics?
In econometrics, the orthogonality assumption means the expected value of the sum of all errors is 0. All variables of a regressor are orthogonal to their current error terms.
Mathematically, the orthogonality assumption is $E(x_{i}·ε_{i}) = 0$.
In simpler terms, it means a regressor is "perpendicular" to the error term. | What does orthogonal mean in the context of statistics?
In econometrics, the orthogonality assumption means the expected value of the sum of all errors is 0. All variables of a regressor are orthogonal to their current error terms.
Mathematically, the ortho |
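A small simulated R check of the sample analogue of this condition (my sketch, not from the original answer): OLS residuals are orthogonal to every regressor column by construction.
set.seed(42)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)
crossprod(model.matrix(fit), resid(fit))   # numerically ~0 for the intercept and x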
3,466 | What does orthogonal mean in the context of statistics? | Assume a random process x(t), hence y1=cos(x(t)) and y2= sin(x(t)), both are random processes. It is clear that y1 is orthogonal on y2, i.e., E[y1.y2] = 0. However, indeed they are dependent on each other. Actually, both are based on the same random process. Therefore, it is not necessary for orthogonal processes to be independent. Independence in random processes means that if you have any foreknowledge about one process, you will not be able to have any conclusion about the other! However, this is not the case with orthogonal processes. Nevertheless, assume two independent random processes z1, z2 where at least one of them has zero mean, then E[z1.z2]=E[z1].E[z2]=0. Mathematically, this is the same as the orthogonality condition, but geometrically, it is not necessary! | What does orthogonal mean in the context of statistics? | Assume a random process x(t), hence y1=cos(x(t)) and y2= sin(x(t)), both are random processes. It is clear that y1 is orthogonal on y2, i.e., E[y1.y2] = 0. However, indeed they are dependent on each o | What does orthogonal mean in the context of statistics?
Assume a random process x(t), hence y1=cos(x(t)) and y2= sin(x(t)), both are random processes. It is clear that y1 is orthogonal on y2, i.e., E[y1.y2] = 0. However, indeed they are dependent on each other. Actually, both are based on the same random process. Therefore, it is not necessary for orthogonal processes to be independent. Independence in random processes means that if you have any foreknowledge about one process, you will not be able to have any conclusion about the other! However, this is not the case with orthogonal processes. Nevertheless, assume two independent random processes z1, z2 where at least one of them has zero mean, then E[z1.z2]=E[z1].E[z2]=0. Mathematically, this is the same as the orthogonality condition, but geometrically, it is not necessary! | What does orthogonal mean in the context of statistics?
Assume a random process x(t), hence y1=cos(x(t)) and y2= sin(x(t)), both are random processes. It is clear that y1 is orthogonal on y2, i.e., E[y1.y2] = 0. However, indeed they are dependent on each o |
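A quick numerical check of the cosine/sine example above (a simulation with a uniformly distributed phase chosen for concreteness; my sketch, not a proof):
set.seed(7)
u <- runif(1e5, 0, 2 * pi)
y1 <- cos(u); y2 <- sin(u)
cov(y1, y2)                                 # approximately 0: orthogonal
all.equal(y1^2 + y2^2, rep(1, length(u)))   # yet y1 and y2 are functionally dependent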
3,467 | What does orthogonal mean in the context of statistics? | The related random variables mean the variables say X and Y can have any relationship; may be linear or non-linear. The independence and orthogonal properties are the same if the two variables are linearly related. | What does orthogonal mean in the context of statistics? | The related random variables mean the variables say X and Y can have any relationship; may be linear or non-linear. The independence and orthogonal properties are the same if the two variables are lin | What does orthogonal mean in the context of statistics?
The related random variables mean the variables say X and Y can have any relationship; may be linear or non-linear. The independence and orthogonal properties are the same if the two variables are linearly related. | What does orthogonal mean in the context of statistics?
The related random variables mean the variables say X and Y can have any relationship; may be linear or non-linear. The independence and orthogonal properties are the same if the two variables are lin |
3,468 | What does orthogonal mean in the context of statistics? | Two or more IV's unrelated (independent) to one another but both having an influence on the DV. Each IV separately contributes a distinct value to the outcome , while both or all IV's also contribute in an additive fashion in the prediction of income (orthogonal=non-intersecting IV's influence on a DV). IV's are non-correlational amongst one another and usually positioned in a right angle *see Venn Diagram.
Example: Relationship among motivation and years of education on income.
IV= Years of Education
IV= Motivation
DV= Income
https://web.archive.org/web/20160216160117/https://onlinecourses.science.psu.edu/stat505/node/167 | What does orthogonal mean in the context of statistics? | Two or more IV's unrelated (independent) to one another but both having an influence on the DV. Each IV separately contributes a distinct value to the outcome , while both or all IV's also contribute | What does orthogonal mean in the context of statistics?
Two or more IV's are unrelated (independent) to one another but both have an influence on the DV. Each IV separately contributes a distinct value to the outcome, while both or all IV's also contribute in an additive fashion in the prediction of income (orthogonal = non-intersecting IVs' influence on a DV). IV's are uncorrelated amongst one another and usually positioned at a right angle; *see Venn Diagram.
Example: Relationship among motivation and years of education on income.
IV= Years of Education
IV= Motivation
DV= Income
https://web.archive.org/web/20160216160117/https://onlinecourses.science.psu.edu/stat505/node/167 | What does orthogonal mean in the context of statistics?
Two or more IV's unrelated (independent) to one another but both having an influence on the DV. Each IV separately contributes a distinct value to the outcome , while both or all IV's also contribute |
3,469 | Efficient online linear regression | Maindonald describes a sequential method based on Givens rotations. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the design matrix $\mathbf{X}$ into a triangular matrix $\mathbf{T}$ via an orthogonal transformation $\mathbf{Q}$ so that $\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$. (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row $v$ below $\mathbf{X}$, you effectively extend $(\mathbf{T}, \mathbf{0})'$ by a nonzero row, too, say $t$. The task is to zero out this row while keeping the entries in the position of $\mathbf{T}$ diagonal. A sequence of Givens rotations does this: the rotation with the first row of $\mathbf{T}$ zeros the first element of $t$; then the rotation with the second row of $\mathbf{T}$ zeros the second element, and so on. The effect is to premultiply $\mathbf{Q}$ by a series of rotations, which does not change its orthogonality.
When the design matrix has $p+1$ columns (which is the case when regressing on $p$ variables plus a constant), the number of rotations needed does not exceed $p+1$ and each rotation changes two $p+1$-vectors. The storage needed for $\mathbf{T}$ is $O((p+1)^2)$. Thus this algorithm has a computational cost of $O((p+1)^2)$ in both time and space.
A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do Belsley, Kuh, & Welsh. Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional $O(k (p+1))$ storage for a window of width $k$. It appears that $1/k$ would be the analog of the influence parameter.
For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data.
References
J. H. Maindonald, Statistical Computation. J. Wiley & Sons, 1984. Chapter 4.
D. A. Belsley, E. Kuh, R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. J. Wiley & Sons, 1980. | Efficient online linear regression | Maindonald describes a sequential method based on Givens rotations. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the prev | Efficient online linear regression
Maindonald describes a sequential method based on Givens rotations. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the previous step you have decomposed the design matrix $\mathbf{X}$ into a triangular matrix $\mathbf{T}$ via an orthogonal transformation $\mathbf{Q}$ so that $\mathbf{Q}\mathbf{X} = (\mathbf{T}, \mathbf{0})'$. (It's fast and easy to get the regression results from a triangular matrix.) Upon adjoining a new row $v$ below $\mathbf{X}$, you effectively extend $(\mathbf{T}, \mathbf{0})'$ by a nonzero row, too, say $t$. The task is to zero out this row while keeping the entries in the position of $\mathbf{T}$ diagonal. A sequence of Givens rotations does this: the rotation with the first row of $\mathbf{T}$ zeros the first element of $t$; then the rotation with the second row of $\mathbf{T}$ zeros the second element, and so on. The effect is to premultiply $\mathbf{Q}$ by a series of rotations, which does not change its orthogonality.
When the design matrix has $p+1$ columns (which is the case when regressing on $p$ variables plus a constant), the number of rotations needed does not exceed $p+1$ and each rotation changes two $p+1$-vectors. The storage needed for $\mathbf{T}$ is $O((p+1)^2)$. Thus this algorithm has a computational cost of $O((p+1)^2)$ in both time and space.
A similar approach lets you determine the effect on regression of deleting a row. Maindonald gives formulas; so do Belsley, Kuh, & Welsh. Thus, if you are looking for a moving window for regression, you can retain data for the window within a circular buffer, adjoining the new datum and dropping the old one with each update. This doubles the update time and requires additional $O(k (p+1))$ storage for a window of width $k$. It appears that $1/k$ would be the analog of the influence parameter.
For exponential decay, I think (speculatively) that you could adapt this approach to weighted least squares, giving each new value a weight greater than 1. There shouldn't be any need to maintain a buffer of previous values or delete any old data.
References
J. H. Maindonald, Statistical Computation. J. Wiley & Sons, 1984. Chapter 4.
D. A. Belsley, E. Kuh, R. E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. J. Wiley & Sons, 1980. | Efficient online linear regression
Maindonald describes a sequential method based on Givens rotations. (A Givens rotation is an orthogonal transformation of two vectors that zeros out a given entry in one of the vectors.) At the prev |
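For readers who want to see the mechanics, here is a compact R sketch of the Givens row update described above (my own illustration, not Maindonald's code; it assumes you already hold the upper-triangular factor R and the rotated response Q'y from a QR decomposition of the data seen so far):
add_row_givens <- function(R, qty, x, y) {
  p <- length(x)
  for (j in seq_len(p)) {
    if (x[j] == 0) next
    r <- sqrt(R[j, j]^2 + x[j]^2)
    cc <- R[j, j] / r; s <- x[j] / r       # Givens rotation that zeros x[j]
    Rj <- R[j, j:p]; xj <- x[j:p]
    R[j, j:p] <- cc * Rj + s * xj
    x[j:p]    <- -s * Rj + cc * xj         # x[j] is now 0
    qj <- qty[j]
    qty[j] <- cc * qj + s * y
    y      <- -s * qj + cc * y             # the leftover feeds the residual sum of squares
  }
  list(R = R, qty = qty)                   # coefficients: backsolve(R, qty)
}
# usage: q0 <- qr(X0); R <- qr.R(q0); qty <- qr.qty(q0, y0)[1:ncol(X0)]
# then, for each new row: st <- add_row_givens(R, qty, x_new, y_new); backsolve(st$R, st$qty)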
3,470 | Efficient online linear regression | I think recasting your linear regression model into a state-space model will give you what you are after. If you use R, you may want to use package dlm
and have a look at the companion book by Petris et al. | Efficient online linear regression | I think recasting your linear regression model into a state-space model will give you what you are after. If you use R, you may want to use package dlm
and have a look at the companion book by Petris | Efficient online linear regression
I think recasting your linear regression model into a state-space model will give you what you are after. If you use R, you may want to use package dlm
and have a look at the companion book by Petris et al. | Efficient online linear regression
I think recasting your linear regression model into a state-space model will give you what you are after. If you use R, you may want to use package dlm
and have a look at the companion book by Petris |
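Independently of the dlm package, the state-space recasting boils down to a Kalman / recursive-least-squares update; here is a minimal hand-rolled sketch (my own, not the dlm API), where a forgetting factor lambda slightly below 1 plays the role of the "influence" parameter asked about in the question:
rls_update <- function(beta, P, x, y, lambda = 1) {
  x <- as.numeric(x)
  k <- (P %*% x) / as.numeric(lambda + crossprod(x, P %*% x))   # gain vector
  beta <- beta + as.numeric(k) * as.numeric(y - sum(x * beta))  # correct by the prediction error
  P <- (P - k %*% crossprod(x, P)) / lambda                     # update the (scaled) covariance
  list(beta = beta, P = P)
}
# initialise with beta <- rep(0, p) and P <- diag(1e6, p), then call once per new observation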
3,471 | Efficient online linear regression | You can always just perform gradient descent on the sum of squares cost $E$ wrt the parameters of your model $W$. Just take the gradient of it but don't go for the closed form solution but only for the search direction instead.
Let $E(i; W)$ be the cost of the i'th training sample given the parameters $W$. Your update for the j'th parameter is then
$$W_{j} \leftarrow W_j - \alpha \frac{\partial{E(i; W)}}{\partial{W_j}}$$
where $\alpha$ is a step rate, which you should pick via cross validation or good measure.
This is very efficient and the way neural networks are typically trained. You can process even lots of samples in parallel (say, a 100 or so) efficiently.
Of course more sophisticated optimization algorithms (momentum, conjugate gradient, ...) can be applied. | Efficient online linear regression | You can always just perform gradient descent on the sum of squares cost $E$ wrt the parameters of your model $W$. Just take the gradient of it but don't go for the closed form solution but only for t | Efficient online linear regression
You can always just perform gradient descent on the sum of squares cost $E$ wrt the parameters of your model $W$. Just take the gradient of it but don't go for the closed form solution but only for the search direction instead.
Let $E(i; W)$ be the cost of the i'th training sample given the parameters $W$. Your update for the j'th parameter is then
$$W_{j} \leftarrow W_j - \alpha \frac{\partial{E(i; W)}}{\partial{W_j}}$$
where $\alpha$ is a step rate, which you should pick via cross validation or good measure.
This is very efficient and the way neural networks are typically trained. You can process even lots of samples in parallel (say, a 100 or so) efficiently.
Of course more sophisticated optimization algorithms (momentum, conjugate gradient, ...) can be applied. | Efficient online linear regression
You can always just perform gradient descent on the sum of squares cost $E$ wrt the parameters of your model $W$. Just take the gradient of it but don't go for the closed form solution but only for t |
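A minimal R sketch of this stochastic gradient approach for linear least squares (illustrative only; the step size and the number of passes are arbitrary choices here):
sgd_lm <- function(X, y, alpha = 0.01, passes = 20) {
  w <- rep(0, ncol(X))
  for (pass in seq_len(passes)) {
    for (i in sample(nrow(X))) {
      err <- y[i] - sum(X[i, ] * w)
      w <- w + alpha * err * X[i, ]   # move against the gradient of the squared error (factor 2 absorbed into alpha)
    }
  }
  w
}
# e.g. with X <- cbind(1, x), sgd_lm(X, y) should approach coef(lm(y ~ x))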
3,472 | Efficient online linear regression | Surprised no one else touched on this so far. Linear regression has a quadratic objective function. So, a newton Raphson step from any starting point leads you straight to the optima. Now, let's say you already did your linear regression. The objective function is:
$$ L(\beta) = (y - X \beta)^t (y - X \beta) $$
The gradient becomes
$$ \nabla L (\beta) = -2 X^t (y - X \beta)$$
And the hessian:
$$ \nabla^2 L (\beta) = 2 X^t X$$
Now, you got some past data and did a linear regression and are sitting with your parameters ($\beta$). The gradient at this point is zero by definition. The hessian is as given above. A new data point ($x_{new}, y_{new}$) arrives. You just calculate the gradient for the new point via:
$$\nabla L_{new}(\beta) = -2 x_{new} (y_{new}-x_{new}^T \beta)$$
and that will become your overall gradient (since the gradient from the existing data was zero). The hessian for the new data point is:
$$\nabla^2 L_{new} = 2 x_{new}x_{new}^T $$.
Add this to the old hessian given above. Then, just take a Newton Raphson step.
$$\beta_{new} = \beta_{old} - (\nabla^2L)^{-1} \nabla L_{new}$$
And you're done. | Efficient online linear regression | Surprised no one else touched on this so far. Linear regression has a quadratic objective function. So, a newton Raphson step from any starting point leads you straight to the optima. Now, let's say y | Efficient online linear regression
Surprised no one else touched on this so far. Linear regression has a quadratic objective function. So, a newton Raphson step from any starting point leads you straight to the optima. Now, let's say you already did your linear regression. The objective function is:
$$ L(\beta) = (y - X \beta)^t (y - X \beta) $$
The gradient becomes
$$ \nabla L (\beta) = -2 X^t (y - X \beta)$$
And the hessian:
$$ \nabla^2 L (\beta) = 2 X^t X$$
Now, you got some past data and did a linear regression and are sitting with your parameters ($\beta$). The gradient at this point is zero by definition. The hessian is as given above. A new data point ($x_{new}, y_{new}$) arrives. You just calculate the gradient for the new point via:
$$\nabla L_{new}(\beta) = -2 x_{new} (y_{new}-x_{new}^T \beta)$$
and that will become your overall gradient (since the gradient from the existing data was zero). The hessian for the new data point is:
$$\nabla^2 L_{new} = 2 x_{new}x_{new}^T $$.
Add this to the old hessian given above. Then, just take a Newton Raphson step.
$$\beta_{new} = \beta_{old} - (\nabla^2L)^{-1} \nabla L_{new}$$
And you're done. | Efficient online linear regression
Surprised no one else touched on this so far. Linear regression has a quadratic objective function. So, a newton Raphson step from any starting point leads you straight to the optima. Now, let's say y |
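The same update written as a small R helper (a sketch consistent with the formulas above; it keeps the full Hessian $2X^tX$, so storage is $O(p^2)$):
newton_update <- function(beta, H, x, y) {
  x <- as.numeric(x)
  H <- H + 2 * tcrossprod(x)              # add the new point's contribution to the Hessian
  g <- -2 * x * (y - sum(x * beta))       # its gradient at the current beta
  list(beta = beta - solve(H, g), H = H)  # one exact Newton-Raphson step
}
# start from an existing fit: H <- 2 * crossprod(X); beta <- coef(fit)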
3,473 | Efficient online linear regression | The standard least-square fit gives regression coefficients
$
\beta = ( X^T X )^{-1} X^T Y
$
where X is a matrix of M values for each of N data points, and is NXM in size. Y is a NX1 matrix of outputs. $\beta$ of course is a MX1 matrix of coefficients. (If you want an intercept just make one set of x's equal always to 1.)
To make this online presumably you just need to keep track of $X^T X$ and $X^T Y$, so one MXM matrix and one MX1 matrix. Every time you get a new data point you update those $M^2+M$ elements, and then calculate $\beta$ again, which costs you an MXM matrix inversion and the multiplication of the MXM matrix and the MX1 matrix.
For example, if M=1, then the one coefficient is
$
\beta = \frac{\sum_{i=1}^N{x_i y_i}}{\sum_{i=1}^N{x_i^2}}
$
so every time you get a new data point you update both sums and calculate the ratio and you get the updated coefficient.
If you want to damp out the earlier estimates geometrically I suppose you could weight $X^T X$ and $X^T Y$ by $(1-\lambda)$ each time before adding the new term, where $\lambda$ is some small number. | Efficient online linear regression | The standard least-square fit gives regression coefficients
$
\beta = ( X^T X )^{-1} X^T Y
$
where X is a matrix of M values for each of N data points, and is NXM in size. Y is a NX1 matrix of outputs | Efficient online linear regression
The standard least-square fit gives regression coefficients
$
\beta = ( X^T X )^{-1} X^T Y
$
where X is a matrix of M values for each of N data points, and is NXM in size. Y is a NX1 matrix of outputs. $\beta$ of course is a MX1 matrix of coefficients. (If you want an intercept just make one set of x's equal always to 1.)
To make this online presumably you just need to keep track of $X^T X$ and $X^T Y$, so one MXM matrix and one MX1 matrix. Every time you get a new data point you update those $M^2+M$ elements, and then calculate $\beta$ again, which costs you an MXM matrix inversion and the multiplication of the MXM matrix and the MX1 matrix.
For example, if M=1, then the one coefficient is
$
\beta = \frac{\sum_{i=1}^N{x_i y_i}}{\sum_{i=1}^N{x_i^2}}
$
so every time you get a new data point you update both sums and calculate the ratio and you get the updated coefficient.
If you want to damp out the earlier estimates geometrically I suppose you could weight $X^T X$ and $X^T Y$ by $(1-\lambda)$ each time before adding the new term, where $\lambda$ is some small number. | Efficient online linear regression
The standard least-square fit gives regression coefficients
$
\beta = ( X^T X )^{-1} X^T Y
$
where X is a matrix of M values for each of N data points, and is NXM in size. Y is a NX1 matrix of outputs |
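Taking the suggestion above literally, a short R sketch that accumulates $X^T X$ and $X^T Y$ with optional geometric down-weighting (my illustration, not the original poster's code):
make_state <- function(p) list(XtX = matrix(0, p, p), XtY = rep(0, p))
update_state <- function(s, x, y, lambda = 0) {
  x <- as.numeric(x)
  s$XtX <- (1 - lambda) * s$XtX + tcrossprod(x)   # damp the past, add the new outer product
  s$XtY <- (1 - lambda) * s$XtY + x * y
  s
}
coef_state <- function(s) solve(s$XtX, s$XtY)     # refresh the coefficients after each update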
3,474 | Efficient online linear regression | The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one time-solution is found by calculating
V = X' * X
and
C = X' * Y
note the V should have size N-by-N and C a size of N-by-M.
The parameters you're looking for are then given by:
A = inv(V) * C
Since both V and C are calculated by summing over your data, you can calculate
A at every new sample. This has a time complexity of O(N^3), however.
Since V is square and positive semi-definite, an LU decomposition exists, which
makes inverting V numerically more stable.
There are algorithms to perform rank-1 updates to the inverse of a matrix. Find those and you'll have the efficient implementation you're looking for.
The rank-1 update algorithms can be found in "Matrix computations" by Golub and van Loan. It's tough material, but it does have a comprehensive overview of such algorithms.
Note:
The method above gives a least-square estimate at each step. You can easily adding weights to the updates to X and Y. When the values of X and Y grow too large, they can be scaled by a single scalar, without affecting the result. | Efficient online linear regression | The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one time-solution is found by calculating
V = X' * X
and
C = X' * Y
note the V should have siz | Efficient online linear regression
The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one time-solution is found by calculating
V = X' * X
and
C = X' * Y
note the V should have size N-by-N and C a size of N-by-M.
The parameters you're looking for are then given by:
A = inv(V) * C
Since both V and C are calculated by summing over your data, you can calculate
A at every new sample. This has a time complexity of O(N^3), however.
Since V is square and positive semi-definite, an LU decomposition exists, which
makes inverting V numerically more stable.
There are algorithms to perform rank-1 updates to the inverse of a matrix. Find those and you'll have the efficient implementation you're looking for.
The rank-1 update algorithms can be found in "Matrix computations" by Golub and van Loan. It's tough material, but it does have a comprehensive overview of such algorithms.
Note:
The method above gives a least-square estimate at each step. You can easily adding weights to the updates to X and Y. When the values of X and Y grow too large, they can be scaled by a single scalar, without affecting the result. | Efficient online linear regression
The problem is more easily solved when you rewrite things a little bit:
Y = y
X = [x, 1 ]
then
Y = A*X
A one time-solution is found by calculating
V = X' * X
and
C = X' * Y
note the V should have siz |
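The rank-1 inverse update alluded to above is the Sherman–Morrison formula; a small R sketch of applying it to inv(V) = inv(X'X) when a new row x arrives (my illustration, not from Golub and van Loan):
sm_update <- function(Vinv, x) {
  x  <- as.numeric(x)
  Vx <- Vinv %*% x
  Vinv - tcrossprod(Vx) / as.numeric(1 + crossprod(x, Vx))
}
# A is then recovered as Vinv %*% C, after also updating C <- C + x * y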
3,475 | Perform feature normalization before or within model validation? | Your approach is entirely correct. Although data transformations are often undervalued as "preprocessing", one cannot emphasize enough that transformations in order to optimize model performance can and should be treated as part of the model building process.
Reasoning: A model shall be applied on unseen data which is in general not available at the time the model is built. The validation process (including data splitting) simulates this. So in order to get a good estimate of the model quality (and generalization power) one needs to restrict the calculation of the normalization parameters (mean and variance) to the training set.
I can only guess why this is not always done in literature. One argument could be, that the calculation of mean and variance is not that sensitive to small data variations (but even this is only true if the basic sample size is large enough and the data is approximately normally distributed without extreme outliers). | Perform feature normalization before or within model validation? | Your approach is entirely correct. Although data transformations are often undervalued as "preprocessing", one cannot emphasize enough that transformations in order to optimize model performance can a | Perform feature normalization before or within model validation?
Your approach is entirely correct. Although data transformations are often undervalued as "preprocessing", one cannot emphasize enough that transformations in order to optimize model performance can and should be treated as part of the model building process.
Reasoning: A model shall be applied on unseen data which is in general not available at the time the model is built. The validation process (including data splitting) simulates this. So in order to get a good estimate of the model quality (and generalization power) one needs to restrict the calculation of the normalization parameters (mean and variance) to the training set.
I can only guess why this is not always done in literature. One argument could be, that the calculation of mean and variance is not that sensitive to small data variations (but even this is only true if the basic sample size is large enough and the data is approximately normally distributed without extreme outliers). | Perform feature normalization before or within model validation?
Your approach is entirely correct. Although data transformations are often undervalued as "preprocessing", one cannot emphasize enough that transformations in order to optimize model performance can a |
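A minimal base-R sketch of the recommended workflow (simulated data, my own example, not from the original answer): estimate the centering and scaling parameters on the training split only, then reuse them on the held-out split.
set.seed(1)
x <- matrix(rnorm(300), 100, 3)
train <- sample(100, 70)
mu <- colMeans(x[train, ]); s <- apply(x[train, ], 2, sd)
x_train <- scale(x[train, ], center = mu, scale = s)
x_test  <- scale(x[-train, ], center = mu, scale = s)   # no information from the test rows is used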
3,476 | Perform feature normalization before or within model validation? | Feature normalization is to make different features in the same scale. The scaling speeds up gradient descent by avoiding many extra iterations that are required when one or more features take on much larger values than the rest(Without scaling, the cost function that is visualized will show a great asymmetry).
I think it makes sense that use the mean and var from training set when test data come. Yet if the data size is huge, both training and validation sets can be approximately viewed as normal distribution, thus they roughly share the mean and var. | Perform feature normalization before or within model validation? | Feature normalization is to make different features in the same scale. The scaling speeds up gradient descent by avoiding many extra iterations that are required when one or more features take on much | Perform feature normalization before or within model validation?
Feature normalization puts different features on the same scale. The scaling speeds up gradient descent by avoiding many extra iterations that are required when one or more features take on much larger values than the rest (without scaling, the visualized cost function will show a great asymmetry).
I think it makes sense to use the mean and var from the training set when test data come. Yet if the data size is huge, both training and validation sets can be viewed as approximately normally distributed, thus they roughly share the mean and var. | Perform feature normalization before or within model validation?
Feature normalization is to make different features in the same scale. The scaling speeds up gradient descent by avoiding many extra iterations that are required when one or more features take on much |
3,477 | Perform feature normalization before or within model validation? | The methodology you have described is sound as others have said. You should perform the exact same transformation on your test set features as you do on features from your training set.
I think it's worth adding that another reason for feature normalization is to enhance the performance of certain processes that are sensitive to differences to the scale of certain variables. For example principal components analysis (PCA) aims to capture the greatest proportion of variance, and as a result will give more weight to variables that exhibit the largest variance if feature normalization is not performed initially. | Perform feature normalization before or within model validation? | The methodology you have described is sound as others have said. You should perform the exact same transformation on your test set features as you do on features from your training set.
I think it's w | Perform feature normalization before or within model validation?
The methodology you have described is sound as others have said. You should perform the exact same transformation on your test set features as you do on features from your training set.
I think it's worth adding that another reason for feature normalization is to enhance the performance of certain processes that are sensitive to differences to the scale of certain variables. For example principal components analysis (PCA) aims to capture the greatest proportion of variance, and as a result will give more weight to variables that exhibit the largest variance if feature normalization is not performed initially. | Perform feature normalization before or within model validation?
The methodology you have described is sound as others have said. You should perform the exact same transformation on your test set features as you do on features from your training set.
I think it's w |
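The PCA point is easy to verify numerically; an illustrative R sketch (simulated data, my own example):
set.seed(2)
d <- cbind(a = rnorm(200, sd = 100), b = rnorm(200, sd = 1))
round(prcomp(d)$rotation[, 1], 3)                  # PC1 loadings dominated by the high-variance column a
round(prcomp(d, scale. = TRUE)$rotation[, 1], 3)   # roughly balanced after standardizing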
3,478 | Perform feature normalization before or within model validation? | Let me illustrate as to why we have to do normalization only on training data set.Say we extract features of different characteristics of apples like length,breath of the bounding box surrounding the apples,color,etc.,Let's say we normalize the feature length on the whole data and we have got mean length as 7 cm and variance as 1 cm.But if we just use train data we may get mean as 6 cm and variance as 1 cm.Which actually mean that there are large apples in the test data set which was actually not learn in the train data set. | Perform feature normalization before or within model validation? | Let me illustrate as to why we have to do normalization only on training data set.Say we extract features of different characteristics of apples like length,breath of the bounding box surrounding the | Perform feature normalization before or within model validation?
Let me illustrate why we have to do normalization only on the training data set. Say we extract features describing different characteristics of apples, like the length and breadth of the bounding box surrounding the apples, color, etc. Let's say we normalize the feature length on the whole data and get a mean length of 7 cm and a variance of 1 cm. But if we just use the train data we may get a mean of 6 cm and a variance of 1 cm, which actually means that there are large apples in the test data set that were never seen in the train data set. | Perform feature normalization before or within model validation?
Let me illustrate as to why we have to do normalization only on training data set.Say we extract features of different characteristics of apples like length,breath of the bounding box surrounding the |
3,479 | What is the effect of having correlated predictors in a multiple regression model? | The topic you are asking about is multicollinearity. You might want to read some of the threads on CV categorized under the multicollinearity tag. @whuber's answer linked above in particular is also worth your time.
The assertion that "if two predictors are correlated and both are included in a model, one will be insignificant", is not correct. If there is a real effect of a variable, the probability that variable will be significant is a function of several things, such as the magnitude of the effect, the magnitude of the error variance, the variance of the variable itself, the amount of data you have, and the number of other variables in the model. Whether the variables are correlated is also relevant, but it doesn't override these facts. Consider the following simple demonstration in R:
library(MASS) # allows you to generate correlated data
set.seed(4314) # makes this example exactly replicable
# generate sets of 2 correlated variables w/ means=0 & SDs=1
X0 = mvrnorm(n=20, mu=c(0,0), Sigma=rbind(c(1.00, 0.70), # r=.70
c(0.70, 1.00)) )
X1 = mvrnorm(n=100, mu=c(0,0), Sigma=rbind(c(1.00, 0.87), # r=.87
c(0.87, 1.00)) )
X2 = mvrnorm(n=1000, mu=c(0,0), Sigma=rbind(c(1.00, 0.95), # r=.95
c(0.95, 1.00)) )
y0 = 5 + 0.6*X0[,1] + 0.4*X0[,2] + rnorm(20) # y is a function of both
y1 = 5 + 0.6*X1[,1] + 0.4*X1[,2] + rnorm(100) # but is more strongly
y2 = 5 + 0.6*X2[,1] + 0.4*X2[,2] + rnorm(1000) # related to the 1st
# results of fitted models (skipping a lot of output, including the intercepts)
summary(lm(y0~X0[,1]+X0[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X0[, 1] 0.6614 0.3612 1.831 0.0847 . # neither variable
# X0[, 2] 0.4215 0.3217 1.310 0.2075 # is significant
summary(lm(y1~X1[,1]+X1[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X1[, 1] 0.57987 0.21074 2.752 0.00708 ** # only 1 variable
# X1[, 2] 0.25081 0.19806 1.266 0.20841 # is significant
summary(lm(y2~X2[,1]+X2[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X2[, 1] 0.60783 0.09841 6.177 9.52e-10 *** # both variables
# X2[, 2] 0.39632 0.09781 4.052 5.47e-05 *** # are significant
The correlation between the two variables is lowest in the first example and highest in the third, yet neither variable is significant in the first example and both are in the last example. The magnitude of the effects is identical in all three cases, and the variances of the variables and the errors should be similar (they are stochastic, but drawn from populations with the same variance). The pattern we see here is due primarily to my manipulating the $N$s for each case.
The key concept to understand to resolve your questions is the variance inflation factor (VIF). The VIF is how much the variance of your regression coefficient is larger than it would otherwise have been if the variable had been completely uncorrelated with all the other variables in the model. Note that the VIF is a multiplicative factor, if the variable in question is uncorrelated the VIF=1. A simple understanding of the VIF is as follows: you could fit a model predicting a variable (say, $X_1$) from all other variables in your model (say, $X_2$), and get a multiple $R^2$. The VIF for $X_1$ would be $1/(1-R^2)$. Let's say the VIF for $X_1$ were $10$ (often considered a threshold for excessive multicollinearity), then the variance of the sampling distribution of the regression coefficient for $X_1$ would be $10\times$ larger than it would have been if $X_1$ had been completely uncorrelated with all the other variables in the model.
Thinking about what would happen if you included both correlated variables vs. only one is similar, but slightly more complicated than the approach discussed above. This is because not including a variable means the model uses less degrees of freedom, which changes the residual variance and everything computed from that (including the variance of the regression coefficients). In addition, if the non-included variable really is associated with the response, the variance in the response due to that variable will be included into the residual variance, making it larger than it otherwise would be. Thus, several things change simultaneously (the variable is correlated or not with another variable, and the residual variance), and the precise effect of dropping / including the other variable will depend on how those trade off. The best way to think through this issue is based on the counterfactual of how the model would differ if the variables were uncorrelated instead of correlated, rather than including or excluding one of the variables.
Armed with an understanding of the VIF, here are the answers to your questions:
Because the variance of the sampling distribution of the regression coefficient would be larger (by a factor of the VIF) if it were correlated with other variables in the model, the p-values would be higher (i.e., less significant) than they otherwise would.
The variances of the regression coefficients would be larger, as already discussed.
In general, this is hard to know without solving for the model. Typically, if only one of two is significant, it will be the one that had the stronger bivariate correlation with $Y$.
How the predicted values and their variance would change is quite complicated. It depends on how strongly correlated the variables are and the manner in which they appear to be associated with your response variable in your data. Regarding this issue, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression? | What is the effect of having correlated predictors in a multiple regression model? | The topic you are asking about is multicollinearity. You might want to read some of the threads on CV categorized under the multicollinearity tag. @whuber's answer linked above in particular is also | What is the effect of having correlated predictors in a multiple regression model?
The topic you are asking about is multicollinearity. You might want to read some of the threads on CV categorized under the multicollinearity tag. @whuber's answer linked above in particular is also worth your time.
The assertion that "if two predictors are correlated and both are included in a model, one will be insignificant", is not correct. If there is a real effect of a variable, the probability that variable will be significant is a function of several things, such as the magnitude of the effect, the magnitude of the error variance, the variance of the variable itself, the amount of data you have, and the number of other variables in the model. Whether the variables are correlated is also relevant, but it doesn't override these facts. Consider the following simple demonstration in R:
library(MASS) # allows you to generate correlated data
set.seed(4314) # makes this example exactly replicable
# generate sets of 2 correlated variables w/ means=0 & SDs=1
X0 = mvrnorm(n=20, mu=c(0,0), Sigma=rbind(c(1.00, 0.70), # r=.70
c(0.70, 1.00)) )
X1 = mvrnorm(n=100, mu=c(0,0), Sigma=rbind(c(1.00, 0.87), # r=.87
c(0.87, 1.00)) )
X2 = mvrnorm(n=1000, mu=c(0,0), Sigma=rbind(c(1.00, 0.95), # r=.95
c(0.95, 1.00)) )
y0 = 5 + 0.6*X0[,1] + 0.4*X0[,2] + rnorm(20) # y is a function of both
y1 = 5 + 0.6*X1[,1] + 0.4*X1[,2] + rnorm(100) # but is more strongly
y2 = 5 + 0.6*X2[,1] + 0.4*X2[,2] + rnorm(1000) # related to the 1st
# results of fitted models (skipping a lot of output, including the intercepts)
summary(lm(y0~X0[,1]+X0[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X0[, 1] 0.6614 0.3612 1.831 0.0847 . # neither variable
# X0[, 2] 0.4215 0.3217 1.310 0.2075 # is significant
summary(lm(y1~X1[,1]+X1[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X1[, 1] 0.57987 0.21074 2.752 0.00708 ** # only 1 variable
# X1[, 2] 0.25081 0.19806 1.266 0.20841 # is significant
summary(lm(y2~X2[,1]+X2[,2]))
# Estimate Std. Error t value Pr(>|t|)
# X2[, 1] 0.60783 0.09841 6.177 9.52e-10 *** # both variables
# X2[, 2] 0.39632 0.09781 4.052 5.47e-05 *** # are significant
The correlation between the two variables is lowest in the first example and highest in the third, yet neither variable is significant in the first example and both are in the last example. The magnitude of the effects is identical in all three cases, and the variances of the variables and the errors should be similar (they are stochastic, but drawn from populations with the same variance). The pattern we see here is due primarily to my manipulating the $N$s for each case.
The key concept to understand to resolve your questions is the variance inflation factor (VIF). The VIF is how much the variance of your regression coefficient is larger than it would otherwise have been if the variable had been completely uncorrelated with all the other variables in the model. Note that the VIF is a multiplicative factor, if the variable in question is uncorrelated the VIF=1. A simple understanding of the VIF is as follows: you could fit a model predicting a variable (say, $X_1$) from all other variables in your model (say, $X_2$), and get a multiple $R^2$. The VIF for $X_1$ would be $1/(1-R^2)$. Let's say the VIF for $X_1$ were $10$ (often considered a threshold for excessive multicollinearity), then the variance of the sampling distribution of the regression coefficient for $X_1$ would be $10\times$ larger than it would have been if $X_1$ had been completely uncorrelated with all the other variables in the model.
Thinking about what would happen if you included both correlated variables vs. only one is similar, but slightly more complicated than the approach discussed above. This is because not including a variable means the model uses less degrees of freedom, which changes the residual variance and everything computed from that (including the variance of the regression coefficients). In addition, if the non-included variable really is associated with the response, the variance in the response due to that variable will be included into the residual variance, making it larger than it otherwise would be. Thus, several things change simultaneously (the variable is correlated or not with another variable, and the residual variance), and the precise effect of dropping / including the other variable will depend on how those trade off. The best way to think through this issue is based on the counterfactual of how the model would differ if the variables were uncorrelated instead of correlated, rather than including or excluding one of the variables.
Armed with an understanding of the VIF, here are the answers to your questions:
Because the variance of the sampling distribution of the regression coefficient would be larger (by a factor of the VIF) if it were correlated with other variables in the model, the p-values would be higher (i.e., less significant) than they otherwise would.
The variances of the regression coefficients would be larger, as already discussed.
In general, this is hard to know without solving for the model. Typically, if only one of two is significant, it will be the one that had the stronger bivariate correlation with $Y$.
How the predicted values and their variance would change is quite complicated. It depends on how strongly correlated the variables are and the manner in which they appear to be associated with your response variable in your data. Regarding this issue, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression? | What is the effect of having correlated predictors in a multiple regression model?
The topic you are asking about is multicollinearity. You might want to read some of the threads on CV categorized under the multicollinearity tag. @whuber's answer linked above in particular is also |
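As a small companion to the VIF paragraph above (my own sketch; it assumes the simulated objects from the answer's R code are still in the workspace):
r2  <- summary(lm(X2[, 1] ~ X2[, 2]))$r.squared   # regress one predictor on the other
vif <- 1 / (1 - r2)
vif                                               # roughly 1/(1 - 0.95^2), i.e. about 10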
3,480 | What is the effect of having correlated predictors in a multiple regression model? | This is more of comment, but I wanted to include a graph and some code.
I think the statement "if two predictors are correlated and both are included in a model, one will be insignificant" is false if you mean "only one." Binary statistical significance cannot be used for variable selection.
Here's my counterexample using a regression of body fat percentage on thigh circumference, skin fold thickness*, and mid arm circumference:
. webuse bodyfat, clear
(Body Fat)
. reg bodyfat thigh triceps midarm
Source | SS df MS Number of obs = 20
-------------+------------------------------ F( 3, 16) = 21.52
Model | 396.984607 3 132.328202 Prob > F = 0.0000
Residual | 98.4049068 16 6.15030667 R-squared = 0.8014
-------------+------------------------------ Adj R-squared = 0.7641
Total | 495.389513 19 26.0731323 Root MSE = 2.48
------------------------------------------------------------------------------
bodyfat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
thigh | -2.856842 2.582015 -1.11 0.285 -8.330468 2.616785
triceps | 4.334085 3.015511 1.44 0.170 -2.058512 10.72668
midarm | -2.186056 1.595499 -1.37 0.190 -5.568362 1.19625
_cons | 117.0844 99.78238 1.17 0.258 -94.44474 328.6136
------------------------------------------------------------------------------
. corr bodyfat thigh triceps midarm
(obs=20)
| bodyfat thigh triceps midarm
-------------+------------------------------------
bodyfat | 1.0000
thigh | 0.8781 1.0000
triceps | 0.8433 0.9238 1.0000
midarm | 0.1424 0.0847 0.4578 1.0000
. ellip thigh triceps, coefs plot( (scatteri `=_b[thigh]' `=_b[triceps]'), yline(0, lcolor(gray)) xline(0, lcolor(gray)) legend(off))
As you can see from the regression table, everything is insignificant, though the p-values do vary a bit.
The last Stata command graphs the confidence region for 2 of the regression coefficients (a two dimensional analog of the familiar confidence intervals) along with the point estimates (red dot). The confidence ellipse for the skin fold thickness and thigh circumference coefficients is long, narrow and tilted, reflecting the collinearity in the regressors. There's high negative covariance between the estimated coefficients. The ellipse covers parts of the vertical and the horizontal axes, which means that we cannot reject the individual hypotheses that the $\beta$s are zero, though we can reject the joint null that both are since the ellipse does not cover the origin. In other words, either thigh or triceps is relevant for body fat, but you can't determine which one is the culprit.
So how do we know which predictors would be less significant? The variation in a regressor can be classified into two types:
Variation unique to each regressor
Variation that is shared by the regressors
In estimating the coefficients of each regressor, only the first will be used. Common variation is ignored since it cannot be allocated, though it is used in prediction and calculating $R^2$. When there is little unique information, the confidence will be low and coefficient variances will be high. The higher the multicollinearity, the smaller the unique variation, and the greater the variances.
*The skin fold is the width of a fold of skin taken over the triceps muscle, and measured using a caliper. | What is the effect of having correlated predictors in a multiple regression model? | This is more of comment, but I wanted to include a graph and some code.
I think the statement "if two predictors are correlated and both are included in a model, one will be insignificant" is false if | What is the effect of having correlated predictors in a multiple regression model?
This is more of a comment, but I wanted to include a graph and some code.
I think the statement "if two predictors are correlated and both are included in a model, one will be insignificant" is false if you mean "only one." Binary statistical significance cannot be used for variable selection.
Here's my counterexample using a regression of body fat percentage on thigh circumference, skin fold thickness*, and mid arm circumference:
. webuse bodyfat, clear
(Body Fat)
. reg bodyfat thigh triceps midarm
Source | SS df MS Number of obs = 20
-------------+------------------------------ F( 3, 16) = 21.52
Model | 396.984607 3 132.328202 Prob > F = 0.0000
Residual | 98.4049068 16 6.15030667 R-squared = 0.8014
-------------+------------------------------ Adj R-squared = 0.7641
Total | 495.389513 19 26.0731323 Root MSE = 2.48
------------------------------------------------------------------------------
bodyfat | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
thigh | -2.856842 2.582015 -1.11 0.285 -8.330468 2.616785
triceps | 4.334085 3.015511 1.44 0.170 -2.058512 10.72668
midarm | -2.186056 1.595499 -1.37 0.190 -5.568362 1.19625
_cons | 117.0844 99.78238 1.17 0.258 -94.44474 328.6136
------------------------------------------------------------------------------
. corr bodyfat thigh triceps midarm
(obs=20)
| bodyfat thigh triceps midarm
-------------+------------------------------------
bodyfat | 1.0000
thigh | 0.8781 1.0000
triceps | 0.8433 0.9238 1.0000
midarm | 0.1424 0.0847 0.4578 1.0000
. ellip thigh triceps, coefs plot( (scatteri `=_b[thigh]' `=_b[triceps]'), yline(0, lcolor(gray)) xline(0, lcolor(gray)) legend(off))
As you can see from the regression table, everything is insignificant, though the p-values do vary a bit.
The last Stata command graphs the confidence region for 2 of the regression coefficients (a two dimensional analog of the familiar confidence intervals) along with the point estimates (red dot). The confidence ellipse for the skin fold thickness and thigh circumference coefficients is long, narrow and tilted, reflecting the collinearity in the regressors. There's high negative covariance between the estimated coefficients. The ellipse covers parts of the vertical and the horizontal axes, which means that we cannot reject the individual hypotheses that the $\beta$s are zero, though we can reject the joint null that both are since the ellipse does not cover the origin. In other words, either thigh or triceps is relevant for body fat, but you can't determine which one is the culprit.
So how do we know which predictors would be less significant? The variation in a regressor can be classified into two types:
Variation unique to each regressor
Variation that is shared by the regressors
In estimating the coefficients of each regressor, only the first will be used. Common variation is ignored since it cannot be allocated, though it is used in prediction and calculating $R^2$. When there is little unique information, the confidence will be low and coefficient variances will be high. The higher the multicollinearity, the smaller the unique variation, and the greater the variances.
*The skin fold is the width of a fold of skin taken over the triceps muscle, and measured using a caliper. | What is the effect of having correlated predictors in a multiple regression model?
This is more of comment, but I wanted to include a graph and some code.
I think the statement "if two predictors are correlated and both are included in a model, one will be insignificant" is false if |
3,481 | What is the effect of having correlated predictors in a multiple regression model? | As @whuber noted, this is a complex question. However, the first sentence of your post is a vast simplification. It is often the case that two (or more) variables will be correlated and both related to the dependent variable. Whether they are significant or not depends on both effect size and cell size.
In your example, suppose that, for a given size of house, people preferred fewer rooms (at least in NYC, this isn't unreasonable - it would indicate older buildings, more solid walls, etc., and might be a marker for neighborhood). Then both could be significant, in opposite directions!
Or, suppose the two variables were house size and neighborhood - these would surely be correlated (larger houses tend to be in better neighborhoods), but they could still both be significant, and both would surely be related to house price.
Also, using only "correlated" masks complexities. Variables can be strongly related without being correlated. | What is the effect of having correlated predictors in a multiple regression model? | As @whuber noted, this is a complex question. However, the first sentence of your post is a vast simplification. It is often the case that two (or more) variables will be correlated and both related t | What is the effect of having correlated predictors in a multiple regression model?
3,482 | References containing arguments against null hypothesis significance testing? | Chris Fraley has taught a whole course on the history of the debate (the link seems to be broken, even though it's still on his official site; here is a copy in Internet Archive). His summary/conclusion is here (again, archived copy). According to Fraley's homepage, the last time he taught this course was in 2003.
He prefaces this list with an "Instructor's bias":
Although my goal is to facilitate lively, deep, and fair discussions on the issues at hand, I believe that it is necessary to make my bias explicit from the outset. Paul Meehl once stated that "Sir Ronald [Fisher] has befuddled us, mesmerized us, and led us down the primrose path. I believe that the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology." I echo Meehl's sentiment. One of my goals in this seminar is to make it clear why I believe this to be the case. Furthermore, I expect you, by the time you have completed this seminar, to be able to articulate and defend your stance on the NHST debate, regardless of what that stance is.
I'll copy in the reading list in case the course page ever disappears:
Week 1. Introduction: What is a Null Hypothesis Significance Test? Facts, Myths, and the State of Our Science
Lykken, D. T. (1991). What’s wrong with psychology? In D. Cicchetti & W.M. Grove (eds.), Thinking Clearly about Psychology, vol. 1: Matters of Public Interest, Essays in honor of Paul E. Meehl (pp. 3 – 39). Minneapolis, MN: University of Minnesota Press.
Week 2. Early Criticisms of NHST
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34, 103-115.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Rozeboom, W. W. (1960). The fallacy of the null hypothesis significance test. Psychological Bulletin, 57, 416-428.
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 423-437. [optional]
Week 3. Contemporary Criticisms of NHST
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.
Gigerenzer, G. (1993). The superego, the ego, and the id in statistical reasoning. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 311-339). Hillsdale, NJ: Lawrence Erlbaum Associates.
Schmidt, F. L. & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.) What if there were no significance tests? (pp. 37-64). Mahwah, NJ: Lawrence Erlbaum Associates.
Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley. (Chapter 2 [A Critique of Significance Tests]) [optional]
Week 4. Rebuttal: Advocates of NHST Come to Its Defense
Frick, R. W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1, 379-390.
Hagen, R. L. (1997). In praise of the null hypothesis statistical test. American Psychologist, 52, 15-24.
Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
Wainer, H. (1999). One cheer for null hypothesis significance testing. Psychological Methods, 4, 212-213.
Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and place for significance testing. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 65-116). Mahwah, NJ: Lawrence Erlbaum Associates. [optional]
Week 5. Rebuttal: Advocates of NHST Come to Its Defense
Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8, 12-15.
Krueger, J. (2001). Null hypothesis significance testing: On the survival of a flawed method. American Psychologist, 56, 16-26.
Scarr, S. (1997). Rules of evidence: A larger context for the statistical debate. Psychological Science, 8, 16-17.
Greenwald, A. G., Gonzalez, R., Harris, R. J., & Guthrie, D. (1996). Effect sizes and p values: What should be reported and what should be replicated? Psychophysiology, 33, 175-183.
Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5, 241-301. [optional]
Harris, R. J. (1997). Significance tests have their place. Psychological Science, 8, 8-11. [optional]
Week 6. Effect Size
Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage. [Ch. 2, Defining Research Results]
Chow, S. L. (1988). Significance test or effect size? Psychological Bulletin, 103, 105-110.
Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133. [optional]
Week 7. Statistical Power
Hallahan, M., & Rosenthal, R. (1996). Statistical power: Concepts, procedures, and applications. Behaviour Research and Therapy, 34, 489-499.
Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145-153. [optional]
Maddock, J. E., Rossi, J. S. (2001). Statistical power of articles published in three health-psychology related journals. Health Psychology, 20, 76-78. [optional]
Thomas, L. & Juanes, F. (1996). The importance of statistical power analysis: An example from Animal Behaviour. Animal Behaviour, 52, 856-859. [optional]
Rossi, J. S. (1990). Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology, 58, 646-656. [optional]
Tukey, J. W. (1969). Analyzing data: Sanctification or detective work? American Psychologist, 24, 83-91. [optional]
Week 8. Confidence Intervals and Significance testing
Gardner, M. J., & Altman, D. G. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. British Medical Journal, 292, 746-750.
Cumming, G., & Finch, S. (2001). A primer on understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 532-574.
Loftus, G. R., & Masson, M.E.J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin and Review, 1, 476-490.
Week 9 [note: we are skipping this section]. Theoretical Modeling: Developing Formal Models of Natural Phenomena
Haefner, J. W. (1996). Modeling biological systems: Principles and applications. New York: International Thomson Publishing. (Chapters 1 [Models of Systems] & 2 [The Modeling Process])
Loehlin, J. C. (1992). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Lawrence Erlbaum Associates. (Chapter 1 [Path models in factor, path and structural analysis], pp. 1-18)
Grant, D. A. (1962). Testing the null hypothesis and the strategy of investigating theoretical models. Psychological Review, 69, 54-61. [optional]
Binder, A. (1963). Further considerations on testing the null hypothesis and the strategy and tactics of investigating theoretical models. Psychological Review, 70, 107-115. [optional]
Edwards, W. (1965). Tactical note on the relations between scientific and statistical hypotheses. Psychological Bulletin, 63, 400-402. [optional]
Week 10. What is the Meaning of Probability? Controversy Concerning Relative Frequency and Subjective Probability
Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York: W. H. Freeman. (Chapters 10, 11, & 12)
Oakes, M. (1986). Statistical inference: A commentary for the social and behavioral sciences. New York: Wiley. (Chapters 4, 5, & 6)
Pruzek, R. M. (1997). An introduction to Bayesian inference and its applications. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 287-318). Mahwah, NJ: Lawrence Erlbaum Associates.
Rindskopf, D. M. (1997). Testing "small," not null, hypotheses: Classical and Bayesian approaches. In Lisa A. Harlow, Stanley A. Mulaik, and James H. Steiger (Eds.), What if there were no significance tests? (pp. 319-332). Mahwah, NJ: Lawrence Erlbaum Associates.
Edwards, W., Lindman, H., Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242. [optional]
Week 11. Theory Appraisal: Philosophy of Science and the Testing and Amending of Theories
Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1, 108-141.
Roberts, S. & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107, 358-367.
Week 12. Theory Appraisal: Philosophy of Science and the Testing and Amending of Theories
Urbach, P. (1974). Progress and degeneration in the "IQ debate" (I). British Journal of Philosophy of Science, 25, 99-125.
Serlin, R. C. & Lapsley, D. K. (1985). Rationality in psychological research: The good-enough principle. American Psychologist, 40, 73-83.
Dar, R. (1987). Another look at Meehl, Lakatos, and the scientific practices of psychologists. American Psychologist, 42, 145-151.
Gholson, B. & Barker, P. (1985). Kuhn, Lakatos, & Laudan: Applications in the history of physics and psychology. American Psychologist, 40, 755-769. [optional]
Faust, D., & Meehl, P. E. (1992). Using scientific methods to resolve questions in the history and philosophy of science: Some illustrations. Behavior Therapy, 23, 195-211. [optional]
Urbach, P. (1974). Progress and degeneration in the "IQ debate" (II). British Journal of Philosophy of Science, 25, 235-259. [optional]
Salmon, W. C. (1973, May). Confirmation. Scientific American, 228, 75-83. [optional]
Meehl, P. E. (1993). Philosophy of science: Help or hindrance? Psychological Reports, 72, 707-733. [optional]
Manicas, P. T., & Secord, P. F. (1983). Implications for psychology of the new philosophy of science. American Psychologist, 38, 399-413. [optional]
Week 13. Has the NHST Tradition Undermined a Non-Biased, Cumulative Knowledge Base in Psychology?
Cooper, H., DeNeve, K., & Charlton, K. (1997). Finding the missing science: The fate of studies submitted for review by a human subjects committee. Psychological Methods, 2, 447-452.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129.
Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.
Berger, J. O. & Berry, D. A. (1988). Statistical analysis and illusion of objectivity. American Scientist, 76, 159-165.
Week 14. Replication and Scientific Integrity
Smith, N. C. (1970). Replication studies: A neglected aspect of psychological research. American Psychologist, 25, 970-975.
Sohn, D. (1998). Statistical significance and replicability: Why the former does not presage the latter. Theory and Psychology, 8, 291-311.
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
Platt, J. R. (1964). Strong Inference. Science, 146, 347-353.
Feynman, R. P. (1997). Surely you’re joking, Mr. Feynman! New York: W. W. Norton. (Chapter: Cargo-cult science).
Rorer, L. G. (1991). Some myths of science in psychology. In D. Cicchetti & W.M. Grove (eds.), Thinking Clearly about Psychology, vol. 1: Matters of Public Interest, Essays in honor of Paul E. Meehl (pp. 61 – 87). Minneapolis, MN: University of Minnesota Press. [optional]
Lindsay, R. M. & Ehrenberg, A. S. C. (1993). The design of replicated studies. The American Statistician, 47, 217-228. [optional]
Week 15. Quantitative Thinking: Why We Need Mathematics (and not NHST per se) in Psychological Science
Aiken, L. S., West, S. G., Sechrest, L., & Reno, R. R. (1990). Graduate training in statistics, methodology, and measurement in psychology: A survey of Ph.D. programs in North America. American Psychologist, 45, 721-734.
Meehl, P. E. (1998, May). The power of quantitative thinking. Invited address as recipient of the James McKeen Cattell Award at the annual meeting of the American Psychological Society, Washington, DC.
3,483 | References containing arguments against null hypothesis significance testing? | These are excellent references. I have a perhaps useful handout at http://hbiostat.org/bayes
3,484 | References containing arguments against null hypothesis significance testing? | 402 Citations Questioning the Indiscriminate Use of
Null Hypothesis Significance Tests in Observational Studies:
http://warnercnr.colostate.edu/~anderson/thompson1.html
3,485 | Why is the square root transformation recommended for count data? | The square root is approximately variance-stabilizing for the Poisson. There are a number of variations on the square root that improve the properties, such as adding $\frac{3}{8}$ before taking the square root, or the Freeman-Tukey ($\sqrt{X}+\sqrt{X+1}$ - though it's often adjusted for the mean as well).
In the plots below, we have a Poisson $Y$ vs a predictor $x$ (with mean of $Y$ a multiple of $x$), and then $\sqrt{Y}$ vs $\sqrt{x}$ and then $\sqrt{Y+\frac{3}{8}}$ vs $\sqrt{x}$.
The square root transformation somewhat improves symmetry - though not as well as the $\frac{2}{3}$ power does [1]:
If you particularly want near-normality (as long as the parameter of the Poisson is not really small) and don't care about/can adjust for heteroscedasticity, try $\frac{2}{3}$ power.
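A quick numerical check of the variance-stabilizing claim (my own sketch, not part of the original answer): for Poisson draws with different means, the variance of the square root, and of the square root after adding 3/8, stays close to 1/4, while the raw variance grows with the mean.

import numpy as np
rng = np.random.default_rng(1)
for mu in (2, 5, 10, 50):
    y = rng.poisson(mu, size=200_000)
    print(mu,
          round(y.var(), 2),                   # raw variance grows with the mean
          round(np.sqrt(y).var(), 3),          # close to 0.25 once mu isn't tiny
          round(np.sqrt(y + 3 / 8).var(), 3))  # the 3/8 version, also close to 0.25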
The canonical link is not generally a particularly good transformation for Poisson data; log zero being a particular issue (another is heteroskedasticity; you can also get left-skewness even when you don't have 0's). If the smallest values are not too close to 0 it can be useful for linearizing the mean. It's a good 'transformation' for the conditional population mean of a Poisson in a number of contexts, but not always of Poisson data. However if you do want to transform, one common strategy is to add a constant $y^*=\log(y+c)$ which avoids the $0$ issue. In that case we should consider what constant to add. Without getting too far from the question at hand, values of $c$ between $0.4$ and $0.5$ work very well (e.g. in relation to bias in the slope estimate) across a range of $\mu$ values. I usually just use $\frac12$ since it's simple, with values around $0.43$ often doing just slightly better.
As for why people choose one transformation over another (or none) -- that's really a matter of what they're doing it to achieve.
[1]: Plots patterned after Henrik Bengtsson's plots in his handout "Generalized Linear Models and Transformed Residuals" (see here, first slide on p. 4). I added a little y-jitter and omitted the lines.
3,486 | Real-life examples of moving average processes | One very common cause is mis-specification. For example, let $y$ be grocery sales and $\varepsilon$ be an unobserved (to the analyst) coupon campaign that varies in intensity over time. At any point in time, there may be several "vintages" of coupons circulating as people use them, throw them away, and receive new ones. Shocks can also have persistent (but gradually weakening) effects. Take natural disasters or simply bad weather. Battery sales go up before the storm, then fall during, and then jump again as people realize that disaster kits may be a good idea for the future.
Similarly, data manipulation (like smoothing or interpolation) can induce this effect.
I also have "inherently smooth behavior of time series data (inertia) can cause $MA(1)$" in my notes, but that one no longer makes sense to me. | Real-life examples of moving average processes | One very common cause is mis-specification. For example, let $y$ be grocery sales and $\varepsilon$ be an unobserved (to the analyst) coupon campaign that varies in intensity over time. At any point i | Real-life examples of moving average processes
3,487 | Real-life examples of moving average processes | Suppose you are producing some good, stockpiling some of it and selling the rest. Your production in time period $t$ is $x_t=m+\varepsilon_t$ with $\mathbb{E}(\varepsilon_t)=0$ and your stock is $y_t$. The sequence of $\varepsilon$s is i.i.d. A $1-\theta_1$ fraction of the period's production is sold during the next period, and the remaining $\theta_1$ fraction during the one after that. Then your stockpile is
\begin{aligned}
y_t&=x_t+\theta_1x_{t-1} \\
&=\mu+\varepsilon_t+\theta_1\varepsilon_{t-1},
\end{aligned}
where $\mu=(1+\theta_1)m$. Thus, $y_t$ follows an MA(1) process.
If it took a longer time ($q+1$ periods instead of $2$ periods) to sell a period's production, you would have an MA(q) process.
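A simulation check of the derivation above (my own sketch, under the stated assumptions): the lag-1 autocorrelation of the stockpile matches the MA(1) value $\theta_1/(1+\theta_1^2)$ and the higher-lag autocorrelations vanish.

import numpy as np
rng = np.random.default_rng(3)
T, m, theta1 = 200_000, 10.0, 0.4
eps = rng.normal(0, 1, T)
x = m + eps                        # production
y = x[1:] + theta1 * x[:-1]        # stockpile: y_t = x_t + theta1 * x_{t-1}
def acf(z, lag):
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))
print(round(theta1 / (1 + theta1**2), 3))          # theoretical lag-1 value, ~0.345
print([round(acf(y, k), 3) for k in (1, 2, 3)])    # ~0.345, ~0, ~0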
3,488 | Real-life examples of moving average processes | in our article
Scaling portfolio volatility and calculating risk contributions in the presence of serial
cross-correlations, we analyze a multivariate model of asset returns. Due to the different closing times of the stock exchanges, a dependence structure (via the covariance) appears. This dependence only holds for one period. Thus we model this as a vector moving average process of order $1$ (see pages 4 and 5).
The resulting portfolio process is a linear transformation of a $VMA(1)$ process which in general is an $MA(q)$ process with $q\ge1$ (see details on pages 15 and 16).
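The mechanism can be mimicked with a toy example (my own sketch, not taken from the cited paper): let one market's shock spill into the other market's next-period return, so the return vector is a VMA(1); an equal-weight portfolio of the two then shows MA-type autocorrelation at lag 1 only.

import numpy as np
rng = np.random.default_rng(4)
T = 200_000
e1 = rng.normal(0, 1, T)           # shocks of market 1 (closes earlier)
e2 = rng.normal(0, 1, T)           # shocks of market 2
r1 = e1.copy()
r2 = e2.copy()
r2[1:] += 0.5 * e1[:-1]            # market 2 reacts to market 1's previous shock
port = 0.5 * r1 + 0.5 * r2         # equal-weight portfolio return
def acf(z, lag):
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))
print([round(acf(port, k), 3) for k in (1, 2, 3)])   # non-zero at lag 1, ~0 afterwards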
3,489 | Real-life examples of moving average processes | It is true that MA processes are more difficult to explain to users than AR processes. However, they are very ubiquitous. The most common MA-type process that you may not have recognized as one is a low-pass filter.
The active versions would be a "TREBLE" knob on your car stereo, or a tone control knob on your guitar.
Here's how the most primitive passive RC series circuit works. At high frequencies it integrates:
$$V_C \approx \frac{1}{RC}\int_{0}^{t}V_\mathrm{in}\,dt\,,$$
You should recognize the continuous-time version of the MA process in this equation. This happens because the capacitor's impedance changes with the frequency of the input.
The filter is called a low pass because at low frequencies it doesn't integrate, and lets them pass as is:
$$V_\mathrm{in} \approx V_C$$
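A discrete-time way to see the same low-pass behaviour (my own sketch, not from the original answer): the gain of a simple 5-point moving-average (FIR) filter is 1 at frequency zero and much smaller at high frequencies, which is exactly the "lets low frequencies pass" property described above.

import numpy as np
b = np.ones(5) / 5                                            # coefficients of a 5-point moving average
for f in (0.0, 0.05, 0.1, 0.25, 0.5):                         # normalized frequency, cycles per sample
    H = np.sum(b * np.exp(-2j * np.pi * f * np.arange(5)))    # frequency response H(f)
    print(f, round(abs(H), 3))                                # gain: 1.0, ~0.9, ~0.65, 0.2, 0.2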
3,490 | Real-life examples of moving average processes | Consecutive multiple-step-ahead forecast errors from optimal forecasts will be MA processes.
For example, suppose the data generating process is a random walk: $X_t=X_{t-1}+\varepsilon_t$ where $\varepsilon_t\sim\text{i.i.d.}(0,\sigma_\varepsilon^2)$.
If you are at time $t$ predicting the value of the process at time $t+3$, the optimal forecast is $X_t$. The forecast error is therefore $e_{t+3|t}=X_{t+3}-X_t=\varepsilon_{t+3}+\varepsilon_{t+2}+\varepsilon_{t+1}$.
If you repeat the forecasting exercise at time $t+1$, you have the optimal prediction $X_{t+1}$ and the forecast error $e_{t+4|t+1}=X_{t+4}-X_{t+1}=\varepsilon_{t+4}+\varepsilon_{t+3}+\varepsilon_{t+2}$.
Now $e_{t+3|t}$ and $e_{t+4|t+1}$ will be (positively) correlated because they share two elements, $\varepsilon_{t+3}$ and $\varepsilon_{t+2}$. Similarly, $e_{t+3|t}$ and $e_{t+5|t+2}$ will be (positively) correlated because they share one element, $\varepsilon_{t+3}$. $e_{t+3|t}$ and $e_{t+6|t+3}$ will, however, not be correlated because there is no shared element and $\varepsilon_t$ is an i.i.d. sequence.
The fact that the autocorrelation cuts off abruptly after several periods is characteristic of MA processes. Indeed, it is not difficult to show that the sequence of consecutive 3-step-ahead forecast errors $(e_{t+3|t},e_{t+4|t+1},e_{t+5|t+2},\dots)$ is an MA(2) process. More generally, when predicting $h$ steps ahead, consecutive errors from an optimal forecast form an MA($q$) process with $q\leq h-1$. (The precise value of $q$ depends on the memory of the process being forecast. For a random walk, $q=h-1$; for some processes with shorter memory, $q<h-1$. For a process with no memory, $q=0$.)
Processes of consecutive multiple-step-ahead forecast errors are common. You see them in macroeconomics (long-term forecasts of GDP, inflation, unemployment, etc.), finance (forecasts of asset returns, currency exchange rates, etc.) and beyond. While hardly any of the forecasts are optimal, some are close to that, and their forecast errors will resemble MA processes quite closely. An example could be the random-walk based multiple-step-ahead forecast of daily stock prices as detailed above.
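A quick simulation of the random-walk example (my own sketch, not part of the original answer): the series of consecutive 3-step-ahead forecast errors has autocorrelations of roughly 2/3 at lag 1 and 1/3 at lag 2, and essentially none beyond that, exactly as an MA(2) should.

import numpy as np
rng = np.random.default_rng(5)
T, h = 200_000, 3
eps = rng.normal(0, 1, T)
X = np.cumsum(eps)                  # random walk
e = X[h:] - X[:-h]                  # 3-step-ahead forecast errors X_{t+3} - X_t
def acf(z, lag):
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))
print([round(acf(e, k), 3) for k in (1, 2, 3, 4)])   # ~0.667, ~0.333, ~0, ~0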
3,491 | Real-life examples of moving average processes | Increments of cumulative processes measured over overlapping periods of time are MA processes when increments are i.i.d. If
$$
x_t=\sum_{\tau=0}^t\varepsilon_\tau
$$
where $\varepsilon_\tau\sim i.i.d.$, then
$$
(x_t-x_{t-s},x_{t+1}-x_{t+1-s},\dots)=(\sum_{\tau=t-s+1}^t\varepsilon_\tau,\sum_{\tau=t-s+2}^{t+1}\varepsilon_\tau,\dots)
$$
is an MA($s-1$) process.
A prime example of an approximate* MA process is multi-period asset returns (concretely, price changes). E.g. a daily series of yearly returns on a stock has a one-year-minus-one-day overlap between consecutive observations.
The yearly return on January 2 is the cumulative return from January 3 the previous year through January 2 the current year.
The yearly return on January 3 is the cumulative return from January 4 the previous year through January 3 the current year.
Etc., etc.
If there are $252$ trading days a year, we have an MA($252-1$) process. (This can be seen e.g. from inspecting the theoretical autocorrelation function of the series; the autocorrelation cuts off at lag $251$.) The same holds for logarithmic returns, as they are additive just as price changes are. If we look at percentage returns on the other hand, these may be approximated by MA processes as long as the price level is not too close to zero and does not vary too much over the course of the sample. The latter conditions ensure that percentage returns are approximately proportionate to price changes (which are MA processes themselves, as pointed out above).
Similar examples could be macroeconomic processes such as year-on-year quarterly GDP growth that could be considered approximately MA($3$).
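The overlapping-increments claim is easy to check numerically (my own sketch, with i.i.d. shocks standing in for one-period returns): $s$-period increments computed every period have autocorrelations that decline roughly linearly, about $(s-k)/s$ at lag $k$, and cut off after lag $s-1$.

import numpy as np
rng = np.random.default_rng(6)
T, s = 300_000, 5
eps = rng.normal(0, 1, T)           # i.i.d. one-period increments
x = np.cumsum(eps)
overlap = x[s:] - x[:-s]            # s-period increments, one per period
def acf(z, lag):
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))
print([round(acf(overlap, k), 2) for k in range(1, s + 1)])   # ~0.8, 0.6, 0.4, 0.2, 0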
*Asset returns are not strictly i.i.d. but usually not very far from that, so MA may be a good approximation for overlapping cumulative returns.
3,492 | Why do neural networks need so many training examples to perform? | I caution against expecting strong resemblance between biological and artificial neural networks. I think the name "neural networks" is a bit dangerous, because it tricks people into expecting that neurological processes and machine learning should be the same. The differences between biological and artificial neural networks outweigh the similarities.
As an example of how this can go awry, you can also turn the reasoning in the original post on its head. You can train a neural network to learn to recognize cars in an afternoon, provided you have a reasonably fast computer and some amount of training data. You can make this a binary task (car/not car) or a multi-class task (car/tram/bike/airplane/boat) and still be confident in a high level of success.
By contrast, I wouldn't expect a child to be able to pick out a car the day - or even the week - after it's born, even after it has seen "so many training examples." Something is obviously different between a two-year-old and an infant that accounts for the difference in learning ability, whereas a vanilla image classification neural network is perfectly capable of picking up object classification immediately after "birth." I think that there are two important differences: (1) the relative volumes of training data available and (2) a self-teaching mechanism that develops over time because of abundant training data.
The original post raises two questions. The title and body of the question ask why neural networks need "so many examples." Relative to a child's experience, neural networks trained using common image benchmarks have comparatively little data.
I will re-phrase the question in the title as
"How does training a neural network for a common image benchmark compare & contrast to the learning experience of a child?"
For the sake of comparison I'll consider the CIFAR-10 data because it is a common image benchmark. The labeled portion is composed of 10 classes of images with 6000 images per class. Each image is 32x32 pixels. If you somehow stacked the labeled images from CIFAR-10 and made a standard 48 fps video, you'd have about 20 minutes of footage.
A child of 2 years who observes the world for 12 hours daily has roughly 263000 minutes (more than 4000 hours) of direct observations of the world, including feedback from adults (labels). (These are just ballpark figures -- I don't know how many minutes a typical two-year-old has spent observing the world.) Moreover, the child will have exposure to many, many objects beyond the 10 classes that comprise CIFAR-10.
So there are a few things at play. One is that the child has exposure to more data overall and a more diverse source of data than the CIFAR-10 model has. Data diversity and data volume are well-recognized as pre-requisites for robust models in general. In this light, it doesn't seem surprising that a neural network is worse at this task than the child, because a neural network trained on CIFAR-10 is positively starved for training data compared to the two-year-old. The image resolution available to a child is better than the 32x32 CIFAR-10 images, so the child is able to learn information about the fine details of objects.
The CIFAR-10 to two-year-old comparison is not perfect because the CIFAR-10 model will likely be trained with multiple passes over the same static images, while the child will see, using binocular vision, how objects are arranged in a three-dimensional world while moving about and with different lighting conditions and perspectives on the same objects.
The anecdote about OP's child implies a second question,
"How can neural networks become self-teaching?"
A child is endowed with some talent for self-teaching, so that new categories of objects can be added over time without having to start over from scratch.
OP's remark about transfer-learning names one kind of model adaptation in the machine learning context.
In comments, other users have pointed out that one- and few-shot learning* is another machine learning research area.
Additionally, reinforcement-learning addresses self-teaching models from a different perspective, essentially allowing robots to undertake trial-and-error experimentation to find optimal strategies for solving specific problems (e.g. playing chess).
It's probably true that all three of these machine learning paradigms are germane to improving how machines adapt to new computer vision tasks. Quickly adapting machine learning models to new tasks is an active area of research. However, the practical goals of these projects (identifying new instances of malware, recognizing imposters in passport photos, indexing the internet) and their criteria for success differ from the goals of a child learning about the world. Add to that the fact that one process runs in a computer using math while the other runs in organic material using chemistry, and direct comparisons between the two will remain fraught.
As an aside, it would be interesting to study how to flip the CIFAR-10 problem around and train a neural network to recognize 6000 objects from 10 examples of each. But even this wouldn't be a fair comparison to a 2-year-old, because there would still be a large discrepancy in the total volume, diversity and resolution of the training data.
*We don't presently have tags for one-shot learning or few-shot learning.
3,493 | Why do neural networks need so many training examples to perform? | First of all, at age two, a child knows a lot about the world and actively applies this knowledge. A child does a lot of "transfer learning" by applying this knowledge to new concepts.
Second, before seeing those five "labeled" examples of cars, a child sees a lot of cars on the street, on TV, as toy cars, etc., so a lot of "unsupervised learning" also happens beforehand.
Finally, neural networks have almost nothing in common with the human brain, so there's not much point in comparing them. Also note that there are algorithms for one-shot learning, and a great deal of research on them is currently underway.
3,494 | Why do neural networks need so many training examples to perform? | One major aspect that I don't see in current answers is evolution.
A child's brain does not learn from scratch. It's similar to asking how deer and giraffe babies can walk a few minutes after birth: they are born with their brains already wired for this task. Some fine-tuning is needed of course, but the baby deer doesn't learn to walk from "random initialization".
Similarly, the fact that big moving objects exist and are important to keep track of is something we are born with.
So I think the presupposition of this question is simply false. Human neural networks had the opportunity to see tons of objects (maybe not cars, but moving, rotating 3D objects with difficult textures and shapes, etc.), but this happened over many generations, and the learning took place by an evolutionary algorithm of sorts: the individuals whose brains were better structured for this task could live to reproduce with higher probability, leaving the next generation with better and better brain wiring from the start.
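To make the "good wiring from the start" analogy concrete, here is a toy, purely illustrative neuroevolution sketch in Python; the fitness function and all numbers are made up for illustration, and this is not a model of biological evolution.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(weights):
        # Stand-in for "how well this wiring copes with the world".
        target = np.linspace(-1.0, 1.0, weights.size)
        return -np.sum((weights - target) ** 2)

    # Evolve an initialization over many "generations":
    # keep the best wirings and copy them with small mutations.
    population = [rng.normal(size=32) for _ in range(50)]
    for generation in range(200):
        parents = sorted(population, key=fitness, reverse=True)[:10]
        population = [p + rng.normal(scale=0.05, size=p.size)
                      for p in parents for _ in range(5)]

    inherited_init = max(population, key=fitness)
    # A "newborn" network starting from inherited_init only needs to fine-tune,
    # rather than learn everything from a random initialization.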
3,495 | Why do neural networks need so many training examples to perform? | I don't know much about neural networks but I know a fair bit about babies.
Many 2-year-olds have a lot of issues with how general words should be. For instance, it is quite common at that age for kids to use "dog" for any four-legged animal. That's a more difficult distinction than "car": just think how different a poodle looks from a Great Dane, and yet they are both "dog" while a cat is not.
And a child at 2 has seen many, many more than 5 examples of "car". A kid sees dozens or even hundreds of examples of cars any time the family goes for a drive. And a lot of parents will comment "look at the car" a lot more than 5 times. But kids can also think in ways that they weren't told about. For instance, on the street the kid sees lots of things lined up. His dad says (of one) "look at the shiny car!" and the kid thinks "maybe all those other things lined up are also cars?"
3,496 | Why do neural networks need so many training examples to perform? | This is a fascinating question that I've pondered over a lot as well, and I can come up with a few explanations why.
Neural networks work nothing like the brain. Backpropagation is unique to neural networks, and does not happen in the brain. In that sense, we just don't know the general learning algorithm in our brains. It could be electrical, it could be chemical, it could even be a combination of the two. Neural networks could be considered an inferior form of learning compared to our brains because of how simplified they are.
If neural networks are indeed like our brain, then human babies undergo extensive "training" of the early layers, like feature extraction, in their early days. So their neural networks aren't really trained from scratch, but rather the last layer is retrained to add more and more classes and labels.
3,497 | Why do neural networks need so many training examples to perform? | A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc.
The concept of "instances" gets easily muddied. While a child may have seen 5 unique instances of a car, they have actually seen thousands upon thousands of frames, in many differing environments. They have likely seen cars in other contexts. They also have an intuition for the physical world developed over their lifetime - some transfer learning probably happens here. Yet we wrap all of that up into "5 instances."
Meanwhile, every single frame/image you pass to a CNN is considered an "example." If you apply a consistent definition, both systems are really utilizing a much more similar amount of training data.
Also, I would like to note that convolutional neural networks (CNNs) are more useful in computer vision than plain fully connected ANNs, and in fact approach human performance in tasks like image classification. Deep learning is (probably) not a panacea, but it does perform admirably in this domain.
3,498 | Why do neural networks need so many training examples to perform? | As pointed out by others, the data-efficiency of artificial neural networks varies quite substantially, depending on the details. As a matter of fact, there are many so-called one-shot learning methods that can solve the task of labelling trams with quite good accuracy using only a single labelled sample.
One way to do this is by so-called transfer learning; a network trained on other labels is usually very effectively adaptable to new labels, since the hard work is breaking down the low-level components of the image in a sensible way.
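For concreteness, a minimal transfer-learning sketch might look like the following PyTorch snippet; it assumes a recent torchvision with a pretrained ResNet-18, and `few_shot_loader` is a hypothetical DataLoader holding only a handful of labelled images per new class.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network whose early layers already break images down
    # into sensible low-level components.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False       # freeze the pretrained feature extractor

    # Replace only the final classification layer for the new labels
    # (e.g. car / tram / bike) and train just that layer.
    num_new_classes = 3
    model.fc = nn.Linear(model.fc.in_features, num_new_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # for images, labels in few_shot_loader:    # hypothetical few-shot data
    #     optimizer.zero_grad()
    #     loss_fn(model(images), labels).backward()
    #     optimizer.step()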
But we do not in fact need such labeled data to perform such tasks, much as babies don't need nearly as much labeled data as the neural networks you are thinking of do.
For instance, one such unsupervised method, which I have also successfully applied in other contexts, is to take an unlabeled set of images, randomly rotate them, and train a network to predict which side of the image is 'up'. Without knowing what the visible objects are, or what they are called, this forces the network to learn a tremendous amount of structure about the images, and this can form an excellent basis for much more data-efficient subsequent labeled learning.
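A minimal sketch of that rotation-prediction pretext task in PyTorch; the tiny network and the random stand-in images are placeholders for illustration only.

    import torch
    import torch.nn as nn

    def rotation_batch(images):
        # Rotate each image by 0/90/180/270 degrees; the rotation index becomes the label.
        k = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, int(r), dims=(-2, -1))
                               for img, r in zip(images, k)])
        return rotated, k

    # Any image classifier with 4 outputs ("which side is up?") will do here.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 4),
    )
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    unlabeled_images = torch.rand(32, 3, 64, 64)   # stand-in for real unlabeled data
    rotated, targets = rotation_batch(unlabeled_images)
    loss = nn.CrossEntropyLoss()(net(rotated), targets)
    loss.backward()
    optimizer.step()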
While it is true that artificial networks are quite different from real ones in probably meaningful ways, such as the absence of an obvious analogue of backpropagation, it is very probably true that real neural networks make use of the same tricks, of trying to learn the structure in the data implied by some simple priors.
One other example, which almost certainly plays a role in animals and has also shown great promise in understanding video, is the assumption that the future should be predictable from the past. Just by starting from that assumption, you can teach a neural network a whole lot. On a philosophical level, I am inclined to believe that this assumption underlies almost everything we consider to be 'knowledge'.
I am not saying anything new here; but it is relatively new in the sense that these possibilities are too young to have found many applications yet, and have not yet percolated down to the textbook understanding of 'what an ANN can do'. So to answer the OP's question: ANNs have already closed much of the gap that you describe.
3,499 | Why do neural networks need so many training examples to perform? | One thing that I haven't seen in the answers so far is the fact that one 'instance' of a real-world object that is seen by a human child does not correspond to an instance in the context of NN training.
Suppose you're standing at a railway crossing with a 5-year-old child and watch 5 trains pass within 10 minutes. Now, you could say "My child only saw 5 trains and can reliably identify other trains, while a NN needs thousands of images!". While this is likely true, you are completely ignoring the fact that every train your child sees contains A LOT more information than a single image of a train. In fact, the child's brain is processing several dozen images of the train per second while it is passing by, each from a slightly different angle, with different shadows, etc., while a single image provides the NN with very limited information.
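As rough arithmetic (the per-train viewing time and the effective "frame rate" below are assumptions, not measurements), even 5 trains translate into thousands of distinct views:

    TRAINS_SEEN = 5
    SECONDS_PER_TRAIN = 60      # assume each train is in view for about a minute
    EFFECTIVE_FPS = 24          # assume a few dozen distinct "views" per second

    views = TRAINS_SEEN * SECONDS_PER_TRAIN * EFFECTIVE_FPS
    print(views)                # 7200 views from "only 5 trains"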
In this context, your child even has information that is not available to the NN, for example the speed of the train or the sound that the train makes.
Further, your child can talk and ASK QUESTIONS! "Trains are very long, right?" "Yes." "And they are very big too, right?" "Yes." With two simple questions your child learns two very essential features in less than a minute!
Another important point is object detection. Your child is able to identify immediately which object, i.e. which part of the image, it needs to focus on, while a NN must learn to detect the relevant object before it can attempt to classify it.
3,500 | Why do neural networks need so many training examples to perform? | One way to train a deep neural network is to treat it as a stack of auto-encoders (or Restricted Boltzmann Machines).
In theory, an auto-encoder learns in an unsupervised manner: it takes arbitrary, unlabelled input data and compresses it into an internal representation, then tries to regenerate the original input from that representation. It tweaks its nodes' parameters until it can come close to round-tripping its data. If you think about it, the auto-encoder is writing its own automated unit tests. In effect, it is turning its "unlabelled input data" into labelled data: the original data serves as a label for the round-tripped data.
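As a minimal illustration in PyTorch (assuming flattened 28x28 images; the architecture is arbitrary), the "round-trip" check is literally the training loss of an auto-encoder:

    import torch
    import torch.nn as nn

    autoencoder = nn.Sequential(
        nn.Linear(784, 64), nn.ReLU(),    # encoder: compress the input
        nn.Linear(64, 784), nn.Sigmoid()  # decoder: try to regenerate the input
    )
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

    x = torch.rand(16, 784)               # stand-in for a batch of unlabeled images
    loss = nn.functional.mse_loss(autoencoder(x), x)   # the input is its own "label"
    loss.backward()
    optimizer.step()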
After the layers of auto-encoders are trained, the neural network is fine-tuned using labelled data to perform its intended function. In effect, these are functional tests.
The original poster asks why a lot of data is needed to train an artificial neural network, and compares that to the allegedly low amount of training data needed by a two-year-old human. The original poster is comparing apples-to-oranges: The overall training process for the artificial neural net, versus the fine-tuning with labels for the two-year-old.
But in reality, the two-year-old has been training its auto-encoders on random, self-labelled data for more than two years. Babies dream when they are in utero. (So do kittens.) Researchers have described these dreams as involving random neuron firings in the visual processing centers.