4,901
Why is softmax output not a good uncertainty measure for Deep Learning models?
What is being raised here is a basic misunderstanding of statistics that comes from the ML "classification" viewpoint rather than the probability viewpoint. A predicted probability is just a prediction; if you want confidence intervals you need to do something like bootstrapping or Bayesian methods. For example, if I win 6/10 games or 600/1000 games, my predicted estimate of winning the next game is 60% either way, but the confidence interval around that 60% is much narrower after 1000 games.
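A minimal sketch of the games example, assuming nothing beyond base R: bootstrap the win proportion for 10 and for 1000 observed games and compare the interval widths (the point estimate is 60% in both cases).

set.seed(1)
boot_ci <- function(wins, n, B = 10000) {
  games <- c(rep(1, wins), rep(0, n - wins))           # observed wins/losses
  props <- replicate(B, mean(sample(games, n, replace = TRUE)))
  quantile(props, c(0.025, 0.975))                     # 95% percentile interval
}
boot_ci(6, 10)      # wide interval around 0.6
boot_ci(600, 1000)  # much narrower interval around 0.6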
4,902
Why are MA(q) time series models called "moving averages"?
A footnote in Pankratz (1983), on page 48, says: The label "moving average" is technically incorrect since the MA coefficients may be negative and may not sum to unity. This label is used by convention. Box and Jenkins (1976) say something similar. On page 10: The name "moving average" is somewhat misleading because the weights $1, -\theta_{1}, -\theta_{2}, \ldots, -\theta_{q}$, which multiply the $a$'s, need not total unity nor need they be positive. However, this nomenclature is in common use, and therefore we employ it. I hope this helps.
4,903
Why are MA(q) time series models called "moving averages"?
If you look at a zero-mean MA process: $X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$ then you could regard the right hand side as akin to a weighted moving average of the $\varepsilon$ terms, but where the weights don't sum to 1. For example, Hyndman and Athanasopoulos (2013) [1] say: Notice that each value of $y_t$ can be thought of as a weighted moving average of the past few forecast errors. Similar explanations of the term may be found in numerous other places. (In spite of the popularity of this explanation, I don't know for certain that this is the origin of the term, however; for example, perhaps there was originally some connection between the model and moving-average smoothing.) Note that Graeme Walsh points out in comments above that this may have originated with Slutsky (1927), "The Summation of Random Causes as a Source of Cyclical Processes". [1] Hyndman, R.J. and Athanasopoulos, G. (2013) Forecasting: principles and practice. Section 8/4. http://otexts.com/fpp/8/4. Accessed on 22 Sept 2013.
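A minimal sketch of this reading, assuming only base R: simulate an MA(2) process by hand and check that filtering the innovations with the weights 1, theta1, theta2 reproduces it.

set.seed(42)
n <- 500
theta <- c(0.6, -0.3)                      # MA(2) coefficients
eps <- rnorm(n + 2)                        # innovations, with 2 burn-in values
x <- eps[3:(n + 2)] + theta[1] * eps[2:(n + 1)] + theta[2] * eps[1:n]
# the same series obtained as a (weighted) moving average of the innovations:
x_filter <- stats::filter(eps, c(1, theta), sides = 1)[3:(n + 2)]
all.equal(x, as.numeric(x_filter))         # TRUE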
4,904
How to interpret p-value of Kolmogorov-Smirnov test (python)?
As Stijn pointed out, the K-S test returns a D statistic and a p-value corresponding to that D statistic. The D statistic is the maximum absolute distance (supremum) between the CDFs of the two samples. The closer this number is to 0, the more likely it is that the two samples were drawn from the same distribution. Check out the Wikipedia page for the K-S test; it provides a good explanation: https://en.m.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test The p-value returned by the K-S test has the same interpretation as other p-values: you reject the null hypothesis that the two samples were drawn from the same distribution if the p-value is less than your significance level. You can find tables online for the conversion of the D statistic into a p-value if you are interested in the procedure.
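As a minimal illustration (in R, the language used elsewhere on this page, rather than the Python scipy call the question refers to), base R's ks.test returns the same two quantities, D and a p-value:

set.seed(7)
a <- rnorm(100)                 # sample 1
b <- rnorm(100, mean = 0.5)     # sample 2, shifted
ks.test(a, b)                   # prints D and the two-sided p-value
# a small p-value -> reject the null that a and b come from the same distribution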
4,905
How to interpret p-value of Kolmogorov-Smirnov test (python)?
When doing a Google search for ks_2samp, the first hit is this website. On it, you can see the function specification:

This is a two-sided test for the null hypothesis that 2 independent samples are drawn from the same continuous distribution.

Parameters:
a, b : sequence of 1-D ndarrays. Two arrays of sample observations assumed to be drawn from a continuous distribution; sample sizes can be different.

Returns:
D : float, KS statistic
p-value : float, two-tailed p-value
4,906
Using lmer for repeated-measures linear mixed-effect model
I think that your approach is correct. Model m1 specifies a separate intercept for each subject. Model m2 adds a separate slope for each subject. Your slope is across days, as subjects only participate in one treatment group. If you write model m2 as follows, it's more obvious that you model a separate intercept and slope for each subject:

m2 <- lmer(Obs ~ Treatment * Day + (1+Day|Subject), mydata)

This is equivalent to:

m2 <- lmer(Obs ~ Treatment + Day + Treatment:Day + (1+Day|Subject), mydata)

i.e. the main effects of treatment and day, and the interaction between the two. I think that you don't need to worry about nesting as long as you don't repeat subject IDs within treatment groups. Which model is correct really depends on your research question: is there reason to believe that subjects' slopes vary in addition to the treatment effect? You could run both models and compare them with anova(m1, m2) to see if the data supports either one. I'm not sure what you want to express with model m3? The nesting syntax uses a /, e.g. (1|group/subgroup). I don't think that you need to worry about autocorrelation with such a small number of time points.
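A minimal sketch of that comparison, assuming a data frame mydata with columns Obs, Treatment, Day and Subject (the names used in the question):

library(lme4)
m1 <- lmer(Obs ~ Treatment * Day + (1 | Subject), data = mydata)        # random intercepts only
m2 <- lmer(Obs ~ Treatment * Day + (1 + Day | Subject), data = mydata)  # random intercepts and slopes
anova(m1, m2)  # likelihood-ratio comparison; anova() refits with ML for the test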
4,907
Using lmer for repeated-measures linear mixed-effect model
I don't feel comfortable enough to comment on your autocorrelated errors issue (nor about the different implementations in lme4 vs. nlme), but I can speak to the rest. Your model m1 is a random-intercept model, where you have included the cross-level interaction between Treatment and Day (the effect of Day is allowed to vary between Treatment groups). In order to allow for the change over time to differ across participants (i.e. to explicitly model individual differences in change over time), you also need to allow the effect of Day to be random. To do this, you would specify:

m2 <- lmer(Obs ~ Day + Treatment + Day:Treatment + (Day | Subject), mydata)

In this model:

- The intercept is the predicted score for the treatment reference category at Day=0.
- The coefficient for Day is the predicted change over time for each 1-unit increase in days for the treatment reference category.
- The coefficients for the two dummy codes for the treatment groups (automatically created by R) are the predicted differences between each remaining treatment group and the reference category at Day=0.
- The coefficients for the two interaction terms are the differences in the effect of time (Day) on predicted scores between the reference category and the remaining treatment groups.

Both the intercepts and the effect of Day on score are random (each subject is allowed to have a different predicted score at Day=0 and a different linear change over time). The covariance between intercepts and slopes is also being modeled (they are allowed to covary). As you can see, the interpretation of the coefficients for the two dummy variables is conditional on Day=0. They will tell you whether the predicted score at Day=0 for the reference category is significantly different from the two remaining treatment groups. Therefore, where you decide to center your Day variable is important. If you center at Day 1, then the coefficients tell you whether the predicted score for the reference category at Day 1 is significantly different from the predicted scores of the two remaining groups. This way, you could see if there are pre-existing differences between the groups. If you center at Day 3, then the coefficients tell you whether the predicted score for the reference category at Day 3 is significantly different from the predicted scores of the two remaining groups. This way, you could see if there are differences between the groups at the end of the intervention. Finally, note that Subjects are not nested within Treatment. Your three treatments are not random levels of a population of levels to which you want to generalize your results; rather, as you mentioned, your levels are fixed, and you want to generalize your results to these levels only. (Not to mention, you shouldn't use multilevel modeling if you have only 3 upper-level units; see Maas & Hox, 2005.) Instead, treatment is a level-2 predictor, i.e. a predictor which takes a single value across Days (level-1 units) for each subject. Therefore, it is merely included as a predictor in your model.

Reference: Maas, C. J. M., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 1, 86-92.
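A minimal sketch of the centering point, again assuming the question's mydata with columns Obs, Treatment, Day and Subject: shift Day before fitting, and the Treatment dummy coefficients are then evaluated at the chosen day.

library(lme4)
mydata$Day1 <- mydata$Day - 1   # 0 now corresponds to Day 1 (baseline differences)
mydata$Day3 <- mydata$Day - 3   # 0 now corresponds to Day 3 (end of intervention)
m_base <- lmer(Obs ~ Day1 * Treatment + (Day1 | Subject), data = mydata)
m_end  <- lmer(Obs ~ Day3 * Treatment + (Day3 | Subject), data = mydata)
summary(m_base)  # Treatment coefficients = group differences at Day 1
summary(m_end)   # Treatment coefficients = group differences at Day 3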
4,908
What book is recommendable to start learning statistics using R at the same time?
I think one reason it is so hard to answer this is that R is so powerful and flexible that a real introduction to R programming goes well beyond what is normally needed in an introduction to statistics. The books that teach statistics using Minitab, JMP or SPSS are doing relatively straightforward things with the software that barely scratch the surface of what R is capable of when it comes to data manipulation, simulations, custom-built functions, etc. Having said that, I think that Wilcox's Modern Statistics for the Social and Behavioral Sciences: A Practical Introduction (2012) is a brilliant new book. It assumes no statistical knowledge and takes you from scratch right through to a wide range of modern robust techniques, and it assumes not much more R knowledge than the ability to open it up and load a dataset. It covers many of the classical techniques too, including ANOVA (mentioned in the OP). I would see this book as the equivalent of the books that introduce stats and a stats package like SPSS at the same time. However, it won't teach you to program in R - only how to do modern statistical analysis with it, with an emphasis on robust techniques that address the known problems with classical analysis that are sidelined by most other approaches to teaching statistics. The three problems with classical methods that this book particularly addresses right from the beginning are sampling from heavy-tailed distributions, skewness, and heteroscedasticity. Wilcox uses R because "In terms of taking advantage of modern statistical techniques, R clearly dominates. When analyzing data, it is undoubtedly the most important software development during the last quarter of a century. And it is free. Although classic methods have fundamental flaws, it is not suggested that they be completely abandoned... Consequently, illustrations are provided on how to apply standard methods with R. Of particular importance here is that, in addition, illustrations are provided regarding how to apply modern methods using over 900 R functions written for this book." This book is so excellent that after we bought a copy for work I purchased my own copy at home. The chapter headings are: numerical and graphical summaries of data; probability and related concepts; sampling distributions and confidence intervals; hypothesis testing; regression and correlation; bootstrap methods; comparing two independent groups; comparing two dependent groups; one-way ANOVA; two-way and three-way designs; comparing more than two dependent groups; multiple comparisons; some multivariate methods; robust regression and measures of association; basic methods for analyzing categorical data. Further edit: having checked out the David Moore example of what you are looking for, I really think Wilcox's book meets the need.
4,909
What book is recommendable to start learning statistics using R at the same time?
Maybe "Introduction to Statistical Thought"?
4,910
What book is recommendable to start learning statistics using R at the same time?
Verzani's book, from @Julie's post, is a really nice choice for someone who has neither R nor statistics experience. It's soft enough on both the R and the statistics that it's used by the political science department at UC Davis, and those students have neither programming classes nor higher-level math. His work is available through his CRAN package, simpleR. Since you come from a Computer Science background, I don't think you need a very gentle introduction to R. I'd assume you have a decent knowledge of data structures, scoping, and why you need a debugger. For a very computing-centric perspective on R (more so than you might even see in a statistical programming class in an undergrad stat department), check out Norm Matloff's The Art of R Programming. To see if it interests you, Matloff has a very rough draft pre-print version available on his website. If you like his style, I would recommend grabbing the finished copy. He is a CS professor, and he writes the book more for a CS audience than a statistics audience. G. Jay Kerns (a frequent poster here) also has a book available online called Introduction to Probability and Statistics Using R. I personally feel it does a wonderful service in introducing the guts of R. I realize your question is targeted to get responses aimed at a CS major, but please also peruse this topic: What book would you recommend for non-statistician scientists?
4,911
What book is recommendable to start learning statistics using R at the same time?
I found this book to be of great use, but it does assume some knowledge of basic statistical terms, such as p-value, ANOVA, et cetera. This book offers a much gentler introduction to statistical concepts themselves...
4,912
What book is recommendable to start learning statistics using R at the same time?
A good book produced via Adelaide University is Learning Statistics with R; it is available free online and for purchase as a hardcopy. It is very well broken up in its structure and covers an introduction to R as well as a basic introduction to statistics before moving into more in-depth topics. There is also a very deep list of books on the R website, provided as a reference; I currently have not read those titles and will update as I move forward. https://www.r-project.org/doc/bib/R-books.html
4,913
What book is recommendable to start learning statistics using R at the same time?
Learning Statistics Using R by Randall E. Schumacker is coming out in January 2014 from SAGE Publications. It covers all the material mentioned in the posting.
4,914
Different ways to write interaction terms in lm?
The results are different because the way lm sets up the model with the interaction differs from the way the model is set up when you construct the interaction variable yourself. If you look at the residual SD, it's the same, which indicates (not definitively) that the underlying models are the same, just expressed (to the lm internals) differently. If you define your interaction as paste(d$s, d$r) instead of paste(d$r, d$s), your parameter estimates will change again, in interesting ways. Note how in your model summary for lm1 the coefficient estimate for ss2 is 4.94 lower than in the summary for lm2, with the coefficient for rr2:ss2 being 4.95 (if you print to 3 decimal places, the difference goes away). This is another indication that an internal rearrangement of terms has occurred. I can't think of any advantage to doing it yourself, but there may be one with more complex models where you don't want a full interaction term but instead only some of the terms in the "cross" between two or more factors.
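A minimal sketch of that check, assuming a small simulated data frame d with two-level factors r and s (the question's own d is not reproduced here): fit both versions and compare the residual standard deviation.

set.seed(1)
d <- data.frame(r = factor(rep(c("r1", "r2"), each = 20)),
                s = factor(rep(c("s1", "s2"), times = 20)))
d$y <- 1 + 2 * (d$r == "r2") + 3 * (d$s == "s2") +
       4 * (d$r == "r2") * (d$s == "s2") + rnorm(40)
d$rs <- factor(paste(d$r, d$s))     # hand-built interaction variable
lm1 <- lm(y ~ r * s, data = d)
lm2 <- lm(y ~ r + s + rs, data = d)
c(sigma(lm1), sigma(lm2))           # same residual SD: same underlying model
coef(lm1); coef(lm2)                # different parameterizations (lm2 may show NA for aliased terms)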
4,915
Different ways to write interaction terms in lm?
You might understand this behavior better if you look at the model matrices:

model.matrix(lm1 <- lm(y ~ r*s, data=d))
model.matrix(lm2 <- lm(y ~ r + s + rs, data=d))

When you look at these matrices, you can compare the constellations of s2=1 with the other variables (i.e. when s2=1, which values do the other variables take?). You will see that these constellations differ slightly, which just means that the base category is different. Everything else is essentially the same. In particular, note that in your lm1, the coefficient on ss2 equals the sum of the coefficients ss2 and rsr1s2 of lm2, i.e. 3.82 = 8.76 - 4.95, up to rounding error. For instance, executing the following code gives you exactly the same output as using the automatic setting of R:

d$rs <- relevel(d$rs, "r1s1")
summary(lm1 <- lm(y ~ factor(r) + factor(s) + factor(rs), data=d))

This also provides a quick answer to your question: really the only reason to change the way the factors are set up is to provide expositional clarity. Consider the following example: suppose you regress wage on a dummy for high school completion interacted with a factor indicating whether you belong to a minority. That is: $wage = \alpha + \beta \, edu + \gamma \, edu \times minority + \epsilon$. If said minority factor takes value 1 if you do belong to a minority, the coefficient $\beta$ can be interpreted as the wage difference for non-minority individuals who have completed high school. If this is your coefficient of interest, then you should code it as such. Otherwise, suppose the minority factor takes the value of 1 if you do not belong to a minority. Then, in order to see how much more non-minority individuals earn when they complete high school, you would have to "manually" compute $\beta + \gamma$. Note that all the information is contained in the estimates, and the substantive results do not change by setting up the factors differently!
4,916
Different ways to write interaction terms in lm?
I think there are good reasons not to use

lm(y ~ r + s + rs, data=d)

When you create rs and put it into the formula, R will think of rs as just another variable; it has no way of knowing that it is an interaction of r and s. This matters if you use drop1() or stepwise regression. It is invalid to drop a variable x while keeping an interaction with x in the formula. R knows this, so drop1() will only drop variables that result in valid formulas. If the formula contains rs, it has no way of knowing that attempting to drop r (or s) is invalid.

d$rs <- paste(d$r, d$s)
lm1 <- lm(y ~ r + s + r:s, data=d)
lm3 <- lm(y ~ r + s + rs, data=d)

drop1(lm1)
Single term deletions
Model:
y ~ r + s + r:s
       Df Sum of Sq    RSS    AIC
<none>              171.05 50.924
r:s     1    30.619 201.67 52.218   # only the interaction term can be dropped

drop1(lm3)
Single term deletions
Model:
y ~ r + s + rs
       Df Sum of Sq    RSS    AIC
<none>              171.05 50.924
r       0     0.000 171.05 50.924   # invalid
s       0     0.000 171.05 50.924   # invalid
rs      1    30.619 201.67 52.218

Incidentally, I believe the following will all be equivalent:

lm0 <- lm(y ~ r*s, data=d)
lm1 <- lm(y ~ r + s + r:s, data=d)
lm2 <- lm(y ~ r + s + r*s, data=d)

The formula parts all get converted into the same formula.
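A minimal sketch checking that last claim, assuming the same kind of two-factor data frame d as in the question (simulated here, since the original data isn't shown):

set.seed(1)
d <- data.frame(r = factor(rep(c("r1", "r2"), each = 20)),
                s = factor(rep(c("s1", "s2"), times = 20)))
d$y <- rnorm(40, mean = 2 * (d$r == "r2") * (d$s == "s2"))
lm0 <- lm(y ~ r * s, data = d)
lm1 <- lm(y ~ r + s + r:s, data = d)
lm2 <- lm(y ~ r + s + r * s, data = d)
all.equal(coef(lm0), coef(lm1))  # TRUE
all.equal(coef(lm0), coef(lm2))  # TRUE: all three expand to the same formula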
4,917
Data APIs/feeds available as packages in R
- Instructions for using R to download netCDF data can be found here; netCDF is a common format used for storing Earth science data, e.g. marine geospatial data from OpenEarth or climate model drivers and forecasts from UCAR.
- rnpn (under development) enables you to get data from the National Phenology Network, a citizen science project to track the timing of plant green-up, flowering, and senescence. See the developer's blog post.
- (obsolete) RClimate provides tools to download and manipulate flat-file climate data, with tutorials.
- Download historical finance data with tseries::get.hist.quote.
- Michael Samuel's post documents downloading public health data.
- raster::getData provides access to climate variables via WorldClim.
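A minimal sketch of two of these, hedged because both depend on external services that may change over time: tseries::get.hist.quote pulls historical price series, and raster::getData fetches WorldClim climate layers.

library(tseries)
library(raster)
# daily closing prices (provider availability can vary over time)
spx <- get.hist.quote(instrument = "^gspc", start = "2012-01-01",
                      end = "2012-12-31", quote = "Close")
# WorldClim bioclimatic variables at 10 arc-minute resolution
clim <- getData("worldclim", var = "bio", res = 10)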
4,918
Data APIs/feeds available as packages in R
There's a project aimed at creating R packages with this objective (an R interface to real-time APIs) called rOpenSci, which has 18 packages currently available or in development. Some (rnpn, rfishbase) are on your list already. Great list! And full disclosure: I'm part of the rOpenSci project.
4,919
Data APIs/feeds available as packages in R
ONETr - efficient interaction with the O*NET™ API, offering occupational descriptor data from the U.S. Department of Labor.
4,920
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
I agree with many of the other answers here but think the statement is even worse than they make it out to be. The statement is an explicit version of an implicit claim in many shoddy analyses of small datasets. These hint that because they have found a significant result in a small sample, their claimed result must be real and important because it is 'harder' to find a significant effect in a small sample. This belief is simply wrong, because random error in small samples means that any result is less trustworthy, whether the effect size is large or small. Large and significant effects are therefore more likely to be of the incorrect magnitude and, more importantly, they can be in the wrong direction. Andrew Gelman usefully refers to these as 'Type S' errors (estimates whose sign is wrong) as opposed to 'Type M' errors (estimates whose magnitude is wrong). Combine this with the file-drawer effect (small, non-significant results go unpublished, while large, significant ones are published) and you are most of the way to the replication crisis and a lot of wasted time, effort and money. Thanks to @Adrian below for digging up a figure from Gelman that illustrates this point well (figure not reproduced here). It may seem to be an extreme example, but the point is entirely relevant to the argument made by Raoult.
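A minimal simulation sketch of the Type S / Type M point, assuming only base R: with a small true effect and a small sample, the estimates that happen to reach significance greatly exaggerate the effect and occasionally have the wrong sign.

set.seed(123)
true_effect <- 0.2; n <- 20; sims <- 10000
est <- pval <- numeric(sims)
for (i in 1:sims) {
  x <- rnorm(n, mean = true_effect, sd = 1)
  tt <- t.test(x)
  est[i] <- tt$estimate
  pval[i] <- tt$p.value
}
sig <- pval < 0.05
mean(abs(est[sig])) / true_effect   # Type M: exaggeration factor among significant results
mean(est[sig] < 0)                  # Type S: share of significant results with the wrong sign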
4,921
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
"It's counterintuitive, but the smaller the sample size of a clinical test, the more significant its results are. The differences in a sample of 20 people may be more significant than in a sample of 10,000 people. If we need such a sample, there is a risk of being wrong. With 10,000 people, when the differences are small, sometimes they don't exist." I have read the linked article (via Google-translate) in which this quote is given. Unfortunately it does not give any additional clarity of what Prof Raoult meant. Taken on its own, this statement makes no sense at all to me. It is a jumble of unclear references and invalid reasoning, and it exhibits a fundamental misunderstanding of the goal of statistical inference and the mechanics of a hypothesis test. The goal of sampling is not to try to trick the significance test; it is to make the most accurate inference possible about an unknown parameter or hypothesis, and that is done by taking as much data as possible. As to the claim that a lower sample size will tend to be "more significant", that is false. Assuming you are dealing with continuous data, and your test assumptions are correct, the p-value of the hypothesis test should be uniform under the null hypothesis regardless of the sample size --- i.e., the formula for the p-value takes account of the sample size and so there is no tendency for smaller samples to be "more significant". If there were such a tendency, this would be considered a failure of the testing procedure, not something to try to take advantage of in order to "trick" the hypotheses test. Prof Raoult states that we may "need such a sample" (i.e., a sample with a significant difference), which unfortunately suggests that the goal of the test methodology is to maximise the chances of coming to a pre-conceived desirable conclusion. This is the kind of thing I hear occasionally from applied researchers who get too involved in trying to prove some hypothesis of theirs, and it makes me cringe --- if the goal of statistical testing is merely to affirm a pre-conceived conclusion then we might as well jettison statistics altogether. Now, it is possible that Prof Raoult had an entirely different point in mind, and he is simply mashing up his statistical words and saying the wrong thing. (The last sentence is contradictory as written, so he must obviously mean somthing else, but I don't know what.) I have seen that happen many times before when hearing descriptions of statistical phenomena from applied researchers who have no training in theoretical statistics. In this case, I would just ignore the quote, because it is either flat-out wrong, or it is a failed attempt to say something completely different. In either case, you are right in your suspicions --- it is not better to have less data.
4,922
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
(I think the phrasing is deliberately provocative.) If you have 10 observations and want to show that their mean is not zero, it will have to be quite a bit different from 0 if you want to have any kind of chance (power) of detecting the difference. If you have a trillion observations and want to show that their mean is not 0, the mean could be just a tiny bit different from zero, perhaps just 0.01, and you would still have a considerable chance of detecting this difference. Yes, we all agree that $0\ne 0.01$, but the practical significance of a mean of 0.01 instead of 0 might be inconsequential: no one cares. If you detect a difference in that sample of ten, however, the difference from zero is likely to be quite great, probably into the realm of having practical significance. The quote is about practical significance. Power calculations along with subject matter expertise guiding what counts as an interesting difference (“effect size”) formalize this. EDIT The promised power calculation, which turned out to be an effect size calculation. library(pwr) n1 <- 100 n2 <- 100000 alpha = 0.05 power = 0.8 # find the effect size, d, for n=100 # pwr.t.test(n1, sig.level=alpha, power=power)$d # this is 0.3981407 # find the effect size, d, for n=100,000 # pwr.t.test(n2, sig.level=alpha, power=power)$d # this is 0.01252399 In this example, the test on only 100 subjects is able to detect a difference in mean of $0.398$ $80\%$ of the time. The test on 100,000 subjects is able to detect a difference of $0.013$ $80\%$ of the time. If you need a difference of at least $0.15$ in order for the findings to be interesting, then it isn't so helpful to get the 100,000-subject test going "ding ding ding, REJECT" every time it sees an observed effect of $0.013$. However, if the 100-subject test rejects, you can have more confidence that the effect size is large enough to be interesting. (That difference, $d$, is measured in units of the within-group standard deviation of the outcome.)
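A small companion sketch with the same pwr package, turning the calculation around: instead of asking what effect is detectable at a fixed sample size, it asks how many subjects would be needed to detect the smallest effect considered interesting (the d = 0.15 threshold is simply the illustrative value used above):
# Sample size needed per group to detect d = 0.15 with 80% power at alpha = 0.05
library(pwr)
pwr.t.test(d = 0.15, sig.level = 0.05, power = 0.8)$n
# roughly 700 subjects per group (two-sided, two-sample t-test, the default)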
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
(I think the phrasing is deliberately provocative.) If you have 10 observations and want to show that their mean is not zero, it will have to be quite a bit different from 0 if you want to have any ki
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? (I think the phrasing is deliberately provocative.) If you have 10 observations and want to show that their mean is not zero, it will have to be quite a bit different from 0 if you want to have any kind of chance (power) of detecting the difference. If you have a trillion observations and want to show that their mean is not 0, the mean could be just a tiny bit different from zero, perhaps just 0.01, and you would still have a considerable chance of detecting this difference. Yes, we all agree that $0\ne 0.01$, but the practical significance of a mean of 0.01 instead of 0 might be inconsequential: no one cares. If you detect a difference in that sample of ten, however, the difference from zero is likely to be quite great, probably into the realm of having practical significance. The quote is about practical significance. Power calculations along with subject matter expertise guiding what counts as an interesting difference (“effect size”) formalize this. EDIT The promised power calculation, which turned out to be an effect size calculation. library(pwr) n1 <- 100 n2 <- 100000 alpha = 0.05 power = 0.8 # find the effect size, d, for n=100 # pwr.t.test(n1, sig.level=alpha, power=power)$d # this is 0.3981407 # find the effect size, d, for n=100,000 # pwr.t.test(n2, sig.level=alpha, power=power)$d # this is 0.01252399 In this example, the test on only 100 subjects is able to detect a difference in mean of $0.398$ $80\%$ of the time. The test on 100,000 subjects is able to detect a difference of $0.013$ $80\%$ of the time. If you need a difference of at least $0.15$ in order for the findings to be interesting, then it isn't so helpful to get the 100,000-subject test going "ding ding ding, REJECT" every time it sees an observed effect of $0.013$. However, if the 100-subject test rejects, you can have more confidence that the effect size is large enough to be interesting. (That difference is number of standard deviations of the population of the group.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? (I think the phrasing is deliberately provocative.) If you have 10 observations and want to show that their mean is not zero, it will have to be quite a bit different from 0 if you want to have any ki
4,923
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Can you confirm that it is a FALSE statement in statistics I think the statement is phrased poorly. In this context, the word "significant" seems to have the flavor of "importance". Differences detected in smaller datasets are not somehow more important or meaningful by virtue of being detected in small datasets. Rather, differences detected in small datasets are often very large when compared to the inherent noise in the data (assuming the differences are not false positives or the result of some sort of bias), explaining why we detected them in the first place. The term "significant" has been overloaded in statistics, which often leads to confusion and misuse. I would not conclude from this that smaller datasets are better. Indeed, large data (or perhaps more appropriately, enough data) is better than small data because I can estimate what I want with sufficient precision. It's also worth noting that there are far more important things than sample size that go into medical research. So the buck doesn't stop with saying you detected a large difference. Now, I am not able to determine whether or not Dr. Raoult's statements re: Hydroxychloroquine are accurate. But, if his intention is to argue that differences detected in small groups are large (again, assuming differences are not false positives or the result of bias), then I can get behind that.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Can you confirm that it is a FALSE statement in statistics I think the statement is phrased poorly. In this context, the word "significant" seems to have the flavor of "importance". Difference detect
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Can you confirm that it is a FALSE statement in statistics I think the statement is phrased poorly. In this context, the word "significant" seems to have the flavor of "importance". Difference detected in smaller datasets are not somehow more important or meaningful by virtue of being detected in small datasets. Rather, differences detected in small datasets are often times very large when compared to the inherent noise in the data (assuming the differences are not false positives or the result of some sort of bias), explaining why we detected them in the first place. The term "significant" has been overloaded in statistics, which often leads to confusion and misuse. I would not conclude from this that smaller datasets are better. Indeed, large data (or perhaps more appropriately, enough data) is better than small data because I can estimate what I want with sufficient precision. Its also worth noting that there is far far more important things than sample size which go into medical research. So the buck doesn't stop with saying you detected a large difference. Now, I am not able to determine whether or not Dr. Raoul's statements re: Hydroxychloroquine are accurate. But, if his intention is to argue that differences detected in small groups are large (again, assuming differences are not false positives or the result of bias), then I can get behind that.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Can you confirm that it is a FALSE statement in statistics I think the statement is phrased poorly. In this context, the word "significant" seems to have the flavor of "importance". Difference detect
4,924
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
The quote in question seems to originate from marianne.net (in French) and, as it stands, is definitely wrong. But, as Demetri and Dave pointed out, with some language bending there might be some truth to it. In my understanding, Prof. Raoult confuses significance and effect size. In a small sample, the effect size has to be large (i.e. of practical relevance) to be statistically significant. In large samples, even very small effects, negligible for all practical purposes, can be statistically "significant". Just as a practical example: If the true effect of a drug is to prolong the life of a patient by, on average, one day, it is most likely useless for all practical purposes. In a small sample, say 20 persons, this small life extension will probably drown in the noise and wouldn't be noticeable at all. In a sample of $10^9$ persons, you might be able to see it. That doesn't mean that smaller samples are better. Just because you have found that the effect is non-zero doesn't mean that the hypothetical drug is worth its price (I assume there are some direct costs associated with it, and there are probably other opportunity costs). "Statistical significance" is not the right criterion for making decisions, and even the effect size isn't enough (although you should always look at it). Decision making always involves balancing costs and benefits. As for refuting the original statement: If a smaller data set is better, why don't we take the empty set, of size zero, and simply announce the result which is the most convenient to us?
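To put rough numbers on the "one extra day of life" example, here is a sketch using base R's power.t.test; the standard deviation of 100 days and the group sizes are my own illustrative assumptions, not figures from any study:
# True gain of 1 day, outcome SD of 100 days (both assumed for illustration),
# two-sample t-test at the 5% level, n = subjects per group
power.t.test(n = 20,  delta = 1, sd = 100, sig.level = 0.05)$power  # tiny; essentially undetectable
power.t.test(n = 1e5, delta = 1, sd = 100, sig.level = 0.05)$power  # roughly 0.6
power.t.test(n = 1e6, delta = 1, sd = 100, sig.level = 0.05)$power  # essentially 1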
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
The quote in question seems to originate from marianne.net (in French) and, as it stands, is definitely wrong. But, as Demetri and Dave pointed out, with some language bending there might be some trut
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? The quote in question seems to originate from marianne.net (in French) and, as it stands, is definitely wrong. But, as Demetri and Dave pointed out, with some language bending there might be some truth to it. In my understanding, Prof. Raoult confuses significance and effect size. In a small sample, the effect size has to be large (i.e. of practical relevance) to be statistically significant. In large samples, even very small effects, negligible for all practical purposes, can be statistically "significant". Just as a practical example: If the true effect of a drug is to prolong the life of a patient by, on average, one day, it is most likely useless for all practical purposes. In a small sample, say 20 persons, this small life extension will probably drown in the noise and wouldn't be noticeable at all. In a sample of $10^9$ persons, you might be able to see it. That doesn't mean that smaller samples are better. Just because you have found that the effect is non-zero doesn't mean that the hypothetical drug is worth its price (I assume there are some direct cost associated with it, and there are probably other opportunity costs). "Statistical significance" is not the right criterion for making decisions, and even the effect size isn't enough (although you should always look at it). Decision making always involves balancing costs and benefits. As of refuting the original statement: If a smaller data set is better, why don't we take the empty set, of size zero, and simply announce the result which is the most convenient to us?
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? The quote in question seems to originate from marianne.net (in French) and, as it stands, is definitely wrong. But, as Demetri and Dave pointed out, with some language bending there might be some trut
4,925
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
A smaller sample size is not better. A small sample size needs a more significant* result if you want to draw a conclusion from it. Let's consider some results and their interpretation: If your drug cures 30% of 10 people, the percentage of the general population cured could be anywhere between around 0% and 65% of people. If your drug cures 30% of 10000 people, you can be quite sure it actually cures around 30% of people (more specifically, between 29% and 31% of people). If your drug cures 100% of 10 people, you can be quite sure it would cure at least around 60% of people. If your drug cures 100% of 10000 people, you can be quite sure it actually cures around 100% of people. Note: the above probably misses a few details about control groups, side effects, hypothesis testing, etc. It's just meant to give a basic idea of what the numbers might look like. Now a one-line conclusion of a study could be "the drug likely cures some percentage of people" or "we don't know whether it cures anyone". A 10000-person study is going to end up saying "the drug likely cures some percentage of people" more often, even if the percentage is really tiny. A 10-person study will end up saying "we don't know whether it cures anyone" more often. When the 10-person study does end up saying "the drug likely cures some percentage of people", the percentage will generally be larger. When a 10000-person study says "we don't know whether it cures anyone", we can be pretty sure that it cures between 0% and a very, very tiny percentage of the population. Whereas with a 10-person study with the same conclusion, it could still cure a fairly large percentage. We just don't know yet. But the results themselves are not more significant. Note that above I didn't say "the results are more significant", but rather that you need more significant results. And I'm differentiating the results from the conclusion. The quote (without context) seems to imply a smaller sample provides a more useful result, which is blatantly false. This may not be what the author actually meant, but that's how I read it. The results from a large study allow us to be more sure of how effective something actually is, which is always more useful. The only thing that would be more significant would be a positive conclusion ("the drug likely works"), but taking one look at the actual percentages would still give you a lot more information from the large study. The only way in which a smaller sample would provide a more useful result is when people who don't know what they're doing misinterpret or misrepresent the result (by e.g. saying "the drug works" without also noting that it actually only works 1% of the time). This admittedly might happen a whole lot more often than it should in today's world with the media and social media. What about bias? If you have a very small sample size, you're much more likely to end up with a sample that isn't proportional to what the actual population looks like, and you might even miss out on some demographic altogether. In medicine there are many variables that could contribute to or alter the effects something has, so having an accurate representation of the population is quite important. If your data is too biased, your results would not be particularly useful. A bigger sample size doesn't automatically fix it, but does make it easier to avoid. *: this answer uses "significant" to mean "practically significant", not "statistically significant". As in "something that actually matters to the general public". Results from larger samples would generally be more statistically significant, as in something we can be more sure about.
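The ranges quoted at the top of this answer can be checked with exact binomial confidence intervals; here is a minimal R sketch (the precise bounds depend on which interval method is used, so treat the numbers as approximate):
# 95% confidence intervals for the cure rate, Clopper-Pearson (binom.test default)
binom.test(3, 10)$conf.int        # 30% of 10 cured: roughly 7% to 65%
binom.test(3000, 10000)$conf.int  # 30% of 10,000 cured: roughly 29% to 31%
binom.test(10, 10)$conf.int       # 100% of 10 cured: roughly 69% to 100%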
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
A smaller sample size is not better. A small sample size needs a more significant* result if you want to draw a conclusion from it. Let's consider some results and their interpretation: If your drug
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? A smaller sample size is not better. A small sample size needs a more significant* result if you want to draw a conclusion from it. Let's consider some results and their interpretation: If your drug cures 30% of 10 people, the percentage of the general population cured could be anywhere between around 0% and 65% of people. If your drug cures 30% of 10000 people, you can be quite sure it actually cures around 30% of people (more specifically, between 29% and 31% of people). If your drug cures 100% of 10 people, you can be quite sure it would cure around at least 60% of people. If your drug cures 100% of 10000 people, you can be quite sure it actually cures around 100% of people. Note: the above probably misses a few details about control groups, side effects, hypothesis testing, etc. It's just meant to give a basic idea of what the numbers might look like. Now a one-line conclusion of a study could be "the drug likely cures some percentage of people" or "we don't know whether it cures anyone". A 10000-person study is going to end up saying "the drug likely cures some percentage of people" more often, even if the percentage is really tiny. A 10-person study will end up saying "we don't know whether it cures anyone" more often. When the 10-person study does end up saying "the drug likely cures some percentage of people", the percentage will generally be larger. When a 10000-person study says "we don't know whether it cures anyone", we can be pretty sure that it cures between 0% and a very, very tiny percentage of the population. Whereas with a 10-person study with the same conclusion it could still cure a fairly large percentage. We just don't know yet. But the results themselves are not more significant. Note that above I didn't say "the results are more significant", but rather that you need more significant results. And I'm differentiating the results from the conclusion. The quote (without context) seems to imply a smaller sample provides a more useful result, when this is blatantly false. This may not be what the author actually meant, but that's how I read it. The results from a large study allows us to be more sure how effective something actually is, which is always more useful. The only thing that would be more significant would be a positive conclusion ("the drug likely works"), but taking one look at the actual percentages would still give you a lot more information for the large study. The only way in which a smaller sample would provide a more useful result is when people who don't know what they're doing misinterpret or misrepresent the result (by e.g. saying "the drug works" without also noting that it actually only works 1% of the time). This admittedly might happen a whole lot more often than it should in today's world with the media and social media. What about bias? If you have a very small sample size, you're much more likely to not have a sample that's proportional to what the actual population looks like, and you might even miss out on some demographic altogether. In medicine there are many variables that could contribute to or alter the effects something has, so having an accurate representation of the population is quite important. If your data is too biased, your results would not be particularly useful. A bigger sample size doesn't automatically fix it, but does make it easier to avoid. 
*: this answer uses "significant" to mean "practically significant" not "statistically significant". As in "something that actually matters to the general public". Results from larger samples would generally be more statistically significant, as in it's something we can be more sure about.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? A smaller sample size is not better. A small sample size needs a more significant* result if you want to draw a conclusion from it. Let's consider some results and their interpretation: If your drug
4,926
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
There are a few things that are true, and worth understanding for where the confusion might slip in. First, it is possible to get high levels of confidence from small samples, so long as the effect is sufficiently pronounced. For example, a treatment that goes from 10% control recovery to 90% experimental recovery will show up with a very good score even if you only have 20 samples. It will be better than a treatment going from 49.9% to 50.1% shown on a hundred samples. Of course, for the same treatment a bigger sample is still better, but a small sample may be enough. Second, a small sample being enough is much more likely if the effect is more pronounced. This is one of those all-too-common situations in statistics where things flip around depending on whether you're talking about before or after the experiment. The spread from randomness is larger in both directions with smaller samples. Effectively, to plan a demonstration of your hypothesis with confidence, you need enough margin that even if randomness goes against you, and everyone presumes randomness went for you, you can still show an effect. To do that you need to reduce the effect of randomness, either by having a large sample, or by having a strong effect. So if you're planning your experiment and expect the effect to be very strong, then you can afford to use a smaller sample (although you still shouldn't expect bonus points for doing so!). If you're planning an experiment and expect the effect to be subtle, then you'll need a much larger sample. What this does not mean is that a small sample ever implies a more trustworthy result. To someone assessing the research, a 10% shift in outcome shown with a sample of 1000 is strictly better than a 10% shift in outcome with a sample of 20. Strong effect implies small sample will (probably) be enough. Small sample does not imply a strong effect.
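A quick R sketch of the two scenarios in the first paragraph; the group sizes (10 per arm for the tiny trial, 5,000 per arm for the modest one) are illustrative assumptions chosen so the recovery rates work out to whole patients:
# Pronounced effect, tiny trial: 1/10 recover on control vs 9/10 on treatment
fisher.test(matrix(c(1, 9, 9, 1), nrow = 2))$p.value  # about 0.001
# Tiny effect, much bigger trial: 49.9% vs 50.1% recovery with 5,000 per arm
prop.test(c(2495, 2505), c(5000, 5000))$p.value       # nowhere near significant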
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
There are a few things that are true, and worth understanding for where the confusion might slip in. First, it is possible to get high levels of confidence from small samples, so long as the effect is
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? There are a few things that are true, and worth understanding for where the confusion might slip in. First, it is possible to get high levels of confidence from small samples, so long as the effect is sufficiently pronounced. For example, a treatment that goes from 10% control recovery to 90% experimental recovery will show up with a very good score even if you only have 20 samples. It will be better than a treatment going from 49.9% to 50.1% shown on a hundred samples. Of course, for the same treatment a bigger sample is still better, but a small sample may be enough. Second, a small sample being enough is much more likely if the effect is more pronounced. This is one of these all-too-common effects in statistics that things flip around depending on whether you're talking about before or after the experiment. The spread from randomness is larger in both directions with smaller samples. Effectively, to plan a demonstration of your hypothesis with confidence, you need enough margin that even if randomness goes against you, and everyone presumes randomness went for you, you can still show an effect. To do that you need to reduce the effect of randomness, either by having a large sample, or by having a strong effect. So if you're planning your experiment and expect the effect to be very strong, then you can afford to use a smaller sample. (although you still shouldn't expect bonus points for doing so!) If you're planning an experiment and expect the effect to be subtle, then you'll need a much larger sample. What this does not mean is that a small sample ever implies a more trustworthy result. To someone assessing the research, a 10% shift in outcome shown with a sample of 1000 is strictly better than a 10% shift in outcome with a sample of 20. Strong effect implies small sample will (probably) be enough. Small sample does not imply a strong effect.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? There are a few things that are true, and worth understanding for where the confusion might slip in. First, it is possible to get high levels of confidence from small samples, so long as the effect is
4,927
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
This statement is misleading because it is unclear what he means by significant. In the case of a clinical trial, what you want to show is that people are more likely to heal when given a test treatment than when given a placebo. So you have two (random) groups of equal size, one of which gets the treatment while the other gets a placebo. Then you observe the proportion of people who heal in each group, and this is where you must be careful what you call significant: Is there a large difference in the observed number of healing people in each group? (e.g. 3/4 of people healed in the test group vs 1/2 in the control group). This is purely based on one observation, and can be completely random due to natural variance (even with two placebo groups, one could by pure luck get more people healing than the other). For this definition of significant, smaller groups produce more significant results. Is the difference in observed healing rate (even if its absolute value is small) statistically significant? This does not mean that there is a large difference between the two groups, but that we can know with high confidence that this difference is not due to random fluctuations. With a very large group, you can observe a very small difference in healing rate (e.g. 54% vs 55%) and still know it is not by chance. This is achieved using statistical techniques such as the Central Limit Theorem (CLT). For this definition of significant, larger groups produce more significant results. Is the difference large in absolute value given that the test is statistically significant? i.e. "Given that we know our results are not due to pure luck, are they of valuable practical use?". As some have already said, if you somehow manage to obtain a statistically significant result with a small sample, it is likely that your difference in absolute value is pretty large, because a small sample size wouldn't be able to detect a small difference in a statistically significant way. Also, while it is technically possible, be careful when checking statistical significance on small sample sizes, because the usual asymptotic theorems such as the aforementioned CLT don't apply (I'm sure some happily use them anyway...). For this definition, smaller groups produce more significant results, but this is a case I wouldn't expect to encounter often, and I would be careful. Hence, depending on which definition the author is using, he could be right or wrong. If he is using the first one, he is technically right but this number alone is useless in practice; if he's using the second one, he is simply wrong; and if he is using the third one, he is technically right but I still find it kind of suspicious for the reasons I mentioned.
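A short R sketch of the second definition above: a 54% vs 55% healing rate is practically tiny but becomes statistically unambiguous once the groups are huge (the per-group sizes here are my own illustrative choices):
# 54% vs 55% healing with one million patients per group: statistically clear
prop.test(c(540000, 550000), c(1e6, 1e6))$p.value  # vanishingly small p-value
# The same observed rates with only 200 patients per group: inconclusive
prop.test(c(108, 110), c(200, 200))$p.value        # large p-value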
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
This statement is misleading because it is unclear what he means by significant. In the case of a clinical trial, what you want to show is that people are more likely to heal when given a test treatme
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? This statement is misleading because it is unclear what he means by significant. In the case of a clinical trial, what you want to show is that people are more likely to heal when given a test treatment than when given a placebo. So you have two (random) groups of equal size, one of which gets a the treatment while the other gets a placebo. Then you observe the proportion of people who heal in each group, and this is where you must be careful what you call significant : Is there a large difference in the observed number of healing people in each group?(e.g. 3/4 of people healed in test group vs 1/2 in control group). This is purely based on one observation, and can be completely random due to natural variance (even with two placebo groups, one could by pure luck get more people healing than the other). For this definition of significant, smaller groups produce more significant results. Is the difference in observed healing rate (even if its absolute value is small) statistically significant ? This does not mean that there is a large difference between the two groups, but that we can know with high confidence that this difference is not due to random fluctuations. With a very large group, you can observe a very small difference in healing rate (e.g. 54% vs 55%) and still know it is not by chance. This is achieved using statistical techniques such as the Central Limit Theorem (CLT). For this definition of significant, larger groups produce more significant results. Is the difference large in absolute value given that the test is statistically significant? i.e. "Given that we know our results are not due to pure luck, are they of valuable practical use?". As some have already said, if you somehow manage to obtain a statistically significant result with a small sample, it is likely that your difference in absolute value is pretty large because small sample size wouldn't be able to detect a small difference in a statistically significant way. Also, while it is technically possible, be careful when checking statistical significance on small sample size because the usual asymptotic theorems such as the aforementioned CLT don't apply (I'm sure some happily use them anyway...). For this definition, smaller groups produce more significant but this is a case I wouldn't expect to encounter often, and I would be careful. Hence, depending on which definition the author is using, he could be right or wrong. If he is using the first one, he is technically right but this number alone is useless in practice; if he's using the second one, he is simply wrong; and if he is using the third one, he is technically right but I still find it kind of suspicious for the reasons I mentioned.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? This statement is misleading because it is unclear what he means by significant. In the case of a clinical trial, what you want to show is that people are more likely to heal when given a test treatme
4,928
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
A smaller sample size is definitely not better than a larger one. Other answers do a good job of parsing what he might have meant (e.g., effect size, etc.). However, these miss the fact that the underlying effect is the same whether we use a larger or smaller sample size. This is more obvious if we look at a binary outcome (e.g., survived vs died). What happens if half of all patients die of the disease and the treatment has no effect? If we only sample three individuals in each group (treated and not), then about 1.6% of the time, all three non-treated individuals will die and all three treated individuals will survive; an additional 4.7% of the time, all three non-treated individuals will die and two treated individuals will survive. So, about 6% of the time that you run this study, it would look like the treatment had a huge impact. (6% of the time you would get the reverse result, but then the bias of the File Drawer Problem rears its head; p-hacking by adding samples for in-between results introduces yet another bias.) That risk gets less severe as the sample size increases. Even with only 10 samples of each, the odds of seeing all non-treated die and all treated survive is only 0.0000954%. The risk of a false-positive remains the same (because that is how p-values are defined), but we would have far better confidence in the estimate of the effect size (this is why confidence intervals shrink with increasing sample size). That confidence in the effect size is crucial, particularly for assessing the risk-reward tradeoff of a treatment with side effects as severe as those of hydroxychloroquine, and when rationing is already leading to problems from patients no longer able to access the drug for treatment of conditions that we know are mitigated by hydroxychloroquine (e.g., lupus).
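The probabilities above come straight from the binomial distribution; a minimal R sketch recomputing them (assuming, as in the answer, a 50% death rate and no treatment effect):
# Groups of 3 per arm, each patient dies with probability 0.5 independently
p_all_untreated_die <- dbinom(3, 3, 0.5)          # 0.125
p_all_untreated_die * dbinom(3, 3, 0.5)           # ...and all 3 treated survive: ~0.016
p_all_untreated_die * dbinom(2, 3, 0.5)           # ...and exactly 2 of 3 treated survive: ~0.047
# Groups of 10 per arm: all untreated die and all treated survive
dbinom(10, 10, 0.5) * dbinom(10, 10, 0.5)         # ~9.5e-07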
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
A smaller sample size is definitely not better than a larger one. Other answers do a good job of parsing what he might have meant (e.g., effect size, etc.). However, these miss the fact that the under
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? A smaller sample size is definitely not better than a larger one. Other answers do a good job of parsing what he might have meant (e.g., effect size, etc.). However, these miss the fact that the underlying effect is the same whether we use a larger or small sample size. This is more obvious if we look at a binary outcome (e.g., survived vs died). What happens if half of all patients die of the disease and the treatment has no effect? If we only sample three individuals in each group (treated and not), then 1.5% of the time, all three non-treated individuals will die and all three treated individuals will survive; an additional 4.5% of the time, all three non-treated individuals will die and two treated individuals will survive. So, about 6% of the time that you run this study, it would look like the treatment had a huge impact. (6% of the time you would get the reverse result, but then the bias of the File Drawer Problem rears its head; p-hacking by adding samples for in-between results introduce yet another bias.) That risk gets less severe as the sample size increases. Even with only 10 samples of each, the odds of seeing all non-treated die and all treated survive is only 0.0000954%. The risk of a false-positive remains the same (because that is how p-values are defined), but we would have far better confidence in the estimate of the effect size (this is why confidence intervals shrink with increasing sample size). That confidence in the effect size is crucial, particularly for assessing the risk-reward tradeoff of a treatment with side-effects as severe as hydroxychloroquine and when rationing is already leading to problems from patients no longer able to access the drug for treatment of conditions that we know are mitigated by hydroxychloroquine (e.g., lupus).
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? A smaller sample size is definitely not better than a larger one. Other answers do a good job of parsing what he might have meant (e.g., effect size, etc.). However, these miss the fact that the under
4,929
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Before really answering the question, I have to point out that the study being discussed was a non-randomized, open-label study where the controls were possibly in a different facility than the treatment subjects, and deaths, ICU transfers, and dropouts due to side effects were excluded from the treatment group. The poor outcomes (it appears) came from the treatment group, but they based the analysis on a surrogate endpoint - PCR detection of the virus. The use of a 6-day endpoint also seems post hoc. Thus I'd not be surprised if there were a LARGE amount of bias in the results. So, the comments on sample size perhaps hold, but only if an incomplete analysis is performed. Usually, when you get statistically significant results (or even if you don't), you should follow up with confidence intervals for interesting effects. Dr. Raoult's argument is that tiny errors or biases in a study with huge sample sizes will cause the null hypothesis to be rejected despite a tiny effect size that could be due to poor procedures. In a small sample, bias due to poor procedures is unlikely to cause the null to be rejected (if it were true), so a rejection would be associated with an apparently large estimated effect size. This argument falls apart immediately when you follow up with a confidence interval. For the situation where there is a tiny bias and a large sample size (and the null is true), you'll get a narrow C.I. around a result that is apparently not practically significant. With the small sample size, you'll get a very wide confidence interval, leaving little faith in the results at all. If Dr. Raoult's thoughts on sample size were correct, then an equally good procedure would be to add a large amount of noise to your dataset to decrease the chance of accidental rejection of the null hypothesis in the presence of small experimental bias (a small sample size was advocated for just this reason). This doesn't appear especially wise.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Before really answering the question, I have to point out that the study being discussed was a non-randomized open label study where the controls were possibly in a different facility than the treatme
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Before really answering the question, I have to point out that the study being discussed was a non-randomized open label study where the controls were possibly in a different facility than the treatment subjects, and they tossed out dead, ICU patients and dropouts due to side effects from the treatment group. The poor outcomes (it appears) came from the treatment group, but they based the analysis on a surrogate endpoint - PCR detection of viruses. The use of a 6-day endpoint also seems post hoc. Thus I'd not be surprised if there were a LARGE amount of bias in the results. So, the comments on sample size perhaps hold, but only if an incomplete analysis is performed. Usually, when you get statistically significant results (or even if you don't), you should follow up with confidence intervals for interesting effects. Dr. Raoult's argument is that tiny errors or biases in a study with huge sample sizes will cause the null hypothesis to be rejected despite a tiny effect size that could be due to poor procedures. In a small sample size, bias due to poor procedures is unlikely to cause the null to be rejected (if it were true), so a rejection would be associated with an apparent large estimated effect size. This argument falls apart immediately when you follow up with a confidence interval. For the situation where there is a tiny bias and a large sample size (and the null is true), you'll get a narrow C.I. around a result that is apparently not practically significant. With the small sample size, you'll get a very wide confidence interval leaving little faith in the results at all. If Dr. Raoult's thoughts on sample size were correct, then an equally good procedure would be to add a large amount of noise to your dataset to decrease the chance of accidental rejection of the null hypothesis in the presence of small experimental bias (a small sample size was advocated for just this reason). This doesn't appear especially wise.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Before really answering the question, I have to point out that the study being discussed was a non-randomized open label study where the controls were possibly in a different facility than the treatme
4,930
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
First of all I would like to state the following: Biostatistics is a really difficult field; many biostatisticians are better data scientists than people with a maths background. Biostatistics has created its own tools that we can use today. The experiments they run must be tightly regulated (at least from a pharmaceutical point of view). Now imagine a trait that really differs between all of us. Let's experiment on height. That trait does indeed have a high variance in the population. Will it follow a normal distribution? Of course yes, we are talking about 7 billion people. Now take 10 people from 10 different countries, making sure to include at least one country from each continent. You take the estimated world median and run a simple Mann-Whitney test to see if the estimated median is statistically different from your sample's median. There are 2 possible results: There is indeed a significant difference, so your stratified sample is not so effective. There is no difference whatsoever and the two medians seem to be the same (a large p-value). In the first case, there is no one who can argue with your experiment: it was wrong all along to test a sample with so few participants. In the second case there is a really good question to be answered: if you repeat your experiment about 20 times and get the exact same result, is this series of experiments better than an experiment with 10,000 participants? And if so, what does this mean about our variable? Well, in the highly unlikely case that the 20 repeated experiments all give this result, it is really time to scratch your head. We just used a good sampling technique and nothing fancy, so we are aware of the whole procedure and we cannot see any bugs or grey areas, and on top of that the experiment has given us the statistically correct result! But wait: what if we had taken 10,000 people from different countries while using the same sampling technique, stratified sampling? You run the test again (we can still use Mann-Whitney, although the data should follow a normal distribution and a t-test would be more powerful) and you see that the test says something unexpected: the median of your sample is not the same as that of the population! You could go home and be done with statistics in this hypothetical scenario; however, there should be an explanation. Which is pretty simple. Variables with high variance have huge numbers of outliers, and many tests (like Mann-Whitney or the t-test) are really susceptible to them. You have taken 10,000 people; how many of them do you expect to be outliers? Now consider a pharmaceutical experiment and the underlying variables (genes, environment, food, etc.). You would have to account for all the unique outliers that exist out there to have the best experiment, which is practically impossible. So what do we gain from a smaller sample? We are granted the holy grail of riskiness. I am not kidding. A smaller sample is likely to be so variable that there is no way to compare it with any distribution and sleep well at night. However, if you can see a pattern in such a small (and always random, with a really careful sampling method) sample, there is really good potential to your theory. So our problem is not whether or not the test is really significant (and I am starting to hear your complaints, but hear me out); it is about how reliable the sample is. That's why your everyday painkiller has a telephone number on it. Because despite the large sample of the experiments, anyone could still be in the outlier zone that the model does not explain. So the statement is really correct. The question is how good their sampling method was. So what about his research? Well, I read that it does not meet the [International Society of Antimicrobial Chemotherapy’s] expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. found here so the problem for the International Society of Antimicrobial Chemotherapy was not the statistical method, but rather the sampling method they used.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
First of I would like to state the following: Biostatistics is a really difficult field; many biostatisticians are better data scientists than people with a maths background. Biostatistics has create
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? First of I would like to state the following: Biostatistics is a really difficult field; many biostatisticians are better data scientists than people with a maths background. Biostatistics has create it own tools that we can use today. The experiments they make must be really regulated (at least from pharmaceutical view) Now imagine a trait that is really different between all of us. Let's experiment on height. That trait has indeed a high variance in the population. Will it follow a normal distribution? Of course yes we are talking about 7 billion people. Now take 10 people from 10 different countries but you have to choose a country of at least one continent. You take the estimated world median and run a simple Mann-Whitney test to see if the estimated median is statistically different from your sample's median. There are 2 possible results: There is indeed a signifigant difference, so your stratified sample is not so effective There is no difference whatsoever and the two medians seem to be the same with p-value=.0001 In the first case, there is noone that can argue with your experiment: it was wrong all along to test a sample with so few participants. In the second case there is a really good question to be answered: if you repeat your experiment about 20 times and have the exact same result; is this series of experiments better than an experiment with 10,000 participants? And if so, what does this means about our variable? Well in the highly unlikely case of the 20 repeated experiments to be significant, it is really to scratch your head. We just used a good sampling technique and nothing fancy so we are aware of the whole procedure and we cannot see any bugs or shade areas and on the top of that the experiment had given us the statistically correct result! But wait what if we had taken 10,000 people from different countries while having in mind the same sampling technique: stratified sampling. You ran the test again (we can still use Mann-Whitney although they should follow normal distribution and t-test is more powerful) and you see that the test says something unpredictable: the median of your sample is not the same as this of the population! You can go home and be done with statistics in this hypothetical scenario, however there should be an explanation. Which is pretty simple. Variables with high variances have huge amount of outliers and many tests (like Mann-Whitney or t-test) are really susceptible to them. You have taken 10,000 people; how many of them do you expect to be an outlier? Now consider a pharmaceutical experiment and the underlying variables (genes, environment, food etc etc). You should consider all these unique outliers that exists out there to have the best experiment, which is practically impossible. So what do we gain from a smaller sample? We are granted with the holy grail of riskiness. I am not kidding. A smaller sample is likely to be so variant that there is no way to compare it with any distribution and sleep well at night. However if you can see a pattern in such a small (and always random with a really careful sampling method) sample there is really good potential to your theory. So our problem is not wether or not the test is really significant or not (and I am starting to hearing your complaints but hear me out) is about how reliable is the sample. That's why your everyday painkiller has a telephone number on it. 
Because despite the large sample of the experiments anyone could still be in the outlier zone that the created model does not explain. So the statement is really correct. The question is how good was their sampling method. So what about his research? Well I read that it does not meet the [International Society of Antimicrobial Chemotherapy’s] expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety. found here so the problem of the International Society of Antimicrobial Chemotherapy’s was not the statistical method, rather the sampling method they used.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? First of I would like to state the following: Biostatistics is a really difficult field; many biostatisticians are better data scientists than people with a maths background. Biostatistics has create
4,931
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
tl;dr– It sounds like they're arguing that smaller data sets are superior because larger data sets imply $p$-hacking or/and a less observable finding. But, obviously, doing a particular experiment with more data is better than with less data when the data analysis is done correctly. Translating the argument. Raw argument: It's counterintuitive, but the smaller the sample size of a clinical test, the more significant its results are. The differences in a sample of 20 people may be more significant than in a sample of 10,000 people. If we need such a sample, there is a risk of being wrong. With 10,000 people, when the differences are small, sometimes they don't exist. Initial paraphrasing pass: The less data collected, the better the findings are. For example, findings based on 20 data points can be better than findings based on 10,000 data points. Because, if you get 10,000 data points, that implies that you couldn't find what you were looking for with just 20 data points. Even if you do eventually find something with such a large data set, it's probably going to be a smaller effect that might not even exist. Rewriting the entire thing: Results are better when they're based on smaller data sets. The problem with large data sets is that they imply that the researchers failed to find the effect with a smaller data set, forcing them to resort to collecting more data. So, larger data sets imply weaker, less significant findings. They seem to have two arguments in favor of smaller data sets: Smaller data sets imply that the studied effect was more observable. Larger data sets suggest that the researchers may've kept collecting data until they found the result they wanted, i.e. "optional stopping" as described in this question. Of course, this argument completely fails to address the fact that, all else held equal, more data is superior to less. For example, even if they think that 20 data points was sufficient to find some effect, clearly 10,000 data points would be better.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
tl;dr– It sounds like they're arguing that smaller data sets are superior because larger data sets imply $p$-hacking or/and a less observable finding. But, obviously, doing a particular experiment w
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? tl;dr– It sounds like they're arguing that smaller data sets are superior because larger data sets imply $p$-hacking or/and a less observable finding. But, obviously, doing a particular experiment with more data is better than with less data when the data analysis is done correctly. Translating the argument. Raw argument: It's counterintuitive, but the smaller the sample size of a clinical test, the more significant its results are. The differences in a sample of 20 people may be more significant than in a sample of 10,000 people. If we need such a sample, there is a risk of being wrong. With 10,000 people, when the differences are small, sometimes they don't exist. Initial paraphrasing pass: The less data collected, the better the findings are. For example, findings based on 20 data points can be better than findings based on 10,000 data points. Because, if you get 10,000 data points, that implies that you couldn't find what you were looking for with just 20 data points. Even if you do eventually find something with such a large data set, it's probably going to be a smaller effect that might not even exist. Rewriting the entire thing: Results are better when they're based on smaller data sets. The problem with large data sets is that they imply that the researchers failed to find the effect with a smaller data set, forcing them to resort to collecting more data. So, larger data sets imply weaker, less significant findings. They seem to have two arguments in favor of smaller data sets: Smaller data sets imply that the studied effect was more observable. Larger data sets suggest that the researchers may've kept collecting data until they found the result they wanted, i.e. "optional stopping" as described in this question. Of course, this argument completely fails to address the fact that, all else held equal, more data is superior to less. For example, even if they think that 20 data points was sufficient to find some effect, clearly 10,000 data points would be better.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? tl;dr– It sounds like they're arguing that smaller data sets are superior because larger data sets imply $p$-hacking or/and a less observable finding. But, obviously, doing a particular experiment w
4,932
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
The requirement that sample sizes be a certain size to have confidence in statistical inference stems, I think, from the empirical law that as you take more random SAMPLES, the average of the MEANS converges to the actual population mean (the law of large numbers). But I've heard that, in order to be valid, a sample size greater than 32 is required for all samples. There are, however, other methods used for small sample sizes; you have to use the correct inferential statistics. But I do not know how the convergence behaves when the sample sizes are affected. I would think you need more data, and there might be some notion of conservation relating the error, the number of samples, and the sample size. Edit: after some simple algebra you can see that if you have m samples of size n, you should get the same mean as one sample of size mn. Also, it could be true for some distributions that the error may increase; it is only when very large samples are taken that they are more likely to be inside the required intervals. So it seems it could actually depend on the distribution itself. But often you can't know that. Also, usually we use sample statistics to infer population statistics, not individual means or cases, because then it is simply the probability described by the unknown population and we can only guess. Even if we were right, you can only know something to the accuracy the probability distribution allows. But in statistics, when we talk about statistics like the population mean, we actually can get to the desired accuracy. Also, from a data perspective, using samples we can throw away the raw data and store only the sample means. But as for this question, I think it is hard to say what this person actually means; it is not a simple, clear, concrete statement. But if he is saying that smaller samples are more accurate, that could be plausible in some sense, but if any statistics are of use then much larger samples would ultimately be best.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
The requirement that sample sizes be a certain size to have statistical inference confidence stems from I think the emperical law. And that is as you take more random SAMPLES the average of the MEANS
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? The requirement that sample sizes be a certain size to have statistical inference confidence stems from I think the emperical law. And that is as you take more random SAMPLES the average of the MEANS converge to the actual population mean. But I've heard in order to be valid a sample size of greater than 32 is required for all samples. But there are other methods used for small sample sizes. You have to use the correct inferential statistics. But I do not know how sample size converge when the sample sizes are effected. I would think you need more data and there might be some notion of conservation relating error and number of samples, and sample size. Edit after some simple algebra can see that if you have m samples of size n you should get same mean as one sample of size mn. Also it could be true for some random distribution that the error may oh increase its only when very large samples are taken that they have more likely hood of being inside the required intervals. So it seems it could depend actually on the distribution itself. But often you can't know that. Also usually we use sample statistics to infer population statistics, not individual means or cases. Because then it is simple the probability described by the unknown population and we can only guess. Even if were right you can only know something to accuracy the probability distribution alows. But in statistics when we talk about statistics like population mean, we actually can get to the desired accuracy. Also from data perspective using samples we can throw away the data and store sample means. But as for this question I think it is hard to say what this person actually means, it not simple and clear concrete statement. But if he is saying that smaller samples are more accurate, it could be plausible but if any statistics are of use then much larger samples would ultimately be best.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? The requirement that sample sizes reach a certain size before we trust statistical inference stems, I think, from the law of large numbers: as you take more random samples, the average of the sample
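To make the sample-size discussion above concrete, here is a minimal R sketch (my own code; the normal population and all numbers are chosen purely for illustration, not taken from the question). It checks two of the points made in the answer: pooling m samples of size n reproduces the mean of one sample of size m*n, and the spread of the sample mean shrinks like 1/sqrt(n), which is why larger samples give tighter interval estimates.

```r
set.seed(42)
pop_mean <- 50; pop_sd <- 10          # hypothetical population, assumed normal

# Point 1: m samples of size n have the same overall mean as one sample of size m*n
m <- 20; n <- 50
x <- rnorm(m * n, pop_mean, pop_sd)
chunk_means <- colMeans(matrix(x, nrow = n, ncol = m))
mean(chunk_means)                     # average of the m sample means
mean(x)                               # mean of the single pooled sample -- identical

# Point 2: the standard error of the sample mean shrinks like 1/sqrt(n)
se_of_mean <- function(n, reps = 5000)
  sd(replicate(reps, mean(rnorm(n, pop_mean, pop_sd))))
sapply(c(10, 100, 1000), se_of_mean)  # roughly 3.16, 1.00, 0.32
```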
4,933
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Dr. Raoult's statement is false. A bigger dataset (on your domain of discourse subject) is always better. It yields a better average, thus better certainty. You may apply the principle of charity. Probably what he is trying to say is: A small (but most significant) sample set is better than a bigger (but less significant). Imagine you are sampling hydroxychloroquine efficacy in treating patients with Covid-19, but you sample randomly (without testing for Covid-19). This will yield a misleading average. From a scientific point of view, in general, you should ignore non-scientific press.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Dr. Raoult's statement is false. A bigger dataset (on your domain of discourse subject) is always better. It yields a better average, thus better certainty. You may apply the principle of charity. Pro
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Dr. Raoult's statement is false. A bigger dataset (on your domain of discourse subject) is always better. It yields a better average, thus better certainty. You may apply the principle of charity. Probably what he is trying to say is: A small (but most significant) sample set is better than a bigger (but less significant). Imagine you are sampling hydroxychloroquine efficacy in treating patients with Covid-19, but you sample randomly (without testing for Covid-19). This will yield a misleading average. From a scientific point of view, in general, you should ignore non-scientific press.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Dr. Raoult's statement is false. A bigger dataset (on your domain of discourse subject) is always better. It yields a better average, thus better certainty. You may apply the principle of charity. Pro
4,934
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Yes, a smaller sample CAN be more informative due to the confluence of similar significant discriminating attributes. For example, a small sample of Detroit African Americans with similar income levels and associated access to healthcare, occupations, stress, diets and pre-existing conditions (high blood pressure,...) is better than a 100,000 sample of Americans where the confluence of these attributes is rare. This is a problem in model design and conformity of the data to underlying assumptions of independent and identically distributed observations. More is not always better; my field experience confirms 'garbage in' does result in 'garbage out', where increasing the sample size is not a solution nor is postulating explanatory variables after seeing the data.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly?
Yes, a smaller sample CAN be more informative due to the confluence of similar significant discriminating attributes. For example, a small sample of Detroit African Americans with similar income level
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Yes, a smaller sample CAN be more informative due to the confluence of similar significant discriminating attributes. For example, a small sample of Detroit African Americans with similar income levels and associated access to healthcare, occupations, stress, diets and pre-existing conditions (high blood pressure,...) is better than a 100,000 sample of Americans where the confluence of these attributes is rare. This is a problem in model design and conformity of the data to underlying assumptions of independent and identically distributed observations. More is not always better; my field experience confirms 'garbage in' does result in 'garbage out', where increasing the sample size is not a solution nor is postulating explanatory variables after seeing the data.
A smaller dataset is better: Is this statement false in statistics? How to refute it properly? Yes, a smaller sample CAN be more informative due to the confluence of similar significant discriminating attributes. For example, a small sample of Detroit African Americans with similar income level
4,935
Famous easy to understand examples of a confounding variable invalidating a study
Coffee Drinking & Lung Cancer My favorite example is that supposedly, "coffee drinkers have a greater risk of lung cancer", despite most coffee drinkers... well... drinking coffee, rather than inhaling it. There have been various studies about this, but the consensus remains that studies with this conclusion usually just have a larger proportion of smoking coffee drinkers, than non-smoking coffee drinkers. In other words, the effect of smoking confounds the effect of coffee consumption, if not included in the model. The most recent article on this I could find is a meta analysis by Vania Galarraga and Paolo Boffetta (2016).$^\dagger$ The Obesity Paradox Another example that plagues clinical research, is the claim that obesity can be beneficial for certain diseases. Specifically, many articles, still to this day (just do a quick search for obesity paradox on pubmed and be amazed), claim the following: While a higher BMI increases the risk of diabetes, cardiovascular disease and certain types of cancer, once a patient already has the disease, a higher BMI is associated with lower rates of major adversarial events or death. Why does this happen? Obesity is defined as excess fat negatively affecting health, yet we classify obesity based on BMI. BMI is just calculated as: $$\text{BMI} = \frac{\text{weight in kg}}{(\text{height in m})^2},$$ so the most direct way to combat obesity is through weight loss (or by growing taller somehow). Regimes that focus on loss of weight rather than fat, tend to result in a proportionally large loss of muscle. This is likely what causes lower BMI to be associated with a higher rate of major adversarial events. Because many studies do not include measures of body fat (percentage), but only BMI as a proxy, the amount of body fat confounds the effect of BMI on health. A nice review of this phenomenon was written by Steven G. Chrysant (2018).$^\ddagger$ He ends with: [B]ased on the recent evidence, the obesity paradox is a misnomer and could convey the wrong message to the general public that obesity is not bad. Followed by: Journals [should] no longer accept articles about the 'obesity paradox'. $\dagger$: Vania Galarraga and Paolo Boffetta (2016): Coffee Drinking and Risk of Lung Cancer—A Meta-Analysis. Cancer Epidemiol Biomarkers Prev June 1 2016 (25) (6) 951-957; DOI: 10.1158/1055-9965.EPI-15-0727 $\ddagger$: Steven G. Chrysant (2018): Obesity is bad regardless of the obesity paradox for hypertension and heart disease. J Clin Hypertens (Greenwich). 2018 May;20(5):842-846. doi: 10.1111/jch.13281. Epub 2018 Apr 17. Examples of (poor) studies claiming to have demonstrated the obesity paradox: McAuley et al. (2018): Exercise Capacity and the Obesity Paradox in Heart Failure: The FIT (Henry Ford Exercise Testing) Project Weatherald et al. (2018): The association between body mass index and obesity with survival in pulmonary arterial hypertension Patel et al. (2018): The obestiy paradox: the protective effect of obesity on right ventricular function using echocardiographic strain imaging in patients with pulmonary hypertension Articles refuting the obesity paradox as a mere confounding effect of body fat: Lin et al. (2017): Impact of Misclassification of Obesity by Body Mass Index on Mortality in Patients With CKD Leggio et al. (2018): High body mass index, healthy metabolic profile and low visceral adipose tissue: The paradox is to call it obesity again Medina-Inojosa et al. 
(2018): Association Between Adiposity and Lean Mass With Long-Term Cardiovascular Events in Patients With Coronary Artery Disease: No Paradox Flegal & Ioannidis (2018): The Obesity Paradox: A Misleading Term That Should Be Abandoned Articles about the obesity paradox in cancer: Cespedes et al. (2018): The Obesity Paradox in Cancer: How Important Is Muscle? Caan et al. (2018): The Importance of Body Composition in Explaining the Overweight Paradox in Cancer-Counterpoint
Famous easy to understand examples of a confounding variable invalidating a study
Coffee Drinking & Lung Cancer My favorite example is that supposedly, "coffee drinkers have a greater risk of lung cancer", despite most coffee drinkers... well... drinking coffee, rather than inhalin
Famous easy to understand examples of a confounding variable invalidating a study Coffee Drinking & Lung Cancer My favorite example is that supposedly, "coffee drinkers have a greater risk of lung cancer", despite most coffee drinkers... well... drinking coffee, rather than inhaling it. There have been various studies about this, but the consensus remains that studies with this conclusion usually just have a larger proportion of smoking coffee drinkers, than non-smoking coffee drinkers. In other words, the effect of smoking confounds the effect of coffee consumption, if not included in the model. The most recent article on this I could find is a meta analysis by Vania Galarraga and Paolo Boffetta (2016).$^\dagger$ The Obesity Paradox Another example that plagues clinical research, is the claim that obesity can be beneficial for certain diseases. Specifically, many articles, still to this day (just do a quick search for obesity paradox on pubmed and be amazed), claim the following: While a higher BMI increases the risk of diabetes, cardiovascular disease and certain types of cancer, once a patient already has the disease, a higher BMI is associated with lower rates of major adversarial events or death. Why does this happen? Obesity is defined as excess fat negatively affecting health, yet we classify obesity based on BMI. BMI is just calculated as: $$\text{BMI} = \frac{\text{weight in kg}}{(\text{height in m})^2},$$ so the most direct way to combat obesity is through weight loss (or by growing taller somehow). Regimes that focus on loss of weight rather than fat, tend to result in a proportionally large loss of muscle. This is likely what causes lower BMI to be associated with a higher rate of major adversarial events. Because many studies do not include measures of body fat (percentage), but only BMI as a proxy, the amount of body fat confounds the effect of BMI on health. A nice review of this phenomenon was written by Steven G. Chrysant (2018).$^\ddagger$ He ends with: [B]ased on the recent evidence, the obesity paradox is a misnomer and could convey the wrong message to the general public that obesity is not bad. Followed by: Journals [should] no longer accept articles about the 'obesity paradox'. $\dagger$: Vania Galarraga and Paolo Boffetta (2016): Coffee Drinking and Risk of Lung Cancer—A Meta-Analysis. Cancer Epidemiol Biomarkers Prev June 1 2016 (25) (6) 951-957; DOI: 10.1158/1055-9965.EPI-15-0727 $\ddagger$: Steven G. Chrysant (2018): Obesity is bad regardless of the obesity paradox for hypertension and heart disease. J Clin Hypertens (Greenwich). 2018 May;20(5):842-846. doi: 10.1111/jch.13281. Epub 2018 Apr 17. Examples of (poor) studies claiming to have demonstrated the obesity paradox: McAuley et al. (2018): Exercise Capacity and the Obesity Paradox in Heart Failure: The FIT (Henry Ford Exercise Testing) Project Weatherald et al. (2018): The association between body mass index and obesity with survival in pulmonary arterial hypertension Patel et al. (2018): The obestiy paradox: the protective effect of obesity on right ventricular function using echocardiographic strain imaging in patients with pulmonary hypertension Articles refuting the obesity paradox as a mere confounding effect of body fat: Lin et al. (2017): Impact of Misclassification of Obesity by Body Mass Index on Mortality in Patients With CKD Leggio et al. (2018): High body mass index, healthy metabolic profile and low visceral adipose tissue: The paradox is to call it obesity again Medina-Inojosa et al. 
(2018): Association Between Adiposity and Lean Mass With Long-Term Cardiovascular Events in Patients With Coronary Artery Disease: No Paradox Flegal & Ioannidis (2018): The Obesity Paradox: A Misleading Term That Should Be Abandoned Articles about the obesity paradox in cancer: Cespedes et al. (2018): The Obesity Paradox in Cancer: How Important Is Muscle? Caan et al. (2018): The Importance of Body Composition in Explaining the Overweight Paradox in Cancer-Counterpoint
Famous easy to understand examples of a confounding variable invalidating a study Coffee Drinking & Lung Cancer My favorite example is that supposedly, "coffee drinkers have a greater risk of lung cancer", despite most coffee drinkers... well... drinking coffee, rather than inhalin
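The coffee-and-smoking story above is easy to reproduce with a small simulation. This is only a sketch with made-up effect sizes (smoking raises both the probability of drinking coffee and the risk of lung cancer, while coffee itself has no effect at all); it is not based on the cited studies.

```r
set.seed(1)
n <- 1e5
smoker <- rbinom(n, 1, 0.3)
coffee <- rbinom(n, 1, ifelse(smoker == 1, 0.8, 0.4))          # smokers drink more coffee
cancer <- rbinom(n, 1, plogis(-4 + 2 * smoker + 0 * coffee))   # only smoking matters

# Naive model: coffee appears to be a risk factor
coef(summary(glm(cancer ~ coffee, family = binomial)))

# Adjusting for the confounder removes the spurious coffee effect
coef(summary(glm(cancer ~ coffee + smoker, family = binomial)))
```

The same mechanism underlies the obesity-paradox example: leaving the true driver out of the model (body fat there, smoking here) lets its effect masquerade as an effect of the measured proxy.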
4,936
Famous easy to understand examples of a confounding variable invalidating a study
You might want to introduce Simpson's Paradox. The first example on that page is the UC Berkeley gender bias case where it was thought that there was gender bias (towards males) in admissions when looking at overall acceptance rates, but this was eliminated or reversed when investigated by department. The confounding variable of department picked up on a gender difference in applying to more competitive departments.
Famous easy to understand examples of a confounding variable invalidating a study
You might want to introduce Simpson's Paradox. The first example on that page is the UC Berkeley gender bias case where it was thought that there was gender bias (towards males) in admissions when lookin
Famous easy to understand examples of a confounding variable invalidating a study You might want to introduce Simpson's Paradox. The first example on that page is the UC Berkeley gender bias case where it was thought that there was gender bias (towards males) in admissions when looking at overall acceptance rates, but this was eliminated or reversed when investigated by department. The confounding variable of department picked up on a gender difference in applying to more competitive departments.
Famous easy to understand examples of a confounding variable invalidating a study You might want to introduce Simpson's Paradox. The first example on that page is the UC Berkeley gender bias case where it was thought that there was gender bias (towards males) in admissions when lookin
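The Berkeley case is shipped with R as the built-in UCBAdmissions dataset, so the aggregation effect is easy to check directly; this is a quick sketch using that 1973 admissions table.

```r
data(UCBAdmissions)                                # 2 x 2 x 6 table: Admit x Gender x Dept

# Aggregated over departments: men appear to be admitted more often
overall <- apply(UCBAdmissions, c(1, 2), sum)      # Admit x Gender counts
prop.table(overall, margin = 2)["Admitted", ]      # admission rate by gender

# Within each department the gap mostly disappears or reverses
round(prop.table(UCBAdmissions, margin = c(2, 3))["Admitted", , ], 2)
```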
4,937
Famous easy to understand examples of a confounding variable invalidating a study
Power Lines and Cancer After an initial study finding a link between living next to high-voltage transmission lines and cancer, follow-up studies found that when you include income in the model the effect of the power lines goes away. Living next to power lines is a moderately accurate predictor of low household income / wealth. Put bluntly, there aren't as many fancy mansions next to transmission lines as elsewhere. There is correlation between poverty and cancer. When comparisons were made between households on similar income brackets close to and far away from transmission lines, the effect of transmission lines disappeared. In this case, the confounding variables were household wealth and distance to the nearest high voltage line. Background reading.
Famous easy to understand examples of a confounding variable invalidating a study
Power Lines and Cancer After an initial study finding a link between living next to high-voltage transmission lines and cancer, follow-up studies found that when you include income in the model the ef
Famous easy to understand examples of a confounding variable invalidating a study Power Lines and Cancer After an initial study finding a link between living next to high-voltage transmission lines and cancer, follow-up studies found that when you include income in the model the effect of the power lines goes away. Living next to power lines is a moderately accurate predictor of low household income / wealth. Put bluntly, there aren't as many fancy mansions next to transmission lines as elsewhere. There is correlation between poverty and cancer. When comparisons were made between households on similar income brackets close to and far away from transmission lines, the effect of transmission lines disappeared. In this case, the confounding variables were household wealth and distance to the nearest high voltage line. Background reading.
Famous easy to understand examples of a confounding variable invalidating a study Power Lines and Cancer After an initial study finding a link between living next to high-voltage transmission lines and cancer, follow-up studies found that when you include income in the model the ef
4,938
Famous easy to understand examples of a confounding variable invalidating a study
Consider the following examples. I am not sure they are necessarily very famous but they help to demonstrate the potential negative effects of confounding variables. Say one is studying the relation between birth order (1st child, 2nd child, etc.) and the presence of Down Syndrome in the child. In this scenario, maternal age would be a confounding variable: Higher maternal age is directly associated with Down Syndrome in the child Higher maternal age is directly associated with Down Syndrome, regardless of birth order (a mother having her 1st vs 3rd child at age 50 confers the same risk) Maternal age is directly associated with birth order (the 2nd child, except in the case of twins, is born when the mother is older than she was for the birth of the 1st child) Maternal age is not a consequence of birth order (having a 2nd child does not change the mother's age) More examples In risk assessments, factors such as age, gender, and educational levels often affect health status and so should be controlled. Beyond these factors, researchers may not consider or have access to data on other causal factors. An example is the study of smoking tobacco on human health. Smoking, drinking alcohol, and diet are lifestyle activities that are related. A risk assessment that looks at the effects of smoking but does not control for alcohol consumption or diet may overestimate the risk of smoking (Tjønneland, Grønbaek, Stripp, & Overvad, 1999). Smoking and confounding are reviewed in occupational risk assessments such as the safety of coal mining (Axelson, 1989). When there is not a large sample population of non-smokers or non-drinkers in a particular occupation, the risk assessment may be biased towards finding a negative effect on health. References: https://en.wikipedia.org/wiki/Confounding Tjønneland, A., Grønbaek, M., Stripp, C., & Overvad, K. (1999). Wine intake and diet in a random sample of 48763 Danish men and women. The American Journal of Clinical Nutrition, 69(1), 49-54. Axelson, O. (1989). Confounding from smoking in occupational epidemiology. British Journal of Industrial Medicine, 46(8), 505-507.
Famous easy to understand examples of a confounding variable invalidating a study
Consider the following examples. I am not sure they are necessarily very famous but they help to demonstrate the potential negative effects of confounding variables. Say one is studying the relation
Famous easy to understand examples of a confounding variable invalidating a study Consider the following examples. I am not sure they are necessarily very famous but they help to demonstrate the potential negative effects of confounding variables. Say one is studying the relation between birth order (1st child, 2nd child, etc.) and the presence of Down Syndrome in the child. In this scenario, maternal age would be a confounding variable: Higher maternal age is directly associated with Down Syndrome in the child Higher maternal age is directly associated with Down Syndrome, regardless of birth order (a mother having her 1st vs 3rd child at age 50 confers the same risk) Maternal age is directly associated with birth order (the 2nd child, except in the case of twins, is born when the mother is older than she was for the birth of the 1st child) Maternal age is not a consequence of birth order (having a 2nd child does not change the mother's age) More examples In risk assessments, factors such as age, gender, and educational levels often affect health status and so should be controlled. Beyond these factors, researchers may not consider or have access to data on other causal factors. An example is the study of smoking tobacco on human health. Smoking, drinking alcohol, and diet are lifestyle activities that are related. A risk assessment that looks at the effects of smoking but does not control for alcohol consumption or diet may overestimate the risk of smoking (Tjønneland, Grønbaek, Stripp, & Overvad, 1999). Smoking and confounding are reviewed in occupational risk assessments such as the safety of coal mining (Axelson, 1989). When there is not a large sample population of non-smokers or non-drinkers in a particular occupation, the risk assessment may be biased towards finding a negative effect on health. References: https://en.wikipedia.org/wiki/Confounding Tjønneland, A., Grønbaek, M., Stripp, C., & Overvad, K. (1999). Wine intake and diet in a random sample of 48763 Danish men and women. The American Journal of Clinical Nutrition, 69(1), 49-54. Axelson, O. (1989). Confounding from smoking in occupational epidemiology. British Journal of Industrial Medicine, 46(8), 505-507.
Famous easy to understand examples of a confounding variable invalidating a study Consider the following examples. I am not sure they are necessarily very famous but they help to demonstrate the potential negative effects of confounding variables. Say one is studying the relation
4,939
Famous easy to understand examples of a confounding variable invalidating a study
There was one about diet that looked at diet in different countries and concluded that meat caused all sorts of problems (e.g. heart disease), but failed to account for the average lifespan in each country: The countries that ate very little meat also had lower life expectancies and the problems that meat "caused" were ones that were linked to age. I don't have citations for this - I read about it about 25 years ago - but maybe someone will remember or maybe you can find it.
Famous easy to understand examples of a confounding variable invalidating a study
There was one about diet that looked at diet in different countries and concluded that meat caused all sorts of problems (e.g. heart disease), but failed to account for the average lifespan in each co
Famous easy to understand examples of a confounding variable invalidating a study There was one about diet that looked at diet in different countries and concluded that meat caused all sorts of problems (e.g. heart disease), but failed to account for the average lifespan in each country: The countries that ate very little meat also had lower life expectancies and the problems that meat "caused" were ones that were linked to age. I don't have citations for this - I read about it about 25 years ago - but maybe someone will remember or maybe you can find it.
Famous easy to understand examples of a confounding variable invalidating a study There was one about diet that looked at diet in different countries and concluded that meat caused all sorts of problems (e.g. heart disease), but failed to account for the average lifespan in each co
4,940
Famous easy to understand examples of a confounding variable invalidating a study
I'm not sure it entirely counts as a confounding variable so much as confounding situations, but animals' abilities to find their way through a maze may qualify. As described in this ScienceDirect summary, studies of rats (or other animals) in mazes were popular for a large part of the 20th century, and continue today to some extent. One possible purpose is to study the subject's ability to remember a maze which it has previously run; another popular purpose is to study any bias in the subject's choices of whether to turn left or right at junctions, in a maze which the subject has not previously run. It should be immediately clear that if the subject has forgotten the maze, then any inherent bias in choice of route will be a confounding factor. If the "right" direction coincides with the subject's bias, then they could find their way in spite of not remembering the route. In addition to this, studies found various other confounding features exist which might not have been considered. The height of walls and width of passages are factors, for example. And if another subject has previously navigated the maze, subjects which rely strongly on their sense of smell (mice and dogs, for instance) may find their way simply by tracking the previous subject's scent. Even the construction of the maze may be an issue - animals tend to be less happy to run over "hollow-sounding" floors. Many animal maze studies ended up finding confounding factors instead of the intended study results. More disturbingly, according to Richard Feynmann, the studies reporting these confounding factors were not picked up by researchers at the time. As a result we simply don't know if any animal maze studies carried out around this time have any validity whatsoever. That's decades worth of high-end research at the finest universities around the world, by the finest psychologists and animal behaviourists, and every last shred of work had to at best be taken with a very large spoon of salt. Later researchers had to go back and duplicate all this work, to find out what was actually valid and what wasn't repeatable.
Famous easy to understand examples of a confounding variable invalidating a study
I'm not sure it entirely counts as a confounding variable so much as confounding situations, but animals' abilities to find their way through a maze may qualify. As described in this ScienceDirect sum
Famous easy to understand examples of a confounding variable invalidating a study I'm not sure it entirely counts as a confounding variable so much as confounding situations, but animals' abilities to find their way through a maze may qualify. As described in this ScienceDirect summary, studies of rats (or other animals) in mazes were popular for a large part of the 20th century, and continue today to some extent. One possible purpose is to study the subject's ability to remember a maze which it has previously run; another popular purpose is to study any bias in the subject's choices of whether to turn left or right at junctions, in a maze which the subject has not previously run. It should be immediately clear that if the subject has forgotten the maze, then any inherent bias in choice of route will be a confounding factor. If the "right" direction coincides with the subject's bias, then they could find their way in spite of not remembering the route. In addition to this, studies found various other confounding features exist which might not have been considered. The height of walls and width of passages are factors, for example. And if another subject has previously navigated the maze, subjects which rely strongly on their sense of smell (mice and dogs, for instance) may find their way simply by tracking the previous subject's scent. Even the construction of the maze may be an issue - animals tend to be less happy to run over "hollow-sounding" floors. Many animal maze studies ended up finding confounding factors instead of the intended study results. More disturbingly, according to Richard Feynmann, the studies reporting these confounding factors were not picked up by researchers at the time. As a result we simply don't know if any animal maze studies carried out around this time have any validity whatsoever. That's decades worth of high-end research at the finest universities around the world, by the finest psychologists and animal behaviourists, and every last shred of work had to at best be taken with a very large spoon of salt. Later researchers had to go back and duplicate all this work, to find out what was actually valid and what wasn't repeatable.
Famous easy to understand examples of a confounding variable invalidating a study I'm not sure it entirely counts as a confounding variable so much as confounding situations, but animals' abilities to find their way through a maze may qualify. As described in this ScienceDirect sum
4,941
Famous easy to understand examples of a confounding variable invalidating a study
There was a great study of mobile phone use and brain cancer. Most people with a lateral brain cancer, when asked which hand they hold their phone in, answer the diseased side. This seemed to show that phone use caused cancer. However, maybe the answers are informed by hindsight. Someone thought of a great test for this. The sample was big enough to include some people with two cancers. So you could ask, does the declared side of phone use influence the risk of a cancer on the other side of the brain? It was actually protective, thus showing the hindsight bias in the original result. Sorry, I don't have the reference.
Famous easy to understand examples of a confounding variable invalidating a study
There was a great study of mobile phone use and brain cancer. Most people with a lateral brain cancer, when asked which hand they hold their phone in, answer the diseased side. This seemed to show tha
Famous easy to understand examples of a confounding variable invalidating a study There was a great study of mobile phone use and brain cancer. Most people with a lateral brain cancer, when asked which hand they hold their phone in, answer the diseased side. This seemed to show that phone use caused cancer. However, maybe the answers are informed by hindsight. Someone thought of a great test for this. The sample was big enough to include some people with two cancers. So you could ask, does the declared side of phone use influence the risk of a cancer on the other side of the brain? It was actually protective, thus showing the hindsight bias in the original result. Sorry, I don't have the reference.
Famous easy to understand examples of a confounding variable invalidating a study There was a great study of mobile phone use and brain cancer. Most people with a lateral brain cancer, when asked which hand they hold their phone in, answer the diseased side. This seemed to show tha
4,942
Famous easy to understand examples of a confounding variable invalidating a study
'Statistics' by Freedman, Purves et al. has a number of examples in the first couple of chapters. My personal favorite is that ice cream causes polio. The confounding variable is that they are both prevalent in the summertime when young children are out, about, and spreading polio. The book is "Statistics (Fourth Edition)" by David Freedman, Robert Pisani, and Roger Purves.
Famous easy to understand examples of a confounding variable invalidating a study
'Statistics' by Freedman, Purves et al. has a number of examples in the first couple of chapters. My personal favorite is that ice cream causes polio. The confounding variable is that they are both
Famous easy to understand examples of a confounding variable invalidating a study 'Statistics' by Freedman, Purves et al. has a number of examples in the first couple of chapters. My personal favorite is that ice cream causes polio. The confounding variable is that they are both prevalent in the summertime when young children are out, about, and spreading polio. The book is "Statistics (Fourth Edition)" by David Freedman, Robert Pisani, and Roger Purves.
Famous easy to understand examples of a confounding variable invalidating a study 'Statistics' by Freedman, Purves et al. has a number of examples in the first couple of chapters. My personal favorite is that ice cream causes polio. The confounding variable is that they are both
4,943
Famous easy to understand examples of a confounding variable invalidating a study
See: Subversive Subjects: Rule-Breaking and Deception in Clinical Trials https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4520402/
Famous easy to understand examples of a confounding variable invalidating a study
See: Subversive Subjects: Rule-Breaking and Deception in Clinical Trials https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4520402/
Famous easy to understand examples of a confounding variable invalidating a study See: Subversive Subjects: Rule-Breaking and Deception in Clinical Trials https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4520402/
Famous easy to understand examples of a confounding variable invalidating a study See: Subversive Subjects: Rule-Breaking and Deception in Clinical Trials https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4520402/
4,944
Famous easy to understand examples of a confounding variable invalidating a study
Hormone replacement Therapy and heart disease? https://www.teachepi.org/wp-content/uploads/OldTE/documents/courses/bfiles/The%20B%20Files_File1_HRT_Final_Complete.pdf The benefits were determined by observation, and essentially it appears that the people who chose to do hrt had higher socioeconomic status, healthier lifestyle etc (So one could argue on confounding Vs observational study)
Famous easy to understand examples of a confounding variable invalidating a study
Hormone replacement Therapy and heart disease? https://www.teachepi.org/wp-content/uploads/OldTE/documents/courses/bfiles/The%20B%20Files_File1_HRT_Final_Complete.pdf The benefits were determined by o
Famous easy to understand examples of a confounding variable invalidating a study Hormone replacement Therapy and heart disease? https://www.teachepi.org/wp-content/uploads/OldTE/documents/courses/bfiles/The%20B%20Files_File1_HRT_Final_Complete.pdf The benefits were determined by observation, and essentially it appears that the people who chose to do hrt had higher socioeconomic status, healthier lifestyle etc (So one could argue on confounding Vs observational study)
Famous easy to understand examples of a confounding variable invalidating a study Hormone replacement Therapy and heart disease? https://www.teachepi.org/wp-content/uploads/OldTE/documents/courses/bfiles/The%20B%20Files_File1_HRT_Final_Complete.pdf The benefits were determined by o
4,945
Famous easy to understand examples of a confounding variable invalidating a study
There are lots of good examples in Howard Wainer's books. In particular, Chapter 1, "The most dangerous equation", in "How to understand, communicate and control uncertainty through graphical display". Examples include: The small schools movement. People noticed that some small schools had better performance than large schools so spent money to reduce school size. It turned out that some small schools also had worse performance than large schools. It was largely an artefact of extreme outcomes showing up in small samples. Kidney cancer rates (This example is also used in Daniel Kahneman's "Thinking Fast and Slow", see the start of Chapter 10). Lowest kidney cancer rates in rural, sparsely populated counties. These low rates have to be because of the clean-living rural lifestyle. But wait, counties with the highest incidence of kidney cancer are also rural and sparsely populated. This has to be because of the lack of access to good medical care and too much drinking. Of course, the extremes are actually an artefact of the small populations.
Famous easy to understand examples of a confounding variable invalidating a study
There are lots of good examples in Howard Wainer's books. In particular, Chapter 1, "The most dangerous equation", in "How to understand, communicate and control uncertainty through graphical display".
Famous easy to understand examples of a confounding variable invalidating a study There are lots of good examples in Howard Wainer's books. In particular, Chapter 1, "The most dangerous equation", in "How to understand, communicate and control uncertainty through graphical display". Examples include: The small schools movement. People noticed that some small schools had better performance than large schools so spent money to reduce school size. It turned out that some small schools also had worse performance than large schools. It was largely an artefact of extreme outcomes showing up in small samples. Kidney cancer rates (This example is also used in Daniel Kahneman's "Thinking Fast and Slow", see the start of Chapter 10). Lowest kidney cancer rates in rural, sparsely populated counties. These low rates have to be because of the clean-living rural lifestyle. But wait, counties with the highest incidence of kidney cancer are also rural and sparsely populated. This has to be because of the lack of access to good medical care and too much drinking. Of course, the extremes are actually an artefact of the small populations.
Famous easy to understand examples of a confounding variable invalidating a study There are lots of good examples in Howard Wainer's books. In particular, Chapter 1, "The most dangerous equation", in "How to understand, communicate and control uncertainty through graphical display".
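Wainer's 'most dangerous equation' is simply the standard error of a mean, $\sigma/\sqrt{n}$: small units vary more, so they dominate both tails. A small sketch (hypothetical county sizes and a single common true rate, chosen only for illustration) shows that both the highest and the lowest observed rates come mostly from the smallest counties.

```r
set.seed(7)
true_rate <- 1e-4                                # same underlying rate everywhere
pop <- round(10^runif(3000, 3, 6))               # county populations from 1e3 to 1e6
cases <- rbinom(length(pop), size = pop, prob = true_rate)
obs_rate <- cases / pop

ord <- order(obs_rate)
summary(pop[head(ord, 50)])   # counties with the lowest observed rates: small
summary(pop[tail(ord, 50)])   # counties with the highest observed rates: also small
summary(pop)                  # all counties, for comparison
```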
4,946
Why is polynomial regression considered a special case of multiple linear regression?
When you fit a regression model such as $\hat y_i = \hat\beta_0 + \hat\beta_1x_i + \hat\beta_2x^2_i$, the model and the OLS estimator doesn't 'know' that $x^2_i$ is simply the square of $x_i$, it just 'thinks' it's another variable. Of course there is some collinearity, and that gets incorporated into the fit (e.g., the standard errors are larger than they might otherwise be), but lots of pairs of variables can be somewhat collinear without one of them being a function of the other. We don't recognize that there are really two separate variables in the model, because we know that $x^2_i$ is ultimately the same variable as $x_i$ that we transformed and included in order to capture a curvilinear relationship between $x_i$ and $y_i$. That knowledge of the true nature of $x^2_i$, coupled with our belief that there is a curvilinear relationship between $x_i$ and $y_i$ is what makes it difficult for us to understand the way that it is still linear from the model's perspective. In addition, we visualize $x_i$ and $x^2_i$ together by looking at the marginal projection of the 3D function onto the 2D $x, y$ plane. If you only have $x_i$ and $x^2_i$, you can try to visualize them in the full 3D space (although it is still rather hard to really see what is going on). If you did look at the fitted function in the full 3D space, you would see that the fitted function is a 2D plane, and moreover that it is a flat plane. As I say, it is hard to see well because the $x_i, x^2_i$ data exist only along a curved line going through that 3D space (that fact is the visual manifestation of their collinearity). We can try to do that here. Imagine this is the fitted model: x = seq(from=0, to=10, by=.5) x2 = x**2 y = 3 + x - .05*x2 d.mat = data.frame(X1=x, X2=x2, Y=y) # 2D plot plot(x, y, pch=1, ylim=c(0,11), col="red", main="Marginal projection onto the 2D X,Y plane") lines(x, y, col="lightblue") # 3D plot library(scatterplot3d) s = scatterplot3d(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, color="gray", pch=1, xlab="X1", ylab="X2", zlab="Y", xlim=c(0, 11), ylim=c(0,101), zlim=c(0, 11), type="h", main="In pseudo-3D space") s$points(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, col="red", pch=1) s$plane3d(Intercept=3, x.coef=1, y.coef=-.05, col="lightblue") It may be easier to see in these images, which are screenshots of a rotated 3D figure made with the same data using the rgl package. When we say that a model that is "linear in the parameters" really is linear, this isn't just some mathematical sophistry. With $p$ variables, you are fitting a $p$-dimensional hyperplane in a $p\!+\!1$-dimensional hyperspace (in our example a 2D plane in a 3D space). That hyperplane really is 'flat' / 'linear'; it isn't just a metaphor.
Why is polynomial regression considered a special case of multiple linear regression?
When you fit a regression model such as $\hat y_i = \hat\beta_0 + \hat\beta_1x_i + \hat\beta_2x^2_i$, the model and the OLS estimator doesn't 'know' that $x^2_i$ is simply the square of $x_i$, it just
Why is polynomial regression considered a special case of multiple linear regression? When you fit a regression model such as $\hat y_i = \hat\beta_0 + \hat\beta_1x_i + \hat\beta_2x^2_i$, the model and the OLS estimator doesn't 'know' that $x^2_i$ is simply the square of $x_i$, it just 'thinks' it's another variable. Of course there is some collinearity, and that gets incorporated into the fit (e.g., the standard errors are larger than they might otherwise be), but lots of pairs of variables can be somewhat collinear without one of them being a function of the other. We don't recognize that there are really two separate variables in the model, because we know that $x^2_i$ is ultimately the same variable as $x_i$ that we transformed and included in order to capture a curvilinear relationship between $x_i$ and $y_i$. That knowledge of the true nature of $x^2_i$, coupled with our belief that there is a curvilinear relationship between $x_i$ and $y_i$ is what makes it difficult for us to understand the way that it is still linear from the model's perspective. In addition, we visualize $x_i$ and $x^2_i$ together by looking at the marginal projection of the 3D function onto the 2D $x, y$ plane. If you only have $x_i$ and $x^2_i$, you can try to visualize them in the full 3D space (although it is still rather hard to really see what is going on). If you did look at the fitted function in the full 3D space, you would see that the fitted function is a 2D plane, and moreover that it is a flat plane. As I say, it is hard to see well because the $x_i, x^2_i$ data exist only along a curved line going through that 3D space (that fact is the visual manifestation of their collinearity). We can try to do that here. Imagine this is the fitted model: x = seq(from=0, to=10, by=.5) x2 = x**2 y = 3 + x - .05*x2 d.mat = data.frame(X1=x, X2=x2, Y=y) # 2D plot plot(x, y, pch=1, ylim=c(0,11), col="red", main="Marginal projection onto the 2D X,Y plane") lines(x, y, col="lightblue") # 3D plot library(scatterplot3d) s = scatterplot3d(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, color="gray", pch=1, xlab="X1", ylab="X2", zlab="Y", xlim=c(0, 11), ylim=c(0,101), zlim=c(0, 11), type="h", main="In pseudo-3D space") s$points(x=d.mat$X1, y=d.mat$X2, z=d.mat$Y, col="red", pch=1) s$plane3d(Intercept=3, x.coef=1, y.coef=-.05, col="lightblue") It may be easier to see in these images, which are screenshots of a rotated 3D figure made with the same data using the rgl package. When we say that a model that is "linear in the parameters" really is linear, this isn't just some mathematical sophistry. With $p$ variables, you are fitting a $p$-dimensional hyperplane in a $p\!+\!1$-dimensional hyperspace (in our example a 2D plane in a 3D space). That hyperplane really is 'flat' / 'linear'; it isn't just a metaphor.
Why is polynomial regression considered a special case of multiple linear regression? When you fit a regression model such as $\hat y_i = \hat\beta_0 + \hat\beta_1x_i + \hat\beta_2x^2_i$, the model and the OLS estimator doesn't 'know' that $x^2_i$ is simply the square of $x_i$, it just
4,947
Why is polynomial regression considered a special case of multiple linear regression?
So a general linear model is a function that is linear in the unknown parameters. A polynomial regression, for example $y = a + bx + cx^2$, is quadratic as a function of $x$ but linear in the coefficients $a$, $b$ and $c$. More generally, a general linear model can be expressed as $y = \sum_{i=0}^N a_i h_i(x)$, where the $h_i$ are arbitrary functions of vectorial inputs $x$; note that the $h_i$ can include any interaction terms (between components of $x$) and the like.
Why is polynomial regression considered a special case of multiple linear regression?
So a general linear model is a function that is linear in the unknown parameters. A polynomial regression, for example $y = a + bx + cx^2$, is quadratic as a function of $x$ but linear in the coefficient
Why is polynomial regression considered a special case of multiple linear regression? So a general linear model is a function that is linear in the unknown parameters. A polynomial regression, for example $y = a + bx + cx^2$, is quadratic as a function of $x$ but linear in the coefficients $a$, $b$ and $c$. More generally, a general linear model can be expressed as $y = \sum_{i=0}^N a_i h_i(x)$, where the $h_i$ are arbitrary functions of vectorial inputs $x$; note that the $h_i$ can include any interaction terms (between components of $x$) and the like.
Why is polynomial regression considered a special case of multiple linear regression? So a general linear model is a function that is linear in the unknown parameters. A polynomial regression, for example $y = a + bx + cx^2$, is quadratic as a function of $x$ but linear in the coefficient
4,948
Why is polynomial regression considered a special case of multiple linear regression?
Consider a model $$ y_i = b_0+b_1 x^{n_1}_i + \cdots+ b_px^{n_p}_i + \epsilon_i. $$ This can be rewritten $$ y = X b + \epsilon;\\ X= \begin{pmatrix} 1 & x_{1}^{n_1} & \cdots & x_{1}^{n_p} \\ 1 & x_{2}^{n_1} & \cdots & x_{2}^{n_p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n}^{n_1} & \cdots & x_{n}^{n_p} \\ \end{pmatrix}.$$
Why is polynomial regression considered a special case of multiple linear regression?
Consider a model $$ y_i = b_0+b_1 x^{n_1}_i + \cdots+ b_px^{n_p}_i + \epsilon_i. $$ This can be rewritten $$ y = X b + \epsilon;\\ X= \begin{pmatrix} 1 & x_{1}^{n_1} & \cdots & x_{1}^{n_p} \\ 1 &
Why is polynomial regression considered a special case of multiple linear regression? Consider a model $$ y_i = b_0+b_1 x^{n_1}_i + \cdots+ b_px^{n_p}_i + \epsilon_i. $$ This can be rewritten $$ y = X b + \epsilon;\\ X= \begin{pmatrix} 1 & x_{1}^{n_1} & \cdots & x_{1}^{n_p} \\ 1 & x_{2}^{n_1} & \cdots & x_{2}^{n_p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n}^{n_1} & \cdots & x_{n}^{n_p} \\ \end{pmatrix}.$$
Why is polynomial regression considered a special case of multiple linear regression? Consider a model $$ y_i = b_0+b_1 x^{n_1}_i + \cdots+ b_px^{n_p}_i + \epsilon_i. $$ This can be rewritten $$ y = X b + \epsilon;\\ X= \begin{pmatrix} 1 & x_{1}^{n_1} & \cdots & x_{1}^{n_p} \\ 1 &
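To see the 'linear in the parameters' point in code, one can construct the design matrix $X$ from the answer above explicitly and compare with lm(); this is a small sketch with simulated data and powers 1 and 2 (all numbers arbitrary).

```r
set.seed(3)
x <- runif(100, 0, 10)
y <- 3 + x - 0.05 * x^2 + rnorm(100, sd = 0.5)

X <- cbind(1, x, x^2)                      # columns 1, x^{n_1}, x^{n_2} with n_1 = 1, n_2 = 2
b_hat <- solve(t(X) %*% X, t(X) %*% y)     # ordinary least squares: linear in b

coef(lm(y ~ x + I(x^2)))                   # lm() just sees two 'different' regressors
t(b_hat)                                   # same estimates
```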
4,949
What kind of information is Fisher information?
Trying to complement the other answers... What kind of information is Fisher information? Start with the loglikelihood function $$ \ell (\theta) = \log f(x;\theta) $$ as a function of $\theta$ for $\theta \in \Theta$, the parameter space. Assuming some regularity conditions we do not discuss here, we have $\DeclareMathOperator{\E}{\mathbb{E}} \E \frac{\partial}{\partial \theta} \ell (\theta) = \E_\theta \dot{\ell}(\theta) = 0$ (we will write derivatives with respect to the parameter as dots, as here). The variance of the score is the Fisher information $$ I(\theta) = \E_\theta ( \dot{\ell}(\theta) )^2= -\E_\theta \ddot{\ell}(\theta), $$ the last formula showing that it is the (negative) curvature of the loglikelihood function. One often finds the maximum likelihood estimator (mle) of $\theta$ by solving the likelihood equation $\dot{\ell}(\theta)=0$. When the Fisher information, as the variance of the score $\dot{\ell}(\theta)$, is large, the solution to that equation will be very sensitive to the data, giving hope for high precision of the mle. That is confirmed at least asymptotically, the asymptotic variance of the mle being the inverse of the Fisher information. How can we interpret this? $\ell(\theta)$ is the likelihood information about the parameter $\theta$ from the sample. This can really only be interpreted in a relative sense, as when we use it to compare the plausibilities of two distinct possible parameter values via the likelihood ratio test $\ell(\theta_0) - \ell(\theta_1)$. The rate of change of the loglikelihood is the score function $\dot{\ell}(\theta)$; it tells us how fast the likelihood changes, and its variance $I(\theta)$ how much this varies from sample to sample, at a given parameter value, say $\theta_0$. The equation (which is really surprising!) $$ I(\theta) = - \E_\theta \ddot{\ell}(\theta) $$ tells us there is a relationship (equality) between the variability in the information (likelihood) for a given parameter value, $\theta_0$, and the curvature of the likelihood function for that parameter value. This is a surprising relationship between the variability (variance) of the statistic $\dot{\ell}(\theta) \mid_{\theta=\theta_0}$ and the expected change in likelihood when we vary the parameter $\theta$ in some interval around $\theta_0$ (for the same data). This is really both strange, surprising and powerful! So what is the likelihood function? We usually think of the statistical model $\{ f(x;\theta), \theta \in \Theta \} $ as a family of probability distributions for data $x$, indexed by the parameter $\theta$, some element in the parameter space $\Theta$. We think of this model as being true if there exists some value $\theta_0 \in \Theta$ such that the data $x$ actually have the probability distribution $f(x;\theta_0)$. So we get a statistical model by embedding the true data-generating probability distribution $f(x;\theta_0)$ in a family of probability distributions. But, it is clear that such an embedding can be done in many different ways, and each such embedding will be a "true" model, and they will give different likelihood functions. And, without such an embedding, there is no likelihood function. It seems that we really do need some help, some principles for how to choose an embedding wisely! So, what does this mean? It means that the choice of likelihood function tells us how we would expect the data to change, if the truth changed a little bit. 
But, this cannot really be verified by the data, as the data only gives information about the true model function $f(x;\theta_0)$ which actually generated the data, and nothing about all the other elements in the chosen model. This way we see that the choice of the likelihood function is similar to the choice of a prior in Bayesian analysis; it injects non-data information into the analysis. Let us look at this in a simple (somewhat artificial) example, and look at the effect of embedding $f(x;\theta_0)$ in a model in different ways. Let us assume that $X_1, \dotsc, X_n$ are iid as $N(\mu=10, \sigma^2=1)$. So, that is the true, data-generating distribution. Now, let us embed this in a model in two different ways, model A and model B. $$ A \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \sigma^2=1),\mu \in \mathbb{R} \\ B \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \mu/10), \mu>0 $$ You can check that the two models coincide for $\mu=10$. The loglikelihood functions become $$ \ell_A(\mu) = -\frac{n}{2} \log (2\pi) -\frac12\sum_i (x_i-\mu)^2 \\ \ell_B(\mu) = -\frac{n}{2} \log (2\pi) - \frac{n}{2}\log(\mu/10) - \frac{10}{2}\sum_i \frac{(x_i-\mu)^2}{\mu} $$ The score functions (loglikelihood derivatives) are $$ \dot{\ell}_A(\mu) = n (\bar{x}-\mu) \\ \dot{\ell}_B(\mu) = -\frac{n}{2\mu} + \frac{10}{2}\sum_i \left(\frac{x_i}{\mu}\right)^2 - 5 n $$ and the curvatures $$ \ddot{\ell}_A(\mu) = -n \\ \ddot{\ell}_B(\mu) = \frac{n}{2\mu^2} - \frac{10}{2}\sum_i \frac{2 x_i^2}{\mu^3} $$ so the Fisher information really does depend on the embedding. Now we calculate the Fisher information at the true value $\mu=10$; using $\E x_i^2 = \mu^2 + \mu/10 = 101$ there, $$ I_A(\mu=10) = n, \\ I_B(\mu=10) = n \cdot \left(\frac{101}{100}-\frac1{200}\right) = \frac{201}{200}\, n > n $$ so the Fisher information about the parameter is somewhat larger in model B. This illustrates that, in some sense, the Fisher information tells us how fast the information from the data about the parameter would have changed if the governing parameter changed in the way postulated by the embedding in a model family. The explanation of the higher information in model B is that our model family B postulates that if the expectation had increased, then the variance would have increased too. So that, under model B, the sample variance will also carry information about $\mu$, which it will not do under model A. Also, this example illustrates that we really do need some theory to help us construct model families.
What kind of information is Fisher information?
Trying to complement the other answers... What kind of information is Fisher information? Start with the loglikelihood function $$ \ell (\theta) = \log f(x;\theta) $$ as a function of $\theta$ for
What kind of information is Fisher information? Trying to complement the other answers... What kind of information is Fisher information? Start with the loglikelihood function $$ \ell (\theta) = \log f(x;\theta) $$ as a function of $\theta$ for $\theta \in \Theta$, the parameter space. Assuming some regularity conditions we do not discuss here, we have $\DeclareMathOperator{\E}{\mathbb{E}} \E \frac{\partial}{\partial \theta} \ell (\theta) = \E_\theta \dot{\ell}(\theta) = 0$ (we will write derivatives with respect to the parameter as dots, as here). The variance of the score is the Fisher information $$ I(\theta) = \E_\theta ( \dot{\ell}(\theta) )^2= -\E_\theta \ddot{\ell}(\theta), $$ the last formula showing that it is the (negative) curvature of the loglikelihood function. One often finds the maximum likelihood estimator (mle) of $\theta$ by solving the likelihood equation $\dot{\ell}(\theta)=0$. When the Fisher information, as the variance of the score $\dot{\ell}(\theta)$, is large, the solution to that equation will be very sensitive to the data, giving hope for high precision of the mle. That is confirmed at least asymptotically, the asymptotic variance of the mle being the inverse of the Fisher information. How can we interpret this? $\ell(\theta)$ is the likelihood information about the parameter $\theta$ from the sample. This can really only be interpreted in a relative sense, as when we use it to compare the plausibilities of two distinct possible parameter values via the likelihood ratio test $\ell(\theta_0) - \ell(\theta_1)$. The rate of change of the loglikelihood is the score function $\dot{\ell}(\theta)$; it tells us how fast the likelihood changes, and its variance $I(\theta)$ how much this varies from sample to sample, at a given parameter value, say $\theta_0$. The equation (which is really surprising!) $$ I(\theta) = - \E_\theta \ddot{\ell}(\theta) $$ tells us there is a relationship (equality) between the variability in the information (likelihood) for a given parameter value, $\theta_0$, and the curvature of the likelihood function for that parameter value. This is a surprising relationship between the variability (variance) of the statistic $\dot{\ell}(\theta) \mid_{\theta=\theta_0}$ and the expected change in likelihood when we vary the parameter $\theta$ in some interval around $\theta_0$ (for the same data). This is really both strange, surprising and powerful! So what is the likelihood function? We usually think of the statistical model $\{ f(x;\theta), \theta \in \Theta \} $ as a family of probability distributions for data $x$, indexed by the parameter $\theta$, some element in the parameter space $\Theta$. We think of this model as being true if there exists some value $\theta_0 \in \Theta$ such that the data $x$ actually have the probability distribution $f(x;\theta_0)$. So we get a statistical model by embedding the true data-generating probability distribution $f(x;\theta_0)$ in a family of probability distributions. But, it is clear that such an embedding can be done in many different ways, and each such embedding will be a "true" model, and they will give different likelihood functions. And, without such an embedding, there is no likelihood function. It seems that we really do need some help, some principles for how to choose an embedding wisely! So, what does this mean? It means that the choice of likelihood function tells us how we would expect the data to change, if the truth changed a little bit. 
But, this cannot really be verified by the data, as the data only gives information about the true model function $f(x;\theta_0)$ which actually generated the data, and nothing about all the other elements in the chosen model. This way we see that the choice of the likelihood function is similar to the choice of a prior in Bayesian analysis; it injects non-data information into the analysis. Let us look at this in a simple (somewhat artificial) example, and look at the effect of embedding $f(x;\theta_0)$ in a model in different ways. Let us assume that $X_1, \dotsc, X_n$ are iid as $N(\mu=10, \sigma^2=1)$. So, that is the true, data-generating distribution. Now, let us embed this in a model in two different ways, model A and model B. $$ A \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \sigma^2=1),\mu \in \mathbb{R} \\ B \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \mu/10), \mu>0 $$ You can check that the two models coincide for $\mu=10$. The loglikelihood functions become $$ \ell_A(\mu) = -\frac{n}{2} \log (2\pi) -\frac12\sum_i (x_i-\mu)^2 \\ \ell_B(\mu) = -\frac{n}{2} \log (2\pi) - \frac{n}{2}\log(\mu/10) - \frac{10}{2}\sum_i \frac{(x_i-\mu)^2}{\mu} $$ The score functions (loglikelihood derivatives) are $$ \dot{\ell}_A(\mu) = n (\bar{x}-\mu) \\ \dot{\ell}_B(\mu) = -\frac{n}{2\mu} + \frac{10}{2}\sum_i \left(\frac{x_i}{\mu}\right)^2 - 5 n $$ and the curvatures $$ \ddot{\ell}_A(\mu) = -n \\ \ddot{\ell}_B(\mu) = \frac{n}{2\mu^2} - \frac{10}{2}\sum_i \frac{2 x_i^2}{\mu^3} $$ so the Fisher information really does depend on the embedding. Now we calculate the Fisher information at the true value $\mu=10$; using $\E x_i^2 = \mu^2 + \mu/10 = 101$ there, $$ I_A(\mu=10) = n, \\ I_B(\mu=10) = n \cdot \left(\frac{101}{100}-\frac1{200}\right) = \frac{201}{200}\, n > n $$ so the Fisher information about the parameter is somewhat larger in model B. This illustrates that, in some sense, the Fisher information tells us how fast the information from the data about the parameter would have changed if the governing parameter changed in the way postulated by the embedding in a model family. The explanation of the higher information in model B is that our model family B postulates that if the expectation had increased, then the variance would have increased too. So that, under model B, the sample variance will also carry information about $\mu$, which it will not do under model A. Also, this example illustrates that we really do need some theory to help us construct model families.
What kind of information is Fisher information? Trying to complement the other answers... What kind of information is Fisher information? Start with the loglikelihood function $$ \ell (\theta) = \log f(x;\theta) $$ as a function of $\theta$ for
4,950
What kind of information is Fisher information?
Let's think in terms of the negative log-likelihood function $\ell$. The negative score is its gradient with respect to the parameter value. At the true parameter, the score is zero. Otherwise, it gives the direction towards the minimum of $\ell$ (or, in the case of non-convex $\ell$, a saddle point or a local minimum or maximum). The Fisher information measures the curvature of $\ell$ around $\theta$ if the data follows $\theta$. In other words, it tells you how much wiggling the parameter would affect your log-likelihood. Consider that you had a big model with millions of parameters, and you had a small thumb drive on which to store your model. How should you prioritize how many bits of each parameter to store? The right answer is to allocate bits according to the Fisher information (Rissanen wrote about this). If the Fisher information of a parameter is zero, that parameter doesn't matter. We call it "information" because the Fisher information measures how much this parameter tells us about the data. A colloquial way to think about it is this: Suppose the parameters are driving a car, and the data is in the back seat correcting the driver. The annoyingness of the data is the Fisher information. If the data lets the driver drive, the Fisher information is zero; if the data is constantly making corrections, it's big. In this sense, the Fisher information is the amount of information going from the data to the parameters. Consider what happens if you make the steering wheel more sensitive. This is equivalent to a reparametrization. In that case, the data doesn't want to be so loud for fear of the car oversteering. This kind of reparametrization decreases the Fisher information.
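To see concretely how a reparametrization changes the Fisher information (a toy illustration, not from the original answer), note that under a smooth reparametrization the information transforms by the squared Jacobian, $I(\theta) = I(\mu)\,(d\mu/d\theta)^2$. A minimal R sketch with a Gaussian mean, where the scale factor 10 is an arbitrary choice:

set.seed(42)
x <- rnorm(50, mean = 3, sd = 1)
negll_mu    <- function(mu)    -sum(dnorm(x, mean = mu,       sd = 1, log = TRUE))
negll_theta <- function(theta) -sum(dnorm(x, mean = 10*theta, sd = 1, log = TRUE))  # mu = 10*theta
d2 <- function(f, p, h = 1e-4) (f(p + h) - 2*f(p) + f(p - h)) / h^2   # numerical curvature at p
d2(negll_mu,    mean(x))      # observed information for mu:    about n = 50
d2(negll_theta, mean(x)/10)   # observed information for theta: about 100 * n = 5000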
What kind of information is Fisher information?
Let's think in terms of the negative log-likelihood function $\ell$. The negative score is its gradient with respect to the parameter value. At the true parameter, the score is zero. Otherwise, it
What kind of information is Fisher information? Let's think in terms of the negative log-likelihood function $\ell$. The negative score is its gradient with respect to the parameter value. At the true parameter, the score is zero. Otherwise, it gives the direction towards the minimum of $\ell$ (or, in the case of non-convex $\ell$, a saddle point or a local minimum or maximum). The Fisher information measures the curvature of $\ell$ around $\theta$ if the data follows $\theta$. In other words, it tells you how much wiggling the parameter would affect your log-likelihood. Consider that you had a big model with millions of parameters, and you had a small thumb drive on which to store your model. How should you prioritize how many bits of each parameter to store? The right answer is to allocate bits according to the Fisher information (Rissanen wrote about this). If the Fisher information of a parameter is zero, that parameter doesn't matter. We call it "information" because the Fisher information measures how much this parameter tells us about the data. A colloquial way to think about it is this: Suppose the parameters are driving a car, and the data is in the back seat correcting the driver. The annoyingness of the data is the Fisher information. If the data lets the driver drive, the Fisher information is zero; if the data is constantly making corrections, it's big. In this sense, the Fisher information is the amount of information going from the data to the parameters. Consider what happens if you make the steering wheel more sensitive. This is equivalent to a reparametrization. In that case, the data doesn't want to be so loud for fear of the car oversteering. This kind of reparametrization decreases the Fisher information.
What kind of information is Fisher information? Let's think in terms of the negative log-likelihood function $\ell$. The negative score is its gradient with respect to the parameter value. At the true parameter, the score is zero. Otherwise, it
4,951
What kind of information is Fisher information?
Complementary to @NeilG's nice answer (+1) and to address your specific questions: I would say it counts the "precision" rather than the "error" itself. Remember that the negative of the Hessian of the log-likelihood evaluated at the ML estimates is the observed Fisher information. The estimated standard errors are the square roots of the diagonal elements of the inverse of the observed Fisher information matrix. Stemming from this, the Fisher information is the trace of the Fisher information matrix. Given that the Fisher information matrix $I$ is a Hermitian positive-semidefinite matrix, its diagonal entries $I_{j,j}$ are real and non-negative; as a direct consequence its trace $tr(I)$ must be positive. This means that you can have only "non-ideal" estimators according to your assertion. So no, a positive Fisher information is not related to how ideal your MLE is. The definition differs in the way we interpret the notion of information in both cases. Having said that, the two measurements are closely related. The inverse of Fisher information is the minimum variance of an unbiased estimator (Cramér–Rao bound). In that sense the information matrix indicates how much information about the estimated coefficients is contained in the data. In contrast, the Shannon entropy was taken from thermodynamics. It relates the information content of a particular value of a variable as $-p \log_2(p)$ where $p$ is the probability of the variable taking on the value. Both are measurements of how "informative" a variable is. In the first case though you judge this information in terms of precision while in the second case in terms of disorder; different sides, same coin! :D To recap: The inverse of the Fisher information matrix $I$ evaluated at the ML estimator values is the asymptotic or approximate covariance matrix. As these ML estimator values are found at a local optimum, graphically the Fisher information shows how deep that optimum is and how much wiggle room you have around it. I found this paper by Lutwak et al. on Extensions of Fisher information and Stam’s inequality an informative read on this matter. The Wikipedia articles on the Fisher Information Metric and on Jensen–Shannon divergence are also good to get you started.
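To make the link between the information matrix and the reported standard errors concrete, here is a small sketch on simulated data (illustrative only, not the asker's data): in R, vcov() on a fitted glm returns the estimated covariance matrix, i.e. the inverse of the observed Fisher information (up to the dispersion, which is 1 for a binomial family), and the square roots of its diagonal reproduce the Std. Error column of summary().

set.seed(7)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + x))
fit <- glm(y ~ x, family = binomial)
sqrt(diag(vcov(fit)))                         # SEs from the inverse (observed) information
summary(fit)$coefficients[, "Std. Error"]     # the same numbers reported by summary()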
What kind of information is Fisher information?
Complementary to @NeilG's nice answer (+1) and to address your specific questions: I would say it counts the "precision" rather than the "error" itself. Remember that the Hessian of the log-like
What kind of information is Fisher information? Complementary to @NeilG's nice answer (+1) and to address your specific questions: I would say it counts the "precision" rather than the "error" itself. Remember that the negative of the Hessian of the log-likelihood evaluated at the ML estimates is the observed Fisher information. The estimated standard errors are the square roots of the diagonal elements of the inverse of the observed Fisher information matrix. Stemming from this, the Fisher information is the trace of the Fisher information matrix. Given that the Fisher information matrix $I$ is a Hermitian positive-semidefinite matrix, its diagonal entries $I_{j,j}$ are real and non-negative; as a direct consequence its trace $tr(I)$ must be positive. This means that you can have only "non-ideal" estimators according to your assertion. So no, a positive Fisher information is not related to how ideal your MLE is. The definition differs in the way we interpret the notion of information in both cases. Having said that, the two measurements are closely related. The inverse of Fisher information is the minimum variance of an unbiased estimator (Cramér–Rao bound). In that sense the information matrix indicates how much information about the estimated coefficients is contained in the data. In contrast, the Shannon entropy was taken from thermodynamics. It relates the information content of a particular value of a variable as $-p \log_2(p)$ where $p$ is the probability of the variable taking on the value. Both are measurements of how "informative" a variable is. In the first case though you judge this information in terms of precision while in the second case in terms of disorder; different sides, same coin! :D To recap: The inverse of the Fisher information matrix $I$ evaluated at the ML estimator values is the asymptotic or approximate covariance matrix. As these ML estimator values are found at a local optimum, graphically the Fisher information shows how deep that optimum is and how much wiggle room you have around it. I found this paper by Lutwak et al. on Extensions of Fisher information and Stam’s inequality an informative read on this matter. The Wikipedia articles on the Fisher Information Metric and on Jensen–Shannon divergence are also good to get you started.
What kind of information is Fisher information? Complementary to @NeilG's nice answer (+1) and to address your specific questions: I would say it counts the "precision" rather than the "error" itself. Remember that the Hessian of the log-like
4,952
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R?
After having fetched the data from the accompanying website, here is how I would do it: chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd")) chdage$aged <- ifelse(chdage$age>=55, 1, 0) mod.lr <- glm(chd ~ aged, data=chdage, family=binomial) summary(mod.lr) The 95% CIs based on profile likelihood are obtained with require(MASS) exp(confint(mod.lr)) This often is the default if the MASS package is automatically loaded. In this case, I get 2.5 % 97.5 % (Intercept) 0.2566283 0.7013384 aged 3.0293727 24.7013080 Now, if I wanted to compare with 95% Wald CIs (based on asymptotic normality) like the one you computed by hand, I would use confint.default() instead; this yields 2.5 % 97.5 % (Intercept) 0.2616579 0.7111663 aged 2.8795652 22.8614705 Wald CIs are good in most situations, although profile likelihood-based may be useful with complex sampling strategies. If you want to grasp the idea of how they work, here is a brief overview of the main principles: Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology. You can also take a look at Venables and Ripley's MASS book, §8.4, pp. 220-221.
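If you want to see the Wald interval reproduced "by hand", here is a sketch; since the chdage.dat file is not reproduced here, it uses simulated data with the same structure (a binary aged indicator), so the numbers are only illustrative:

set.seed(123)
aged <- rbinom(100, 1, 0.4)
chd  <- rbinom(100, 1, plogis(-1 + 1.2*aged))
fit  <- glm(chd ~ aged, family = binomial)
est  <- coef(fit); se <- sqrt(diag(vcov(fit)))
exp(cbind(lower = est - qnorm(0.975)*se, upper = est + qnorm(0.975)*se))  # Wald, by hand
exp(confint.default(fit))      # Wald, as computed by R -- identical
require(MASS)                  # profiling method, as above
exp(confint(fit))              # profile likelihood -- close, but not identical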
Why is there a difference between manually calculating a logistic regression 95% confidence interval
After having fetched the data from the accompanying website, here is how I would do it: chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd")) chdage$aged <- ifelse(chdage$age>=55
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R? After having fetched the data from the accompanying website, here is how I would do it: chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd")) chdage$aged <- ifelse(chdage$age>=55, 1, 0) mod.lr <- glm(chd ~ aged, data=chdage, family=binomial) summary(mod.lr) The 95% CIs based on profile likelihood are obtained with require(MASS) exp(confint(mod.lr)) This often is the default if the MASS package is automatically loaded. In this case, I get 2.5 % 97.5 % (Intercept) 0.2566283 0.7013384 aged 3.0293727 24.7013080 Now, if I wanted to compare with 95% Wald CIs (based on asymptotic normality) like the one you computed by hand, I would use confint.default() instead; this yields 2.5 % 97.5 % (Intercept) 0.2616579 0.7111663 aged 2.8795652 22.8614705 Wald CIs are good in most situations, although profile likelihood-based may be useful with complex sampling strategies. If you want to grasp the idea of how they work, here is a brief overview of the main principles: Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology. You can also take a look at Venables and Ripley's MASS book, §8.4, pp. 220-221.
Why is there a difference between manually calculating a logistic regression 95% confidence interval After having fetched the data from the accompanying website, here is how I would do it: chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd")) chdage$aged <- ifelse(chdage$age>=55
4,953
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R?
Following up: profile confidence intervals are more reliable (choosing the appropriate cutoff for the likelihood does involve an asymptotic (large sample) assumption, but this is a much weaker assumption than the quadratic-likelihood-surface assumption underlying the Wald confidence intervals). As far as I know, there is no argument for the Wald statistics over the profile confidence intervals except that the Wald statistics are much quicker to compute and may be "good enough" in many circumstances (but sometimes way off: look up the Hauck-Donner effect).
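To illustrate the Hauck-Donner effect mentioned above, here is a deliberately extreme, artificial sketch (expect a separation warning from glm): the Wald test built from coef/SE becomes useless, while the likelihood ratio test, the same machinery that underlies profile intervals, still detects the effect clearly.

x <- rep(0:1, each = 10)
y <- c(rep(0, 10), rep(1, 9), 0)   # group 0: 0/10 successes, group 1: 9/10 successes
fit <- glm(y ~ x, family = binomial)
summary(fit)$coefficients          # huge estimate, enormous Wald SE, unhelpful Wald p-value
anova(fit, test = "Chisq")         # likelihood ratio test: clearly significant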
Why is there a difference between manually calculating a logistic regression 95% confidence interval
Following up: profile confidence intervals are more reliable (choosing the appropriate cutoff for the likelihood does involve an asymptotic (large sample) assumption, but this is a much weaker assumpt
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R? Following up: profile confidence intervals are more reliable (choosing the appropriate cutoff for the likelihood does involve an asymptotic (large sample) assumption, but this is a much weaker assumption than the quadratic-likelihood-surface assumption underlying the Wald confidence intervals). As far as I know, there is no argument for the Wald statistics over the profile confidence intervals except that the Wald statistics are much quicker to compute and may be "good enough" in many circumstances (but sometimes way off: look up the Hauck-Donner effect).
Why is there a difference between manually calculating a logistic regression 95% confidence interval Following up: profile confidence intervals are more reliable (choosing the appropriate cutoff for the likelihood does involve an asymptotic (large sample) assumption, but this is a much weaker assumpt
4,954
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R?
I believe if you look into the help file for confint() you will find that the confidence interval being constructed is a "profile" interval instead of a Wald confidence interval (your formula from HL).
Why is there a difference between manually calculating a logistic regression 95% confidence interval
I believe if you look into the help file for confint() you will find that the confidence interval being constructed is a "profile" interval instead of a Wald confidence interval (your formula from HL)
Why is there a difference between manually calculating a logistic regression 95% confidence interval, and using the confint() function in R? I believe if you look into the help file for confint() you will find that the confidence interval being constructed is a "profile" interval instead of a Wald confidence interval (your formula from HL).
Why is there a difference between manually calculating a logistic regression 95% confidence interval I believe if you look into the help file for confint() you will find that the confidence interval being constructed is a "profile" interval instead of a Wald confidence interval (your formula from HL)
4,955
How to perform a test using R to see if data follows normal distribution
If I understand your question correctly, then to test if word occurrences in a set of documents follow a Normal distribution you can just use a Shapiro-Wilk test and some qqplots. For example, ## Generate two data sets ## First Normal, second from a t-distribution words1 = rnorm(100); words2 = rt(100, df=3) ## Have a look at the densities plot(density(words1));plot(density(words2)) ## Perform the test shapiro.test(words1); shapiro.test(words2) ## Plot using a qqplot qqnorm(words1);qqline(words1, col = 2) qqnorm(words2);qqline(words2, col = 2) The qqplot commands give the corresponding normal Q-Q plots. You can see that the second data set is clearly not Normal by the heavy tails (More Info). In the Shapiro-Wilk normality test, the p-value is large for the first data set (>.9) but very small for the second data set (<.01). This will lead you to reject the null hypothesis for the second.
How to perform a test using R to see if data follows normal distribution
If I understand your question correctly, then to test if word occurrences in a set of documents follows a Normal distribution you can just use a shapiro-Wilk test and some qqplots. For example, ## Gen
How to perform a test using R to see if data follows normal distribution If I understand your question correctly, then to test if word occurrences in a set of documents follow a Normal distribution you can just use a Shapiro-Wilk test and some qqplots. For example, ## Generate two data sets ## First Normal, second from a t-distribution words1 = rnorm(100); words2 = rt(100, df=3) ## Have a look at the densities plot(density(words1));plot(density(words2)) ## Perform the test shapiro.test(words1); shapiro.test(words2) ## Plot using a qqplot qqnorm(words1);qqline(words1, col = 2) qqnorm(words2);qqline(words2, col = 2) The qqplot commands give the corresponding normal Q-Q plots. You can see that the second data set is clearly not Normal by the heavy tails (More Info). In the Shapiro-Wilk normality test, the p-value is large for the first data set (>.9) but very small for the second data set (<.01). This will lead you to reject the null hypothesis for the second.
How to perform a test using R to see if data follows normal distribution If I understand your question correctly, then to test if word occurrences in a set of documents follows a Normal distribution you can just use a shapiro-Wilk test and some qqplots. For example, ## Gen
4,956
How to perform a test using R to see if data follows normal distribution
Assuming your dataset is called words and has a counts column, you can plot the histogram to have a visualization of the distribution: hist(words$counts, 100, col="black") where 100 is the number of bins. You can also do a normal Q-Q plot using qqnorm(words$counts) Finally, you can also use the Shapiro-Wilk test for normality: shapiro.test(words$counts) Although, look at this discussion: Normality Testing: 'Essentially Useless?'
How to perform a test using R to see if data follows normal distribution
Assuming your dataset is called words and has a counts column, you can plot the histogram to have a visualization of the distribution: hist(words$counts, 100, col="black") where 100 is the number of
How to perform a test using R to see if data follows normal distribution Assuming your dataset is called words and has a counts column, you can plot the histogram to have a visualization of the distribution: hist(words$counts, 100, col="black") where 100 is the number of bins. You can also do a normal Q-Q plot using qqnorm(words$counts) Finally, you can also use the Shapiro-Wilk test for normality: shapiro.test(words$counts) Although, look at this discussion: Normality Testing: 'Essentially Useless?'
How to perform a test using R to see if data follows normal distribution Assuming your dataset is called words and has a counts column, you can plot the histogram to have a visualization of the distribution: hist(words$counts, 100, col="black") where 100 is the number of
4,957
How to perform a test using R to see if data follows normal distribution
No test will show you that your data has a normal distribution - it will only be able to show you when the data is sufficiently inconsistent with a normal that you would reject the null. But counts are not normal in any case, they're positive integers - what's the probability that an observation from a normal distribution will take a value that isn't an integer? (... that's an event of probability 1). Why would you test for normality in this case? It's obviously untrue. [In some cases it may not necessarily matter that you can tell your data aren't actually normal. Real data are never (or almost never) going to be actually drawn from a normal distribution.] If you really need to do a test, the Shapiro-Wilk test (?shapiro.test) is a good general test of normality, one that's widely used.
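As a small illustration of both points (a hedged sketch on simulated counts, not the asker's data): counts can never literally be normal, but how much that matters in practice depends on their distribution.

set.seed(1)
low  <- rpois(200, lambda = 2)     # strongly skewed, very discrete counts
high <- rpois(200, lambda = 200)   # counts whose shape is close to a normal curve
shapiro.test(low)                  # rejects normality decisively
shapiro.test(high)                 # typically does not reject, although the data are integers
qqnorm(high); qqline(high, col = 2)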
How to perform a test using R to see if data follows normal distribution
No test will show you that your data has a normal distribution - it will only be able to show you when the data is sufficiently inconsistent with a normal that you would reject the null. But counts a
How to perform a test using R to see if data follows normal distribution No test will show you that your data has a normal distribution - it will only be able to show you when the data is sufficiently inconsistent with a normal that you would reject the null. But counts are not normal in any case, they're positive integers - what's the probability that an observation from a normal distribution will take a value that isn't an integer? (... that's an event of probability 1). Why would you test for normality in this case? It's obviously untrue. [In some cases it may not necessarily matter that you can tell your data aren't actually normal. Real data are never (or almost never) going to be actually drawn from a normal distribution.] If you really need to do a test, the Shapiro-Wilk test (?shapiro.test) is a good general test of normality, one that's widely used.
How to perform a test using R to see if data follows normal distribution No test will show you that your data has a normal distribution - it will only be able to show you when the data is sufficiently inconsistent with a normal that you would reject the null. But counts a
4,958
How to perform a test using R to see if data follows normal distribution
A more formal way of looking at the normality is by testing whether the kurtosis and skewness are significantly different from zero. To do this, we need to get: kurtosis.test <- function (x) { m4 <- sum((x-mean(x))^4)/length(x) s4 <- var(x)^2 kurt <- (m4/s4) - 3 sek <- sqrt(24/length(x)) totest <- kurt/sek pvalue <- pt(totest,(length(x)-1)) pvalue } for kurtosis, and: skew.test <- function (x) { m3 <- sum((x-mean(x))^3)/length(x) s3 <- sqrt(var(x))^3 skew <- m3/s3 ses <- sqrt(6/length(x)) totest <- skew/ses pt(totest,(length(x)-1)) pval <- pt(totest,(length(x)-1)) pval } for Skewness. Both these tests are one-tailed, so you'll need to multiply the p-value by 2 to become two-tailed. If your p-value become larger than one you'll need to use 1-kurtosis.test() instead of kurtosis.test. If you have any other questions you can email me at [email protected]
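A short usage sketch (not part of the original answer; it assumes the two functions above have been defined, and the small two.sided() helper is only illustrative), applying the doubling rule described above:

set.seed(10)
x <- rexp(100)                      # deliberately skewed data
p.skew <- skew.test(x)
p.kurt <- kurtosis.test(x)
two.sided <- function(p) if (2*p > 1) 2*(1 - p) else 2*p   # handles the ">1" case mentioned above
two.sided(p.skew)                   # small: skewness differs from zero
two.sided(p.kurt)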
How to perform a test using R to see if data follows normal distribution
A more formal way of looking at the normality is by testing whether the kurtosis and skewness are significantly different from zero. To do this, we need to get: kurtosis.test <- function (x) { m4 <
How to perform a test using R to see if data follows normal distribution A more formal way of looking at the normality is by testing whether the kurtosis and skewness are significantly different from zero. To do this, we need to get: kurtosis.test <- function (x) { m4 <- sum((x-mean(x))^4)/length(x) s4 <- var(x)^2 kurt <- (m4/s4) - 3 sek <- sqrt(24/length(x)) totest <- kurt/sek pvalue <- pt(totest,(length(x)-1)) pvalue } for kurtosis, and: skew.test <- function (x) { m3 <- sum((x-mean(x))^3)/length(x) s3 <- sqrt(var(x))^3 skew <- m3/s3 ses <- sqrt(6/length(x)) totest <- skew/ses pt(totest,(length(x)-1)) pval <- pt(totest,(length(x)-1)) pval } for Skewness. Both these tests are one-tailed, so you'll need to multiply the p-value by 2 to become two-tailed. If your p-value become larger than one you'll need to use 1-kurtosis.test() instead of kurtosis.test. If you have any other questions you can email me at [email protected]
How to perform a test using R to see if data follows normal distribution A more formal way of looking at the normality is by testing whether the kurtosis and skewness are significantly different from zero. To do this, we need to get: kurtosis.test <- function (x) { m4 <
4,959
How to perform a test using R to see if data follows normal distribution
By using the nortest package of R, these tests can be conducted: Perform Anderson-Darling normality test ad.test(data1) Perform Cramér-von Mises test for normality cvm.test(data1) Perform Pearson chi-square test for normality pearson.test(data1) Perform Shapiro-Francia test for normality sf.test(data1) Many other tests can be done by using the normtest package. See description at https://cran.r-project.org/web/packages/normtest/normtest.pdf
How to perform a test using R to see if data follows normal distribution
By using the nortest package of R, these tests can be conducted: Perform Anderson-Darling normality test ad.test(data1) Perform Cramér-von Mises test for normality cvm.test(data1) Perform Pearson c
How to perform a test using R to see if data follows normal distribution By using the nortest package of R, these tests can be conducted: Perform Anderson-Darling normality test ad.test(data1) Perform Cramér-von Mises test for normality cvm.test(data1) Perform Pearson chi-square test for normality pearson.test(data1) Perform Shapiro-Francia test for normality sf.test(data1) Many other tests can be done by using the normtest package. See description at https://cran.r-project.org/web/packages/normtest/normtest.pdf
How to perform a test using R to see if data follows normal distribution By using the nortest package of R, these tests can be conducted: Perform Anderson-Darling normality test ad.test(data1) Perform Cramér-von Mises test for normality cvm.test(data1) Perform Pearson c
4,960
How to perform a test using R to see if data follows normal distribution
In addition to the Shapiro-Wilk test of the stats package, the nortest package (available on CRAN) provides other normality tests.
How to perform a test using R to see if data follows normal distribution
In addition to the Shapiro-Wilk test of the stats package, the nortest package (available on CRAN) provides other normality tests.
How to perform a test using R to see if data follows normal distribution In addition to the Shapiro-Wilk test of the stats package, the nortest package (available on CRAN) provides other normality tests.
How to perform a test using R to see if data follows normal distribution In addition to the Shapiro-Wilk test of the stats package, the nortest package (available on CRAN) provides other normality tests.
4,961
Alternatives to one-way ANOVA for heteroskedastic data
There are a number of options available when dealing with heteroscedastic data. Unfortunately, none of them is guaranteed to always work. Here are some options I'm familiar with: transformations Welch ANOVA weighted least squares robust regression heteroscedasticity consistent standard errors bootstrap Kruskal-Wallis test ordinal logistic regression Update: Here is a demonstration in R of some ways of fitting a linear model (i.e., an ANOVA or a regression) when you have heteroscedasticity / heterogeneity of variance. Let's start by taking a look at your data. For convenience, I have them loaded into two data frames called my.data (which is structured like above with one column per group) and stacked.data (which has two columns: values with the numbers and ind with the group indicator). We can formally test for heteroscedasticity with Levene's test: library(car) leveneTest(values~ind, stacked.data) # Levene's Test for Homogeneity of Variance (center = median) # Df F value Pr(>F) # group 2 8.1269 0.001153 ** # 38 Sure enough, you have heteroscedasticity. We'll check to see what the variances of the groups are. A rule of thumb is that linear models are fairly robust to heterogeneity of variance so long as the maximum variance is no more than $4\!\times$ greater than the minimum variance, so we'll find that ratio as well: apply(my.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01734578 0.33182844 0.06673060 var(my.data$B, na.rm=T) / var(my.data$A, na.rm=T) # [1] 19.13021 Your variances differ substantially, with the largest, B, being $19\!\times$ the smallest, A. This is a problematic level of heteroscedsaticity. You had thought to use transformations such as the log or square root to stabilize the variance. That will work in some cases, but Box-Cox type transformations stabilize variance by squeezing the data asymmetrically, either squeezing them downwards with the highest data squeezed the most, or squeezing them upwards with the lowest data squeezed the most. Thus, you need the variance of your data to change with the mean for this to work optimally. Your data have a huge difference in variance, but a relatively small difference amongst the means and medians, i.e., the distributions mostly overlap. As a teaching exercise, we can create some parallel.universe.data by adding $2.7$ to all B values and $.7$ to C's to show how it would work: parallel.universe.data = with(my.data, data.frame(A=A, B=B+2.7, C=C+.7)) apply(parallel.universe.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01734578 0.33182844 0.06673060 apply(log(parallel.universe.data), 2, function(x){ var(x, na.rm=T) }) # A B C # 0.12750634 0.02631383 0.05240742 apply(sqrt(parallel.universe.data), 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01120956 0.02325107 0.01461479 var(sqrt(parallel.universe.data$B), na.rm=T) / var(sqrt(parallel.universe.data$A), na.rm=T) # [1] 2.074217 Using the square root transformation stabilizes those data quite well. You can see the improvement for the parallel universe data here: Rather than just trying different transformations, a more systematic approach is to optimize the Box-Cox parameter $\lambda$ (although it is usually recommended to round that to the nearest interpretable transformation). In your case either the square root, $\lambda = .5$, or the log, $\lambda = 0$, are acceptable, though neither actually works. 
For the parallel universe data, the square root is best: boxcox(values~ind, data=stacked.data, na.action=na.omit) boxcox(values~ind, data=stacked.pu.data, na.action=na.omit) Since this case is an ANOVA (i.e., no continuous variables), one way to deal with heterogeneity is to use the Welch correction to the denominator degrees of freedom in the $F$-test (n.b., df = 19.445, a fractional value, rather than df = 38): oneway.test(values~ind, data=stacked.data, na.action=na.omit, var.equal=FALSE) # One-way analysis of means (not assuming equal variances) # # data: values and ind # F = 4.1769, num df = 2.000, denom df = 19.445, p-value = 0.03097 A more general approach is to use weighted least squares. Since some groups (B) spread out more, the data in those groups provide less information about the location of the mean than the data in other groups. We can let the model incorporate this by providing a weight with each data point. A common system is to use the reciprocal of the group variance as the weight: wl = 1 / apply(my.data, 2, function(x){ var(x, na.rm=T) }) stacked.data$w = with(stacked.data, ifelse(ind=="A", wl[1], ifelse(ind=="B", wl[2], wl[3]))) w.mod = lm(values~ind, stacked.data, na.action=na.omit, weights=w) anova(w.mod) # Response: values # Df Sum Sq Mean Sq F value Pr(>F) # ind 2 8.64 4.3201 4.3201 0.02039 * # Residuals 38 38.00 1.0000 This yields slightly different $F$ and $p$-values than the unweighted ANOVA (4.5089, 0.01749), but it has addressed the heterogeneity well: Weighted least squares is not a panacea, however. One uncomfortable fact is that it is only just right if the weights are just right, meaning, among other things, that they are known a-priori. It does not address non-normality (such as skew) or outliers, either. Using weights estimated from your data will often work fine, though, particularly if you have enough data to estimate the variance with reasonable precision (this is analogous to the idea of using a $z$-table instead of a $t$-table when you have $50$ or $100$ degrees of freedom), your data are sufficiently normal, and you don't appear to have any outliers. Unfortunately, you have relatively few data (13 or 15 per group), some skew and possibly some outliers. I'm not sure that these are bad enough to make a big deal out of, but you could mix weighted least squares with robust methods. Instead of using the variance as your measure of spread (which is sensitive to outliers, especially with low $N$), you could use the reciprocal of the inter-quartile range (which is unaffected by up to 50% outliers in each group). These weights could then be combined with robust regression using a different loss function like Tukey's bisquare: 1 / apply(my.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 57.650907 3.013606 14.985628 1 / apply(my.data, 2, function(x){ IQR(x, na.rm=T) }) # A B C # 9.661836 1.291990 4.878049 rw = 1 / apply(my.data, 2, function(x){ IQR(x, na.rm=T) }) stacked.data$rw = with(stacked.data, ifelse(ind=="A", rw[1], ifelse(ind=="B", rw[2], rw[3]))) library(robustbase) w.r.mod = lmrob(values~ind, stacked.data, na.action=na.omit, weights=rw) anova(w.r.mod, lmrob(values~1,stacked.data,na.action=na.omit,weights=rw), test="Wald") # Robust Wald Test Table # # Model 1: values ~ ind # Model 2: values ~ 1 # Largest model fitted by lmrob(), i.e. SM # # pseudoDf Test.Stat Df Pr(>chisq) # 1 38 # 2 40 6.6016 2 0.03685 * The weights here aren't as extreme. 
The predicted group means differ slightly (A: WLS 0.36673, robust 0.35722; B: WLS 0.77646, robust 0.70433; C: WLS 0.50554, robust 0.51845), with the means of B and C being less pulled by extreme values. In econometrics the Huber-White ("sandwich") standard error is very popular. Like the Welch correction, this does not require you to know the variances a-priori and doesn't require you to estimate weights from your data and/or contingent on a model that may not be correct. On the other hand, I don't know how to incorporate this with an ANOVA, meaning that you only get them for the tests of individual dummy codes, which strikes me as less helpful in this case, but I'll demonstrate them anyway: library(sandwich) mod = lm(values~ind, stacked.data, na.action=na.omit) sqrt(diag(vcovHC(mod))) # (Intercept) indB indC # 0.03519921 0.16997457 0.08246131 2*(1-pt(coef(mod) / sqrt(diag(vcovHC(mod))), df=38)) # (Intercept) indB indC # 1.078249e-12 2.087484e-02 1.005212e-01 The function vcovHC calculates a heteroscedasticicy consistent variance-covariance matrix for your betas (your dummy codes), which is what the letters in the function call stand for. To get standard errors, you extract the main diagonal and take the square roots. To get $t$-tests for your betas, you divide your coefficient estimates by the SEs and compare the results to the appropriate $t$-distribution (namely, the $t$-distribution with your residual degrees of freedom). For R users specifically, @TomWenseleers notes in the comments below that the ?Anova function in the car package can accept a white.adjust argument to get a $p$-value for the factor using heteroscedasticity consistent errors. Anova(mod, white.adjust=TRUE) # Analysis of Deviance Table (Type II tests) # # Response: values # Df F Pr(>F) # ind 2 3.9946 0.02663 * # Residuals 38 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 You can try to get an empirical estimate of what the actual sampling distribution of your test statistic looks like by bootstrapping. First, you create a true null by making all group means exactly equal. Then you resample with replacement and calculate your test statistic ($F$) on each bootsample to get an empirical estimate of the sampling distribution of $F$ under the null with your data whatever their status with regard to normality or homogeneity. The proportion of that sampling distribution that is as extreme or more extreme than your observed test statistic is the $p$-value: mod = lm(values~ind, stacked.data, na.action=na.omit) F.stat = anova(mod)[1,4] # create null version of the data nullA = my.data$A - mean(my.data$A) nullB = my.data$B - mean(my.data$B, na.rm=T) nullC = my.data$C - mean(my.data$C, na.rm=T) set.seed(1) F.vect = vector(length=10000) for(i in 1:10000){ A = sample(na.omit(nullA), 15, replace=T) B = sample(na.omit(nullB), 13, replace=T) C = sample(na.omit(nullC), 13, replace=T) boot.dat = stack(list(A=A, B=B, C=C)) boot.mod = lm(values~ind, boot.dat) F.vect[i] = anova(boot.mod)[1,4] } 1-mean(F.stat>F.vect) # [1] 0.0485 In some ways, bootstrapping is the ultimate reduced assumption approach to conducting an analysis of the parameters (e.g., means), but it does assume that your data are a good representation of the population, meaning you have a reasonable sample size. Since your $n$'s are small, it may be less trustworthy. Probably the ultimate protection against non-normality and heterogeneity is to use a non-parametric test. 
The basic non-parametric version of an ANOVA is the Kruskal-Wallis test: kruskal.test(values~ind, stacked.data, na.action=na.omit) # Kruskal-Wallis rank sum test # # data: values by ind # Kruskal-Wallis chi-squared = 5.7705, df = 2, p-value = 0.05584 Although the Kruskal-Wallis test is definitely the best protection against type I errors, it can only be used with a single categorical variable (i.e., no continuous predictors or factorial designs) and it has the least power of all strategies discussed. Another non-parametric approach is to use ordinal logistic regression. This seems odd to a lot of people, but you only need to assume that your response data contain legitimate ordinal information, which they surely do or else every other strategy above is invalid as well: library(rms) olr.mod = orm(values~ind, stacked.data) olr.mod # Model Likelihood Discrimination Rank Discrim. # Ratio Test Indexes Indexes # Obs 41 LR chi2 6.63 R2 0.149 rho 0.365 # Unique Y 41 d.f. 2 g 0.829 # Median Y 0.432 Pr(> chi2) 0.0363 gr 2.292 # max |deriv| 2e-04 Score chi2 6.48 |Pr(Y>=median)-0.5| 0.179 # Pr(> chi2) 0.0391 It may not be clear from the output, but the test of the model as a whole, which in this case is the test of your groups, is the chi2 under Discrimination Indexes. Two versions are listed, a likelihood ratio test and a score test. The likelihood ratio test is typically considered the best. It yields a $p$-value of 0.0363.
Alternatives to one-way ANOVA for heteroskedastic data
There are a number of options available when dealing with heteroscedastic data. Unfortunately, none of them is guaranteed to always work. Here are some options I'm familiar with: transformations We
Alternatives to one-way ANOVA for heteroskedastic data There are a number of options available when dealing with heteroscedastic data. Unfortunately, none of them is guaranteed to always work. Here are some options I'm familiar with: transformations Welch ANOVA weighted least squares robust regression heteroscedasticity consistent standard errors bootstrap Kruskal-Wallis test ordinal logistic regression Update: Here is a demonstration in R of some ways of fitting a linear model (i.e., an ANOVA or a regression) when you have heteroscedasticity / heterogeneity of variance. Let's start by taking a look at your data. For convenience, I have them loaded into two data frames called my.data (which is structured like above with one column per group) and stacked.data (which has two columns: values with the numbers and ind with the group indicator). We can formally test for heteroscedasticity with Levene's test: library(car) leveneTest(values~ind, stacked.data) # Levene's Test for Homogeneity of Variance (center = median) # Df F value Pr(>F) # group 2 8.1269 0.001153 ** # 38 Sure enough, you have heteroscedasticity. We'll check to see what the variances of the groups are. A rule of thumb is that linear models are fairly robust to heterogeneity of variance so long as the maximum variance is no more than $4\!\times$ greater than the minimum variance, so we'll find that ratio as well: apply(my.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01734578 0.33182844 0.06673060 var(my.data$B, na.rm=T) / var(my.data$A, na.rm=T) # [1] 19.13021 Your variances differ substantially, with the largest, B, being $19\!\times$ the smallest, A. This is a problematic level of heteroscedsaticity. You had thought to use transformations such as the log or square root to stabilize the variance. That will work in some cases, but Box-Cox type transformations stabilize variance by squeezing the data asymmetrically, either squeezing them downwards with the highest data squeezed the most, or squeezing them upwards with the lowest data squeezed the most. Thus, you need the variance of your data to change with the mean for this to work optimally. Your data have a huge difference in variance, but a relatively small difference amongst the means and medians, i.e., the distributions mostly overlap. As a teaching exercise, we can create some parallel.universe.data by adding $2.7$ to all B values and $.7$ to C's to show how it would work: parallel.universe.data = with(my.data, data.frame(A=A, B=B+2.7, C=C+.7)) apply(parallel.universe.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01734578 0.33182844 0.06673060 apply(log(parallel.universe.data), 2, function(x){ var(x, na.rm=T) }) # A B C # 0.12750634 0.02631383 0.05240742 apply(sqrt(parallel.universe.data), 2, function(x){ var(x, na.rm=T) }) # A B C # 0.01120956 0.02325107 0.01461479 var(sqrt(parallel.universe.data$B), na.rm=T) / var(sqrt(parallel.universe.data$A), na.rm=T) # [1] 2.074217 Using the square root transformation stabilizes those data quite well. You can see the improvement for the parallel universe data here: Rather than just trying different transformations, a more systematic approach is to optimize the Box-Cox parameter $\lambda$ (although it is usually recommended to round that to the nearest interpretable transformation). In your case either the square root, $\lambda = .5$, or the log, $\lambda = 0$, are acceptable, though neither actually works. 
For the parallel universe data, the square root is best: boxcox(values~ind, data=stacked.data, na.action=na.omit) boxcox(values~ind, data=stacked.pu.data, na.action=na.omit) Since this case is an ANOVA (i.e., no continuous variables), one way to deal with heterogeneity is to use the Welch correction to the denominator degrees of freedom in the $F$-test (n.b., df = 19.445, a fractional value, rather than df = 38): oneway.test(values~ind, data=stacked.data, na.action=na.omit, var.equal=FALSE) # One-way analysis of means (not assuming equal variances) # # data: values and ind # F = 4.1769, num df = 2.000, denom df = 19.445, p-value = 0.03097 A more general approach is to use weighted least squares. Since some groups (B) spread out more, the data in those groups provide less information about the location of the mean than the data in other groups. We can let the model incorporate this by providing a weight with each data point. A common system is to use the reciprocal of the group variance as the weight: wl = 1 / apply(my.data, 2, function(x){ var(x, na.rm=T) }) stacked.data$w = with(stacked.data, ifelse(ind=="A", wl[1], ifelse(ind=="B", wl[2], wl[3]))) w.mod = lm(values~ind, stacked.data, na.action=na.omit, weights=w) anova(w.mod) # Response: values # Df Sum Sq Mean Sq F value Pr(>F) # ind 2 8.64 4.3201 4.3201 0.02039 * # Residuals 38 38.00 1.0000 This yields slightly different $F$ and $p$-values than the unweighted ANOVA (4.5089, 0.01749), but it has addressed the heterogeneity well: Weighted least squares is not a panacea, however. One uncomfortable fact is that it is only just right if the weights are just right, meaning, among other things, that they are known a-priori. It does not address non-normality (such as skew) or outliers, either. Using weights estimated from your data will often work fine, though, particularly if you have enough data to estimate the variance with reasonable precision (this is analogous to the idea of using a $z$-table instead of a $t$-table when you have $50$ or $100$ degrees of freedom), your data are sufficiently normal, and you don't appear to have any outliers. Unfortunately, you have relatively few data (13 or 15 per group), some skew and possibly some outliers. I'm not sure that these are bad enough to make a big deal out of, but you could mix weighted least squares with robust methods. Instead of using the variance as your measure of spread (which is sensitive to outliers, especially with low $N$), you could use the reciprocal of the inter-quartile range (which is unaffected by up to 50% outliers in each group). These weights could then be combined with robust regression using a different loss function like Tukey's bisquare: 1 / apply(my.data, 2, function(x){ var(x, na.rm=T) }) # A B C # 57.650907 3.013606 14.985628 1 / apply(my.data, 2, function(x){ IQR(x, na.rm=T) }) # A B C # 9.661836 1.291990 4.878049 rw = 1 / apply(my.data, 2, function(x){ IQR(x, na.rm=T) }) stacked.data$rw = with(stacked.data, ifelse(ind=="A", rw[1], ifelse(ind=="B", rw[2], rw[3]))) library(robustbase) w.r.mod = lmrob(values~ind, stacked.data, na.action=na.omit, weights=rw) anova(w.r.mod, lmrob(values~1,stacked.data,na.action=na.omit,weights=rw), test="Wald") # Robust Wald Test Table # # Model 1: values ~ ind # Model 2: values ~ 1 # Largest model fitted by lmrob(), i.e. SM # # pseudoDf Test.Stat Df Pr(>chisq) # 1 38 # 2 40 6.6016 2 0.03685 * The weights here aren't as extreme. 
The predicted group means differ slightly (A: WLS 0.36673, robust 0.35722; B: WLS 0.77646, robust 0.70433; C: WLS 0.50554, robust 0.51845), with the means of B and C being less pulled by extreme values. In econometrics the Huber-White ("sandwich") standard error is very popular. Like the Welch correction, this does not require you to know the variances a-priori and doesn't require you to estimate weights from your data and/or contingent on a model that may not be correct. On the other hand, I don't know how to incorporate this with an ANOVA, meaning that you only get them for the tests of individual dummy codes, which strikes me as less helpful in this case, but I'll demonstrate them anyway: library(sandwich) mod = lm(values~ind, stacked.data, na.action=na.omit) sqrt(diag(vcovHC(mod))) # (Intercept) indB indC # 0.03519921 0.16997457 0.08246131 2*(1-pt(coef(mod) / sqrt(diag(vcovHC(mod))), df=38)) # (Intercept) indB indC # 1.078249e-12 2.087484e-02 1.005212e-01 The function vcovHC calculates a heteroscedasticicy consistent variance-covariance matrix for your betas (your dummy codes), which is what the letters in the function call stand for. To get standard errors, you extract the main diagonal and take the square roots. To get $t$-tests for your betas, you divide your coefficient estimates by the SEs and compare the results to the appropriate $t$-distribution (namely, the $t$-distribution with your residual degrees of freedom). For R users specifically, @TomWenseleers notes in the comments below that the ?Anova function in the car package can accept a white.adjust argument to get a $p$-value for the factor using heteroscedasticity consistent errors. Anova(mod, white.adjust=TRUE) # Analysis of Deviance Table (Type II tests) # # Response: values # Df F Pr(>F) # ind 2 3.9946 0.02663 * # Residuals 38 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 You can try to get an empirical estimate of what the actual sampling distribution of your test statistic looks like by bootstrapping. First, you create a true null by making all group means exactly equal. Then you resample with replacement and calculate your test statistic ($F$) on each bootsample to get an empirical estimate of the sampling distribution of $F$ under the null with your data whatever their status with regard to normality or homogeneity. The proportion of that sampling distribution that is as extreme or more extreme than your observed test statistic is the $p$-value: mod = lm(values~ind, stacked.data, na.action=na.omit) F.stat = anova(mod)[1,4] # create null version of the data nullA = my.data$A - mean(my.data$A) nullB = my.data$B - mean(my.data$B, na.rm=T) nullC = my.data$C - mean(my.data$C, na.rm=T) set.seed(1) F.vect = vector(length=10000) for(i in 1:10000){ A = sample(na.omit(nullA), 15, replace=T) B = sample(na.omit(nullB), 13, replace=T) C = sample(na.omit(nullC), 13, replace=T) boot.dat = stack(list(A=A, B=B, C=C)) boot.mod = lm(values~ind, boot.dat) F.vect[i] = anova(boot.mod)[1,4] } 1-mean(F.stat>F.vect) # [1] 0.0485 In some ways, bootstrapping is the ultimate reduced assumption approach to conducting an analysis of the parameters (e.g., means), but it does assume that your data are a good representation of the population, meaning you have a reasonable sample size. Since your $n$'s are small, it may be less trustworthy. Probably the ultimate protection against non-normality and heterogeneity is to use a non-parametric test. 
The basic non-parametric version of an ANOVA is the Kruskal-Wallis test: kruskal.test(values~ind, stacked.data, na.action=na.omit) # Kruskal-Wallis rank sum test # # data: values by ind # Kruskal-Wallis chi-squared = 5.7705, df = 2, p-value = 0.05584 Although the Kruskal-Wallis test is definitely the best protection against type I errors, it can only be used with a single categorical variable (i.e., no continuous predictors or factorial designs) and it has the least power of all strategies discussed. Another non-parametric approach is to use ordinal logistic regression. This seems odd to a lot of people, but you only need to assume that your response data contain legitimate ordinal information, which they surely do or else every other strategy above is invalid as well: library(rms) olr.mod = orm(values~ind, stacked.data) olr.mod # Model Likelihood Discrimination Rank Discrim. # Ratio Test Indexes Indexes # Obs 41 LR chi2 6.63 R2 0.149 rho 0.365 # Unique Y 41 d.f. 2 g 0.829 # Median Y 0.432 Pr(> chi2) 0.0363 gr 2.292 # max |deriv| 2e-04 Score chi2 6.48 |Pr(Y>=median)-0.5| 0.179 # Pr(> chi2) 0.0391 It may not be clear from the output, but the test of the model as a whole, which in this case is the test of your groups, is the chi2 under Discrimination Indexes. Two versions are listed, a likelihood ratio test and a score test. The likelihood ratio test is typically considered the best. It yields a $p$-value of 0.0363.
Alternatives to one-way ANOVA for heteroskedastic data There are a number of options available when dealing with heteroscedastic data. Unfortunately, none of them is guaranteed to always work. Here are some options I'm familiar with: transformations We
4,962
What is Deviance? (specifically in CART/rpart)
Deviance and GLM Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models $\ell_1/\ell_0$ where $\ell_0$ is the "smaller" model; that is, a linear restriction on model parameters (cf. the Neyman–Pearson lemma), as @suncoolsu said. As such, it can be used to perform model comparison. It can also be seen as a generalization of the RSS used in OLS estimation (ANOVA, regression), for it provides a measure of goodness-of-fit of the model being evaluated when compared to the null model (intercept only). It works with LM too: > x <- rnorm(100) > y <- 0.8*x+rnorm(100) > lm.res <- lm(y ~ x) The residuals SS (RSS) is computed as $\hat\varepsilon^t\hat\varepsilon$, which is readily obtained as: > t(residuals(lm.res))%*%residuals(lm.res) [,1] [1,] 98.66754 or from the (unadjusted) $R^2$ > summary(lm.res) Call: lm(formula = y ~ x) (...) Residual standard error: 1.003 on 98 degrees of freedom Multiple R-squared: 0.4234, Adjusted R-squared: 0.4175 F-statistic: 71.97 on 1 and 98 DF, p-value: 2.334e-13 since $R^2=1-\text{RSS}/\text{TSS}$ where $\text{TSS}$ is the total variance. Note that it is directly available in an ANOVA table, like > summary.aov(lm.res) Df Sum Sq Mean Sq F value Pr(>F) x 1 72.459 72.459 71.969 2.334e-13 *** Residuals 98 98.668 1.007 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Now, look at the deviance: > deviance(lm.res) [1] 98.66754 In fact, for linear models the deviance equals the RSS (you may recall that OLS and ML estimates coincide in such a case). Deviance and CART We can see CART as a way to allocate already $n$ labeled individuals into arbitrary classes (in a classification context). Trees can be viewed as providing a probability model for individuals class membership. So, at each node $i$, we have a probability distribution $p_{ik}$ over the classes. What is important here is that the leaves of the tree give us a random sample $n_{ik}$ from a multinomial distribution specified by $p_{ik}$. We can thus define the deviance of a tree, $D$, as the sum over all leaves of $$D_i=-2\sum_kn_{ik}\log(p_{ik}),$$ following Venables and Ripley's notations (MASS, Springer 2002, 4th ed.). If you have access to this essential reference for R users (IMHO), you can check by yourself how such an approach is used for splitting nodes and fitting a tree to observed data (p. 255 ff.); basically, the idea is to minimize, by pruning the tree, $D+\alpha \#(T)$ where $\#(T)$ is the number of nodes in the tree $T$. Here we recognize the cost-complexity trade-off. Here, $D$ is equivalent to the concept of node impurity (i.e., the heterogeneity of the distribution at a given node) which are based on a measure of entropy or information gain, or the well-known Gini index, defined as $1-\sum_kp_{ik}^2$ (the unknown proportions are estimated from node proportions). With a regression tree, the idea is quite similar, and we can conceptualize the deviance as sum of squares defined for individuals $j$ by $$D_i=\sum_j(y_j-\mu_i)^2,$$ summed over all leaves. Here, the probability model that is considered within each leaf is a gaussian $\mathcal{N}(\mu_i,\sigma^2)$. Quoting Venables and Ripley (p. 256), "$D$ is the usual scaled deviance for a gaussian GLM. However, the distribution at internal nodes of the tree is then a mixture of normal distributions, and so $D_i$ is only appropriate at the leaves. 
The tree-construction process has to be seen as a hierarchical refinement of probability models, very similar to forward variable selection in regression." Section 9.2 provides further detailed information about rpart implementation, but you can already look at the residuals() function for rpart object, where "deviance residuals" are computed as the square root of minus twice the logarithm of the fitted model. An introduction to recursive partitioning using the rpart routines, by Atkinson and Therneau, is also a good start. For more general review (including bagging), I would recommend Moissen, G.G. (2008). Classification and Regression Trees. Ecological Informatics, pp. 582-588. Sutton, C.D. (2005). Classification and Regression Trees, Bagging, and Boosting, in Handbook of Statistics, Vol. 24, pp. 303-329, Elsevier.
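As a small check of the regression-tree case (a hedged sketch relying on the dev column that rpart stores in its frame component), the deviance recorded for the root node of an "anova" tree is just the corrected sum of squares of the response, mirroring deviance(lm.res) = RSS above:

library(rpart)
set.seed(1)
x <- rnorm(100); y <- 0.8*x + rnorm(100)
fit <- rpart(y ~ x, method = "anova")
fit$frame[1, "dev"]            # deviance stored for the root node ...
sum((y - mean(y))^2)           # ... equals the total (corrected) sum of squares of y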
What is Deviance? (specifically in CART/rpart)
Deviance and GLM Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models
What is Deviance? (specifically in CART/rpart) Deviance and GLM Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models $\ell_1/\ell_0$ where $\ell_0$ is the "smaller" model; that is, a linear restriction on model parameters (cf. the Neyman–Pearson lemma), as @suncoolsu said. As such, it can be used to perform model comparison. It can also be seen as a generalization of the RSS used in OLS estimation (ANOVA, regression), for it provides a measure of goodness-of-fit of the model being evaluated when compared to the null model (intercept only). It works with LM too: > x <- rnorm(100) > y <- 0.8*x+rnorm(100) > lm.res <- lm(y ~ x) The residuals SS (RSS) is computed as $\hat\varepsilon^t\hat\varepsilon$, which is readily obtained as: > t(residuals(lm.res))%*%residuals(lm.res) [,1] [1,] 98.66754 or from the (unadjusted) $R^2$ > summary(lm.res) Call: lm(formula = y ~ x) (...) Residual standard error: 1.003 on 98 degrees of freedom Multiple R-squared: 0.4234, Adjusted R-squared: 0.4175 F-statistic: 71.97 on 1 and 98 DF, p-value: 2.334e-13 since $R^2=1-\text{RSS}/\text{TSS}$ where $\text{TSS}$ is the total variance. Note that it is directly available in an ANOVA table, like > summary.aov(lm.res) Df Sum Sq Mean Sq F value Pr(>F) x 1 72.459 72.459 71.969 2.334e-13 *** Residuals 98 98.668 1.007 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Now, look at the deviance: > deviance(lm.res) [1] 98.66754 In fact, for linear models the deviance equals the RSS (you may recall that OLS and ML estimates coincide in such a case). Deviance and CART We can see CART as a way to allocate already $n$ labeled individuals into arbitrary classes (in a classification context). Trees can be viewed as providing a probability model for individuals class membership. So, at each node $i$, we have a probability distribution $p_{ik}$ over the classes. What is important here is that the leaves of the tree give us a random sample $n_{ik}$ from a multinomial distribution specified by $p_{ik}$. We can thus define the deviance of a tree, $D$, as the sum over all leaves of $$D_i=-2\sum_kn_{ik}\log(p_{ik}),$$ following Venables and Ripley's notations (MASS, Springer 2002, 4th ed.). If you have access to this essential reference for R users (IMHO), you can check by yourself how such an approach is used for splitting nodes and fitting a tree to observed data (p. 255 ff.); basically, the idea is to minimize, by pruning the tree, $D+\alpha \#(T)$ where $\#(T)$ is the number of nodes in the tree $T$. Here we recognize the cost-complexity trade-off. Here, $D$ is equivalent to the concept of node impurity (i.e., the heterogeneity of the distribution at a given node) which are based on a measure of entropy or information gain, or the well-known Gini index, defined as $1-\sum_kp_{ik}^2$ (the unknown proportions are estimated from node proportions). With a regression tree, the idea is quite similar, and we can conceptualize the deviance as sum of squares defined for individuals $j$ by $$D_i=\sum_j(y_j-\mu_i)^2,$$ summed over all leaves. Here, the probability model that is considered within each leaf is a gaussian $\mathcal{N}(\mu_i,\sigma^2)$. Quoting Venables and Ripley (p. 256), "$D$ is the usual scaled deviance for a gaussian GLM. However, the distribution at internal nodes of the tree is then a mixture of normal distributions, and so $D_i$ is only appropriate at the leaves. 
The tree-construction process has to be seen as a hierarchical refinement of probability models, very similar to forward variable selection in regression." Section 9.2 provides further detailed information about rpart implementation, but you can already look at the residuals() function for rpart object, where "deviance residuals" are computed as the square root of minus twice the logarithm of the fitted model. An introduction to recursive partitioning using the rpart routines, by Atkinson and Therneau, is also a good start. For more general review (including bagging), I would recommend Moissen, G.G. (2008). Classification and Regression Trees. Ecological Informatics, pp. 582-588. Sutton, C.D. (2005). Classification and Regression Trees, Bagging, and Boosting, in Handbook of Statistics, Vol. 24, pp. 303-329, Elsevier.
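To make the CART part above concrete, here is a small R sketch (not part of the original answer) that fits a regression tree with rpart and inspects the per-node deviance it stores; if I read rpart's documented object structure correctly, the frame$dev column holds the within-node sum of squares for method = "anova", so the root node's deviance equals the total sum of squares of the response.
library(rpart)
set.seed(1)
x <- rnorm(200)
y <- 0.8 * x + rnorm(200)
fit <- rpart(y ~ x, method = "anova")
# Deviance (within-node sum of squares) stored for every node of the tree
fit$frame[, c("var", "n", "dev")]
# The root node's deviance equals the total sum of squares of y
c(root = fit$frame$dev[1], TSS = sum((y - mean(y))^2))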
What is Deviance? (specifically in CART/rpart) Deviance and GLM Formally, one can view deviance as a sort of distance between two probabilistic models; in GLM context, it amounts to two times the log ratio of likelihoods between two nested models
4,963
What is Deviance? (specifically in CART/rpart)
It might be a bit clearer if we think about a perfect model with as many parameters as observations such that it explains all variance in the response. This is the saturated model. Deviance simply measures the difference in "fit" between a candidate model and the saturated model. In a regression tree, the saturated model would be one that had as many terminal nodes (leaves) as observations so it would perfectly fit the response. The deviance of a simpler model can be computed as the node residual sums of squares, summed over all nodes. In other words, the sum of squared differences between predicted and observed values. This is the same sort of error (or deviance) used in least squares regression. For a classification tree, the residual sum of squares is not the most appropriate measure of lack of fit. Instead, there is an alternative measure of deviance; in addition, trees can be built by minimising an entropy measure or the Gini index. The latter is the default in rpart. The Gini index is computed as: $$D_i = 1 - \sum_{k = 1}^{K} p_{ik}^2$$ where $p_{ik}$ is the observed proportion of class $k$ in node $i$. This measure is summed over all terminal nodes $i$ of the tree to arrive at a deviance for the fitted tree model.
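As a small illustrative sketch (mine, not part of the original answer), the Gini impurity above is easy to compute by hand, and rpart lets you switch between Gini and entropy ("information") splitting through its parms argument; the kyphosis data set used below ships with rpart.
library(rpart)
gini <- function(p) 1 - sum(p^2)   # D_i = 1 - sum_k p_ik^2
gini(c(0.5, 0.5))                  # 0.5: a maximally impure two-class node
gini(c(1, 0))                      # 0:   a pure node
data(kyphosis, package = "rpart")
fit_gini <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
                  parms = list(split = "gini"))          # the default
fit_info <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
                  parms = list(split = "information"))   # entropy-based splits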
What is Deviance? (specifically in CART/rpart)
It might be a bit clearer if we think about a perfect model with as many parameters as observations such that it explains all variance in the response. This is the saturated model. Deviance simply mea
What is Deviance? (specifically in CART/rpart) It might be a bit clearer if we think about a perfect model with as many parameters as observations such that it explains all variance in the response. This is the saturated model. Deviance simply measures the difference in "fit" of a candidate model and that of the saturated model. In a regression tree, the saturated model would be one that had as many terminal nodes (leaves) as observations so it would perfectly fit the response. The deviance of a simpler model can be computed as the node residual sums of squares, summed over all nodes. In other words, the sum of squared differences between predicted and observed values. This is the same sort of error (or deviance) used in least squares regression. For a classification tree, residual sums of squares is not the most appropriate measure of lack of fit. Instead, there is an alternative measure of deviance, plus trees can be built minimising an entropy measure or the Gini index. The latter is the default in rpart. The Gini index is computed as: $$D_i = 1 - \sum_{k = 1}^{K} p_{ik}^2$$ where $p_{ik}$ is the observed proportion of class $k$ in node $i$. This measure is summed of all terminal $i$ nodes in the tree to arrive at a deviance for the fitted tree model.
What is Deviance? (specifically in CART/rpart) It might be a bit clearer if we think about a perfect model with as many parameters as observations such that it explains all variance in the response. This is the saturated model. Deviance simply mea
4,964
What is Deviance? (specifically in CART/rpart)
Deviance is the likelihood-ratio statistic for testing the null hypothesis that the model holds against the general alternative (i.e., the saturated model). For some Poisson and binomial GLMs, the number of observations $N$ stays fixed as the individual counts increase in size. Then the deviance has a chi-squared asymptotic null distribution. The degrees of freedom = N - p, where p is the number of model parameters; i.e., it equals the difference in the number of free parameters between the saturated and unsaturated models. The deviance then provides a test for the model fit. $Deviance = -2[L(\hat{\mathbf{\mu}} | \mathbf{y})-L(\mathbf{y}|\mathbf{y})]$ However, most of the time you want to test whether you need to drop some variables. Say there are two models $M_1$ and $M_2$ with $p_1$ and $p_2$ parameters, respectively, and you need to test which of these two is better. Assume $M_1$ is a special case of $M_2$, i.e. the models are nested. In that case, the difference of deviances is taken: $\Delta Deviance = -2[L(\hat{\mathbf{\mu}_1} | \mathbf{y})-L(\hat{\mathbf{\mu}_2}|\mathbf{y})]$ Notice that the log likelihood of the saturated model cancels and the degrees of freedom of $\Delta Deviance$ become $p_2-p_1$. This is what we use most often when we need to test whether some of the parameters are 0 or not. But when you fit a glm in R, the deviance output is for the saturated model vs the current model. If you want to read about this in greater detail, see Categorical Data Analysis by Alan Agresti, p. 118.
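A brief sketch of that nested-model test in R (simulated data, with illustrative variable names): anova() on two nested glm fits reports the deviance difference and compares it to a chi-squared distribution on $p_2 - p_1$ degrees of freedom.
set.seed(42)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 * x1))        # x2 has no true effect
m1 <- glm(y ~ x1,      family = binomial)   # smaller model M1
m2 <- glm(y ~ x1 + x2, family = binomial)   # larger model M2
anova(m1, m2, test = "Chisq")               # Delta deviance with df = p2 - p1
deviance(m1) - deviance(m2)                 # the same quantity by hand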
What is Deviance? (specifically in CART/rpart)
Deviance is the likelihood-ratio statistic for testing the null hypothesis that the model holds agains the general alternative (i.e., the saturated model). For some Poisson and binomial GLMs, the numb
What is Deviance? (specifically in CART/rpart) Deviance is the likelihood-ratio statistic for testing the null hypothesis that the model holds agains the general alternative (i.e., the saturated model). For some Poisson and binomial GLMs, the number of observations $N$ stays fixed as the individual counts increase in size. Then the deviance has a chi-squared asymptotic null distribution. The degrees of freedom = N - p, where p is the number of model parameters; i.e., it is equal to the numbers of free parameters in the saturated and unsaturated models. The deviance then provides a test for the model fit. $Deviance = -2[L(\hat{\mathbf{\mu}} | \mathbf{y})-L(\mathbf{y}|\mathbf{y})]$ However, most of the times, you want to test if you need to drop some variables. Say there are two models $M_1$ and $M_2$ with $p_1$ and $p_2$ parameters, respectively, and you need to test which of these two is better. Assume $M_1$ is a special case of $M_2$ i.e. nested models. In that case, the difference of deviance is taken: $\Delta Deviance = -2[L(\hat{\mathbf{\mu}_1} | \mathbf{y})-L(\hat{\mathbf{\mu}_2}|\mathbf{y})]$ Notice that the log likelihood of the saturated model cancels and the degree of freedom of $\Delta Deviance$ changes to $p_2-p_1$. This is what we use most often when we need to test if some of the parameters are 0 or not. But when you fit glm in R the deviance output is for the saturated model vs the current model. If you want to read in greater details: cf: Categorical Data Analysis by Alan Agresti, pp 118.
What is Deviance? (specifically in CART/rpart) Deviance is the likelihood-ratio statistic for testing the null hypothesis that the model holds agains the general alternative (i.e., the saturated model). For some Poisson and binomial GLMs, the numb
4,965
Rank in R - descending order [closed]
You could negate x: > rank(-x) [1] 5 3 6 2 4 1
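A small follow-up sketch in case ties matter: base R's rank() takes a ties.method argument, so descending ranks with "competition"-style ties can be obtained as below.
x <- c(2, 7, 1, 8, 2, 8)
rank(-x)                        # descending ranks, ties averaged (the default)
rank(-x, ties.method = "min")   # ties share the best (smallest) rank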
Rank in R - descending order [closed]
You could negate x: > rank(-x) [1] 5 3 6 2 4 1
Rank in R - descending order [closed] You could negate x: > rank(-x) [1] 5 3 6 2 4 1
Rank in R - descending order [closed] You could negate x: > rank(-x) [1] 5 3 6 2 4 1
4,966
Deriving Bellman's Equation in Reinforcement Learning
There are already a great many answers to this question, but most involve few words describing what is going on in the manipulations. I'm going to answer it using way more words, I think. To start, $$G_{t} \doteq \sum_{k=t+1}^{T} \gamma^{k-t-1} R_{k}$$ is defined in equation 3.11 of Sutton and Barto, with a constant discount factor $0 \leq \gamma \leq 1$ and we can have $T = \infty$ or $\gamma = 1$, but not both. Since the rewards, $R_{k}$, are random variables, so is $G_{t}$ as it is merely a linear combination of random variables. $$\begin{align} v_\pi(s) & \doteq \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] + \gamma \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] \end{align}$$ That last line follows from the linearity of expectation values. $R_{t+1}$ is the reward the agent gains after taking action at time step $t$. For simplicity, I assume that it can take on a finite number of values $r \in \mathcal{R}$. Work on the first term. In words, I need to compute the expectation values of $R_{t+1}$ given that we know that the current state is $s$. The formula for this is $$\begin{align} \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] = \sum_{r \in \mathcal{R}} r p(r|s). \end{align}$$ In other words the probability of the appearance of reward $r$ is conditioned on the state $s$; different states may have different rewards. This $p(r|s)$ distribution is a marginal distribution of a distribution that also contained the variables $a$ and $s'$, the action taken at time $t$ and the state at time $t+1$ after the action, respectively: $$\begin{align} p(r|s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s',a,r|s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \pi(a|s) p(s',r | a,s). \end{align}$$ Where I have used $\pi(a|s) \doteq p(a|s)$, following the book's convention. If that last equality is confusing, forget the sums, suppress the $s$ (the probability now looks like a joint probability), use the law of multiplication and finally reintroduce the condition on $s$ in all the new terms. It in now easy to see that the first term is $$\begin{align} \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} r \pi(a|s) p(s',r | a,s), \end{align}$$ as required. On to the second term, where I assume that $G_{t+1}$ is a random variable that takes on a finite number of values $g \in \Gamma$. Just like the first term: $$\begin{align} \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] = \sum_{g \in \Gamma} g p(g|s). \qquad\qquad\qquad\qquad (*) \end{align}$$ Once again, I "un-marginalize" the probability distribution by writing (law of multiplication again) $$\begin{align} p(g|s) & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s',r,a,g|s) = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r, a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r | a, s) \pi(a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r | a, s) \pi(a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s') p(s', r | a, s) \pi(a | s) \qquad\qquad\qquad\qquad (**) \end{align}$$ The last line in there follows from the Markovian property. 
Remember that $G_{t+1}$ is the sum of all the future (discounted) rewards that the agent receives after state $s'$. The Markovian property is that the process is memory-less with regards to previous states, actions and rewards. Future actions (and the rewards they reap) depend only on the state in which the action is taken, so $p(g | s', r, a, s) = p(g | s')$, by assumption. Ok, so the second term in the proof is now $$\begin{align} \gamma \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] & = \gamma \sum_{g \in \Gamma} \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} g p(g | s') p(s', r | a, s) \pi(a | s) \\ & = \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \mathbb{E}_{\pi}\left[ G_{t+1} | S_{t+1} = s' \right] p(s', r | a, s) \pi(a | s) \\ & = \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} v_{\pi}(s') p(s', r | a, s) \pi(a | s) \end{align}$$ as required, once again. Combining the two terms completes the proof $$\begin{align} v_\pi(s) & \doteq \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \sum_{a \in \mathcal{A}} \pi(a | s) \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} p(s', r | a, s) \left[ r + \gamma v_{\pi}(s') \right]. \end{align}$$ UPDATE I want to address what might look like a sleight of hand in the derivation of the second term. In the equation marked with $(*)$, I use a term $p(g|s)$ and then later in the equation marked $(**)$ I claim that $g$ doesn't depend on $s$, by arguing the Markovian property. So, you might say that if this is the case, then $p(g|s) = p(g)$. But this is not true. I can take $p(g | s', r, a, s) \rightarrow p(g | s')$ because the probability on the left side of that statement says that this is the probability of $g$ conditioned on $s'$, $a$, $r$, and $s$. Because we either know or assume the state $s'$, none of the other conditionals matter, because of the Markovian property. If you do not know or assume the state $s'$, then the future rewards (the meaning of $g$) will depend on which state you begin at, because that will determine (based on the policy) which state $s'$ you start at when computing $g$. If that argument doesn't convince you, try to compute what $p(g)$ is: $$\begin{align} p(g) & = \sum_{s' \in \mathcal{S}} p(g, s') = \sum_{s' \in \mathcal{S}} p(g | s') p(s') \\ & = \sum_{s' \in \mathcal{S}} p(g | s') \sum_{s,a,r} p(s', a, r, s) \\ & = \sum_{s' \in \mathcal{S}} p(g | s') \sum_{s,a,r} p(s', r | a, s) p(a, s) \\ & = \sum_{s \in \mathcal{S}} p(s) \sum_{s' \in \mathcal{S}} p(g | s') \sum_{a,r} p(s', r | a, s) \pi(a | s) \\ & \doteq \sum_{s \in \mathcal{S}} p(s) p(g|s) = \sum_{s \in \mathcal{S}} p(g,s) = p(g). \end{align}$$ As can be seen in the last line, it is not true that $p(g|s) = p(g)$. The expected value of $g$ depends on which state you start in (i.e. the identity of $s$), if you do not know or assume the state $s'$.
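Since this is hard to internalise from symbols alone, here is a small numerical sanity check in R (a made-up two-state MDP; every number below is an illustrative assumption, not something from the book): the state values obtained by solving the linear Bellman system are compared against a Monte Carlo estimate of $\mathbb{E}_\pi[G_t \mid S_t = s]$.
set.seed(7)
nS <- 2; nA <- 2; gamma <- 0.9
# P[s, a, s'] transition probabilities; R[s, a, s'] (deterministic) rewards
P <- array(0, dim = c(nS, nA, nS))
P[1, 1, ] <- c(0.8, 0.2); P[1, 2, ] <- c(0.1, 0.9)
P[2, 1, ] <- c(0.5, 0.5); P[2, 2, ] <- c(0.3, 0.7)
R <- array(0, dim = c(nS, nA, nS))
R[1, 1, ] <- c(1, 0);  R[1, 2, ] <- c(0, 2)
R[2, 1, ] <- c(-1, 1); R[2, 2, ] <- c(0, 0.5)
pi_sa <- rbind(c(0.6, 0.4),   # pi(a | s = 1)
               c(0.3, 0.7))   # pi(a | s = 2)
# Policy-averaged transition matrix and expected one-step reward
P_pi <- matrix(0, nS, nS); r_pi <- numeric(nS)
for (s in 1:nS) for (a in 1:nA) {
  P_pi[s, ] <- P_pi[s, ] + pi_sa[s, a] * P[s, a, ]
  r_pi[s]   <- r_pi[s]   + pi_sa[s, a] * sum(P[s, a, ] * R[s, a, ])
}
# v_pi solves the linear Bellman system v = r_pi + gamma * P_pi %*% v
v <- solve(diag(nS) - gamma * P_pi, r_pi)
# Monte Carlo estimate of E[G_t | S_t = s] under pi, truncated at `horizon` steps
mc_value <- function(s0, episodes = 5000, horizon = 200) {
  mean(replicate(episodes, {
    s <- s0; g <- 0
    for (t in 0:(horizon - 1)) {
      a  <- sample(nA, 1, prob = pi_sa[s, ])
      s2 <- sample(nS, 1, prob = P[s, a, ])
      g  <- g + gamma^t * R[s, a, s2]
      s  <- s2
    }
    g
  }))
}
rbind(bellman = as.numeric(v), monte_carlo = c(mc_value(1), mc_value(2)))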
Deriving Bellman's Equation in Reinforcement Learning
There are already a great many answers to this question, but most involve few words describing what is going on in the manipulations. I'm going to answer it using way more words, I think. To start,
Deriving Bellman's Equation in Reinforcement Learning There are already a great many answers to this question, but most involve few words describing what is going on in the manipulations. I'm going to answer it using way more words, I think. To start, $$G_{t} \doteq \sum_{k=t+1}^{T} \gamma^{k-t-1} R_{k}$$ is defined in equation 3.11 of Sutton and Barto, with a constant discount factor $0 \leq \gamma \leq 1$ and we can have $T = \infty$ or $\gamma = 1$, but not both. Since the rewards, $R_{k}$, are random variables, so is $G_{t}$ as it is merely a linear combination of random variables. $$\begin{align} v_\pi(s) & \doteq \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] + \gamma \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] \end{align}$$ That last line follows from the linearity of expectation values. $R_{t+1}$ is the reward the agent gains after taking action at time step $t$. For simplicity, I assume that it can take on a finite number of values $r \in \mathcal{R}$. Work on the first term. In words, I need to compute the expectation values of $R_{t+1}$ given that we know that the current state is $s$. The formula for this is $$\begin{align} \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] = \sum_{r \in \mathcal{R}} r p(r|s). \end{align}$$ In other words the probability of the appearance of reward $r$ is conditioned on the state $s$; different states may have different rewards. This $p(r|s)$ distribution is a marginal distribution of a distribution that also contained the variables $a$ and $s'$, the action taken at time $t$ and the state at time $t+1$ after the action, respectively: $$\begin{align} p(r|s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s',a,r|s) = \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \pi(a|s) p(s',r | a,s). \end{align}$$ Where I have used $\pi(a|s) \doteq p(a|s)$, following the book's convention. If that last equality is confusing, forget the sums, suppress the $s$ (the probability now looks like a joint probability), use the law of multiplication and finally reintroduce the condition on $s$ in all the new terms. It in now easy to see that the first term is $$\begin{align} \mathbb{E}_{\pi}\left[ R_{t+1} | S_t = s \right] = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} r \pi(a|s) p(s',r | a,s), \end{align}$$ as required. On to the second term, where I assume that $G_{t+1}$ is a random variable that takes on a finite number of values $g \in \Gamma$. Just like the first term: $$\begin{align} \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] = \sum_{g \in \Gamma} g p(g|s). 
\qquad\qquad\qquad\qquad (*) \end{align}$$ Once again, I "un-marginalize" the probability distribution by writing (law of multiplication again) $$\begin{align} p(g|s) & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(s',r,a,g|s) = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r, a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r | a, s) \pi(a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s', r, a, s) p(s', r | a, s) \pi(a | s) \\ & = \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} p(g | s') p(s', r | a, s) \pi(a | s) \qquad\qquad\qquad\qquad (**) \end{align}$$ The last line in there follows from the Markovian property. Remember that $G_{t+1}$ is the sum of all the future (discounted) rewards that the agent receives after state $s'$. The Markovian property is that the process is memory-less with regards to previous states, actions and rewards. Future actions (and the rewards they reap) depend only on the state in which the action is taken, so $p(g | s', r, a, s) = p(g | s')$, by assumption. Ok, so the second term in the proof is now $$\begin{align} \gamma \mathbb{E}_{\pi}\left[ G_{t+1} | S_t = s \right] & = \gamma \sum_{g \in \Gamma} \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} g p(g | s') p(s', r | a, s) \pi(a | s) \\ & = \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \mathbb{E}_{\pi}\left[ G_{t+1} | S_{t+1} = s' \right] p(s', r | a, s) \pi(a | s) \\ & = \gamma \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} v_{\pi}(s') p(s', r | a, s) \pi(a | s) \end{align}$$ as required, once again. Combining the two terms completes the proof $$\begin{align} v_\pi(s) & \doteq \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \sum_{a \in \mathcal{A}} \pi(a | s) \sum_{r \in \mathcal{R}} \sum_{s' \in \mathcal{S}} p(s', r | a, s) \left[ r + \gamma v_{\pi}(s') \right]. \end{align}$$ UPDATE I want to address what might look like a sleight of hand in the derivation of the second term. In the equation marked with $(*)$, I use a term $p(g|s)$ and then later in the equation marked $(**)$ I claim that $g$ doesn't depend on $s$, by arguing the Markovian property. So, you might say that if this is the case, then $p(g|s) = p(g)$. But this is not true. I can take $p(g | s', r, a, s) \rightarrow p(g | s')$ because the probability on the left side of that statement says that this is the probability of $g$ conditioned on $s'$, $a$, $r$, and $s$. Because we either know or assume the state $s'$, none of the other conditionals matter, because of the Markovian property. If you do not know or assume the state $s'$, then the future rewards (the meaning of $g$) will depend on which state you begin at, because that will determine (based on the policy) which state $s'$ you start at when computing $g$. 
If that argument doesn't convince you, try to compute what $p(g)$ is: $$\begin{align} p(g) & = \sum_{s' \in \mathcal{S}} p(g, s') = \sum_{s' \in \mathcal{S}} p(g | s') p(s') \\ & = \sum_{s' \in \mathcal{S}} p(g | s') \sum_{s,a,r} p(s', a, r, s) \\ & = \sum_{s' \in \mathcal{S}} p(g | s') \sum_{s,a,r} p(s', r | a, s) p(a, s) \\ & = \sum_{s \in \mathcal{S}} p(s) \sum_{s' \in \mathcal{S}} p(g | s') \sum_{a,r} p(s', r | a, s) \pi(a | s) \\ & \doteq \sum_{s \in \mathcal{S}} p(s) p(g|s) = \sum_{s \in \mathcal{S}} p(g,s) = p(g). \end{align}$$ As can be seen in the last line, it is not true that $p(g|s) = p(g)$. The expected value of $g$ depends on which state you start in (i.e. the identity of $s$), if you do not know or assume the state $s'$.
Deriving Bellman's Equation in Reinforcement Learning There are already a great many answers to this question, but most involve few words describing what is going on in the manipulations. I'm going to answer it using way more words, I think. To start,
4,967
Deriving Bellman's Equation in Reinforcement Learning
Here is my proof. It is based on the manipulation of conditional distributions, which makes it easier to follow. Hope this one helps you. \begin{align} v_{\pi}(s)&=E{\left[G_t|S_t=s\right]} \nonumber \\ &=E{\left[R_{t+1}+\gamma G_{t+1}|S_t=s\right]} \nonumber \\ &= \sum_{s'}\sum_{r}\sum_{g_{t+1}}\sum_{a}p(s',r,g_{t+1}, a|s)(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}\sum_{g_{t+1}}p(s',r,g_{t+1} |a, s)(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}\sum_{g_{t+1}}p(s',r|a, s)p(g_{t+1}|s', r, a, s)(r+\gamma g_{t+1}) \nonumber \\ &\text{Note that $p(g_{t+1}|s', r, a, s)=p(g_{t+1}|s')$ by assumption of MDP} \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)\sum_{g_{t+1}}p(g_{t+1}|s')(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)(r+\gamma\sum_{g_{t+1}}p(g_{t+1}|s')g_{t+1}) \nonumber \\ &=\sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)\left(r+\gamma v_{\pi}(s')\right) \label{eq2} \end{align} This is the famous Bellman equation.
Deriving Bellman's Equation in Reinforcement Learning
Here is my proof. It is based on the manipulation of conditional distributions, which makes it easier to follow. Hope this one helps you. \begin{align} v_{\pi}(s)&=E{\left[G_t|S_t=s\right]} \nonumber
Deriving Bellman's Equation in Reinforcement Learning Here is my proof. It is based on the manipulation of conditional distributions, which makes it easier to follow. Hope this one helps you. \begin{align} v_{\pi}(s)&=E{\left[G_t|S_t=s\right]} \nonumber \\ &=E{\left[R_{t+1}+\gamma G_{t+1}|S_t=s\right]} \nonumber \\ &= \sum_{s'}\sum_{r}\sum_{g_{t+1}}\sum_{a}p(s',r,g_{t+1}, a|s)(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}\sum_{g_{t+1}}p(s',r,g_{t+1} |a, s)(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}\sum_{g_{t+1}}p(s',r|a, s)p(g_{t+1}|s', r, a, s)(r+\gamma g_{t+1}) \nonumber \\ &\text{Note that $p(g_{t+1}|s', r, a, s)=p(g_{t+1}|s')$ by assumption of MDP} \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)\sum_{g_{t+1}}p(g_{t+1}|s')(r+\gamma g_{t+1}) \nonumber \\ &= \sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)(r+\gamma\sum_{g_{t+1}}p(g_{t+1}|s')g_{t+1}) \nonumber \\ &=\sum_{a}p(a|s)\sum_{s'}\sum_{r}p(s',r|a, s)\left(r+\gamma v_{\pi}(s')\right) \label{eq2} \end{align} This is the famous Bellman equation.
Deriving Bellman's Equation in Reinforcement Learning Here is my proof. It is based on the manipulation of conditional distributions, which makes it easier to follow. Hope this one helps you. \begin{align} v_{\pi}(s)&=E{\left[G_t|S_t=s\right]} \nonumber
4,968
Deriving Bellman's Equation in Reinforcement Learning
This is the answer for everybody who wonders about the clean, structured math behind it (i.e. if you belong to the group of people that knows what a random variable is and that you must show or assume that a random variable has a density then this is the answer for you ;-)): First of all we need to have that the Markov Decision Process has only a finite number of $L^1$-rewards, i.e. we need that there exists a finite set $E$ of densities, each belonging to $L^1$ variables, i.e. $\int_{\mathbb{R}}x \cdot e(x) dx < \infty$ for all $e \in E$ and a map $F : A \times S \to E$ such that $$p(r_t|a_t, s_t) = F(a_t, s_t)(r_t)$$ (i.e. in the automata behind the MDP, there may be infinitely many states but there are only finitely many $L^1$-reward-distributions attached to the possibly infinite transitions between the states) Theorem 1: Let $X \in L^1(\Omega)$ (i.e. an integrable real random variable) and let $Y$ be another random variable such that $X,Y$ have a common density then $$E[X|Y=y] = \int_\mathbb{R} x p(x|y) dx$$ Proof: Essentially proven in here by Stefan Hansen. Theorem 2: Let $X \in L^1(\Omega)$ and let $Y,Z$ be further random variables such that $X,Y,Z$ have a common density then $$E[X|Y=y] = \int_{\mathcal{Z}} p(z|y) E[X|Y=y,Z=z] dz$$ where $\mathcal{Z}$ is the range of $Z$. Proof: \begin{align*} E[X|Y=y] &= \int_{\mathbb{R}} x p(x|y) dx \\ &~~~~\text{(by Thm. 1)}\\ &= \int_{\mathbb{R}} x \frac{p(x,y)}{p(y)} dx \\ &= \int_{\mathbb{R}} x \frac{\int_{\mathcal{Z}} p(x,y,z) dz}{p(y)} dx \\ &= \int_{\mathcal{Z}} \int_{\mathbb{R}} x \frac{ p(x,y,z) }{p(y)} dx dz \\ &= \int_{\mathcal{Z}} \int_{\mathbb{R}} x p(x|y,z)p(z|y) dx dz \\ &= \int_{\mathcal{Z}} p(z|y) \int_{\mathbb{R}} x p(x|y,z) dx dz \\ &= \int_{\mathcal{Z}} p(z|y) E[X|Y=y,Z=z] dz \\ &~~~~\text{(by Thm. 1)} \end{align*} Put $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k}$ and put $G_t^{(K)} = \sum_{k=0}^K \gamma^k R_{t+k}$ then one can show (using the fact that the MDP has only finitely many $L^1$-rewards) that $G_t^{(K)}$ converges and that since the function $\sum_{k=0}^\infty \gamma^k |R_{t+k}|$ is still in $L^1(\Omega)$ (i.e. integrable) one can also show (by using the usual combination of the theorems of monotone convergence and then dominated convergence on the defining equations for [the factorizations of] the conditional expectation) that $$\lim_{K \to \infty} E[G_t^{(K)} | S_t=s_t] = E[G_t | S_t=s_t]$$ Now one shows that $$E[G_t^{(K)} | S_t=s_t] = E[R_{t} | S_t=s_t] + \gamma \int_S p(s_{t+1}|s_t) E[G_{t+1}^{(K-1)} | S_{t+1}=s_{t+1}] ds_{t+1}$$ using $G_t^{(K)} = R_t + \gamma G_{t+1}^{(K-1)}$, Thm. 2 above then Thm. 1 on $E[G_{t+1}^{(K-1)}|S_{t+1}=s', S_t=s_t]$ and then using a straightforward marginalization war, one shows that $p(r_q|s_{t+1}, s_t) = p(r_q|s_{t+1})$ for all $q \geq t+1$. Now we need to apply the limit $K \to \infty$ to both sides of the equation. In order to pull the limit into the integral over the state space $S$ we need to make some additional assumptions: Either the state space is finite (then $\int_S = \sum_S$ and the sum is finite) or all the rewards are all positive (then we use monotone convergence) or all the rewards are negative (then we put a minus sign in front of the equation and use monotone convergence again) or all the rewards are bounded (then we use dominated convergence). 
Then (by applying $\lim_{K \to \infty}$ to both sides of the partial / finite Bellman equation above) we obtain $$ E[G_t | S_t=s_t] = \lim_{K \to \infty} E[G_t^{(K)} | S_t=s_t] = E[R_{t} | S_t=s_t] + \gamma \int_S p(s_{t+1}|s_t) E[G_{t+1} | S_{t+1}=s_{t+1}] ds_{t+1}$$ and then the rest is usual density manipulation. REMARK: Even in very simple tasks the state space can be infinite! One example would be the 'balancing a pole'-task. The state is essentially the angle of the pole (a value in $[0, 2\pi)$, an uncountably infinite set!) REMARK: People might comment 'doh, this proof could be shortened much more if you just used the density of $G_t$ directly and showed that $p(g_{t+1}|s_{t+1}, s_t) = p(g_{t+1}|s_{t+1})$' ... BUT ... my questions would be: How do you even know that $G_{t+1}$ has a density? How do you even know that $G_{t+1}$ has a common density together with $S_{t+1}, S_t$? How do you infer that $p(g_{t+1}|s_{t+1}, s_t) = p(g_{t+1}|s_{t+1})$? This is not only the Markov property: the Markov property only tells you something about the marginal distributions, but these do not necessarily determine the whole distribution; see e.g. multivariate Gaussians!
Deriving Bellman's Equation in Reinforcement Learning
This is the answer for everybody who wonders about the clean, structured math behind it (i.e. if you belong to the group of people that knows what a random variable is and that you must show or assume
Deriving Bellman's Equation in Reinforcement Learning This is the answer for everybody who wonders about the clean, structured math behind it (i.e. if you belong to the group of people that knows what a random variable is and that you must show or assume that a random variable has a density then this is the answer for you ;-)): First of all we need to have that the Markov Decision Process has only a finite number of $L^1$-rewards, i.e. we need that there exists a finite set $E$ of densities, each belonging to $L^1$ variables, i.e. $\int_{\mathbb{R}}x \cdot e(x) dx < \infty$ for all $e \in E$ and a map $F : A \times S \to E$ such that $$p(r_t|a_t, s_t) = F(a_t, s_t)(r_t)$$ (i.e. in the automata behind the MDP, there may be infinitely many states but there are only finitely many $L^1$-reward-distributions attached to the possibly infinite transitions between the states) Theorem 1: Let $X \in L^1(\Omega)$ (i.e. an integrable real random variable) and let $Y$ be another random variable such that $X,Y$ have a common density then $$E[X|Y=y] = \int_\mathbb{R} x p(x|y) dx$$ Proof: Essentially proven in here by Stefan Hansen. Theorem 2: Let $X \in L^1(\Omega)$ and let $Y,Z$ be further random variables such that $X,Y,Z$ have a common density then $$E[X|Y=y] = \int_{\mathcal{Z}} p(z|y) E[X|Y=y,Z=z] dz$$ where $\mathcal{Z}$ is the range of $Z$. Proof: \begin{align*} E[X|Y=y] &= \int_{\mathbb{R}} x p(x|y) dx \\ &~~~~\text{(by Thm. 1)}\\ &= \int_{\mathbb{R}} x \frac{p(x,y)}{p(y)} dx \\ &= \int_{\mathbb{R}} x \frac{\int_{\mathcal{Z}} p(x,y,z) dz}{p(y)} dx \\ &= \int_{\mathcal{Z}} \int_{\mathbb{R}} x \frac{ p(x,y,z) }{p(y)} dx dz \\ &= \int_{\mathcal{Z}} \int_{\mathbb{R}} x p(x|y,z)p(z|y) dx dz \\ &= \int_{\mathcal{Z}} p(z|y) \int_{\mathbb{R}} x p(x|y,z) dx dz \\ &= \int_{\mathcal{Z}} p(z|y) E[X|Y=y,Z=z] dz \\ &~~~~\text{(by Thm. 1)} \end{align*} Put $G_t = \sum_{k=0}^\infty \gamma^k R_{t+k}$ and put $G_t^{(K)} = \sum_{k=0}^K \gamma^k R_{t+k}$ then one can show (using the fact that the MDP has only finitely many $L^1$-rewards) that $G_t^{(K)}$ converges and that since the function $\sum_{k=0}^\infty \gamma^k |R_{t+k}|$ is still in $L^1(\Omega)$ (i.e. integrable) one can also show (by using the usual combination of the theorems of monotone convergence and then dominated convergence on the defining equations for [the factorizations of] the conditional expectation) that $$\lim_{K \to \infty} E[G_t^{(K)} | S_t=s_t] = E[G_t | S_t=s_t]$$ Now one shows that $$E[G_t^{(K)} | S_t=s_t] = E[R_{t} | S_t=s_t] + \gamma \int_S p(s_{t+1}|s_t) E[G_{t+1}^{(K-1)} | S_{t+1}=s_{t+1}] ds_{t+1}$$ using $G_t^{(K)} = R_t + \gamma G_{t+1}^{(K-1)}$, Thm. 2 above then Thm. 1 on $E[G_{t+1}^{(K-1)}|S_{t+1}=s', S_t=s_t]$ and then using a straightforward marginalization war, one shows that $p(r_q|s_{t+1}, s_t) = p(r_q|s_{t+1})$ for all $q \geq t+1$. Now we need to apply the limit $K \to \infty$ to both sides of the equation. In order to pull the limit into the integral over the state space $S$ we need to make some additional assumptions: Either the state space is finite (then $\int_S = \sum_S$ and the sum is finite) or all the rewards are all positive (then we use monotone convergence) or all the rewards are negative (then we put a minus sign in front of the equation and use monotone convergence again) or all the rewards are bounded (then we use dominated convergence). 
Then (by applying $\lim_{K \to \infty}$ to both sides of the partial / finite Bellman equation above) we obtain $$ E[G_t | S_t=s_t] = E[G_t^{(K)} | S_t=s_t] = E[R_{t} | S_t=s_t] + \gamma \int_S p(s_{t+1}|s_t) E[G_{t+1} | S_{t+1}=s_{t+1}] ds_{t+1}$$ and then the rest is usual density manipulation. REMARK: Even in very simple tasks the state space can be infinite! One example would be the 'balancing a pole'-task. The state is essentially the angle of the pole (a value in $[0, 2\pi)$, an uncountably infinite set!) REMARK: People might comment 'dough, this proof can be shortened much more if you just use the density of $G_t$ directly and show that $p(g_{t+1}|s_{t+1}, s_t) = p(g_{t+1}|s_{t+1})$' ... BUT ... my questions would be: How come that you even know that $G_{t+1}$ has a density? How come that you even know that $G_{t+1}$ has a common density together with $S_{t+1}, S_t$? How do you infer that $p(g_{t+1}|s_{t+1}, s_t) = p(g_{t+1}|s_{t+1})$? This is not only the Markov property: The Markov property only tells you something about the marginal distributions but these do not necessarily determine the whole distribution, see e.g. multivariate Gaussians!
Deriving Bellman's Equation in Reinforcement Learning This is the answer for everybody who wonders about the clean, structured math behind it (i.e. if you belong to the group of people that knows what a random variable is and that you must show or assume
4,969
Deriving Bellman's Equation in Reinforcement Learning
Let the total sum of discounted rewards after time $t$ be: $G_t = R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...$ The utility value of starting in state $s$ at time $t$ is the expected sum of discounted rewards $R$ from executing policy $\pi$ starting in state $s$ onwards. $U_\pi(S_t=s) = E_\pi[G_t|S_t = s]$ $\\ = E_\pi[(R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...)|S_t = s]$ By definition of $G_t$ $= E_\pi[(R_{t+1}+\gamma (R_{t+2}+\gamma R_{t+3}+...))|S_t = s]$ $= E_\pi[(R_{t+1}+\gamma (G_{t+1}))|S_t = s]$ $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[ G_{t+1}|S_t = s]$ By linearity of expectation $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[E_\pi(G_{t+1}|S_{t+1} = s')|S_t = s]$ By the law of total expectation $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[U_\pi(S_{t+1}= s')|S_t = s]$ By definition of $U_\pi$ $= E_\pi[R_{t+1} + \gamma U_\pi(S_{t+1}= s')|S_t = s]$ By linearity of expectation Assuming that the process satisfies the Markov property: the probability $Pr$ of ending up in state $s'$ having started from state $s$ and taken action $a$ is $Pr(s'|s,a) = Pr(S_{t+1} = s' \mid S_t=s, A_t = a)$, and the expected reward $R$ of ending up in state $s'$ having started from state $s$ and taken action $a$ is $R(s,a,s') = E[R_{t+1}|S_t = s, A_t = a, S_{t+1}= s']$. Therefore we can rewrite the above utility equation as $= \sum_a \pi(a|s) \sum_{s'} Pr(s'|s,a)[R(s,a,s')+ \gamma U_\pi(S_{t+1}=s')]$ where $\pi(a|s)$ is the probability of taking action $a$ when in state $s$ under a stochastic policy. For a deterministic policy, $\pi(a|s)$ puts all of its mass on a single action; in either case $\sum_a \pi(a|s)= 1$.
Deriving Bellman's Equation in Reinforcement Learning
Let total sum of discounted rewards after time $t$ be: $G_t = R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...$ Utility value of starting in state,$s$ at time,$t$ is equivalent to expected sum of disco
Deriving Bellman's Equation in Reinforcement Learning Let total sum of discounted rewards after time $t$ be: $G_t = R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...$ Utility value of starting in state,$s$ at time,$t$ is equivalent to expected sum of discounted rewards $R$ of executing policy $\pi$ starting from state $s$ onwards. $U_\pi(S_t=s) = E_\pi[G_t|S_t = s]$ $\\ = E_\pi[(R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...)|S_t = s]$ By definition of $G_t$ $= E_\pi[(R_{t+1}+\gamma (R_{t+2}+\gamma R_{t+3}+...))|S_t = s]$ $= E_\pi[(R_{t+1}+\gamma (G_{t+1}))|S_t = s]$ $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[ G_{t+1}|S_t = s]$ By law of linearity $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[E_\pi(G_{t+1}|S_{t+1} = s')|S_t = s]$ By law of Total Expectation $= E_\pi[R_{t+1}|S_t = s]+\gamma E_\pi[U_\pi(S_{t+1}= s')|S_t = s]$ By definition of $U_\pi$ $= E_\pi[R_{t+1} + \gamma U_\pi(S_{t+1}= s')|S_t = s]$ By law of linearity Assuming that the process satisfies Markov Property: Probability $Pr$ of ending up in state $s'$ having started from state $s$ and taken action $a$ , $Pr(s'|s,a) = Pr(S_{t+1} = s', S_t=s,A_t = a)$ and Reward $R$ of ending up in state $s'$ having started from state $s$ and taken action $a$, $R(s,a,s') = [R_{t+1}|S_t = s, A_t = a, S_{t+1}= s']$ Therefore we can re-write above utility equation as, $= \sum_a \pi(a|s) \sum_{s'} Pr(s'|s,a)[R(s,a,s')+ \gamma U_\pi(S_{t+1}=s')]$ Where; $\pi(a|s)$ : Probability of taking action $a$ when in state $s$ for a stochastic policy. For deterministic policy, $\sum_a \pi(a|s)= 1$
Deriving Bellman's Equation in Reinforcement Learning Let total sum of discounted rewards after time $t$ be: $G_t = R_{t+1}+\gamma R_{t+2}+\gamma^2 R_{t+3}+...$ Utility value of starting in state,$s$ at time,$t$ is equivalent to expected sum of disco
4,970
Deriving Bellman's Equation in Reinforcement Learning
I know there is already an accepted answer, but I wish to provide a probably more concrete derivation. I would also like to mention that although @Jie Shi trick somewhat makes sense, but it makes me feel very uncomfortable:(. We need to consider the time dimension to make this work. And it is important to note that, the expectation is actually taken over the entire infinite horizon, rather than just over $s$ and $s'$. Let assume we start from $t=0$ (in fact, the derivation is the same regardless of the starting time; I do not want to contaminate the equations with another subscript $k$) \begin{align} v_{\pi}(s_0)&=\mathbb{E}_{\pi}[G_{0}|s_0]\\ G_0&=\sum_{t=0}^{T-1}\gamma^tR_{t+1}\\ \mathbb{E}_{\pi}[G_{0}|s_0]&=\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\\ &\times\Big(\sum_{t=0}^{T-1}\gamma^tr_{t+1}\Big)\bigg)\\ &=\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\\ &\times\Big(r_1+\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\Big)\bigg) \end{align} NOTED THAT THE ABOVE EQUATION HOLDS EVEN IF $T\rightarrow\infty$, IN FACT IT WILL BE TRUE UNTIL THE END OF UNIVERSE (maybe be a bit exaggerated :) ) At this stage, I believe most of us should already have in mind how the above leads to the final expression--we just need to apply sum-product rule($\sum_a\sum_b\sum_cabc\equiv\sum_aa\sum_bb\sum_cc$) painstakingly. Let us apply the law of linearity of Expectation to each term inside the $\Big(r_{1}+\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\Big)$ Part 1 $$\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\times r_1\bigg)$$ Well this is rather trivial, all probabilities disappear (actually sum to 1) except those related to $r_1$. Therefore, we have $$\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times r_1$$ Part 2 Guess what, this part is even more trivial--it only involves rearranging the sequence of summations. $$\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\bigg)\\=\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\bigg(\sum_{a_1}\pi(a_1|s_1)\sum_{a_{2},...a_{T}}\sum_{s_{2},...s_{T}}\sum_{r_{2},...r_{T}}\bigg(\prod_{t=0}^{T-2}\pi(a_{t+2}|s_{t+2})p(s_{t+2},r_{t+2}|s_{t+1},a_{t+1})\bigg)\bigg)$$ And Eureka!! we recover a recursive pattern in side the big parentheses. Let us combine it with $\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}$, and we obtain $v_{\pi}(s_1)=\mathbb{E}_{\pi}[G_1|s_1]$ $$\gamma\mathbb{E}_{\pi}[G_1|s_1]=\sum_{a_1}\pi(a_1|s_1)\sum_{a_{2},...a_{T}}\sum_{s_{2},...s_{T}}\sum_{r_{2},...r_{T}}\bigg(\prod_{t=0}^{T-2}\pi(a_{t+2}|s_{t+2})p(s_{t+2},r_{t+2}|s_{t+1},a_{t+1})\bigg)\bigg(\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\bigg)$$ and part 2 becomes $$\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times \gamma v_{\pi}(s_1)$$ Part 1 + Part 2 $$v_{\pi}(s_0) =\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times \Big(r_1+\gamma v_{\pi}(s_1)\Big) $$ And now if we can tuck in the time dimension and recover the general recursive formulae $$v_{\pi}(s) =\sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)\times \Big(r+\gamma v_{\pi}(s')\Big) $$ Final confession, I laughed when I saw people above mention the use of law of total expectation. So here I am
Deriving Bellman's Equation in Reinforcement Learning
I know there is already an accepted answer, but I wish to provide a probably more concrete derivation. I would also like to mention that although @Jie Shi trick somewhat makes sense, but it makes me f
Deriving Bellman's Equation in Reinforcement Learning I know there is already an accepted answer, but I wish to provide a probably more concrete derivation. I would also like to mention that although @Jie Shi trick somewhat makes sense, but it makes me feel very uncomfortable:(. We need to consider the time dimension to make this work. And it is important to note that, the expectation is actually taken over the entire infinite horizon, rather than just over $s$ and $s'$. Let assume we start from $t=0$ (in fact, the derivation is the same regardless of the starting time; I do not want to contaminate the equations with another subscript $k$) \begin{align} v_{\pi}(s_0)&=\mathbb{E}_{\pi}[G_{0}|s_0]\\ G_0&=\sum_{t=0}^{T-1}\gamma^tR_{t+1}\\ \mathbb{E}_{\pi}[G_{0}|s_0]&=\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\\ &\times\Big(\sum_{t=0}^{T-1}\gamma^tr_{t+1}\Big)\bigg)\\ &=\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\\ &\times\Big(r_1+\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\Big)\bigg) \end{align} NOTED THAT THE ABOVE EQUATION HOLDS EVEN IF $T\rightarrow\infty$, IN FACT IT WILL BE TRUE UNTIL THE END OF UNIVERSE (maybe be a bit exaggerated :) ) At this stage, I believe most of us should already have in mind how the above leads to the final expression--we just need to apply sum-product rule($\sum_a\sum_b\sum_cabc\equiv\sum_aa\sum_bb\sum_cc$) painstakingly. Let us apply the law of linearity of Expectation to each term inside the $\Big(r_{1}+\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\Big)$ Part 1 $$\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\times r_1\bigg)$$ Well this is rather trivial, all probabilities disappear (actually sum to 1) except those related to $r_1$. Therefore, we have $$\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times r_1$$ Part 2 Guess what, this part is even more trivial--it only involves rearranging the sequence of summations. $$\sum_{a_0}\pi(a_0|s_0)\sum_{a_{1},...a_{T}}\sum_{s_{1},...s_{T}}\sum_{r_{1},...r_{T}}\bigg(\prod_{t=0}^{T-1}\pi(a_{t+1}|s_{t+1})p(s_{t+1},r_{t+1}|s_t,a_t)\bigg)\\=\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\bigg(\sum_{a_1}\pi(a_1|s_1)\sum_{a_{2},...a_{T}}\sum_{s_{2},...s_{T}}\sum_{r_{2},...r_{T}}\bigg(\prod_{t=0}^{T-2}\pi(a_{t+2}|s_{t+2})p(s_{t+2},r_{t+2}|s_{t+1},a_{t+1})\bigg)\bigg)$$ And Eureka!! we recover a recursive pattern in side the big parentheses. 
Let us combine it with $\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}$, and we obtain $v_{\pi}(s_1)=\mathbb{E}_{\pi}[G_1|s_1]$ $$\gamma\mathbb{E}_{\pi}[G_1|s_1]=\sum_{a_1}\pi(a_1|s_1)\sum_{a_{2},...a_{T}}\sum_{s_{2},...s_{T}}\sum_{r_{2},...r_{T}}\bigg(\prod_{t=0}^{T-2}\pi(a_{t+2}|s_{t+2})p(s_{t+2},r_{t+2}|s_{t+1},a_{t+1})\bigg)\bigg(\gamma\sum_{t=0}^{T-2}\gamma^tr_{t+2}\bigg)$$ and part 2 becomes $$\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times \gamma v_{\pi}(s_1)$$ Part 1 + Part 2 $$v_{\pi}(s_0) =\sum_{a_0}\pi(a_0|s_0)\sum_{s_1,r_1}p(s_1,r_1|s_0,a_0)\times \Big(r_1+\gamma v_{\pi}(s_1)\Big) $$ And now if we can tuck in the time dimension and recover the general recursive formulae $$v_{\pi}(s) =\sum_a \pi(a|s)\sum_{s',r} p(s',r|s,a)\times \Big(r+\gamma v_{\pi}(s')\Big) $$ Final confession, I laughed when I saw people above mention the use of law of total expectation. So here I am
Deriving Bellman's Equation in Reinforcement Learning I know there is already an accepted answer, but I wish to provide a probably more concrete derivation. I would also like to mention that although @Jie Shi trick somewhat makes sense, but it makes me f
4,971
Deriving Bellman's Equation in Reinforcement Learning
This is just a comment/addition to the accepted answer. I was confused at the line where the law of total expectation is applied. I don't think the basic form of the law of total expectation can help here; a variant of it is in fact needed. If $X,Y,Z$ are random variables and all the expectations exist, then the following identity holds: $E[X|Y] = E[E[X|Y,Z]|Y]$ In this case, $X= G_{t+1}$, $Y = S_t$ and $Z = S_{t+1}$. Then $E[G_{t+1}|S_t=s] = E\big[E[G_{t+1}|S_t=s, S_{t+1}]\,\big|\,S_t=s\big]$, which by the Markov property equals $E\big[E[G_{t+1}|S_{t+1}]\,\big|\,S_t=s\big]$. From there, one can follow the rest of the proof from the answer.
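If it helps, the variant $E[X|Y] = E[E[X|Y,Z]|Y]$ can also be checked by a quick simulation; the R sketch below uses toy discrete variables (nothing MDP-specific) and is only meant as an illustration.
set.seed(1)
n <- 1e6
Y <- sample(1:2, n, replace = TRUE)
Z <- sample(1:3, n, replace = TRUE, prob = c(0.2, 0.3, 0.5))
X <- Y + Z + rnorm(n)
y <- 1
lhs <- mean(X[Y == y])                            # E[X | Y = y] estimated directly
pz_given_y <- prop.table(table(Z[Y == y]))        # P(Z = z | Y = y)
inner      <- tapply(X[Y == y], Z[Y == y], mean)  # E[X | Y = y, Z = z]
rhs        <- sum(pz_given_y * inner)             # averaged over Z given Y = y
c(lhs = lhs, rhs = rhs)                           # agree up to Monte Carlo error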
Deriving Bellman's Equation in Reinforcement Learning
This is just a comment/addition to the accepted answer. I was confused at the line where law of total expectation is being applied. I don't think the main form of law of total expectation can help he
Deriving Bellman's Equation in Reinforcement Learning This is just a comment/addition to the accepted answer. I was confused at the line where law of total expectation is being applied. I don't think the main form of law of total expectation can help here. A variant of that is in fact needed here. If $X,Y,Z$ are random variables and assuming all the expectation exists, then the following identity holds: $E[X|Y] = E[E[X|Y,Z]|Y]$ In this case, $X= G_{t+1}$, $Y = S_t$ and $Z = S_{t+1}$. Then $E[G_{t+1}|S_t=s] = E[E[G_{t+1}|S_t=s, S_{t+1}=s'|S_t=s]$, which by Markov property eqauls to $E[E[G_{t+1}|S_{t+1}=s']|S_t=s]$ From there, one could follow the rest of the proof from the answer.
Deriving Bellman's Equation in Reinforcement Learning This is just a comment/addition to the accepted answer. I was confused at the line where law of total expectation is being applied. I don't think the main form of law of total expectation can help he
4,972
Deriving Bellman's Equation in Reinforcement Learning
Even though the correct answer has already been given and some time has passed, I thought the following step-by-step guide might be useful: By linearity of the expected value we can split $E[R_{t+1} + \gamma G_{t+1}|S_{t}=s]$ into $E[R_{t+1}|S_t=s]$ and $\gamma E[G_{t+1}|S_{t}=s]$. I will outline the steps only for the first part, as the second part follows by the same steps combined with the law of total expectation. \begin{align} E[R_{t+1}|S_t=s]&=\sum_r{ r P[R_{t+1}=r|S_t =s]} \\ &= \sum_a{ \sum_r{ r P[R_{t+1}=r, A_t=a|S_t=s]}} \qquad \text{(III)} \\ &=\sum_a{ \sum_r{ r P[R_{t+1}=r| A_t=a, S_t=s] P[A_t=a|S_t=s]}} \\ &= \sum_{s^{'}}{ \sum_a{ \sum_r{ r P[S_{t+1}=s^{'}, R_{t+1}=r| A_t=a, S_t=s] P[A_t=a|S_t=s] }}} \\ &=\sum_a{ \pi(a|s) \sum_{s^{'},r}{ r \, p(s^{'},r|s,a)} } \end{align} where (III) follows from: \begin{align} P[A,B|C]&=\frac{P[A,B,C]}{P[C]} \\ &= \frac{P[A,B,C]}{P[C]} \frac{P[B,C]}{P[B,C]}\\ &= \frac{P[A,B,C]}{P[B,C]} \frac{P[B,C]}{P[C]}\\ &= P[A|B,C] P[B|C] \end{align}
Deriving Bellman's Equation in Reinforcement Learning
even though the correct answer has already been given and some time has passed, I thought the following step by step guide might be useful: By linearity of the Expected Value we can split $E[R_{t+1} +
Deriving Bellman's Equation in Reinforcement Learning even though the correct answer has already been given and some time has passed, I thought the following step by step guide might be useful: By linearity of the Expected Value we can split $E[R_{t+1} + \gamma E[G_{t+1}|S_{t}=s]]$ into $E[R_{t+1}|S_t=s]$ and $\gamma E[G_{t+1}|S_{t}=s]$. I will outline the steps only for the first part, as the second part follows by the same steps combined with the Law of Total Expectation. \begin{align} E[R_{t+1}|S_t=s]&=\sum_r{ r P[R_{t+1}=r|S_t =s]} \\ &= \sum_a{ \sum_r{ r P[R_{t+1}=r, A_t=a|S_t=s]}} \qquad \text{(III)} \\ &=\sum_a{ \sum_r{ r P[R_{t+1}=r| A_t=a, S_t=s] P[A_t=a|S_t=s]}} \\ &= \sum_{s^{'}}{ \sum_a{ \sum_r{ r P[S_{t+1}=s^{'}, R_{t+1}=r| A_t=a, S_t=s] P[A_t=a|S_t=s] }}} \\ &=\sum_a{ \pi(a|s) \sum_{s^{'},r}{p(s^{'},r|s,a)} } r \end{align} Whereas (III) follows form: \begin{align} P[A,B|C]&=\frac{P[A,B,C]}{P[C]} \\ &= \frac{P[A,B,C]}{P[C]} \frac{P[B,C]}{P[B,C]}\\ &= \frac{P[A,B,C]}{P[B,C]} \frac{P[B,C]}{P[C]}\\ &= P[A|B,C] P[B|C] \end{align}
Deriving Bellman's Equation in Reinforcement Learning even though the correct answer has already been given and some time has passed, I thought the following step by step guide might be useful: By linearity of the Expected Value we can split $E[R_{t+1} +
4,973
Deriving Bellman's Equation in Reinforcement Learning
What about the following approach? $$\begin{align} v_\pi(s) & = \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \sum_a \pi(a \mid s) \sum_{s'} \sum_r p(s', r \mid s, a) \cdot \,\\ & \qquad \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_{t} = s, A_{t} = a, S_{t+1} = s', R_{t+1} = r\right] \\ & = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \left[r + \gamma v_\pi(s')\right]. \end{align}$$ The sums are introduced in order to retrieve $a$, $s'$ and $r$ from $s$. After all, the possible actions and possible next states can be enumerated. With these extra conditions, the linearity of the expectation leads to the result almost directly. I am not sure how rigorous my argument is mathematically, though. I am open to improvements.
Deriving Bellman's Equation in Reinforcement Learning
What's with the following approach? $$\begin{align} v_\pi(s) & = \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \sum_a
Deriving Bellman's Equation in Reinforcement Learning What's with the following approach? $$\begin{align} v_\pi(s) & = \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \sum_a \pi(a \mid s) \sum_{s'} \sum_r p(s', r \mid s, a) \cdot \,\\ & \qquad \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_{t} = s, A_{t+1} = a, S_{t+1} = s', R_{t+1} = r\right] \\ & = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \left[r + \gamma v_\pi(s')\right]. \end{align}$$ The sums are introduced in order to retrieve $a$, $s'$ and $r$ from $s$. After all, the possible actions and possible next states can be . With these extra conditions, the linearity of the expectation leads to the result almost directly. I am not sure how rigorous my argument is mathematically, though. I am open for improvements.
Deriving Bellman's Equation in Reinforcement Learning What's with the following approach? $$\begin{align} v_\pi(s) & = \mathbb{E}_\pi\left[G_t \mid S_t = s\right] \\ & = \mathbb{E}_\pi\left[R_{t+1} + \gamma G_{t+1} \mid S_t = s\right] \\ & = \sum_a
4,974
Deriving Bellman's Equation in Reinforcement Learning
$\mathbb{E}_\pi(\cdot)$ usually denotes the expectation assuming the agent follows policy $\pi$. In this case $\pi(a|s)$ seems non-deterministic, i.e. it returns the probability that the agent takes action $a$ when in state $s$. It looks like $r$, lower-case, is replacing $R_{t+1}$, a random variable. The second expectation replaces the infinite sum, to reflect the assumption that we continue to follow $\pi$ for all future $t$. $\sum_{s',r} r \cdot p(s',r|s,a)$ is then the expected immediate reward on the next time step; the second expectation (which becomes $v_\pi$) is the expected value of the next state, weighted by the probability of winding up in state $s'$ having taken $a$ from $s$. Thus, the expectation accounts for the policy probability as well as the transition and reward functions, here expressed together as $p(s', r|s,a)$.
Deriving Bellman's Equation in Reinforcement Learning
$\mathbb{E}_\pi(\cdot)$ usually denotes the expectation assuming the agent follows policy $\pi$. In this case $\pi(a|s)$ seems non-deterministic, i.e. returns the probability that the agent takes acti
Deriving Bellman's Equation in Reinforcement Learning $\mathbb{E}_\pi(\cdot)$ usually denotes the expectation assuming the agent follows policy $\pi$. In this case $\pi(a|s)$ seems non-deterministic, i.e. returns the probability that the agent takes action $a$ when in state $s$. It looks like $r$, lower-case, is replacing $R_{t+1}$, a random variable. The second expectation replaces the infinite sum, to reflect the assumption that we continue to follow $\pi$ for all future $t$. $\sum_{s',r} r \cdot p(s′,r|s,a)$ is then the expected immediate reward on the next time step; The second expectation—which becomes $v_\pi$—is the expected value of the next state, weighted by the probability of winding up in state $s'$ having taken $a$ from $s$. Thus, the expectation accounts for the policy probability as well as the transition and reward functions, here expressed together as $p(s', r|s,a)$.
Deriving Bellman's Equation in Reinforcement Learning $\mathbb{E}_\pi(\cdot)$ usually denotes the expectation assuming the agent follows policy $\pi$. In this case $\pi(a|s)$ seems non-deterministic, i.e. returns the probability that the agent takes acti
4,975
Deriving Bellman's Equation in Reinforcement Learning
Here is an approach that uses the results of exercises in the book (assuming you are using the 2nd edition of the book). In exercise 3.12 you should have derived the equation $$v_\pi(s) = \sum_a \pi(a \mid s) q_\pi(s,a)$$ and in exercise 3.13 you should have derived the equation $$q_\pi(s,a) = \sum_{s',r} p(s',r\mid s,a)(r + \gamma v_\pi(s'))$$ Using these two equations, we can write $$\begin{align}v_\pi(s) &= \sum_a \pi(a \mid s) q_\pi(s,a) \\ &= \sum_a \pi(a \mid s) \sum_{s',r} p(s',r\mid s,a)(r + \gamma v_\pi(s'))\end{align}$$ which is the Bellman equation. Of course, this pushes most of the work into exercise 3.13 (but assuming you are reading/doing the exercises linearly, this shouldn't be a problem). Actually, it's a little strange that Sutton and Barto decided to go for the straight derivation (I guess they didn't want to give away the answers to the exercises).
Deriving Bellman's Equation in Reinforcement Learning
Here is an approach that uses the results of exercises in the book (assuming you are using the 2nd edition of the book). In exercise 3.12 you should have derived the equation $$v_\pi(s) = \sum_a \pi(a
Deriving Bellman's Equation in Reinforcement Learning Here is an approach that uses the results of exercises in the book (assuming you are using the 2nd edition of the book). In exercise 3.12 you should have derived the equation $$v_\pi(s) = \sum_a \pi(a \mid s) q_\pi(s,a)$$ and in exercise 3.13 you should have derived the equation $$q_\pi(s,a) = \sum_{s',r} p(s',r\mid s,a)(r + \gamma v_\pi(s'))$$ Using these two equations, we can write $$\begin{align}v_\pi(s) &= \sum_a \pi(a \mid s) q_\pi(s,a) \\ &= \sum_a \pi(a \mid s) \sum_{s',r} p(s',r\mid s,a)(r + \gamma v_\pi(s'))\end{align}$$ which is the Bellman equation. Of course, this pushes most of the work into exercise 3.13 (but assuming you are reading/doing the exercises linearly, this shouldn't be a problem). Actually, it's a little strange that Sutton and Barto decided to go for the straight derivation (I guess they didn't want to give away the answers to the exercises).
Deriving Bellman's Equation in Reinforcement Learning Here is an approach that uses the results of exercises in the book (assuming you are using the 2nd edition of the book). In exercise 3.12 you should have derived the equation $$v_\pi(s) = \sum_a \pi(a
4,976
Deriving Bellman's Equation in Reinforcement Learning
I wasn't satisfied with any of the above solutions, so I'll give it a try. I find the solution proposed by riceissa the most elegant one, but he only proved the last step. I want to add the missing pieces. So let's go ... Proof of $v_\pi(s) = \sum_a\pi(a|s)q_\pi(s,a)$: \begin{eqnarray*} v_\pi(s) &=& \mathbb{E}_\pi[G_t|S_t=s]\\ &=&\sum_g g p(g|s)\\ &=&\sum_g g \sum_a p(g,a|s)\\ &=&\sum_g \sum_a g p(g|a,s)p(a|s)\\ &=&\sum_a p(a|s) \sum_g g p(g|a,s)\\ &=&\sum_a \pi(a|s) \mathbb{E}_\pi[G_t|S_t=s,A_t=a]\\ &=&\sum_a\pi(a|s)q_\pi(s,a) \end{eqnarray*} Proof of $q_\pi(s,a) = \sum_{s',r}p(s',r|s,a)[r+\gamma v_\pi(s')]$: \begin{eqnarray*} q_\pi(s,a) &=& \mathbb{E}_\pi[G_t|S_t=s,A_t=a]\\ &=&\mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1}|S_t=s,A_t=a]\\ &=&\mathbb{E}_\pi[R_{t+1}|S_t=s,A_t=a] + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_r rp(r|s,a) + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_r r\sum_{s'}p(s',r|s,a) + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_{s',r}rp(s',r|s,a) + \gamma\mathbb{E}_\pi[\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1},S_{t+1}]|S_t=s,A_t=a] \quad (*)\\ &=&\sum_{s',r} rp(s',r|s,a) + \gamma\sum_{s',r}\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1}=r,S_{t+1}=s']p(s',r|s,a)\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1}=r,S_{t+1}=s']]\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma\mathbb{E}_\pi[G_{t+1}|S_{t+1}=s']] \quad (**)\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma v_\pi(s')]\\ \end{eqnarray*} (*) Law of total expectation, in the variant $E[X|W] = E[E[X|W,Z]|W]$ (**) $S_{t+1} = s'$ contains all the relevant information, so all other conditioning variables can be dropped (Markov property).
Deriving Bellman's Equation in Reinforcement Learning
I wasn't satisfied with any of the above solutions, so I'll give it a try. I find the solution proposed by riceissa the most elegant one, but he only proved the last step. I want to add the missing pi
Deriving Bellman's Equation in Reinforcement Learning I wasn't satisfied with any of the above solutions, so I'll give it a try. I find the solution proposed by riceissa the most elegant one, but he only proved the last step. I want to add the missing pieces. So let's go ... Proof of $v_\pi(s) = \sum_a\pi(a|s)q_\pi(s,a)$: \begin{eqnarray*} v_\pi(s) &=& \mathbb{E}_\pi[G_t|S_t=s]\\ &=&\sum_g g p(g|s)\\ &=&\sum_g g \sum_a p(g,a|s)\\ &=&\sum_g \sum_a g p(g|a,s)p(a|s)\\ &=&\sum_a p(a|s) \sum_g g p(g|a,s)\\ &=&\sum_a \pi(a|s) \mathbb{E}_\pi[G_t|S_t=s,A_t=a]\\ &=&\sum_a\pi(a|s)q_\pi(s,a) \end{eqnarray*} Proof of $q_\pi(s,a) = \sum_{s',r}p(s',r|s,a)[r+\gamma v_\pi(s')]$: \begin{eqnarray*} q_\pi(s,a) &=& \mathbb{E}_\pi[G_t|S_t=s,A_t=a]\\ &=&\mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1}|S_t=s,A_t=a]\\ &=&\mathbb{E}_\pi[R_{t+1}|S_t=s,A_t=a] + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_r rp(r|s,a) + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_r r\sum_{s'}p(s',r|s,a) + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a]\\ &=&\sum_{s',r}rp(s',r|s,a) + \gamma\mathbb{E}[\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1},S_{t+1}]] \quad (*)\\ &=&\sum_{s',r} rp(s',r|s,a) + \gamma\sum_{s',r}\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1}=r,S_{t+1}=s']p(s',r|s,a)\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma\mathbb{E}_\pi[G_{t+1}|S_t=s,A_t=a,R_{t+1}=r,S_{t+1}=s']\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma\mathbb{E}_\pi[G_{t+1}|S_{t+1}=s'] \quad (**)\\ &=&\sum_{s',r} p(s',r|s,a)[r + \gamma v_\pi(s')]\\ \end{eqnarray*} (*) Law of total expectation (**) $S_{t+1} = s'$ holds all information, so all other variables can be dropped (Markov property).
Deriving Bellman's Equation in Reinforcement Learning I wasn't satisfied with any of the above solutions, so I'll give it a try. I find the solution proposed by riceissa the most elegant one, but he only proved the last step. I want to add the missing pi
4,977
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Here is a paper that explains the issue. I'm quoting some part of it to make the issue clear. The rectifier activation function allows a network to easily obtain sparse representations. For example, after uniform initialization of the weights, around 50% of hidden units continuous output values are real zeros, and this fraction can easily increase with sparsity-inducing regularization. So rectifier activation function introduces sparsity effect on the network. Here are some advantages of sparsity from the same paper; Information disentangling. One of the claimed objectives of deep learning algorithms (Bengio,2009) is to disentangle the factors explaining the variations in the data. A dense representation is highly entangled because almost any change in the input modifies most of the entries in the representation vector. Instead, if a representation is both sparse and robust to small input changes, the set of non-zero features is almost always roughly conserved by small changes of the input. Efficient variable-size representation. Different inputs may contain different amounts of information and would be more conveniently represented using a variable-size data-structure, which is common in computer representations of information. Varying the number of active neurons allows a model to control the effective dimensionality of the representation for a given input and the required precision. Linear separability. Sparse representations are also more likely to be linearly separable, or more easily separable with less non-linear machinery, simply because the information is represented in a high-dimensional space. Besides, this can reflect the original data format. In text-related applications for instance, the original raw data is already very sparse. Distributed but sparse. Dense distributed representations are the richest representations, being potentially exponentially more efficient than purely local ones (Bengio, 2009). Sparse representations’ efficiency is still exponentially greater, with the power of the exponent being the number of non-zero features. They may represent a good trade-off with respect to the above criteria. It also answers the question you've asked: One may hypothesize that the hard saturation at 0 may hurt optimization by blocking gradient back-propagation. To evaluate the potential impact of this effect we also investigate the softplus activation: $ \text{softplus}(x) = \log(1 + e^x) $ (Dugas et al., 2001), a smooth version of the rectifying non-linearity. We lose the exact sparsity, but may hope to gain easier training. However, experimental results tend to contradict that hypothesis, suggesting that hard zeros can actually help supervised training. We hypothesize that the hard non-linearities do not hurt so long as the gradient can propagate along some paths, i.e., that some of the hidden units in each layer are non-zero With the credit and blame assigned to these ON units rather than distributed more evenly, we hypothesize that optimization is easier. You can read the paper Deep Sparse Rectifier Neural Networks for more detail.
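A rough way to see the "around 50% of outputs are exact zeros after initialization" claim for yourself; the layer sizes and the uniform init range below are arbitrary choices of mine, not taken from the paper.

    import numpy as np
    rng = np.random.default_rng(0)

    # A randomly initialized dense layer followed by ReLU zeroes out roughly half of its
    # outputs for zero-centered inputs, which is the sparsity effect described above.
    x = rng.standard_normal((1000, 256))              # a batch of hypothetical inputs
    W = rng.uniform(-0.05, 0.05, size=(256, 512))     # uniform weight initialization
    b = np.zeros(512)
    h = np.maximum(0.0, x @ W + b)                    # ReLU activations
    print("fraction of exact zeros:", np.mean(h == 0.0))   # typically close to 0.5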
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Here is a paper that explains the issue. I'm quoting some part of it to make the issue clear. The rectifier activation function allows a network to easily obtain sparse representations. For example,
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Here is a paper that explains the issue. I'm quoting some part of it to make the issue clear. The rectifier activation function allows a network to easily obtain sparse representations. For example, after uniform initialization of the weights, around 50% of hidden units continuous output values are real zeros, and this fraction can easily increase with sparsity-inducing regularization. So rectifier activation function introduces sparsity effect on the network. Here are some advantages of sparsity from the same paper; Information disentangling. One of the claimed objectives of deep learning algorithms (Bengio,2009) is to disentangle the factors explaining the variations in the data. A dense representation is highly entangled because almost any change in the input modifies most of the entries in the representation vector. Instead, if a representation is both sparse and robust to small input changes, the set of non-zero features is almost always roughly conserved by small changes of the input. Efficient variable-size representation. Different inputs may contain different amounts of information and would be more conveniently represented using a variable-size data-structure, which is common in computer representations of information. Varying the number of active neurons allows a model to control the effective dimensionality of the representation for a given input and the required precision. Linear separability. Sparse representations are also more likely to be linearly separable, or more easily separable with less non-linear machinery, simply because the information is represented in a high-dimensional space. Besides, this can reflect the original data format. In text-related applications for instance, the original raw data is already very sparse. Distributed but sparse. Dense distributed representations are the richest representations, being potentially exponentially more efficient than purely local ones (Bengio, 2009). Sparse representations’ efficiency is still exponentially greater, with the power of the exponent being the number of non-zero features. They may represent a good trade-off with respect to the above criteria. It also answers the question you've asked: One may hypothesize that the hard saturation at 0 may hurt optimization by blocking gradient back-propagation. To evaluate the potential impact of this effect we also investigate the softplus activation: $ \text{softplus}(x) = \log(1 + e^x) $ (Dugas et al., 2001), a smooth version of the rectifying non-linearity. We lose the exact sparsity, but may hope to gain easier training. However, experimental results tend to contradict that hypothesis, suggesting that hard zeros can actually help supervised training. We hypothesize that the hard non-linearities do not hurt so long as the gradient can propagate along some paths, i.e., that some of the hidden units in each layer are non-zero With the credit and blame assigned to these ON units rather than distributed more evenly, we hypothesize that optimization is easier. You can read the paper Deep Sparse Rectifier Neural Networks for more detail.
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Here is a paper that explains the issue. I'm quoting some part of it to make the issue clear. The rectifier activation function allows a network to easily obtain sparse representations. For example,
4,978
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Here is a heuristic explanation: Each gradient update in backprop consists of a number of multiplied factors. The further you get towards the start of the network, the more of these factors are multiplied together to get the gradient update. Many of these factors are derivatives of the activation function of the neurons - the rest are weights, biases etc. Of these factors, the ones that intuitively matter are the weights, biases, etc. The activation function derivatives are more of a kind of tuning parameter, designed to get the gradient descent going in the right direction at the right kind of velocity. If you multiply a bunch of terms which are less than 1, they will tend towards zero the more terms you have. Hence vanishing gradient as you get further from the output layer if you have activation functions which have a slope < 1. If you multiply a bunch of terms which are greater than 1, they will tend towards infinity the more you have, hence exploding gradient as you get further from the output layer if you have activation functions which have a slope > 1. How about if we could, somehow, magically, get these terms contributed by the derivative of the activation functions to be 1. This intuitively means that all the contributions to the gradient updates come from the input to the problem and the model - the weights, inputs, biases - rather than some artefact of the activation function chosen. RELU has gradient 1 when output > 0, and zero otherwise. Hence multiplying a bunch of RELU derivatives together in the backprop equations has the nice property of being either 1 or zero - the update is either nothing, or takes contributions entirely from the other weights and biases. You might think that it would be better to have a linear function, rather than flattening when x < 0. The idea here is that RELU generates sparse networks with a relatively small number of useful links, which has more biological plausibility, so the loss of a bunch of weights is actually helpful. Also, simulation of interesting functions with neural nets is only possible with some nonlinearity in the activation function. A linear activation function results in a linear output, which is not very interesting at all.
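A tiny numerical sketch of the "product of factors" argument (the pre-activation value 0.5 is just an arbitrary point on an active path): the sigmoid derivative is at most 0.25, so a chain of such factors shrinks geometrically with depth, while the ReLU derivative on an active path is exactly 1.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = 0.5                                      # a hypothetical pre-activation on the path
    sig_factor = sigmoid(z) * (1 - sigmoid(z))   # sigmoid derivative, always <= 0.25
    relu_factor = 1.0 if z > 0 else 0.0          # ReLU derivative: exactly 1 or 0

    for depth in (5, 20, 50):
        print(depth, sig_factor ** depth, relu_factor ** depth)
    # The sigmoid chain shrinks geometrically with depth; the ReLU chain stays at 1 (or is cut to 0).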
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Here is a heuristic explanation: Each gradient update in backprop consists of a number of multiplied factors. The further you get towards the start of the network, the more of these factors are mult
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Here is a heuristic explanation: Each gradient update in backprop consists of a number of multiplied factors. The further you get towards the start of the network, the more of these factors are multiplied together to get the gradient update. Many of these factors are derivatives of the activation function of the neurons - the rest are weights, biases etc. Of these factors, the ones that intuitively matter are the weights, biases, etc. The activation function derivatives are more of a kind of tuning parameter, designed to get the gradient descent going in the right direction at the right kind of velocity. If you multiply a bunch of terms which are less than 1, they will tend towards zero the more terms you have. Hence vanishing gradient as you get further from the output layer if you have activation functions which have a slope < 1. If you multiply a bunch of terms which are greater than 1, they will tend towards infinity the more you have, hence exploding gradient as you get further from the output layer if you have activation functions which have a slope > 1. How about if we could, somehow, magically, get these terms contributed by the derivative of the activation functions to be 1. This intuitively means that all the contributions to the gradient updates come from the input to the problem and the model - the weights, inputs, biases - rather than some artefact of the activation function chosen. RELU has gradient 1 when output > 0, and zero otherwise. Hence multiplying a bunch of RELU derivatives together in the backprop equations has the nice property of being either 1 or zero - the update is either nothing, or takes contributions entirely from the other weights and biases. You might think that it would be better to have a linear function, rather than flattening when x < 0. The idea here is that RELU generates sparse networks with a relatively small number of useful links, which has more biological plausibility, so the loss of a bunch of weights is actually helpful. Also, simulation of interesting functions with neural nets is only possible with some nonlinearity in the activation function. A linear activation function results in a linear output, which is not very interesting at all.
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Here is a heuristic explanation: Each gradient update in backprop consists of a number of multiplied factors. The further you get towards the start of the network, the more of these factors are mult
4,979
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
This is why it's probably a better idea to use PReLU, ELU, or other leaky ReLU-like activations which don't just die off to 0, but which fall to something like 0.1*x when x gets negative to keep learning. It seemed to me for a long time that ReLUs are history like sigmoid, though for some reason people still publish papers with these. Why? I don't know. Dmytro Mishkin and other guys actually tested a network with plenty of different activation types, you should look into their findings on performance of different activation functions and other stuff. Some functions, like XOR, though, are better learnt with plain ReLU. Don't think about any neural stuff in dogma terms, because neural nets are very much work in progress. Nobody in the world actually knows and understands them well enough to tell the divine truth. Nobody. Try things out, make your own discoveries. Mind that using ReLU itself is a very recent development and for decades all the different PhD guys in the field have used over-complicated activation functions that we now can only laugh about. Too often "knowing" too much can get you bad results. It's important to understand that neural networks aren't an exact science. Nothing in maths says that neural networks will actually work as good as they do. It's heuristic. And so it's very malleable. FYI even absolute-value activation gets good results on some problems, for example XOR-like problems. Different activation functions are better suited to different purposes. I tried Cifar-10 with abs() and it seemed to perform worse. Though, I can't say that "it is a worse activation function for visual recognition", because I'm not sure, for example, if my pre-initialization was optimal for it, etc. The very fact that it was learning relatively well amazed me. Also, in real life, "derivatives" that you pass to the backprop don't necessarily have to match the actual mathematical derivatives. I'd even go as far as to say we should ban calling them "derivatives" and start calling them something else, for example, error activation functions to not close our minds to possibilities of tinkering with them. You can actually, for example, use ReLU activation, but provide a 0.1, or something like that instead of 0 as a derivative for x<0. In a way, you then have a plain ReLU, but with neurons not being able to "die out of adaptability". I call this NecroRelu, because it's a ReLU that can't die. And in some cases (definitely not in most, though) that works better than plain LeakyReLU, which actually has 0.1 derivative at x<0 and better than usual ReLU. I don't think too many others have investigated such a function, though, this, or something similar might actually be a generally cool activation function that nobody considered just because they're too concentrated on maths. As for what's generally used, for tanH(x) activation function it's a usual thing to pass 1 - x² instead of 1 - tanH(x)² as a derivative in order to calculate things faster. Also, mind that ReLU isn't all that "obviously better" than, for example, TanH. TanH can probably be better in some cases. Just, so it seems, not in visual recognition. Though, ELU, for example, has a bit of sigmoid softness to it and it's one of the best known activation functions for visual recognition at the moment. I haven't really tried, but I bet one can set several groups with different activation functions on the same layer level to an advantage. Because, different logic is better described with different activation functions. 
And sometimes you probably need several types of evaluation. Note that it's important to have an initialization that corresponds to the type of your activation function. Leaky ReLUs need a different init than plain ReLUs, for example. EDIT: Actually, standard ReLU seems less prone to overfitting vs leaky ones with modern architectures, at least in image recognition. It seems that if you are going for a very-high-accuracy net with a huge load of parameters, it might be better to stick with plain ReLU vs leaky options. But, of course, test all of this yourself. Maybe some leaky stuff will work better if more regularization is given.
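For what it's worth, here is a minimal numpy sketch of the "custom derivative" idea described above; the function names are mine and the 0.1 slope is just the example value from the answer, not a standard library API. The forward pass behaves like plain ReLU, but the quantity handed to backprop is 0.1 instead of 0 for negative inputs, so units keep receiving some gradient.

    import numpy as np

    def forward(z):
        # plain ReLU forward pass
        return np.maximum(0.0, z)

    def surrogate_grad(z, slope=0.1):
        # "derivative" passed to backprop: 1 on the positive side, a small slope on the negative side
        return np.where(z > 0, 1.0, slope)

    z = np.array([-2.0, -0.3, 0.0, 0.7, 3.1])
    print(forward(z))
    print(surrogate_grad(z))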
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
This is why it's probably a better idea to use PReLU, ELU, or other leaky ReLU-like activations which don't just die off to 0, but which fall to something like 0.1*x when x gets negative to keep learn
How does rectilinear activation function solve the vanishing gradient problem in neural networks? This is why it's probably a better idea to use PReLU, ELU, or other leaky ReLU-like activations which don't just die off to 0, but which fall to something like 0.1*x when x gets negative to keep learning. It seemed to me for a long time that ReLUs are history like sigmoid, though for some reason people still publish papers with these. Why? I don't know. Dmytro Mishkin and other guys actually tested a network with plenty of different activation types, you should look into their findings on performance of different activation functions and other stuff. Some functions, like XOR, though, are better learnt with plain ReLU. Don't think about any neural stuff in dogma terms, because neural nets are very much work in progress. Nobody in the world actually knows and understands them well enough to tell the divine truth. Nobody. Try things out, make your own discoveries. Mind that using ReLU itself is a very recent development and for decades all the different PhD guys in the field have used over-complicated activation functions that we now can only laugh about. Too often "knowing" too much can get you bad results. It's important to understand that neural networks aren't an exact science. Nothing in maths says that neural networks will actually work as good as they do. It's heuristic. And so it's very malleable. FYI even absolute-value activation gets good results on some problems, for example XOR-like problems. Different activation functions are better suited to different purposes. I tried Cifar-10 with abs() and it seemed to perform worse. Though, I can't say that "it is a worse activation function for visual recognition", because I'm not sure, for example, if my pre-initialization was optimal for it, etc. The very fact that it was learning relatively well amazed me. Also, in real life, "derivatives" that you pass to the backprop don't necessarily have to match the actual mathematical derivatives. I'd even go as far as to say we should ban calling them "derivatives" and start calling them something else, for example, error activation functions to not close our minds to possibilities of tinkering with them. You can actually, for example, use ReLU activation, but provide a 0.1, or something like that instead of 0 as a derivative for x<0. In a way, you then have a plain ReLU, but with neurons not being able to "die out of adaptability". I call this NecroRelu, because it's a ReLU that can't die. And in some cases (definitely not in most, though) that works better than plain LeakyReLU, which actually has 0.1 derivative at x<0 and better than usual ReLU. I don't think too many others have investigated such a function, though, this, or something similar might actually be a generally cool activation function that nobody considered just because they're too concentrated on maths. As for what's generally used, for tanH(x) activation function it's a usual thing to pass 1 - x² instead of 1 - tanH(x)² as a derivative in order to calculate things faster. Also, mind that ReLU isn't all that "obviously better" than, for example, TanH. TanH can probably be better in some cases. Just, so it seems, not in visual recognition. Though, ELU, for example, has a bit of sigmoid softness to it and it's one of the best known activation functions for visual recognition at the moment. I haven't really tried, but I bet one can set several groups with different activation functions on the same layer level to an advantage. 
Because, different logic is better described with different activation functions. And sometimes you probably need several types of evaluation. Note that it's important to have an intialization that corresponds to the type of your activation function. Leaky ReLUs need other init that plain ReLUs, for example. EDIT: Actually, standard ReLU seems less prone to overfitting vs leaky ones with modern architectures. At least in image recognition. It seems that if you are going for very high accuracy net with a huge load of parameters, it might be better to stick with plain ReLU vs leaky options. But, of course, test all of this by yourself. Maybe, some leaky stuff will work better if more regularization is given.
How does rectilinear activation function solve the vanishing gradient problem in neural networks? This is why it's probably a better idea to use PReLU, ELU, or other leaky ReLU-like activations which don't just die off to 0, but which fall to something like 0.1*x when x gets negative to keep learn
4,980
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Let's consider the main recurrence relation that defines the back-propagation of the error signal. Let $W_i$ and $b_i$ be the weight matrix and bias vector of layer $i$, and $f$ be the activation function. The activation vector $h_i$ of layer $i$ is calculated as follows: $s_i = W_i h_{i-1} + b_i$, $h_i = f(s_i)$. The error signal $\delta$ for layer $i$ is defined by: $\delta_{i} = \left(W_{i+1}^\top \delta_{i+1}\right) \odot f'(s_i)$, where $\odot$ is elementwise multiplication of two vectors. This recurrence relation is calculated for each layer in the network, and expresses the way the error signal is transferred from the output layer backwards. Now, if we take for example $f$ to be the tanh function, we have $f'(s_i)=(1-h_i^2)$. Unless $h_i$ is exactly 1 or -1, this expression is a fraction between 0 and 1. Hence, at each layer the error signal is multiplied by a fraction, and becomes smaller and smaller: a vanishing gradient. However, if we take $f=\mathrm{ReLU}=\max(0,x)$, we have $f'$ equal to 1 for every neuron that has fired something, i.e. a neuron whose activation is nonzero (in numpy, this would be $f' = \text{numpy.where}(h_i>0, 1, 0)$). In this case, the error signal is propagated fully to the next layer (it's multiplied by 1). Hence, even for a network with multiple layers, we don't encounter a vanishing gradient. This equation also demonstrates the other problem characteristic of ReLU activation - dead neurons: if a given neuron happened to be initialized in a way that it doesn't fire for any input (its activation is zero), its gradient would also be zero, and hence it would never be activated.
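To make the recurrence concrete, here is one backward step written out in numpy for a hypothetical 5-unit layer (all values are random placeholders): the same upstream signal $W_{i+1}^\top \delta_{i+1}$ is damped elementwise by a tanh derivative, or multiplied by the 0/1 ReLU mask mentioned above.

    import numpy as np
    rng = np.random.default_rng(0)

    n = 5
    W_next     = rng.standard_normal((n, n)) * 0.5   # W_{i+1}
    delta_next = rng.standard_normal(n)              # delta_{i+1}, error signal of layer i+1
    s_i        = rng.standard_normal(n)              # pre-activations of layer i

    # tanh derivative: every entry is multiplied by a fraction in (0, 1]
    delta_tanh = (W_next.T @ delta_next) * (1.0 - np.tanh(s_i) ** 2)
    # ReLU derivative: the 0/1 mask from the answer, numpy.where(s_i > 0, 1, 0)
    delta_relu = (W_next.T @ delta_next) * np.where(s_i > 0, 1.0, 0.0)
    print(delta_tanh)
    print(delta_relu)   # entries are either passed through unchanged or zeroed out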
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Let's consider the main recurrence relation that defines the back propagation of the error signal. let $ {W_i}$ and ${b_i}$ be the weight matrix and bias vector of layer $\text{i}$, and ${f}$ be the
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Let's consider the main recurrence relation that defines the back propagation of the error signal. let $ {W_i}$ and ${b_i}$ be the weight matrix and bias vector of layer $\text{i}$, and ${f}$ be the activation function. The activation vector ${h_i}$ of layer ${i}$ is calculated as follows: ${s_i} = {W_i}({h_{i-1}}) + {b_i} $ ${h_i} = {f(s_i)}$ The error singal $\delta$ for layer ${i}$ is defined by: ${\delta_{i}} = {W_{i+1}({\delta_{i+1}}}\odot{f^{'}({s_i})})$ Where $\odot$ is elementwise multiplication of two vectors. This recurrence relation is calculated for each layer in the network, and expresses the way the error signal is transferred from the output layer backwards. Now, if we take for example ${f}$ to be the tanh function, we have ${f^{'}({s_i})}=(1-h_i^2)$. Unless $h_i$ is exactly 1 or -1, this expression is a fraction between 0 to 1. Hence, each layer, the error signal is multiplied by a fraction, and becomes smaller and smaller: a vanishing gradient. However, if we take ${f}=Relu=max(0,x)$, we have ${f^{'}}$ that is 1 for every neuron that has fired something, i.e. a neuron whose activation is nonzero (in numpy, this would be ${f^{'}} = \text{numpy.where}(h_i>0, 1, 0)$). In this case, the error signal is propagated fully to the next layer (it's multiplied by 1). Hence, even for a network with multiple layers, we don't encounter a vanishing gradient. This equation also demonstrates the other problem characteristic to relu activation - dead neurons: if a given neuron happened to be initialized in a way that it doesn't fire for any input (its activation is zero), its gradient would also be zero, and hence it would never be activated.
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Let's consider the main recurrence relation that defines the back propagation of the error signal. let $ {W_i}$ and ${b_i}$ be the weight matrix and bias vector of layer $\text{i}$, and ${f}$ be the
4,981
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Essentially, ReLUs only lead to vanishing gradients for inputs smaller than zero, while for other inputs they allow the gradient to pass through (look at the plot of the derivative of ReLU). This is unlike Sigmoid and Tanh, which can lead to gradient saturation for small and large values. From the original ReLU paper: Because of this linearity, gradients flow well on the active paths of neurons (there is no gradient vanishing effect due to activation non-linearities of sigmoid or tanh units), and mathematical investigation is easier. Why not just have flowing gradients on both ends? Sparsity is beneficial: One may hypothesize that the hard saturation at 0 may hurt optimization by blocking gradient back-propagation. To evaluate the potential impact of this effect we also investigate the softplus activation: $\text{softplus}(x)=\log(1+e^x)$ (Dugas et al., 2001), a smooth version of the rectifying non-linearity. We lose the exact sparsity, but may hope to gain easier training. However, experimental results tend to contradict that hypothesis, suggesting that hard zeros can actually help supervised training. We hypothesize that the hard non-linearities do not hurt so long as the gradient can propagate along some paths, i.e., that some of the hidden units in each layer are non-zero. With the credit and blame assigned to these ON units rather than distributed more evenly, we hypothesize that optimization is easier. Paper
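A quick numeric look at the saturation argument (the sample inputs below are arbitrary): sigmoid and tanh derivatives collapse towards 0 for large |x|, while the ReLU derivative stays exactly 1 on the positive side.

    import numpy as np

    x = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])
    sig = 1.0 / (1.0 + np.exp(-x))
    print("sigmoid':", sig * (1 - sig))        # at most 0.25, and ~0 for large |x| (saturation)
    print("tanh'  :", 1 - np.tanh(x) ** 2)     # at most 1, and ~0 for large |x| (saturation)
    print("ReLU'  :", (x > 0).astype(float))   # exactly 1 for every positive input, no saturation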
How does rectilinear activation function solve the vanishing gradient problem in neural networks?
Essentially, ReLUs only lead to vanishing gradients for inputs smaller than zero, while for other inputs they allow the gradient to pass through (look at plot of derivative of ReLU). This is unlike Si
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Essentially, ReLUs only lead to vanishing gradients for inputs smaller than zero, while for other inputs they allow the gradient to pass through (look at plot of derivative of ReLU). This is unlike Sigmoid and Tanh, with can lead to gradient saturation for small and large values. From Original ReLU paper: Because of this linearity, gradients flow well on the active paths of neurons (there is no gradient vanishing effect due to activation non-linearities of sigmoid or tanh units), and mathematical investigation is easier. Why not just have flowing gradients on both ends? Sparsity is beneficial: One may hypothesize that the hard saturation at 0 may hurt optimization by blocking gradient back-propagation. To evaluate the potential impact of this effect we also investigate the softplus activation: softplus(x)=log(1+ex) (Dugas et al., 2001), a smooth version of the rectifying non-linearity. We lose the exact sparsity, but may hope to gain easier training. However, experimental results tend to contradict that hypothesis, suggesting that hard zeros can actually help supervised training. We hypothesize that the hard non-linearities do not hurt so long as the gradient can propagate along some paths, i.e., that some of the hidden units in each layer are non-zero With the credit and blame assigned to these ON units rather than distributed more evenly, we hypothesize that optimization is easier. Paper
How does rectilinear activation function solve the vanishing gradient problem in neural networks? Essentially, ReLUs only lead to vanishing gradients for inputs smaller than zero, while for other inputs they allow the gradient to pass through (look at plot of derivative of ReLU). This is unlike Si
4,982
Hierarchical clustering with mixed type data - what distance/similarity to use?
One way is to use the Gower similarity coefficient, which is a composite measure$^1$; it takes quantitative (such as rating scale), binary (such as present/absent) and nominal (such as worker/teacher/clerk) variables. Later Podani$^2$ added an option to take ordinal variables as well. The coefficient is easily understood even without a formula; you compute the similarity value between the individuals by each variable, taking the type of the variable into account, and then average across all the variables. Usually, a program calculating Gower will allow you to weight variables, that is, their contribution, to the composite formula. However, proper weighting of variables of different types is a problem; no clear-cut guidelines exist, which makes Gower and other "composite" proximity indices something to frown at. The facets of Gower similarity ($GS$): When all variables are quantitative (interval) then the coefficient is the range-normalized Manhattan distance converted into similarity. Because of the normalization, variables in different units may be safely used. You should not, however, forget about outliers. (You might also decide to normalize by another measure of spread than range.) Because of this normalization by a statistic, such as the range, which is sensitive to the composition of individuals in the dataset, the Gower similarity between two given individuals may change its value if you remove or add other individuals to the data. When all variables are ordinal, then they are first ranked, and then Manhattan is computed, as above with quantitative variables, but with the special adjustment for ties. When all variables are binary (with an asymmetric significance of categories: "present" vs "absent" attribute) then the coefficient is the Jaccard matching coefficient (this coefficient treats the case where both individuals lack the attribute as neither a match nor a mismatch). When all variables are nominal (also including here dichotomous with symmetric significance: "this" vs "that") then the coefficient is the Dice matching coefficient that you obtain from your nominal variables if you recode them into dummy variables (see this answer for more). (It is easy to extend the list of types. For example, one could add a summand for count variables, using normalized chi-squared distance converted to similarity.) The coefficient ranges between 0 and 1. "Gower distance". Without ordinal variables present (i.e. w/o using Podani's option) $\sqrt{1-GS}$ behaves as a Euclidean distance; it fully supports Euclidean space. But $1-GS$ is only metric (supports triangular inequality), not Euclidean. With ordinal variables present (using Podani's option) $\sqrt{1-GS}$ is only metric, not Euclidean; and $1-GS$ isn't metric at all. See also. With euclidean distances (distances supporting Euclidean space), virtually any classic clustering technique will do. Including K-means (if your K-means program can process distance matrices, of course) and including Ward's, centroid, median methods of Hierarchical clustering. Using K-means or the other methods based on Euclidean distance with a non-Euclidean but still metric distance is perhaps heuristically admissible. With non-metric distances, no such methods may be used. The previous paragraph discusses whether K-means, Ward's, or such clustering is legal with Gower distance mathematically (geometrically). 
From the measurement-scale ("psychometric") point of view one should not compute mean or euclidean-distance deviation from it in any categorical (nominal, binary, as well as ordinal) data; therefore from this stance you just may not process Gower coefficient by K-means, Ward etc. This viewpoint warns that even if a Euclidean space is present it may be granulated, not smooth (see related). If you want all the formulae and additional info on Gower similarity / distance, please read the description of my SPSS macro !KO_gower; it's in the Word document found in collection "Various proximities" on my web-page. $^1$ Gower J. C. A general coefficient of similarity and some of its properties // Biometrics, 1971, 27, 857-872 $^2$ Podani, J. Extending Gower’s general coefficient of similarity to ordinal characters // Taxon, 1999, 48, 331-340
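As a hedged illustration of the per-variable-then-average recipe described above, here is a hand-rolled Python sketch for two made-up individuals with one quantitative, one asymmetric binary and one nominal variable. The variable names, values and the assumed range are invented, and real software (such as daisy() or the SPSS macro mentioned above) handles weighting and missing values more carefully.

    import numpy as np

    # Two hypothetical individuals: (age in years, has_pet as present/absent, job as nominal)
    a = {"age": 30, "has_pet": 1, "job": "teacher"}
    b = {"age": 45, "has_pet": 0, "job": "teacher"}
    age_range = 50.0   # assumed range of "age" over the whole dataset

    scores, weights = [], []

    # quantitative part: range-normalized Manhattan similarity
    scores.append(1.0 - abs(a["age"] - b["age"]) / age_range); weights.append(1.0)

    # asymmetric binary part (Jaccard-style): a joint "absent/absent" pair would be skipped entirely
    if a["has_pet"] == 1 or b["has_pet"] == 1:
        scores.append(1.0 if a["has_pet"] == b["has_pet"] else 0.0); weights.append(1.0)

    # nominal part: simple matching
    scores.append(1.0 if a["job"] == b["job"] else 0.0); weights.append(1.0)

    gower_similarity = np.average(scores, weights=weights)
    gower_distance = np.sqrt(1.0 - gower_similarity)
    print(gower_similarity, gower_distance)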
Hierarchical clustering with mixed type data - what distance/similarity to use?
One way is to use Gower similarity coefficient which is a composite measure$^1$; it takes quantitative (such as rating scale), binary (such as present/absent) and nominal (such as worker/teacher/clerk
Hierarchical clustering with mixed type data - what distance/similarity to use? One way is to use Gower similarity coefficient which is a composite measure$^1$; it takes quantitative (such as rating scale), binary (such as present/absent) and nominal (such as worker/teacher/clerk) variables. Later Podani$^2$ added an option to take ordinal variables as well. The coefficient is easily understood even without a formula; you compute the similarity value between the individuals by each variable, taking the type of the variable into account, and then average across all the variables. Usually, a program calculating Gower will allow you to weight variables, that is, their contribution, to the composite formula. However, proper weighting of variables of different type is a problem, no clear-cut guidelines exist, which makes Gower or other "composite" indices of proximity pull ones face. The facets of Gower similarity ($GS$): When all variables are quantitative (interval) then the coefficient is the range-normalized Manhattan distance converted into similarity. Because of the normalization variables of different units may be safely used. You should not, however, forget about outliers. (You might also decide to normalize by another measure of spread than range.) Because of the said normalization by a statistic, such as range, which is sensitive to the composition of individuals in the dataset Gower similarity between some two individuals may change its value if you remove or add some other individuals in the data. When all variables are ordinal, then they are first ranked, and then Manhattan is computed, as above with quantitative variables, but with the special adjustment for ties. When all variables are binary (with an asymmetric significance of categories: "present" vs "absent" attribute) then the coefficient is the Jaccard matching coefficient (this coefficient treats when both individuals lack the attribute as neither match nor mismatch). When all variables are nominal (also including here dichotomous with symmetric significance: "this" vs "that") then the coefficient is the Dice matching coefficient that you obtain from your nominal variables if recode them into dummy variables (see this answer for more). (It is easy to extend the list of types. For example, one could add a summand for count variables, using normalized chi-squared distance converted to similarity.) The coefficient ranges between 0 and 1. "Gower distance". Without ordinal variables present (i.e. w/o using the Podani's option) $\sqrt{1-GS}$ behaves as Euclidean distance, it fully supports euclidean space. But $1-GS$ is only metric (supports triangular inequality), not Euclidean. With ordinal variables present (using the Podani's option) $\sqrt{1-GS}$ is only metric, not Euclidean; and $1-GS$ isn't metric at all. See also. With euclidean distances (distances supporting Euclidean space), virtually any classic clustering technique will do. Including K-means (if your K-means program can process distance matrices, of course) and including Ward's, centroid, median methods of Hierarchical clustering. Using K-means or other those methods based on Euclidean distance with non-euclidean still metric distance is heuristically admissible, perhaps. With non-metric distances, no such methods may be used. The previous paragraph talks about if K-means or Ward's or such clustering is legal or not with Gower distance mathematically (geometrically). 
From the measurement-scale ("psychometric") point of view one should not compute mean or euclidean-distance deviation from it in any categorical (nominal, binary, as well as ordinal) data; therefore from this stance you just may not process Gower coefficient by K-means, Ward etc. This viewpoint warns that even if a Euclidean space is present it may be granulated, not smooth (see related). If you want all the formulae and additional info on Gower similarity / distance, please read the description of my SPSS macro !KO_gower; it's in the Word document found in collection "Various proximities" on my web-page. $^1$ Gower J. C. A general coefficient of similarity and some of its properties // Biometrics, 1971, 27, 857-872 $^2$ Podani, J. Extending Gower’s general coefficient of similarity to ordinal characters // Taxon, 1999, 48, 331-340
Hierarchical clustering with mixed type data - what distance/similarity to use? One way is to use Gower similarity coefficient which is a composite measure$^1$; it takes quantitative (such as rating scale), binary (such as present/absent) and nominal (such as worker/teacher/clerk
4,983
Hierarchical clustering with mixed type data - what distance/similarity to use?
If you have stumbled upon this question and are wondering what package to download for using Gower metric in R, the cluster package has a function named daisy(), which by default uses Gower's metric whenever mixed types of variables are used. Or you can manually set it to use Gower's metric. daisy(x, metric = c("euclidean", "manhattan", "gower"), stand = FALSE, type = list(), weights = rep.int(1, p))
Hierarchical clustering with mixed type data - what distance/similarity to use?
If you have stumbled upon this question and are wondering what package to download for using Gower metric in R, the cluster package has a function named daisy(), which by default uses Gower's metric
Hierarchical clustering with mixed type data - what distance/similarity to use? If you have stumbled upon this question and are wondering what package to download for using Gower metric in R, the cluster package has a function named daisy(), which by default uses Gower's metric whenever mixed types of variables are used. Or you can manually set it to use Gower's metric. daisy(x, metric = c("euclidean", "manhattan", "gower"), stand = FALSE, type = list(), weights = rep.int(1, p))
Hierarchical clustering with mixed type data - what distance/similarity to use? If you have stumbled upon this question and are wondering what package to download for using Gower metric in R, the cluster package has a function named daisy(), which by default uses Gower's metric
4,984
What is difference between “in-sample” and “out-of-sample” forecasts?
"Sample" here means the data sample that you are using to fit the model. First - you have a sample. Second - you fit a model on the sample. Third - you can use the model for forecasting. If you are forecasting for an observation that was part of the data sample - it is an in-sample forecast. If you are forecasting for an observation that was not part of the data sample - it is an out-of-sample forecast. So the question you have to ask yourself is: was the particular observation used for the model fitting or not? If it was used for the model fitting, then the forecast of the observation is in-sample. Otherwise it is out-of-sample. If you use data from 1990-2013 to fit the model and then you forecast for 2011-2013, it's an in-sample forecast. But if you only use 1990-2010 for fitting the model and then you forecast 2011-2013, then it's an out-of-sample forecast.
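A small Python sketch of the 1990-2013 example (the yearly series and the linear-trend model are made up purely for illustration): the model is fit on 1990-2010 only, so its 2011-2013 predictions are out-of-sample forecasts, while predictions for 1990-2010 are in-sample.

    import numpy as np
    rng = np.random.default_rng(0)

    years = np.arange(1990, 2014)                                 # 1990-2013
    y = 2.0 * (years - 1990) + rng.normal(0, 3, years.size)       # made-up yearly series

    # Fit only on 1990-2010, then forecast 2011-2013
    train = years <= 2010
    coef = np.polyfit(years[train], y[train], deg=1)              # simple linear trend model

    in_sample_fit       = np.polyval(coef, years[train])          # forecasts for years used in fitting
    out_of_sample_fcast = np.polyval(coef, years[~train])         # forecasts for 2011-2013, never seen in fitting
    print(out_of_sample_fcast)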
What is difference between “in-sample” and “out-of-sample” forecasts?
By the "sample" it is meant the data sample that you are using to fit the model. First - you have a sample Second - you fit a model on the sample Third - you can use the model for forecasting If you
What is difference between “in-sample” and “out-of-sample” forecasts? By the "sample" it is meant the data sample that you are using to fit the model. First - you have a sample Second - you fit a model on the sample Third - you can use the model for forecasting If you are forecasting for an observation that was part of the data sample - it is in-sample forecast. If you are forecasting for an observation that was not part of the data sample - it is out-of-sample forecast. So the question you have to ask yourself is: Was the particular observation used for the model fitting or not ? If it was used for the model fitting, then the forecast of the observation is in-sample. Otherwise it is out-of-sample. if you use data 1990-2013 to fit the model and then you forecast for 2011-2013, it's in-sample forecast. but if you only use 1990-2010 for fitting the model and then you forecast 2011-2013, then its out-of-sample forecast.
What is difference between “in-sample” and “out-of-sample” forecasts? By the "sample" it is meant the data sample that you are using to fit the model. First - you have a sample Second - you fit a model on the sample Third - you can use the model for forecasting If you
4,985
What is difference between “in-sample” and “out-of-sample” forecasts?
Suppose in your sample you have a sequence of 10 data points. This data can be divided into two parts - e.g. the first 7 data points for estimating the model parameters and the next 3 data points to test the model performance. Using the fitted model, predictions made for the first 7 data points will be called in-sample forecasts, and those for the last 3 data points will be called out-of-sample forecasts. This is the same as the idea of splitting the data into a training set and a validation set.
What is difference between “in-sample” and “out-of-sample” forecasts?
Suppose in your sample, you have a sequence of 10 data points. This data can be divided into two parts - e.g. first 7 data points for estimating the model parameters and next 3 data points to test the
What is difference between “in-sample” and “out-of-sample” forecasts? Suppose in your sample, you have a sequence of 10 data points. This data can be divided into two parts - e.g. first 7 data points for estimating the model parameters and next 3 data points to test the model performance. Using the fitted model, predictions made for the first 7 data points will be called in-sample forecast and same for last 3 data points will be called out of sample forecast. This is same as the idea of splitting the data into training set and validation set.
What is difference between “in-sample” and “out-of-sample” forecasts? Suppose in your sample, you have a sequence of 10 data points. This data can be divided into two parts - e.g. first 7 data points for estimating the model parameters and next 3 data points to test the
4,986
What is difference between “in-sample” and “out-of-sample” forecasts?
The below diagram will help you understand the IN TIME and OUT OF TIME
What is difference between “in-sample” and “out-of-sample” forecasts?
The below diagram will help you understand the IN TIME and OUT OF TIME
What is difference between “in-sample” and “out-of-sample” forecasts? The below diagram will help you understand the IN TIME and OUT OF TIME
What is difference between “in-sample” and “out-of-sample” forecasts? The below diagram will help you understand the IN TIME and OUT OF TIME
4,987
What is difference between “in-sample” and “out-of-sample” forecasts?
In-sample forecast is the process of formally evaluating the predictive capabilities of the models developed using observed data to see how effective the algorithms are in reproducing data. It is kind of similar to a training set in a machine learning algorithm and the out-of-sample is similar to the test set.
What is difference between “in-sample” and “out-of-sample” forecasts?
In-sample forecast is the process of formally evaluating the predictive capabilities of the models developed using observed data to see how effective the algorithms are in reproducing data. It is kind
What is difference between “in-sample” and “out-of-sample” forecasts? In-sample forecast is the process of formally evaluating the predictive capabilities of the models developed using observed data to see how effective the algorithms are in reproducing data. It is kind of similar to a training set in a machine learning algorithm and the out-of-sample is similar to the test set.
What is difference between “in-sample” and “out-of-sample” forecasts? In-sample forecast is the process of formally evaluating the predictive capabilities of the models developed using observed data to see how effective the algorithms are in reproducing data. It is kind
4,988
What is difference between “in-sample” and “out-of-sample” forecasts?
I consider the in-sample data to be what is used to construct the model, and out-of-sample means examining that model (which was fit on the in-sample data) on data it has not seen. It is just like the training and test split in data analysis.
What is difference between “in-sample” and “out-of-sample” forecasts?
I consider the in-sample data to be what is used to construct the model, and out-of-sample means examining that model (which was fit on the in-sample data) on data it has not seen. It is just like the training and test split in data analysis.
What is difference between “in-sample” and “out-of-sample” forecasts? I consider the in-sample data to be what is used to construct the model, and out-of-sample means examining that model (which was fit on the in-sample data) on data it has not seen. It is just like the training and test split in data analysis.
What is difference between “in-sample” and “out-of-sample” forecasts? I consider the in-sample data to be what is used to construct the model, and out-of-sample means examining that model (which was fit on the in-sample data) on data it has not seen. It is just like the training and test split in data analysis.
4,989
What is a latent space?
Latent space refers to an abstract multi-dimensional space containing feature values that we cannot interpret directly, but which encodes a meaningful internal representation of externally observed events. Just as we, humans, have an understanding of a broad range of topics and the events belonging to those topics, latent space aims to provide a similar understanding to a computer through a quantitative spatial representation/modeling. The motivation to learn a latent space (set of hidden topics/ internal representations) over the observed data (set of events) is that large differences in observed space/events could be due to small variations in latent space (for the same topic). Hence, learning a latent space would help the model make better sense of observed data than from observed data itself, which is a very large space to learn from. Some examples of latent space are: 1) Word Embedding Space - consisting of word vectors where words similar in meaning have vectors that lie close to each other in space (as measured by cosine-similarity or euclidean-distance) and words that are unrelated lie far apart (Tensorflow's Embedding Projector provides a good visualization of word embedding spaces). 2) Image Feature Space - CNNs in the final layers encode higher-level features in the input image that allows it to effectively detect, for example, the presence of a cat in the input image under varying lighting conditions, which is a difficult task in the raw pixel space. 3) Topic Modeling methods such as LDA, PLSA use statistical approaches to obtain a latent set of topics from an observed set of documents and word distribution. (PyLDAvis provides a good visualization of topic models) 4) VAEs & GANs aim to obtain a latent space/distribution that closely approximates the real latent space/distribution of the observed data. In all the above examples, we quantitatively represent the complex observation space with a (relatively simple) multi-dimensional latent space that approximates the real latent space of the observed data. The terms "high dimensional" and "low dimensional" help us define how specific or how general the kinds of features we want our latent space to learn and represent. High dimensional latent space is sensitive to more specific features of the input data and can sometimes lead to overfitting when there isn't sufficient training data. Low dimensional latent space aims to capture the most important features/aspects required to learn and represent the input data (a good example is a low-dimensional bottleneck layer in VAEs). If this answer helped, please don't forget to up-vote it :)
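As a hedged, minimal illustration of "a small latent space explaining a large observed space" (synthetic data, with PCA standing in for a learned encoder/decoder rather than any of the models named above): 50 observed features generated from 2 hidden factors can be compressed to a 2-D latent code and reconstructed almost perfectly.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Synthetic data: 50 observed features that are really driven by 2 hidden factors.
    latent_true = rng.standard_normal((500, 2))              # the "real" latent variables
    mixing = rng.standard_normal((2, 50))
    observed = latent_true @ mixing + 0.05 * rng.standard_normal((500, 50))

    # PCA as a very simple (linear) latent space: encode to 2 dims, then decode back.
    pca = PCA(n_components=2).fit(observed)
    codes = pca.transform(observed)                          # points in the 2-D latent space
    reconstruction = pca.inverse_transform(codes)
    print("reconstruction error:", np.mean((observed - reconstruction) ** 2))   # small: 2 dims suffice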
What is a latent space?
Latent space refers to an abstract multi-dimensional space containing feature values that we cannot interpret directly, but which encodes a meaningful internal representation of externally observed ev
What is a latent space? Latent space refers to an abstract multi-dimensional space containing feature values that we cannot interpret directly, but which encodes a meaningful internal representation of externally observed events. Just as we, humans, have an understanding of a broad range of topics and the events belonging to those topics, latent space aims to provide a similar understanding to a computer through a quantitative spatial representation/modeling. The motivation to learn a latent space (set of hidden topics/ internal representations) over the observed data (set of events) is that large differences in observed space/events could be due to small variations in latent space (for the same topic). Hence, learning a latent space would help the model make better sense of observed data than from observed data itself, which is a very large space to learn from. Some examples of latent space are: 1) Word Embedding Space - consisting of word vectors where words similar in meaning have vectors that lie close to each other in space (as measured by cosine-similarity or euclidean-distance) and words that are unrelated lie far apart (Tensorflow's Embedding Projector provides a good visualization of word embedding spaces). 2) Image Feature Space - CNNs in the final layers encode higher-level features in the input image that allows it to effectively detect, for example, the presence of a cat in the input image under varying lighting conditions, which is a difficult task in the raw pixel space. 3) Topic Modeling methods such as LDA, PLSA use statistical approaches to obtain a latent set of topics from an observed set of documents and word distribution. (PyLDAvis provides a good visualization of topic models) 4) VAEs & GANs aim to obtain a latent space/distribution that closely approximates the real latent space/distribution of the observed data. In all the above examples, we quantitatively represent the complex observation space with a (relatively simple) multi-dimensional latent space that approximates the real latent space of the observed data. The terms "high dimensional" and "low dimensional" help us define how specific or how general the kinds of features we want our latent space to learn and represent. High dimensional latent space is sensitive to more specific features of the input data and can sometimes lead to overfitting when there isn't sufficient training data. Low dimensional latent space aims to capture the most important features/aspects required to learn and represent the input data (a good example is a low-dimensional bottleneck layer in VAEs). If this answer helped, please don't forget to up-vote it :)
What is a latent space? Latent space refers to an abstract multi-dimensional space containing feature values that we cannot interpret directly, but which encodes a meaningful internal representation of externally observed ev
4,990
What is a latent space?
Latent space is a vector space spanned by the latent variables. Latent variables are variables which are not directly observable, but which are $-$ up to the level of noise $-$ sufficient to describe the data. I.e. the observable variables can be derived (computed) from the latent ones. Let me use this image, adapted from GeeksforGeeks, to visualise the idea: Each observable data point has four visible features: the $x, y,$ and $z$-coordinates, and the colour. However, each point is uniquely determined by a single latent variable, $\varphi$ (phi in the python code, with numpy imported as np). phi = np.linspace(0, 1, 100) # the latent variable x = phi * np.sin(25 * phi) # 1st observable: x-coordinate y = np.exp(phi) * np.cos(25 * phi) # 2nd observable: y-coordinate z = np.sqrt(phi) # 3rd observable: z-coordinate c = x + y # 4th observable: colour This is, of course, just a toy example. In practice, you often have many, maybe even millions of observable variables (think of pixel values in images), but they can be sufficiently well computed from a much smaller set of latent variables. In such cases it may be useful to perform some kind of dimensionality reduction. As a real-world example, consider spectra of light-emitting objects, like stars. A spectrum is a long vector of values, light intensities at many different wavelengths. Modern spectrometers measure the intensity at thousands of wavelengths. However, each spectrum can be quite well described by the star's temperature (through the black body radiation law) and the concentration of different elements (for the absorption lines). These are likely to be far fewer than thousands, maybe only a dozen or two. That would be a low dimensional latent space. Note, however, that it's not necessary for the latent space to be smaller than the observable space. It is completely conceivable for many latent variables to influence few observable ones. For example, the value of a particular share on the stock market at a certain point in time is a single value, but it is likely due to many influences which are mostly unknown. In machine learning I've seen people using high dimensional latent space to denote a feature space induced by some non-linear data transformation which increases the dimensionality of the data. The idea (or the hope) is to achieve linear separability (for classification) or linearity (for regression) of the transformed data. For example, support vector machines use the kernel trick to transform the data, but the transformation is only implicit, given by the kernel function. Such data are "latent" in the sense that you (or the algorithm) never know their values; you only know the dot products of pairs of points.
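Since the referenced image may not be visible here, a short matplotlib sketch (plotting choices are mine, not the original author's) that draws the kind of figure the snippet above describes: a 3-D coloured curve whose four observables are all driven by the single latent variable phi.

    import numpy as np
    import matplotlib.pyplot as plt

    phi = np.linspace(0, 1, 100)          # the latent variable
    x = phi * np.sin(25 * phi)
    y = np.exp(phi) * np.cos(25 * phi)
    z = np.sqrt(phi)
    c = x + y

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(x, y, z, c=c)              # four observables, all generated by the single latent phi
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()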
What is a latent space?
Latent space is a vector space spanned by the latent variables. Latent variables are variables which are not directly observable, but which are $-$ up to the level of noise $-$ sufficient to describe
What is a latent space? Latent space is a vector space spanned by the latent variables. Latent variables are variables which are not directly observable, but which are $-$ up to the level of noise $-$ sufficient to describe the data. I.e. the observable variables can be derived (computed) from the latent ones. Let me use this image, adapted from GeeksforGeeks, to visualise the idea: Each observable data point has four visible features: the $x, y,$ and $z$-coordinates, and the colour. However, each point is uniquely determined by a single latent variable, $\varphi$ (phi in the python code).

import numpy as np

phi = np.linspace(0, 1, 100)         # the latent variable
x = phi * np.sin(25 * phi)           # 1st observable: x-coordinate
y = np.exp(phi) * np.cos(25 * phi)   # 2nd observable: y-coordinate
z = np.sqrt(phi)                     # 3rd observable: z-coordinate
c = x + y                            # 4th observable: colour

This is, of course, just a toy example. In practice, you often have many, maybe even millions of observable variables (think of pixel values in images), but they can be sufficiently well computed from a much smaller set of latent variables. In such cases it may be useful to perform some kind of dimensionality reduction. As a real-world example, consider spectra of light-emitting objects, like stars. A spectrum is a long vector of values, light intensities at many different wavelengths. Modern spectrometers measure the intensity at thousands of wavelengths. However, each spectrum can be quite well described by the star's temperature (through the black body radiation law) and the concentration of different elements (for the absorption lines). These are likely to be far fewer than thousands, maybe only a dozen or two. That would be a low dimensional latent space. Note, however, that it's not necessary for the latent space to be smaller than the observable space. It is completely conceivable for many latent variables to influence few observable ones. For example, the value of a particular share at the stock market at a certain point in time is a single value, but it is likely due to many influences which are mostly unknown. In machine learning I've seen people use "high dimensional latent space" to denote a feature space induced by some non-linear data transformation which increases the dimensionality of the data. The idea (or the hope) is to achieve linear separability (for classification) or linearity (for regression) of the transformed data. For example, support vector machines use the kernel trick to transform the data, but the transformation is only implicit, given by the kernel function. Such data are "latent" in the sense that you (or the algorithm) never know their values; you only know the dot products of pairs of points.
What is a latent space? Latent space is a vector space spanned by the latent variables. Latent variables are variables which are not directly observable, but which are $-$ up to the level of noise $-$ sufficient to describe
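To make the dimensionality-reduction remark in the answer above concrete, here is a short sketch of my own that stacks the four observables of the toy example and compresses them with PCA; scikit-learn is assumed to be available. PCA is linear, so it reduces the dimension but cannot exactly recover the single nonlinear latent variable phi; a nonlinear method such as an autoencoder would be needed for that.

import numpy as np
from sklearn.decomposition import PCA

phi = np.linspace(0, 1, 100)                      # the hidden latent variable
x = phi * np.sin(25 * phi)
y = np.exp(phi) * np.cos(25 * phi)
z = np.sqrt(phi)
c = x + y
X = np.column_stack([x, y, z, c])                 # 100 points, 4 observables each

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                          # 2-D compressed representation
print(pca.explained_variance_ratio_)              # variance captured by each component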
4,991
What is a latent space?
This article gives a great understanding of latent space; as a short review: The latent space is simply a representation of compressed data in which similar data points are closer together in space. Latent space is useful for learning data features and for finding simpler representations of data for analysis. We can understand patterns or structural similarities between data points by analyzing data in the latent space, be it through manifolds, clustering, etc. We can interpolate data in the latent space, and use our model’s decoder to ‘generate’ data samples. We can visualize the latent space using algorithms such as t-SNE and LLE, which take our latent space representation and transform it into 2D or 3D. You can also see a great description here and here
What is a latent space?
This article would give you a great understanding about latent space,as a short review : The latent space is simply a representation of compressed data in which similar data points are closer togethe
What is a latent space? This article gives a great understanding of latent space; as a short review: The latent space is simply a representation of compressed data in which similar data points are closer together in space. Latent space is useful for learning data features and for finding simpler representations of data for analysis. We can understand patterns or structural similarities between data points by analyzing data in the latent space, be it through manifolds, clustering, etc. We can interpolate data in the latent space, and use our model’s decoder to ‘generate’ data samples. We can visualize the latent space using algorithms such as t-SNE and LLE, which take our latent space representation and transform it into 2D or 3D. You can also see a great description here and here
What is a latent space? This article would give you a great understanding about latent space,as a short review : The latent space is simply a representation of compressed data in which similar data points are closer togethe
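A minimal sketch of the t-SNE visualization step mentioned above, written by me as an illustration and assuming scikit-learn is available; the latent codes here are random stand-ins for whatever your encoder or compression model would actually produce.

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(500, 32))   # pretend: 500 latent vectors of dimension 32

# project to 2-D for plotting; perplexity controls the effective neighbourhood size
embedding_2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(latent_codes)
print(embedding_2d.shape)                   # (500, 2)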
4,992
Danger of setting all initial weights to zero in Backpropagation [duplicate]
edit see alfa's comment below. I'm not an expert on neural nets, so I'll defer to him. My understanding is different from the other answers that have been posted here. I'm pretty sure that backpropagation involves adding to the existing weights, not multiplying. The amount that you add is specified by the delta rule. Note that wij doesn't appear on the right-hand-side of the equation. My understanding is that there are at least two good reasons not to set the initial weights to zero: First, neural networks tend to get stuck in local minima, so it's a good idea to give them many different starting values. You can't do that if they all start at zero. Second, if the neurons start with the same weights, then all the neurons will follow the same gradient, and will always end up doing the same thing as one another.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
edit see alfa's comment below. I'm not an expert on neural nets, so I'll defer to him. My understanding is different from the other answers that have been posted here. I'm pretty sure that backpropag
Danger of setting all initial weights to zero in Backpropagation [duplicate] edit see alfa's comment below. I'm not an expert on neural nets, so I'll defer to him. My understanding is different from the other answers that have been posted here. I'm pretty sure that backpropagation involves adding to the existing weights, not multiplying. The amount that you add is specified by the delta rule. Note that wij doesn't appear on the right-hand-side of the equation. My understanding is that there are at least two good reasons not to set the initial weights to zero: First, neural networks tend to get stuck in local minima, so it's a good idea to give them many different starting values. You can't do that if they all start at zero. Second, if the neurons start with the same weights, then all the neurons will follow the same gradient, and will always end up doing the same thing as one another.
Danger of setting all initial weights to zero in Backpropagation [duplicate] edit see alfa's comment below. I'm not an expert on neural nets, so I'll defer to him. My understanding is different from the other answers that have been posted here. I'm pretty sure that backpropag
4,993
Danger of setting all initial weights to zero in Backpropagation [duplicate]
It's a bad idea for 2 reasons: If you have sigmoid activation, or anything where $g(0) \neq 0$, then it will cause weights to move "together", limiting the power of back-propagation to search the entire space to find the optimal weights which lower the loss/cost. If you have $\tanh$ or ReLU activation, or anything where $g(0) = 0$, then all the outputs will be 0, and the gradients for the weights will always be 0. Hence you will not have any learning at all. Let's demonstrate this (for simplicity I assume a final output layer of 1 neuron): Forward feed: If all weights are 0's, then the input to the 2nd layer will be the same for all nodes. The outputs of the nodes will be the same, though they will be multiplied by the next set of weights which will be 0, and so the inputs for the next layer will be zero etc., etc. So all the inputs (except the first layer which takes the actual inputs) will be 0, and all the outputs will be the same (0.5 for sigmoid activation and 0 for $\tanh$ and ReLU activation). Back propagation: Let's examine just the final layer. The final loss ($\mathcal{L}$) depends on the final output of the network ($a^L$, where L denotes the final layer), which depends on the final input before activation ($z^L = W^{L} a^{L-1}$), which depends on the weights of the final layer ($W^{L}$). Now we want to find: $$dW^{L}:= \frac{\partial\mathcal{L}}{\partial W^{L}} = \frac{\partial\mathcal{L}}{\partial a^L} \frac{\partial a^L}{\partial z^L} \frac{\partial z^L}{\partial W^{L}}$$ $\frac{\partial\mathcal{L}}{\partial a}$ is the derivative of the cost function, $\frac{\partial a}{\partial z}$ is the derivative of the activation function. Regardless of what their ($\frac{\partial\mathcal{L}}{\partial a} \frac{\partial a}{\partial z}$) value is, $\frac{\partial z}{\partial W}$ simply equals the previous layer's outputs, i.e. $a^{L-1}$, but since they are all the same, you get that the final result $dW^{L}$ is a vector with all elements equal. So, when you update $W^L = W^L - \alpha dW^L$ it will move in the same direction. And the same goes for the previous layers. Point 2 can be shown from the fact that $a^{L-1}$ will be equal to zeros. Hence your $dW^L$ vector will be full of zeros, and no learning can be achieved. Update: I made a video about it on YouTube, you can check it out here.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
It's a bad idea because of 2 reasons: If you have sigmoid activation, or anything where $g(0) \neq 0$ then it will cause weights to move "together", limiting the power of back-propagation to search t
Danger of setting all initial weights to zero in Backpropagation [duplicate] It's a bad idea for 2 reasons: If you have sigmoid activation, or anything where $g(0) \neq 0$, then it will cause weights to move "together", limiting the power of back-propagation to search the entire space to find the optimal weights which lower the loss/cost. If you have $\tanh$ or ReLU activation, or anything where $g(0) = 0$, then all the outputs will be 0, and the gradients for the weights will always be 0. Hence you will not have any learning at all. Let's demonstrate this (for simplicity I assume a final output layer of 1 neuron): Forward feed: If all weights are 0's, then the input to the 2nd layer will be the same for all nodes. The outputs of the nodes will be the same, though they will be multiplied by the next set of weights which will be 0, and so the inputs for the next layer will be zero etc., etc. So all the inputs (except the first layer which takes the actual inputs) will be 0, and all the outputs will be the same (0.5 for sigmoid activation and 0 for $\tanh$ and ReLU activation). Back propagation: Let's examine just the final layer. The final loss ($\mathcal{L}$) depends on the final output of the network ($a^L$, where L denotes the final layer), which depends on the final input before activation ($z^L = W^{L} a^{L-1}$), which depends on the weights of the final layer ($W^{L}$). Now we want to find: $$dW^{L}:= \frac{\partial\mathcal{L}}{\partial W^{L}} = \frac{\partial\mathcal{L}}{\partial a^L} \frac{\partial a^L}{\partial z^L} \frac{\partial z^L}{\partial W^{L}}$$ $\frac{\partial\mathcal{L}}{\partial a}$ is the derivative of the cost function, $\frac{\partial a}{\partial z}$ is the derivative of the activation function. Regardless of what their ($\frac{\partial\mathcal{L}}{\partial a} \frac{\partial a}{\partial z}$) value is, $\frac{\partial z}{\partial W}$ simply equals the previous layer's outputs, i.e. $a^{L-1}$, but since they are all the same, you get that the final result $dW^{L}$ is a vector with all elements equal. So, when you update $W^L = W^L - \alpha dW^L$ it will move in the same direction. And the same goes for the previous layers. Point 2 can be shown from the fact that $a^{L-1}$ will be equal to zeros. Hence your $dW^L$ vector will be full of zeros, and no learning can be achieved. Update: I made a video about it on YouTube, you can check it out here.
Danger of setting all initial weights to zero in Backpropagation [duplicate] It's a bad idea because of 2 reasons: If you have sigmoid activation, or anything where $g(0) \neq 0$ then it will cause weights to move "together", limiting the power of back-propagation to search t
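A tiny numerical check of the zero-gradient argument above; this is my own sketch (plain NumPy, a 2-layer network with squared-error loss), not code from the original answer. With tanh hidden units and all-zero weights, every activation and every gradient is exactly zero, so no update ever happens.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))      # 5 samples, 3 features
y = rng.normal(size=(5, 1))

W1 = np.zeros((3, 4))            # all-zero initialisation, hidden layer
W2 = np.zeros((4, 1))            # all-zero initialisation, output layer

# forward pass
a1 = np.tanh(X @ W1)             # all zeros, since tanh(0) = 0
a2 = a1 @ W2                     # linear output: also all zeros

# backward pass for L = 0.5 * sum((a2 - y)^2)
delta2 = a2 - y
dW2 = a1.T @ delta2              # zero, because a1 is zero
delta1 = (delta2 @ W2.T) * (1 - a1**2)
dW1 = X.T @ delta1               # zero, because W2 is zero

print(np.abs(dW1).max(), np.abs(dW2).max())   # 0.0 0.0 -> no learning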
4,994
Danger of setting all initial weights to zero in Backpropagation [duplicate]
If you thought of the weights as priors, as in a Bayesian network, then you've ruled out any possibility that those inputs could possibly affect the system. Another explanation is that backpropagation identifies the set of weights that minimizes the weighted squared difference between the target and observed values (E). Then how could any gradient descent algorithm be oriented in terms of determining the direction of the system? You are placing yourself on a saddle point of the parameter space.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
If you thought of the weights as priors, as in a Bayesian network, then you've ruled out any possibility that those inputs could possibly affect the system. Another explanation is that backpropagation
Danger of setting all initial weights to zero in Backpropagation [duplicate] If you thought of the weights as priors, as in a Bayesian network, then you've ruled out any possibility that those inputs could possibly affect the system. Another explanation is that backpropagation identifies the set of weights that minimizes the weighted squared difference between the target and observed values (E). Then how could any gradient descent algorithm be oriented in terms of determining the direction of the system? You are placing yourself on a saddle point of the parameter space.
Danger of setting all initial weights to zero in Backpropagation [duplicate] If you thought of the weights as priors, as in a Bayesian network, then you've ruled out any possibility that those inputs could possibly affect the system. Another explanation is that backpropagation
4,995
Danger of setting all initial weights to zero in Backpropagation [duplicate]
In each iteration of your backpropagation algorithm, you will update the weights by multiplying the existing weight by a delta determined by backpropagation. If the initial weight value is 0, multiplying it by any value for delta won't change the weight which means each iteration has no effect on the weights you're trying to optimize.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
In each iteration of your backpropagation algorithm, you will update the weights by multiplying the existing weight by a delta determined by backpropagation. If the initial weight value is 0, multiply
Danger of setting all initial weights to zero in Backpropagation [duplicate] In each iteration of your backpropagation algorithm, you will update the weights by multiplying the existing weight by a delta determined by backpropagation. If the initial weight value is 0, multiplying it by any value for delta won't change the weight which means each iteration has no effect on the weights you're trying to optimize.
Danger of setting all initial weights to zero in Backpropagation [duplicate] In each iteration of your backpropagation algorithm, you will update the weights by multiplying the existing weight by a delta determined by backpropagation. If the initial weight value is 0, multiply
4,996
Danger of setting all initial weights to zero in Backpropagation [duplicate]
It seems to me that one reason why it's bad to initialize weights to the same values (not just zero) is because then for any particular hidden layer all the nodes in this layer would have exactly the same inputs and would therefore stay the same as each other.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
It seems to me that one reason why it's bad to initialize weights to the same values (not just zero) is because then for any particular hidden layer all the nodes in this layer would have exactly the
Danger of setting all initial weights to zero in Backpropagation [duplicate] It seems to me that one reason why it's bad to initialize weights to the same values (not just zero) is because then for any particular hidden layer all the nodes in this layer would have exactly the same inputs and would therefore stay the same as each other.
Danger of setting all initial weights to zero in Backpropagation [duplicate] It seems to me that one reason why it's bad to initialize weights to the same values (not just zero) is because then for any particular hidden layer all the nodes in this layer would have exactly the
4,997
Danger of setting all initial weights to zero in Backpropagation [duplicate]
The answer to this is not entirely "Local Minima/Maxima". When you have more than 1 Hidden Layer and every weight is 0, a change in Weight_i, no matter how big or small, will not cause a change in the Output. This is because delta Weight_i will be absorbed by the next Hidden Layer. When there is no change in the Output, there is no gradient and hence no direction. This shares the same traits as a Local Minimum/Maximum, but is actually because of the 0's, which is technically different.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
The answer to this is not entirely "Local Minima/Maxima". When you have more than 1 Hidden Layer and every weight are 0's, no matter how big/small a change in Weight_i will not cause a change in the O
Danger of setting all initial weights to zero in Backpropagation [duplicate] The answer to this is not entirely "Local Minima/Maxima". When you have more than 1 Hidden Layer and every weight is 0, a change in Weight_i, no matter how big or small, will not cause a change in the Output. This is because delta Weight_i will be absorbed by the next Hidden Layer. When there is no change in the Output, there is no gradient and hence no direction. This shares the same traits as a Local Minimum/Maximum, but is actually because of the 0's, which is technically different.
Danger of setting all initial weights to zero in Backpropagation [duplicate] The answer to this is not entirely "Local Minima/Maxima". When you have more than 1 Hidden Layer and every weight are 0's, no matter how big/small a change in Weight_i will not cause a change in the O
4,998
Danger of setting all initial weights to zero in Backpropagation [duplicate]
A few reasonable arguments have been provided as to why you should not initialise your network with all weights set to zero. However, I would like to point out that zero initialisation can work — if done correctly! To start things off, it is not just zero initialisation that is problematic. As a matter of fact, initialising the weights with any constant, such that $w_{ij} = c$, can be considered problematic. Heck, even an initialisation of the form $w_{ij} = v_j$ would not work very well! The actual problem with all of these approaches is that they induce a certain symmetry among neurons. Concretely, the activations will be the same for all neurons in a single layer. To see this, consider the (pre-)activation, $s_i$, of neuron $i$ in some layer with weight matrix $\boldsymbol{W}$ and inputs $\boldsymbol{x}$: $$s_i = \boldsymbol{w}_{i:} \cdot \boldsymbol{x} = \sum_j v_j x_j,$$ where $\boldsymbol{w}_{i:}$ is the $i$-th row of $\boldsymbol{W}$. Note that $s_i$ is completely independent of the index $i$. This means that all neurons will have the same outputs. Having symmetric (i.e. duplicated) neurons at initialisation might not seem so bad. However, the network will never be able to actually break the symmetry. To see this, we have to consider the back-propagation of errors in terms of $\boldsymbol{\delta} = \frac{\partial L}{\partial \boldsymbol{s}}$. Applying the chain rule, you should be able to find that $$\delta_j^- = \phi'(s_j)\, \boldsymbol{\delta} \cdot \boldsymbol{w}_{:j} = \phi'(s) \sum_i \delta_i v_j,$$ where $\boldsymbol{w}_{:j}$ is the $j$-th column of $\boldsymbol{W}$ and $\phi$ is the activation function. Also, we used $s = s_j$, given that the (pre-)activations are independent of the individual neurons anyway. If we now want to compute the update for a single entry in our weight matrix, we find $$\Delta w_{ij} = \phi(s_i) \delta_j = \phi(s) \delta_j.$$ Again, note that the update will be identical for every row! This means that all neurons will output the same value, even after being updated. Therefore, initialising every column in your weight matrices with a constant effectively reduces the effective number of neurons in each layer to 1, which is generally something you want to avoid. The most common way to break this symmetry is to use randomly sampled values to initialise the weights. However, it is by no means the only way. The entire analysis "ignores" the bias parameters, or at least assumes that they are initialised to be zeros, which is common practice. However, if we simply initialise the bias parameters by sampling from a random distribution, the symmetry of neurons can be broken, even if all initial weights are zero. TL;DR: the problem is symmetry, which reduces a layer to a single neuron. One solution is random weights, but biases can also be used to break symmetry.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
A few reasonable arguments have been provided as to why you should not initialise your network with all weights set to zero. However, I would like to point out that zero initialisation can work — if d
Danger of setting all initial weights to zero in Backpropagation [duplicate] A few reasonable arguments have been provided as to why you should not initialise your network with all weights set to zero. However, I would like to point out that zero initialisation can work — if done correctly! To start things off, it is not just zero initialisation that is problematic. As a matter of fact, initialising the weights with any constant, such that $w_{ij} = c$, can be considered problematic. Heck, even an initialisation of the form $w_{ij} = v_j$ would not work very well! The actual problem with all of these approaches is that they induce a certain symmetry among neurons. Concretely, the activations will be the same for all neurons in a single layer. To see this, consider the (pre-)activation, $s_i$, of neuron $i$ in some layer with weight matrix $\boldsymbol{W}$ and inputs $\boldsymbol{x}$: $$s_i = \boldsymbol{w}_{i:} \cdot \boldsymbol{x} = \sum_j v_j x_j,$$ where $\boldsymbol{w}_{i:}$ is the $i$-th row of $\boldsymbol{W}$. Note that $s_i$ is completely independent of the index $i$. This means that all neurons will have the same outputs. Having symmetric (i.e. duplicated) neurons at initialisation might not seem so bad. However, the network will never be able to actually break the symmetry. To see this, we have to consider the back-propagation of errors in terms of $\boldsymbol{\delta} = \frac{\partial L}{\partial \boldsymbol{s}}$. Applying the chain rule, you should be able to find that $$\delta_j^- = \phi'(s_j)\, \boldsymbol{\delta} \cdot \boldsymbol{w}_{:j} = \phi'(s) \sum_i \delta_i v_j,$$ where $\boldsymbol{w}_{:j}$ is the $j$-th column of $\boldsymbol{W}$ and $\phi$ is the activation function. Also, we used $s = s_j$, given that the (pre-)activations are independent of the individual neurons anyway. If we now want to compute the update for a single entry in our weight matrix, we find $$\Delta w_{ij} = \phi(s_i) \delta_j = \phi(s) \delta_j.$$ Again, note that the update will be identical for every row! This means that all neurons will output the same value, even after being updated. Therefore, initialising every column in your weight matrices with a constant effectively reduces the effective number of neurons in each layer to 1, which is generally something you want to avoid. The most common way to break this symmetry is to use randomly sampled values to initialise the weights. However, it is by no means the only way. The entire analysis "ignores" the bias parameters, or at least assumes that they are initialised to be zeros, which is common practice. However, if we simply initialise the bias parameters by sampling from a random distribution, the symmetry of neurons can be broken, even if all initial weights are zero. TL;DR: the problem is symmetry, which reduces a layer to a single neuron. One solution is random weights, but biases can also be used to break symmetry.
Danger of setting all initial weights to zero in Backpropagation [duplicate] A few reasonable arguments have been provided as to why you should not initialise your network with all weights set to zero. However, I would like to point out that zero initialisation can work — if d
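A small numerical illustration of the symmetry argument above (my own sketch, not part of the original answer): with constant weights and zero biases the two hidden units receive identical gradients, whereas adding randomly initialised biases already makes them differ.

import numpy as np

def hidden_grads(W1, b1, w2, X, y):
    # one hidden layer with tanh, linear output, squared-error loss
    a1 = np.tanh(X @ W1 + b1)
    out = a1 @ w2
    delta_out = out - y
    delta_hidden = (delta_out[:, None] * w2[None, :]) * (1 - a1**2)
    return X.T @ delta_hidden        # gradient of W1, one column per hidden unit

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)

W1 = np.full((3, 2), 0.5)            # constant (symmetric) initialisation
w2 = np.full(2, 0.5)

g_zero_bias = hidden_grads(W1, np.zeros(2), w2, X, y)
g_rand_bias = hidden_grads(W1, rng.normal(size=2), w2, X, y)

print(np.allclose(g_zero_bias[:, 0], g_zero_bias[:, 1]))  # True: identical gradient columns
print(np.allclose(g_rand_bias[:, 0], g_rand_bias[:, 1]))  # False: symmetry broken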
4,999
Danger of setting all initial weights to zero in Backpropagation [duplicate]
The main problem with initializing all weights to zero is that, mathematically, either the neuron values are zero (for multiple layers) or the delta is zero. In one of the comments by @alfa on the above answers, a hint is already provided: it is mentioned that the product of the weights and the delta needs to be zero. This essentially means that for gradient descent the starting point sits right at the top of the hill, at its peak, and it is unable to break the symmetry. Randomness will break this symmetry and one would reach a local minimum. Even if we perturb the weights a little, we would be back on track. Reference: Learning from Data, Lecture 10.
Danger of setting all initial weights to zero in Backpropagation [duplicate]
Main problem with initialization of all weights to zero mathematically leads to either the neuron values are zero (for multi layers) or the delta would be zero. In one of the comments by @alfa in the
Danger of setting all initial weights to zero in Backpropagation [duplicate] The main problem with initializing all weights to zero is that, mathematically, either the neuron values are zero (for multiple layers) or the delta is zero. In one of the comments by @alfa on the above answers, a hint is already provided: it is mentioned that the product of the weights and the delta needs to be zero. This essentially means that for gradient descent the starting point sits right at the top of the hill, at its peak, and it is unable to break the symmetry. Randomness will break this symmetry and one would reach a local minimum. Even if we perturb the weights a little, we would be back on track. Reference: Learning from Data, Lecture 10.
Danger of setting all initial weights to zero in Backpropagation [duplicate] Main problem with initialization of all weights to zero mathematically leads to either the neuron values are zero (for multi layers) or the delta would be zero. In one of the comments by @alfa in the
5,000
Cumming (2008) claims that distribution of p-values obtained in replications depends only on the original p-value. How can it be true?
Summary: The trick appears to be a Bayesian approach which assumes a uniform (Jeffreys) prior for the hidden parameter ($z_\mu$ in appendix B of the paper, $\theta$ here). I believe there may be a Bayesian-style approach to get the equations given in the paper's appendix B. As I understand it, the experiment boils down to a statistic $z\sim\mathrm{N}_{\theta,1}$. The mean $\theta$ of the sampling distribution is unknown, but vanishes under the null hypothesis, $\theta\mid{}H_0=0$. Call the experimentally observed statistic $\hat{z}\mid\theta\sim\mathrm{N}_{\theta,1}$. Then if we assume a "uniform" (improper) prior on $\theta\sim1$, the Bayesian posterior is $\theta\mid\hat{z}\sim\mathrm{N}_{\hat{z},1}$. If we then update the original sampling distribution by marginalizing over $\theta\mid\hat{z}$, the posterior becomes $z\mid\hat{z}\sim\mathrm{N}_{\hat{z},2}$. (The doubled variance is due to convolution of Gaussians.) Mathematically at least, this seems to work. And it explains how the $\frac{1}{\sqrt{2}}$ factor "magically" appears going from equation B2 to equation B3. Discussion How can this result be reconciled with the standard null hypothesis testing framework? One possible interpretation is as follows. In the standard framework, the null hypothesis is in some sense the "default" (e.g. we speak of "rejecting the null"). In the above Bayesian context this would be a non-uniform prior that prefers $\theta=0$. If we take this to be $\theta\sim\mathrm{N}_{0,\lambda^2}$, then the variance $\lambda^2$ represents our prior uncertainty. Carrying this prior through the analysis above, we find $$\theta\sim\mathrm{N}_{0,\lambda^2} \implies \theta\mid\hat{z}\sim\mathrm{N}_{\delta^2\hat{z},\delta^2} \,,\, z\mid\hat{z}\sim\mathrm{N}_{\delta^2\hat{z},1+\delta^2} \,,\, \delta^2\equiv\tfrac{1}{1+\lambda^{-2}}\in[0,1]$$ From this we can see that in the limit $\lambda\to\infty$ we recover the analysis above. But in the limit $\lambda\to{0}$ our "posteriors" become the null, $\theta\mid\hat{z}\sim\mathrm{N}_{0,0}$ and $z\mid\hat{z}\sim\mathrm{N}_{0,1}$, so we recover the standard result, ${p}\mid{\hat{z}}\sim\mathrm{U}_{0,1}$. (For repeated studies, the above suggests an interesting question here about the implications for Bayesian updating vs. "traditional" methods for meta-analysis. I am completely ignorant on the subject of meta-analysis though!) Appendix As requested in the comments, here is a plot for comparison. This is a relatively straightforward application of the formulas in the paper. However I will write these out to ensure no ambiguity. Let $p$ denote the one-sided p value for the statistic $z$, and denote its (posterior) CDF by $F[u]\equiv\Pr\big[\,p\leq{u}\mid{\hat{z}}\,\big]$. Then equation B3 from the appendix is equivalent to $$F[p]=1-\Phi\left[\tfrac{1}{\sqrt{2}}\left(z[p]-\hat{z}\right)\right] \,,\, z[p]=\Phi^{-1}[1-p]$$ where $\Phi[\,\,]$ is the standard normal CDF. The corresponding density is then $$f\big[p\big]\equiv{F^\prime}\big[p\big]=\frac{\phi\Big[(z-\hat{z})/\sqrt{2}\,\Big]}{\sqrt{2}\,\phi\big[z\big]}$$ where $\phi[\,\,]$ is the standard normal PDF, and $z=z[p]$ as in the CDF formula. Finally, if we denote by $\hat{p}$ the observed two-sided p value corresponding to $\hat{z}$, then we have $$\hat{z}=\Phi^{-1}\Big[1-\tfrac{\hat{p}}{2}\Big]$$ Using these equations gives the figure below, which should be comparable to the paper's figure 5 quoted in the question. (This was produced by the following Matlab code; run here.) 
phat2 = [1e-3, 1e-2, 5e-2, 0.2]';                             % observed two-sided p values
zhat  = norminv(1 - phat2/2);                                 % corresponding observed statistics z-hat
np    = 1e3 + 1;
p1    = (1:np)/(np + 1);                                      % grid of one-sided replication p values
z     = norminv(1 - p1);
p1pdf = normpdf((z - zhat)/sqrt(2)) ./ (sqrt(2)*normpdf(z));  % density f[p] from appendix B
plot(p1, p1pdf, 'LineWidth', 1);
axis([0, 1, 0, 6]);
xlabel('p'); ylabel('PDF p|p_{obs}');
legend(arrayfun(@(p) sprintf('p_{obs} = %g', p), phat2, 'uni', 0));
Cumming (2008) claims that distribution of p-values obtained in replications depends only on the ori
Summary: The trick appears to be a Bayesian approach which assumes a uniform (Jeffreys) prior for the hidden parameter ($z_\mu$ in appendix B of the paper, $\theta$ here). I believe there may be a Ba
Cumming (2008) claims that distribution of p-values obtained in replications depends only on the original p-value. How can it be true? Summary: The trick appears to be a Bayesian approach which assumes a uniform (Jeffreys) prior for the hidden parameter ($z_\mu$ in appendix B of the paper, $\theta$ here). I believe there may be a Bayesian-style approach to get the equations given in the paper's appendix B. As I understand it, the experiment boils down to a statistic $z\sim\mathrm{N}_{\theta,1}$. The mean $\theta$ of the sampling distribution is unknown, but vanishes under the null hypothesis, $\theta\mid{}H_0=0$. Call the experimentally observed statistic $\hat{z}\mid\theta\sim\mathrm{N}_{\theta,1}$. Then if we assume a "uniform" (improper) prior on $\theta\sim1$, the Bayesian posterior is $\theta\mid\hat{z}\sim\mathrm{N}_{\hat{z},1}$. If we then update the original sampling distribution by marginalizing over $\theta\mid\hat{z}$, the posterior becomes $z\mid\hat{z}\sim\mathrm{N}_{\hat{z},2}$. (The doubled variance is due to convolution of Gaussians.) Mathematically at least, this seems to work. And it explains how the $\frac{1}{\sqrt{2}}$ factor "magically" appears going from equation B2 to equation B3. Discussion How can this result be reconciled with the standard null hypothesis testing framework? One possible interpretation is as follows. In the standard framework, the null hypothesis is in some sense the "default" (e.g. we speak of "rejecting the null"). In the above Bayesian context this would be a non-uniform prior that prefers $\theta=0$. If we take this to be $\theta\sim\mathrm{N}_{0,\lambda^2}$, then the variance $\lambda^2$ represents our prior uncertainty. Carrying this prior through the analysis above, we find $$\theta\sim\mathrm{N}_{0,\lambda^2} \implies \theta\mid\hat{z}\sim\mathrm{N}_{\delta^2\hat{z},\delta^2} \,,\, z\mid\hat{z}\sim\mathrm{N}_{\delta^2\hat{z},1+\delta^2} \,,\, \delta^2\equiv\tfrac{1}{1+\lambda^{-2}}\in[0,1]$$ From this we can see that in the limit $\lambda\to\infty$ we recover the analysis above. But in the limit $\lambda\to{0}$ our "posteriors" become the null, $\theta\mid\hat{z}\sim\mathrm{N}_{0,0}$ and $z\mid\hat{z}\sim\mathrm{N}_{0,1}$, so we recover the standard result, ${p}\mid{\hat{z}}\sim\mathrm{U}_{0,1}$. (For repeated studies, the above suggests an interesting question here about the implications for Bayesian updating vs. "traditional" methods for meta-analysis. I am completely ignorant on the subject of meta-analysis though!) Appendix As requested in the comments, here is a plot for comparison. This is a relatively straightforward application of the formulas in the paper. However I will write these out to ensure no ambiguity. Let $p$ denote the one-sided p value for the statistic $z$, and denote its (posterior) CDF by $F[u]\equiv\Pr\big[\,p\leq{u}\mid{\hat{z}}\,\big]$. Then equation B3 from the appendix is equivalent to $$F[p]=1-\Phi\left[\tfrac{1}{\sqrt{2}}\left(z[p]-\hat{z}\right)\right] \,,\, z[p]=\Phi^{-1}[1-p]$$ where $\Phi[\,\,]$ is the standard normal CDF. The corresponding density is then $$f\big[p\big]\equiv{F^\prime}\big[p\big]=\frac{\phi\Big[(z-\hat{z})/\sqrt{2}\,\Big]}{\sqrt{2}\,\phi\big[z\big]}$$ where $\phi[\,\,]$ is the standard normal PDF, and $z=z[p]$ as in the CDF formula. 
Finally, if we denote by $\hat{p}$ the observed two-sided p value corresponding to $\hat{z}$, then we have $$\hat{z}=\Phi^{-1}\Big[1-\tfrac{\hat{p}}{2}\Big]$$ Using these equations gives the figure below, which should be comparable to the paper's figure 5 quoted in the question. (This was produced by the following Matlab code; run here.) phat2=[1e-3,1e-2,5e-2,0.2]'; zhat=norminv(1-phat2/2); np=1e3+1; p1=(1:np)/(np+1); z=norminv(1-p1); p1pdf=normpdf((z-zhat)/sqrt(2))./(sqrt(2)*normpdf(z)); plot(p1,p1pdf,'LineWidth',1); axis([0,1,0,6]); xlabel('p'); ylabel('PDF p|p_{obs}'); legend(arrayfun(@(p)sprintf('p_{obs} = %g',p),phat2,'uni',0));
Cumming (2008) claims that distribution of p-values obtained in replications depends only on the ori Summary: The trick appears to be a Bayesian approach which assumes a uniform (Jeffreys) prior for the hidden parameter ($z_\mu$ in appendix B of the paper, $\theta$ here). I believe there may be a Ba
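As an independent sanity check on the formulas above (my own sketch, separate from the paper's Matlab code, assuming NumPy and SciPy are available), one can simulate the marginal model $z\mid\hat{z}\sim\mathrm{N}_{\hat{z},2}$ and compare the empirical distribution of replication p values with the closed-form CDF $F[p]$; the two agree closely.

import numpy as np
from scipy.stats import norm

phat = 0.05                               # observed two-sided p value
zhat = norm.ppf(1 - phat/2)               # corresponding observed z statistic

# replication statistics from the marginal model z | zhat ~ N(zhat, variance 2)
rng = np.random.default_rng(0)
z_rep = rng.normal(loc=zhat, scale=np.sqrt(2), size=200_000)
p_rep = norm.sf(z_rep)                    # one-sided replication p values

# compare empirical and analytic CDFs F[p] at a few points
for u in [0.01, 0.05, 0.2, 0.5]:
    empirical = np.mean(p_rep <= u)
    analytic = 1 - norm.cdf((norm.ppf(1 - u) - zhat) / np.sqrt(2))
    print(u, round(empirical, 3), round(analytic, 3))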