Dataset columns: idx (int64, 1 to 56k); question (string, lengths 15 to 155); answer (string, lengths 2 to 29.2k); question_cut (string, lengths 15 to 100); answer_cut (string, lengths 2 to 200); conversation (string, lengths 47 to 29.3k); conversation_cut (string, lengths 47 to 301).
3,301
What is a good, convincing example in which p-values are useful?
I will consider both of Matloff's points.

(1) With large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis.

The logic here is that if somebody reports a highly significant $p=0.0001$, then from this number alone we cannot say whether the effect is large and important or irrelevantly tiny (as can happen with large $n$). I find this argument strange and cannot connect to it at all, because I have never seen a study that reported a $p$-value without reporting [some equivalent of] effect size. Studies that I read would, e.g., say (and usually show on a figure) that group A had such and such mean, group B had such and such mean, and they were significantly different with such and such $p$-value. I can obviously judge for myself whether the difference between A and B is large or small.

(In the comments, @RobinEkman pointed me to several highly cited studies by Ziliak & McCloskey (1996, 2004), who observed that the majority of economics papers trumpet the "statistical significance" of some effects without paying much attention to the effect size and its "practical significance" (which, Z&M argue, can often be minuscule). This is clearly bad practice. However, as @MatteoS explained below, the effect sizes (regression estimates) are always reported, so my argument stands.)

(2) Almost no null hypotheses are true in the real world, so performing a significance test on them is absurd and bizarre.

This concern is also often voiced, but here again I cannot really connect to it. It is important to realize that researchers do not increase their $n$ ad infinitum. In the branch of neuroscience that I am familiar with, people will do experiments with, say, $n=20$ or maybe $n=50$ rats. If there is no effect to be seen, then the conclusion is that the effect is not large enough to be interesting. Nobody I know would go on breeding, training, recording, and sacrificing $n=5000$ rats to show that there is some statistically significant but tiny effect. And whereas it might be true that almost no real effects are exactly zero, it is certainly true that many, many real effects are too small to be detected with the reasonable sample sizes that reasonable researchers are actually using, exercising their good judgment.

(There is a valid concern that sample sizes are often not big enough and that many studies are underpowered. So perhaps researchers in many fields should rather aim at, say, $n=100$ instead of $n=20$. Still, whatever the sample size is, it puts a limit on the effect size that the study has power to detect.)

In addition, I do not think I agree that almost no null hypotheses are true, at least not in experimental randomized studies (as opposed to observational ones). Two reasons:

Very often there is a directionality to the prediction being tested; the researcher aims to demonstrate that some effect is positive, $\delta>0$. By convention this is usually done with a two-sided test assuming a point null $H_0: \delta=0$, but in fact this is rather a one-sided test trying to reject $H_0: \delta<0$. (@CliffAB's answer, +1, makes a related point.) And this null can certainly be true.

Even for the point "nil" null $H_0: \delta=0$, I do not see why it would never be true. Some things are just not causally related to other things. Look at the psychology studies that have failed to replicate in recent years: people feeling the future; women dressing in red when ovulating; priming with old-age-related words affecting walking speed; etc. It might very well be that there are no causal links here at all, so the true effects are exactly zero.

Norm Matloff himself suggests using confidence intervals instead of $p$-values because they show the effect size. Confidence intervals are good, but notice one disadvantage of a confidence interval as compared to the $p$-value: a confidence interval is reported for one particular coverage level, e.g. $95\%$. Seeing a $95\%$ confidence interval does not tell me how broad a $99\%$ confidence interval would be. But one single $p$-value can be compared with any $\alpha$, and different readers can have different alphas in mind. In other words, I think that for somebody who likes to use confidence intervals, a $p$-value is a useful and meaningful additional statistic to report.

I would like to give a long quote about the practical usefulness of $p$-values from my favorite blogger Scott Alexander; he is not a statistician (he is a psychiatrist) but has lots of experience with reading the psychological/medical literature and scrutinizing the statistics therein. The quote is from his blog post on the fake chocolate study, which I highly recommend. Emphasis mine.

[...] But suppose we're not allowed to do $p$-values. All I do is tell you "Yeah, there was a study with fifteen people that found chocolate helped with insulin resistance" and you laugh in my face. Effect size is supposed to help with that. But suppose I tell you "There was a study with fifteen people that found chocolate helped with insulin resistance. The effect size was $0.6$." I don't have any intuition at all for whether or not that's consistent with random noise. Do you? Okay, then they say we're supposed to report confidence intervals. The effect size was $0.6$, with $95\%$ confidence interval of $[0.2, 1.0]$. Okay. So I check the lower bound of the confidence interval, I see it's different from zero. But now I'm not transcending the $p$-value. I'm just using the $p$-value by doing a sort of kludgy calculation of it myself – "$95\%$ confidence interval does not include zero" is the same as "$p$-value is less than $0.05$". (Imagine that, although I know the $95\%$ confidence interval doesn't include zero, I start wondering if the $99\%$ confidence interval does. If only there were some statistic that would give me this information!)

But wouldn't getting rid of $p$-values prevent "$p$-hacking"? Maybe, but it would just give way to "d-hacking". You don't think you could test for twenty different metabolic parameters and only report the one with the highest effect size? The only difference would be that p-hacking is completely transparent – if you do twenty tests and report a $p$ of $0.05$, I know you're an idiot – but d-hacking would be inscrutable. If you do twenty tests and report that one of them got a $d = 0.6$, is that impressive?

[...] But wouldn't switching from $p$-values to effect sizes prevent people from making a big deal about tiny effects that are nevertheless statistically significant? Yes, but sometimes we want to make a big deal about tiny effects that are nevertheless statistically significant! Suppose that Coca-Cola is testing a new product additive, and finds in large epidemiological studies that it causes one extra death per hundred thousand people per year. That's an effect size of approximately zero, but it might still be statistically significant. And since about a billion people worldwide drink Coke each year, that's ten thousand deaths. If Coke said "Nope, effect size too small, not worth thinking about", they would kill almost two milli-Hitlers worth of people.

For some further discussion of various alternatives to $p$-values (including Bayesian ones), see my answer in "ASA discusses limitations of $p$-values - what are the alternatives?"
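The point about one $p$-value versus coverage-specific intervals can be made concrete with a small R sketch (my own illustration with simulated data, not part of the original answer): a single $p$-value can be checked against whatever $\alpha$ a reader has in mind, whereas each confidence interval has to be recomputed for its own coverage level.

# Minimal sketch, assuming two simulated groups; not from the original answer.
set.seed(1)
a <- rnorm(20, mean = 0.5)                  # hypothetical group A
b <- rnorm(20, mean = 0.0)                  # hypothetical group B
t.test(a, b)$p.value                        # one number, comparable with any alpha
t.test(a, b, conf.level = 0.95)$conf.int    # 95% interval
t.test(a, b, conf.level = 0.99)$conf.int    # 99% interval needs a separate computation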
3,302
What is a good, convincing example in which p-values are useful?
I take great offense at the following two ideas:

With large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis.

Almost no null hypotheses are true in the real world, so performing a significance test on them is absurd and bizarre.

It is such a strawman argument about p-values. The very foundational problem that motivated the development of statistics comes from seeing a trend and wanting to know whether what we see is by chance, or representative of a systematic trend.

With that in mind, it is true that we, as statisticians, do not typically believe that a null hypothesis is true (i.e. $H_0: \mu_d = 0$, where $\mu_d$ is the mean difference in some measurement between two groups). However, with two-sided tests, we don't know which alternative hypothesis is true! In a two-sided test, we may be willing to say that we are 100% sure that $\mu_d \neq 0$ before seeing the data. But we do not know whether $\mu_d > 0$ or $\mu_d < 0$. So if we run our experiment and conclude that $\mu_d > 0$, we have rejected $\mu_d = 0$ (as Matloff might say: a useless conclusion), but more importantly, we have also rejected $\mu_d < 0$ (I say: a useful conclusion). As @amoeba pointed out, this also applies to one-sided tests that have the potential to be two-sided, such as testing whether a drug has a positive effect.

It's true that this doesn't tell you the magnitude of the effect. But it does tell you the direction of the effect. So let's not put the cart before the horse; before I start drawing conclusions about the magnitude of the effect, I want to be confident I've got the direction of the effect correct!

Similarly, the argument that "p-values pounce on tiny, unimportant effects" seems quite flawed to me. If you think of a p-value as a measure of how much the data support the direction of your conclusion, then of course you want it to pick up small effects when the sample size is large enough. To say this means they are not useful is very strange to me: are the fields of research that have suffered from p-values the same ones that have so much data they have no need to assess the reliability of their estimates? Similarly, if your issue is really that p-values "pounce on tiny effect sizes", then you can simply test the hypotheses $H_{1}:\mu_d > 1$ and $H_{2}: \mu_d < -1$ (assuming you believe 1 to be the minimal important effect size). This is often done in clinical trials (see the sketch at the end of this answer).

To further illustrate this, suppose we just looked at confidence intervals and discarded p-values. What is the first thing you would check in the confidence interval? Whether the effect was strictly positive (or negative) before taking the results too seriously. As such, even without p-values, we would informally be doing hypothesis testing.

Finally, in regard to the OP's/Matloff's request, "Give a convincing argument of p-values being significantly better", I think the question is a little awkward. I say this because, depending on your view, it automatically answers itself ("give me one concrete example where testing a hypothesis is better than not testing it"). However, a special case that I think is almost undeniable is that of RNAseq data. In this case, we are typically looking at the expression level of RNA in two different groups (e.g. diseased vs. controls) and trying to find genes that are differentially expressed between the two groups. Here the effect size itself is not even really meaningful. This is because the expression levels of different genes vary so wildly that for some genes, having 2x higher expression doesn't mean anything, while for other, tightly regulated genes, 1.2x higher expression is fatal. So the magnitude of the effect size is somewhat uninteresting when first comparing the groups. But you really, really want to know whether the expression of the gene changes between the groups, and the direction of the change! Furthermore, it is much more difficult to address the issue of multiple comparisons (of which you may be doing 20,000 in a single run) with confidence intervals than it is with p-values.
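Returning to the minimal-important-effect idea above: a hedged R sketch (my illustration, with simulated data and an assumed minimal important difference of 1) of shifting the null so that only effects larger than the threshold can produce a rejection.

# Minimal sketch, assuming a minimal important effect of 1 unit; data are simulated.
set.seed(1)
d <- rnorm(30, mean = 1.5, sd = 2)                      # hypothetical paired differences
t.test(d, mu = 1,  alternative = "greater")$p.value     # evidence that mu_d > 1
t.test(d, mu = -1, alternative = "less")$p.value        # evidence that mu_d < -1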
3,303
What is a good, convincing example in which p-values are useful?
Forgive my sarcasm, but one obvious good example of the utility of p-values is in getting published. I had one experimenter approach me to produce a p-value: he had introduced a transgene in a single plant to improve growth. From that single plant he produced multiple clones and chose the largest clone, an example where the entire population is enumerated. His problem: the reviewer wanted to see a p-value showing that this clone was the largest. I mentioned that there was no need for statistics in this case, as he had the entire population at hand, but to no avail.

More seriously, in my humble opinion, from an academic perspective I find these discussions interesting and stimulating, just like the frequentist vs. Bayesian debates from a few years ago. They bring out the differing perspectives of the best minds in this field and illuminate the many assumptions and pitfalls associated with the methodology that are not generally readily accessible. In practice, I think that rather than arguing about the best approach and replacing one flawed yardstick with another, as has been suggested elsewhere, this is rather a revelation of an underlying systemic problem, and the focus should be on trying to find optimal solutions. For instance, one could present situations where p-values and CIs complement each other, and circumstances wherein one is more reliable than the other. In the grand scheme of things, I understand that all inferential tools have their own shortcomings, which need to be understood in any application so as not to stymie progress towards the ultimate goal: a deeper understanding of the system of study.
3,304
What is a good, convincing example in which p-values are useful?
I'll give you an exemplary case of how p-values should be used and reported. It's a very recent report on the search for a mysterious particle at the Large Hadron Collider (LHC) at CERN. A few months ago there was a lot of excited chatter in high energy physics circles about the possibility that a large particle had been detected at the LHC. Remember, this was after the Higgs boson discovery. Here is an excerpt from the paper "Search for resonances decaying to photon pairs in 3.2 fb−1 of pp collisions at √s = 13 TeV with the ATLAS detector" by the ATLAS Collaboration (Dec 15, 2015), with my comments following.

What they're saying here is that the event counts exceed what the Standard Model predicts. The figure below from the paper shows the p-values of excess events as a function of the mass of a particle. You can see how the p-value dives around 750 GeV. So they are saying that there is a possibility that a new particle has been detected with a mass equal to 750 GeV. The p-values in the figure are calculated as "local"; the global p-values are much higher. That is not important for our conversation, though. What is important is that the p-values are not yet "low enough" for physicists to declare a find, but "low enough" to get excited. So they planned to keep counting, hoping that the p-values would further decrease.

Fast-forward a few months to August 2016 and a HEP conference in Chicago. A new report was presented, "Search for resonant production of high mass photon pairs using 12.9 fb−1 of proton-proton collisions at √s = 13 TeV and combined interpretation of searches at 8 and 13 TeV", this time by the CMS Collaboration. Here are the excerpts, with my comments again.

So, they continued collecting events, and now that blip of excess events at 750 GeV is gone. The figure below from the paper shows the p-values, and you can see how the p-value increased compared to the first report. So they sadly conclude that no particle is detected at 750 GeV.

I think this is how p-values are supposed to be used. They totally make sense, and they clearly work. I think the reason is that frequentist approaches are inherently natural in physics. There is nothing subjective about particle scattering. You collect a large enough sample and you get a clear signal if it's there. If you're really interested in how exactly the p-values are calculated here, read "Asymptotic formulae for likelihood-based tests of new physics" by Cowan et al.
3,305
What is a good, convincing example in which p-values are useful?
The other explanations are all fine; I just wanted to try to give a brief and direct answer to the question that popped into my head.

Checking Covariate Imbalance in Randomized Experiments

Your second claim (about unrealistic null hypotheses) is not true when we are checking covariate balance in randomized experiments where we know the randomization was done properly. In this case, we know that the null hypothesis is true. If we get a significant difference between the treatment and control group on some covariate - after controlling for multiple comparisons, of course - then that tells us that we got a "bad draw" in the randomization, and we maybe shouldn't trust the causal estimate as much. This is because we might think that our treatment effect estimates from this particular "bad draw" randomization are further away from the true treatment effects than estimates obtained from a "good draw."

I think this is a perfect use of p-values. It uses the definition of the p-value: the probability of getting a value as extreme or more extreme than the one observed, given the null hypothesis. If the result is highly unlikely, then we did in fact get a "bad draw." Balance tables/statistics are also common when using observational data to try to make causal inferences (e.g., matching, natural experiments), although in those cases balance tables are far from sufficient to justify a "causal" label for the estimates.
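A hedged R sketch (my illustration, with simulated data) of this kind of balance check: under a correct randomization, the null of "no covariate difference" is true by construction, so the p-values behave exactly as advertised.

# Minimal sketch, assuming five pre-treatment covariates and simple random assignment.
set.seed(1)
n <- 200
treat <- rbinom(n, 1, 0.5)                        # random treatment assignment
covs  <- replicate(5, rnorm(n))                   # five pre-treatment covariates
pvals <- apply(covs, 2, function(x) t.test(x[treat == 1], x[treat == 0])$p.value)
p.adjust(pvals, method = "holm")                  # adjust for multiple comparisons
# A very small adjusted p-value would flag a "bad draw" of the randomization.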
3,306
What is a good, convincing example in which p-values are useful?
Error rate control is similar to quality control in production. A robot on a production line has a rule for deciding that a part is defective, which guarantees that the rate of defective parts getting through undetected does not exceed a specified level. Similarly, an agency that makes drug-approval decisions based on "honest" P-values has a way to keep the rate of false rejections at a controlled level, by definition, via the frequentist long-run construction of tests. Here, "honest" means absence of uncontrolled biases, hidden selections, etc.

However, neither the robot nor the agency has a personal stake in any particular drug or part that goes through the assembly conveyor. In science, on the other hand, we, as individual investigators, care most about the particular hypothesis we study, rather than about the proportion of spurious claims in our favorite journal we submit to. Neither the P-value magnitude nor the bounds of a confidence interval (CI) refer directly to our question about the credibility of what we report. When we construct the CI bounds, we should be saying that the only meaning of the two numbers is that if other scientists do the same kind of CI computation in their studies, the 95% (or whatever) coverage will be maintained over the various studies as a whole.

In this light, I find it ironic that P-values are being "banned" by journals, considering that in the thick of the replicability crisis they are of more value to journal editors than to researchers submitting their papers, as a practical way of keeping the rate of spurious findings reported by a journal at bay in the long run. P-values are good at filtering, or, as I. J. Good wrote, they are good for protecting the statistician's rear end, but not so much the rear end of the client.

P.S. I'm a huge fan of Benjamini and Hochberg's idea of taking the unconditional expectation across studies with multiple tests. Under the global "null", the "frequentist" FDR is still controlled - studies with one or more rejections pop up in a journal at a controlled rate - although, in this case, any study in which some rejections have actually been made has a proportion of false rejections equal to one.
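A hedged R sketch (my illustration, with simulated p-values) of the Benjamini-Hochberg procedure mentioned in the postscript:

# Minimal sketch: 90 true nulls and 10 hypothetical real effects; all values simulated.
set.seed(1)
p_null <- runif(90)                       # tests where the null is true
p_alt  <- rbeta(10, 1, 30)                # tests with genuine effects (p-values near 0)
p <- c(p_null, p_alt)
rejected <- p.adjust(p, method = "BH") < 0.05
sum(rejected)                             # number of discoveries at FDR <= 5%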
3,307
What is a good, convincing example in which p-values are useful?
I agree with Matt that p-values are useful when the null hypothesis is true.

The simplest example I can think of is testing a random number generator. If the generator is working correctly, you can draw samples of any appropriate size, and when you test the fit over many such samples, the p-values should have a uniform distribution. If they do, this is good evidence for a correct implementation. If they don't, you know you have made an error somewhere.

Other similar situations occur when you know a statistic or random variable should have a certain distribution (again, the most obvious context is simulation). If the p-values are uniform, you have found support for a valid implementation. If not, you know you have a problem somewhere in your code.
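A hedged R sketch (my illustration) of this idea, using a Kolmogorov-Smirnov goodness-of-fit test on samples from R's own uniform generator:

# Minimal sketch: if the generator and the test are correct, the p-values should be uniform.
set.seed(1)
pvals <- replicate(1000, ks.test(runif(100), "punif")$p.value)
hist(pvals)                           # should look roughly flat
ks.test(pvals, "punif")$p.value       # second-level check of uniformity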
3,308
What is a good, convincing example in which p-values are useful?
I can think of an example in which p-values are useful: experimental high energy physics. See Fig. 1 of the paper "Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC". In that figure, the p-value is shown versus the mass of a hypothetical particle. The null hypothesis is that the observation is compatible with a continuous background. The large ($5\sigma$) deviation at $m_\mathrm{H} \approx 125$ GeV was the first evidence for, and the discovery of, a new particle. This earned François Englert and Peter Higgs the Nobel Prize in Physics in 2013.
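For reference, the "$5\sigma$" criterion used in particle physics corresponds to a one-sided local p-value that is easy to compute; a minimal R sketch (my addition):

# Converting between significance in sigmas and a one-sided tail probability.
pnorm(5, lower.tail = FALSE)          # roughly 2.9e-7
qnorm(2.9e-7, lower.tail = FALSE)     # back from a p-value to about 5 sigma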
3,309
Test if two binomial distributions are statistically different from each other
The solution is a simple Google search away: http://en.wikipedia.org/wiki/Statistical_hypothesis_testing

You would like to test the following null hypothesis against the given alternative: $H_0:p_1=p_2$ versus $H_A:p_1\neq p_2$. You just need to calculate the test statistic, which is $$z=\frac{\hat p_1-\hat p_2}{\sqrt{\hat p(1-\hat p)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$$ where $\hat p=\frac{n_1\hat p_1+n_2\hat p_2}{n_1+n_2}$. In your problem, $\hat p_1=.634$, $\hat p_2=.612$, $n_1=2455$ and $n_2=2730.$

Once you calculate the test statistic, you just need to calculate the corresponding critical value to compare your test statistic to. For example, if you are testing this hypothesis at the 95% confidence level, you need to compare the absolute value of your test statistic against the critical value $z_{\alpha/2}=1.96$ (for this two-tailed test). Now, if $|z|>z_{\alpha/2}$ then you may reject the null hypothesis; otherwise you must fail to reject the null hypothesis.

This solution works for the case where you are comparing two groups, but it does not generalize to the case where you want to compare three groups. You could, however, use a chi-squared test to test whether all three groups have equal proportions, as suggested by @Eric in his comment above: "Does this question help? stats.stackexchange.com/questions/25299/ … – Eric"
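A hedged R sketch (my addition) plugging the numbers from this answer into the pooled two-proportion z-test, with prop.test shown for comparison; the counts 1556 and 1671 are the success counts implied by the quoted proportions.

# Minimal sketch using the values given above.
p1 <- 0.634; n1 <- 2455
p2 <- 0.612; n2 <- 2730
p_pool <- (n1 * p1 + n2 * p2) / (n1 + n2)
z <- (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))
2 * pnorm(abs(z), lower.tail = FALSE)                   # two-sided p-value
prop.test(c(1556, 1671), c(n1, n2), correct = FALSE)    # equivalent chi-squared form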
3,310
Test if two binomial distributions are statistically different from each other
In R the answer is calculated as: fisher.test(rbind(c(1556, 2455-1556), c(1671, 2730-1671)), alternative="less")
3,311
Test if two binomial distributions are statistically different from each other
Just a summary: Dan's and Abaumann's answers suggest testing under a binomial model where the null hypothesis is a single unified binomial model with its mean estimated from the empirical data. Their answers are correct in theory, but they rely on an approximation using the normal distribution, since the distribution of the test statistic does not exactly follow a normal distribution. Therefore, they are only correct for large sample sizes. David's answer, on the other hand, indicates a nonparametric approach using Fisher's exact test (see https://en.wikipedia.org/wiki/Fisher%27s_exact_test), which can be applied to small sample sizes but is hard to calculate for big sample sizes. Which test to use, and how much to trust your p-value, is a judgment call; there are always trade-offs in whichever test you choose.
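A hedged R sketch (my addition) comparing the two approaches on the counts used elsewhere in this thread:

# Minimal sketch: normal-approximation test versus Fisher's exact test on the same table.
tab <- rbind(c(1556, 2455 - 1556), c(1671, 2730 - 1671))
prop.test(tab, correct = FALSE)$p.value   # large-sample (chi-squared / z) approximation
fisher.test(tab)$p.value                  # exact conditional test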
3,312
Test if two binomial distributions are statistically different from each other
Your test statistic is $Z = \frac{\hat{p_1}-\hat{p_2}}{\sqrt{\hat{p}(1-\hat{p})(1/n_1+1/n_2)}}$, where $\hat{p}=\frac{n_1\hat{p_1}+n_2\hat{p_2}}{n_1+n_2}$. The critical regions are $Z > \Phi^{-1}(1-\alpha/2)$ and $Z<\Phi^{-1}(\alpha/2)$ for the two-tailed test with the usual adjustments for a one-tailed test.
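A hedged R one-liner (my addition) for the two-tailed critical values at $\alpha = 0.05$:

# Two-tailed critical values Phi^{-1}(alpha/2) and Phi^{-1}(1 - alpha/2).
alpha <- 0.05
qnorm(c(alpha/2, 1 - alpha/2))   # approximately -1.96 and +1.96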
3,313
Test if two binomial distributions are statistically different from each other
Original post: Dan's answer is actually incorrect, not to offend anyone. A z-test is used only if your data follow a standard normal distribution. In this case, your data follow a binomial distribution, so use a chi-squared test if your sample is large or Fisher's exact test if your sample is small.

Edit: My mistake, apologies to @Dan. A z-test is valid here if your variables are independent. If this assumption is not met or is unknown, a z-test may be invalid.
3,314
Test if two binomial distributions are statistically different from each other
As suggested in other answers and comments, you can use an exact test that takes into account the origin of the data. Under the null hypothesis that the probability of success $\theta$ is the same in both experiments, $P \bigl(\begin{smallmatrix}k_1 & k_2 \\ n_1-k_1 & n_2-k_2\end{smallmatrix}\bigr) = \binom{n_1}{k_1}\binom{n_2}{k_2}\theta^{k_1 + k_2}\left(1-\theta\right)^{\left(n_1-k_1\right)+\left(n_2-k_2\right)}$. Notice that $P$ is not the p-value, but the probability of this result under the null hypothesis. To calculate the p-value, we need to consider all the cases whose $P$ is not higher than that of our result.

As noted in the question, the main problem is that we do not know the value of $\theta$. This is why it is called a nuisance parameter. Fisher's test solves this problem by making the test conditional on the experimental outcome, meaning that the only contingency tables considered in the calculation are those where the total number of successes is the same as in the example ($1556 + 1671 = 3227$). This condition may not be in accordance with the experimental design, but it also means that we do not need to deal with the nuisance parameter.

There are also unconditional exact tests. For instance, Barnard's test estimates the most likely value of the nuisance parameter and directly uses the binomial distribution with that parameter. Obviously, the problem here is how to estimate $\theta$, and there may be more than one answer for that. The original approach is to find the value of $\theta$ that maximizes $P$. Here you can find an explanation of both tests.

I have recently uploaded a preprint that employs a similar strategy to that of Barnard's test. However, instead of estimating $\theta$, this method (tentatively called the m-test) considers every possible value of this parameter and integrates over all of them. Using the same notation as in the question, $P \bigl(\begin{smallmatrix}k_1 & k_2 \\ n_1-k_1 & n_2-k_2\end{smallmatrix}\bigr) = \binom{n_1}{k_1}\binom{n_2}{k_2}\int_{0}^{1}\theta^{k_1 + k_2}\left(1-\theta\right)^{\left(n_1-k_1\right)+\left(n_2-k_2\right)}d\theta$. The calculation of the p-value can be simplified using the properties of the integral, as shown in the article.

Preliminary tests with Monte Carlo simulations suggest that the m-test is more powerful than the other exact tests at different significance levels. As a bonus, this test can easily be extended to more than two experiments, and also to more than two outcomes. The only limitation is speed, as many cases need to be considered. I have also prepared an R package to use the test (https://github.com/vqf/mtest). In this example:

>library(mtest)
>m <- matrix(c(1556, 2455-1556, 1671, 2730-1671), nrow = 2, byrow = F)
>m.test(m)
[1] 0.0837938

On my computer, this takes about 20 seconds, whereas Barnard's test takes much longer.
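For reference (my addition, not taken from the preprint): the integral over the nuisance parameter has a closed form as a Beta function, which gives one way to see how the computation can be simplified:

$$\int_{0}^{1}\theta^{k_1+k_2}\left(1-\theta\right)^{(n_1-k_1)+(n_2-k_2)}\,d\theta = B\!\left(k_1+k_2+1,\;(n_1-k_1)+(n_2-k_2)+1\right) = \frac{(k_1+k_2)!\,\bigl((n_1-k_1)+(n_2-k_2)\bigr)!}{(n_1+n_2+1)!}.$$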
3,315
Why is tanh almost always better than sigmoid as an activation function?
Yann LeCun and others argue in Efficient BackProp that Convergence is usually faster if the average of each input variable over the training set is close to zero. To see this, consider the extreme case where all the inputs are positive. Weights to a particular node in the first weight layer are updated by an amount proportional to $\delta x$ where $\delta$ is the (scalar) error at that node and $x$ is the input vector (see equations (5) and (10)). When all of the components of an input vector are positive, all of the updates of weights that feed into a node will have the same sign (i.e. sign($\delta$)). As a result, these weights can only all decrease or all increase together for a given input pattern. Thus, if a weight vector must change direction it can only do so by zigzagging, which is inefficient and thus very slow. This is why you should normalize your inputs so that the average is zero. The same logic applies to middle layers: This heuristic should be applied at all layers which means that we want the average of the outputs of a node to be close to zero because these outputs are the inputs to the next layer.

Postscript: @craq makes the point that this quote doesn't make sense for ReLU(x)=max(0,x), which has become a widely popular activation function. While ReLU does avoid the first zigzag problem mentioned by LeCun, it doesn't address LeCun's second point, that it is important to push the average to zero. I would love to know what LeCun has to say about this. In any case, there is a paper called Batch Normalization, which builds on top of the work of LeCun and offers a way to address this issue: It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened – i.e., linearly transformed to have zero means and unit variances, and decorrelated. As each layer observes the inputs produced by the layers below, it would be advantageous to achieve the same whitening of the inputs of each layer. By the way, this video by Siraj explains a lot about activation functions in 10 fun minutes.

@elkout says "The real reason that tanh is preferred compared to sigmoid (...) is that the derivatives of the tanh are larger than the derivatives of the sigmoid." I think this is a non-issue. I have never seen this being a problem in the literature. If it bothers you that one derivative is smaller than another, you can just scale it. The logistic function has the shape $\sigma(x)=\frac{1}{1+e^{-kx}}$. Usually, we use $k=1$, but nothing forbids you from using another value for $k$ to make your derivatives wider, if that was your problem. Nitpick: tanh is also a sigmoid function. Any function with an S shape is a sigmoid. What you are calling sigmoid is the logistic function. The reason why the logistic function is more popular is historical: it has been used for a longer time by statisticians. Besides, some feel that it is more biologically plausible.
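As a toy illustration of the zigzag argument (the numbers below are made up, not from the paper): for a single linear unit trained with squared loss, the gradient with respect to the weights is $\delta x$, so with all-positive inputs every component of the update shares the sign of $\delta$, while zero-centred inputs allow mixed signs.

    set.seed(1)
    x <- runif(5)                    # all-positive inputs (e.g. unnormalised features)
    w <- rnorm(5)
    y_target <- 1
    delta <- sum(w * x) - y_target   # scalar error at the unit (squared loss)
    sign(delta * x)                  # every gradient component has the sign of delta

    x_centred <- x - mean(x)         # the same inputs, shifted to zero mean
    sign(delta * x_centred)          # mixed signs: the update can change direction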
3,316
Why is tanh almost always better than sigmoid as an activation function?
It's not that it is necessarily better than the $\text{sigmoid}$. In other words, it's not the center of an activation function that makes it better. The idea behind both functions is the same, and they also share a similar "trend". Needless to say, the $\tanh$ function is a shifted (and rescaled) version of the $\text{sigmoid}$ function. The real reason that $\tanh$ is preferred to the $\text{sigmoid}$, especially with large datasets, where you are usually struggling to find the local (or global) minimum quickly, is that the derivatives of $\tanh$ are larger than the derivatives of the $\text{sigmoid}$. In other words, you minimize your cost function faster if you use $\tanh$ as an activation function. But why does the hyperbolic tangent have larger derivatives? Just to give you a very simple intuition, you may compare the graphs of the two functions. The fact that the range is between -1 and 1, compared to 0 and 1, makes the function more convenient for neural networks. Apart from that, if I use some math, I can prove that $$\tanh{x} = 2\sigma(2x)-1$$ and, in general, we may prove that in most cases $\Big|\frac{\partial\tanh (x)}{\partial x}\Big| > \Big|\frac{\partial\sigma (x)}{\partial x}\Big|$.
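A quick numerical check of the identity and of the derivative comparison, using only base R (the grid of evaluation points is arbitrary):

    sigma <- function(x) 1 / (1 + exp(-x))

    x <- seq(-3, 3, by = 0.5)
    max(abs(tanh(x) - (2 * sigma(2 * x) - 1)))   # ~0: tanh(x) = 2*sigma(2x) - 1

    # derivatives: 1 - tanh(x)^2 for tanh, sigma(x)*(1 - sigma(x)) for the logistic
    d_tanh  <- 1 - tanh(x)^2
    d_sigma <- sigma(x) * (1 - sigma(x))
    cbind(x, d_tanh, d_sigma)   # the tanh derivative dominates around the origin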
3,317
Why is tanh almost always better than sigmoid as an activation function?
It all essentially depends on the derivatives of the activation function. The main problem with the sigmoid function is that the maximum value of its derivative is 0.25, which means that the updates of the values of W and b will be small. The tanh function, on the other hand, has a derivative of up to 1.0, making the updates of W and b much larger. This makes the tanh function almost always better as an activation function (for hidden layers) than the sigmoid function. To prove this to myself (at least in a simple case), I coded a simple neural network and used sigmoid, tanh and relu as activation functions, then I plotted how the error value evolved. The full notebook I wrote is here: https://www.kaggle.com/moriano/a-showcase-of-how-relus-can-speed-up-the-learning. If it helps, comparing the charts of the derivatives of the tanh function and the sigmoid one is also instructive (pay attention to the vertical axis!)
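The two maxima quoted above are easy to verify numerically (a minimal check; both occur at x = 0):

    sigma <- function(x) 1 / (1 + exp(-x))

    # logistic derivative sigma(x) * (1 - sigma(x)) peaks at x = 0
    optimize(function(x) sigma(x) * (1 - sigma(x)), c(-5, 5), maximum = TRUE)$objective  # 0.25

    # tanh derivative 1 - tanh(x)^2 also peaks at x = 0
    optimize(function(x) 1 - tanh(x)^2, c(-5, 5), maximum = TRUE)$objective              # 1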
3,318
Why is tanh almost always better than sigmoid as an activation function?
Answering the part of the question so far unaddressed: Andrew Ng says that using the logistic function (commonly known as the sigmoid) really only makes sense in the final layer of a binary classification network. As the output of the network is expected to be between $0$ and $1$, the logistic is a perfect choice as its range is exactly $(0, 1)$. No scaling and shifting of $\tanh$ is required.
3,319
Why is tanh almost always better than sigmoid as an activation function?
Generally, a non-zero-centered activation function restricts the movement of the parameters across the error surface to certain directions, which makes training slower because it needs many more steps to move from the initial point to the minimum with these restricted movements. For more details, watch about 7 minutes of this video, starting from time 8:43.
3,320
Testing equality of coefficients from two different regressions
Although this isn't a common analysis, it really is one of interest. The accepted answer fits the way you asked your question, but I'm going to provide another reasonably well accepted technique that may or may not be equivalent (I'll leave it to better minds to comment on that). This approach is to use the following Z test: $Z = \frac{\beta_1-\beta_2}{\sqrt{SE_{\beta_1}^2+SE_{\beta_2}^2}}$, where $SE_{\beta}$ is the standard error of $\beta$. This equation is provided by Clogg, C. C., Petkova, E., & Haritou, A. (1995). Statistical methods for comparing regression coefficients between models. American Journal of Sociology, 100(5), 1261-1293, and is cited by Paternoster, R., Brame, R., Mazerolle, P., & Piquero, A. (1998). Using the correct statistical test for equality of regression coefficients. Criminology, 36(4), 859-866, equation 4, which is available free of a paywall. I've adapted Paternoster's formula to use $\beta$ rather than $b$ because it is possible that you might be interested in different DVs for some awful reason and my memory of Clogg et al. was that their formula used $\beta$. I also remember cross-checking this formula against Cohen, Cohen, West, and Aiken, and the root of the same thinking can be found there in the confidence interval of differences between coefficients, equation 2.8.6, pp. 46-47.
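A minimal sketch of this test in R with hypothetical numbers (the coefficients and standard errors below are made up; in practice they would come from two separate model summaries):

    # hypothetical estimates and standard errors from two separate regressions
    b1 <- 0.52; se1 <- 0.11
    b2 <- 0.31; se2 <- 0.09

    z <- (b1 - b2) / sqrt(se1^2 + se2^2)   # Clogg / Paternoster formula
    p <- 2 * pnorm(-abs(z))                # two-sided p value
    c(z = z, p = p)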
3,321
Testing equality of coefficients from two different regressions
For people with a similar question, let me provide a simple outline of the answer. The trick is to set up the two equations as a system of seemingly unrelated equations and to estimate them jointly. That is, we stack $y_1$ and $y_2$ on top of each other and do more or less the same with the design matrix. The system to be estimated is: $\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} + \begin{pmatrix} e_1 \\ e_2 \end{pmatrix}$. This will lead to a variance-covariance matrix that allows us to test for equality of the two coefficients, as sketched below.
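Here is a rough base-R sketch of the stacking idea for two simple regressions with one predictor each, on simulated data (the variable names are my own; this pools the residual variance across the two equations, which is a simplification of full SUR/GLS estimation):

    set.seed(123)
    n <- 100
    x1 <- rnorm(n); y1 <- 1 + 0.5 * x1 + rnorm(n)
    x2 <- rnorm(n); y2 <- 2 + 0.8 * x2 + rnorm(n)

    # stack the outcomes and build a block-diagonal design via a group indicator
    y   <- c(y1, y2)
    x   <- c(x1, x2)
    grp <- factor(rep(c("eq1", "eq2"), each = n))
    fit <- lm(y ~ 0 + grp + grp:x)     # separate intercept and slope per equation

    # Wald test of equal slopes, beta_1 = beta_2, using the joint covariance matrix
    cf <- coef(fit); V <- vcov(fit)
    d  <- cf["grpeq1:x"] - cf["grpeq2:x"]
    se <- sqrt(V["grpeq1:x", "grpeq1:x"] + V["grpeq2:x", "grpeq2:x"]
               - 2 * V["grpeq1:x", "grpeq2:x"])
    z  <- unname(d / se)
    c(z = z, p = 2 * pnorm(-abs(z)))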
3,322
Testing equality of coefficients from two different regressions
When the regressions come from two different samples, you can assume $Var(\beta_1-\beta_2)=Var(\beta_1)+Var(\beta_2)$, which leads to the formula provided in another answer. But your question was precisely about the case when $Cov(\beta_1,\beta_2) \neq 0$. In this case, seemingly unrelated equations seem to be the most general approach. Yet it will provide different coefficients from the ones in the original equations, which may not be what you are looking for. Clogg, C. C., Petkova, E., & Haritou, A. (1995). Statistical methods for comparing regression coefficients between models. American Journal of Sociology, 100(5), 1261-1293, presents an answer for the special case of nested equations (i.e. to get the second equation, take the first equation and add a few explanatory variables). They say it is easy to implement. If I understand it correctly, in this special case a Hausman test can also be implemented. The key difference is that their test treats the second (full) equation as true, while the Hausman test treats the first equation as true. Note that Clogg et al. (1995) is not suited for panel data. But their test has been generalized by Yan, J., Aseltine Jr, R. H., & Harel, O. (2013). Comparing regression coefficients between nested linear models for clustered data with generalized estimating equations. Journal of Educational and Behavioral Statistics, 38(2), 172-189, with a package provided in R: geepack. See https://www.jstor.org/stable/pdf/41999419.pdf?refreqid=excelsior%3Aa0a3b20f2bc68223edb59e3254c234be&seq=1 and (for the R package) https://cran.r-project.org/web/packages/geepack/index.html
3,323
Testing equality of coefficients from two different regressions
Using some data, here is how you could use the Clifford Clogg et al. (1995) paper cited by Ray Paternoster et al. (1998). I have a small script, which can be improved, to do that. This assumes that you are using the R language and that you have two sets of regression coefficients, together with their standard errors, extracted from your models into two data frames like the ones below (for an lm or glm fit, summary(model)$coefficients contains both the estimates and the standard errors). I have truncated the outputs to only those germane to this illustration:

    df1 <- data.frame(
      estimate  = c(15.2418519, 2.2215987, 0.3889724, 0.5289710),
      std.error = c(1.0958919, 0.2487793, 0.1973446, 0.1639074),
      row.names = c('(Intercept)', 'psychoticism', 'extraversion', 'neuroticism')
    ); df1

    df2 <- data.frame(
      estimate  = c(17.2373874, 0.8350460, -0.3714803, 1.0382513),
      std.error = c(1.0987151, 0.2494201, 0.1978530, 0.1643297),
      row.names = c('(Intercept)', 'psychoticism', 'extraversion', 'neuroticism')
    ); df2

The next step is to compare the coefficients. The function assumes that you are comparing all the coefficients between two models; if that is not the case, you can modify the script as needed. The calculations are done in a step-by-step manner so it is easy to follow along with the formula provided by Clogg et al. (1995), and I have commented it liberally. Anyway, below is the script:

    compare_coefs <- function(.data1, .data2){
      # drop the intercept row, then pull out coefficients and standard errors
      b1  <- .data1[-1, "estimate"]   # regression coefficients, model 1
      se1 <- .data1[-1, "std.error"]  # standard errors, model 1
      b2  <- .data2[-1, "estimate"]   # regression coefficients, model 2
      se2 <- .data2[-1, "std.error"]  # standard errors, model 2

      # Clogg et al. (1995) formula as cited by Ray Paternoster et al. (1998)
      b  <- b1 - b2
      sc <- se1^2 + se2^2
      v  <- b / sqrt(sc)

      data.frame(diff = b, zdiff = v,
                 `p-value` = format(2 * pnorm(-abs(v)), scientific = FALSE),
                 row.names = rownames(.data1)[-1], check.names = FALSE)
    }

Note: in this example, I am comparing the effects of personality characteristics on two indicators of criminality. Specifically, I wanted to investigate whether the effects (the regression coefficients) of personality characteristics are the same across those two indicators.
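Usage with the two data frames defined above is then simply:

    compare_coefs(df1, df2)
    # one row per predictor (psychoticism, extraversion, neuroticism) with the
    # coefficient difference, its z statistic, and a two-sided p value; e.g. for
    # psychoticism: (2.2216 - 0.8350) / sqrt(0.2488^2 + 0.2494^2) is roughly 3.94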
3,324
Taleb and the Black Swan
I read the Black Swan a couple of years ago. The Black Swan idea is good and the attack on the ludic fallacy (seeing things as though they are dice games, with knowable probabilities) is good but statistics is outrageously misrepresented, with the central problem being the wrong claim that all statistics falls apart if variables are not normally distributed. I was sufficiently annoyed by this aspect to write Taleb the letter below: Dear Dr Taleb I recently read "The Black Swan". Like you, I am a fan of Karl Popper, and I found myself agreeing with much that is in it. I think your exposition of the ludic fallacy is basically sound, and draws attention to a real and common problem. However, I think that much of Part III lets your overall argument down badly, even to the point of possibly discrediting the rest of the book. This is a shame, as I think the arguments with regard to Black Swans and "unknown unknowns" stand on their merits without relying on some of the errors in Part III. The main issue I wish to point out - and seek your response on, particularly if I have misunderstood issues - is your misrepresentation of the field of applied statistics. In my judgement, chapters 14, 15 and 16 depend largely upon a straw man argument, misrepresenting statistics and econometrics. The field of econometrics that you describe is not the one that I was taught when I studied applied statistics, econometrics, and actuarial risk theory (at the Australian National University, but using texts that seemed pretty standard). The issues that you raise (such as the limitations of Gaussian distributions) are well and truly understood and taught, even at the undergraduate level. For example, you go to some lengths to show how income distribution does not follow a normal distribution, and present this as an argument against statistical practice in general. No competent statistician would ever claim that it does, and ways of dealing with this issue are well established. Just using techniques from the very most basic "first year econometrics" level, for example, transforming the variable by taking its logarithm would make your numerical examples look much less convincing. Such a transformation would in fact invalidate much of what you say, because then the variance of the original variable does increase as its mean increases. I am sure there are some incompetent econometricians who do OLS regressions etc with an untransformed response variable the way you say, but that just makes them incompetent and using techniques which are well established to be inappropriate. They would certainly have been failed even in undergraduate courses, which spend much time looking for more appropriate ways of modelling variables such as income, reflecting the actual observed (non-Gaussian) distribution. The family of Generalized Linear Models is one set of techniques developed in part to get around the problems you raise. Many of the exponential family of distributions (eg Gamma, Exponential, and Poisson distributions) are asymmetrical and have variance that increases as the centre of the distribution increases, getting around the problem you point out with using the Gaussian distribution. If this is still too limiting, it is possible to drop a pre-existing "shape" altogether and simply specify a relationship between the mean of a distribution and its variance (eg allowing the variance to increase proportionately to the square of the mean), using the "quasi-likelihood" method of estimation. 
Of course, you could argue that this form of modelling is still too simplistic and an intellectual trap that lulls us into thinking the future will be like the past. You may be correct, and I think the strength of your book is to make people like me consider this. But you need different arguments to those that you use in chapters 14-16. The great weight you place on the fact that the variance of the Gaussian distribution is constant regardless of its mean (which causes problems with scalability), for instance, is invalid. So is your emphasis on the fact that real-life distributions tend to be asymmetric rather than bell-curves. Basically, you have taken one over-simplification of the most basic approach to statistics (naïve modelling of raw variables as having Gaussian distributions) and shown, at great length, (correctly) the shortcomings of such an oversimplified approach. You then use this to make the leap to discredit the whole field. This is either a serious lapse in logic, or a propaganda technique. It is unfortunate because it detracts from your overall argument, much of which (as I said) I found valid and persuasive. I would be interested to hear what you say in response. I doubt I am the first to have raised this issue. Yours sincerely PE
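To make the GLM point concrete, here is a small, entirely simulated R sketch (not part of the letter; the data, predictor name, and coefficients are invented): a skewed, income-like response whose variance grows with its mean is fitted with a Gamma GLM with a log link, rather than a Gaussian model on the raw scale.

    set.seed(42)
    n   <- 500
    edu <- rnorm(n)                               # one invented predictor
    mu  <- exp(9 + 0.3 * edu)                     # mean "income" grows multiplicatively
    income <- rgamma(n, shape = 2, rate = 2 / mu) # skewed; variance grows with the mean

    fit_gamma <- glm(income ~ edu, family = Gamma(link = "log"))
    fit_naive <- lm(income ~ edu)                 # the naive Gaussian fit, for contrast
    summary(fit_gamma)$coefficients               # slope is estimated near the true 0.3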
3,325
Taleb and the Black Swan
I did read "The Black Swan", I did enjoy it, and I am a statistician. I didn't find its "criticism of statistics" unbearable at all. Point by point: Taleb did not invent the concept of the black swan. It had been a favored example in philosophical thought for quite a while! Taleb is not so much criticizing "statistics" as certain (bad) applications of it. The book was a bestseller. It was not directed toward statisticians, but to the general public. It did very well in teaching that public about things statisticians knew very well, but many of the other readers (the majority!) did not. So we could learn a lot from that book about how to "sell" statistics. Most important (for me), Taleb included a lot of references to ancient Greek skeptical philosophy. Nobody else has mentioned that point here, but I think that inclusion was the real selling point of the book! The book is a literary work, not a technical work. If you want to criticize Taleb for his technical work, go to his homepage and download some of his technical papers. Those who do not like this answer, or who dislike the book, can have a look at Taleb's technical arguments in the more recent "Silent Risk" (https://fernandonogueiracosta.files.wordpress.com/2014/07/taleb-nassim-silent-risk.pdf), which is technical.
3,326
Taleb and the Black Swan
I've not read the book, but as stated the criticism seems pretty unreasonable to me. If extreme events are important, then statistics has appropriate tools in the toolbox, such as extreme value theory, and a good statistician will know how to use them (or at least find out how to use them and will be sufficiently engaged with the purpose of the analysis to look). The criticism seems to be "statistics is bad because there are bad statisticians that only know about normal distributions".
3,327
Taleb and the Black Swan
Saying that " the thrust of the book is that statistics is not very useful " is inaccurate, I think. Having read the book, what he appears to be saying is that things like quantitative finance or any sort of securities trading that assumes a normal distribution is fundamentally flawed (actually, in the book, he calls people who claim to use these models to make predictions, "charlatans"). According to Taleb, while the normal distribution does a great job of modelling the values of tangible/physical things (eg. height, weight, life span etc.), systems like the markets are often driven by human emotion and thus, are prone to large swings that normal distributions cannot accurately predict. I don't understand statistics well, and until reading the answers here, I'd never heard of things like extreme value theory. Regardless, The Black Swan and Fooled By Randomness seem to have similar premises, which is "normal distribution not always OK". I don't recall him defaming the entire field of statistics.
3,328
Taleb and the Black Swan
I strongly recommend Dennis Lindley's review of this book. It contains a number of devastating arguments against the poor and arrogant exposition of ideas in the book: http://onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2008.00281.x/abstract The Black Swan is another example where being a "Best-seller" does not guarantee high quality content.
3,329
Taleb and the Black Swan
I haven't read the Black Swan, but if his criticism of statistics is really as simple as you say, then it's ridiculous. Obviously some statistics relies on the Normal distribution, but much does not. Can rare events be modeled? Of course they can. The real question is how well they can be modeled. And that question will have different answers in different fields, based on how much we know about the rare events and their antecedents. In today's NY Times Magazine there's an interesting article by Nate Silver on how weather forecasting has improved in the last decade or so. This includes better modeling of rare events such as hurricanes. Is the book worth reading?
3,330
Taleb and the Black Swan
I also have not read the book, but there is no way that his point can be as simplistic as saying that there are distributions with fatter tails than the normal distribution. This would be a comment to the other answers, but I have not accumulated enough accolades on this website. From Wikipedia: "He states that statistics is fundamentally incomplete as a field as it cannot predict the risk of rare events..." This question is also quite similar to What is the community's take on the Fourth Quadrant?
3,331
Taleb and the Black Swan
I don't think Taleb would actually say that statistical techniques relying on the Gaussian distribution are not useful. His point in the book was that they are highly useful for many (but not all) physical or biological processes and modelling. He makes some good points and some bad (The Black Swan and Linked were the beginning of the "everything is a power law!" plague that still haunts us today), but it's important to remember that the book is a collection of literary and philosophical essays meant for the lay person. That said, I think Taleb likes to aggravate people. You can see this in his battle with Myron Scholes. In this case it may have been useful, as statistical education at the undergrad level, and sometimes at the graduate level, sort of flits over the assumption of Gaussian distributions. I imagine during his years in finance he came across a large number of quants with a great knowledge of Black-Scholes and other techniques but who did not consider underlying assumptions like the distribution. I suspect Taleb was poking at the educational establishment for a failure to properly educate.
3,332
Taleb and the Black Swan
Those of you who have not read the book are way off base. He makes a LARGE distinction between the scalable and unscalable. For unscalable matters traditional stats will serve one well enough. He is not critiquing that whatsoever. Black Swans originate in the scalable and are hard to predict given past empirical data. The book is about how these events can have enormous impact and are generally only explained after the fact. The epistemology is excellent.
3,333
Standard errors for lasso prediction using R
Kyung et al. (2010), "Penalized regression, standard errors, and Bayesian lassos", Bayesian Analysis, 5, 2, suggest that there might not be a consensus on a statistically valid method of calculating standard errors for lasso predictions. Tibshirani seems to agree (slide 43) that standard errors are still an unresolved issue.
3,334
Standard errors for lasso prediction using R
On a related note, which may be helpful, Tibshirani and colleagues have proposed a significance test for the lasso. The paper is titled "A significance test for the lasso"; a free version of the paper can be found here.
3,335
Standard errors for lasso prediction using R
Sandipan Karmakar's answer tells you what to do; this should help you with the "how":

> library(monomvn)
>
> ## following the lars diabetes example
> data(diabetes)
> str(diabetes)
'data.frame':   442 obs. of  3 variables:
 $ x : AsIs [1:442, 1:10] 0.038075.... -0.00188.... 0.085298.... -0.08906.... 0.005383.... ...
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : NULL
  .. ..$ : chr "age" "sex" "bmi" "map" ...
 $ y : num  151 75 141 206 135 97 138 63 110 310 ...
[...]
> ## Bayesian Lasso regression
> reg_blas <- with(diabetes, blasso(x, y))
t=100, m=8
t=200, m=5
t=300, m=8
t=400, m=8
t=500, m=7
t=600, m=8
t=700, m=8
t=800, m=8
t=900, m=5
>
> ## posterior mean beta (setting those with >50% mass at zero to exactly zero)
> (beta <- colMeans(reg_blas$beta) * (colMeans(reg_blas$beta != 0) > 0.5))
      b.1       b.2       b.3       b.4       b.5       b.6       b.7       b.8
   0.0000 -195.9795  532.7136  309.1673 -101.1288    0.0000 -196.4315    0.0000
      b.9      b.10
 505.4726    0.0000
>
> ## n x nsims matrix of realizations from the posterior predictive:
> post_pred_y <- with(reg_blas, X %*% t(beta))
>
> ## predictions:
> y_pred <- rowMeans(post_pred_y)
> head(y_pred)
[1]  52.772443 -78.690610  24.234753   9.717777 -23.360369 -45.477199
>
> ## sd of y:
> sd_y <- apply(post_pred_y, 1, sd)
> head(sd_y)
[1] 6.331673 6.756569 6.031290 5.236101 5.657265 6.150473
>
> ## 90% credible intervals
> ci_y <- t(apply(post_pred_y, 1, quantile, probs=c(0.05, 0.95)))
> head(ci_y)
             5%        95%
[1,]  42.842535  62.56743
[2,] -88.877760 -68.47159
[3,]  14.933617  33.85679
[4,]   1.297094  18.01523
[5,] -32.709132 -14.13260
[6,] -55.533807 -35.77809
3,336
Standard errors for lasso prediction using R
The Bayesian LASSO is arguably the cleanest solution to the problem of calculating standard errors: they are automatically available from the posterior draws, and you can implement the Bayesian LASSO very easily using a Gibbs sampling scheme. The Bayesian LASSO needs prior distributions to be assigned to the parameters of the model. In the LASSO model we have the objective function $||\mathbf{y}-\mathbf{X}\boldsymbol{\beta}||_2^2 + \lambda||\boldsymbol{\beta}||_1$, with $\lambda$ the regularization parameter. Because of the $\ell_1$-norm on $\boldsymbol{\beta}$, the corresponding prior is the Laplace distribution, which can be written as a scale mixture of normal distributions with an exponential mixing density. From this hierarchical representation, the full conditional posteriors of each of the parameters can be derived, and Gibbs sampling can then be used to simulate the chain. See Park & Casella (2008), "The Bayesian Lasso", JASA, 103, 482. There are three inherent drawbacks of the frequentist LASSO: (1) one has to choose $\lambda$ by cross-validation or other means; (2) standard errors are difficult to calculate, as LARS and other algorithms produce only point estimates of $\boldsymbol{\beta}$; (3) the hierarchical structure of the problem at hand cannot easily be encoded in a frequentist model, whereas it is quite easy in the Bayesian framework.
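To make the Gibbs-sampling idea concrete, here is a minimal sketch of the Park & Casella sampler in R. It is my own illustration, not code from the original answer: the function name bayes_lasso_gibbs is made up, $\lambda$ is held fixed rather than given a hyperprior, y is assumed centred and X standardized, and the MASS and statmod packages supply the multivariate normal and inverse-Gaussian draws.

library(MASS)      # mvrnorm for the multivariate normal draw
library(statmod)   # rinvgauss for the inverse-Gaussian draw

## Minimal Gibbs sampler for the Bayesian lasso of Park & Casella (2008).
## Assumes y is centred and the columns of X are standardized; lambda is held
## fixed here (it could instead be given a gamma hyperprior or updated by
## marginal maximum likelihood).
bayes_lasso_gibbs <- function(X, y, lambda = 1, n_iter = 2000) {
  n <- nrow(X); p <- ncol(X)
  XtX <- crossprod(X); Xty <- drop(crossprod(X, y))
  beta <- rep(0, p); sigma2 <- var(as.numeric(y)); tau2 <- rep(1, p)
  draws <- matrix(NA_real_, n_iter, p)
  for (it in seq_len(n_iter)) {
    ## beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}) with A = X'X + diag(1/tau2)
    A_inv <- solve(XtX + diag(1 / tau2, nrow = p))
    beta  <- mvrnorm(1, A_inv %*% Xty, sigma2 * A_inv)
    ## sigma2 | rest ~ Inverse-Gamma((n - 1 + p)/2, rate)
    resid  <- y - X %*% beta
    rate   <- 0.5 * (sum(resid^2) + sum(beta^2 / tau2))
    sigma2 <- 1 / rgamma(1, shape = (n - 1 + p) / 2, rate = rate)
    ## 1/tau2_j | rest ~ Inverse-Gaussian(sqrt(lambda^2 sigma2 / beta_j^2), lambda^2)
    tau2 <- 1 / rinvgauss(p, mean = sqrt(lambda^2 * sigma2 / beta^2), shape = lambda^2)
    draws[it, ] <- beta
  }
  draws
}

The column-wise standard deviations of the retained draws, e.g. apply(draws[-(1:500), ], 2, sd), then play the role of the standard errors discussed above.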
3,337
Standard errors for lasso prediction using R
To add to the answers above, the issue appears to be that even a bootstrap is likely insufficient, since the estimate from the penalized model is biased and bootstrapping only speaks to the variance, ignoring the bias of the estimate. This is nicely summarized in the vignette for the penalized package on page 18. If the model is being used for prediction, however, why is a standard error from the model required? Can you not cross-validate or bootstrap appropriately and produce a standard error around a metric related to prediction, such as the MSE?
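As a rough sketch of that cross-validation route (my own illustration on simulated data, not code from the answer): cv.glmnet reports, for each candidate lambda, both the cross-validated estimate of the MSE (cvm) and its standard error across folds (cvsd).

library(glmnet)
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:3] %*% c(2, -1, 1)) + rnorm(n)
cvfit <- cv.glmnet(X, y, nfolds = 10, type.measure = "mse")
i <- which(cvfit$lambda == cvfit$lambda.min)
c(cv_mse = cvfit$cvm[i], cv_mse_se = cvfit$cvsd[i])   # CV error estimate and its standard error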
3,338
Standard errors for lasso prediction using R
There is the selectiveInference package in R, https://cran.r-project.org/web/packages/selectiveInference/index.html, which provides confidence intervals and p-values for coefficients fitted by the LASSO, based on the following paper: Stephen Reid, Jerome Friedman, and Rob Tibshirani (2014). A study of error variance estimation in lasso regression. arXiv:1311.5274. PS: I just realised that this produces error estimates for your parameters; I am not sure about the error on your final prediction, if that's what you're after... I suppose you could use "population prediction intervals" for that if you like (by resampling parameters according to the fit, assuming a multivariate normal distribution).
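A rough usage sketch (my own, on simulated data, not from the original answer): the data and penalty value are made up, and the lambda-scaling convention (glmnet's lambda is on the 1/n scale, while fixedLassoInf expects the penalty on the sum-of-squares scale) should be checked against the package documentation.

library(glmnet)
library(selectiveInference)
set.seed(1)
n <- 100; p <- 10
x <- scale(matrix(rnorm(n * p), n, p))
y <- drop(x[, 1] * 2 + rnorm(n))
lam  <- 20                                        # penalty on the sum-of-squares scale
gfit <- glmnet(x, y, standardize = FALSE)
b    <- as.numeric(coef(gfit, s = lam / n, exact = TRUE, x = x, y = y))[-1]  # lasso solution, no intercept
out  <- fixedLassoInf(x, y, b, lam)               # post-selection CIs and p-values
out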
3,339
How do you calculate the probability density function of the maximum of a sample of IID uniform random variables?
It is possible that this question is homework, but I felt this classical elementary probability question was still lacking a complete answer after several months, so I'll give one here. From the problem statement, we want the distribution of $$Y = \max \{ X_1, ..., X_n \}$$ where $X_1, ..., X_n$ are iid ${\rm Uniform}(a,b)$. We know that $Y \leq x$ if and only if every element of the sample is less than or equal to $x$. This, as indicated in @varty's hint, combined with the fact that the $X_i$'s are independent, allows us to deduce $$ P(Y \leq x) = P(X_1 \leq x, ..., X_n \leq x) = \prod_{i=1}^{n} P(X_i \leq x) = F_{X}(x)^n$$ where $F_{X}(x)$ is the CDF of the uniform distribution, which equals $\frac{x-a}{b-a}$ for $x \in (a,b)$. Therefore the CDF of $Y$ is $$F_{Y}(y) = P(Y \leq y) = \begin{cases} 0 & y \leq a \\ \left( \frac{y-a}{b-a} \right)^n & y\in(a,b) \\ 1 & y \geq b \\ \end{cases}$$ Since $Y$ has an absolutely continuous distribution, we can derive its density by differentiating the CDF. Therefore the density of $Y$ is $$ p_{Y}(y) = \frac{n(y-a)^{n-1}}{(b-a)^{n}}, \qquad y \in (a,b).$$ In the special case where $a=0,b=1$, we have $p_{Y}(y)=ny^{n-1}$, which is the density of a Beta distribution with $\alpha=n$ and $\beta=1$, since its normalizing constant is $\frac{1}{B(n,1)} = \frac{\Gamma(n+1)}{\Gamma(n)\Gamma(1)}=\frac{n!}{(n-1)!} = n$. As a note, the sequence you get if you were to sort your sample in increasing order - $X_{(1)}, ..., X_{(n)}$ - is called the order statistics. A generalization of this answer is that all order statistics of a ${\rm Uniform}(0,1)$ distributed sample have a Beta distribution, as noted in @bnaul's answer.
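As a quick sanity check (my own addition, not part of the original answer), one can simulate maxima of uniform samples in R and compare the histogram with the Beta(n, 1) density $ny^{n-1}$:

set.seed(42)
n <- 5; nsim <- 1e5
y_max <- replicate(nsim, max(runif(n)))                       # maxima of n iid U(0,1) draws
hist(y_max, breaks = 50, freq = FALSE, main = "Maximum of 5 iid U(0,1) variables")
curve(n * x^(n - 1), from = 0, to = 1, add = TRUE, lwd = 2)   # theoretical density n * y^(n-1)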
3,340
How do you calculate the probability density function of the maximum of a sample of IID uniform random variables?
The maximum of a sample is one of the order statistics, in particular the $n$th order statistic of the sample $X_1,\dots,X_n$. In general, computing the distribution of order statistics is difficult, as described by the Wikipedia article; for some special distributions, the order statistics are well-known (e.g. for the uniform distribution, which has Beta-distributed order statistics). EDIT: The Wikipedia article on sample maximum and minimum is also helpful and more specific to your problem.
3,341
How do you calculate the probability density function of the maximum of a sample of IID uniform random variables?
If $F_{Y}(y)$ is the CDF of $Y$, then $$F_Y(y)=\Pr(Y \leq y) = \Pr(X_1 \leq y, X_2 \leq y, ..., X_n \leq y).$$ You can then use the iid property and the CDF of a uniform variate to compute $F_Y(y)$.
3,342
How do you calculate the probability density function of the maximum of a sample of IID uniform random variables?
The maximum of a set of IID random variables, when appropriately normalized, will generally converge to one of the three extreme value types. This is Gnedenko's theorem, the analogue of the central limit theorem for extremes. The particular type depends on the tail behavior of the population distribution. Knowing this, you can use the limiting distribution to approximate the distribution of the maximum. Since the uniform distribution on $[a, b]$ is the subject of this question, Macro has given the exact distribution for any $n$ and a very nice answer; the result is rather trivial. For the normal distribution a nice closed form is not possible, but, appropriately normalized, the maximum for the normal converges to the Gumbel distribution $F(x)=\exp(-e^{-x})$. For the uniform, the appropriate normalization is $y = b - x/n$, and $F^n(b - x/n)=\left(1-\frac{x}{n(b-a)}\right)^n$, which converges to $e^{-x/(b-a)}$ as $n \to \infty$. This holds for all $x \geq 0$, and $F^n(y)$ converges to $1$ as $y$ goes to $b$. In this case it is easy to compare the exact value to its asymptotic limit. Some references: Gumbel's book, Galambos' book, Leadbetter's book, Novak's book, Coles' book.
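A small numerical check of the uniform case (my own addition; the values of a, b and n are arbitrary choices for illustration), comparing the exact value $F^n(b - x/n)$ with its asymptotic limit $e^{-x/(b-a)}$:

a <- 0; b <- 2; n <- 50
x <- c(0, 0.5, 1, 2, 5, 10)
exact <- (1 - x / (n * (b - a)))^n   # exact: F^n(b - x/n) for Uniform(a, b)
limit <- exp(-x / (b - a))           # limiting exponential form
round(cbind(x, exact, limit), 4)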
3,343
Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
Controlling for something and ignoring something are not the same thing. Let's consider a universe in which only 3 variables exist: $Y$, $X_1$, and $X_2$. We want to build a regression model that predicts $Y$, and we are especially interested in its relationship with $X_1$. There are two basic possibilities. We could assess the relationship between $X_1$ and $Y$ while controlling for $X_2$: $$ Y = \beta_0 + \beta_1X_1 + \beta_2X_2 $$ or, we could assess the relationship between $X_1$ and $Y$ while ignoring $X_2$: $$ Y = \beta_0 + \beta_1X_1 $$ Granted, these are very simple models, but they constitute different ways of looking at how the relationship between $X_1$ and $Y$ manifests. Often, the estimated $\hat\beta_1$s might be similar in both models, but they can be quite different. What is most important in determining how different they are is the relationship (or lack thereof) between $X_1$ and $X_2$. Consider this figure: In this scenario, $X_1$ is correlated with $X_2$. Since the plot is two-dimensional, it sort of ignores $X_2$ (perhaps ironically), so I have indicated the values of $X_2$ for each point with distinct symbols and colors (the pseudo-3D plot below provides another way to try to display the structure of the data). If we fit a regression model that ignored $X_2$, we would get the solid black regression line. If we fit a model that controlled for $X_2$, we would get a regression plane, which is again hard to plot, so I have plotted three slices through that plane where $X_2=1$, $X_2=2$, and $X_2=3$. Thus, we have the lines that show the relationship between $X_1$ and $Y$ that hold when we control for $X_2$. Of note, we see that controlling for $X_2$ does not yield a single line, but a set of lines. Another way to think about the distinction between ignoring and controlling for another variable, is to consider the distinction between a marginal distribution and a conditional distribution. Consider this figure: (This is taken from my answer here: What is the intuition behind conditional Gaussian distributions?) If you look at the normal curve drawn to the left of the main figure, that is the marginal distribution of $Y$. It is the distribution of $Y$ if we ignore its relationship with $X$. Within the main figure, there are two normal curves representing conditional distributions of $Y$ when $X_1 = 25$ and $X_1 = 45$. The conditional distributions control for the level of $X_1$, whereas the marginal distribution ignores it.
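A small simulation (my own addition, not part of the original answer) makes the same point numerically: when $X_1$ and $X_2$ are correlated, the estimated coefficient on $X_1$ depends on whether $X_2$ is ignored or controlled for.

set.seed(1)
n  <- 200
x2 <- rnorm(n)
x1 <- 0.8 * x2 + rnorm(n, sd = 0.6)        # X1 correlated with X2
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)
coef(lm(y ~ x1))        # ignoring X2: the slope on x1 absorbs part of X2's effect
coef(lm(y ~ x1 + x2))   # controlling for X2: the slope on x1 is close to 2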
3,344
Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
They are not ignored. If they were 'ignored' they would not be in the model. The estimate of the explanatory variable of interest is conditional on the other variables. The estimate is formed "in the context of" or "allowing for the impact of" the other variables in the model.
3,345
Why is expectation the same as the arithmetic mean?
Informally, a probability distribution defines the relative frequency of outcomes of a random variable - the expected value can be thought of as a weighted average of those outcomes (weighted by the relative frequency). Similarly, the expected value can be thought of as the arithmetic mean of a set of numbers generated in exact proportion to their probability of occurring (in the case of a continuous random variable this isn't exactly true since specific values have probability $0$). The connection between the expected value and the arithmetic mean is most clear with a discrete random variable, where the expected value is $$ E(X) = \sum_{S} x P(X=x) $$ where $S$ is the sample space. As an example, suppose you have a discrete random variable $X$ such that: $$ X = \begin{cases} 1 & \mbox{with probability } 1/8 \\ 2 & \mbox{with probability } 3/8 \\ 3 & \mbox{with probability } 1/2 \end{cases} $$ That is, the probability mass function is $P(X=1)=1/8$, $P(X=2)=3/8$, and $P(X=3)=1/2$. Using the formula above, the expected value is $$ E(X) = 1\cdot (1/8) + 2 \cdot (3/8) + 3 \cdot (1/2) = 2.375 $$ Now consider numbers generated with frequencies exactly proportional to the probability mass function - for example, the set of numbers $\{1,1,2,2,2,2,2,2,3,3,3,3,3,3,3,3\}$ - two $1$s, six $2$s and eight $3$s. Now take the arithmetic mean of these numbers: $$ \frac{1+1+2+2+2+2+2+2+3+3+3+3+3+3+3+3}{16} = 2.375 $$ and you can see it's exactly equal to the expected value.
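A one-line check in R (my own addition): the probability-weighted sum equals the arithmetic mean of values generated in exact proportion to their probabilities.

x <- c(1, 2, 3); p <- c(1/8, 3/8, 1/2)
sum(x * p)                            # expected value: 2.375
mean(rep(x, times = c(2, 6, 8)))      # arithmetic mean of the 16 numbers: 2.375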
3,346
Why is expectation the same as the arithmetic mean?
The expectation is the average value or mean of a random variable, not of a probability distribution. As such, for discrete random variables it is the weighted average of the values the random variable takes on, where the weighting is according to the relative frequency of occurrence of those individual values. For an absolutely continuous random variable it is the integral of the values $x$ multiplied by the probability density. Observed data can be viewed as the values of a collection of independent identically distributed random variables. The sample mean (or sample expectation) is defined as the expectation of the data with respect to the empirical distribution of the observed data. This makes it simply the arithmetic average of the data.
3,347
Why is expectation the same as the arithmetic mean?
Let's pay close attention to the definitions:

The mean is defined as the sum of a collection of numbers divided by the number of numbers in the collection. The calculation is "for i in 1 to n, (sum of x sub i) divided by n."

The expected value (EV) is the long-run average value of repetitions of the experiment it represents. The calculation is "for i in 1 to n, the sum of event x sub i times its probability p sub i (where the p sub i must sum to 1)."

In the case of a fair die, it is easy to see that the mean and the EV are the same. The mean is (1+2+3+4+5+6)/6 = 3.5, and the EV is:

 prob   x   p*x
0.167   1   0.17
0.167   2   0.33
0.167   3   0.50
0.167   4   0.67
0.167   5   0.83
0.167   6   1.00

EV = sum(p*x) = 3.50

But what if the die were not "fair"? An easy way to make an unfair die would be to drill a hole in the corner at the intersection of the 4, 5, and 6 faces. Further, let's now say that the probability of rolling a 4, 5, or 6 on our new and improved crooked die is now 0.2 and the probability of rolling a 1, 2, or 3 is now 0.133. It is the same die with 6 faces, one number on each face, and the mean for this die is still 3.5. However, after rolling this die many times, our EV is now 3.8 because the probabilities of the events are no longer the same for all events.

 prob   x   p*x
0.133   1   0.13
0.133   2   0.27
0.133   3   0.40
0.200   4   0.80
0.200   5   1.00
0.200   6   1.20

EV = sum(p*x) = 3.80

Again, let's be careful and go back to the definition before concluding that one thing will always be "the same" as another. Take a look at how a normal die is set up, drill a hole in the other 7 corners, and see how the EVs change - have fun. Bob_T
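The same calculation in R (my own addition), for the fair and the drilled die:

x <- 1:6
sum(x * rep(1/6, 6))                       # fair die EV: 3.5
sum(x * c(rep(0.133, 3), rep(0.2, 3)))     # crooked die EV: approximately 3.8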
3,348
Why is expectation the same as the arithmetic mean?
The only difference between the "mean" and the "expected value" is that the mean is mainly used for a frequency distribution and the expectation for a probability distribution. In a frequency distribution, the sample space consists of variables and their frequencies of occurrence. In a probability distribution, the sample space consists of a random variable's values and their probabilities. Now we know that the total probability over the sample space must equal 1, and herein lies the basic difference: the denominator term for the expectation is always 1 (i.e., $\sum_i f(x_i) = 1$), whereas there is no such restriction on the sum of the frequencies (which is simply the total number of entries).
3,349
Derivation of closed form lasso solution
This can be attacked in a number of ways, including fairly economical approaches via the Karush–Kuhn–Tucker conditions. Below is a quite elementary alternative argument. The least squares solution for an orthogonal design Suppose $X$ is composed of orthogonal columns. Then, the least-squares solution is $$ \newcommand{\bls}{\hat{\beta}^{{\small \text{LS}}}}\newcommand{\blasso}{\hat{\beta}^{{\text{lasso}}}} \bls = (X^T X)^{-1} X^T y = X^T y \>. $$ Some equivalent problems Via the Lagrangian form, it is straightforward to see that an equivalent problem to that considered in the question is $$ \min_\beta \frac{1}{2} \|y - X \beta\|_2^2 + \gamma \|\beta\|_1 \>. $$ Expanding out the first term we get $\frac{1}{2} y^T y - y^T X \beta + \frac{1}{2}\beta^T \beta$ and since $y^T y$ does not contain any of the variables of interest, we can discard it and consider yet another equivalent problem, $$ \min_\beta (- y^T X \beta + \frac{1}{2} \|\beta\|^2) + \gamma \|\beta\|_1 \>. $$ Noting that $\bls = X^T y$, the previous problem can be rewritten as $$ \min_\beta \sum_{i=1}^p - \bls_i \beta_i + \frac{1}{2} \beta_i^2 + \gamma |\beta_i| \> . $$ Our objective function is now a sum of objectives, each corresponding to a separate variable $\beta_i$, so they may each be solved individually. The whole is equal to the sum of its parts Fix a certain $i$. Then, we want to minimize $$ \mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 + \gamma |\beta_i| \> . $$ If $\bls_i > 0$, then we must have $\beta_i \geq 0$ since otherwise we could flip its sign and get a lower value for the objective function. Likewise if $\bls_i < 0$, then we must choose $\beta_i \leq 0$. Case 1: $\bls_i > 0$. Since $\beta_i \geq 0$, $$ \mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 + \gamma \beta_i \> , $$ and differentiating this with respect to $\beta_i$ and setting equal to zero, we get $\beta_i = \bls_i - \gamma$ and this is only feasible if the right-hand side is nonnegative, so in this case the actual solution is $$ \blasso_i = (\bls_i - \gamma)^+ = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)^+ \>. $$ Case 2: $\bls_i \leq 0$. This implies we must have $\beta_i \leq 0$ and so $$ \mathcal L_i = -\bls_i \beta_i + \frac{1}{2}\beta_i^2 - \gamma \beta_i \> . $$ Differentiating with respect to $\beta_i$ and setting equal to zero, we get $\beta_i = \bls_i + \gamma = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)$. But, again, to ensure this is feasible, we need $\beta_i \leq 0$, which is achieved by taking $$ \blasso_i = \mathrm{sgn}(\bls_i)(|\bls_i| - \gamma)^+ \>. $$ In both cases, we get the desired form, and so we are done. Final remarks Note that as $\gamma$ increases, then each of the $|\blasso_i|$ necessarily decreases, hence so does $\|\blasso\|_1$. When $\gamma = 0$, we recover the OLS solutions, and, for $\gamma > \max_i |\bls_i|$, we obtain $\blasso_i = 0$ for all $i$.
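A rough numerical check of the closed form (my own addition, with made-up simulated data): for an orthonormal design, soft-thresholding the least-squares coefficients should agree, up to optimization error, with a direct numerical minimization of the lasso objective.

set.seed(1)
n <- 100; p <- 4
X <- qr.Q(qr(matrix(rnorm(n * p), n, p)))    # orthonormal columns, X'X = I
y <- drop(X %*% c(3, -2, 0.5, 0)) + rnorm(n)
gamma <- 1
b_ls <- drop(crossprod(X, y))                            # least-squares solution X'y
soft <- sign(b_ls) * pmax(abs(b_ls) - gamma, 0)          # closed-form lasso solution
obj  <- function(b) 0.5 * sum((y - X %*% b)^2) + gamma * sum(abs(b))
num  <- optim(b_ls, obj)$par                             # direct (Nelder-Mead) minimization
round(rbind(soft = soft, numeric = num), 3)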
3,350
Derivation of closed form lasso solution
Assume that the covariates $x_j$, the columns of $X \in \mathbb{R}^{n \times p}$, are also standardized so that $X^T X = I$. This is just for convenience later: without it, the notation just gets heavier since $X^T X$ is only diagonal. Further assume that $n \geq p$. This is a necessary assumption for the result to hold. Define the least squares estimator $\hat\beta_{OLS} = \arg\min_\beta \|y - X \beta\|_2^2$. Then, the (Lagrangian form of the) lasso estimator \begin{align*} \hat\beta_\lambda & = \arg\min_{\beta} \frac{1}{2n} \|y - X \beta\|_2^2 + \lambda \|\beta\|_1 \tag{defn.} \\ & = \arg\min_\beta \frac{1}{2n} \|X \hat\beta_{OLS} - X \beta\|_2^2 + \lambda \|\beta\|_1 \tag{OLS is projection} \\ & = \arg\min_\beta \frac{1}{2n} \|\hat\beta_{OLS} - \beta\|_2^2 + \lambda \|\beta\|_1 \tag{$X^TX=I$} \\ & = \arg\min_\beta \frac{1}{2} \|\hat\beta_{OLS} - \beta\|_2^2 + n \lambda \|\beta\|_1 \tag{algebra} \\ & = \mathrm{prox}_{n \lambda \|\cdot\|_1} \left( \hat\beta_{OLS} \right) \tag{defn.} \\ & = S_{n \lambda} \left( \hat\beta_{OLS} \right) \tag{takes some work}, \end{align*} where $\mathrm{prox}_f$ is the proximal operator of a function $f$ and $S_{\alpha}$ soft thresholds by the amount $\alpha$. This is a derivation that skips the detailed derivation of the proximal operator that Cardinal works out, but, I hope, clarifies the main steps that make possible a closed form.
3,351
Advanced statistics books recommendation
Maximum likelihood: In All Likelihood (Pawitan). A moderately clear book, and the clearest (IMO) among books dealing with likelihood only. It also has R code.

GLMs: Categorical Data Analysis (Agresti, 2002) is one of the best-written stats books I have read (it also has R code available). This text will also help with maximum likelihood. The third edition is coming out in a few months. Second on my list for the above two is Collett's Modelling Binary Data.

PCA: I find Rencher's writing clear in Methods of Multivariate Analysis. This is a graduate-level text, but it is introductory.
3,352
Advanced statistics books recommendation
Some books on Likelihood Estimation

* Amari, Barndorff-Nielsen, Kass, Lauritzen and Rao, Differential geometry in statistical inference. - Geometrical approach for proving existence, uniqueness and other properties of the MLE.
* Butler, Saddlepoint Approximations with Applications. - Saddlepoint approximations to the MLE in complicated models.
* Cox, Principles of Statistical Inference. - A basic reference on MLE.
* Cox and Barndorff-Nielsen, Inference and Asymptotics. - Likelihood, pseudo-likelihood, approximation theorems and asymptotics explained by two exponents in this area.
* Edwards, Likelihood. - A reference for a general discussion of this concept.
* Ferguson, A Course in Large Sample Theory. - Contains classical results on asymptotic properties of point estimators.
* Kalbfleisch, Probability and Statistical Inference II. $\spadesuit$ - Introductory book containing interesting basic results, such as the continuous approximation to the likelihood, which is not always explained.
* Lehmann and Casella, Theory of Point Estimation. - Classical results on point estimation, an essential reference.
* Pace and Salvan, Principles of Statistical Inference: From a Neo-Fisherian Perspective. - A good reference on a school of thought becoming more and more popular: the Neo-Fisherian.
* Pawitan, In All Likelihood: Statistical Modelling and Inference Using Likelihood.
* Serfling, Approximation Theorems of Mathematical Statistics. - A more rigorous book; here you can find the mystical "regularity conditions".
* Severini, Likelihood Methods in Statistics.
* Shao, Mathematical Statistics. - Classical results, good as a textbook.
* Sprott, Statistical Inference in Science. $\spadesuit$ - Basic reference on likelihood, profile likelihood and classical statistical modelling.
* van der Vaart, Asymptotic Statistics. - A general reference on: modes of convergence, properties of the MLE, the delta method, moment estimators, efficiency and tests.
* Young and Smith, Essentials of Statistical Inference. - A more recent book on: likelihood, pseudolikelihood, saddlepoint approximations, the $p^*$ formula, modified profile likelihoods and more.

$\spadesuit$ Suggestion for the OP
Advanced statistics books recommendation
My guess is that, for your requirements, the best book on generalized linear models is probably Agresti's Introduction to Categorical Data Analysis. There are other books that might be considered better, but I suspect they would be less appealing to a practitioner who would prefer to avoid dense mathematics:
* Agresti's Categorical Data Analysis (his primary text) is good for practitioners, but is denser.
* McCullagh & Nelder's Generalized Linear Models is, I hear (I've never tried it), the bible for this, but demands considerable mathematical sophistication.
* Dobson's Introduction to Generalized Linear Models is possible to get through, but still pretty mathematically dense, IMO.
As for your other topics, I'm afraid I don't know of books for them, but maybe others can make some recommendations.
Advanced statistics books recommendation
Not sure if these are at the level you're looking for, but some books I've found useful:
* GLMs - McCullagh and Nelder is the canonical book.
* PCA - A User's Guide to Principal Components - despite the title it does go into some degree of depth on the topic.
Advanced statistics books recommendation
I really like Larry Wasserman's books "All of Statistics" and "All of Nonparametric Statistics". They are very readable, and cover a lot of ground quickly.
Advanced statistics books recommendation
The Nonlinear Models books that I like and rely on are (1) Bates and Watts and (2) Gallant. Both are published by Wiley.
Advanced statistics books recommendation
For Bayesian analysis (including imprecise analysis), I'm going to put in big plugs for: Bernardo, J.M. and Smith, A.F.M. (2000) Bayesian Theory. Wiley: Chichester. Gelman, A. et al (2013) Bayesian Data Analysis (Third Edition). CRC Press: Boca Raton. Walley, P. (1990) Statistical Reasoning with Imprecise Probabilities. Chapman and Hall. That last book, by the brilliant Peter Walley, is an eye-opener on different ways of doing sensitivity analysis, and the fact that this can be built into probability theory at an axiomatic level.
Advanced statistics books recommendation
Mehta (2014) Statistical Topics (ISBN: 978-1499273533) is good intermediate-level statistics storytelling. It doesn't cover many of the topics you noted above, though.
Advanced statistics books recommendation
One really simple introductory statistics book is Andy Field's "Discovering Statistics Using R" - also available for SPSS. It contains a lot of nice examples and is even fun to read. It is less precise than other books, but it has very few mathematical formulations and lots of text. I found it easy for a basic start, and am still using it from time to time.
What is the proper usage of scale_pos_weight in xgboost for imbalanced datasets?
Generally, scale_pos_weight is the ratio of the number of negative-class observations to the number of positive-class observations. Suppose the dataset has 90 observations of the negative class and 10 observations of the positive class; then the ideal value of scale_pos_weight would be 9. See the doc: http://xgboost.readthedocs.io/en/latest/parameter.html
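A minimal R sketch of that rule of thumb with the xgboost package; the toy data, feature count and number of boosting rounds are made-up choices for illustration only:

library(xgboost)

set.seed(1)
n_neg <- 90; n_pos <- 10
X <- matrix(rnorm((n_neg + n_pos) * 3), ncol = 3)   # three arbitrary features
y <- c(rep(0, n_neg), rep(1, n_pos))                # 90 negatives, 10 positives

spw <- sum(y == 0) / sum(y == 1)                    # negatives / positives = 9

dtrain <- xgb.DMatrix(data = X, label = y)
bst <- xgb.train(params = list(objective = "binary:logistic",
                               scale_pos_weight = spw),
                 data = dtrain, nrounds = 20, verbose = 0)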
What is the proper usage of scale_pos_weight in xgboost for imbalanced datasets?
All the documentation says that it should be:

scale_pos_weight = count(negative examples) / count(positive examples)

In practice, that works pretty well, but if your dataset is extremely unbalanced I'd recommend using something more conservative, like:

scale_pos_weight = sqrt(count(negative examples) / count(positive examples))

This is useful to limit the effect of a multiplication of positive examples by a very high weight.
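The two candidate settings side by side, as a quick R sketch with invented class counts:

n_neg <- 99000; n_pos <- 1000                 # an extremely unbalanced case, made up
spw_ratio <- n_neg / n_pos                    # plain ratio: 99
spw_sqrt  <- sqrt(n_neg / n_pos)              # more conservative: about 9.95
c(ratio = spw_ratio, sqrt_ratio = spw_sqrt)   # either value would then be passed as scale_pos_weight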
What is the proper usage of scale_pos_weight in xgboost for imbalanced datasets?
I understand your question and frustration, but I am not sure this is something that could be computed analytically; rather, you'd have to determine a good setting empirically for your data, as you do for most hyperparameters, using cross validation as @user2149631 suggested. I've had some success using SelectFPR with XGBoost and the sklearn API to lower the FPR for XGBoost via feature selection instead, then further tuning the scale_pos_weight between 0 and 1.0. 0.9 seems to work well, but as with anything, YMMV depending on your data. You can also weight each data point individually when sending it to XGBoost if you look through their docs. You have to use their API, not the sklearn wrapper. That way you can weight one set of data points much higher than the other, and it will impact the boosting algorithm it uses.
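For the per-observation weighting mentioned above, the native xgboost interface accepts a weight vector when the DMatrix is built. A hedged R sketch, with an arbitrary 5x weight on the positive class and simulated data:

library(xgboost)

set.seed(2)
X <- matrix(rnorm(200 * 4), ncol = 4)
y <- rbinom(200, 1, 0.1)                                 # rare positive class
w <- ifelse(y == 1, 5, 1)                                # hypothetical: weight positives 5x more

dtrain <- xgb.DMatrix(data = X, label = y, weight = w)   # per-row weights go into the DMatrix
params <- list(objective = "binary:logistic", eval_metric = "auc")
bst <- xgb.train(params = params, data = dtrain, nrounds = 30, verbose = 0)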
What is the proper usage of scale_pos_weight in xgboost for imbalanced datasets?
I also stumbled upon this dilemma and am still looking for the best solution. However, I would suggest using methods such as grid search (GridSearchCV in sklearn) for tuning the best parameters for your classifier. Additionally, if your dataset is highly imbalanced, it's worthwhile to consider sampling methods (especially random oversampling and SMOTE oversampling) and model ensembling on data samples with different ratios of positive and negative class examples. Here is one nice and useful (almost comprehensive) tutorial about handling imbalanced datasets: https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/
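One simple way to tune scale_pos_weight empirically is a small grid evaluated with cross-validation. A sketch in R using xgb.cv; the candidate values, simulated data and CV settings are assumptions, not recommendations:

library(xgboost)

set.seed(3)
X <- matrix(rnorm(500 * 5), ncol = 5)
y <- rbinom(500, 1, 0.1)
dtrain <- xgb.DMatrix(data = X, label = y)

grid <- c(1, sqrt(sum(y == 0) / sum(y == 1)), sum(y == 0) / sum(y == 1))  # candidate weights
cv_auc <- sapply(grid, function(spw) {
  cv <- xgb.cv(params = list(objective = "binary:logistic",
                             eval_metric = "auc",
                             scale_pos_weight = spw),
               data = dtrain, nrounds = 50, nfold = 5, verbose = 0)
  max(cv$evaluation_log$test_auc_mean)      # best mean test AUC seen across rounds
})
data.frame(scale_pos_weight = round(grid, 2), cv_auc = round(cv_auc, 3))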
Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?
Standardization is all about the weights of the different variables in the model. If you do the standardisation "only" for the sake of numerical stability, there may be transformations that yield very similar numerical properties but a different physical meaning that could be much more appropriate for the interpretation. The same is true for centering, which is usually part of the standardization.

Situations where you probably want to standardize:
* the variables are different physical quantities, and
* the numeric values are on very different scales of magnitude, and
* there is no "external" knowledge that the variables with high (numeric) variation should be considered more important.

Situations where you may not want to standardize:
* If the variables are the same physical quantity and are (roughly) of the same magnitude, e.g. relative concentrations of different chemical species, absorbances at different wavelengths, or emission intensity (otherwise same measurement conditions) at different wavelengths.
* You definitely do not want to standardize variables that do not change between the samples (baseline channels) - you'd just blow up the measurement noise (you may want to exclude them from the model instead).
* If you have such physically related variables, your measurement noise may be roughly the same for all variables, but the signal intensity varies much more; i.e. variables with low values have higher relative noise. Standardizing would blow up the noise. In other words, you may have to decide whether you want relative or absolute noise to be standardized.
* There may be physically meaningful values that you can use to relate your measured value to, e.g. instead of transmitted intensity use the percentage of transmitted intensity (transmittance T).

You may do something "in between" and transform the variables or choose the units so that the new variables still have physical meaning but the variation in the numerical values is not that different, e.g.
* if you work with mice, use body weight in g and length in cm (expected range of variation about 5 for both) instead of the base units kg and m (expected ranges of variation 0.005 kg and 0.05 m - one order of magnitude apart);
* for the transmittance T above, you may consider using the absorbance $A = -\log_{10} T$.

Similarly for centering:
* There may be (physically/chemically/biologically/...) meaningful baseline values available (e.g. controls, blinds, etc.).
* Is the mean actually meaningful? (The average human has one ovary and one testicle.)
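A small R illustration of the mouse example above (the numbers are invented): changing the units keeps the physical meaning while bringing the numeric spreads onto comparable scales, whereas scale() forces both variables to unit variance:

set.seed(4)
weight_kg <- rnorm(30, mean = 0.025, sd = 0.003)   # hypothetical mouse body weights in kg
length_m  <- rnorm(30, mean = 0.09,  sd = 0.01)    # hypothetical body lengths in m

apply(cbind(weight_kg, length_m), 2, sd)           # spreads differ by an order of magnitude
apply(cbind(weight_g  = weight_kg * 1000,
            length_cm = length_m * 100), 2, sd)    # unit change: comparable spreads, units still meaningful
apply(scale(cbind(weight_kg, length_m)), 2, sd)    # standardization: both exactly 1, but unitless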
Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?
One thing I always ask myself before standardizing is, "How will I interpret the output?" If there is a way to analyze data without transformation, this may well be preferable purely from an interpretation standpoint.
Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?
In general I don't recommend scaling or standardization unless it's absolutely necessary. The advantage or appeal of such a process is that, when an explanatory variable has a totally different physical dimension and magnitude from the response variable, scaling through division by the standard deviation may help in terms of numerical stability and enables one to compare effects across multiple explanatory variables. With the most common standardization, the variable effect is the amount of change in the response variable when the explanatory variable increases by one standard deviation; the original meaning of the variable effect (the amount of change in the response variable when the explanatory variable increases by one unit) is lost, although the test statistic for the explanatory variable remains unchanged. However, when an interaction is included in the model, scaling can be very problematic even for statistical testing because of a complication involving a stochastic scaling adjustment in calculating the standard error of the interaction effect (Preacher, Curran, and Bauer, 2006). For this reason, scaling by the standard deviation (standardization/normalization) is generally not recommended, especially when interactions are involved. Preacher, K.J., Curran, P.J., and Bauer, D.J., 2006. Computational tools for probing interaction effects in multiple linear regression, multilevel modeling, and latent curve analysis. Journal of Educational and Behavioral Statistics, 31(4), 437-448.
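A brief R check of the first point, on simulated data: standardizing a predictor rescales its coefficient but leaves its t statistic (and p value) unchanged:

set.seed(5)
x <- rnorm(100, mean = 50, sd = 10)
y <- 3 * x + rnorm(100, sd = 20)

fit_raw <- lm(y ~ x)                  # effect per unit of x
fit_std <- lm(y ~ scale(x))           # effect per standard deviation of x
coef(summary(fit_raw))["x", ]         # coefficient near 3, some t value
coef(summary(fit_std))["scale(x)", ]  # coefficient near 3 * sd(x), same t value and p value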
What is the relationship between a chi squared test and test of equal proportions?
Very short answer: The chi-squared test (chisq.test() in R) compares the observed frequencies in each category of a contingency table with the expected frequencies (computed as the product of the marginal frequencies). It is used to determine whether the deviations between the observed and the expected counts are too large to be attributed to chance. Departure from independence is easily checked by inspecting residuals (try ?mosaicplot or ?assocplot, but also look at the vcd package). Use fisher.test() for an exact test (relying on the hypergeometric distribution).

The prop.test() function in R allows you to test whether proportions are comparable between groups or do not differ from theoretical probabilities. It is referred to as a $z$-test because the test statistic looks like this:
$$ z=\frac{\hat p_1-\hat p_2}{\sqrt{\hat p \left(1-\hat p \right) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} $$
where $\hat p_1=x_1/n_1$ and $\hat p_2=x_2/n_2$ are the observed proportions, $\hat p=(x_1+x_2)/(n_1+n_2)$ is the pooled proportion, and the indices $(1,2)$ refer to the first and second lines of your table. In a two-way contingency table where $H_0:\; p_1=p_2$, this should yield comparable results to the ordinary $\chi^2$ test:

> tab <- matrix(c(100, 80, 20, 10), ncol = 2)
> chisq.test(tab)

        Pearson's Chi-squared test with Yates' continuity correction

data:  tab
X-squared = 0.8823, df = 1, p-value = 0.3476

> prop.test(tab)

        2-sample test for equality of proportions with continuity correction

data:  tab
X-squared = 0.8823, df = 1, p-value = 0.3476
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.15834617  0.04723506
sample estimates:
   prop 1    prop 2
0.8333333 0.8888889

For analysis of discrete data with R, I highly recommend R (and S-PLUS) Manual to Accompany Agresti's Categorical Data Analysis (2002), from Laura Thompson.
What is the relationship between a chi squared test and test of equal proportions?
A chi-square test for equality of two proportions is exactly the same thing as a $z$-test. The chi-squared distribution with one degree of freedom is just that of a normal deviate, squared. You're basically just repeating the chi-squared test on a subset of the contingency table. (This is why @chl gets the exact same $p$-value with both tests.) The problem with doing the chi-squared test globally first and then diving down to do more tests on subsets is that you won't necessarily preserve your alpha - that is, you won't control false positives to be less than 5% (or whatever $\alpha$) across the whole experiment. I think if you want to do this properly in the classical paradigm, you need to identify your hypotheses at the outset (which proportions to compare), collect the data, and then test the hypotheses such that the significance thresholds of the individual tests sum to $\alpha$ - unless you can prove a priori that there's some correlation. The most powerful test for equality of proportions is called Barnard's test for superiority.
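A quick R check of that equivalence on the 2x2 table from the previous answer: the hand-computed z statistic, squared, reproduces the X-squared from chisq.test() and prop.test() once the continuity correction is switched off:

tab <- matrix(c(100, 80, 20, 10), ncol = 2)      # same table as above
x <- tab[, 1]; n <- rowSums(tab)                 # successes and trials per row
p_hat <- sum(x) / sum(n)                         # pooled proportion
z <- (x[1] / n[1] - x[2] / n[2]) /
     sqrt(p_hat * (1 - p_hat) * (1 / n[1] + 1 / n[2]))
z^2                                              # about 1.296, equal to the chi-squared statistic below
chisq.test(tab, correct = FALSE)$statistic
prop.test(tab, correct = FALSE)$statistic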
What is the difference between a "nested" and a "non-nested" model?
Nested versus non-nested can mean a whole lot of things. You have nested designs versus crossed designs (see e.g. this explanation). You have nested models in model comparison; nested means here that all terms of the smaller model occur in the larger model. This is a necessary condition for using most model comparison tests, like likelihood ratio tests.

In the context of multilevel models, I think it's better to speak of nested and non-nested factors. The difference is in how the different factors are related to one another. In a nested design, the levels of one factor only make sense within the levels of another factor. Say you want to measure the oxygen production of leaves. You sample a number of tree species, and on every tree you sample some leaves at the bottom, in the middle and at the top of the tree. This is a nested design. The difference for leaves in different positions only makes sense within one tree species. So comparing bottom leaves, middle leaves and top leaves across all trees is senseless. Or, said differently: leaf position should not be modelled as a main effect.

Non-nested factors are two factors that are not related. Say you study patients and are interested in the effects of age and gender. So you have a factor ageclass and a factor gender that are not related. You should model both age and gender as main effects, and you can take a look at the interaction if necessary.

The difference is not always that clear. If in my first example the tree species are closely related in form and physiology, you could consider leaf position also as a valid main effect. In many cases, the choice for a nested design versus a non-nested design is more a decision of the researcher than a true fact.
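As an illustration only, here is how the nesting in the tree example might be declared in R with lme4; the package choice, the toy data and the variable names are assumptions, not part of the original answer:

library(lme4)

set.seed(6)
# Toy data: 6 tree species, leaves sampled at 3 positions on each of 4 trees per species
leaves <- expand.grid(species  = paste0("sp", 1:6),
                      tree     = paste0("t", 1:4),
                      position = c("bottom", "middle", "top"))
leaves$oxygen <- rnorm(6, sd = 2)[as.integer(factor(leaves$species))] +       # species effect
  rnorm(18, sd = 1)[as.integer(interaction(leaves$species, leaves$position))] +  # position effect within species
  rnorm(nrow(leaves), sd = 0.5)                                               # leaf-level noise

# Nested: position is only interpreted within species, not as a main effect across species
m_nested <- lmer(oxygen ~ 1 + (1 | species / position), data = leaves)

# Non-nested (crossed) factors, as in the age/gender example, would instead enter as
# main effects, e.g. outcome ~ ageclass * gender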
What is the difference between a "nested" and a "non-nested" model?
Nested vs non-nested models come up in conjoint analysis and IIA. Consider the "red bus blue bus problem". You have a population where 50% of people take a car to work and the other 50% take the red bus. What happens if you add a blue bus which has the same specifications as the red bus to the equation? A multinomial logit model will predict 33% share for all three modes. We intuitively know this is not correct as the red bus and blue bus are more similar to one another than to the car and will thus take more share from one another before taking share from the car. That is where a nesting structure comes in, which is typically specified as a lambda coefficient on the similar alternatives. Ben Akiva has put together a nice set of slides outlining the theory on this here. He begins talking about nested logit around slide 23.
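A toy R computation of the plain multinomial logit shares that produce the counter-intuitive 33% prediction; the zero utilities are made up so that the three modes look identical to the model:

# Multinomial logit choice shares: P_i = exp(V_i) / sum_j exp(V_j)
V <- c(car = 0, red_bus = 0, blue_bus = 0)   # identical systematic utilities (hypothetical)
shares <- exp(V) / sum(exp(V))
round(shares, 3)                             # 0.333 each: IIA ignores that the two buses are near-duplicates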
What is the difference between a "nested" and a "non-nested" model?
One model is nested in another if you can always obtain the first model by constraining some of the parameters of the second model. For example, the linear model $y = ax + c$ is nested within the second-degree polynomial $y = ax + bx^2 + c$, because by setting $b = 0$ the polynomial becomes identical to the linear form. In other words, a line is a special case of a polynomial, and so the two are nested.

The main implication of two models being nested is that it is relatively easy to compare them statistically. Simply put, with nested models you can consider the more complex one as being constructed by adding something to a simpler "null model". To select the best of these two models, therefore, you simply have to find out whether that added something explains a significant amount of additional variance in the data. This scenario is actually equivalent to fitting the simple model first, removing its predicted variance from the data, and then fitting the additional component of the more complex model to the residuals from the first fit (at least with least squares estimation).

Non-nested models may explain entirely different portions of variance in the data. A complex model may even explain less variance than a simple one, if the complex one doesn't include the "right stuff" that the simple one does have. So in that case it is a bit more difficult to predict what would happen under the null hypothesis that both models explain the data equally well. More to the point, under the null hypothesis (and given certain moderate assumptions), the difference in goodness-of-fit between two nested models follows a known distribution whose shape depends only on the difference in degrees of freedom between the two models. This is not true for non-nested models.
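A short R illustration with simulated data: because the linear model is nested in the quadratic one, the two fits can be compared directly with an F test via anova():

set.seed(7)
x <- runif(100, 0, 10)
y <- 2 * x + rnorm(100)            # data generated from the simpler (linear) model
fit_linear <- lm(y ~ x)            # nested model
fit_quad   <- lm(y ~ x + I(x^2))   # full model; fixing the I(x^2) coefficient at 0 recovers fit_linear
anova(fit_linear, fit_quad)        # F test of the extra term, valid because the models are nested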
What is the difference between a "nested" and a "non-nested" model?
Two models are non-nested, or separate, if one cannot be obtained as a limit of the other (i.e. one model is not a particular case of the other).
What is the difference between a "nested" and a "non-nested" model?
See a simpler answer in this pdf. Essentially, a nested model is a model with fewer variables than the full model. One intention is to look for more parsimonious models.
Neural Network: For Binary Classification use 1 or 2 output neurons?
In the second case you are probably writing about the softmax activation function. If that's true, then the sigmoid is just a special case of the softmax function. That's easy to show:
$$ y = \frac{1}{1 + e ^ {-x}} = \frac{1}{1 + \frac{1}{e ^ x}} = \frac{1}{\frac{e ^ x + 1}{e ^ x}} = \frac{e ^ x}{1 + e ^ x} = \frac{e ^ x}{e ^ 0 + e ^ x} $$
As you can see, the sigmoid is the same as a two-class softmax in which one of the logits is fixed at zero. You can think of it as having two outputs, but one of them has all weights equal to zero, so its pre-activation is always zero. So the better choice for binary classification is to use one output unit with a sigmoid instead of a softmax with two output units, because it has fewer parameters and will update faster.
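A quick numerical check of that identity in R (the logit value is arbitrary):

sigmoid <- function(a) 1 / (1 + exp(-a))
softmax <- function(a) exp(a) / sum(exp(a))
a <- 1.7                                     # arbitrary logit
sigmoid(a)                                   # probability of the positive class from a single output unit
softmax(c(a, 0))[1]                          # same probability from a two-unit softmax with the second logit fixed at 0
all.equal(sigmoid(a), softmax(c(a, 0))[1])   # TRUE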
Neural Network: For Binary Classification use 1 or 2 output neurons?
Machine learning algorithms such as classifiers statistically model the input data, here by determining the probabilities of the input belonging to different categories. For an arbitrary number of classes, normally a softmax layer is appended to the model so the outputs have probabilistic properties by design:
$$\vec{y} = \text{softmax}(\vec{a}) \equiv \frac{1}{\sum_i{ e^{a_i} }} \times [e^{a_1}, e^{a_2}, ..., e^{a_n}] $$
$$ 0 \le y_i \le 1 \text{ for all } i$$
$$ y_1 + y_2 + ... + y_n = 1$$
Here, $a$ is the activation of the layer before the softmax layer. This is perfectly valid for two classes; however, one can also use one neuron (instead of two) given that its output satisfies:
$$ 0 \le y \le 1 \text{ for all inputs.}$$
This can be assured if a transformation (differentiable/smooth for backpropagation purposes) is applied which maps $a$ to $y$ such that the above condition is met. The sigmoid function meets our criteria. There is nothing special about it, other than a simple mathematical representation,
$$ \text{sigmoid}(a) \equiv \sigma(a) \equiv \frac{1}{1+e^{-a}},$$
useful mathematical properties (differentiability, being bounded between 0 and 1, etc.), computational efficiency, and having the right slope such that updating the network's weights would have a small but measurable change in the output for optimization purposes.

Conclusion
I am not sure if @itdxer's reasoning showing that softmax and sigmoid are equivalent is valid, but he is right about choosing 1 neuron in contrast to 2 neurons for binary classifiers, since fewer parameters and less computation are needed. I have also been criticized for using two neurons for a binary classifier since "it is superfluous".
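A small R sketch of why one unit suffices for two classes: with a two-unit softmax only the difference of the logits matters, and a single sigmoid unit applied to that difference gives the same probability (the logit values are made up):

softmax <- function(a) exp(a) / sum(exp(a))
sigmoid <- function(d) 1 / (1 + exp(-d))
a <- c(1.3, -0.4)              # two-unit logits (hypothetical)
softmax(a)[1]                  # P(class 1) from the two-unit softmax
sigmoid(a[1] - a[2])           # same probability from a single unit fed the logit difference
# Only the difference of the two logits matters, which is why one output unit is enough.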
Neural Network: For Binary Classification use 1 or 2 output neurons?
For binary classification there are 2 outputs, $p_0$ and $p_1$, which represent probabilities, and 2 targets, $y_0$ and $y_1$, where $p_0, p_1 \in [0, 1]$ with $p_0 + p_1 = 1$, and $y_0, y_1 \in \{0, 1\}$ with $y_0 + y_1 = 1$; e.g. $p_0 = 0.8$, $p_1 = 0.2$; $y_0 = 1$, $y_1 = 0$. To satisfy these conditions, the output layer must produce values in $[0, 1]$ that sum to 1 - a softmax over the two outputs or, equivalently, a single sigmoid output - and the loss function must be (binary) cross-entropy.
What does standard deviation tell us in non-normal distribution
It's the square root of the second central moment, the variance. The moments are related to the characteristic function (CF), which is called characteristic for a reason: it defines the probability distribution. So, if you know all the moments, you know the CF, and hence you know the entire probability distribution. The normal distribution's characteristic function is defined by just two moments: the mean and the variance (or standard deviation). Therefore, for the normal distribution the standard deviation is especially important; it's 50% of its definition, in a way. For other distributions the standard deviation is in some ways less important, because they have other moments too. However, for many distributions used in practice the first few moments carry most of the information, so they are the most important ones to know. Now, intuitively, the mean tells you where the center of your distribution is, while the standard deviation tells you how close to this center your data are. Since the standard deviation is in the units of the variable, it's also used to scale other moments to obtain measures such as kurtosis. Kurtosis is a dimensionless metric which tells you how fat the tails of your distribution are compared to the normal distribution.
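As a small illustration (mine, not part of the answer) of the standard deviation and of the standard-deviation-scaled fourth moment on decidedly non-normal data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=100_000)   # skewed, non-normal data

    mean = x.mean()
    sd = x.std()
    # Excess kurtosis: the fourth central moment scaled by sd**4, minus the normal value of 3
    excess_kurtosis = np.mean((x - mean) ** 4) / sd ** 4 - 3

    print(mean, sd)              # both close to 2 for an exponential with scale 2
    print(excess_kurtosis)       # close to 6, the exponential distribution's value
    print(stats.kurtosis(x))     # the same quantity computed by scipy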
What does standard deviation tell us in non-normal distribution
The standard deviation is one particular measure of variation. There are several others; the mean absolute deviation is fairly popular. The standard deviation is by no means special. What makes it appear special is that the Gaussian distribution is special. As pointed out in the comments, Chebyshev's inequality is useful for getting a feeling for the spread regardless of the distribution. However, there are more such inequalities.
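A small illustration (my own, not part of the answer) comparing the standard deviation with the mean absolute deviation and checking Chebyshev's bound $P(|X-\mu| \ge k\sigma) \le 1/k^2$ on skewed data:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # heavily skewed data

    mu = x.mean()
    sd = x.std()                        # standard deviation
    mad = np.mean(np.abs(x - mu))       # mean absolute deviation

    k = 3.0
    empirical_tail = np.mean(np.abs(x - mu) >= k * sd)   # observed tail fraction
    chebyshev_bound = 1.0 / k ** 2                       # distribution-free upper bound

    print(sd, mad)                          # two different summaries of spread
    print(empirical_tail, chebyshev_bound)  # the empirical fraction respects the bound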
What does standard deviation tell us in non-normal distribution
The sample standard deviation is a measure of the deviation of the observed values from the mean, in the same units used to measure the data, normal distribution or not. Specifically, it is the square root of the mean squared deviation from the mean. So the standard deviation tells you how spread out the data are around the mean, regardless of the distribution.
What is the most surprising characterization of the Gaussian (normal) distribution?
The one I personally find most surprising is the characterization in terms of the sample mean and variance, but here is another (maybe) surprising characterization: if $X$ and $Y$ are IID with finite variance and $X+Y$ and $X-Y$ are independent, then $X$ and $Y$ are normal. Intuitively, we can usually identify when variables are not independent with a scatterplot. So imagine a scatterplot of $(X,Y)$ pairs that looks independent. Now rotate it by 45 degrees and look again: if it still looks independent, then the $X$ and $Y$ coordinates individually must be normal (this is all speaking loosely, of course). To see why the intuitive bit works, take a look at $$ \left[ \begin{array}{cc} \cos45^{\circ} & -\sin45^{\circ} \newline \sin45^{\circ} & \cos45^{\circ} \end{array} \right] \left[ \begin{array}{c} x \newline y \end{array} \right]= \frac{1}{\sqrt{2}} \left[ \begin{array}{c} x-y \newline x+y \end{array} \right] $$
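A quick simulation of the rotation idea (my own sketch, not part of the answer): for Gaussian pairs, the rotated coordinates still look independent, while for uniform pairs the rotation introduces dependence, detected here through the correlation of the squared coordinates.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000

    def rotated_dependence(x, y):
        u, v = (x - y) / np.sqrt(2), (x + y) / np.sqrt(2)   # rotate by 45 degrees
        # Correlation of the squared coordinates: near zero if u and v are independent
        return np.corrcoef(u ** 2, v ** 2)[0, 1]

    x, y = rng.standard_normal(n), rng.standard_normal(n)
    print(rotated_dependence(x, y))   # near 0: the rotated pair still looks independent

    x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    print(rotated_dependence(x, y))   # clearly negative: dependence appears after rotation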
What is the most surprising characterization of the Gaussian (normal) distribution?
Among continuous distributions with a fixed variance, the one that maximizes differential entropy is the Gaussian distribution.
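As a numerical illustration of the fixed-variance constraint (my own sketch, not part of the answer): among unit-variance distributions, the standard normal has larger differential entropy than, for example, a uniform distribution with the same variance.

    import numpy as np
    from scipy import stats

    # Standard normal: variance 1, differential entropy 0.5 * log(2 * pi * e)
    print(stats.norm(0, 1).entropy())                    # approximately 1.4189
    print(0.5 * np.log(2 * np.pi * np.e))                # the closed form

    # Uniform on [-a, a] with variance 1 requires a = sqrt(3); its entropy is log(2a)
    a = np.sqrt(3.0)
    print(stats.uniform(loc=-a, scale=2 * a).entropy())  # approximately 1.2425 < 1.4189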
What is the most surprising characterization of the Gaussian (normal) distribution?
There's an entire book written about this: "Characterizations of the normal probability law", A. M. Mathai & G. Pederzoli. A brief review in JASA (Dec. 1978) mentions the following: Let $X_1, \ldots, X_n$ be independent random variables. Then $\sum_{i=1}^n{a_i X_i}$ and $\sum_{i=1}^n{b_i X_i}$ are independent, where $a_i b_i \ne 0$, if and only if the $X_i$ [are] normally distributed.
What is the most surprising characterization of the Gaussian (normal) distribution?
Stein's Lemma provides a very useful characterization: $Z$ is standard Gaussian iff $$E f'(Z) = E\, Z f(Z)$$ for all absolutely continuous functions $f$ with $E|f'(Z)| < \infty$.
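A quick Monte Carlo sanity check of the identity (my own sketch, not part of the answer), using two smooth test functions $f$:

    import numpy as np

    rng = np.random.default_rng(3)
    z = rng.standard_normal(2_000_000)

    # f(z) = z**2, so f'(z) = 2z: both sides of Stein's identity should be near 0
    print(np.mean(2 * z), np.mean(z * z ** 2))

    # f(z) = sin(z), so f'(z) = cos(z): both sides should be near exp(-1/2) ~ 0.6065
    print(np.mean(np.cos(z)), np.mean(z * np.sin(z)))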
What is the most surprising characterization of the Gaussian (normal) distribution?
Gaussian distributions are the only sum-stable distributions with finite variance.
What is the most surprising characterization of the Gaussian (normal) distribution?
Theorem [Herschel-Maxwell]: Let $Z \in \mathbb{R}^n$ be a random vector for which (i) projections into orthogonal subspaces are independent and (ii) the distribution of $Z$ depends only on the length $\|Z\|$. Then $Z$ is normally distributed. Cited by George Cobb in Teaching statistics: Some important tensions (Chilean J. Statistics Vol. 2, No. 1, April 2011) at p. 54. Cobb uses this characterization as a starting point for deriving the $\chi^2$, $t$, and $F$ distributions, without using Calculus (or much probability theory).
What is the most surprising characterization of the Gaussian (normal) distribution?
This is not a characterization but a conjecture, which dates back to 1917 and is due to Cantelli: If $f$ is a positive function on $\mathbb{R}$ and $X$ and $Y$ are $N(0,1)$ independent random variables such that $X+f(X)Y$ is normal, then $f$ is a constant almost everywhere. Mentioned by Gérard Letac here.
What is the most surprising characterization of the Gaussian (normal) distribution?
Let $\eta$ and $\xi$ be two independent random variables with a common symmetric distribution such that $$ P\left ( \left |\frac{\xi+\eta}{\sqrt{2}}\right | \geq t \right )\leq P(|\xi|\geq t).$$ Then these random variables are Gaussian. (Obviously, if $\xi$ and $\eta$ are centered Gaussian, the inequality holds with equality.) This is the Bobkov-Houdré theorem.
What is the most surprising characterization of the Gaussian (normal) distribution?
Suppose one is estimating a location parameter using i.i.d. data $\{x_1,...,x_n\}$. If the sample mean $\bar{x}$ is the maximum likelihood estimator of the location parameter for every possible sample, then the distribution must be Gaussian. According to Jaynes's Probability Theory: The Logic of Science, pp. 202-4, this was how Gauss originally derived it.
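A hedged sketch of the argument (my reconstruction of the usual presentation, not Jaynes's exact text): write the density as $p(x\mid\theta)=g(x-\theta)$ and require that $\bar{x}$ solve the likelihood equation for every sample, $$ \sum_{i=1}^n \big(\log g\big)'(x_i-\theta)\Big|_{\theta=\bar{x}} = 0 \quad\text{for all } (x_1,\dots,x_n). $$ Writing $h=(\log g)'$, this says $\sum_i h(x_i-\bar{x})=0$ whenever $\sum_i (x_i-\bar{x})=0$, which forces $h$ to be linear, $h(u)=-cu$; integrating gives $\log g(u)=-\tfrac{c}{2}u^2+\text{const}$, i.e. the Gaussian density.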
What is the most surprising characterization of the Gaussian (normal) distribution?
A more particular characterisation of the normal distribution among the class of infinitely divisible distributions is presented in Steutel and Van Harn (2004). A non-degenerate infinitely divisible random variable $X$ has a normal distribution if and only if it satisfies $$-\limsup_{x\rightarrow\infty}\dfrac{\log{\mathbb P}(\vert X\vert>x)}{x\log(x)}=\infty.$$ This result characterises the normal distribution in terms of its tail behaviour.
What is the most surprising characterization of the Gaussian (normal) distribution?
In the context of image smoothing (e.g. scale space), the Gaussian is the only rotationally symmetric separable* kernel. That is, if we require $$F[x,y]=f[x]f[y]$$ where $[x,y]=r[\cos\theta,\sin\theta]$, then rotational symmetry requires \begin{align} F_\theta &= f'[x]f[y]x_\theta+f[x]f'[y]y_\theta \\ &= -f'[x]f[y]y+f[x]f'[y]x = 0 \\ &\implies \\ \frac{f'[x]}{xf[x]} &= \frac{f'[y]}{yf[y]} = \mathrm{const.} \end{align} which is equivalent to $\log\big[f[x]\big]'=cx$. Requiring that $f[x]$ be a proper kernel then requires the constant be negative and the initial value positive, yielding the Gaussian kernel. *In the context of probability distributions, separable means independent, while in the context of image filtering it allows the 2D convolution to be reduced computationally to two 1D convolutions.
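A small numerical illustration (mine, not part of the answer): build a separable 2-D kernel as a product of 1-D profiles and compare two points at the same radius. The Gaussian profile gives equal values (rotational symmetry), while a non-Gaussian separable profile such as a Laplacian does not.

    import numpy as np

    def separable_kernel(f, x, y):
        # F[x, y] = f[x] * f[y], a separable 2-D kernel
        return f(x) * f(y)

    gauss = lambda t: np.exp(-t ** 2 / 2)     # 1-D Gaussian profile
    laplace = lambda t: np.exp(-np.abs(t))    # a non-Gaussian separable profile

    # Two points at the same radius r = 5 from the origin
    p1, p2 = (3.0, 4.0), (5.0, 0.0)

    print(separable_kernel(gauss, *p1), separable_kernel(gauss, *p2))      # equal values
    print(separable_kernel(laplace, *p1), separable_kernel(laplace, *p2))  # different values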
What is the most surprising characterization of the Gaussian (normal) distribution?
Recently Ejsmont [1] published an article with a new characterization of the Gaussian: Let $(X_1,\dots, X_m,Y) \textrm{ and } (X_{m+1},\dots,X_n,Z)$ be independent random vectors with all moments, where the $X_i$ are nondegenerate, and let the statistic $\sum_{i=1}^na_iX_i+Y+Z$ have a distribution which depends only on $\sum_{i=1}^n a_i^2$, where $a_i\in \mathbb{R}$ and $1\leq m < n$. Then the $X_i$ are independent and have the same normal distribution with zero mean, and $\operatorname{cov}(X_i,Y)=\operatorname{cov}(X_i,Z)=0$ for $i\in\{1,\dots,n\}$. [1] Ejsmont, Wiktor. "A characterization of the normal distribution by the independence of a pair of random vectors." Statistics & Probability Letters 114 (2016): 1-5.
What is the most surprising characterization of the Gaussian (normal) distribution?
Its characteristic function has the same form as its pdf. I am not sure of another distribution which does that.
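For the standard normal this is easy to see explicitly (a small check of my own, up to the normalizing constant): $$ \varphi(t) = E\,e^{itX} = e^{-t^2/2}, \qquad f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, $$ so the characteristic function is, apart from the factor $1/\sqrt{2\pi}$, the same function of its argument as the density.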
What is the most surprising characterization of the Gaussian (normal) distribution?
The expectation plus or minus the standard deviation gives the two inflection points of the density function.
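A one-line check (my own): differentiating the $N(\mu,\sigma^2)$ density twice gives $$ f''(x) = \frac{1}{\sigma^3\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \left(\frac{(x-\mu)^2}{\sigma^2} - 1\right), $$ which vanishes and changes sign exactly at $x = \mu \pm \sigma$.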
Where did the frequentist-Bayesian debate go?
I actually mildly disagree with the premise. Everyone is a Bayesian, if they really do have a probability distribution handed to them as a prior. The trouble comes about when they don't, and I think there's still a pretty good-sized divide on that topic. Having said that, though, I do agree that more and more people are less inclined to fight holy wars and just get on with doing what seems appropriate in any given situation. I would say that, as the profession advanced, both sides realized there were merits in the other side's approaches. Bayesians realized that evaluating how well Bayesian procedures would do if used over and over again (e.g., does this 95% credible interval (CI) actually contain the true parameter about 95% of the time?) required a frequentist outlook. Without this, there's no calibration of that "95%" to any real-world number. Robustness? Model building through iterative fitting etc.? Ideas that came up in the frequentist world, and were adapted by Bayesians starting in the late 1980s or so. Frequentists realized that regularization was good, and use it quite commonly these days - and Bayesian priors can be easily interpreted as regularization. Nonparametric modeling via cubic splines with a penalty function? Your penalty is my prior! Now we can all get along. The other major influence, I believe, is the staggering improvement in availability of high-quality software that will let you do analysis quickly. This comes in two parts - algorithms, e.g., Gibbs sampling and Metropolis-Hastings, and the software itself, R, SAS, ... I might be more of a pure Bayesian if I had to write all my code in C (I simply wouldn't have the time to try anything else), but as it is, I'll use gam in the mgcv package in R any time my model looks like I can fit it into that framework without too much squeezing, and I'm a better statistician for it. Familiarity with your opponent's methods, and realizing how much effort it can save / better quality it can provide to use them in some situations, even though they may not fit 100% into your default framework for thinking about an issue, is a big antidote to dogmatism.
Where did the frequentist-Bayesian debate go?
(The original answer is dated 2012.) This is a difficult question to answer. The number of people who truly do both is still very limited. Hard-core Bayesians despise the users of mainstream statistics for their use of $p$-values, a nonsensical, internally inconsistent statistic for Bayesians; and the mainstream statisticians just do not know Bayesian methods well enough to comment on them. In the light of this, you will see a lot of criticism of null hypothesis significance testing in the Bayesian literature (ranging as far as nearly pure biology or pure psychology journals), with little to no response from mainstreamers. There are conflicting indications as to "who won the debate" in the statistics profession. On one hand, the composition of an average statistics department is such that in most places you will find 10-15 mainstreamers vs. 1-2 Bayesians, although some departments are purely Bayesian, with no mainstreamers at all, except perhaps for consulting positions that have the job responsibility of producing experiment designs for biologists. Harvard, Duke, Carnegie Mellon, British Columbia, Montreal come to mind in North America; I am less familiar with the European scene. On the other hand, you will see that in journals like JASA or JRSS, probably 25-30% of papers are Bayesian. In a way, the Bayesian renaissance may be something like the burst of ANOVA papers in the 1950s: back then, people thought that pretty much any statistics problem could be framed as an ANOVA problem; right now, people think that pretty much anything can be solved with the right MCMC. My feeling is that applied areas don't bother figuring out the philosophical details, and just go with whatever is easier to work with. Bayesian methodology is just too damn complicated: on top of statistics, you also need to learn the art of computation (setting up the sampler, blocking, convergence diagnostics, blah-blah-blah) and be prepared to defend your priors (should you use objective priors, or should you use informative priors if the field has pretty much settled on the speed of light being 3e8 m/s, and does the choice of the prior even affect whether your posterior will be proper or not). So in most medical or psychology or economics applications, you will see mainstream approaches in the papers written by substantive researchers, although you can also catch an occasional glimpse of a Bayesian paper -- written by more sophisticated methodologists, or in collaboration with a Bayesian statistician, just because that was the person available at a local department to do this collaborative work. One area where, I think, the Bayesian framework still comes up short is model diagnostics -- and that is an important area for practitioners. In the Bayesian world, to diagnose a model, you need to build a more complicated one and choose whichever has a better fit by Bayes factor or BIC. So if you don't like the normality assumption for your linear regression, you can build a regression with Student errors, and let the data generate an estimate of the degrees of freedom, or you can get all fancy and have a Dirichlet process for your error terms and do some M-H jumps between different models. The mainstream approach would be to build a Q-Q plot of studentized residuals and remove outliers, and this is, again, so much simpler. I edited a chapter in a book on this -- see http://onlinelibrary.wiley.com/doi/10.1002/9780470583333.ch5/summary.
It is a very archetypal paper, in that it gave about 80 references on this debate, all supporting the Bayesian point of view. (I asked the author to extend it in a revised version, which says a lot about it :) ). Jim Berger from Duke, one of the leading Bayesian theorists, gave a number of lectures and wrote a number of very thoughtful articles on the topic. P.S. (edit in June 2020): the "blah-blah-blah" part of computation has been significantly simplified in recent years with Stan (https://mc-stan.org/). The NUTS sampler has fewer parameters to tweak, while offering additional diagnostics that make convergence failures more obvious. Model diagnostics have seen improvements, too, with posterior predictive checks and simulation-based calibration.
Where did the frequentist-Bayesian debate go?
There is a good reason for still having both, which is that a good craftsman will want to select the best tool for the task at hand, and both Bayesian and frequentist methods have applications where they are the best tool for the job. However, often the wrong tool for the job is used, because frequentist statistics are more amenable to a "statistics cookbook" approach, which makes them easier to apply in science and engineering than their Bayesian counterparts, even though the Bayesian methods provide a more direct answer to the question posed (which is generally what we can infer from the particular sample of data we actually have). I am not greatly in favour of this, as the "cookbook" approach leads to using statistics without a solid understanding of what you are actually doing, which is why things like the p-value fallacy crop up again and again. However, as time progresses, the software tools for the Bayesian approach will improve and they will be used more frequently, as jbowman rightly says. I am a Bayesian by inclination (it seems to make a lot more sense to me than the frequentist approach); however, I end up using frequentist statistics in my papers, partly because I will have trouble with the reviewers if I use Bayesian statistics, as they will be "non-standard". Finally (somewhat tongue in cheek ;o), to quote Max Planck: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
Where did the frequentist-Bayesian debate go?
I don't think the Frequentists and Bayesians give different answers to the same questions. I think they are prepared to answer different questions. Therefore, I don't think it makes sense to talk much about one side winning, or even to talk about compromise. Consider all the questions we might want to ask. Many are just impossible questions ("What is the true value of $\theta$?"). It's more useful to consider the subset of these questions that can be answered given various assumptions. The larger subset is the set of questions that can be answered if you do allow yourself to use priors. Call this set BF. There is a subset of BF consisting of the questions that do not depend on any prior. Call this second subset F. F is a subset of BF. Define B = BF \ F. However, we cannot choose which questions to answer. In order to make useful inferences about the world, we sometimes have to answer questions that are in B, and that means using a prior. Ideally, given an estimator, you would do a thorough analysis. You might use a prior, but it would also be cool if you could prove nice things about your estimator which do not depend on any prior. That doesn't mean you can ditch the prior; maybe the really interesting questions require a prior. Everybody agrees on how to answer the questions in F. The worry is whether the really 'interesting' questions are in F or in B.

An example: a patient walks into the doctor's office and is either healthy (H) or sick (S). There is a test that we run, which will return positive (+) or negative (-). The test never gives false negatives, i.e. $\mathcal{P}(-|S) = 0$. But it will sometimes give false positives: $\mathcal{P}(+|H) = 0.05$. We have a piece of card, and the testing machine will write + or - on one side of the card. Imagine, if you will, that we have an oracle who somehow knows the truth, and this oracle writes the true state, H or S, on the other side of the card before putting the card into an envelope. As the statistically trained doctor, what can we say about the card in the envelope before we open it? The following statements can be made (these are in F above): If S is on one side of the card, then the other side will be +: $\mathcal{P}(+|S) = 1$. If H, then the other side will be + with 5% probability and - with 95% probability: $\mathcal{P}(-|H) = 0.95$. (Summarizing the last two points:) The probability that the two sides match is at least 95%: $\mathcal{P}( (+,S) \cup (-,H) ) \geq 0.95$. We don't know what $\mathcal{P}( (+,S) )$ or $\mathcal{P}( (-,H) )$ is. We can't really answer that without some sort of prior for $\mathcal{P}(S)$. But we can make statements about the sum of those two probabilities. This is as far as we can go for now. Before opening the envelope, we can make very positive statements about the accuracy of the test. There is (at least) 95% probability that the test result matches the truth. But what happens when we actually open the card? Given that the test result is positive (or negative), what can we say about whether the patient is healthy or sick? If the test is positive (+), there is nothing we can say. Maybe they are healthy, and maybe not. Depending on the current prevalence of the disease ($\mathcal{P}(S)$), it might be the case that most patients who test positive are healthy, or it might be the case that most are sick. We can't put any bounds on this without first allowing ourselves to put some bounds on $\mathcal{P}(S)$.

In this simple example, it's clear that everybody with a negative test result is healthy. There are no false negatives, and hence every statistician will happily send that patient home. Therefore, it makes no sense to pay for the advice of a statistician unless the test result has been positive. The three statements above are correct, and quite simple. But they're also useless! The really interesting question, in this admittedly contrived model, is $$ \mathcal{P}(S|+) $$ and this cannot be answered without $\mathcal{P}(S)$ (i.e. a prior, or at least some bounds on the prior). I don't deny this is perhaps an oversimplified model, but it does demonstrate that if we want to make useful statements about the health of those patients, we must start off with some prior belief about their health.
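To see how strongly the answer depends on the prior, here is a small computation (my own sketch, not part of the answer) of $\mathcal{P}(S|+)$ via Bayes' theorem for a range of prevalences $\mathcal{P}(S)$, using the error rates from the example, $\mathcal{P}(+|S)=1$ and $\mathcal{P}(+|H)=0.05$:

    def p_sick_given_positive(prior_sick, p_pos_given_sick=1.0, p_pos_given_healthy=0.05):
        # Bayes' theorem: P(S|+) = P(+|S) P(S) / [P(+|S) P(S) + P(+|H) P(H)]
        numerator = p_pos_given_sick * prior_sick
        denominator = numerator + p_pos_given_healthy * (1.0 - prior_sick)
        return numerator / denominator

    for prior in (0.001, 0.01, 0.1, 0.5):
        print(prior, round(p_sick_given_positive(prior), 3))
    # 0.001 -> 0.02, 0.01 -> 0.168, 0.1 -> 0.69, 0.5 -> 0.952

With a low prevalence most positives are healthy, while with a high prevalence most are sick, exactly as the answer argues.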
Where did the frequentist-Bayesian debate go?
As you'll see, there's quite a lot of frequentist-Bayesian debate going on. In fact, I think it's hotter than ever, and less dogmatic. You might be interested in my blog: http://errorstatistics.com
Where did the frequentist-Bayesian debate go?
Many people (outside the specialist experts) who think they are frequentist are in fact Bayesian. This makes the debate a bit pointless. I think that Bayesianism won, but that there are still many Bayesians who think they are frequentist. There are some people who think that they don't use priors and hence they think they are frequentist. This is dangerous logic. This is not so much about priors (uniform priors or non-uniform), the real difference is more subtle. (I'm not formally in the statistics department; my background is maths and computer science. I'm writing because of difficulties I've had trying to discuss this 'debate' with other non-statisticians, and even with some early-career statisticians.) The MLE is actually a Bayesian method. Some people will say "I'm a frequentist because I use the MLE to estimate my parameters". I have seen this in peer-reviewed literature. This is nonsense and is based on this (unsaid, but implied) myth that a frequentist is somebody who uses a uniform prior instead of a non-uniform prior). Consider drawing a single number from a normal distribution with known mean, $\mu = 0$, and unknown variance. Call this variance $\theta$. $ X \equiv N(\mu = 0, \sigma^2 = \theta) $ Now consider the likelihood function. This function has two parameters, $x$ and $\theta$ and it returns the probability, given $\theta$, of $x$. $ f(x,\theta) = \mathrm{P}_{\sigma^2=\theta} (X=x) = \frac{1}{\sqrt{2\pi \theta}} e^{-\frac{x^2}{2\theta}} $ You can imagine plotting this in a heatmap, with $x$ on the x-axis and $\theta$ on the y-axis, and using the colour (or z-axis). Here is the plot, with contour lines and colours. First, a few observations. If you fix on a single value of $\theta$, then you can take the corresponding horizontal slice through the heatmap. This slice will give you the pdf for that value of $\theta$. Obviously, the area under the curve in that slice will be 1. On the other hand, if you fix on a single value of $x$, and then look at the corresponding vertical slice, then there is no such guarantee about the area under the curve. This distinction between the horizontal and vertical slices is crucial, and I found this analogy helped me to understand the frequentist approach to bias. A Bayesian is somebody who says For this value of x, which values of $\theta$ give a 'high enough' value of $f(x,\theta)$?. Alternatively, a Bayesian might include a prior, $g(\theta)$, but they are still talking about for this value of x, which values of $\theta$ give a high enough value of $f(x,\theta)g(\theta)$? So a Bayesian fixes x and looks at the corresponding vertical slice in that contour plot (or in the variant plot incorporating the prior). In this slice, the area under the curve need not be 1 (as I said earlier). A Bayesian 95% credible interval (CI) is the interval which contains 95% of the available area. For example, if the area is 2, then the area under the Bayesian CI must be 1.9. On the other hand, a frequentist will ignore x and first consider fixing $\theta$, and will ask: For this $\theta$, which values of x will appear most often? In this example, with $\mathcal{N}(\mu=0, \sigma^2 = \theta)$, one answer to this frequentist question is: "For a given $\theta$, 95% of the $x$ will appear between $-3\sqrt\theta$ and $+3\sqrt\theta$." So a frequentist is more concerned with the horizontal lines corresponding to fixed values of $\theta$. This is not the only way to construct the frequentist CI, it's not even a good (narrow) one, but bear with me for a moment. 
The best way to interpret the word 'interval' here is not as an interval on a 1-d line, but as a region of the 2-d plane above: an 'interval' is a subset of the 2-d plane, not of any 1-d line. If somebody proposes such an 'interval', we then have to test whether it is valid at the 95% confidence/credible level.

A frequentist checks the validity of the 'interval' by considering each horizontal slice in turn and looking at the area under the curve. As noted before, that area is always 1; the crucial requirement is that the part of it lying inside the 'interval' be at least 0.95. A Bayesian instead checks validity by looking at the vertical slices: again the area under the curve is compared with the sub-area lying under the 'interval', and if the latter is at least 95% of the former, then the 'interval' is a valid 95% Bayesian credible interval.

Now that we know how to test whether a particular 'interval' is valid, the question is how to choose the best one among the valid options. This can be a black art, but generally you want the narrowest interval, and both approaches tend to agree here - you consider the vertical slices and try to make the interval as narrow as possible within each of them. I have not attempted to define the narrowest possible frequentist confidence interval in the example above; see the comments by @cardinal below for examples of narrower intervals. My goal is not to find the best intervals, but to emphasize the difference between the horizontal and vertical slices in determining validity.

An interval that satisfies the conditions of a 95% frequentist confidence interval will usually not satisfy the conditions of a 95% Bayesian credible interval, and vice versa. Both approaches want narrow intervals, i.e. when considering one vertical slice we want to make the (1-d) interval in that slice as narrow as possible. The difference lies in how the 95% is enforced: a frequentist will only consider proposed 'intervals' where 95% of each horizontal slice's area is under the 'interval', whereas a Bayesian will insist that 95% of each vertical slice's area is under the 'interval'. Many non-statisticians don't appreciate this and focus only on the vertical slices - which makes them Bayesians, even if they think otherwise.
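As a companion to the description above, here is a small sketch (again my own addition; the grid points and the flat prior on a finite range are assumptions chosen purely for illustration) of the two validity checks applied to the wide region $\{(x,\theta): |x| \le 3\sqrt\theta\}$ used earlier, i.e. after observing $x$ one reports the set $\{\theta \ge x^2/9\}$. The frequentist check runs over horizontal slices, the Bayesian check over vertical slices.

```python
# Checking one proposed 2-d 'interval' both ways: the region |x| <= 3*sqrt(theta),
# i.e. after observing x the reported set of variances is {theta >= x**2 / 9}.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def f(x, theta):
    """Likelihood surface: density of N(0, theta) at x."""
    return np.exp(-x**2 / (2.0 * theta)) / np.sqrt(2.0 * np.pi * theta)

# Frequentist check (horizontal slices): for each fixed theta, the probability
# that X lands inside the region must be at least 0.95.  Here it is
# P(|X| <= 3*sqrt(theta)) = P(|Z| <= 3), about 0.997 for every theta, so it passes.
for theta in (0.5, 1.0, 4.0):
    s = np.sqrt(theta)
    coverage = norm.cdf(3 * s, scale=s) - norm.cdf(-3 * s, scale=s)
    print(f"theta={theta}: frequentist coverage = {coverage:.4f}")

# Bayesian check (vertical slices): for each fixed x, at least 95% of the area
# under f(x, theta) * g(theta) must lie inside the region.  g is taken to be a
# flat prior on (0, 50] purely for illustration -- an assumption, not a recommendation.
for x in (0.5, 1.0, 3.0):
    total, _ = quad(lambda t: f(x, t), 1e-9, 50.0)
    inside, _ = quad(lambda t: f(x, t), max(x**2 / 9.0, 1e-9), 50.0)
    print(f"x={x}: Bayesian credibility = {inside / total:.4f}")
```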
3,400
Where did the frequentist-Bayesian debate go?
These are actually apples and oranges if you dig deep enough. The frequentist-vs-Bayesian dilemma is really a philosophical question about what probabilities are. Bayes discovered a useful tool for handling probabilities and treated them as primary entities in his mathematical framework; Fisher, Pearson, Gosset and others discovered useful tools of their own and framed them within a hypothetico-deductive perspective. Frequentists regard probabilities as reflecting how often outcomes occur under repeated trials of a system, while Bayesians take probability to be a more fundamental concept describing the distribution of possible configurations of a random system.

In practical terms, we call models Bayesian when they are based on Bayes' theorem, computing posterior probabilities from a prior, a likelihood function and the model evidence. Frequentist models are harder to define, because the label covers many different statistical frameworks; in practice it generally means p-values and estimation by maximum likelihood (where the exact solution is often obtained via least squares). One sometimes hears that Bayesian analysis yields probability distributions instead of testing hypotheses in the classical manner (e.g. against a null hypothesis). That, too, is a misunderstanding: testing falsifiable scenarios is an epistemological choice and can be done with any tools, including non-mathematical ones.
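To illustrate that 'practical terms' contrast, here is a tiny sketch of my own (the coin-flip data, the flat Beta(1, 1) prior and the grid are all assumptions chosen for illustration, not anything from the answer above): the same data are summarised once with a maximum-likelihood estimate and a p-value, and once with a posterior distribution obtained from prior times likelihood, normalised by the evidence.

```python
# The same coin-flip data analysed both ways.  Illustrative sketch only.
import numpy as np
from scipy.stats import binomtest, beta

heads, n = 14, 20                              # hypothetical data

# Frequentist summary: maximum-likelihood estimate and a p-value against p = 0.5.
p_hat = heads / n
p_value = binomtest(heads, n, p=0.5).pvalue
print(f"MLE = {p_hat:.2f}, two-sided p-value vs p=0.5: {p_value:.3f}")

# Bayesian summary: posterior proportional to prior * likelihood, normalised by
# the evidence (here just the sum over the grid).
grid = np.linspace(0.001, 0.999, 999)
prior = beta.pdf(grid, 1, 1)                   # flat Beta(1, 1) prior (an assumption)
likelihood = grid**heads * (1 - grid)**(n - heads)
unnormalised = prior * likelihood
posterior = unnormalised / unnormalised.sum()  # discrete distribution over the grid
print(f"posterior P(p > 0.5) = {posterior[grid > 0.5].sum():.3f}")
```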