5,001
Cumming (2008) claims that the distribution of p-values obtained in replications depends only on the original p-value. How can this be true?
Thanks for all the interesting discussions! When writing that 2008 article, it took me a while to convince myself that the distribution of replication p (the p value given by an exact replication of a study, meaning a study that is exactly the same but with a new sample) depends only on the p given by the original study. (In the paper I assume a normally distributed population and random sampling, and that our studies aim to estimate the mean of the population.) Therefore the p interval (the 80% prediction interval for replication p) is the same, whatever the N, power, or true effect size of the original study. Sure, that's at first unbelievable. But note carefully that my original statement is based on knowing p from the original study.

Think of it this way. Suppose you tell me that your original study has found p = .05. You tell me nothing else about the study. I know that the 95% CI on your sample mean extends exactly to zero (assuming p was calculated for a null hypothesis of zero). So your sample mean equals MoE (the length of one arm of that 95% CI), because it is that distance from zero. The sampling distribution of means from studies like yours has standard deviation MoE/1.96. That's the standard error.

Consider the mean given by an exact replication. The distribution of that replication mean has mean MoE, i.e. that distribution is centred on your original sample mean. Consider the difference between your sample mean and a replication mean. It has variance equal to the sum of the variances of the means of studies like your original study and of replications. That's twice the variance of studies like your original study, i.e. 2 x SE^2, which is 2 x (MoE/1.96)^2. So the SD of that difference is SQRT(2) x MoE/1.96. We therefore know the distribution of the replication mean: its mean is MoE and its SD is SQRT(2) x MoE/1.96. Sure, the horizontal scale is arbitrary, but we only need to know this distribution in relation to the CI from your original study. As replications are run, most of the means (around 83%) will fall in that original 95% CI, around 8% will fall below it (i.e. below zero, if your original mean was >0), and around 8% will fall above it. If we know where a replication mean falls in relation to your original CI, we can calculate its p value. We know the distribution of such replication means (in relation to your CI), so we can figure out the distribution of the replication p value.

The only assumption we are making about the replication is that it is exact, i.e. it came from the same population, with the same effect size, as your original study, and that N (and the experimental design) was the same as in your study. All the above is just a restating of the argument in the article, without pictures.

Still informally, it may be helpful to think what p = .05 in the original study implies. It could mean that you have an enormous study with a tiny effect size, or a tiny study with a giant effect size. Either way, if you repeat that study (same N, same population) then you will no doubt get a somewhat different sample mean. It turns out that, in terms of the p value, 'somewhat different' is the same, whether you had the enormous or the tiny study. So, tell me only your p value and I'll tell you your p interval. Geoff
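A minimal R sketch of this argument (an illustration under the stated assumptions of a normal population, known SD, z-based two-tailed p, and exact replication; the helper name replication_p is just for this example). Note that the only input is the original p, which is the point of the claim; exact quantiles will differ from the article's Table 1 depending on whether replication p is taken one- or two-tailed.

set.seed(1)

replication_p <- function(p_orig, n_sim = 1e5) {
  z1 <- qnorm(1 - p_orig / 2)          # original z score recovered from the two-tailed p
  # the difference between replication and original means has SD sqrt(2) x SE,
  # so on the z scale the replication z is Normal(z1, sqrt(2))
  z2 <- rnorm(n_sim, mean = z1, sd = sqrt(2))
  2 * pnorm(-abs(z2))                  # two-tailed replication p values
}

p_rep <- replication_p(0.05)
quantile(p_rep, c(0.10, 0.90))         # an approximate 80% prediction interval for replication p
mean(p_rep < 0.05)                     # chance the replication reaches p < .05 (about one half here)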
5,002
Cumming (2008) claims that the distribution of p-values obtained in replications depends only on the original p-value. How can this be true?
The issue has been clarified by @GeoMatt22, and I've been delighted to see @GeoffCumming coming here to participate in the discussion. I am posting this answer as a further commentary.

As it turns out, this discussion goes back at least to Goodman (1992) A comment on replication, P-values and evidence and a later reply, Senn (2002) Letter to the Editor. I can highly recommend reading these two brief articles, particularly the one by Stephen Senn; I find myself fully agreeing with Senn. If I had read these papers before asking this question, I would most likely never have posted it.

Goodman (unlike Cumming) states very clearly that he considers a Bayesian setting with a flat prior. He does not present $p$-value distributions as Cumming does, and instead reports probabilities of observing a "significant" $p<0.05$ result in a replication experiment. His main point is that these probabilities are surprisingly low (even for $p=0.001$ it is only $0.78$). In particular, for $p=0.05$ it is only $0.5$. (This latter $1/2$ probability remains the same for any $\alpha$ and $p=\alpha$.)

The point of Senn's reply is that this is a useful observation which, however, does not undermine $p$-values in any way and does not, contrary to Goodman, mean that $p$-values "overstate the evidence against the null". He writes:

"I also consider that his [Goodman's] demonstration is useful for two reasons. First, it serves as a warning for anybody planning a further similar study to one just completed (and which has a marginally significant result) that this may not be matched in the second study. Second, it serves as a warning that apparent inconsistency in results from individual studies may be expected to be common and that one should not overreact to this phenomenon."

Senn reminds us that one-sided $p$-values can be understood as Bayesian posterior probabilities of $H_0:\mu<0$ under the flat prior for $\mu$ (an improper prior on the whole real line) [see Marsman & Wagenmakers 2016 for a brief discussion of this fact and some citations]. If so, then having obtained any particular $p$-value in one experiment, the probability that the next experiment will yield a lower $p$-value has to be $1/2$; otherwise future replications could somehow provide additional evidence before being conducted. So it makes total sense that for $p=0.05$ Goodman obtained probability $0.5$. And indeed, all replication distributions computed by Cumming and @GeoMatt22 have medians at the respective $p_\mathrm{obs}$.

We do not, however, need this replication probability to be higher than $0.5$ to believe that the efficacy of the treatment is probable. A long series of trials, $50$ per cent of which were significant at the $5$ per cent level, would be convincing evidence that the treatment was effective.

Incidentally, anybody who has looked at the predictive distribution of $p$-values for, say, a t-test of given size and power (see e.g. here) will not be surprised that requiring a median at $p=0.05$ necessarily makes this distribution pretty broad, with a fat tail going towards $1$. In this light, the broad intervals reported by Cumming cease to be surprising. What they do suggest is that one should use larger sample sizes when trying to replicate an experiment; indeed, this is a standard recommendation for replication studies (e.g. Uri Simonsohn suggests, as a rule of thumb, increasing the sample size $2.5$-fold).
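A quick numerical check of the median claim (a sketch, using the same flat-prior predictive model as in the earlier R snippet, with one-sided p values): whatever the observed p, about half of the simulated replications give a smaller p.

set.seed(2)
p_obs <- c(0.001, 0.01, 0.05, 0.1)
sapply(p_obs, function(p) {
  z1 <- qnorm(1 - p)                 # one-sided z for the observed p
  z2 <- rnorm(1e5, z1, sqrt(2))      # predictive draws for the replication z
  mean(pnorm(-z2) < p)               # fraction of replications with a smaller p
})
# each entry is close to 0.5, regardless of the observed p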
5,003
Cumming (2008) claims that the distribution of p-values obtained in replications depends only on the original p-value. How can this be true?
Thanks everyone for further interesting discussion. Rather than making my comments point by point, I'll offer some general reflections.

Bayes. I have nothing at all against Bayesian approaches. From the beginning I've expected that a Bayesian analysis, assuming a flat or diffuse prior, would give the same or very similar prediction intervals. There is a para on p. 291 in the 2008 article about that, partly prompted by one of the reviewers. So I'm pleased to see, above, a working through of that approach. That's great, but it's a very different approach from the one I took.

As an aside, I have chosen to work on advocacy of confidence intervals (the new statistics: effect sizes, CIs, meta-analysis) rather than Bayesian approaches to estimation (based on credible intervals) because I don't know how to explain the Bayesian approaches to beginners sufficiently well. I haven't seen any truly introductory Bayesian textbook that I feel I could use with beginners, or that is likely to be found accessible and convincing by large numbers of researchers. Therefore, we need to look elsewhere if we want to have a decent chance of improving the way researchers do their statistical inference. Yes, we need to move beyond p values, and shift from dichotomous decision making to estimation, and Bayesians can do that. But much more likely to achieve practical change, imho, is a conventional CI approach. That's why our intro statistics textbook, recently released, takes the new statistics approach. See www.thenewstatistics.com

Back to reflections. Central to my analysis is what I mean by knowing only the p value from the first study. The assumptions I make are stated (normal population, random sampling, known population SD so we can use z rather than t calculations as we conduct inference about the population mean, exact replication). But that's all I'm assuming. My question is 'given only p from the initial experiment, how far can we go?' My conclusion is that we can find the distribution of p expected from a replication experiment. From that distribution we can derive p intervals, or any probability of interest, such as the probability that the replication will give p < .05, or any other value of interest.

The core of the argument, and perhaps the step worth most reflection, is illustrated in Figure A2 in the article. The lower half is probably unproblematic. If we know mu (usually achieved by assuming it equals the mean from the initial study) then the estimation errors, represented by the thick line segments, have a known distribution (normal, mean mu, SD as explained in the caption). Then the big step: consider the upper half of Figure A2. We have NO information about mu. No information—not any hidden assumption about a prior. Yet we can state the distribution of those thick line segments: normal, mean zero, SD = SQRT(2) times the SD in the lower half. That gives us what we need to find the distribution of replication p.

The resulting p intervals are astonishingly long—at least I feel astonishment when I compare with the way p values are virtually universally used by researchers. Researchers typically obsess about the second or third decimal place of a p value, without appreciating that the value they are seeing could very easily have been very different indeed. Hence my comments on pp. 293-4 about reporting p intervals to acknowledge the vagueness of p.

Long, yes, but that doesn't mean that p from the initial experiment means nothing. After a very low initial p, replications will tend, on average, to have smallish p values. After a higher initial p, replications will tend to have somewhat larger p values. See Table 1 on p. 292 and compare, for example, the p intervals in the right column for initial p = .001 and .1—two results conventionally considered to be miles apart. The two p intervals are definitely different, but there is enormous overlap between them. Replication of the .001 experiment could fairly easily give p larger than a replication of the .1 experiment. Although, most likely, it wouldn't.

As part of his PhD research, Jerry Lai reported (Lai et al., 2011) several nice studies that found that published researchers from a number of disciplines have subjective p intervals that are far too short. In other words, researchers tend to drastically underestimate how different the p value of a replication is likely to be.

My conclusion is that we should simply not use p values at all. Report and discuss the 95% CI, which conveys all the information in the data that tells us about the population mean we are investigating. Given the CI, the p value adds nothing, and is likely to suggest, wrongly, some degree of certainty (Significant! Not significant! The effect exists! It doesn't!). Sure, CIs and p values are based on the same theory, and we can convert from one to the other (there's lots on that in Chapter 6 of our intro textbook). But the CI gives way more information than p. Most importantly, it makes salient the extent of uncertainty. Given our human tendency to grasp for certainty, the extent of the CI is vital to consider. I've also attempted to highlight the variability of p values in the 'dance of the p values' videos. Google 'dance of the p values'. There are at least a couple of versions.

May all your confidence intervals be short! Geoff
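As a rough illustration of such p intervals, here is a short R sketch (the function name p_interval is just for this example). It uses one-sided p values and the Normal(z1, sqrt(2)) predictive distribution described above; the article's Table 1 may use different conventions, so exact numbers will differ.

p_interval <- function(p_init, coverage = 0.80) {
  z1 <- qnorm(1 - p_init)                                   # initial one-sided z
  z_q <- qnorm(c((1 - coverage) / 2, 1 - (1 - coverage) / 2),
               mean = z1, sd = sqrt(2))                     # central interval for replication z
  c(lower = pnorm(-z_q[2]), upper = pnorm(-z_q[1]))         # p is decreasing in z
}

p_interval(0.001)
p_interval(0.1)
# the two 80% intervals differ but overlap substantially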
5,004
Derive Variance of regression coefficient in simple linear regression
At the start of your derivation you multiply out the brackets $\sum_i (x_i - \bar{x})(y_i - \bar{y})$, in the process expanding both $y_i$ and $\bar{y}$. The former depends on the summation variable $i$, whereas the latter doesn't. If you leave $\bar{y}$ as is, the derivation is a lot simpler, because
\begin{align} \sum_i (x_i - \bar{x})\bar{y} &= \bar{y}\sum_i (x_i - \bar{x})\\ &= \bar{y}\left(\left(\sum_i x_i\right) - n\bar{x}\right)\\ &= \bar{y}\left(n\bar{x} - n\bar{x}\right)\\ &= 0. \end{align}
Hence
\begin{align} \sum_i (x_i - \bar{x})(y_i - \bar{y}) &= \sum_i (x_i - \bar{x})y_i - \sum_i (x_i - \bar{x})\bar{y}\\ &= \sum_i (x_i - \bar{x})y_i\\ &= \sum_i (x_i - \bar{x})(\beta_0 + \beta_1x_i + u_i ) \end{align}
and
\begin{align} \text{Var}(\hat{\beta_1}) & = \text{Var} \left(\frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} \right) \\ &= \text{Var} \left(\frac{\sum_i (x_i - \bar{x})(\beta_0 + \beta_1x_i + u_i )}{\sum_i (x_i - \bar{x})^2} \right), \;\;\;\text{substituting in the above} \\ &= \text{Var} \left(\frac{\sum_i (x_i - \bar{x})u_i}{\sum_i (x_i - \bar{x})^2} \right), \;\;\;\text{noting that only $u_i$ is a random variable} \\ &= \frac{\sum_i (x_i - \bar{x})^2\text{Var}(u_i)}{\left(\sum_i (x_i - \bar{x})^2\right)^2}, \;\;\;\text{by independence of the } u_i \text{ and } \text{Var}(kX)=k^2\text{Var}(X) \\ &= \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}, \end{align}
which is the result you want.

As a side note, I spent a long time trying to find an error in your derivation. In the end I decided that discretion was the better part of valour and it was best to try the simpler approach. However, for the record, I wasn't sure that this step was justified:
$$\begin{align} & = \frac{1}{(\sum_i (x_i - \bar{x})^2)^2} E\left[\left( \sum_i(x_i - \bar{x})(u_i - \sum_j \frac{u_j}{n})\right)^2 \right] \\ & = \frac{1}{(\sum_i (x_i - \bar{x})^2)^2} E\left[\sum_i(x_i - \bar{x})^2(u_i - \sum_j \frac{u_j}{n})^2 \right]\;\;\;\;\text{ , since } u_i \text{'s are iid} \end{align}$$
because it misses out the cross terms due to $\sum_j \frac{u_j}{n}$.
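For readers who like a numerical sanity check, here is a small R simulation (a sketch, not part of the answer above) comparing the Monte Carlo variance of $\hat{\beta}_1$ with the analytic expression $\sigma^2/\sum_i (x_i - \bar{x})^2$, with the $x_i$ held fixed across replications:

set.seed(42)
n <- 30; beta0 <- 1; beta1 <- 2; sigma <- 3
x <- runif(n, 0, 10)

b1_hat <- replicate(1e4, {
  u <- rnorm(n, sd = sigma)                       # u_i iid with variance sigma^2
  y <- beta0 + beta1 * x + u
  sum((x - mean(x)) * y) / sum((x - mean(x))^2)   # OLS slope estimate
})

var(b1_hat)                                       # Monte Carlo variance
sigma^2 / sum((x - mean(x))^2)                    # analytic value from the derivation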
5,005
Derive Variance of regression coefficient in simple linear regression
I believe the problem in your proof is the step where you take the expected value of the square of $\sum_i (x_i - \bar{x} )\left( u_i -\sum_j \frac{u_j}{n} \right)$. This is of the form $E \left[\left(\sum_i a_i b_i \right)^2 \right]$, where $a_i = x_i -\bar{x}$ and $b_i = u_i -\sum_j \frac{u_j}{n}$. So, upon squaring, we get $E \left[ \sum_{i,j} a_i a_j b_i b_j \right] = \sum_{i,j} a_i a_j E\left[b_i b_j \right]$. Now, writing $\bar{u} = \sum_j \frac{u_j}{n}$, explicit computation gives $E\left[b_i b_j \right] = E[u_i u_j] - E[u_i \bar{u}] - E[u_j \bar{u}] + E[\bar{u}^2] = \sigma^2\delta_{ij} - \tfrac{\sigma^2}{n} - \tfrac{\sigma^2}{n} + \tfrac{\sigma^2}{n} = \sigma^2 \left( \delta_{ij} -\frac{1}{n} \right)$, so $E \left[ \sum_{i,j} a_i a_j b_i b_j \right] = \sum_{i,j} a_i a_j \sigma^2 \left( \delta_{ij} -\frac{1}{n} \right) = \sigma^2 \sum_i a_i^2$, since $\sum_i a_i = 0$.
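A quick empirical check of that covariance identity in R (a sketch, not from the original answer):

set.seed(1)
n <- 5; sigma <- 2
U <- matrix(rnorm(1e5 * n, sd = sigma), ncol = n)   # each row: one draw of u_1, ..., u_n
B <- U - rowMeans(U)                                # b_i = u_i - ubar
round(cov(B), 2)                                    # empirical covariance of the b_i
round(sigma^2 * (diag(n) - 1/n), 2)                 # sigma^2 * (delta_ij - 1/n), approximately equal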
5,006
Derive Variance of regression coefficient in simple linear regression
Begin from "The derivation is as follow:" The 7th "=" is wrong. Because $\sum_i (x_i - \bar{x})(u_i - \bar{u})$ $ = \sum_i (x_i - \bar{x})u_i - \sum_i (x_i - \bar{x}) \bar{u}$ $ = \sum_i (x_i - \bar{x})u_i - \bar{u} \sum_i (x_i - \bar{x})$ $ = \sum_i (x_i - \bar{x})u_i - \bar{u} (\sum_i{x_i} -n \bar{x})$ $ = \sum_i (x_i - \bar{x})u_i - \bar{u} (\sum_i{x_i} -\sum_i{x_i})$ $ = \sum_i (x_i - \bar{x})u_i - \bar{u} 0$ $ = \sum_i (x_i - \bar{x})u_i$ So after 7th "=" it should be: $\frac {1} {(\sum_i(x_i-\bar{x})^2)^2}E\left[\left(\sum_i(x_i-\bar{x})u_i\right)^2\right]$ $=\frac {1} {(\sum_i(x_i-\bar{x})^2)^2}E\left(\sum_i(x_i-\bar{x})^2u_i^2 + 2\sum_{i\ne j}(x_i-\bar{x})(x_j-\bar{x})u_iu_j\right)$ =$\frac {1} {(\sum_i(x_i-\bar{x})^2)^2}E\left(\sum_i(x_i-\bar{x})^2u_i^2\right) + 2E\left(\sum_{i\ne j}(x_i-\bar{x})(x_j-\bar{x})u_iu_j\right)$ =$\frac {1} {(\sum_i(x_i-\bar{x})^2)^2}E\left(\sum_i(x_i-\bar{x})^2u_i^2\right) $, because $u_i$ and $u_j$ are independent and mean 0, so $E(u_iu_j) =0$ =$\frac {1} {(\sum_i(x_i-\bar{x})^2)^2}\left(\sum_i(x_i-\bar{x})^2E(u_i^2)\right) $ $\frac {\sigma^2} {(\sum_i(x_i-\bar{x})^2)^2}$
5,007
Determining sample size necessary for bootstrap method / Proposed Method
I took interest in this question because I saw the word bootstrap and I have written books on the bootstrap. Also, people often ask "How many bootstrap samples do I need to get a good Monte Carlo approximation to the bootstrap result?" My suggested answer to that question is to keep increasing the number until you get convergence. No single number fits all problems.

But that is apparently not the question you are asking. You seem to be asking what the original sample size needs to be for the bootstrap to work.

First of all, I do not agree with your premise. The basic nonparametric bootstrap assumes that the sample is taken at random from a population. So for any sample size $n$ the distribution for samples chosen at random is the sampling distribution assumed in bootstrapping. The bootstrap principle says that choosing a random sample of size $n$ from the population can be mimicked by choosing a bootstrap sample of size $n$ from the original sample. Whether or not the bootstrap principle holds does not depend on any individual sample "looking representative of the population". What it does depend on is what you are estimating and some properties of the population distribution (e.g., this works for sampling means with population distributions that have finite variances, but not when they have infinite variances). It will not work for estimating extremes regardless of the population distribution.

The theory of the bootstrap involves showing consistency of the estimate. So it can be shown in theory that it works for large samples. But it can also work in small samples. I have seen it work particularly well for classification error rate estimation in small sample sizes such as 20 for bivariate data. Now if the sample size is very small---say 4---the bootstrap may not work, just because the set of possible bootstrap samples is not rich enough. In my book or Peter Hall's book this issue of too small a sample size is discussed. But the number of distinct bootstrap samples grows large very quickly, so this is not an issue even for sample sizes as small as 8.

You can take a look at these references:
My book: Bootstrap Methods: A Guide for Practitioners and Researchers
Hall's book: The Bootstrap and Edgeworth Expansion
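To put a number on "grows large very quickly": with $n$ distinct observations, the number of distinct bootstrap resamples (multisets of size $n$) is $\binom{2n-1}{n}$, which a one-liner in R can tabulate (a small illustration, not from the answer above):

n <- c(4, 8, 20)
data.frame(n = n, distinct_resamples = choose(2 * n - 1, n))
# 35 for n = 4, 6435 for n = 8, and about 6.9e10 for n = 20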
5,008
Determining sample size necessary for bootstrap method / Proposed Method
The resampling process creates many possible samples that a study could have drawn. The various combinations of values in the simulated samples collectively provide an estimate of the variability between random samples drawn from the same population. The range of these potential samples allows the procedure to construct confidence intervals and perform hypothesis testing. Importantly, under most conditions the bootstrap distribution converges on the correct sampling distribution as the size of the original sample increases; increasing the number of bootstrap resamples only reduces the Monte Carlo error of the approximation.

In regard to your question: "This is merely an idea on how to determine how large your original sample size needs to be in order to be reasonably certain that the sample distribution corresponds with the population distribution." This depends on the specific problem that you are examining and not on the number of bootstrap resamples. The purpose of taking many bootstrap resamples, usually at least 1,000, is merely to keep the Monte Carlo error low so that one can compute distributional statistics from the original sample, e.g. a 95% CI. But this cannot guarantee that the original sample is representative of the actual population distribution.
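For concreteness, a minimal percentile-bootstrap sketch in R (illustrative only; the number of resamples B and the statistic are arbitrary choices):

set.seed(7)
x <- rexp(30, rate = 1)                                # original sample, n = 30
B <- 2000                                              # number of bootstrap resamples
boot_means <- replicate(B, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))                  # percentile 95% CI for the mean
# Increasing B reduces Monte Carlo error only; it cannot make x more
# representative of the population.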
5,009
What algorithm is used in linear regression?
Regarding the question in the title, about which algorithm is used: from a linear algebra perspective, the linear regression algorithm is a way to solve a linear system $\mathbf{A}x=b$ with more equations than unknowns. In most cases there is no solution to this problem, because the vector $b$ doesn't belong to the column space of $\mathbf{A}$, $C(\mathbf{A})$. The best straight line is the one that makes the overall error $e=\mathbf{A}x-b$ as small as possible. It is convenient to measure "small" by the squared length, $\lVert e \rVert^2$, because it is non-negative and equals 0 only when $b\in C(\mathbf{A})$.

Projecting (orthogonally) the vector $b$ onto the nearest point in the column space of $\mathbf{A}$ gives the vector $b^*$ that solves the system with the minimum error (its components lie on the best straight line):
$\mathbf{A}^T\mathbf{A}\hat{x}=\mathbf{A}^Tb \Rightarrow \hat{x}=(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$
and the projected vector $b^*$ is given by:
$b^*=\mathbf{A}\hat{x}=\mathbf{A}(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$

Perhaps the least squares method is not used exclusively because squaring the errors overcompensates for outliers.

Let me give a simple example in R that solves the regression problem using this algorithm:

reg.data <- read.table(textConnection("
b x
12 0
10 1
8 2
11 3
6 4
7 5
2 6
3 7
3 8
"), header = TRUE)

attach(reg.data)

A <- model.matrix(b ~ x)

# intercept and slope
solve(t(A) %*% A) %*% t(A) %*% b

# fitted values - the projection of b onto C(A)
A %*% solve(t(A) %*% A) %*% t(A) %*% b

# The projection is easier if the orthogonal matrix Q is used,
# because t(Q) %*% Q = I
Q <- qr.Q(qr(A))
R <- qr.R(qr(A))

# intercept and slope
best.line <- solve(R) %*% t(Q) %*% b

# fitted values
Q %*% t(Q) %*% b

plot(x, b, pch = 16)
abline(best.line[1], best.line[2])
5,010
What algorithm is used in linear regression?
To answer the letter of the question, "ordinary least squares" is not an algorithm; rather it is a type of problem in computational linear algebra, of which linear regression is one example.

Usually one has data $\{(x_1,y_1),\dots,(x_m,y_m)\}$ and a tentative function ("model") to fit the data against, of the form $f(x)=c_1 f_1(x)+\dots+c_n f_n(x)$. The $f_j(x)$ are called "basis functions" and can be anything from monomials $x^j$ to trigonometric functions (e.g. $\sin(jx)$, $\cos(jx)$) and exponential functions ($\exp(-jx)$). The term "linear" in "linear regression" here does not refer to the basis functions, but to the coefficients $c_j$, in that taking the partial derivative of the model with respect to any of the $c_j$ gives you the factor multiplying $c_j$; that is, $f_j(x)$.

One now has an $m\times n$ rectangular matrix $\mathbf A$ ("design matrix") that (usually) has more rows than columns, and each entry is of the form $f_j(x_i)$, $i$ being the row index and $j$ being the column index. OLS is now the task of finding the vector $\mathbf c=(c_1\,\dots\,c_n)^\top$ that minimizes the quantity $\sqrt{\sum\limits_{j=1}^{m}\left(y_j-f(x_j)\right)^2}$ (in matrix notation, $\|\mathbf{A}\mathbf{c}-\mathbf{y}\|_2$ ; here, $\mathbf{y}=(y_1\,\dots\,y_m)^\top$ is usually called the "response vector").

There are at least three methods used in practice for computing least-squares solutions: the normal equations, QR decomposition, and singular value decomposition. In brief, they are ways to transform the matrix $\mathbf{A}$ into a product of matrices that are easily manipulated to solve for the vector $\mathbf{c}$.

George already showed the method of normal equations in his answer; one just solves the $n\times n$ set of linear equations $\mathbf{A}^\top\mathbf{A}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$ for $\mathbf{c}$. Due to the fact that the matrix $\mathbf{A}^\top\mathbf{A}$ is symmetric positive (semi)definite, the usual method used for this is Cholesky decomposition, which factors $\mathbf{A}^\top\mathbf{A}$ into the form $\mathbf{G}\mathbf{G}^\top$, with $\mathbf{G}$ a lower triangular matrix. The problem with this approach, despite the advantage of being able to compress the $m\times n$ design matrix into a (usually) much smaller $n\times n$ matrix, is that this operation is prone to loss of significant figures (this has something to do with the "condition number" of the design matrix).

A slightly better way is QR decomposition, which directly works with the design matrix. It factors $\mathbf{A}$ as $\mathbf{A}=\mathbf{Q}\mathbf{R}$, where $\mathbf{Q}$ is an orthogonal matrix (multiplying such a matrix with its transpose gives an identity matrix) and $\mathbf{R}$ is upper triangular. $\mathbf{c}$ is subsequently computed as $\mathbf{R}^{-1}\mathbf{Q}^\top\mathbf{y}$. For reasons I won't get into (just see any decent numerical linear algebra text, like this one), this has better numerical properties than the method of normal equations.

One variation in using the QR decomposition is the method of seminormal equations. Briefly, if one has the decomposition $\mathbf{A}=\mathbf{Q}\mathbf{R}$, the linear system to be solved takes the form
$$\mathbf{R}^\top\mathbf{R}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$$
Effectively, one is using the QR decomposition to form the Cholesky triangle of $\mathbf{A}^\top\mathbf{A}$ in this approach. This is useful for the case where $\mathbf{A}$ is sparse, and the explicit storage and/or formation of $\mathbf{Q}$ (or a factored version of it) is unwanted or impractical.

Finally, the most expensive, yet safest, way of solving OLS is the singular value decomposition (SVD). This time, $\mathbf{A}$ is factored as $\mathbf{A}=\mathbf{U}\mathbf \Sigma\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ are both orthogonal, and $\mathbf{\Sigma}$ is a diagonal matrix, whose diagonal entries are termed "singular values". The power of this decomposition lies in the diagnostic ability granted to you by the singular values, in that if one sees one or more tiny singular values, then it is likely that you have chosen a not entirely independent basis set, thus necessitating a reformulation of your model. (The "condition number" mentioned earlier is in fact related to the ratio of the largest singular value to the smallest one; the ratio of course becomes huge (and the matrix is thus ill-conditioned) if the smallest singular value is "tiny".)

This is merely a sketch of these three algorithms; any good book on computational statistics and numerical linear algebra should be able to give you more relevant details.
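To make the three routes concrete, here is a small R sketch (an illustration, not from the answer above) that fits the same toy least-squares problem via the normal equations, the QR decomposition, and the SVD; all three agree here, and differ mainly in cost and in how they behave when $\mathbf{A}$ is ill-conditioned:

set.seed(123)
m <- 50; n <- 3
A <- cbind(1, rnorm(m), rnorm(m))              # design matrix with intercept column
y <- A %*% c(2, -1, 0.5) + rnorm(m)            # synthetic response

# 1. Normal equations, solved via the Cholesky factor of A'A
c_normal <- chol2inv(chol(crossprod(A))) %*% crossprod(A, y)

# 2. QR decomposition: solve R c = Q'y by back-substitution
qrA <- qr(A)
c_qr <- backsolve(qr.R(qrA), crossprod(qr.Q(qrA), y))

# 3. SVD: c = V diag(1/d) U'y
s <- svd(A)
c_svd <- s$v %*% (crossprod(s$u, y) / s$d)

cbind(c_normal, c_qr, c_svd)                   # the three solutions agree up to rounding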
What algorithm is used in linear regression?
To answer the letter of the question, "ordinary least squares" is not an algorithm; rather it is a type of problem in computational linear algebra, of which linear regression is one example. Usually o
What algorithm is used in linear regression?

To answer the letter of the question, "ordinary least squares" is not an algorithm; rather it is a type of problem in computational linear algebra, of which linear regression is one example. Usually one has data $\{(x_1,y_1),\dots,(x_m,y_m)\}$ and a tentative function ("model") to fit the data against, of the form $f(x)=c_1 f_1(x)+\dots+c_n f_n(x)$. The $f_j(x)$ are called "basis functions" and can be anything from monomials $x^j$ to trigonometric functions (e.g. $\sin(jx)$, $\cos(jx)$) and exponential functions ($\exp(-jx)$). The term "linear" in "linear regression" here does not refer to the basis functions, but to the coefficients $c_j$, in that taking the partial derivative of the model with respect to any of the $c_j$ gives you the factor multiplying $c_j$; that is, $f_j(x)$.

One now has an $m\times n$ rectangular matrix $\mathbf A$ ("design matrix") that (usually) has more rows than columns, and each entry is of the form $f_j(x_i)$, $i$ being the row index and $j$ being the column index. OLS is now the task of finding the vector $\mathbf c=(c_1\,\dots\,c_n)^\top$ that minimizes the quantity $\sqrt{\sum\limits_{i=1}^{m}\left(y_i-f(x_i)\right)^2}$ (in matrix notation, $\|\mathbf{A}\mathbf{c}-\mathbf{y}\|_2$; here, $\mathbf{y}=(y_1\,\dots\,y_m)^\top$ is usually called the "response vector").

There are at least three methods used in practice for computing least-squares solutions: the normal equations, QR decomposition, and singular value decomposition. In brief, they are ways to transform the matrix $\mathbf{A}$ into a product of matrices that are easily manipulated to solve for the vector $\mathbf{c}$.

George already showed the method of normal equations in his answer; one just solves the $n\times n$ set of linear equations $\mathbf{A}^\top\mathbf{A}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$ for $\mathbf{c}$. Because the matrix $\mathbf{A}^\top\mathbf{A}$ is symmetric positive (semi)definite, the usual method used for this is Cholesky decomposition, which factors $\mathbf{A}^\top\mathbf{A}$ into the form $\mathbf{G}\mathbf{G}^\top$, with $\mathbf{G}$ a lower triangular matrix. The problem with this approach, despite the advantage of being able to compress the $m\times n$ design matrix into a (usually) much smaller $n\times n$ matrix, is that this operation is prone to loss of significant figures (this has something to do with the "condition number" of the design matrix).

A slightly better way is QR decomposition, which works directly with the design matrix. It factors $\mathbf{A}$ as $\mathbf{A}=\mathbf{Q}\mathbf{R}$, where $\mathbf{Q}$ is an orthogonal matrix (multiplying such a matrix with its transpose gives an identity matrix) and $\mathbf{R}$ is upper triangular. $\mathbf{c}$ is subsequently computed as $\mathbf{R}^{-1}\mathbf{Q}^\top\mathbf{y}$. For reasons I won't get into (just see any decent numerical linear algebra text, like this one), this has better numerical properties than the method of normal equations.

One variation on the QR decomposition is the method of seminormal equations. Briefly, if one has the decomposition $\mathbf{A}=\mathbf{Q}\mathbf{R}$, the linear system to be solved takes the form $$\mathbf{R}^\top\mathbf{R}\mathbf{c}=\mathbf{A}^\top\mathbf{y}$$ Effectively, one is using the QR decomposition to form the Cholesky triangle of $\mathbf{A}^\top\mathbf{A}$ in this approach. This is useful for the case where $\mathbf{A}$ is sparse, and the explicit storage and/or formation of $\mathbf{Q}$ (or a factored version of it) is unwanted or impractical.

Finally, the most expensive, yet safest, way of solving OLS is the singular value decomposition (SVD). This time, $\mathbf{A}$ is factored as $\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ are both orthogonal and $\mathbf{\Sigma}$ is a diagonal matrix whose diagonal entries are termed "singular values". The power of this decomposition lies in the diagnostic ability granted to you by the singular values, in that if one sees one or more tiny singular values, then it is likely that you have chosen a not entirely independent basis set, thus necessitating a reformulation of your model. (The "condition number" mentioned earlier is in fact related to the ratio of the largest singular value to the smallest one; the ratio of course becomes huge (and the matrix is thus ill-conditioned) if the smallest singular value is "tiny".)

This is merely a sketch of these three algorithms; any good book on computational statistics and numerical linear algebra should be able to give you more relevant details.
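To make the three approaches concrete, here is a minimal R sketch (not from the original answer; the data and variable names are invented for illustration) that fits the same straight-line model by the normal equations with a Cholesky factor, by QR, and by SVD. R's own lm() uses a QR decomposition internally.

    set.seed(1)
    x <- runif(50)
    y <- 2 + 3 * x + rnorm(50, sd = 0.5)
    A <- cbind(1, x)                          # design matrix: intercept and slope columns

    # 1) Normal equations, solved via the Cholesky factor of A'A
    R <- chol(crossprod(A))                   # A'A = R'R, with R upper triangular
    c_ne <- backsolve(R, forwardsolve(t(R), crossprod(A, y)))

    # 2) QR decomposition of the design matrix itself (what lm() does internally)
    c_qr <- qr.coef(qr(A), y)

    # 3) Singular value decomposition: c = V diag(1/sigma) U'y
    s <- svd(A)
    c_svd <- s$v %*% (crossprod(s$u, y) / s$d)

    cbind(c_ne, c_qr, c_svd)                  # the three solutions agree up to rounding error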
5,011
What algorithm is used in linear regression?
The wiki link: Estimation Methods for Linear Regression gives a fairly comprehensive list of estimation methods including OLS and the contexts in which alternative estimation methods are used.
5,012
What algorithm is used in linear regression?
It is easy to get confused between definitions and terminology. Both terms are used, sometimes interchangeably. A quick lookup on Wikipedia should help: see the articles on ordinary least squares and linear regression. Ordinary Least Squares (OLS) is a method used to fit linear regression models. Because of the demonstrable consistency and efficiency (under supplementary assumptions) of the OLS method, it is the dominant approach. See the articles for further leads.
5,013
What algorithm is used in linear regression?
I tend to think of 'least squares' as a criterion for defining the best fitting regression line (i.e., that which makes the sum of 'squared' residuals 'least') and the 'algorithm' in this context as the set of steps used to determine the regression coefficients that satisfy that criterion. This distinction suggests that it is possible to have different algorithms that would satisfy the same criterion. I'd be curious to know whether others make this distinction and what terminology they use.
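As a small, hedged illustration of that distinction (invented data; not part of the original answer), the same least-squares criterion can be satisfied by quite different algorithms: a closed-form solve and a generic numerical minimiser land on essentially the same coefficients.

    set.seed(2)
    x <- rnorm(40)
    y <- 1 + 2 * x + rnorm(40)

    # Algorithm A: closed-form solution via QR, as used by lm()
    coef(lm(y ~ x))

    # Algorithm B: direct numerical minimisation of the same criterion
    sse <- function(b) sum((y - b[1] - b[2] * x)^2)   # sum of squared residuals
    optim(c(0, 0), sse)$par                           # approximately the same coefficients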
5,014
What algorithm is used in linear regression?
An old book, yet one I find myself repeatedly turning to, is Lawson, C.L. and Hanson, R.J. Solving Least Squares Problems, Prentice-Hall, 1974. It contains a detailed and very readable discussion of some of the algorithms that previous answers have mentioned. You might want to look at it.
5,015
What references should be cited to support using 30 as a large enough sample size?
The choice of n = 30 for a boundary between small and large samples is a rule of thumb, only. There is a large number of books that quote (around) this value, for example, Hogg and Tanis' Probability and Statistical Inference (7e) says "greater than 25 or 30". That said, the story told to me was that the only reason 30 was regarded as a good boundary was because it made for pretty Student's t tables in the back of textbooks to fit nicely on one page. That, and the critical values (between Student's t and Normal) are only off by approximately up to 0.25, anyway, from df = 30 to df = infinity. For hand computation the difference didn't really matter. Nowadays it is easy to compute critical values for all sorts of things to 15 decimal places. On top of that we have resampling and permutation methods for which we aren't even restricted to parametric population distributions. In practice I never rely on n = 30. Plot the data. Superimpose a normal distribution, if you like. Visually assess whether a normal approximation is appropriate (and ask whether an approximation is even really needed). If generating samples for research and an approximation is obligatory, generate enough of a sample size to make the approximation as close as desired (or as close as computationally feasible).
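A hedged sketch of the "plot it and look" advice above (invented, deliberately skewed data; not from the original answer): overlay a fitted normal density and check a Q-Q plot rather than trusting n = 30.

    set.seed(3)
    samp <- rgamma(30, shape = 2, rate = 1)    # a skewed sample of size n = 30
    hist(samp, freq = FALSE, main = "n = 30: is a normal approximation reasonable?")
    curve(dnorm(x, mean = mean(samp), sd = sd(samp)), add = TRUE, lwd = 2)
    qqnorm(samp); qqline(samp)                 # the Q-Q plot is usually more informative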
5,016
What references should be cited to support using 30 as a large enough sample size?
Actually, the "magic number" 30 is a fallacy. See Jacob Cohen's delightful paper, Things I Have Learned (So Far) (American Psychologist, December 1990, 45(12), 1304-1312). This myth is his first example of how "some things you learn aren't so". [O]ne of my fellow doctoral candidates undertook a dissertation [with] a sample of only 20 cases per group. ... [L]ater I discovered ... that for a two-independent-group-mean comparison with $n = 30$ per group at the sanctified two-tailed $.05$ level, the probability that a medium-sized effect would be labeled as significant by ... a t test was only $.47$. Thus, it was approximately a coin flip whether one would get a significant result, even though, in reality, the effect size was meaningful. ... [My friend] ended up with nonsignificant results–with which he proceeded to demolish an important branch of psychoanalytic theory.
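Cohen's figure of roughly .47 is easy to check in R (a quick sketch, not part of the original answer): for two groups of n = 30 and a "medium" standardised effect of d = 0.5, the power of a two-sided t test at the .05 level is indeed about a coin flip.

    # Power of a two-sample, two-sided t test: n = 30 per group, Cohen's d = 0.5, alpha = .05
    power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05,
                 type = "two.sample", alternative = "two.sided")
    # the reported power is roughly 0.47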
5,017
What references should be cited to support using 30 as a large enough sample size?
Mostly an arbitrary rule of thumb. Whether the statement holds depends on several factors, for example the distribution of the data. If the data come from a Cauchy distribution, for example, even $30^{30}$ observations are not enough to estimate the mean (in that case even an infinite number of observations would not be enough to make $\bar{\mu}^{(n)}$ converge). The number 30 is also misleading if the values you draw are not independent of one another (again, there may be no convergence at all, regardless of sample size). More generally, the CLT rests on essentially two pillars: that the random variables are independent, i.e. that you can re-order your observations without losing any information; and that the r.v. come from a distribution with finite second moments, meaning that the classical estimators of the mean and standard deviation tend to converge as the sample size increases. (Both conditions can be somewhat weakened, but the differences are largely of a theoretical nature.)
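A quick sketch of the Cauchy point above (simulated data, not from the original answer): the running mean of Cauchy draws keeps jumping however large the sample gets, while the running mean of normal draws settles down.

    set.seed(4)
    n <- 10000
    plot(cumsum(rcauchy(n)) / seq_len(n), type = "l",
         xlab = "sample size", ylab = "running mean")     # Cauchy: does not converge
    lines(cumsum(rnorm(n)) / seq_len(n), col = "red")     # normal: converges towards 0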
5,018
What references should be cited to support using 30 as a large enough sample size?
IMO, it all depends on what you want to use your sample for. Two "silly" examples to illustrate what I mean: If you need to estimate a mean, 30 observations is more than enough. If you need to estimate a linear regression with 100 predictors, 30 observations will not be close to enough.
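To make the second "silly" example concrete (a sketch with invented data, not part of the original answer): with 30 rows and 100 predictors, ordinary least squares cannot even estimate all of the coefficients, let alone estimate them well.

    set.seed(5)
    X <- matrix(rnorm(30 * 100), nrow = 30)   # 30 observations, 100 predictors
    y <- rnorm(30)
    fit <- lm(y ~ X)
    sum(is.na(coef(fit)))                     # most coefficients come back NA: rank-deficient fit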
5,019
What references should be cited to support using 30 as a large enough sample size?
This is meant to supplement user1108's answer, which states: "That said, the story told to me was that the only reason 30 was regarded as a good boundary was because it made for pretty Student's t tables in the back of textbooks to fit nicely on one page. That, and the critical values (between Student's t and Normal) are only off by approximately up to 0.25, anyway, from df = 30 to df = infinity. For hand computation the difference didn't really matter." I did some investigation on this issue, and the earliest source I can find is Fisher's Statistical Methods for Research Workers (1925). I remember examining a copy of this text (you can see http://psychclassics.yorku.ca/Fisher/Methods/, for example) and noticing that the relevant table of Student's $t$ values, which stops at $n = 30$, fit neatly on one page. From what I recall reading in the text, there is nothing justifying why Fisher chose to stop at $n = 30$. So as far as I know, the only justification is that such tables could fit neatly on one page back in the day.
5,020
Bootstrap vs. permutation hypothesis testing
Both are popular and useful, but primarily for different uses. The permutation test is best for testing hypotheses and bootstrapping is best for estimating confidence intervals. Permutation tests test a specific null hypothesis of exchangeability, i.e. that only the random sampling/randomization explains the difference seen. This is the common case for things like t-tests and ANOVA. It can also be expanded to things like time series (null hypothesis that there is no serial correlation) or regression (null hypothesis of no relationship). Permutation tests can be used to create confidence intervals, but doing so requires many more assumptions, which may or may not be reasonable (so other methods are preferred). The Mann-Whitney/Wilcoxon test is actually a special case of a permutation test, so permutation tests are much more popular than some realize. The bootstrap estimates the variability of the sampling process and works well for estimating confidence intervals. You can do a test of hypothesis this way, but it tends to be less powerful than the permutation test in cases where the permutation test's assumptions hold.
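A minimal sketch of the two ideas side by side (simulated two-group data, not from the original answer): the permutation test reshuffles group labels under the null of exchangeability, while the bootstrap resamples within groups to get a confidence interval for the difference.

    set.seed(6)
    g1 <- rnorm(20, mean = 0)
    g2 <- rnorm(20, mean = 0.8)
    obs <- mean(g2) - mean(g1)

    # Permutation test: reshuffle group labels (sampling without replacement)
    pooled <- c(g1, g2)
    perm <- replicate(5000, {
      idx <- sample(40)
      mean(pooled[idx[21:40]]) - mean(pooled[idx[1:20]])
    })
    mean(abs(perm) >= abs(obs))          # two-sided permutation p-value

    # Bootstrap: resample within each group (sampling with replacement)
    boot <- replicate(5000, mean(sample(g2, replace = TRUE)) -
                            mean(sample(g1, replace = TRUE)))
    quantile(boot, c(0.025, 0.975))      # percentile 95% CI for the difference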
5,021
Bootstrap vs. permutation hypothesis testing
If you are using R, then they are all easy to implement. See, for instance, http://www.burns-stat.com/pages/Tutor/bootstrap_resampling.html I would say there is a third major technique: cross validation. This is used to test the predictive power of models.
5,022
Bootstrap vs. permutation hypothesis testing
My question is which resampling technique has gained the more popularity Bootstrapping or permutation tests?

Bootstrapping is mostly about generating large-sample standard errors or confidence intervals; permutation tests, as the name suggests, are mostly about testing. (Each can be adapted to the other task, though.)

How would we judge popularity? If we look at fields like psychology and education we can find plenty of use of rank-based tests like Wilcoxon-Mann-Whitney, the signed-rank test, rank-correlation tests and so on. These are all permutation tests (on the other hand, there are many instances where permutation tests of the original data could be used instead but usually are not). In some other application areas permutation tests are rarely used, but the varying popularity across application areas sometimes says more about the local culture of each area than about usefulness.

easier to implement?

In many cases - especially simpler ones - they're almost exactly equally easy: it's essentially the difference between sampling with replacement and sampling without replacement. In some of the more complex cases, bootstrapping is easier to do because (looking at it from the testing point of view) it operates as well under the alternative as under the null (at least naive implementations will; making it work well may be much more complicated). Exact permutation tests can be difficult in the more complex cases because a suitable exchangeable quantity may be unobservable; often a nearly-exchangeable quantity may be substituted at the price of exactness (and of being truly distribution-free).

Bootstrapping essentially gives up on the corresponding exactness criterion (e.g. exact coverage of intervals) from the outset, and instead focuses on trying to get reasonably good coverage in large samples (sometimes with less success than may be appreciated; if you haven't checked, don't assume your bootstrap gives the coverage you expect). Permutation tests can work in small samples (though the limited choice of significance levels can sometimes be a problem with very small samples), while the bootstrap is a large-sample technique (if you use it with small samples, in many cases the results may not be very useful).

I rarely see them as competitors on the same problem, and have used them on (different) real problems; often there will be a natural choice of which to use. There are benefits to both, but neither is a panacea. If you're hoping to reduce learning effort by focusing on only one of them, you're likely to be disappointed: both are essential parts of the resampling toolbox.
5,023
Why is "statistically significant" not enough?
Hypothesis testing versus parameter estimation

Typically, hypotheses are framed in a binary way. I'll put directional hypotheses to one side, as they don't change the issue much. It is common, at least in psychology, to talk about hypotheses such as: the difference between group means is or is not zero; the correlation is or is not zero; the regression coefficient is or is not zero; the r-square is or is not zero. In all these cases, there is a null hypothesis of no effect and an alternative hypothesis of an effect.

This binary thinking is generally not what we are most interested in. Once you think about your research question, you will almost always find that you are actually interested in estimating parameters. You are interested in the actual difference between group means, or the size of the correlation, or the size of the regression coefficient, or the amount of variance explained. Of course, when we get a sample of data, the sample estimate of a parameter is not the same as the population parameter. So we need a way of quantifying our uncertainty about what the value of the parameter might be. From a frequentist perspective, confidence intervals provide a means of doing so, although Bayesian purists might argue that they don't strictly permit the inference you might want to make. From a Bayesian perspective, credible intervals on posterior densities provide a more direct means of quantifying your uncertainty about the value of a population parameter.

Parameters / effect sizes

Moving away from the binary hypothesis-testing approach forces you to think in a continuous way. For example, what size difference in group means would be theoretically interesting? How would you map a difference between group means onto subjective language or practical implications? Standardised measures of effect, along with contextual norms, are one way of building a language for quantifying what different parameter values mean. Such measures are often labelled "effect sizes" (e.g., Cohen's d, r, $R^2$, etc.). However, it is perfectly reasonable, and often preferable, to talk about the importance of an effect using unstandardised measures (e.g., the difference in group means on meaningful unstandardised variables such as income levels, life expectancy, etc.). There's a huge literature in psychology (and other fields) critiquing a focus on p-values, null hypothesis significance testing, and so on (see this Google Scholar search). This literature often recommends reporting effect sizes with confidence intervals as a resolution (e.g., the APA Task Force report by Wilkinson, 1999).

Steps for moving away from binary hypothesis testing

If you are thinking about adopting this way of thinking, there are progressively more sophisticated approaches you can take (a small R sketch of the first two follows below):

Approach 1a. Report the point estimate of your sample effect (e.g., group mean differences) in both raw and standardised terms. When you report your results, discuss what such a magnitude would mean for theory and practice.

Approach 1b. Add to 1a, at least at a very basic level, some sense of the uncertainty around your parameter estimate based on your sample size.

Approach 2. Also report confidence intervals on effect sizes and incorporate this uncertainty into your thinking about the plausible values of the parameter of interest.

Approach 3. Report Bayesian credible intervals, and examine the implications of various assumptions on that credible interval, such as the choice of prior, the data-generating process implied by your model, and so on.

Among many possible references, you'll see Andrew Gelman talk a lot about these issues on his blog and in his research.

References

Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5(2), 241.

Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54(8), 594. PDF
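To make Approaches 1a-2 above concrete, here is a minimal R sketch (invented data; not part of the original answer) that reports the raw difference, its confidence interval, and a standardised effect size rather than only a p-value.

    set.seed(7)
    a <- rnorm(40, mean = 100, sd = 15)
    b <- rnorm(40, mean = 106, sd = 15)
    tt <- t.test(b, a)

    mean(b) - mean(a)            # raw effect: difference in means, in the original units
    tt$conf.int                  # 95% CI for that difference
    (mean(b) - mean(a)) / sqrt((var(a) + var(b)) / 2)   # standardised effect (Cohen's d, equal-n pooled SD)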
5,024
Why is "statistically significant" not enough?
Just to add to the existing answers (which are great, by the way). It is important to be aware that statistical significance is a function of sample size. When you get more and more data, you can find statistically significant differences wherever you look. When the amount of data is huge, even the tiniest effects can lead to statistical significance. This does not imply said effects are meaningful in any practical way. When testing for differences, $p$-values alone are not enough because the required effect size to produce a statistically significant result decreases with increasing sample size. In practice, the actual question is usually whether there is an effect of a given minimal size (to be relevant). When samples become very large, $p$-values become close to meaningless in answering the actual question.
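A small simulation sketch of this point (made-up numbers, not part of the original answer): the same tiny true difference of 0.02 standard deviations will typically be far from significant at n = 100 per group, but essentially always "highly significant" at n = 1,000,000 per group.

    set.seed(8)
    for (n in c(100, 1e6)) {
      x <- rnorm(n, mean = 0)
      y <- rnorm(n, mean = 0.02)              # a tiny, practically irrelevant true effect
      cat("n =", n, " p =", t.test(x, y)$p.value, "\n")
    }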
5,025
Why is "statistically significant" not enough?
If there was a reasonable basis for suspecting your hypothesis might be true before you ran your study; and you ran a good study (e.g., you didn't induce any confounds); and your results were consistent with your hypothesis and statistically significant; then I think you are fine, as far as that goes. However, you shouldn't think that significance is all that is important in your results. First, you should look at the effect size as well (see my answer here: Effect size as the hypothesis for significance testing). You might also want to explore your data a bit and see if you can find any potentially interesting surprises that might be worth following up on.
5,026
Why is "statistically significant" not enough?
Before reporting this and this and this and this, start by formulating what you want to learn from your experimental data. The main problem with the usual hypothesis tests (the tests we learn at school...) is not the binarity: the main problem is that these are tests of hypotheses which are not the hypotheses of interest. See slide 13 here (download the pdf to appreciate the animations). As for effect sizes, there is no general definition of this notion. Frankly, I would not recommend using them for non-expert statisticians; they are technical, not natural, measures of "effect". Your hypothesis of interest should be formulated in terms understandable to a layperson.
5,027
Why is "statistically significant" not enough?
I'm far from an expert on statistics, but one thing that has been emphasised in the stats courses I have done to date is the issue of "practical significance". I believe this alludes to what Jeromy and gung are talking about when referring to "effect size". We had an example in class of a 12-week diet that had statistically significant weight-loss results, but the 95% confidence interval showed a mean weight loss of between 0.2 and 1.2 kg (OK, the data were probably made up, but it illustrates the point). While "statistically significantly" different from zero, is a 200-gram weight loss over 12 weeks a "practically significant" result to an overweight person trying to get healthy?
5,028
Why is "statistically significant" not enough?
This is impossible to answer accurately without knowing more details of your study and the person's criticism. But here's one possibility: if you've run multiple tests, and you choose to focus on the one that came out at p<0.05 and ignore the others, then that "significance" has been diluted by your selective attention to it. As an intuition pump, remember that p=0.05 means "a result at least this extreme would happen by chance (only) 5% of the time if the null hypothesis were true". So the more tests you run, the more likely it is that at least one of them will be a "significant" result just by chance—even if there's no effect there. See http://en.wikipedia.org/wiki/Multiple_comparisons and http://en.wikipedia.org/wiki/Post-hoc_analysis
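A quick simulation sketch of the multiple-testing point (pure noise, invented for illustration): run 20 independent t tests on data where the null is true, and the chance that at least one comes out "significant" at the .05 level is about 1 - 0.95^20, roughly 64%.

    set.seed(9)
    p <- replicate(20, t.test(rnorm(30), rnorm(30))$p.value)   # 20 tests on pure noise
    any(p < 0.05)                        # quite often TRUE
    p.adjust(p, method = "bonferroni")   # one standard multiple-comparison correction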
5,029
Why is "statistically significant" not enough?
I suggest you read the following:

Anderson, D.R., Burnham, K.P., Thompson, W.L., 2000. Null hypothesis testing: Problems, prevalence, and an alternative. J. Wildl. Manage. 64, 912-923.

Gigerenzer, G., 2004. Mindless statistics. Journal of Socio-Economics 33, 587-606.

Johnson, D.H., 1999. The insignificance of statistical significance testing. The Journal of Wildlife Management 63, 763-772.

Null hypotheses are rarely interesting, in the sense that a point null is essentially never exactly true, so from any experiment or set of observations there are only two possible outcomes: correctly rejecting the null or making a Type II error. The effect size is what you are probably interested in determining and, once you have it, you should produce confidence intervals for that effect size.
5,030
Statistical tests when sample size is 1
Unfortunately, your student has a problem. The idea of any (inferential) statistical analysis is to understand whether a pattern of observations can be simply due to natural variation or chance, or whether there is something systematic there. If the natural variation is large, then the observed difference may be simply due to chance. If the natural variation is small, then it may be indicative of a true underlying effect. With only a single pair of observations, we have no idea of the natural variation in the data we observe. So we are missing half of the information we need. You note that your student has three pairs of observations. Unfortunately, they were collected under different conditions. So the variability we observe between these three pairs may simply be due to the varying conditions, and won't help us for the underlying question about a possible effect of insulin. One straw to grasp at would be to get an idea of the natural variation through other channels. Maybe similar observations under similar conditions have been made before and reported in the literature. If so, we could compare our observations to these published data. (This would still be problematic, because the protocols will almost certainly have been slightly different, but it might be better than nothing.) EDIT: note that my explanation here applies to the case where the condition has a potential impact on the effect of insulin, an interaction. If we can disregard this possibility and expect only main effects (i.e., the condition will have an additive effect on glucose that is independent of the additional effect of insulin), then we can at least formally run an ANOVA as per BruceET's answer. This may be the best the student can do. (And they at least get to practice writing up the limitations of their study, which is also an important skill!) Failing that, I am afraid the only possibility would be to go back to the lab bench and collect more data. In any case, this is a (probably painful, but still) great learning opportunity! I am sure this student will in the future always think about the statistical analysis before planning their study, which is how it should be. Better to learn this in high school rather than only in college. Let me close with a relevant quote attributed to Ronald Fisher: To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.
5,031
Statistical tests when sample size is 1
Two-way ANOVA with One Observation per Cell

After you finish your important 'lecture' about consulting a statistician before starting to take data, you can tell your student that there is barely enough data here to support a legitimate experimental design. If the subjects were chosen at random from some relevant population, glucose determinations were made in the same way for each of the six subjects, and glucose levels are anything like normally distributed, then it seems possible to analyze the results according to a simple two-way ANOVA with one observation per cell. The data might be displayed in a table like this:

                 Insulin
    Method      Yes    No
    ---------------------
      1
      2
      3

The model is $Y_{ij} = \mu + \alpha_i + \beta_j + e_{ij},$ where $i = 1,2,3$ methods; $j = 1, 2$ conditions (Y or N); and $e_{ij} \stackrel{iid}{\sim} \mathsf{Norm}(0, \sigma).$ You can look at an intermediate-level statistics text or an introductory text on experimental design for details.

The two-way ANOVA design would allow a test of whether the two Conditions have different glucose levels (almost certainly so if insulin doses are meaningful) and whether the three Methods differ or are all the same. With only three levels of one factor, two levels of the other, and only one observation per cell, it would not be possible to take interaction between insulin dose and method into account. [There is no $(\alpha*\beta)_{ij}$ term in the model above; it would have the same subscripts as the error term $e_{ij}$.] Also, it probably wouldn't be worthwhile to do any kind of nonparametric test (with more than three Methods---perhaps a Friedman test). That is why I made prominent mention of normality above.

Example using fake data in R:

    gluc = c(110, 135, 123, 200, 210, 234)
    meth = as.factor(c( 2,   2,   3,   1,   2,   2))
    insl = as.factor(c( 1,   1,   1,   2,   2,   2))
    aov.out = aov(gluc ~ meth + insl)
    summary(aov.out)

                Df Sum Sq Mean Sq F value Pr(>F)
    meth         2   3119    1559   5.193  0.161
    insl         1   9900    9900  32.973  0.029 *
    Residuals    2    600     300
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Insulin effect significant at the 3% level. You could also use just the paired glucose measurements for Insulin (Y/N) in a paired t test to get a significant result. (In the ANOVA the Methods provide a bit of interaction, which can't be tested because there is only one observation per cell.)

    t.test(gluc ~ insl, pair=T)

        Paired t-test

    data:  gluc by insl
    t = -8.812, df = 2, p-value = 0.01263
    alternative hypothesis: true difference in means is not equal to 0
    95 percent confidence interval:
     -136.92101  -47.07899
    sample estimates:
    mean of the differences
                        -92

Note: See this demo for a $2 \times 3$ ANOVA with several replications per cell, analyzed in detail.
5,032
Statistical tests when sample size is 1
BruceET has described the proper analysis (two-way ANOVA without interaction), so I'll put a more positive spin on the experiment. I'm assuming that the design was three pairs, where there is variability between pairs. One of each pair was given insulin and the other was not, hopefully at random. Each sample (pair x treatment; I'll call the experimental unit a Petri dish) was then measured once.

1) This is not a bad design. It is probably one of the most commonly used experimental designs in science: a complete block design (also called a matched pairs design when the blocks have only two observations). This design is generally superior in power to the even more common completely randomized design (all six experimental units randomized into a set of three that got insulin and three that didn't). The paired design removes pair-to-pair variability from the error term. Seriously, this design is ubiquitous in agriculture, medicine, etc. The only objection I would have is that three pairs might give too little power. But it is certainly replicated (there are multiple pairs).

2) It appears that the suggestion was that the student should have sampled each dish multiple times to get replications. This would be a very bad recommendation. Sampling each experimental unit multiple times to get replication is an example of pseudo-replication. If the pseudo-replicates are averaged together to yield one measure per Petri dish, you might lower variability somewhat, but you won't gain any degrees of freedom in the analysis. The subsamples are not independent. So it is good that you didn't recommend that.

NOTE: Yes, with this design you can't get a culture:treatment interaction estimate. But that is also the case if this had been designed as a completely randomized design. The interaction ends up in the noise.

SUMMARY: The design is actually a classical experimental design, highly recommended for this kind of research. It is also easy to analyze. The only objection would be that three pairs might be underpowered.
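For concreteness, here is a minimal R sketch of the block analysis described above. The glucose numbers and the pairing are entirely made up; the point is only the model structure (block + treatment, no interaction term) and its equivalence to a paired t test.

# Hypothetical data: 3 pairs (blocks), one control and one insulin dish per pair
gluc <- c(118, 192, 131, 210, 125, 199)                   # invented glucose readings
pair <- factor(c(1, 1, 2, 2, 3, 3))                       # block = pair of dishes
insl <- factor(c("no", "yes", "no", "yes", "no", "yes"))  # treatment within each pair

# Randomized complete block analysis: additive model, interaction left in the noise
summary(aov(gluc ~ pair + insl))

# Equivalent paired analysis on the within-pair differences
t.test(gluc[insl == "yes"], gluc[insl == "no"], paired = TRUE)

With only two treatments per block, the F statistic for insl in the block ANOVA is the square of the paired t statistic, so the two analyses give the same p-value.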
5,033
Statistical tests when sample size is 1
Delightful question, and one with historical precedent. As much as we might fault our budding high-school scientist for the experimental design, it has a nearly perfect historical parallel. What some consider the first controlled scientific medical experiment did much the same thing. This high school student tested 3 situations with placebo or intervention. Physician James Lind, aboard the HMS Salisbury, did much the same in his famous discovery of the treatment of scurvy. He hypothesized that scurvy might be treated by acids. So he came up with six acids and gave one to each of 6 scurvy-afflicted sailors, while each had a matching single control among six more who did not receive the acid. This was basically six simultaneous controlled trials of an intervention on 1 person and no intervention on another. All told, 12 sailors, 6 treated, 6 not treated. Interventions were "cider, diluted sulfuric acid, vinegar, sea water, two oranges and a lemon, or a purgative mixture". How amazingly lucky we are that the one sailor who received the citrus fruits did not incidentally die of something else. The rest, as they say, is history. I've heard this discussed on a few podcasts, so I knew the story. Here's a citation I found with a quick internet search. It may not be the best source, but it'll get you started if you want to read more. James Lind and Scurvy -- JS
5,034
Statistical tests when sample size is 1
If the student were willing to make a rather deep dive, you might redirect their interest from sampling variation to measurement uncertainty, and from a hypothesis test to an expanded uncertainty interval. Sampling variation is only one component of uncertainty. While the student is not in a position to assess sampling variability, they might learn something from attempting to approximate the uncertainty associated with their measurements. I imagine your student is not up for the investment, but it's a suggestion.
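If the student did take that route, a GUM-style calculation might look like the following R sketch. Every number here is invented purely for illustration (the meter tolerance, the dilution error, and the coverage factor k = 2 are all assumptions).

# Hypothetical Type B uncertainty budget for a single glucose reading (mg/dL)
u_meter    <- 2.0 / sqrt(3)                   # meter spec +/- 2 mg/dL, treated as rectangular
u_dilution <- 1.0 / sqrt(3)                   # dilution/pipetting error, also rectangular
u_combined <- sqrt(u_meter^2 + u_dilution^2)  # combined standard uncertainty
U <- 2 * u_combined                           # expanded uncertainty with coverage factor k = 2
c(u_combined = u_combined, U = U)

An observed insulin effect that is large compared with the expanded uncertainty of a difference of two readings (roughly sqrt(2) * U here) at least cannot be explained by measurement error alone, though the sampling variation remains unassessed.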
5,035
Statistical tests when sample size is 1
A major problem is the small sample size, which reduces the degrees of freedom available for model selection, together with the model's reliance on (and sensitivity to) the assumption of normally distributed errors. Preserving degrees of freedom and being robust in methodology appears to be the best path. I would even advise generating random errors from plausible parent distributions and, with knowledge of the actual parameter values, noting the variation in the estimated parameter values and any changes in test results.

As such, a simple parsimonious approach would be first to place the data in a regression format in accord with the following reduced model in the variable Methods:

$$ Y_{i,j} - \tilde{Y} = \beta \, \text{InsulinDummy}_i + \gamma \, \text{MethodDummy}_j + \varepsilon_{i,j} $$

where the dependent variable is the observed glucose concentration centered around the median $\tilde{Y}$, and the Insulin dummy variable (also centered) is 1/2 if insulin is present in test sample i, else -1/2. The Method dummy variable is 2/3 for Method 1, else -1/3 for Methods 2 and 3 (repeat the analysis swapping out Method 1 for, say, Method 2, and repeat again swapping out Method 2 for Method 3). Note that the proposed interpretation of the regression coefficients is that they may aid in accurately determining which side of the median an observation will fall on. Given the small sample size, I suggest a probabilistic (even Bayesian) interpretation, whose accuracy can be assessed in simulated model testing.

Next comes a robust regression analysis, where Least Absolute Deviations (LAD) is an option. Mathematically, LAD is linked to a Laplace distribution of the error terms. One can compute coefficients employing iteratively weighted least squares or, especially in the current context with 6 data points, employing the property that the model parameters determine a straight line that passes through two of the observed points. This implies examining permutations and comparing total sums of absolute deviations. The selected points nearly always avoid outliers (unlike least squares, where ANOVA also rests on a squared-error criterion). To obtain confidence intervals on the parameters, bootstrap re-sampling of the error terms has been suggested (see this), which can also be assessed for accuracy in simulation runs.

[EDIT] I thought my model worthy of further exploration, so I built a worksheet-based simulation (convenient for the iterative LAD fitting, which involves watching which points' absolute errors converge to zero, indicating the point pairs that determine the LAD regression line). Here is a summary of a dozen simulation runs based on a uniform (-0.5 to +0.5) error added to the model proposed above.

Actual underlying simulated parameter values: 1.250 and 0.100

Simulation run values:
  Average observed values   1.225    0.026
  Observed median           1.224    0.045
  Run  1                    1.001    0.324
  Run  2                    1.546    0.297
  Run  3                    1.350   -0.038
  Run  4                    1.283   -0.115
  Run  5                    1.593   -0.113
  Run  6                    1.498   -0.089
  Run  7                    0.863    0.151
  Run  8                    1.090    0.323
  Run  9                    1.102   -0.435
  Run 10                    1.166   -0.265
  Run 11                    1.451    0.128
  Run 12                    0.761    0.146

My take on the results is that the obtained summary statistics are remarkably good for my proposed parsimonious model: 6 points, a uniform error distribution, and 2 parameters estimated from a median-centered model fitted by robust regression. Individual runs display, as expected, quite a range in the parameter values, but they appear to point to an effect greater than 1 for the first parameter (only 2 out of 12 runs are below 1).
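A minimal R sketch of the LAD fit under the dummy coding above, reusing the fake glucose values from BruceET's answer (so the numbers are illustrative only), and centring on the sample median since the population median is unknown:

gluc <- c(110, 135, 123, 200, 210, 234)         # fake data from the earlier answer
insd <- c(-1/2, -1/2, -1/2,  1/2,  1/2,  1/2)   # centered insulin dummy
mthd <- c(-1/3, -1/3, -1/3,  2/3, -1/3, -1/3)   # centered method dummy (2/3 for Method 1)
y    <- gluc - median(gluc)                     # centre the response on the sample median

# Least absolute deviations: minimise the sum of absolute residuals
lad_loss <- function(par) sum(abs(y - par[1] * insd - par[2] * mthd))
fit <- optim(c(0, 0), lad_loss)                 # Nelder-Mead is adequate for 2 parameters
fit$par                                         # LAD estimates of (beta, gamma)

The same loss could be refitted on bootstrap resamples of the residuals to get rough interval estimates, as suggested above.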
5,036
Statistical tests when sample size is 1
While the student does not have Type A repeatability measurements, the student may (and should) be able to estimate the Type B error contribution caused by equipment supplied from elsewhere ("for an estimate xi of an input quantity Xi that has not been obtained from repeated observations"). This is detailed in the BIPM Guide to the Expression of Uncertainty in Measurement, the GUM (there is a NIST equivalent). This at least allows a route to making some judgement about the results. The alternative, if the student did have a time-series measurement (mentioned in one of the comments), is to estimate the smooth curve shape and hence the measurement error on top of that underlying smooth shape. And lastly, if all the control groups were actually the same (not clear from the comments), then they could form a single group for estimating the measurement noise. Finally, use this as a 'post-mortem' to identify the level of measurement accuracy that would have been required to test the hypothesis in question, and hence the likely number of repeat measurements needed to obtain that accuracy (error in the mean), given particular levels of basic accuracy (error on a single measurement). This at least rescues the student from feeling like it was a complete waste (i.e., something was learnt!).
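As a rough illustration of that post-mortem (every number below is invented), base R's power.t.test gives the number of paired replicates that would have been needed for an assumed per-measurement accuracy and a given smallest effect of interest:

sd_diff <- 15   # assumed SD of a within-pair (insulin minus control) difference, mg/dL
delta   <- 25   # smallest insulin effect considered worth detecting, mg/dL

# Paired design, 5% two-sided test, 80% power
power.t.test(delta = delta, sd = sd_diff, sig.level = 0.05,
             power = 0.80, type = "paired")$n

Rounding the result up gives the number of pairs the student would have needed under these assumptions.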
5,037
Statistical tests when sample size is 1
What a good example of the old question of bias and random error in observations. If the biased estimate of the standard deviation is, as you mention, $\hat{\sigma} = \sqrt{\frac{\sum{(x_i-\bar{x})^2}}{n}} = \sqrt{\frac{0}{1}} = 0$, the unbiased estimate is $\hat{\sigma} = \sqrt{\frac{\sum{(x_i-\bar{x})^2}}{n-1}} = \sqrt{\frac{0}{0}}$, which is undefined. So even if your student succeeds in drawing some statistical conclusions, these will carry an unknown bias. However, this did not prevent Student from devising the t-test, or Fisher from devising ANOVA, for such situations. What about starting by drawing the three pairs on a scatter plot, then fitting a linear regression and comparing the slope with its standard error? This is tantamount to BruceET's answer, perhaps a bit more geometric and intuitive.
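One way to realize that suggestion in R, swapping the scatter-plot of pairs for an additive regression on the fake data from BruceET's answer (so the numbers are illustrative only): the coefficient for insulin and its standard error carry the same information as the ANOVA F test for insulin.

gluc <- c(110, 135, 123, 200, 210, 234)
meth <- factor(c(2, 2, 3, 1, 2, 2))
insl <- factor(c(1, 1, 1, 2, 2, 2))

fit <- lm(gluc ~ meth + insl)   # same additive model as the two-way ANOVA
summary(fit)$coefficients       # insulin slope, its standard error, and the t test

The squared t statistic for the insulin coefficient equals the ANOVA F statistic for insulin, so the two presentations agree.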
5,038
Bayesian vs frequentist Interpretations of Probability
In the frequentist approach, it is asserted that the only sense in which probabilities have meaning is as the limiting value of the proportion of successes in a sequence of trials, i.e. as $$p = \lim_{n\to\infty} \frac{k}{n}$$ where $k$ is the number of successes and $n$ is the number of trials. In particular, it doesn't make any sense to associate a probability distribution with a parameter.

For example, consider samples $X_1, \dots, X_n$ from the Bernoulli distribution with parameter $p$ (i.e. they have value 1 with probability $p$ and 0 with probability $1-p$). We can define the sample success rate to be $$\hat{p} = \frac{X_1+\cdots +X_n}{n}$$ and talk about the distribution of $\hat{p}$ conditional on the value of $p$, but it doesn't make sense to invert the question and start talking about the probability distribution of $p$ conditional on the observed value of $\hat{p}$. In particular, this means that when we compute a confidence interval, we interpret the ends of the confidence interval as random variables, and we talk about "the probability that the interval includes the true parameter", rather than "the probability that the parameter is inside the confidence interval".

In the Bayesian approach, we interpret probability distributions as quantifying our uncertainty about the world. In particular, this means that we can now meaningfully talk about probability distributions of parameters, since even though the parameter is fixed, our knowledge of its true value may be limited. In the example above, we can invert the probability distribution $f(\hat{p}\mid p)$ using Bayes' law, to give $$\overbrace{f(p\mid \hat{p})}^\text{posterior} = \underbrace{\frac{f(\hat{p}\mid p)}{f(\hat{p})}}_\text{likelihood ratio} \overbrace{f(p)}^\text{prior}$$ The snag is that we have to introduce the prior distribution into our analysis - this reflects our belief about the value of $p$ before seeing the actual values of the $X_i$. The role of the prior is often criticised in the frequentist approach, as it is argued that it introduces subjectivity into the otherwise austere and objective world of probability.

In the Bayesian approach one no longer talks of confidence intervals, but instead of credible intervals, which have a more natural interpretation - given a 95% credible interval, we can assign a 95% probability that the parameter is inside the interval.
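A tiny R sketch of the contrast for the Bernoulli example. The data (7 successes in 20 trials) and the uniform Beta(1, 1) prior are arbitrary choices for illustration, and the Wald interval is used only for simplicity:

k <- 7; n <- 20
phat <- k / n

# Frequentist 95% confidence interval for p (Wald form)
phat + c(-1, 1) * qnorm(0.975) * sqrt(phat * (1 - phat) / n)

# Bayesian 95% credible interval: Beta(1, 1) prior + binomial likelihood -> Beta(1 + k, 1 + n - k) posterior
qbeta(c(0.025, 0.975), 1 + k, 1 + n - k)

The first interval is a statement about the procedure over repeated samples; the second is a probability statement about p itself, given the prior and the data.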
5,039
Bayesian vs frequentist Interpretations of Probability
You're right about your interpretation of Frequentist probability: randomness in this setup is merely due to incomplete sampling. From the Bayesian viewpoint probabilities are "subjective", in that they reflect an agent's uncertainty about the world. It's not quite right to say that the parameters of the distributions "change". Since we don't have complete information about the parameters, our uncertainty about them changes as we gather more information. Both interpretations are useful in applications, and which is more useful depends on the situation. You might check out Andrew Gelman's blog for ideas about Bayesian applications. In many situations what Bayesians call "priors" Frequentists call "regularization", and so (from my perspective) the excitement can leave the room rather quickly. In fact, according to the Bernstein-von Mises theorem, Bayesian and Frequentist inference are actually asymptotically equivalent under rather weak assumptions (though notably the theorem fails for infinite-dimensional distributions). You can find a slew of references about this here. Since you asked for interpretations: I think the Frequentist viewpoint makes great sense when modeling scientific experiments as it was designed to do. For some applications in machine learning or for modeling inductive reasoning (or learning), Bayesian probability makes more sense to me. There are many situations in which modeling an event with a fixed, "true" probability seems implausible. For a toy example going back to Laplace, consider the probability that the sun rises tomorrow. From the Frequentist perspective, we have to posit something like infinitely-many universes to define the probability. As Bayesians, there is only one universe (or at least, there needn't be many). Our uncertainty about the sun rising is squelched by our very, very strong prior belief that it will rise again tomorrow.
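To make the priors-as-regularization remark concrete, here is a toy R sketch (a normal mean with known unit variance; the data and the prior variance tau2 are made up). The posterior mean under a N(0, tau2) prior coincides with a ridge-style penalised estimate of the mean:

set.seed(1)
x    <- rnorm(10, mean = 2, sd = 1)   # hypothetical data, sd assumed known and equal to 1
tau2 <- 0.5                           # prior variance: mu ~ N(0, tau2)
n    <- length(x)

# Conjugate posterior mean for mu
post_mean <- sum(x) / (n + 1 / tau2)

# Ridge estimate: argmin over mu of sum((x - mu)^2) + (1 / tau2) * mu^2
ridge_mean <- optimize(function(mu) sum((x - mu)^2) + (1 / tau2) * mu^2,
                       interval = c(-10, 10))$minimum

c(post_mean = post_mean, ridge_mean = ridge_mean)   # agree up to optimizer tolerance

As n grows, the penalty term 1/tau2 is swamped by the data, so the prior's influence washes out, in the spirit of the Bernstein-von Mises result mentioned above.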
5,040
Bayesian vs frequentist Interpretations of Probability
The Bayesian interpretation of probability is a degree-of-belief interpretation. A Bayesian may say that the probability that there was life on Mars a billion years ago is $1/2$. A frequentist will refuse to assign a probability to that proposition. It is not something that could be said to be true in half of all cases, so one cannot assign probability $1/2$.
5,041
Bayesian vs frequentist Interpretations of Probability
Chris gives a nice, simple explanation that properly differentiates the two approaches to probability. But frequentist theory of probability is more than just looking at the long-run proportion of successes. We also consider data sampled at random from a distribution and estimate parameters of the distribution, such as the mean and variance, by taking certain types of averages of the data (e.g., for the mean it is the arithmetic average of the observations). Frequentist theory associates a probability distribution with the estimate, called the sampling distribution. In frequentist theory we can show, for parameters such as the mean that are obtained by averaging the sample, that the estimate converges to the true parameter. The sampling distribution is used to describe how close the estimate is to the parameter for any fixed sample size n, where "close" is defined by a measure of accuracy (e.g., mean squared error).

As Chris points out, for any parameter such as the mean the Bayesian attaches a prior probability distribution to it. Then, given the data, Bayes' rule is used to compute a posterior distribution for the parameter. For the Bayesian, all inference about the parameter is based on this posterior distribution.

Frequentists construct confidence intervals, which are intervals of plausible values for the parameter. Their construction is based on the frequentist probability that, if the process used to generate the interval were repeated many times on independent samples, the proportion of intervals that actually include the true value of the parameter would be at least some prespecified confidence level (e.g., 95%). Bayesians use the posterior distribution for the parameter to construct credible regions. These are simply regions in the parameter space over which the posterior distribution integrates to a prespecified probability (e.g., 0.95). Credible regions are interpreted by Bayesians as regions that have a high (e.g., the prespecified 0.95) probability of including the true value of the parameter.
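The confidence-interval statement above is easy to check by simulation. In this R sketch the true mean, the sample size, and the number of repetitions are all arbitrary choices:

set.seed(42)
mu <- 5; n <- 25; reps <- 10000

covered <- replicate(reps, {
  x  <- rnorm(n, mean = mu, sd = 2)
  ci <- mean(x) + c(-1, 1) * qt(0.975, n - 1) * sd(x) / sqrt(n)
  ci[1] <= mu && mu <= ci[2]
})
mean(covered)   # close to 0.95: the long-run proportion of intervals that cover mu

Any single interval either covers mu or it does not; the 95% describes the interval-building procedure, not a particular interval.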
5,042
Bayesian vs frequentist Interpretations of Probability
From a "real world" point of view, I find one major difference between a frequentist and a classical or Bayesian "solution" that applies to at least three major scenarios. The choice of methodology depends on whether you need a solution that is driven by the population probability or one that is driven by the individual probability. Examples below.

If there is a known 5% probability that males over 40 will die in a given year and require life insurance payments, an insurance company can use the 5% POPULATION percentage to estimate its costs. But to say that each individual male over 40 has only a 5% chance of dying is meaningless, because 5% of them have a 100% probability of dying - which is a frequentist approach. At the individual level the event either occurs (100% probability) or it does not (0% probability). However, based on this limited information, it is not possible to predict which individuals have a 100% probability of dying, and the 5% "averaged" population probability is useless at the individual level.

The above argument applies equally well to fires in buildings, which is why sprinklers are required in all buildings in a population. Both of the above arguments apply equally well to information-system breaches, damage, or "hacks": the population percentages are useless, so all systems must be safeguarded.
5,043
Bayesian vs frequentist Interpretations of Probability
The following is taken from my manuscript on p-value functions - Johnson, Geoffrey S. "Decision Making in Drug Development via Inference on Power" Researchgate.net (2021).

In any quantitative field it is not enough to simply apply a set of mathematical operations. One must also provide an interpretation. The field of statistics concerns itself with a special branch of mathematics regarding probability. When interpreting probability there are primarily two competing paradigms: Bayesian and frequentist. These paradigms differ on what it means for something to be considered random and what probability itself measures. Both frequentists and Bayesians would agree that once a test statistic is observed it is fixed; there is nothing random about it. Additionally, frequentists and most Bayesians would agree that the parameter under investigation, say $\theta$, is an unknown fixed quantity and it is simply treated as random in the Bayesian paradigm as a matter of practice. The question then becomes, "How do we interpret probability statements about a fixed quantity?" Without delving into the mathematical details of how a posterior or a p-value is calculated, I explore various interpretations below and what makes them untenable.

One interpretation of a Bayesian prior is that "random" is synonymous with "unknown" and probability measures the experimenter's belief, so that the posterior measures belief about the unknown fixed true $\theta$ given the observed data. This interpretation is untenable because belief is unfalsifiable $-$ it is not a verifiable statement about the actual parameter, the hypothesis, or the experiment.

Another interpretation is that "random" is short for "random sampling" and probability measures the emergent pattern of many samples, so that a Bayesian prior is merely a modeling assumption regarding $\theta$: the unknown fixed true $\theta$ was randomly selected from a known collection or prevalence of $\theta$'s (the prior distribution) and the observed data are used to subset this collection, forming the posterior distribution. The unknown fixed true $\theta$ is now imagined to have instead been randomly selected from the posterior. This interpretation is untenable because of the contradiction caused by claiming two sampling frames. The second sampling frame is correct only if the first sampling frame is correct, yet there can only be a single sampling frame from which we obtained the unknown fixed true $\theta$ under investigation.

A third interpretation of a Bayesian prior is that "random" is synonymous with "unrealized" or "undetermined" and probability measures a simultaneity of existence, so that $\theta$ is not fixed and all values of $\theta$ are true simultaneously; the truth exists in a superposition depending on the data observed according to the posterior distribution (think Schrödinger's cat). This interpretation is untenable because it reverses cause and effect $-$ the population-level parameter depends on the data observed, but the observed data depended on the parameter.

Ascribing any of these interpretations to the posterior allows one to make philosophical probability statements about hypotheses given the data. While the p-value is typically not interpreted in the same manner, it does show us the plausibility of a hypothesis given the data $-$ the ex-post sampling probability of the observed result or something more extreme if the hypothesis for the unknown fixed $\theta$ is true.
One might notice the similarity between a p-value and a posterior probability (or a confidence interval and a credible interval) and wonder under what circumstances is each one preferable. At its essence this is a matter of scientific objectivity. To the Bayesian, probability is axiomatic and measures the experimenter. To the frequentist, probability measures the experiment and must be verifiable. The Bayesian interpretation of probability as a measure of belief is unfalsifiable. Only if there exists a real-life mechanism by which we can sample values of $\theta$ can a probability distribution for $\theta$ be verified. In such settings probability statements about $\theta$ would have a purely frequentist interpretation. This may be a reason why frequentist inference is ubiquitous in the scientific literature. If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood or is proportional to the likelihood, Bayesian belief is more objectively viewed as confidence based on frequency probability of the experiment. In short, for those who subscribe to the frequentist interpretation of probability the p-value function summarizes all the probability statements about the experiment one can make as a function of the hypothesis for $\theta$. It is a matter of correct interpretation given the definition of probability and what constitutes a random variable. The posterior remains an incredibly useful tool and can be interpreted as an approximate p-value function.
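As a small numerical illustration of that closing sentence, here is an R sketch for a normal mean with known standard error (the estimate, its standard error, and the grid of hypotheses are all invented). Under a flat prior the posterior tail areas reproduce the two-sided p-value function:

xbar <- 1.3; se <- 0.5            # hypothetical estimate and standard error
theta0 <- seq(-1, 3, by = 0.5)    # grid of hypothesized values for theta

# Two-sided p-value function (confidence curve) for H: theta = theta0
pval <- 2 * pnorm(-abs(xbar - theta0) / se)

# Flat-prior posterior for theta is N(xbar, se^2); take twice the smaller tail beyond theta0
post_tail <- 2 * pmin(pnorm(theta0, xbar, se), 1 - pnorm(theta0, xbar, se))

cbind(theta0, pval, post_tail)    # the last two columns agree in this conjugate, flat-prior case

Outside such clean conjugate settings the agreement is only approximate, which is the sense in which the posterior can be read as an approximate p-value function.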
5,044
Bayesian vs frequentist Interpretations of Probability
“I was just wondering whether anyone could give me a quick summary of their interpretation of Bayesian vs. Frequentist approach including Bayesian statistical equivalents of the Frequentist p-value and confidence interval. In addition, specific examples of where one method would be preferable to the other are appreciated.”

At a certain level, you basically have it. However, I spent some time thinking about your question and thought I would add an answer to it. The first thing I will do is change some language. When I speak about the Frequentist perspective, I will use the words data and parameters. When I speak about the Bayesian perspective, I will use the words observables and unobservables. Do notice that they are not the same. For example, if you are missing a data point, it is an unobservable but not a parameter in the Frequentist sense. Likewise, if it is known that the variance of a process is equal to some value $k$, then it is an observable. One of the difficulties of talking across paradigms is that people try to use the same words for concepts that are almost identical but not quite the same.

In the general sense, you are correct. In the Frequentist view of the world, there are fixed but unknown parameters. Those fixed parameters determine what data could be seen. The data that could be seen is the sample space. I would note that when I say “fixed,” I do not necessarily mean that $\sigma^2=k$; it could be that $\sigma^2=kt$ or any other function or possibly relation. I do not require homoskedasticity or that a distribution is stationary. I am saying that at any instantaneous moment, there is a parameter of fixed value. Those parameters are the determiners of that which is instantaneously possible. Note that the long-run frequencies do not follow from the observations or their nature but from the fixed nature of the natural system that the frequencies represent. An essential and exceedingly helpful side effect is that the sampling distributions of many estimators can be known. In the real world, we are often dealing with estimators rather than raw frequencies. Indeed, the raw frequencies are often of little use.

It is a bit more challenging to discuss the Bayesian view of the world because it has multiple axiomatizations. In contrast, there is only one axiomatization of the Frequentist side, Kolmogorov’s. Usually, the differences do not matter, especially in applied work. Nonetheless, they can matter in theoretical work. Savage’s and de Finetti’s solutions to probability differ in some theoretical constructions, as do others. That can result in differing applied Bayesian models, particularly in the social sciences. In addition to axiomatic differences, there can be differences in interpretation created by the silences in the math. Bayesian theory often does not require you to adopt a particular point of view, but we are humans, and we like to feel comfortable. I suspect people who work in quantum mechanics have the same difficulty.

As an example, there are two equally valid ways that you could view a prior distribution for an unobservable. The first would be that the unobserved quantity is fixed, but its location is unknown. The prior represents your uncertainty about its location. It is a representation of your beliefs about it. The second would be that Nature draws the actual value of the unobservable from a distribution called the prior when you do an experiment. Your prior represents your best estimate of what nature is doing. They are equivalent; to reject one is to deny the other. You can emotionally reject one, but not the math. You can assert one of the two, but that is a statement of emotional comfort, not math. The math doesn’t make nature do anything.

Randomness on the Bayesian side of the coin is uncertainty. There are unobservable things that you want to know more about, or to take actions about or because of, likely based on observable things. You are uncertain about the unobservables. You are certain of the observables. Please note that being certain, and treating the observables as fixed in the same sense as Frequentist parameters, does not imply that your observations are valid. If you observe a magician providing you with data, the observations are fixed. It does not mean that they are accurately informative of an underlying phenomenon.

Regardless, Bayesian probability measures and statistics are subjective. They depend entirely upon the prior knowledge of the system. A seasoned engineer with twenty years of experience and a graduate degree will have a different prior regarding soil samples for constructing a bridge than a fresh-out engineer who graduated engineering school last month. That difference in skill and knowledge can have very real-world consequences. A new engineer who happily accepts the results of a t-test may find that a grumpy senior engineer, by including his or her prior, requires more sampling and rejects the results.

Bayesian methods are about updating beliefs. The probability distributions are distributions of belief. That may seem to imply that they have no scientific use, but that is not true. If one were to adopt the prior of an ardent opponent and show that even a highly bigoted opponent should assume the opposite belief, then that is very convincing. A passionate proponent of using Ivermectin to treat COVID, as long as they do not have a degenerate prior conviction that there is only one answer, will give up on Ivermectin as data comes through. It may take much longer than for a person with no personal opinion one way or the other, but it will happen. To be honest, because doctors prescribed so much Ivermectin in the last few months to keep their patients from going to other doctors, there is now an extensive data set. We have data from controlled experiments and natural experiments. The upshot of this is that people should get vaccinated and seek other treatments such as the monoclonal or polyclonal antibody infusions early in the infection. As long as your prior beliefs are not degenerate, so that they can change upon seeing data, the data will drive you to reality. Subject to you not living in the Truman Show or gathering your entire worldview from magicians and con artists, the data wins eventually.

As to Bayesian equivalents to the p-value or the confidence interval, there are none. A p-value provides the probability of observing a result as extreme or more extreme if the null hypothesis is true. There can be no real Bayesian equivalent because there is no equivalent to the null hypothesis on the Bayesian side. No hypothesis is special, and there is no restriction to a null and an alternative. You can have any finite number of hypotheses that you find meaningful. The closest thing is the Bayesian posterior probability. It is a statement of how much weight you give to the truth of a hypothesis. It has nothing to do with chance. The hypothesis is not assumed to be true. The question the posterior resolves is what probability you give, or how much credence or credibility you grant, a hypothesis.

There is no Bayesian equivalent to a confidence interval either. A confidence interval is any function that guarantees that the interval will cover a parameter at least some specified percentage of the time. There is an infinite number of such functions; confidence intervals are not unique. Suppose you repeat an experiment an infinite number of times. In that case, the percentage of time that your interval will cover the parameter will never be less than your desired guaranteed percentage. Of course, since infinite repetition isn’t feasible, it is just a model, as are Bayesian models. Of great importance, if you perform an experiment and a parameter is estimated with 95% confidence to be in the interval $[a,b]$, that does not imply that there is a 95% chance that the parameter is in the interval. It is a statement of confidence in the interval-building process. You believe that at least 95% of your intervals will cover the parameter as the number of experiments goes to infinity.

The closest Bayesian equivalent is the credible set. It is not an interval, and it does not have to be a connected set. It can certainly be true that the Frequentist confidence interval is $[5,15]$ when the Bayesian set of equivalent probability is $[6,7]\cup[8,9.5]$. The set can be disjoint if some area is improbable. As there is an infinite number of ways to subset a probability distribution, there is an endless number of possible credible sets that all contain at least some chosen percentage of the posterior probability. A credible set is created by applying some rule to the posterior probability distribution. So a 95% credible set is a region where you would give at least a 95% probability of finding the parameter, given your observations and prior knowledge.

Which method you should use depends entirely upon what usage you are going to put the techniques to. Fisher’s method of maximum likelihood should be used whenever you want to acquire new knowledge. If you wonder if something is true, then you collect data and research it. Take that data, plug it into the method of maximum likelihood, and use that to generate a p-value or a likelihood ratio. If the p-value is small enough, then provisionally accept that your null hypothesis is false and do more research into the topic. If it is not small enough for you, then realize that you have wasted your time and go on to other, hopefully more fruitful, things.

Pearson and Neyman’s frequency method should be used when it matters whether you accept or reject something. It allows you to create an acceptance and rejection region and gives you a way to control statistical power. An excellent example of that would be quality control inspection. The method says that if you choose some value, $\alpha$, and stick to it, then you will be made a fool of no more often than $100\alpha\%$ of the time.

Laplace’s method of inverse probability, Bayesian analysis, should be used when you need to find something, gamble, take personal action, or update your beliefs. You should never, ever place money at risk other than with Bayesian methods unless you want someone to have the ability to force you to take sure losses. Risk-taking with money is my area, so it colors my perspective. Likewise, if you need to find a downed plane, use a Bayesian method. If you need to find an unobservable quantity with observable data, Bayes is your tool. If you need to be able to make factual statements about a parameter using conventions that we can all agree with, Frequentist or Likelihoodist methods are your toolkit.
That boils it down. The frequency side answers, “what are the minimal statements we can all agree with because we can have statistical confidence in the procedures that were used to create it?” The Bayesian side answers, “what should I believe, or how should I act, based only on what I saw and prior observations and the prior observations of other people that I have chosen to endorse?”
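To make the contrast concrete, here is a minimal sketch, not part of the original answer: all numbers and priors are invented for illustration, and the numpy/scipy calls are my own choices. It computes a frequentist 95% confidence interval and Bayesian 95% credible intervals for a normal mean with known variance under a conjugate normal prior, showing how two analysts with different priors, like the junior and senior engineers above, can reach different intervals from the same data.

    # Sketch only: conjugate normal model with known sigma, illustrative numbers.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sigma = 2.0                                   # known sampling standard deviation
    data = rng.normal(loc=10.0, scale=sigma, size=20)
    n, xbar = len(data), data.mean()

    # Frequentist: confidence in the interval-building procedure.
    se = sigma / np.sqrt(n)
    print("95% confidence interval:", (xbar - 1.96 * se, xbar + 1.96 * se))

    # Bayesian: posterior belief about the unobservable mean under two priors,
    # an informative "senior engineer" prior and a vague one.
    for label, m0, s0 in [("informative prior", 8.0, 0.5), ("vague prior", 0.0, 100.0)]:
        post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
        post_mean = post_var * (m0 / s0**2 + n * xbar / sigma**2)
        low, high = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
        print(f"95% credible interval, {label}: ({low:.2f}, {high:.2f})")

For this unimodal, conjugate posterior the credible set happens to be a single interval; with a multimodal posterior, a highest-density credible set can be disjoint, as in the $[6,7]\cup[8,9.5]$ example above.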
5,045
Bayesian vs frequentist Interpretations of Probability
The choice of interpretation depends on the question. If you wish to know the odds in a game of chance, the classical interpretation will solve your problem, but statistical data is useless since fair dice have no memory. If you wish to predict a future event based on past experience, the frequentist interpretation is correct and sufficient. If you don't know whether a past event occurred, and wish to assess the probability that it did, you must take your prior beliefs, i.e. what you already know about the chance of the event occurring, and update that belief when you acquire new data. Since the question is about a degree of belief, and each person may have a different idea about the priors, the interpretation is necessarily subjective, a.k.a. Bayesian.
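As a toy illustration of that last case, here is a minimal sketch (all numbers are invented): updating, via Bayes' rule, the probability that a past event occurred after seeing one new piece of evidence.

    # Sketch only: hypothetical prior and likelihoods for a past event.
    prior = 0.3                 # prior belief that the event occurred
    p_evidence_if_yes = 0.9     # probability of seeing the new evidence if it did
    p_evidence_if_no = 0.2      # probability of seeing the evidence if it did not

    posterior = (p_evidence_if_yes * prior) / (
        p_evidence_if_yes * prior + p_evidence_if_no * (1 - prior)
    )
    print(round(posterior, 3))  # 0.659: the updated degree of belief

Someone starting from a different prior would end up with a different posterior from the same evidence, which is exactly the subjectivity described above.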
5,046
Bayesian vs frequentist Interpretations of Probability
The other answers do a good job explaining this topic, but I think there's room for a more motivated explanation. What is the probability of rolling a 1 with a fair six-sided die? The traditional answer would be 1 in 6, because no one side is favored over another. This concept can be extended to the principle of indifference, which was stated by Keynes (1921):

"If there is no known reason for predicating of our subject one rather than another of several alternatives, then relatively to such knowledge the assertions of each of these alternatives have an equal probability. Thus equal probabilities must be assigned to each of several arguments, if there is an absence of positive ground for assigning unequal ones."

And so we find the probability of pulling any card from a deck is 1 in 52, the probability of flipping heads with a fair coin is 1 in 2, and the probability of pulling a 5 from a random number generator which gives single-digit integers from 0 to 9 is 1 in 10. We know that, with sufficient information about the system, the result could always be predicted exactly, and so there is no "chance" involved, only a lack of complete knowledge. But unless we have evidence that a particular outcome is more likely, we must consider all equally.

We can now distinguish two types of probability:

Subjective probability - the degree of belief of an actor based on a priori knowledge.
Objective probability - the theoretical chance or propensity of an event.

Presumably, our goal is to better define the concept of objective probability, or to limit ourselves to subjective probabilities through which we infer something like the objective probability. In both cases, we wish to develop a model for the events of interest. We will work towards understanding probability from the perspective of Bayesian probability (subjective probability based on observations) and frequentist probability (objective probability based on the idea that, in the limit, the relative frequencies obtained by random sampling give the probability of the respective events). But first, we must give evidence that the principle of indifference is flawed if misapplied.

The principle of indifference is subject to paradoxes, one of which is Bertrand's paradox (see e.g. here, slightly modified). It goes as follows: Consider an equilateral triangle inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle? A key idea here is that the domain of possibilities is infinite: there are an infinite number of possible chords we can choose, and many ways we can "randomly" select them. For example:

Random endpoints - Select two random points on the circumference of the circle and connect them. The probability that the chord is longer than the side of an equilateral triangle inscribed in the circle is 1 in 3.
Random radials - Select a radius line of the circle, choose a point on the radius, and construct the chord through this point and perpendicular to the radius. The probability of the chord being longer than a side is 1 in 2.
Random midpoints - Select a random point as the midpoint of a chord. The probability that the chord is longer is 1 in 4.

There are other considerations (e.g. the role of diameter lines, the "maximum ignorance" principle), but by the classical theory of probability, the problem as stated above has no unique solution.
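The three selection schemes are easy to check by simulation. The following is a minimal sketch, not from the original answer; the sampling recipes and the numpy usage are my own. It works on a unit circle, whose inscribed equilateral triangle has side $\sqrt{3}$.

    # Sketch only: Monte Carlo for the three chord-selection schemes in Bertrand's paradox.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    side = np.sqrt(3.0)  # side of the inscribed equilateral triangle, unit circle

    # Random endpoints: fix one endpoint, draw the other angle uniformly.
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    p_endpoints = np.mean(2.0 * np.sin(theta / 2.0) > side)

    # Random radials: the chord's distance from the centre is uniform on [0, 1].
    d = rng.uniform(0.0, 1.0, n)
    p_radials = np.mean(2.0 * np.sqrt(1.0 - d**2) > side)

    # Random midpoints: midpoint uniform in the disk (radius sqrt(U) gives that).
    r = np.sqrt(rng.uniform(0.0, 1.0, n))
    p_midpoints = np.mean(2.0 * np.sqrt(1.0 - r**2) > side)

    print(p_endpoints, p_radials, p_midpoints)  # roughly 1/3, 1/2, 1/4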
While Bertrand's paradox is still considered by some as unresolved, whether you believe that or not, the paradox still illustrates issues with naively applying the principle of indifference.

Another problem in probability which the traditional interpretation of probability fails to answer is this: what is the probability that the sun will rise tomorrow? There is no natural symmetry by which to reason, so the traditional approach is difficult to apply. The question can be interpreted in different ways, but we seek to answer what the probability should be. As an example, I [or humans in general] have experienced the sun rising every day since I have been alive, therefore my degree of belief that the sun will rise tomorrow is very high. Alternatively, the physical mechanism behind the sun "rising" is well known, and no observations have been made which would lead us to think that the Earth will stop rotating, so there is a very small probability that the sun will not rise, the only plausible reason for denying this being a lack of absolute information about all possible astronomical events in the next 24 hours.

Laplace first posed this problem and described the probability as follows. Assume the sun rises on $p\%$ of days, where $p$ can only be inferred from experience. We wish to calculate the probability that $p$ is in a given range, based on the number of days we have actually seen the sun rise. Initially (at the beginning of Earth time, say), we have no information about $p$, therefore we assume a uniform distribution from 0 to 100%. That is, the probability that the sun will rise can be anywhere in the range $[0,100\%]$ with equal likelihood. (Careful to note: that's the probability of a probability.) Now, every day on record where the sun has risen is evidence in favor of the statement "the sun will rise tomorrow." Therefore, the distribution of $p$ is its conditional distribution given that the sun has risen on the $k$ previous days on record. The initial distribution (the uniform distribution) of the probability $p$ is called the a priori distribution, or prior distribution. From Bayes' rule, this conditional distribution can be calculated from the prior distribution and the observations.

Taking this idea further, the Bayesian interpretation of probability states that any probability is a conditional probability given knowledge about the population. Bayesian probability frames problems in e.g. statistics in quite a different way, which the other answers discuss. The Bayesian system seems to be a direct application of the theory of probability, which seeks to avoid inferring anything which is not already known, and only to infer based on exactly what has been observed. Initially, it was only considered within the domain where this kind of reasoning was desirable (like the sunrise problem), while problems with random sampling and statistical inference were considered in their own domain. But, in the mid-20th century, the assumptions of the statistical approach of random sampling began to be understood. Most of the early 20th-century statistical techniques followed an approach to probability similar to the following: given an exactly defined random experiment, which can be repeated without subjectivity, one may estimate the probability of the event occurring in general, and as the number of observations increases, the objective probability of the event is approached. The fact that this was an alternative interpretation of the concept of probability was not immediately obvious, but it grew to be well accepted. Because of the role of relative frequencies of events, this was described as the frequentist interpretation of probability.

Returning to the sunrise problem: from the Bayesian perspective, we began with a prior distribution (uniform probability) for the probability $p$ that the sun will rise tomorrow, and we used the repeated experiences of the sun rising, combined with Bayes' rule, to obtain a conditional probability. The probability is only what we can obtain by starting with ignorance and working towards understanding. From the frequentist perspective, the experiment is not the well-defined random experiment necessary, so we cannot directly apply the concept. But if we want to force it, we can treat the past experiences as a sample from the sample space (all times the sun will or will not rise in the morning). With an imperfect understanding of the experiment and underlying mechanism, we would be forced to infer that, because the sun has always risen in the past, it will always continue to rise in the future, within some confidence interval. This problem clearly favors a Bayesian approach.

Compare this with any of the random experiments featured in statistical textbooks (random number generators, whether a die is loaded or fair, presence of genetic traits in plant or animal populations). Before the rise of modern computing, calculating the Bayesian probability of events was almost impossible. But in the last few decades, Bayesianism has seen a marked rise in popularity, especially in problems where it is most applicable (as to which problems those are, sometimes it's clear, but generally it's an open problem).
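For the sunrise calculation itself, here is a minimal sketch, not from the original answer; the choice of $k$ is arbitrary and $p$ is treated as a fraction rather than a percentage. With a uniform Beta(1, 1) prior on $p$, $k$ consecutive sunrises give a Beta(k+1, 1) posterior and the "rule of succession" predictive probability $(k+1)/(k+2)$.

    # Sketch only: Laplace's rule of succession with a uniform prior on p.
    from scipy import stats

    k = 1_000_000                          # days on record with a sunrise (illustrative)
    posterior = stats.beta(1 + k, 1)       # Beta(k + 1, 1) posterior for p

    p_tomorrow = (k + 1) / (k + 2)         # predictive probability of a sunrise tomorrow
    print(p_tomorrow)
    print(posterior.ppf([0.025, 0.975]))   # a 95% credible interval for p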
5,047
Approximate order statistics for normal random variables
The classic reference is Royston (1982)[1] which has algorithms going beyond explicit formulas. It also quotes a well-known formula by Blom (1958): $E(r:n) \approx \mu + \Phi^{-1}(\frac{r-\alpha}{n-2\alpha+1})\sigma$ with $\alpha=0.375$. This formula gives a multiplier of -2.73 for $n=200, r=1$. [1]: Algorithm AS 177: Expected Normal Order Statistics (Exact and Approximate) J. P. Royston. Journal of the Royal Statistical Society. Series C (Applied Statistics) Vol. 31, No. 2 (1982), pp. 161-165
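A minimal sketch of Blom's formula, reproducing the quoted multiplier for $n=200$, $r=1$; the function name and the scipy call are my own, not from the cited paper.

    # Sketch only: Blom's approximation to E(r:n) for a Normal(mu, sigma^2) sample.
    from scipy.stats import norm

    def blom(r, n, alpha=0.375, mu=0.0, sigma=1.0):
        return mu + sigma * norm.ppf((r - alpha) / (n - 2 * alpha + 1))

    print(blom(1, 200))  # about -2.73, the multiplier quoted above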
5,048
Approximate order statistics for normal random variables
$$\newcommand{\Pr}{\mathrm{Pr}}\newcommand{\Beta}{\mathrm{Beta}}\newcommand{\Var}{\mathrm{Var}}$$The distribution of the $i$th order statistic of any continuous random variable with a PDF is given by the "beta-F" compound distribution. The intuitive way to think about this distribution is to consider the $i$th order statistic in a sample of $N$. Now, in order for the value of the $i$th order statistic of a random variable $X$ to be equal to $x$ we need 3 conditions:

$i-1$ values below $x$; this has probability $F_{X}(x)$ for each observation, where $F_X(x)=\Pr(X<x)$ is the CDF of the random variable $X$.
$N-i$ values above $x$; this has probability $1-F_{X}(x)$ for each observation.
1 value inside an infinitesimal interval containing $x$; this has probability $f_{X}(x)dx$, where $f_{X}(x)dx=dF_{X}(x)=\Pr(x<X<x+dx)$ and $f_{X}(x)$ is the PDF of the random variable $X$.

There are ${N \choose 1}{N-1 \choose i-1}$ ways to make this choice, so we have: $$f_{i}(x_{i})=\frac{N!}{(i-1)!(N-i)!}f_{X}(x_{i})\left[1-F_{X}(x_{i})\right]^{N-i}\left[F_{X}(x_{i})\right]^{i-1}$$

EDIT: in my original post, I made a very poor attempt at going further from this point, and the comments below reflect this. I have sought to rectify this below.

If we take the mean value of this pdf we get: $$E(X_{i})=\int_{-\infty}^{\infty} x_{i}f_{i}(x_{i})dx_{i}$$ And in this integral, we make the following change of variable $p_{i}=F_{X}(x_{i})$ (taking @henry's hint), and the integral becomes: $$E(X_{i})=\int_{0}^{1} F_{X}^{-1}(p_{i})\Beta(p_{i}|i,N-i+1)dp_{i}=E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]$$ So this is the expected value of the inverse CDF, which can be well approximated using the delta method to give: $$E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]\approx F_{X}^{-1}\left(E_{\Beta(p_{i}|i,N-i+1)}\left[p_{i}\right]\right)=F_{X}^{-1}\left[\frac{i}{N+1}\right]$$ To make a better approximation, we can expand to second order (prime denoting differentiation), noting that the second derivative of an inverse is: $$\frac{\partial^{2}}{\partial a^{2}}F_{X}^{-1}(a)=-\frac{F_{X}^{''}(F_{X}^{-1}(a))}{\left[F_{X}^{'}(F_{X}^{-1}(a))\right]^{3}}=-\frac{f_{X}^{'}(F_{X}^{-1}(a))}{\left[f_{X}(F_{X}^{-1}(a))\right]^{3}}$$ Let $\nu_{i}=F_{X}^{-1}\left[\frac{i}{N+1}\right]$. Then we have: $$E_{\Beta(p_{i}|i,N-i+1)}\left[F_{X}^{-1}(p_{i})\right]\approx \nu_{i}-\frac{\Var_{\Beta(p_{i}|i,N-i+1)}\left[p_{i}\right]}{2}\frac{f_{X}^{'}(\nu_{i})}{\left[f_{X}(\nu_{i})\right]^{3}}$$ $$=\nu_{i}-\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)}\frac{f_{X}^{'}(\nu_{i})}{\left[f_{X}(\nu_{i})\right]^{3}}$$ Now, specialising to the normal case, we have $$f_{X}(x)=\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\rightarrow f_{X}^{'}(x)=-\frac{x-\mu}{\sigma^{3}}\phi\left(\frac{x-\mu}{\sigma}\right)=-\frac{x-\mu}{\sigma^{2}}f_{X}(x)$$ $$F_{X}(x)=\Phi\left(\frac{x-\mu}{\sigma}\right)\implies F_{X}^{-1}(x)=\mu+\sigma\Phi^{-1}(x)$$ Note that $f_{X}(\nu_{i})=\frac{1}{\sigma}\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]$. The expectation approximately becomes: $$E[x_{i}]\approx \mu+\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)+\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)}\frac{\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)}{\left[\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]\right]^{2}}$$ And finally: $$E[x_{i}]\approx \mu+\sigma\Phi^{-1}\left(\frac{i}{N+1}\right)\left[1+\frac{\left(\frac{i}{N+1}\right)\left(1-\frac{i}{N+1}\right)}{2(N+2)\left[\phi\left[\Phi^{-1}\left(\frac{i}{N+1}\right)\right]\right]^{2}}\right]$$ Although, as @whuber has noted, this will not be accurate in the tails. In fact, I think it may be worse, because of the skewness of a beta with different parameters.
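Here is a minimal sketch of the final second-order formula for the minimum of $N$ standard normals, with a crude Monte Carlo check; the code and the sample sizes are my own, not part of the derivation. Consistent with the caveat about the tails, the approximation lands near $-2.73$ while simulation gives roughly $-2.75$.

    # Sketch only: the second-order approximation above versus Monte Carlo.
    import numpy as np
    from scipy.stats import norm

    def order_stat_approx(i, N, mu=0.0, sigma=1.0):
        p = i / (N + 1)
        z = norm.ppf(p)
        correction = 1.0 + p * (1.0 - p) / (2.0 * (N + 2) * norm.pdf(z) ** 2)
        return mu + sigma * z * correction

    N = 200
    rng = np.random.default_rng(0)
    mc = rng.standard_normal((20_000, N)).min(axis=1).mean()
    print(order_stat_approx(1, N), mc)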
5,049
Approximate order statistics for normal random variables
Aniko's answer relies on Blom's well known formula that involves a choice of $\alpha = 3/8$. It turns out that this formula is itself a mere approximation of an exact answer due to G. Elfving (1947), The asymptotical distribution of range in samples from a normal population, Biometrika, Vol. 34, pp. 111-119. Elfving's formula is aimed at the minimum and maximum of the sample, for which the correct choice of alpha is $\pi/8$. Blom's formula results when we approximate $\pi$ by $3$. By using the Elfving formula rather than Blom's approximation, we get a multiplier of -2.744165. This number is closer to Erik P.'s exact answer (-2.746) and to the Monte Carlo approximation (-2.75) than is Blom's approximation (-2.73), while being easier to implement than the exact formula.
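A minimal sketch comparing the two choices of $\alpha$ in the same Blom-type formula for the minimum of $n = 200$; the code and the function name are my own.

    # Sketch only: Blom's alpha = 3/8 versus Elfving's alpha = pi/8 for the sample minimum.
    import numpy as np
    from scipy.stats import norm

    def expected_extreme(r, n, alpha):
        return norm.ppf((r - alpha) / (n - 2 * alpha + 1))

    print(expected_extreme(1, 200, alpha=3 / 8))      # about -2.73  (Blom)
    print(expected_extreme(1, 200, alpha=np.pi / 8))  # about -2.744 (Elfving)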
5,050
Approximate order statistics for normal random variables
Depending on what you want to do, this answer may or may not help - I got the following exact formula from Maple's Statistics package.

    with(Statistics):
    X := OrderStatistic(Normal(0, 1), 1, n):
    m := Mean(X):
    m;

$$\int _{-\infty }^{\infty }\!1/2\,{\frac {{\it \_t0}\,n!\,\sqrt {2}{ {\rm e}^{-1/2\,{{\it \_t0}}^{2}}} \left( 1/2-1/2\, {{\rm erf}\left(1/2\,{\it \_t0}\,\sqrt {2}\right)} \right) ^{-1+n}}{ \left( -1+n \right) !\,\sqrt {\pi }}}{d{\it \_t0}}$$

By itself this isn't very useful (and it could probably be derived fairly easily by hand, since it's the minimum of $n$ random variables), but it does allow for quick and very accurate approximation for given values of $n$ - much more accurate than Monte Carlo:

    evalf(eval(m, n = 200));
    evalf[25](eval(m, n = 200));

gives -2.746042447 and -2.746042447451154492412344, respectively. (Full disclosure - I maintain this package.)
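If Maple is not at hand, the same integral can be evaluated numerically elsewhere. Here is a minimal sketch using scipy (my own code, not part of the original answer), which reproduces the value above; the finite integration limits are an assumption justified by the integrand being negligible outside them.

    # Sketch only: E[X_(1)] = integral of x * n * phi(x) * (1 - Phi(x))^(n - 1) dx.
    from scipy import integrate
    from scipy.stats import norm

    def mean_minimum(n):
        integrand = lambda x: x * n * norm.pdf(x) * norm.sf(x) ** (n - 1)
        value, _ = integrate.quad(integrand, -10.0, 10.0)
        return value

    print(mean_minimum(200))  # about -2.746042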
5,051
What is a good resource on table design?
Ed Tufte has a few pages on this in his classic "The Visual Display of Quantitative Information". For a much more detailed treatment, there is Jane Miller's Chicago Guide to Writing about Numbers. I've never seen anything else like it. It has a whole chapter on "Creating Effective Tables".
5,052
What is a good resource on table design?
Stephen Few's book Show Me the Numbers: Designing Tables and Graphs to Enlighten has a couple of chapters devoted to tabular display of information. It's good and recommended, but it's not quite Grammar of Graphics if that's what you're after.

Update: This sounds interesting, but I haven't read it: Handbook of tabular presentation: How to design and edit statistical tables, a style manual and case book. (Curious to hear any comments from someone in the know.)
5,053
What is a good resource on table design?
If you are interested in table design, I would definitely recommend two papers on the subject by Andrew Gelman. A necessary preface to the paper on table design is Gelman et al., 2002, "Let's practice what we preach: Turning Tables Into Graphs", in which Gelman argues that graphs are better than tables. Then his satire piece, "Why Tables are Really Much Better than Graphs", provides a look at elements commonly found in tables that make them particularly difficult to interpret. It suggests the following (interpreted as satire, these are actually what not to do):

lots of numbers
don't obsess about clarity
exact numbers, with a minimum of four significant digits
the default table design provided by your favorite software

Both are great reads.

Gelman, Pasarica, and Dodhia (2002). The American Statistician, 56(2): 121-130.
Gelman (2011). Journal of Computational and Graphical Statistics, 20(1): 3-7.
What is a good resource on table design?
You might check out the documentation for the LaTeX package booktabs; it gives general guidance and implements its design suggestions in LaTeX tables.
What is a good resource on table design?
I hope this answer is not too off topic, but a couple of days ago I saw this link on visualizing tables at StackExchange: Visual Representation of Tabular Information – How to Fix the Uncommunicative Table
What is a good resource on table design?
I cover table design in the seminars I offer. My sources are primarily Chapter 8 of Few’s Show Me the Numbers and a paper by Martin Koschat: Koschat, Martin. 2005. “A Case for Simple Tables,” The American Statistician 59:1, 31-40. https://doi.org/10.1198/000313005X21429 Also, Howard Wainer discusses table design in Visual Revelations.
What is a good resource on table design?
This CV blog post by @AndyW is really excellent. It gathers a number of best practices, useful examples, and a helpful literature review with links to papers and other resources.
What is a good resource on table design?
The UN Document "Making Data Meaningful" provides a nice overview, with rules and examples, of table design in Section 3 (p 12-17). This is in part 2 of a set of guidelines on 'using text and visualizations to bring statistics to life' https://www.unece.org/stats/documents/writing/
Logistic regression vs. LDA as two-class classifiers
It sounds to me that you are correct. Logistic regression indeed does not assume any specific shapes of densities in the space of predictor variables, but LDA does. Here, briefly, are some differences between the two analyses: binary logistic regression (BLR) vs linear discriminant analysis with 2 groups (also known as Fisher's LDA); a small simulation sketch follows this list.
- BLR: Based on maximum likelihood estimation. LDA: Based on least squares estimation; equivalent to linear regression with a binary predictand (coefficients are proportional and R-square = 1 - Wilks' lambda).
- BLR: Estimates the probability of group membership immediately (the predictand is itself taken as a probability, the observed one) and conditionally. LDA: Estimates the probability mediately (the predictand is viewed as a binned continuous variable, the discriminant) via a classificatory device (such as naive Bayes) which uses both conditional and marginal information.
- BLR: Not so exigent about the level of the scale and the form of the distribution of the predictors. LDA: Predictors desirably interval level with a multivariate normal distribution.
- BLR: No requirements about the within-group covariance matrices of the predictors. LDA: The within-group covariance matrices should be identical in the population.
- BLR: The groups may have quite different $n$. LDA: The groups should have similar $n$.
- BLR: Not so sensitive to outliers. LDA: Quite sensitive to outliers.
- BLR: Younger method. LDA: Older method.
- BLR: Usually preferred, because less exigent / more robust. LDA: With all its requirements met, often classifies better than BLR (the asymptotic relative efficiency is then 3/2 times higher).
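The following is a minimal simulation sketch of this comparison (my own illustration, not part of the original answer), assuming scikit-learn and numpy are available. It fits both classifiers to two Gaussian classes with a shared covariance matrix, i.e. data that satisfies LDA's assumptions; under these conditions the two methods typically perform almost identically.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class = 500
cov = np.array([[1.0, 0.6], [0.6, 1.0]])            # shared within-group covariance
X0 = rng.multivariate_normal([0.0, 0.0], cov, n_per_class)
X1 = rng.multivariate_normal([1.0, 1.0], cov, n_per_class)
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

blr = LogisticRegression().fit(X_tr, y_tr)           # binary logistic regression
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)   # Fisher's LDA

# Both yield a linear decision boundary; the test accuracies should be very close.
print("BLR accuracy:", blr.score(X_te, y_te))
print("LDA accuracy:", lda.score(X_te, y_te))
```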
Logistic regression vs. LDA as two-class classifiers
Let me add some points to @ttnphns' nice list:
- The Bayes prediction of the LDA's posterior class membership probability follows a logistic curve as well. [Efron, B. The efficiency of logistic regression compared to normal discriminant analysis, J Am Stat Assoc, 70, 892-898 (1975).]
- While that paper shows that the relative efficiency of LDA is superior to LR if the LDA's assumptions are met (ref: the Efron paper above, and @ttnphns' last point), according to The Elements of Statistical Learning there is hardly any difference in practice. [Hastie, T., Tibshirani, R. and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer Verlag, New York, 2009]
- That hugely increased relative efficiency of LDA mostly happens in asymptotic cases where the absolute error is practically negligible anyway. [Harrell, F. E. & Lee, K. L. A comparison of the discrimination of discriminant analysis and logistic regression under multivariate normality, Biostatistics: Statistics in Biomedical, Public Health and Environmental Sciences, 333-343 (1985).]
- Though I have in practice encountered high-dimensional, small-sample-size situations where the LDA seems superior (despite both the multivariate normality and the equal covariance matrix assumptions being visibly not met). [Beleites, C.; Geiger, K.; Kirsch, M.; Sobottka, S. B.; Schackert, G. & Salzer, R. Raman spectroscopic grading of astrocytoma tissues: using soft reference information, Anal Bioanal Chem, 400, 2801-2816 (2011). DOI: 10.1007/s00216-011-4985-4] But note that in our paper the LR is possibly struggling with the problem that directions with (near) perfect separability can be found. The LDA, on the other hand, may be less severely overfitting. (A small sketch of this separability issue follows below.)
- The famous assumptions for LDA are only needed to prove optimality. If they are not met, the procedure can still be a good heuristic.
- A difference that is important for me in practice, because the classification problems I work on sometimes/frequently turn out actually not to be that clearly classification problems at all: LR can easily be done with data where the reference has intermediate levels of class membership. After all, it is a regression technique. [see paper linked above]
- You may say that LR concentrates more than LDA on examples near the class boundary and basically disregards cases at the "backside" of the distributions. This also explains why it is less sensitive to outliers (i.e. those at the back side) than LDA. (Support vector machines would be a classifier that takes this direction to the very end: there everything but the cases at the boundary is disregarded.)
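As a small illustration of the (near) perfect separability point above, here is a sketch of my own (not from the cited paper), assuming scikit-learn and numpy. On linearly separable data the unpenalized logistic-regression MLE does not exist, so an essentially unregularized fit keeps growing its coefficients as the convergence tolerance is tightened, while LDA stays at a fixed, finite solution. The exact numbers depend on the solver and tolerances, but the qualitative pattern should be visible.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50
# Two well-separated Gaussian clouds: the classes are linearly separable.
X0 = rng.normal(loc=-5.0, scale=1.0, size=(n, 2))
X1 = rng.normal(loc=+5.0, scale=1.0, size=(n, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

# C=1e10 makes the L2 penalty negligible, approximating plain maximum likelihood.
# Tightening the tolerance typically drives the coefficient norm further up.
for tol in (1e-2, 1e-4, 1e-6, 1e-8):
    blr = LogisticRegression(C=1e10, tol=tol, max_iter=100_000).fit(X, y)
    print(f"tol={tol:g}  ||LR coef|| = {np.linalg.norm(blr.coef_):.2f}")

# LDA has no such problem: its coefficients are a fixed function of the class
# means and the pooled covariance estimate.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("LDA coef:", lda.coef_)
```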
Logistic regression vs. LDA as two-class classifiers
I just wanted to add one more point. LDA works when all the independent/predictor variables are continuous (not categorical) and follow a Normal distribution. Whereas in Logistic Regression this is not the case and categorical variables can be used as independent variables while making predictions.
Can a deep neural network approximate multiplication function?
A NN with relu activation functions can approximate multiplication when the range of the inputs is limited. Recall that relu(x) = max(x, 0). It is enough if the NN approximates the square function g(z) = z^2, because x*y = ((x-y)^2 - x^2 - y^2)/(-2). The right-hand side has just linear combinations and squares. A NN can approximate z^2 with a piecewise linear function. For example, on the range [0, 2] the combination z + relu(2(z-1)) is not that bad. (A figure, omitted here, visualises this approximation against z^2.) No idea if this is useful beyond theory :-)
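As a quick numeric check of the identity and of the piecewise-linear approximation described above (my own addition, assuming only numpy):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# The algebraic identity behind the construction: x*y = ((x-y)^2 - x^2 - y^2) / (-2)
x, y = 1.3, 0.7
print(((x - y) ** 2 - x ** 2 - y ** 2) / (-2), "vs", x * y)

# A crude piecewise-linear approximation of z^2 on [0, 2] built from relu units;
# it matches z^2 exactly at z = 0, 1 and 2.
z = np.linspace(0.0, 2.0, 9)
approx = z + relu(2.0 * (z - 1.0))
print(np.max(np.abs(approx - z ** 2)))   # maximum error on this grid (about 0.25)
```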
Can a deep neural network approximate multiplication function?
A big multiplication-function gradient will probably force the net almost immediately into some horrifying state where all its hidden nodes have a zero gradient (because of neural network implementation details and limitations). We can use two approaches:
- Divide by a constant. We just divide everything before the learning and multiply afterwards.
- Use log-normalization (a small sketch follows below). It turns multiplication into addition: \begin{align} m &= x \cdot y\\ &\Rightarrow \\ \ln(m) &= \ln(x) + \ln(y) \end{align}
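Here is a minimal sketch of the log-normalization idea (my own illustration, assuming scikit-learn and strictly positive inputs). In log space the network only has to learn the sum ln(x) + ln(y), which even a small MLP handles easily:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=(20000, 2))   # positive inputs only (the log needs that)
target = x[:, 0] * x[:, 1]

# Train in log space: the network just has to learn an addition.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(np.log(x), np.log(target))

test = np.array([[2.0, 3.0], [7.5, 1.2]])
pred = np.exp(model.predict(np.log(test)))    # undo the log transform
print(pred, "vs", test[:, 0] * test[:, 1])    # should be reasonably close to 6 and 9
```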
Can a deep neural network approximate multiplication function?
I'm unable to comment due to being a newly active user on StackExchange. But I think this is an important question because it's so friggin simple to understand yet difficult to explain. With respect, I don't think the accepted answer is sufficient. If you think about the core operations of a standard feed-forward NN, with activations of the form s(W*x+b) for some nonlinear activation function s, it's actually not obvious how to "get" multiplication from this, even in a composed (multi-layered) network. Scaling (the first bullet in the accepted answer) does not seem to address the question at all ... scale by what? The inputs x and y are presumably different for every sample. And taking the log is fine as long as you know that's what you need to do, and take care of the sign issue in preprocessing (since obviously log isn't defined for negative inputs). But this fundamentally doesn't jibe with the notion that neural networks can just "learn" (it feels like a cheat, as the OP said). I don't think the question should be considered answered until it really is, by someone smarter than me!
Can a deep neural network approximate multiplication function?
A similar question struck me earlier today, and I was surprised I couldn't find a quick answer. My question was that, given NNs only have summation functions, how could they model multiplicative functions. This kind of answered it, though it was a lengthy explanation. My summary would be that NNs model the function surface rather than the function itself. Which is obvious, in retrospect...
Can a deep neural network approximate multiplication function?
A traditional neural network consists of linear maps and Lipschitz-continuous activation functions. As a composition of Lipschitz-continuous functions, the neural network is itself Lipschitz continuous, but multiplication is not Lipschitz continuous on an unbounded domain. This means that such a neural network cannot approximate multiplication when x or y is allowed to grow arbitrarily large.
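To spell out the step that multiplication is not Lipschitz (my addition): a function $f$ is Lipschitz continuous if there is a single constant $L$ with $|f(u)-f(v)| \le L\,\|u-v\|$ for all inputs. For $f(x,y) = xy$ and a fixed $y$, $$|f(x_1, y) - f(x_2, y)| = |y|\,|x_1 - x_2|,$$ so any candidate constant would have to satisfy $L \ge |y|$ for every $y$, which is impossible on an unbounded domain. On a bounded box $|x|, |y| \le B$ the constant $L = \sqrt{2}\,B$ works, which is why the approximation results only hold for a limited input range.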
Can a deep neural network approximate multiplication function?
"one hidden layer" does not limit the number of neurons and kinds of activate function used, it still has a large representation space. One simple method to validate the existence of this problem: Train this regress problem with a real neuron network, record each weights and bias, use these parameters plot the predict curve, contrast it with the target function curve. This essay may help.
Can a deep neural network approximate multiplication function?
A feed-forward ReLU network will never be able to learn multiplication exactly, or approximately with a bounded absolute error, if the inputs are unbounded.
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
Two reasons one may go with a Bayesian approach even if you're using highly non-informative priors:
- Convergence problems. There are some distributions (binomial, negative binomial and generalized gamma are the ones I'm most familiar with) that have convergence issues a non-trivial amount of the time. You can use a "Bayesian" framework - in particular Markov chain Monte Carlo (MCMC) methods - to essentially plow through these convergence issues with computational power and get decent estimates from them.
- Interpretation. A Bayesian estimate + 95% credible interval has a more intuitive interpretation than a frequentist estimate + 95% confidence interval, so some may prefer to simply report those.
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
Although the results are going to be very similar, their interpretations differ. Confidence intervals imply the notion of repeating an experiment many times and being able to capture the true parameter in 95% of those experiments. But you cannot say that any particular interval has a 95% chance of capturing it. Credible intervals (Bayesian), on the other hand, allow you to say that there is a 95% "chance" that the interval captures the true value. Update: A more Bayesian way of putting it would be that you could be 95% confident about your results. This is just because you went from $P(Data|Hypothesis)$ to $P(Hypothesis|Data)$ using Bayes' rule.
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
I believe one reason to do so is that a Bayesian analysis provides you with a full posterior distribution. This can result in more detailed intervals than the typical frequentist $\pm 2 \sigma$. An applicable quote, from Reis and Stedinger 2005, is: Providing a full posterior distribution of the parameters is an advantage of the Bayesian approach over classical methods, which usually provide only a point estimate of the parameters represented by the mode of the likelihood function, and make use of asymptotic normality assumptions and a quadratic approximation of the log-likelihood function to describe uncertainties. With the Bayesian framework, one does not have to use any approximation to evaluate the uncertainties because the full posterior distribution of the parameters is available. Moreover, a Bayesian analysis can provide credible intervals for parameters or any function of the parameters which are more easily interpreted than the concept of confidence interval in classical statistics (Congdon, 2001). So, for example, you can calculate credible intervals for the difference between two parameters.
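As a small illustration of that last point, here is a minimal sketch (my own, not from the quoted paper) that computes a credible interval for the difference between two proportions by sampling from conjugate Beta posteriors. It assumes numpy and flat Beta(1, 1) priors; the data values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed successes out of trials for two groups (illustrative numbers).
s1, n1 = 45, 100
s2, n2 = 30, 100

# With a Beta(1, 1) prior, the posterior for each proportion is Beta(s + 1, n - s + 1).
p1 = rng.beta(s1 + 1, n1 - s1 + 1, size=100_000)
p2 = rng.beta(s2 + 1, n2 - s2 + 1, size=100_000)

diff = p1 - p2                                   # full posterior of the difference
lo, hi = np.percentile(diff, [2.5, 97.5])        # 95% credible interval
print(f"posterior mean difference: {diff.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```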
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
Sir Harold Jeffreys was a strong proponent of the Bayesian approach. He showed that if you use diffuse improper priors the resulting Bayesian inference would be the same as the frequentist inferential approach (that is, Bayesian credible regions are the same as frequentist confidence intervals). Most Bayesians advocate proper informative priors. There are problems with improper priors and some can argue that no prior is truly non-informative. I think that the Bayesians that use these Jeffreys' prior do it as followers of Jeffreys. Dennis Lindley, one of the strongest advocates of the Bayesian approach, had a great deal of respect for Jeffreys but advocated informative priors.
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
We could argue forever about foundations of inference to defend both approaches, but let me propose something different. A $\textit{practical}$ reason to favor a Bayesian analysis over a classical one is shown clearly by how both approaches deal with prediction. Suppose that we have the usual conditionally i.i.d. case. Classically, a predictive density is defined by plugging the value $\hat{\theta} = \hat{\theta}(x_1,\dots,x_n)$ of an estimate of the parameter $\Theta$ into the conditional density $f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta)$. This classical predictive density $f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\hat{\theta})$ does not account for the uncertainty of the estimate $\hat{\theta}$: two equal point estimates with totally different confidence intervals give you the same predictive density. On the other hand, the Bayesian predictive density takes the uncertainty about the parameter, given the information in a sample of observations, into account automatically, since $$ f_{X_{n+1}\mid X_1,\dots,X_n}(x_{n+1}\mid x_1,\dots,x_n) = \int f_{X_{n+1}\mid\Theta}(x_{n+1}\mid\theta) \, \pi(\theta\mid x_1,\dots,x_n) \, d\theta \, . $$
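A small numeric sketch of this difference (my own, under deliberately simple assumptions: normally distributed data with known $\sigma$ and a flat prior on the mean). The plug-in predictive is $N(\bar{x}, \sigma^2)$ whereas the Bayesian posterior predictive is $N(\bar{x}, \sigma^2(1 + 1/n))$, so the Bayesian prediction interval is wider, noticeably so for small $n$. Assumes numpy and scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 1.0

for n in (5, 50, 5000):
    x = rng.normal(loc=2.0, scale=sigma, size=n)
    xbar = x.mean()

    plug_in = stats.norm(loc=xbar, scale=sigma)                          # ignores uncertainty in xbar
    bayes_pred = stats.norm(loc=xbar, scale=sigma * np.sqrt(1 + 1 / n))  # flat-prior posterior predictive

    # 95% prediction intervals: the Bayesian one is wider, especially for small n.
    print(n, plug_in.interval(0.95), bayes_pred.interval(0.95))
```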
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
The Bayesian approach has practical advantages. It helps with estimation, often being mandatory. And it enables novel model families, and helps in the construction of more complicated (hierarchical, multilevel) models. For example, with mixed models (including random effects with variance parameters) one gets better estimates if variance parameters are estimated by marginalizing over lower-level parameters (model coefficients; this is called REML). The Bayesian approach does this naturally. With these models, even with REML, maximum likelihood (ML) estimates of variance parameters are often zero, or downward biased. A proper prior for the variance parameters helps. Even if point estimation (MAP, maximum a posteriori) is used, priors change the model family. Linear regression with a large set of somewhat collinear variables is unstable. L2 regularization is used as a remedy, but it is interpretable as a Bayesian model with a Gaussian (non-informative) prior and MAP estimation (see the sketch below). (L1 regularization is a different prior and gives different results. Actually here the prior may be somewhat informative, but it is about the collective properties of the parameters, not about a single parameter.) So there are some common and relatively simple models where a Bayesian approach is needed just to get the thing done! Things are even more in favor of Bayes with more complicated models, such as the latent Dirichlet allocation (LDA) used in machine learning. And some models are inherently Bayesian, e.g., those based on Dirichlet processes.
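To make the L2-regularization-as-MAP point concrete, here is a minimal sketch of my own (assuming scikit-learn and a model without intercept): ridge regression with penalty $\alpha$ returns the same coefficients as the closed-form MAP estimate under independent $N(0, \sigma^2/\alpha)$ priors on the coefficients.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
X[:, 4] = X[:, 3] + 0.01 * rng.normal(size=n)         # two nearly collinear columns
w_true = np.array([1.0, -2.0, 0.5, 3.0, -3.0])
y = X @ w_true + rng.normal(scale=1.0, size=n)

alpha = 10.0                                          # ridge penalty = sigma^2 / tau^2

# Ridge solution from scikit-learn (no intercept, to match the formula exactly).
ridge = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)

# Closed-form MAP estimate under a Gaussian prior: (X'X + alpha*I)^{-1} X'y.
w_map = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

print(ridge.coef_)
print(w_map)          # the two vectors should match to numerical precision
```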
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
There are several reasons:
- In many situations constructing test statistics or confidence intervals is quite difficult, because normal approximations – even after using an appropriate link function – to work with $\pm \text{SE}$ often do not work well in sparse data situations. By using Bayesian inference with uninformative priors implemented via MCMC you get around this (for caveats see below). The large sample properties are usually completely identical to some corresponding frequentist approach.
- There is often considerable reluctance to agree on any priors, no matter how much we actually know, due to a fear of being accused of "not being objective". By using uninformative priors ("no priors") one can pretend that there is no such issue, which will avoid criticism from some reviewers.
Now as to the downsides of just using uninformative priors, starting with what I think is the most important and then heading for some of the also quite important technical aspects:
- The interpretation of what you get is, quite honestly, much the same as for frequentist inference. You cannot just re-label your frequentist maximum likelihood inference as Bayesian maximum a-posteriori inference and claim that this absolves you of any worries about multiple comparisons and multiple looks at the data, and lets you interpret all statements in terms of the probability that some hypothesis is true. Sure, type I errors and so on are frequentist concepts, but as scientists we should care about making false claims, and we know that doing the above causes problems. A lot of these issues go away (or at least are a lot less of a problem) if you embed things in a hierarchical model / do something empirical-Bayes, but that usually boils down to implicitly generating priors via the analysis procedure by including the basis for your prior in your model (and an alternative to that is to explicitly formulate priors). These considerations are frequently ignored, in my opinion mostly to conduct Bayesian p-hacking (i.e. introduce multiplicity, but ignore it) with the fig-leaf of an excuse that this is no problem when you use Bayesian methods (omitting all the conditions that would have to be fulfilled).
- On the more "technical" side, uninformative priors are problematic, because you are not guaranteed a proper posterior. Many people have fitted Bayesian models with uninformative priors and not realized that the posterior is not proper. As a result MCMC samples were generated that were essentially meaningless.
The last point is an argument for preferring rather vague (or slightly more weakly-informative) priors that ensure a proper posterior. Admittedly, it can sometimes be hard to sample from these, too, and it may be hard to notice that the whole posterior has not been explored. However, Bayesian methods with vague (but proper) priors have in many fields been shown to have really good small-sample properties from a frequentist perspective, and you could certainly see that as an argument for using them, while with somewhat more data there will be hardly any difference versus methods with uninformative priors.
Why does inversion of a covariance matrix yield partial correlations between random variables?
When a multivariate random variable $(X_1,X_2,\ldots,X_n)$ has a nondegenerate covariance matrix $\mathbb{C} = (\gamma_{ij}) = (\text{Cov}(X_i,X_j))$, the set of all real linear combinations of the $X_i$ forms an $n$-dimensional real vector space with basis $E=(X_1,X_2,\ldots, X_n)$ and a non-degenerate inner product given by $$\langle X_i,X_j \rangle = \gamma_{ij}\ .$$ Its dual basis with respect to this inner product, $E^{*} = (X_1^{*},X_2^{*}, \ldots, X_n^{*})$, is uniquely defined by the relationships $$\langle X_i^{*}, X_j \rangle = \delta_{ij}\ ,$$ the Kronecker delta (equal to $1$ when $i=j$ and $0$ otherwise). The dual basis is of interest here because the partial correlation of $X_i$ and $X_j$ is obtained as the correlation between the part of $X_i$ that is left after projecting it into the space spanned by all the other vectors (let's simply call it its "residual", $X_{i\circ}$) and the comparable part of $X_j$, its residual $X_{j\circ}$. Yet $X_i^{*}$ is a vector that is orthogonal to all vectors besides $X_i$ and has positive inner product with $X_i$ whence $X_{i\circ}$ must be some non-negative multiple of $X_i^{*}$, and likewise for $X_j$. Let us therefore write $$X_{i\circ} = \lambda_i X_i^{*},\ X_{j\circ} = \lambda_j X_j^{*}$$ for positive real numbers $\lambda_i$ and $\lambda_j$. The partial correlation is the normalized dot product of the residuals, which is unchanged by rescaling: $$\rho_{ij\circ} = \frac{\langle X_{i\circ}, X_{j\circ} \rangle}{\sqrt{\langle X_{i\circ}, X_{i\circ} \rangle\langle X_{j\circ}, X_{j\circ} \rangle}} = \frac{\lambda_i\lambda_j\langle X_{i}^{*}, X_{j}^{*} \rangle}{\sqrt{\lambda_i^2\langle X_{i}^{*}, X_{i}^{*} \rangle\lambda_j^2\langle X_{j}^{*}, X_{j}^{*} \rangle}} = \frac{\langle X_{i}^{*}, X_{j}^{*} \rangle}{\sqrt{\langle X_{i}^{*}, X_{i}^{*} \rangle\langle X_{j}^{*}, X_{j}^{*} \rangle}}\ .$$ (In either case the partial correlation will be zero whenever the residuals are orthogonal, whether or not they are nonzero.) We need to find the inner products of dual basis elements. To this end, expand the dual basis elements in terms of the original basis $E$: $$X_i^{*} = \sum_{j=1}^n \beta_{ij} X_j\ .$$ Then by definition $$\delta_{ik} = \langle X_i^{*}, X_k \rangle = \sum_{j=1}^n \beta_{ij}\langle X_j, X_k \rangle = \sum_{j=1}^n \beta_{ij}\gamma_{jk}\ .$$ In matrix notation with $\mathbb{I} = (\delta_{ij})$ the identity matrix and $\mathbb{B} = (\beta_{ij})$ the change-of-basis matrix, this states $$\mathbb{I} = \mathbb{BC}\ .$$ That is, $\mathbb{B} = \mathbb{C}^{-1}$, which is exactly what the Wikipedia article is asserting. The previous formula for the partial correlation gives $$\rho_{ij\cdot} = \frac{\beta_{ij}}{\sqrt{\beta_{ii} \beta_{jj}}} = \frac{\mathbb{C}^{-1}_{ij}}{\sqrt{\mathbb{C}^{-1}_{ii} \mathbb{C}^{-1}_{jj}}}\ .$$
Why does inversion of a covariance matrix yield partial correlations between random variables?
Here is a proof with just matrix calculations. I appreciate the answer by whuber; it is very insightful on the math behind the scenes. However, it is still not so trivial to use his answer to obtain the minus sign in the formula stated in the Wikipedia article (Partial_correlation#Using_matrix_inversion): $$ \rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = - \frac{p_{ij}}{\sqrt{p_{ii}p_{jj}}} $$

To get this minus sign, here is a different proof, found in "Graphical Models" (Lauritzen, 1995, page 130). It is done purely by matrix calculations. The key is the following matrix identity: $$ \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} E^{-1} & -E^{-1}G \\ -FE^{-1} & D^{-1}+FE^{-1}G \end{pmatrix} $$ where $E = A - BD^{-1}C$, $F = D^{-1}C$ and $G = BD^{-1}$.

Write down the covariance matrix as $$ \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22} \end{pmatrix} $$ where $\Omega_{11}$ is the covariance matrix of $(X_i, X_j)$ and $\Omega_{22}$ is the covariance matrix of $\mathbf{V} \setminus \{X_i, X_j \}$. Let $P = \Omega^{-1}$ and partition it similarly: $$ P = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix} $$ By the key matrix identity, $$ P_{11}^{-1} = \Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21} $$ We also know that $\Omega_{11} - \Omega_{12}\Omega_{22}^{-1}\Omega_{21}$ is the covariance matrix of $(X_i, X_j) \mid \mathbf{V} \setminus \{X_i, X_j\}$ (from Multivariate_normal_distribution#Conditional_distributions). The partial correlation is therefore $$ \rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = \frac{[P_{11}^{-1}]_{12}}{\sqrt{[P_{11}^{-1}]_{11}[P_{11}^{-1}]_{22}}}. $$ I use the notation that the $(k,l)$th entry of the matrix $M$ is denoted by $[M]_{kl}$.

By the simple inversion formula for a 2-by-2 matrix, $$ \begin{pmatrix} [P_{11}^{-1}]_{11} & [P_{11}^{-1}]_{12} \\ [P_{11}^{-1}]_{21} & [P_{11}^{-1}]_{22} \\ \end{pmatrix} = P_{11}^{-1} = \frac{1}{\text{det} P_{11}} \begin{pmatrix} [P_{11}]_{22} & -[P_{11}]_{12} \\ -[P_{11}]_{21} & [P_{11}]_{11} \\ \end{pmatrix} $$ Therefore, $$ \rho_{X_iX_j\cdot \mathbf{V} \setminus \{X_i,X_j\}} = \frac{[P_{11}^{-1}]_{12}}{\sqrt{[P_{11}^{-1}]_{11}[P_{11}^{-1}]_{22}}} = \frac{- \frac{1}{\text{det}P_{11}}[P_{11}]_{12}}{\sqrt{\frac{1}{\text{det}P_{11}}[P_{11}]_{22}\frac{1}{\text{det}P_{11}}[P_{11}]_{11}}} = \frac{-[P_{11}]_{12}}{\sqrt{[P_{11}]_{22}[P_{11}]_{11}}} $$ which is exactly what the Wikipedia article is asserting.

EDIT: This proof is only valid in the Gaussian case. A simpler and more general proof follows from the particular definition of partial correlation in terms of residuals of linear regression. Note this is not the same as conditional expectation; see the reference on Wikipedia: Baba, Kunihiro; Ritei Shibata; Masaaki Sibuya (2004). "Partial correlation and conditional correlation as measures of conditional independence". Australian and New Zealand Journal of Statistics. 46 (4): 657–664. doi:10.1111/j.1467-842X.2004.00360.x. S2CID 123130024.

I have added a proof (i.e. an answer to this question) to the partial correlation Wikipedia page now! (Don't have enough reputation to comment/post my own answer, so stuck with editing I'm afraid!)
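As a sanity check (again my own sketch with arbitrary random numbers, not part of the original answer), the key block-inverse identity can be verified numerically in R; the variable names Em, Fm, Gm stand for the E, F, G of the identity:

set.seed(1)
p <- 2; q <- 3
M <- crossprod(matrix(rnorm((p + q)^2), p + q))   # a random positive-definite matrix
A <- M[1:p, 1:p];              B <- M[1:p, (p + 1):(p + q)]
C <- M[(p + 1):(p + q), 1:p];  D <- M[(p + 1):(p + q), (p + 1):(p + q)]

Em <- A - B %*% solve(D) %*% C     # E = A - B D^{-1} C (the Schur complement)
Fm <- solve(D) %*% C               # F = D^{-1} C
Gm <- B %*% solve(D)               # G = B D^{-1}

lhs <- solve(M)
rhs <- rbind(cbind(solve(Em),          -solve(Em) %*% Gm),
             cbind(-Fm %*% solve(Em),   solve(D) + Fm %*% solve(Em) %*% Gm))
max(abs(lhs - rhs))                # ~ 0, up to floating-point error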
5,078
Why does inversion of a covariance matrix yield partial correlations between random variables?
Note that the sign of the answer actually depends on how you define partial correlation. There is a difference between regressing $X_i$ and $X_j$ on the other $n - 1$ variables separately vs. regressing $X_i$ and $X_j$ on the other $n - 2$ variables together. Under the second definition, let the correlation between residuals $\epsilon_i$ and $\epsilon_j$ be $\rho$. Then the partial correlation of the two (regressing $\epsilon_i$ on $\epsilon_j$ and vice versa) is $-\rho$. This explains the confusion in the comments above, as well as on Wikipedia. The second definition is used universally from what I can tell, so there should be a negative sign. I originally posted an edit to the other answer, but made a mistake - sorry about that!
5,079
Why does inversion of a covariance matrix yield partial correlations between random variables?
For another perspective, this will examine the left inverse of a finite data matrix $A$. We can consider the data to be a sample rather than a theoretical distribution. While any distribution -- even continuous -- will have a covariance matrix, you can't generally talk about a data matrix unless you get into infinite vectors and/or special inner products. So we have a finite sample in an n-by-m data matrix $A$. Let each column be one random variable; then it's $n$ samples and $m$ random variables. Let $A$'s columns (the random variables) be linearly independent (this is independence in the linear algebra sense, not as in independent random variables), and let $A$ be mean-centered already. Then
$$ C = \frac{1}{n}A^TA $$
is our covariance matrix. It's invertible since $A$'s columns are linearly independent, and we'll use later that $C^{-1} = n(A^TA)^{-1}$.

The left inverse of $A$ is $B = (A^TA)^{-1}A^T$, and we have $BA = I_{m \times m}$. What do we know about $B$?

1. It's m-by-n. There's a row of $B$ corresponding to each column of $A$.
2. Because $BA = I$, the inner product of the $i$th row of $B$ with the $i$th column of $A$ equals 1 (diagonal of $I$).
3. The inner product of the $i$th row of $B$ with a $j$th ($i \neq j$) column of $A$ is 0 (off-diagonal of $I$).
4. The right-most term in the expression for $B$ is $A^T$. Therefore $B$'s rows are in the row space of $A^T$, the column space of $A$.
5. By (4) and the fact that $A$'s columns are mean-centered, $B$'s rows must also be mean-centered.
6. Let $x_i$ be the $i$th column of $A$. The only vectors that have a non-zero inner product with $x_i$, zero inner product with all other $x_j$, and are linear combinations of the columns of $A$, are vectors parallel to the residual of $x_i$ after projecting it onto the space spanned by all the other $x_j$. Call these residuals $r_{i}$, and call the projection (the linear regression result) $p_i$.

So the $i$th row of $B$ must be parallel to $r_i$ by (6). Now we know its direction, but what about magnitude? Let $b_i$ be the $i$th row of $B$. $$ \begin{align} 1 & = b_i \cdot x_i &&\text{by (2)} \\ & = b_i \cdot (p_i + r_i) &&\text{$x_i$ is the sum of its projection and residual}\\ & = (b_i \cdot p_i) + (b_i \cdot r_i) &&\text{linearity of dot product} \\ & = 0 + (b_i \cdot r_i) &&\text{by (3), and that $p_i$ is a linear combination of the $x_j$s ($j \neq i$)} \\ & = (c_i r_i) \cdot r_i &&\text{for some constant $c_i$, by (6)} \\ \end{align} $$ Therefore, $c_i = \dfrac{1}{r_i \cdot r_i} = \dfrac{1}{\|r_i\|^2}$, so $b_i = \dfrac{r_i}{\|r_i\|^2}$. We now know what each row of $B$ looks like.

Notice $BB^T = ((A^TA)^{-1}A^T)(A((A^TA)^{-1})^T) = (A^TA)^{-1} = \frac{1}{n}C^{-1}$. We can look at any $(i,j)$th element: $C^{-1}_{ij} = n(BB^T)_{ij} = n (b_i \cdot b_j) = n\dfrac{r_i \cdot r_j}{\|r_i\|^2\|r_j\|^2}$. The $(r_i \cdot r_j)$ part of that should tell you we're getting close to covariances and correlations of these residuals. Conveniently, the diagonal elements look like $C^{-1}_{ii} = n\dfrac{r_i \cdot r_i}{\|r_i\|^2\|r_i\|^2} = n\dfrac{1}{\|r_i\|^2}$. This quantity is exactly 1 over the variance of the residual $r_i$, $\dfrac{\|r_i\|^2}{n}$ (the $n$ makes it a variance instead of a squared vector magnitude). Then to get partial correlations you just need to combine the elements of $C^{-1}$ in the way others have shown.

Gilbert Strang lecture on left inverses
Gilbert Strang lecture on projection, residuals
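Here is a short R sketch (mine, with made-up data, not part of the original answer) that checks these claims about the left inverse numerically:

set.seed(7)
n <- 100; m <- 3
A <- matrix(rnorm(n * m), n, m)
A <- scale(A, center = TRUE, scale = FALSE)   # mean-center the columns

B <- solve(t(A) %*% A) %*% t(A)               # left inverse: B %*% A = I

# Residual of column 1 after projecting onto the other columns (no intercept needed,
# because every column is already centered):
r1 <- resid(lm(A[, 1] ~ A[, -1] - 1))

max(abs(B[1, ] - r1 / sum(r1^2)))             # ~ 0: row 1 of B is r1 / ||r1||^2

Cn <- t(A) %*% A / n                          # the covariance matrix C
max(abs(solve(Cn) - n * B %*% t(B)))          # ~ 0: C^{-1} = n B B^T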
5,080
Maximum Likelihood Estimators - Multivariate Gaussian
Deriving the Maximum Likelihood Estimators

Assume that we have $m$ random vectors, each of size $p$: $\mathbf{X^{(1)}, X^{(2)}, \dotsc, X^{(m)}}$, where each random vector can be interpreted as an observation (data point) across $p$ variables. If the $\mathbf{X}^{(i)}$ are i.i.d. multivariate Gaussian vectors, $$ \mathbf{X^{(i)}} \sim \mathcal{N}_p(\mu, \Sigma) $$ where the parameters $\mu, \Sigma$ are unknown, we can obtain their estimates by the method of maximum likelihood, maximizing the log-likelihood function. Note that by the independence of the random vectors, the joint density of the data $\mathbf{ \{X^{(i)}}, i = 1,2, \dotsc ,m\}$ is the product of the individual densities, that is $\prod_{i=1}^m f_{\mathbf{X^{(i)}}}(\mathbf{x^{(i)} ; \mu , \Sigma })$. Taking the logarithm gives the log-likelihood function \begin{aligned} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \log \prod_{i=1}^m f_{\mathbf{X^{(i)}}}(\mathbf{x^{(i)} | \mu , \Sigma }) \\ & = \log \ \prod_{i=1}^m \frac{1}{(2 \pi)^{p/2} |\Sigma|^{1/2}} \exp \left( - \frac{1}{2} \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \right) \\ & = \sum_{i=1}^m \left( - \frac{p}{2} \log (2 \pi) - \frac{1}{2} \log |\Sigma| - \frac{1}{2} \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \right) \\ & = - \frac{mp}{2} \log (2 \pi) - \frac{m}{2} \log |\Sigma| - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \end{aligned}

Deriving $\hat \mu$

To take the derivative with respect to $\mu$ and equate it to zero we will make use of the following matrix calculus identity: $\mathbf{ \frac{\partial w^T A w}{\partial w} = 2Aw}$ if $\mathbf{w}$ does not depend on $\mathbf{A}$ and $\mathbf{A}$ is symmetric. Setting the derivative to zero and multiplying through by $\Sigma$ (which is invertible since it is positive definite) gives \begin{aligned} \frac{\partial }{\partial \mu} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \sum_{i=1}^m \mathbf{ \Sigma^{-1} ( x^{(i)} - \mu ) } = 0 \\ 0 & = m \mu - \sum_{i=1}^m \mathbf{ x^{(i)} } \\ \hat \mu &= \frac{1}{m} \sum_{i=1}^m \mathbf{ x^{(i)} } = \mathbf{\bar{x}} \end{aligned} which is often called the sample mean vector.

Deriving $\hat \Sigma$

Deriving the MLE for the covariance matrix requires more work and the use of the following linear algebra and calculus properties:

1. The trace is invariant under cyclic permutations of matrix products: $\mathrm{tr}\left[ABC\right] = \mathrm{tr}\left[CAB\right] = \mathrm{tr}\left[BCA\right]$
2. Since $x^TAx$ is scalar, we can take its trace and obtain the same value: $x^TAx = \mathrm{tr}\left[x^TAx\right] = \mathrm{tr}\left[xx^TA\right]$
3. $\frac{\partial}{\partial A} \mathrm{tr}\left[AB\right] = B^T$
4. $\frac{\partial}{\partial A} \log |A| = (A^{-1})^T = (A^T)^{-1}$
5. The determinant of the inverse of an invertible matrix is the inverse of the determinant: $|A| = \frac{1}{|A^{-1}|}$

Combining these properties allows us to calculate $$ \frac{\partial}{\partial A} x^TAx =\frac{\partial}{\partial A} \mathrm{tr}\left[xx^TA\right] = [xx^T]^T = \left(x^{T}\right)^Tx^T = xx^T $$ which is the outer product of the vector $x$ with itself. We can now re-write the log-likelihood function and compute the derivative w.r.t. 
$\Sigma^{-1}$ (note $C$ is constant) \begin{aligned} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \text{C} - \frac{m}{2} \log |\Sigma| - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) } \\ & = \text{C} + \frac{m}{2} \log |\Sigma^{-1}| - \frac{1}{2} \sum_{i=1}^m \mathrm{tr}\left[ \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)^T \Sigma^{-1} } \right] \\ \frac{\partial }{\partial \Sigma^{-1}} l(\mathbf{ \mu, \Sigma | x^{(i)} }) & = \frac{m}{2} \Sigma - \frac{1}{2} \sum_{i=1}^m \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)}^T \ \ \text{Since $\Sigma^T = \Sigma$} \end{aligned} Equating to zero and solving for $\Sigma$ \begin{aligned} 0 &= m \Sigma - \sum_{i=1}^m \mathbf{(x^{(i)} - \mu) (x^{(i)} - \mu)}^T \\ \hat \Sigma & = \frac{1}{m} \sum_{i=1}^m \mathbf{(x^{(i)} - \hat \mu) (x^{(i)} -\hat \mu)}^T \end{aligned} Sources https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/other-readings/chapter13.pdf http://ttic.uchicago.edu/~shubhendu/Slides/Estimation.pdf
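In R, a minimal sketch (my own, with arbitrary data; not part of the original derivation) shows that the two MLEs are just the sample mean vector and the 1/m-scaled scatter matrix, i.e. the usual sample covariance rescaled by (m-1)/m:

set.seed(123)
m <- 200; p <- 3
X <- matrix(rnorm(m * p), m, p)      # any m x p data matrix will do for this check

mu.hat    <- colMeans(X)             # MLE of the mean: the sample mean vector
Xc        <- sweep(X, 2, mu.hat)     # center each column
Sigma.hat <- t(Xc) %*% Xc / m        # MLE of the covariance: note the 1/m

# R's cov() divides by (m - 1), so the MLE is the rescaled sample covariance:
max(abs(Sigma.hat - cov(X) * (m - 1) / m))   # ~ 0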
5,081
Maximum Likelihood Estimators - Multivariate Gaussian
An alternate proof for $\widehat{\Sigma}$ that takes the derivative with respect to $\Sigma$ directly: Picking up with the log-likelihood as above: \begin{eqnarray} \ell(\mu, \Sigma) &=& C - \frac{m}{2}\log|\Sigma|-\frac{1}{2} \sum_{i=1}^m \text{tr}\left[(\mathbf{x}^{(i)}-\mu)^T \Sigma^{-1} (\mathbf{x}^{(i)}-\mu)\right]\\ &=&C - \frac{1}{2}\left(m\log|\Sigma| + \sum_{i=1}^m\text{tr} \left[(\mathbf{x}^{(i)}-\mu)(\mathbf{x}^{(i)}-\mu)^T\Sigma^{-1} \right]\right)\\ &=&C - \frac{1}{2}\left(m\log|\Sigma| +\text{tr}\left[ S_\mu \Sigma^{-1} \right] \right) \end{eqnarray} where $S_\mu = \sum_{i=1}^m (\mathbf{x}^{(i)}-\mu)(\mathbf{x}^{(i)}-\mu)^T$ and we have used the cyclic and linear properties of $\text{tr}$. To compute $\partial \ell /\partial \Sigma$ we first observe that $$ \frac{\partial}{\partial \Sigma} \log |\Sigma| = \Sigma^{-T}=\Sigma^{-1} $$ by the fourth property above. To take the derivative of the second term we will need the property that $$ \frac{\partial}{\partial X}\text{tr}\left( A X^{-1} B\right) = -(X^{-1}BAX^{-1})^T. $$ (from The Matrix Cookbook, equation 63). Applying this with $B=I$ we obtain that $$ \frac{\partial}{\partial \Sigma}\text{tr}\left[S_\mu \Sigma^{-1}\right] = -\left( \Sigma^{-1} S_\mu \Sigma^{-1}\right)^T = -\Sigma^{-1} S_\mu \Sigma^{-1} $$ because both $\Sigma$ and $S_\mu$ are symmetric. Then $$ \frac{\partial}{\partial \Sigma}\ell(\mu, \Sigma) \propto m \Sigma^{-1} - \Sigma^{-1} S_\mu \Sigma^{-1}. $$ Setting this to 0 and rearranging gives $$ \widehat{\Sigma} = \frac{1}{m}S_\mu. $$ This approach is more work than the standard one using derivatives with respect to $\Lambda = \Sigma^{-1}$, and requires a more complicated trace identity. I only found it useful because I currently need to take derivatives of a modified likelihood function for which it seems much harder to use $\partial/{\partial \Sigma^{-1}}$ than $\partial/\partial \Sigma$.
5,082
Maximum Likelihood Estimators - Multivariate Gaussian
While previous answers are correct, mentioning the trace is unnecessary (from a personal point of view). The following derivation might be more succinct:
5,083
Regression: Transforming Variables
One transforms the dependent variable to achieve approximate symmetry and homoscedasticity of the residuals. Transformations of the independent variables have a different purpose: after all, in this regression all the independent values are taken as fixed, not random, so "normality" is inapplicable. The main objective in these transformations is to achieve linear relationships with the dependent variable (or, really, with its logit). (This objective over-rides auxiliary ones such as reducing excess leverage or achieving a simple interpretation of the coefficients.) These relationships are a property of the data and the phenomena that produced them, so you need the flexibility to choose appropriate re-expressions of each of the variables separately from the others. Specifically, not only is it not a problem to use a log, a root, and a reciprocal, it's rather common. The principle is that there is (usually) nothing special about how the data are originally expressed, so you should let the data suggest re-expressions that lead to effective, accurate, useful, and (if possible) theoretically justified models. The histograms--which reflect the univariate distributions--often hint at an initial transformation, but are not dispositive. Accompany them with scatterplot matrices so you can examine the relationships among all the variables. Transformations like $\log(x + c)$ where $c$ is a positive constant "start value" can work--and can be indicated even when no value of $x$ is zero--but sometimes they destroy linear relationships. When this occurs, a good solution is to create two variables. One of them equals $\log(x)$ when $x$ is nonzero and otherwise is anything; it's convenient to let it default to zero. The other, let's call it $z_x$, is an indicator of whether $x$ is zero: it equals 1 when $x = 0$ and is 0 otherwise. These terms contribute a sum $$\beta \log(x) + \beta_0 z_x$$ to the estimate. When $x \gt 0$, $z_x = 0$ so the second term drops out leaving just $\beta \log(x)$. When $x = 0$, "$\log(x)$" has been set to zero while $z_x = 1$, leaving just the value $\beta_0$. Thus, $\beta_0$ estimates the effect when $x = 0$ and otherwise $\beta$ is the coefficient of $\log(x)$.
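A hedged R sketch of this log-plus-indicator device (entirely made-up data and coefficients, only to show the construction; the original answer contains no code):

set.seed(10)
n <- 200
x <- rgamma(n, shape = 2, scale = 3)
x[sample(n, 30)] <- 0                          # some exact zeros
y <- rbinom(n, 1, plogis(-1 + 0.8 * ifelse(x > 0, log(x), 0) + 0.5 * (x == 0)))

log.x <- ifelse(x > 0, log(x), 0)              # log(x) when x > 0, defaults to 0 at x = 0
z.x   <- as.numeric(x == 0)                    # indicator of x == 0

fit <- glm(y ~ log.x + z.x, family = binomial) # coefficient of z.x estimates the x = 0 effect
summary(fit)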
5,084
PP-plots vs. QQ-plots
As @vector07 notes, probability plot is the more abstract category of which pp-plots and qq-plots are members. Thus, I will discuss the distinction between the latter two.

The best way to understand the differences is to think about how they are constructed, and to understand that you need to recognize the difference between the quantiles of a distribution and the proportion of the distribution that you have passed through when you reach a given quantile. You can see the relationship between these by plotting the cumulative distribution function (CDF) of a distribution. For example, consider the standard normal distribution: We see that approximately 68% of the y-axis (region between red lines) corresponds to 1/3 of the x-axis (region between blue lines). That means that when we use the proportion of the distribution we have passed through to evaluate the match between two distributions (i.e., we use a pp-plot), we will get a lot of resolution in the center of the distributions, but less at the tails. On the other hand, when we use the quantiles to evaluate the match between two distributions (i.e., we use a qq-plot), we will get very good resolution at the tails, but less in the center. (Because data analysts are typically more concerned about the tails of a distribution, which will have more effect on inference for example, qq-plots are much more common than pp-plots.)

To see these facts in action, I will walk through the construction of a pp-plot and a qq-plot. (I also walk through the construction of a qq-plot verbally / more slowly here: QQ-plot does not match histogram.) I don't know if you use R, but hopefully it will be self-explanatory:

set.seed(1)                          # this makes the example exactly reproducible
N = 10                               # I will generate 10 data points
x = sort(rnorm(n=N, mean=0, sd=1))   # from a normal distribution w/ mean 0 & SD 1
n.props = pnorm(x, mean(x), sd(x))   # here I calculate the probabilities associated
                                     # w/ these data if they came from a normal
                                     # distribution w/ the same mean & SD
# I calculate the proportion of x we've gone through at each point
props = 1:N / (N+1)
n.quantiles = qnorm(props, mean=mean(x), sd=sd(x))  # this calculates the quantiles (ie
                                                    # z-scores) associated w/ the props
my.data = data.frame(x=x, props=props,              # here I bundle them together
                     normal.proportions=n.props,
                     normal.quantiles=n.quantiles)
round(my.data, digits=3)             # & display them w/ 3 decimal places
#         x props normal.proportions normal.quantiles
# 1  -0.836 0.091              0.108           -0.910
# 2  -0.820 0.182              0.111           -0.577
# 3  -0.626 0.273              0.166           -0.340
# 4  -0.305 0.364              0.288           -0.140
# 5   0.184 0.455              0.526            0.043
# 6   0.330 0.545              0.600            0.221
# 7   0.487 0.636              0.675            0.404
# 8   0.576 0.727              0.715            0.604
# 9   0.738 0.818              0.781            0.841
# 10  1.595 0.909              0.970            1.174

Unfortunately, these plots aren't very distinctive, because there are few data and we are comparing a true normal to the correct theoretical distribution, so there isn't anything special to see in either the center or the tails of the distribution. To better demonstrate these differences, I plot a (fat-tailed) t-distribution with 4 degrees of freedom, and a bi-modal distribution below. The fat tails are much more distinctive in the qq-plot, whereas the bi-modality is more distinctive in the pp-plot.
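The two comparison plots that the original answer displayed as figures could be drawn from my.data along these lines (this plotting code is my own sketch, not the original):

par(mfrow = c(1, 2))
plot(my.data$normal.proportions, my.data$props,   # pp-plot: theoretical vs. empirical proportions
     xlab = "theoretical proportion", ylab = "empirical proportion", main = "pp-plot")
abline(0, 1)                                      # line of perfect agreement
plot(my.data$normal.quantiles, my.data$x,         # qq-plot: theoretical quantiles vs. data values
     xlab = "theoretical quantile", ylab = "sample value", main = "qq-plot")
abline(0, 1)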
5,085
PP-plots vs. QQ-plots
Here is a definition from v8doc.sas.com: A P-P plot compares the empirical cumulative distribution function of a data set with a specified theoretical cumulative distribution function F(·). A Q-Q plot compares the quantiles of a data distribution with the quantiles of a standardized theoretical distribution from a specified family of distributions. In the text, they also mention:

- differences regarding the way P-P plots and Q-Q plots are constructed and interpreted;
- advantages of using one or another, regarding comparing empirical and theoretical distributions.

Reference: SAS Institute Inc., SAS OnlineDoc®, Version 8, Cary, NC: SAS Institute Inc., 1999
5,086
How to interpret error measures?
Let's denote the true value of interest as $\theta$ and the value estimated using some algorithm as $\hat{\theta}$.

Correlation tells you how much $\theta$ and $\hat{\theta}$ are related. It gives values between $-1$ and $1$, where $0$ is no relation, $1$ is a very strong linear relation and $-1$ is an inverse linear relation (i.e. bigger values of $\theta$ indicate smaller values of $\hat{\theta}$, or vice versa). An illustrated example of correlation can be found at http://www.mathsisfun.com/data/correlation.html.

Mean absolute error is: $$\mathrm{MAE} = \frac{1}{N} \sum^N_{i=1} | \hat{\theta}_i - \theta_i | $$ Root mean square error is: $$ \mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum^N_{i=1} \left( \hat{\theta}_i - \theta_i \right)^2 } $$ Relative absolute error: $$ \mathrm{RAE} = \frac{ \sum^N_{i=1} | \hat{\theta}_i - \theta_i | } { \sum^N_{i=1} | \overline{\theta} - \theta_i | } $$ where $\overline{\theta}$ is the mean value of $\theta$. Root relative squared error: $$ \mathrm{RRSE} = \sqrt{ \frac{ \sum^N_{i=1} \left( \hat{\theta}_i - \theta_i \right)^2 } { \sum^N_{i=1} \left( \overline{\theta} - \theta_i \right)^2 }} $$

As you see, all the statistics compare true values to their estimates, but do it in a slightly different way. They all tell you "how far away" your estimated values are from the true value of $\theta$. Sometimes square roots are used and sometimes absolute values - this is because when using square roots the extreme values have more influence on the result (see Why square the difference instead of taking the absolute value in standard deviation? or on Mathoverflow).

In $\mathrm{MAE}$ and $\mathrm{RMSE}$ you simply look at the "average difference" between those two values - so you interpret them on the scale of your variable (i.e. an $\mathrm{MAE}$ of 1 point is a difference of 1 point of $\theta$ between $\hat{\theta}$ and $\theta$). In $\mathrm{RAE}$ and $\mathrm{RRSE}$ you divide those differences by the variation of $\theta$, so they are expressed relative to that variation; multiplying by 100 gives the error as a percentage of it. The values of $\sum(\overline{\theta} - \theta_i)^2$ or $\sum|\overline{\theta} - \theta_i|$ tell you how much $\theta$ differs from its mean value - so you could tell that it is about how much $\theta$ differs from itself (compare to variance). Because of that the measures are named "relative" - they give you a result related to the scale of $\theta$. Check also those slides.
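A small R helper (my own sketch, with made-up numbers) that computes the four measures exactly as defined above:

error.measures <- function(theta.hat, theta) {
  mae  <- mean(abs(theta.hat - theta))
  rmse <- sqrt(mean((theta.hat - theta)^2))
  rae  <- sum(abs(theta.hat - theta)) / sum(abs(mean(theta) - theta))
  rrse <- sqrt(sum((theta.hat - theta)^2) / sum((mean(theta) - theta)^2))
  c(MAE = mae, RMSE = rmse, RAE = rae, RRSE = rrse)
}

set.seed(3)
theta     <- rnorm(50, mean = 10, sd = 2)   # "true" values
theta.hat <- theta + rnorm(50, sd = 1)      # noisy estimates
error.measures(theta.hat, theta)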
5,087
How are we defining 'reproducible research'?
"Reproducible research" as reproducible analysis Reproducible research is a term used in some research domains to refer specifically to conducting analyses such that code transforms raw data and meta-data into processed data, code runs analyses on the data, and code incorporates analyses into a report. When such data and code are shared, this allows other researchers to: perform analyses not reported by the original researchers check the correctness of the analyses performed by the original researchers This usage can be seen in discussions of technologies like Sweave. E.g., Friedrich Leisch writes in the context of Sweave that "the report can be automatically updated if data or analysis change, which allows for truly reproducible research." It can also be seen in the CRAN Task View on Reproducible Research which states that "the goal of reproducible research is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, better understood and verified." Broader usage of the term "reproducibility" Reproducibility is a fundamental aim of science. It's not new. Research reports include method and results sections that should outline how the data was generated, processed, and analysed. A general rule is that the details provided should be sufficient to enable an appropriately competent researcher to take the information provided and replicate the study. Reproducibility is also closely related to the concepts of replicability and generalisation. Thus, the term "reproducible research", taken literally, as applied to technologies like Sweave, is a misnomer, given that it suggests a relevance broader than it covers. Also, when presenting technologies like Sweave to researchers who have not used such technologies, such researchers are often surprised when I call the process "reproducible research". A better term than "reproducible research" Given that "reproducible research" as used within Sweave-like contexts only pertains to one aspect of reproducible research, perhaps an alternative term should be adopted. Possible alternatives include: Reproducible analysis: John D Cook has used this term Jennifer Blackford uses the term "reliable and reproducible analyses" Reproducible data analysis Christophe Pouzat uses this term Reproducible statistical analysis A Biostats site at Vanderbilt uses the term "reproducible statistical analysis and reporting activities" Reproducible reporting All of the above terms are a more accurate reflection of what Sweave-like analyses entail. Reproducible analysis is short and sweet. Adding "data" or "statistical" further clarifies things, but also makes the term both longer and narrower. Furthermore, "statistical" has a narrow and a broad meaning, and certainly within the narrow meaning, much of data processing is not statistical. Thus, the breadth implied by the term "reproducible analysis" has its advantages. It's not just about reproducibility The other additional issue with the term "reproducible research" is the aim of Sweave-like technologies is not just "reproducibility". There are several interrelated aims: Reproducibility Can the analyses easily be re-run to transform raw data into final report with the same results? Correctness Is the data analysis consistent with the intentions of the researcher? Are the intentions of the researcher correct? Openness Transparency, accountability Can others check and verify the accuracy of analyses performed? 
Extensibility, modfifiability Can others modify, extend, reuse, and mash, the data, analyses, or both to create new research works? There is an argument that reproducible analysis should promote correct analyses, because there is a written record of analyses that can be checked. Furthermore if data and code is shared, it creates accountability which motivates researchers to check their analyses, and enables other researchers to note corrections. Reproducible analysis also fits in closely with concepts around open research. Of course, a researcher can use Sweave-like technologies just for themselves. Open research principles encourage sharing the data and analysis code to enable greater reuse and accountability. This is not really a critique of the use of the word "reproducible". Rather, it just highlights that using Sweave-like technologies is necessary but not sufficient to achieving open scientific research aims.
How are we defining 'reproducible research'?
"Reproducible research" as reproducible analysis Reproducible research is a term used in some research domains to refer specifically to conducting analyses such that code transforms raw data and met
How are we defining 'reproducible research'?

"Reproducible research" as reproducible analysis
Reproducible research is a term used in some research domains to refer specifically to conducting analyses such that code transforms raw data and meta-data into processed data, code runs analyses on the data, and code incorporates analyses into a report. When such data and code are shared, this allows other researchers to: perform analyses not reported by the original researchers; and check the correctness of the analyses performed by the original researchers. This usage can be seen in discussions of technologies like Sweave. E.g., Friedrich Leisch writes in the context of Sweave that "the report can be automatically updated if data or analysis change, which allows for truly reproducible research." It can also be seen in the CRAN Task View on Reproducible Research, which states that "the goal of reproducible research is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, better understood and verified."

Broader usage of the term "reproducibility"
Reproducibility is a fundamental aim of science. It's not new. Research reports include method and results sections that should outline how the data were generated, processed, and analysed. A general rule is that the details provided should be sufficient to enable an appropriately competent researcher to take the information provided and replicate the study. Reproducibility is also closely related to the concepts of replicability and generalisation. Thus, the term "reproducible research", taken literally, as applied to technologies like Sweave, is a misnomer, because it suggests a relevance broader than what it actually covers. Also, when I present technologies like Sweave to researchers who have not used them, they are often surprised that I call the process "reproducible research".

A better term than "reproducible research"
Given that "reproducible research" as used within Sweave-like contexts pertains to only one aspect of reproducible research, perhaps an alternative term should be adopted. Possible alternatives include: reproducible analysis (John D Cook has used this term; Jennifer Blackford uses the term "reliable and reproducible analyses"); reproducible data analysis (Christophe Pouzat uses this term); reproducible statistical analysis (a Biostats site at Vanderbilt uses the term "reproducible statistical analysis and reporting activities"); and reproducible reporting. All of these terms are a more accurate reflection of what Sweave-like analyses entail. "Reproducible analysis" is short and sweet. Adding "data" or "statistical" further clarifies things, but also makes the term both longer and narrower. Furthermore, "statistical" has a narrow and a broad meaning, and within the narrow meaning much of data processing is not statistical. Thus, the breadth implied by the term "reproducible analysis" has its advantages.

It's not just about reproducibility
The other issue with the term "reproducible research" is that the aim of Sweave-like technologies is not just "reproducibility". There are several interrelated aims. Reproducibility: can the analyses easily be re-run to transform raw data into the final report, with the same results? Correctness: is the data analysis consistent with the intentions of the researcher, and are those intentions correct? Openness (transparency, accountability): can others check and verify the accuracy of the analyses performed? Extensibility, modifiability: can others modify, extend, reuse, and mash the data, analyses, or both to create new research works? There is an argument that reproducible analysis should promote correct analyses, because there is a written record of analyses that can be checked. Furthermore, if data and code are shared, this creates accountability, which motivates researchers to check their analyses and enables other researchers to note corrections. Reproducible analysis also fits closely with concepts around open research. Of course, a researcher can use Sweave-like technologies just for themselves; open research principles encourage sharing the data and analysis code to enable greater reuse and accountability. This is not really a critique of the use of the word "reproducible". Rather, it just highlights that using Sweave-like technologies is necessary but not sufficient for achieving open scientific research aims.
5,088
How are we defining 'reproducible research'?
Having access to the data and code for the analysis in an easy-to-execute form is a sine qua non of reproducible research. Once you verify that the analysis works, you can substitute your own code/data where you are skeptical of the original author's. I'd say that the majority of statistics-containing papers I read have at least one part of the methodology that is left vague. My attempts to reproduce these analyses are often unsuccessful (and always time-consuming), but it is very difficult to say whether this is because of fraud, human error, or (much more likely) my resolving these ambiguities differently than the author did. So, having data+code for a paper does not guarantee that its conclusions are true, but it makes it much easier to critique or extend them. Also, "reproducible research" is a matter of degree, so the reproducible research movement can be seen as encouraging research that is "more reproducible" than the norm, rather than demanding that research meet some minimum threshold. I'd guess that "release the data and code" is in vogue now because it is a relatively easy and non-threatening step.
5,089
How are we defining 'reproducible research'?
Being able to re-run everything is a starting point for reproducible research. It lets you show that you are actually using the same procedure. After that, and only after that, can you pursue your peer's line of research. In other words, strict reproducibility is not to be perceived as a point at which the research is moving forward, but as a landmark, a consensus, something on which people agree. Isn't this fundamental to getting further? Also, according to the discussion by Donoho (read section 2, "the scandal"), the aim of reproducible research is also to test the robustness of the given code, first by playing with the code and making slight modifications that were not reported in the paper (because we don't want papers with 30 figures...). I think the concept of reproducible research in the literature contains the idea of having a strong, robust landmark. It almost contains the idea of going further.
5,090
how to weight KLD loss vs reconstruction loss in variational auto-encoder
For anyone stumbling on this post also looking for an answer, this twitter thread has added a lot of very useful insight. Namely: beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework discusses my exact question with a few experiments. Interestingly, it seems their $\beta_{norm}$ (which is similar to my normalised KLD weight) is also centred around 0.1, with higher values giving more structured latent space at the cost of poorer reconstruction, and lower values giving better reconstruction with less structured latent space (though their focus is specifically on learning disentangled representations). and related reading (where similar issues are discussed) Semi-Supervised Learning with Deep Generative Models https://github.com/dpkingma/nips14-ssl InfoVAE: Information Maximizing Variational Autoencoders Density estimation using Real NVP Neural Discrete Representation Learning
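To make the weighting concrete, here is a minimal sketch (my own illustration, not code from the thread above; PyTorch assumed, with hypothetical tensor names) of a VAE loss where the KL term is scaled by a tunable beta, as in the beta-VAE discussion:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=0.1):
    """Reconstruction loss plus beta-weighted KL(q(z|x) || N(0, I)).

    beta around 0.1 corresponds to the 'normalised KL weight' regime mentioned
    above; larger beta pushes toward a more structured latent space at the cost
    of reconstruction quality, smaller beta does the opposite.
    """
    # Per-example reconstruction term (here: MSE summed over pixels).
    recon = F.mse_loss(x_recon, x, reduction="sum") / x.size(0)
    # Analytic KL between the diagonal Gaussian posterior and the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```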
5,091
how to weight KLD loss vs reconstruction loss in variational auto-encoder
I would like to add one more paper relating to this issue (I cannot comment due to my low reputation at the moment). In subsection 3.1 of the paper, the authors report that they failed to train a straight implementation of a VAE that equally weighted the likelihood and the KL divergence. In their case, the KL loss was undesirably reduced to zero, although it was expected to have a small value. To overcome this, they proposed "KL cost annealing", which slowly increases the weight factor of the KL divergence term (the blue curve in their figure) from 0 to 1. This work-around is also applied in the Ladder VAE. Paper: Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A.M., Jozefowicz, R. and Bengio, S., 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
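A minimal sketch of such an annealing schedule (my own illustration, not code from the paper): the KL weight is ramped from 0 to 1 over a fixed number of steps, either linearly or with an S-shaped sigmoid curve; the steepness constant below is my own choice.

```python
import math

def kl_weight(step, total_anneal_steps=10000, shape="linear"):
    """Return the KL weight in [0, 1] for the current training step."""
    if shape == "linear":
        return min(1.0, step / total_anneal_steps)
    # Sigmoid schedule centred at the midpoint of the annealing window.
    k = 10.0 / total_anneal_steps          # steepness (an assumption, not from the paper)
    x0 = total_anneal_steps / 2.0
    return 1.0 / (1.0 + math.exp(-k * (step - x0)))

# Inside the training loop (hypothetical variable names):
# loss = recon_loss + kl_weight(global_step) * kl_loss
```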
5,092
how to weight KLD loss vs reconstruction loss in variational auto-encoder
I faced the same problem of not knowing how to weigh the reconstruction and KL terms, and here I want to add an answer with some concrete values of $\beta$ (the weight of KL term) for future reference. In the $\beta$-VAE paper, they seem to use the values of $\beta_{norm}$ ranging between $0.001$ and $10$ (Fig. 6 from the paper). They calculate $\beta_{norm}$ as follows: $$\beta_{norm} = \frac{\beta M}{N},$$ where $M$ is the size of latent space (e.g. $10$) and $N$ is the input size (e.g. $64 \cdot 64 \cdot 1 = 4096$).
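As a quick worked example of that formula (my own arithmetic, using the example sizes above): to target a given $\beta_{norm}$, solve for $\beta = \beta_{norm} N / M$.

```python
def beta_from_beta_norm(beta_norm, latent_dim, input_dim):
    """Invert beta_norm = beta * M / N to recover the raw KL weight beta."""
    return beta_norm * input_dim / latent_dim

# e.g. M = 10 latent dimensions, N = 64 * 64 * 1 = 4096 input pixels:
beta = beta_from_beta_norm(0.1, latent_dim=10, input_dim=64 * 64 * 1)
print(beta)  # 40.96 -> beta_norm = 0.1 corresponds to beta of roughly 41 here
```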
5,093
how to weight KLD loss vs reconstruction loss in variational auto-encoder
Update on Dec. 6th, 2020: I made a blog post to explain this in detail.

I finally managed to figure out the reason for weighting the KL divergence in a VAE. It is really about the normalizing constant of the distribution used to model the target variable. Here, I am going to present some output distributions we often use. Most of the notation follows the book "Pattern Recognition and Machine Learning".

Linear regression (unbounded regression), section 3.1.1 on page 140 - this explains the weighting of the KL divergence when using an MSE loss.

The target variable $t$ is assumed to be the sum of the deterministic function $y(\mathbf{x}, \mathbf{w})$ and Gaussian noise: \begin{equation} t = y(\mathbf{x}, \mathbf{w}) + \epsilon, \qquad\epsilon \sim \mathcal{N}\left(\epsilon | 0, \color{red}{\beta}^{-1}\right) \end{equation} The target variable is therefore modeled as a Gaussian random variable with the likelihood given as: \begin{equation} p(t | \mathbf{x}, \mathbf{w}, \color{red}{\beta}) = \mathcal{N} \left( t | y(\mathbf{x}, \mathbf{w}), \color{red}{\beta}^{-1} \right) \end{equation} Given this assumption, the log-likelihood at data points $\{\mathbf{x}_{n}, t_{n}\}_{n=1}^{N}$ is: \begin{equation} \ln p(\mathbf{t} | \mathbf{x}, \mathbf{w}, \color{red}{\beta}) = \frac{N}{2} \ln \frac{\color{red}{\beta}}{2\pi} - \color{red}{\beta} E_{D}(\mathbf{w}), \end{equation} where: \begin{equation} E_{D}(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} [t_{n} - y(\mathbf{x}, \mathbf{w})]^{2}. \end{equation} We often optimize only $E_{D}(\mathbf{w})$, not the whole log-likelihood $\ln p(\mathbf{t} | \mathbf{x}, \mathbf{w}, \beta)$, which amounts to ignoring the precision $\color{red}{\beta}$. This may be fine for conventional regression, where the loss consists only of the negative log-likelihood (NLL) $-\ln p(\mathbf{t} | \mathbf{x}, \mathbf{w}, \beta)$ and the prediction is the mean of the target variable $t$. However, the loss in a VAE consists of the NLL (the reconstruction loss) plus the regularization (the KL loss). Therefore, if the weight factor of the MSE term ($E_{D}(\mathbf{w})$ here) is 1, we need to weight the KL divergence by a factor $\beta_{KL} = 1/\color{red}{\beta}$ to be mathematically consistent. In practice, people often find a good value of $\beta_{KL}$ through hyper-parameter tuning. Another approach is to treat $\color{red}{\beta}$ as a learnable parameter, obtained by minimizing the whole VAE loss function.

Logistic regression - this explains the case of the binary cross-entropy loss used for black-and-white images.

Let's consider the case of binary classification. The ground truth is either 0 or 1, and the target variable $t = p(y = 1 | \mathbf{x})$ is assumed to follow a Bernoulli distribution: \begin{equation} p(t | \mathbf{x}, \mathbf{w}) = \mathcal{B}(t | y(\mathbf{x}, \mathbf{w})) = \left[y(\mathbf{x}, \mathbf{w})\right]^{t} \left[ 1 - y(\mathbf{x}, \mathbf{w}) \right]^{1 - t}. \end{equation} Hence, the NLL in this case is given by: \begin{equation} -\ln p(t | \mathbf{x}, \mathbf{w}) = -\left[ t \ln y(\mathbf{x}, \mathbf{w}) + (1 - t) \ln (1 - y(\mathbf{x}, \mathbf{w})) \right], \end{equation} which is the binary cross-entropy loss. (One can extend this to softmax for multiclass classification by using a categorical distribution, which leads to the cross-entropy loss.)
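For the Gaussian/MSE case discussed above, one way to realise the "learn $\beta$ as a parameter" suggestion is to keep a trainable log-precision and optimise the full Gaussian negative log-likelihood rather than the bare MSE. A minimal sketch (my own illustration, PyTorch assumed, hypothetical names):

```python
import math
import torch
import torch.nn as nn

class GaussianNLL(nn.Module):
    """NLL of N(t | y, beta^{-1}) with a learnable precision beta.

    Minimising this instead of the bare MSE keeps both the beta * E_D(w) term
    and the -(N/2) * ln(beta) term, so the KL term of a VAE no longer needs an
    extra hand-tuned weight.
    """
    def __init__(self):
        super().__init__()
        self.log_beta = nn.Parameter(torch.zeros(()))  # beta = exp(log_beta) > 0

    def forward(self, y, t):
        beta = self.log_beta.exp()
        n = t.numel() / t.size(0)                       # target dimensions per example
        sq_err = 0.5 * ((t - y) ** 2).flatten(1).sum(dim=1)
        nll = beta * sq_err - 0.5 * n * (self.log_beta - math.log(2 * math.pi))
        return nll.mean()
```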
For the MNIST (or black-and-white image) data set, each pixel is either 0 or 1, and therefore we can use the binary cross-entropy loss as the reconstruction loss in the VAE to predict the probability that the value of a pixel is 1. And since the mean of the Bernoulli distribution equals $y(\mathbf{x}, \mathbf{w})$, we often use $y(\mathbf{x}, \mathbf{w})$ as the pixel intensity when plotting the reconstructed images. Note that when using the binary cross-entropy loss in a VAE for black-and-white images, we do not need to weight the KL divergence term, as can be seen in many implementations.

Bounded regression (e.g. regression in [0, 1]) - this explains the weighting of the KL divergence when using the binary cross-entropy loss for color images.

As explained under logistic regression, the support (the label) of a Bernoulli distribution is $\{0, 1\}$, not $[0, 1]$, but in practice it is still employed for color-image reconstruction, which requires support in $[0, 1]$ (or $\{0, 1, \ldots, 255\}$). Since our interest is the case of support in $[0, 1]$, we could pick some continuous distribution with support in $[0, 1]$ to model our prediction. One simple choice is the beta distribution; in that case, our prediction would be its two parameters $\alpha$ and $\beta$. Seems complicated? Fortunately, a continuous version of the Bernoulli distribution has been proposed recently, so we can still use the binary cross-entropy loss to predict the intensity of a pixel with some minimal modification. Please refer to the paper "The continuous Bernoulli: fixing a pervasive error in variational autoencoders" or its Wikipedia page for further details of the distribution. Under the continuous Bernoulli assumption, the likelihood can be expressed as: \begin{equation} p(t | \mathbf{x}, \mathbf{w}) = \mathcal{CB}(t | y(\mathbf{x}, \mathbf{w})) = C(y(\mathbf{x}, \mathbf{w})) \, (y(\mathbf{x}, \mathbf{w}))^{t} (1 - y(\mathbf{x}, \mathbf{w}))^{1-t}, \end{equation} where $C(y(\mathbf{x}, \mathbf{w}))$ is the normalizing constant. Hence, when working with a VAE involving the binary cross-entropy loss, instead of tuning a weight factor for the KL term (which might be mathematically incorrect), we simply add $- \ln C(y(\mathbf{x}, \mathbf{w}))$ to the loss and then optimize.
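A small sketch of that correction term, based on the closed-form constant given in the continuous Bernoulli paper, $C(\lambda) = 2\tanh^{-1}(1-2\lambda)/(1-2\lambda)$ for $\lambda \neq 0.5$ and $C(0.5) = 2$; the clamping and the near-0.5 branch below are my own numerical choices (PyTorch assumed):

```python
import torch

def log_cont_bernoulli_norm(lam, eps=1e-6):
    """log C(lambda) for the continuous Bernoulli distribution."""
    lam = lam.clamp(eps, 1 - eps)
    near_half = (lam - 0.5).abs() < 1e-3
    # Substitute a safe value where we take the lambda = 0.5 branch anyway,
    # to avoid the 0/0 indeterminacy in the exact formula.
    safe_lam = torch.where(near_half, torch.full_like(lam, 0.25), lam)
    x = 1 - 2 * safe_lam
    exact = torch.log(2 * torch.atanh(x) / x)
    return torch.where(near_half, torch.log(torch.full_like(lam, 2.0)), exact)

# VAE reconstruction loss sketch: the usual BCE term minus the log-normaliser, i.e.
# recon = F.binary_cross_entropy(y, t, reduction="sum") - log_cont_bernoulli_norm(y).sum()
```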
5,094
How to make a time series stationary?
De-trending is fundamental. This includes regressing against covariates other than time. Seasonal adjustment is a version of taking differences but could be construed as a separate technique. Transformation of the data implicitly converts a difference operator into something else; e.g., differences of the logarithms are actually ratios. Some EDA smoothing techniques (such as removing a moving median) could be construed as non-parametric ways of detrending. They were used as such by Tukey in his book on EDA. Tukey continued by detrending the residuals and iterating this process for as long as necessary (until he achieved residuals that appeared stationary and symmetrically distributed around zero).
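As a small illustration of these two ideas, regression detrending and removing a moving median, here is a sketch with made-up data (numpy/pandas assumed; the series, window length, and trend form are my own choices):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = np.arange(200)
y = pd.Series(0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 200))

# 1. Parametric detrending: regress on time and keep the residuals.
trend_coefs = np.polyfit(t, y, deg=1)
detrended = y - np.polyval(trend_coefs, t)

# 2. Non-parametric detrending in the EDA spirit: subtract a moving median.
detrended_eda = y - y.rolling(window=13, center=True).median()
```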
5,095
How to make a time series stationary?
I still think using the % change from one period to the next is the best way to render a non-stationary variable stationary, as you first suggest. A transformation such as a log works reasonably well (it flattens the non-stationary quality but does not eliminate it entirely). The third way is to deseasonalize and de-trend the data simultaneously in one single linear regression. One independent variable would be trend (or time): 1, 2, 3, ... up to however many time periods you have. The other variable would be a categorical variable with 11 different categories (for 11 out of the 12 months). Then, using the resulting coefficients from this regression, you can simultaneously detrend and deseasonalize the data. You will see your whole data set essentially flattened. The remaining differences between periods will reflect changes independent of both the growth trend and the season.
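A sketch of that single regression (statsmodels assumed; the synthetic monthly series below is my own illustration), where the residuals are the simultaneously detrended and deseasonalized series:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly series with a growth trend and a seasonal pattern.
dates = pd.date_range("2015-01-01", periods=96, freq="MS")
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "date": dates,
    "y": 100 + 0.5 * np.arange(96)
         + 10 * np.sin(2 * np.pi * dates.month / 12)
         + rng.normal(0, 2, 96),
})
df["t"] = np.arange(1, len(df) + 1)       # trend variable: 1, 2, 3, ...
df["month"] = df["date"].dt.month         # 12 categories -> 11 dummies in the regression

fit = smf.ols("y ~ t + C(month)", data=df).fit()
flattened = fit.resid                     # detrended and deseasonalized in one step
```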
5,096
How to make a time series stationary?
Logs, reciprocals, and other power transformations often yield unexpected results. As for detrending residuals (i.e. Tukey), this may have some application in some cases but could be dangerous. On the other hand, detecting level shifts and trend changes is systematically available to researchers employing intervention detection methods. Since a level shift is the difference of a time trend, just as a pulse is the difference of a level shift, the methods employed by Ruey Tsay readily cover this problem. If a series exhibits level shifts (i.e. changes in intercept), the appropriate remedy to make the series stationary is to "demean" the series. Box-Jenkins erred critically in assuming that the remedy for non-stationarity was the differencing operator. So, sometimes differencing is appropriate and other times adjusting for the mean shift(s) is appropriate. In either case, the autocorrelation function can exhibit non-stationarity. This is a symptom of the state of the series (i.e. stationary or non-stationary). In the case of evident non-stationarity, the causes can be different: for example, the series truly has a continuously varying mean, or the series has had a temporary change in mean. The suggested approach was first proposed by Tsay in 1982 and has been added to some software. Researchers should refer to Tsay's Journal of Forecasting article titled "Outliers, Level Shifts, and Variance Changes in Time Series", Journal of Forecasting, Vol. 7, 1-20 (1988). As usual, textbooks are slow to incorporate leading-edge technology, but this material is referenced in the Wei book (Time Series Analysis); DeLurgio and Makridakis cover incorporating interventions, but not how to detect them, as Wei's text does.
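As a toy illustration of "demeaning" rather than differencing when the non-stationarity is a level shift (my own sketch; it assumes the break point is already known, whereas Tsay's procedure also detects it):

```python
import numpy as np

rng = np.random.default_rng(2)
n, break_point = 200, 120
y = rng.normal(0, 1, n)
y[break_point:] += 5                      # a level shift, not a stochastic trend

# Adjust for the shift by removing each segment's mean.
adjusted = y.copy()
adjusted[:break_point] -= y[:break_point].mean()
adjusted[break_point:] -= y[break_point:].mean()
# 'adjusted' is now stationary; differencing y instead would leave one large
# spike at the break and needlessly difference the rest of the series.
```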
5,097
How to make a time series stationary?
Could you fit a loess/spline through the data and use the residuals? Would the residuals be stationary? Seems fraught with issues to consider, and perhaps there would not be as clear an indication of an overly-flexible curve as there is for over-differencing.
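A quick sketch of the idea (my own illustration; statsmodels' lowess assumed, and whether the residuals end up stationary still has to be checked, e.g. with an ADF test):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
t = np.arange(300)
y = 0.02 * t + np.sin(t / 20) + rng.normal(0, 0.2, 300)

smooth = lowess(y, t, frac=0.2, return_sorted=False)  # frac controls curve flexibility
resid = y - smooth

print(adfuller(resid)[1])   # p-value of the ADF unit-root test on the residuals
```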
5,098
How to make a time series stationary?
Difference with another series, e.g., Brent oil prices are not stationary, but the spread between Brent and light sweet crude might be. A riskier proposition for forecasting is to bet on the existence of a cointegration relationship with another time series.
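A minimal sketch of checking whether such a spread is a reasonable bet (my own illustration with simulated prices; the Engle-Granger test from statsmodels is assumed):

```python
import numpy as np
from statsmodels.tsa.stattools import coint, adfuller

# Hypothetical price series sharing a common stochastic trend.
rng = np.random.default_rng(4)
common = np.cumsum(rng.normal(0, 1, 500))
brent = common + rng.normal(0, 0.5, 500)
wti = 0.9 * common + rng.normal(0, 0.5, 500)

spread = brent - wti
print(adfuller(spread)[1])      # is the simple spread itself stationary?
print(coint(brent, wti)[1])     # Engle-Granger cointegration test p-value
```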
5,099
Is it possible to give variable sized images as input to a convolutional neural network?
There are a number of ways to do it. Most of these have already been covered in a number of posts on StackOverflow, Quora and other content websites. To summarize, most of the techniques listed can be grouped into two classes of solutions, namely transformations and inherent network properties. Under transformations, one can look at techniques such as: Resize, which is the simplest of all the techniques mentioned; and Crop, which can be done as a sliding window or as a one-time crop with information loss. One can also look into networks that are inherently immune to the size of the input by virtue of the layers that build up the network (a sketch follows this answer). Examples include: Fully convolutional networks (FCN), which have no limitations on the input size at all, because once the kernel and step sizes are described, the convolution at each layer can generate appropriately sized outputs according to the corresponding inputs; and Spatial Pyramid Pooling (SPP): FCNs do not have a fully connected dense layer and hence are agnostic to the image size, but if one wanted to use a dense layer without considering input transformations, there is an interesting paper that explains the layer in a deep learning network. References: https://www.quora.com/How-are-variably-shaped-and-sized-images-given-inputs-to-convoluted-neural-networks https://ai.stackexchange.com/questions/2008/how-can-neural-networks-deal-with-varying-input-sizes https://discuss.pytorch.org/t/how-to-create-convnet-for-variable-size-input-dimension-images/1906 P.S. I might have missed citing a few techniques. Not claiming this to be an exhaustive list.
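To illustrate the "inherent network property" route mentioned above, here is a minimal sketch (my own, not from the linked posts; PyTorch assumed) of a network whose convolutional part accepts any input size and whose adaptive pooling, in the spirit of a single-level SPP, hands a fixed-length vector to the dense layer:

```python
import torch
import torch.nn as nn

class VariableSizeNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Convolutions constrain only the number of channels, not H and W.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Adaptive pooling maps any spatial size to a fixed 4x4 grid.
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x))
        return self.classifier(x.flatten(1))

net = VariableSizeNet()
print(net(torch.randn(1, 3, 64, 64)).shape)    # torch.Size([1, 10])
print(net(torch.randn(1, 3, 161, 97)).shape)   # same output shape, different input size
```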
5,100
Is it possible to give variable sized images as input to a convolutional neural network?
The convolutional layers and pooling layers themselves are independent of the input dimensions. However, the output of the convolutional layers will have different spatial sizes for differently sized images, and this will cause an issue if we have a fully connected layer afterwards (since our fully connected layer requires a fixed-size input). There are several solutions to this:

1. Global Pooling: Avoid fully connected layers at the end of the convolutional layers, and instead use pooling (such as Global Average Pooling) to reduce your feature maps from a shape of (N, H, W, C) (before global pool) to shape (N, 1, 1, C) (after global pool), where N = number of minibatch samples, H = spatial height of the feature map, W = spatial width of the feature map, and C = number of feature maps (channels). As can be seen, the output dimensionality (N*C) is now independent of the spatial size (H, W) of the feature maps. In the case of classification, you can then use a fully connected layer on top to get the logits for your classes (a sketch follows this answer).

2. Variable-sized pooling: Use variable-sized pooling regions to get the same feature map size for different input sizes.

3. Crop/Resize/Pad input images: You can try to rescale/crop/pad your input images so that they all have the same shape.

In the context of transfer learning, you might want to use differently sized inputs than the original inputs that the model was trained with. Here are some options for doing so:

4. Create new fully connected layers: You can ditch the original fully connected layers completely and initialize a new fully connected layer with the dimensionality that you need, and train it from scratch.

5. Treat the fully connected layer as a convolution: Normally, we reshape the feature maps from (N, H, W, C) to (N, H*W*C) before feeding them to the fully connected layer. But you can also treat the fully connected layer as a convolution with a receptive field of (H, W). Then, you can just convolve this kernel with your feature maps regardless of their size (use zero padding if needed) [http://cs231n.github.io/transfer-learning/].
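As a sketch of option 1 (global average pooling; my own minimal example, PyTorch assumed), where the dense layer only ever sees the channel dimension and therefore works for any input size:

```python
import torch
import torch.nn as nn

class GlobalPoolClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(128, num_classes)   # depends only on C, never on H or W

    def forward(self, x):                        # x: (N, 3, H, W) for any H, W
        feats = self.conv(x)                     # (N, 128, H, W)
        pooled = feats.mean(dim=(2, 3))          # global average pool -> (N, 128)
        return self.fc(pooled)

model = GlobalPoolClassifier()
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 10])
print(model(torch.randn(2, 3, 96, 160)).shape)   # same output, different input size
```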