idx | question | answer
---|---|---
4,301 | Is the COVID-19 pandemic curve a Gaussian curve? | No, but (under the right assumptions, which in practice aren't likely to hold) sort of.
As Michael Reid points out, the number of infected people in an epidemic under simplified constant conditions (constant R0) is governed by the logistic equation, which leads to a sigmoid, the logistic function. The derivative of the logistic function is the bell-shaped density curve of the logistic distribution, which is not normal despite looking normal at first glance. Since the derivative represents the number of newly infected people per time unit, and common metrics like the number of deaths per day or the number of newly reported cases per day are more or less proportional to a delayed and smeared-out version of the number of newly infected people, they also follow a curve similar to the logistic distribution's density function.
However, some assumptions of the logistic equation may not hold for the coronavirus outbreak; in fact, they may not hold for any real population, although the logistic equation is a common and useful model in population dynamics:
In the dynamic equation it is assumed that the whole population reproduces, that is, that everyone who has been infected keeps infecting more people. In reality, at some point infected people stop spreading the infection.
Conditions (R0) are assumed constant. In the real world, containment measures are introduced and therefore R0 changes.
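The contrast between the logistic density and a Gaussian can be checked numerically. The following is a minimal sketch, not part of the original answer, assuming SciPy is available; the scale parameter is arbitrary.

```python
# Compare the logistic distribution's density (new infections per day under the
# simple logistic model) with a Gaussian matched to the same mean and variance.
import numpy as np
from scipy import stats

s = 1.0                                        # assumed logistic scale parameter
x = np.linspace(-10, 10, 2001)
logistic_pdf = stats.logistic.pdf(x, loc=0, scale=s)
# A logistic(0, s) distribution has variance (pi * s)**2 / 3.
gauss_pdf = stats.norm.pdf(x, loc=0, scale=np.pi * s / np.sqrt(3))

ratio = logistic_pdf / gauss_pdf
i_peak, i_tail = np.argmin(np.abs(x)), np.argmin(np.abs(x - 6))
# The logistic is slightly more peaked (~1.1x) and has noticeably heavier tails.
print(ratio[i_peak], ratio[i_tail])
```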
4,302 | Is the COVID-19 pandemic curve a Gaussian curve? | Short answer: no. I was wondering the same thing and I found a way to plot populations of susceptible, infected, and recovered people. It's a model called a compartmental model of epidemiology, and the specific algorithm is called the Gillespie algorithm. There's Python code in the second link, but I tried it in R and it looks like this, and here's the notebook if you're interested.
It seems like something like a Poisson distribution would be closer, but under the right conditions we could approximate the Poisson with a normal/Gaussian distribution. That's the generous interpretation. The other interpretations are: 1, the CDC actually doesn't know the right shape, or 2, the CDC wants to dumb it down for public consumption.
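For readers who want to try the idea without the linked notebook, here is a minimal sketch, not the answer's actual code, of a Gillespie-style stochastic simulation of an SIR compartmental model; the rates beta and gamma and the population sizes are made-up illustration values.

```python
# Gillespie stochastic simulation of a simple SIR model:
# infection (S + I -> 2I) at rate beta*S*I/N, recovery (I -> R) at rate gamma*I.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_sir(S, I, R, beta, gamma, t_max):
    N = S + I + R
    t, history = 0.0, [(0.0, S, I, R)]
    while t < t_max and I > 0:
        rate_infect = beta * S * I / N
        rate_recover = gamma * I
        total = rate_infect + rate_recover
        t += rng.exponential(1.0 / total)          # waiting time to the next event
        if rng.random() < rate_infect / total:     # pick an event proportional to its rate
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        history.append((t, S, I, R))
    return history

trajectory = gillespie_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, t_max=200)
print(trajectory[-1])    # final (time, S, I, R) of one stochastic run
```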
4,303 | Is the COVID-19 pandemic curve a Gaussian curve? | The simplest analysis of an epidemic leads to a logistic curve model. The rate of new infections will be the derivative of total cases, which under that model would give a bell-shaped curve (normal-ish in the middle but with much fatter tails -- see Dirk's comment below).
The assumptions behind the model are a constant rate of transmission, exactly as would be the case for exponential growth, but unlike exponential growth there is the presence of a saturation limit. In many epidemics the saturation limit would be the entire population (i.e. eventually everyone will have been exposed and acquired immunity). In the case of COVID-19 that's hopefully not going to be the case, so some hand-wavy adjustment will be needed such that the spread limits at some subset of the whole population.
My source for this was this excellent YouTube video 1. (Maybe there is some better source than YouTube?)
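To spell out the model the answer sketches (this block is not in the original answer; $K$ denotes the assumed saturation limit and $r$ the transmission rate): cumulative cases $C(t)$ follow the logistic growth equation

$$\frac{dC}{dt} = r\,C\left(1 - \frac{C}{K}\right), \qquad C(t) = \frac{K}{1 + \frac{K - C_0}{C_0}\,e^{-rt}},$$

and the rate of new infections $dC/dt$ is the bell-shaped curve referred to above, peaking when $C = K/2$ and decaying with exponential tails rather than the Gaussian's $e^{-t^2}$-type tails.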
4,304 | Is the COVID-19 pandemic curve a Gaussian curve? | I'm no epidemiologist myself, but another key difference between that curve and a Gaussian curve is that the Gaussian decays to zero relatively fast (as $e^{-t^2}$ after some time $t$), while an actual epidemic can be expected to taper off at a much slower rate at the end, or might even not decay to $0$ but to some other (hopefully low) constant, i.e. the virus might not die out entirely like the Gaussian curve suggests.
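To make the tail comparison concrete (added here, not in the original answer; $\sigma$, $s$, and $c$ are illustration symbols only): for large $t$,

$$ \underbrace{e^{-t^2/(2\sigma^2)}}_{\text{Gaussian tail}} \;\ll\; \underbrace{e^{-t/s}}_{\text{logistic-type tail}} \;\ll\; \underbrace{c > 0}_{\text{endemic level}}, $$

so a Gaussian understates how long the tail of an epidemic can linger.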
4,305 | Is the COVID-19 pandemic curve a Gaussian curve? | No. As demonstrated here on various countries, so far a reasonable way to model the curves of daily new confirmed cases and deaths for COVID-19 is to use:
an increasing exponential at the very beginning
a logistic curve when the curve starts to flatten (see 3Blue1Brown's video)
a decreasing exponential shortly after the first peak
afterwards, we might lack the data to tell (a rough sketch of such a piecewise model is given below).
See for example Italy as of the 22nd of April 2020 (with a logistic fit before the peak and an exponential fit after):
As for the USA, the logistic model is enough so far:
Finally, it is harder to tell for China:
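The piecewise model described above could look roughly like the following sketch (not the author's code; all parameter values and breakpoints are made up, and a real fit would estimate them and enforce continuity at the breakpoints).

```python
# Daily counts modeled in three phases: exponential rise, the derivative of a
# logistic curve around the peak, and exponential decay afterwards.
import numpy as np

def daily_cases(t, a=5.0, r1=0.25, K=6e4, r2=0.2, t_peak=40.0, r3=0.05,
                t1=20.0, t2=60.0):
    early = a * np.exp(r1 * t)                      # increasing exponential
    z = np.exp(-r2 * (t - t_peak))
    middle = K * r2 * z / (1.0 + z) ** 2            # derivative of a logistic curve
    z2 = np.exp(-r2 * (t2 - t_peak))
    level_at_t2 = K * r2 * z2 / (1.0 + z2) ** 2
    late = level_at_t2 * np.exp(-r3 * (t - t2))     # decreasing exponential
    return np.select([t < t1, t < t2], [early, middle], default=late)

t = np.arange(0, 120.0)
print(daily_cases(t).round(1)[::20])                # one value every 20 days
```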
4,306 | Is the COVID-19 pandemic curve a Gaussian curve? | In fact, this curve seems to fit an inverse Gaussian distribution well. This distribution is widely used in psychology or economics for describing the distribution of time delays. Indeed, there are similarities between such processes and a pandemic (where what is denoted in the graph by the variable $x$ will be the time since the start of the pandemic):
The source code for applying such a fitting procedure to your own data is available in this notebook
Note that for certain parameter values, this curve may look close to a Gaussian "bell-shaped" distribution. The mean and standard deviation control the time of the peak and the "spread" of the curve. Still, the fitting error will be smaller using the inverse Gaussian distribution. Given how much the precision of the inferred parameters matters for broad policy decisions and for the final fatality rate, the choice of a fitting procedure must be carefully validated.
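A minimal version of such a fit, not the referenced notebook's code, might look like the sketch below; `days` and `daily_cases` stand in for your own data, and the starting values may need tuning.

```python
# Fit a scaled inverse-Gaussian density to a daily-cases curve by least squares.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def scaled_invgauss(t, total, mu, scale):
    # total ~ final outbreak size; (mu, scale) control the timing and spread
    return total * stats.invgauss.pdf(t, mu, loc=0.0, scale=scale)

rng = np.random.default_rng(1)
days = np.arange(1.0, 101.0)                               # placeholder time axis
daily_cases = scaled_invgauss(days, 5e4, 0.5, 60.0) \
    + rng.normal(0.0, 50.0, days.size)                     # fake noisy "data"

p0 = (daily_cases.sum(), 1.0, 30.0)                        # rough starting values
params, _ = curve_fit(scaled_invgauss, days, daily_cases, p0=p0,
                      bounds=(0, np.inf))
print(params)                                              # fitted (total, mu, scale)
```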
4,307 | Is the COVID-19 pandemic curve a Gaussian curve? | In the early stages of an epidemic, growth is exponential. The two key parameters are R0 (the average number of people infected by each person who catches it) and the incubation time. The goal is to reduce R0: once it is less than 1.0 the epidemic is over. Most countries are still at that stage for COVID-19.
Once a significant fraction of the population becomes immune, an exponential model is no longer a good fit. See user953847's great answer above.
Sextus Empiricus points out that actual data are irregular. That is true of any real data. Nevertheless, ideal models can be useful as a way to find and communicate trends underlying the irregularities.
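For a concrete link between R0 and the early exponential phase (added for illustration, not in the original answer; this uses the basic SIR model with transmission rate $\beta$ and recovery rate $\gamma$):

$$ I(t) \approx I(0)\,e^{rt}, \qquad r = \beta - \gamma = \gamma\,(R_0 - 1), \qquad R_0 = \frac{\beta}{\gamma}, $$

so case counts grow exponentially while $R_0 > 1$ and decline once interventions push $R_0$ below 1.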
4,308 | Is the COVID-19 pandemic curve a Gaussian curve? | Biological growth (cumulative) of virus epidemics, or trees, or humans, or other biological phenomena, in general follows the logistic function $F(x) = 1/(1+e^{-x})$. The logistic curve is sigmoid or S-shaped. It does not "flatten" but it has an inflection point.
The first derivative is the growth rate. That curve follows the logistic distribution. It is bell-shaped like the Gaussian curve, although it is different: $f(x) = e^{-x}/(1+e^{-x})^2$. The peak of the growth rate curve is contemporaneous (because the x-axis is time) with the inflection point of the cumulative growth curve.
The second derivative is acceleration. It is S-shaped on its side, like a sine wave skewed to the right. Acceleration passes through the x-axis (equals zero) when the rate peaks and cumulative growth inflects. Thereafter acceleration is negative (deceleration) and after dipping into negative territory it asymptotically approaches the x-axis from below.
The Gompertz function is a specialized case of the general logistic function, and is sometimes used for growth studies because it has parameters that can be solved for via linear regression. One of the parameters is the upper asymptote of the cumulative growth curve. That parameter would correspond to total deaths or total cases if those were what you were estimating.
Also sometimes used is the Weibull distribution, another specialized case with parameters. We used the Weibull to develop so-called individual tree stand growth models back when I was a grad student.
That is the math of growth. It is not "exponential" or "logarithmic". It is logistic.
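For reference (this block is not part of the original answer), the three curves the answer walks through, written out for the standard logistic function:

$$ F(x) = \frac{1}{1+e^{-x}}, \qquad F'(x) = \frac{e^{-x}}{(1+e^{-x})^2} = F(x)\bigl(1-F(x)\bigr), \qquad F''(x) = F(x)\bigl(1-F(x)\bigr)\bigl(1-2F(x)\bigr), $$

so the acceleration $F''$ crosses zero exactly at $F = \tfrac12$, where the growth rate $F'$ peaks and the cumulative curve inflects.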
4,309 | What are the breakthroughs in Statistics of the past 15 years? | The answer is so simple that I have to write all this gibberish to make CV let me post it: R
4,310 | What are the breakthroughs in Statistics of the past 15 years? | I'm not sure if you would call it a "breakthrough" per se, but the publication of Probability Theory: The Logic of Science by Edwin Jaynes and Larry Bretthorst may be noteworthy. Some of the things they do there are:
1) show the equivalence between some iterative "seasonal adjustment" schemes and Bayesian "nuisance parameter" integration;
2) resolve the so-called "Marginalisation Paradox", thought to be the "death of Bayesianism" by some, and the "death of improper priors" by others;
3) advance the idea that probability describes a state of knowledge about a proposition being true or false, as opposed to describing a physical property of the world.
The first three chapters of this book are available for free here.
4,311 | What are the breakthroughs in Statistics of the past 15 years? | As an applied statistician and occasional minor software author, I'd say:
WinBUGS (released 1997)
It's based on BUGS, which was released more than 15 years ago (1989), but it's WinBUGS that made Bayesian analysis of realistically complex models available to a far wider user base. See e.g. Lunn, Spiegelhalter, Thomas & Best (2009) (and the discussion on it in Statistics in Medicine vol. 28 issue 25).
4,312 | What are the breakthroughs in Statistics of the past 15 years? | LARS gets my vote. It combines linear regression with variable selection. Algorithms to compute it usually give you a collection of $k$ linear models, the $i$th one of which has nonzero coefficients for only $i$ regressors, so you can easily look at models of different complexity.
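A small sketch of what that looks like in practice, added here for illustration (not from the original answer), using scikit-learn's `lars_path` on synthetic data:

```python
# Compute the LARS path; each column of `coefs` is one model along the path,
# and the number of nonzero coefficients grows as the path progresses.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)
alphas, active, coefs = lars_path(X, y, method="lar")

for step in range(coefs.shape[1]):
    print(step, int(np.count_nonzero(coefs[:, step])))
```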
4,313 | What are the breakthroughs in Statistics of the past 15 years? | The introduction of the "intrinsic discrepancy" loss function and other "parameterisation free" loss functions into decision theory. It has many other "nice" properties, but I think the best one is as follows:
if the best estimate of $\theta$ using the intrinsic discrepancy loss function is $\theta^{e}$, then the best estimate of any one-to-one function of $\theta$, say $g(\theta)$, is simply $g(\theta^{e})$.
I think this is very cool! (e.g. the best estimate of the log-odds is $\log(p/(1-p))$, the best estimate of the variance is the square of the standard deviation, etc.)
The catch? The intrinsic discrepancy can be quite difficult to work out! (It involves a min() function, a likelihood ratio, and integrals!)
The "counter-catch"? You can "re-arrange" the problem so that it is easier to calculate!
The "counter-counter-catch"? Figuring out how to "re-arrange" the problem can be difficult!
Here are some references I know of which use this loss function. While I very much like the "intrinsic estimation" parts of these papers/slides, I have some reservations about the "reference prior" approach that is also described.
Bayesian Hypothesis Testing: A Reference Approach
Intrinsic Estimation
Comparing Normal Means: New Methods for an Old Problem
Integrated Objective Bayesian Estimation and Hypothesis Testing
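For context (added, not in the original answer), Bernardo's intrinsic discrepancy between two parameter values is the smaller of the two directed Kullback-Leibler divergences, which is where the min(), likelihood ratio, and integrals mentioned above come from:

$$ \delta\{\theta_1,\theta_2\} \;=\; \min\left\{ \int p(x\mid\theta_1)\log\frac{p(x\mid\theta_1)}{p(x\mid\theta_2)}\,dx,\;\; \int p(x\mid\theta_2)\log\frac{p(x\mid\theta_2)}{p(x\mid\theta_1)}\,dx \right\}. $$

Because KL divergence is unchanged under one-to-one reparameterisation, the resulting intrinsic estimator inherits the invariance property praised above.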
4,314 | What are the breakthroughs in Statistics of the past 15 years? | Just falling within the 15-year window, I believe, are the algorithms for controlling the False Discovery Rate. I like the 'q-value' approach.
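As a reference point (not from the original answer), the Benjamini-Hochberg step-up procedure is the classic FDR-controlling algorithm; the p-values below are made up.

```python
# Benjamini-Hochberg: reject the hypotheses with the k smallest p-values,
# where k is the largest index with p_(k) <= (k / m) * q.
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # last index meeting the threshold
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.760]
print(benjamini_hochberg(pvals, q=0.05))       # rejects the two smallest here
```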
4,315 | What are the breakthroughs in Statistics of the past 15 years? | Adding my own 5 cents, I believe the most significant breakthrough of the past 15 years has been Compressed Sensing. LARS, LASSO, and a host of other algorithms fall in this domain, in that Compressed Sensing explains why they work and extends them to other domains.
4,316 | What are the breakthroughs in Statistics of the past 15 years? | Something that has very little to do with statistics themselves, but has been massively beneficial: The increasing firepower of computers, making larger datasets and more complex statistical analysis more accessible, especially in applied fields.
4,317 | What are the breakthroughs in Statistics of the past 15 years? | The Expectation-Propagation algorithm for Bayesian inference, especially in Gaussian Process classification, was arguably a significant breakthrough, as it provides an efficient analytic approximation method that works almost as well as computationally expensive sampling-based approaches (unlike the usual Laplace approximation). See the work of Thomas Minka and others on the EP roadmap.
4,318 | What are the breakthroughs in Statistics of the past 15 years? | I think that 'Approximate Bayesian Inference for Latent Gaussian Models Using Integrated Nested Laplace Approximations' by H. Rue et al. (2009) is a potential candidate.
4,319 | What are the breakthroughs in Statistics of the past 15 years? | In my opinion, everything allowing you to run new models on a large scale is a breakthrough. Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP) could be a candidate (though the idea is new and there have not been many implementations of the idea presented).
4,320 | What are the breakthroughs in Statistics of the past 15 years? | In my opinion, a paper published in 2011 in Science magazine. The authors propose a very interesting measure of association between a pair of random variables that works well in many situations where similar measures fail (Pearson, Spearman, Kendall). Really nice paper. Here it is.
4,321 | What are the breakthroughs in Statistics of the past 15 years? | While a bit more general than statistics, I think there have been important advances in methods of reproducible research (RR). For example, the development of R's knitr and Sweave packages, "R Markdown" notebooks, and LyX and LaTeX improvements have contributed significantly to data sharing, collaboration, verification/validation, and even additional statistical advancement. Refereed papers in statistical, medical, and epidemiological journals rarely allowed one to reproduce results easily prior to the emergence of these reproducible research methods/technologies. Now, several journals require reproducible research, and many statisticians are using RR and posting code, their results, and data sources on the web. This has also helped to foster data science disciplines and made statistical learning more accessible.
4,322 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | This is my personal opinion, so I'm not sure it properly qualifies as an answer.
Why should we teach hypothesis testing?
One very big reason, in short, is that, in all likelihood, in the time it takes you to read this sentence, hundreds, if not thousands (or millions) of hypothesis tests have been conducted within a 10ft radius of where you sit.
Your cell phone is definitely using a likelihood ratio test to decide whether or not it is within range of a base station. Your laptop's WiFi hardware is doing the same in communicating with your router.
The microwave you used to auto-reheat that two-day old piece of pizza used a hypothesis test to decide when your pizza was hot enough.
Your car's traction control system kicked in when you gave it too much gas on an icy road, or the tire-pressure warning system let you know that your rear passenger-side tire was abnormally low, and your headlights came on automatically at around 5:19pm as dusk was setting in.
Your iPad is rendering this page in landscape format based on (noisy) accelerometer readings.
Your credit card company shut off your card when "you" purchased a flat-screen TV at a Best Buy in Texas and a $2000 diamond ring at Zales in a Washington-state mall within a couple hours of buying lunch, gas, and a movie near your home in the Pittsburgh suburbs.
The hundreds of thousands of bits that were sent to render this webpage in your browser each individually underwent a hypothesis test to determine whether they were most likely a 0 or a 1 (in addition to some amazing error-correction).
Look to your right just a little bit at those "related" topics.
All of these things "happened" due to hypothesis tests. For many of these things some interval estimate of some parameter could be calculated. But, especially for automated industrial processes, the use and understanding of hypothesis testing is crucial.
On a more theoretical statistical level, the important concept of statistical power arises rather naturally from a decision-theoretic / hypothesis-testing framework. Plus, I believe "even" a pure mathematician can appreciate the beauty and simplicity of the Neyman–Pearson lemma and its proof.
This is not to say that hypothesis testing is taught, or understood, well. By and large, it's not. And, while I would agree that—particularly in the medical sciences—reporting of interval estimates along with effect sizes and notions of practical vs. statistical significance are almost universally preferable to any formal hypothesis test, this does not mean that hypothesis testing and the related concepts are not important and interesting in their own right.
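Picking up the bit-detection example above (this block is added for illustration and is not part of the original answer; the signal model is the standard textbook one): if a received sample is $y = s + n$ with $s \in \{-A, +A\}$ encoding the bit and noise $n \sim \mathcal{N}(0, \sigma^2)$, the likelihood ratio for "$s = +A$" versus "$s = -A$" is

$$ \Lambda(y) = \frac{\exp\!\left(-(y-A)^2/2\sigma^2\right)}{\exp\!\left(-(y+A)^2/2\sigma^2\right)} = \exp\!\left(\frac{2Ay}{\sigma^2}\right), $$

and with equal priors and symmetric costs the test threshold is 1, so the decision rule reduces to checking the sign of $y$: the hardware's "decide 1 if the received voltage is positive" rule is literally a likelihood ratio test.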
4,323 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | I teach hypothesis tests for a number of reasons. One is historical: students will have to understand a large body of prior research they read, and understand the hypothesis-testing point of view. A second is that, even in modern times, it's still used by some researchers, often implicitly, when performing other kinds of statistical analyses.
But when I teach it, I teach it in the framework of model building: these assumptions and estimates are parts of building models. That way it's relatively easy to switch to comparing more complex and theoretically interesting models. Research more often pits theories against one another rather than a theory against nothing.
The sins of hypothesis testing are not inherent in the math or in the proper use of those calculations. Where they primarily lie is in over-reliance and misinterpretation. If a vast majority of naïve researchers exclusively used interval estimation with no recognition of any of the relationships to these things we call hypotheses, we might call that a sin.
4,324 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | I personally feel we would be better off without hypothesis tests. The only place I can think of where hypothesis tests offer something unique and useful is in the area of multiple degree of freedom joint hypothesis tests. Examples include ANOVA for comparing more than two groups, simultaneous tests combining main effects and interactions (tests of total effect), and simultaneous tests combining linear and nonlinear terms related to a continuous predictor (multiple d.f. test of association). For simple things, interval estimation is easier, and much less likely to mislead than $P$-values. As said so well in the classic paper Absence of evidence is not evidence of absence, a large $P$-value contains no information. $P$-values only provide evidence against a hypothesis, never in favor of it (Fisher's response when asked how to interpret a large $P$-value was "Get more data"). A confidence or credible interval keeps the researcher more honest by describing how much she doesn't know.
4,325 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | I think it depends on which hypothesis testing you are talking about. The "classical" hypothesis testing (Neyman-Pearson) is said to be defective because it does not appropriately condition on what actually happened when you did the test. It is instead designed to work "regardless" of what you actually saw, in the long run. But failing to condition can lead to misleading results in the individual case, simply because the procedure "does not care" about the individual case, only about the long run.
Hypothesis testing can be cast in the decision theoretical framework, which I think is a much better way to understand it. You can restate the problem as two decisions:
"I will act as if $H_0$ is true"
"I will act as if $H_\mathrm{A}$ is true"
The decision framework is much easier to understand, because it clearly separates out the concepts of "what will you do?" and "what is the truth?" (via your prior information).
You could even apply "decision theory" (DT) to your question. But in order to stop hypothesis testing, DT says you must have an alternative decision available to you. So the question is: if hypothesis testing is abandoned, what is to take its place? I can't think of an answer to this question. I can only think of alternative ways to do hypothesis testing.
(NOTE: in the context of hypothesis testing, the data, sampling distribution, prior distribution, and loss function are all prior information because they are obtained prior to making the decision.)
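To make the two-decision framing concrete (added here, not in the original answer; $L_{\mathrm{I}}$ and $L_{\mathrm{II}}$ are assumed losses for the two kinds of wrong action): with posterior probabilities $P(H_0 \mid x)$ and $P(H_\mathrm{A} \mid x)$, the expected-loss-minimising rule is

$$ \text{act as if } H_0 \text{ is true} \quad\Longleftrightarrow\quad P(H_\mathrm{A}\mid x)\,L_{\mathrm{II}} \;<\; P(H_0\mid x)\,L_{\mathrm{I}}, $$

where $L_{\mathrm{II}}$ is the loss of acting as if $H_0$ when $H_\mathrm{A}$ is true, and $L_{\mathrm{I}}$ is the loss of acting as if $H_\mathrm{A}$ when $H_0$ is true.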
4,326 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | If I were a hardcore Frequentist I would remind you that confidence intervals are quite regularly just inverted hypothesis tests, i.e. the 95% interval is simply another way of describing all the points that a test involving your data wouldn't reject at the .05 level. In these situations a preference for one over the other is a question of exposition rather than method.
Now, exposition is important of course, but I think that would be a pretty good argument. It's neat and clarifying to explain the two approaches as restatements of the same inference from different points of view. (The fact that not all interval estimators are inverted tests is then an inelegant but not particularly awkward fact, pedagogically speaking).
Much more serious implications come from the decision to condition on the observations, as pointed out above. However, even in retreat the Frequentist could always observe that there are plenty of situations (perhaps not a majority) where conditioning on the observations would be unwise or unilluminating. For those, the HT/CI setup is (not 'are') exactly what is wanted, and should be taught as such.
4,327 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | In teaching Neyman Pearson hypothesis testing to early statistics students, I have often tried to locate it in its original setting: that of making decisions. Then the infrastructure of type 1 and type 2 errors all makes sense, as does the idea that you might accept the null hypothesis.
We have to make a decision, we think that the outcome of our decision can be improved by knowledge of a parameter, we only have an estimate of that parameter. We still have to make a decision. Then what is the best decision to make in the context of having an estimate of the parameter?
It seems to me that in its original setting (making decisions in the face of uncertainty) the NP hypothesis test makes perfect sense. See e.g. N & P 1933, particularly p. 291.
Neyman and Pearson. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character (1933) vol. 231 pp. 289-337 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | In teaching Neyman Pearson hypothesis testing to early statistics students, I have often tried to locate it in its original setting: that of making decisions. Then the infrastructure of type 1 and ty | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
In teaching Neyman Pearson hypothesis testing to early statistics students, I have often tried to locate it in its original setting: that of making decisions. Then the infrastructure of type 1 and type 2 errors all makes sense, as does the idea that you might accept the null hypothesis.
We have to make a decision, we think that the outcome of our decision can be improved by knowledge of a parameter, we only have an estimate of that parameter. We still have to make a decision. Then what is the best decision to make in the context of having an estimate of the parameter?
It seems to me that in its original setting (making decisions in the face of uncertainty) the NP hypothesis test makes perfect sense. See e.g. N & P 1933, particularly p. 291.
Neyman and Pearson. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character (1933) vol. 231 pp. 289-337 | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
In teaching Neyman Pearson hypothesis testing to early statistics students, I have often tried to locate it in its original setting: that of making decisions. Then the infrastructure of type 1 and ty |
4,328 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | Hypothesis testing is a useful way to frame a lot of questions: is the effect of a treatment zero or nonzero? The ability to move between statements such as these and a statistical model or procedure (including the construction of an interval estimator) is important for practitioners, I think.
It also bears mentioning that a confidence interval (in the traditional sense) isn't inherently any less "sin-prone" than hypothesis testing - how many intro stats students know the real definition of a confidence interval?
Perhaps the problem isn't hypothesis testing or interval estimation as it is the classical versions of the same; the Bayesian formulation avoids these quite nicely. | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | Hypothesis testing is a useful way to frame a lot of questions: is the effect of a treatment zero or nonzero? The ability between statements such as these and a statistical model or procedure (includi | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
Hypothesis testing is a useful way to frame a lot of questions: is the effect of a treatment zero or nonzero? The ability to move between statements such as these and a statistical model or procedure (including the construction of an interval estimator) is important for practitioners, I think.
It also bears mentioning that a confidence interval (in the traditional sense) isn't inherently any less "sin-prone" than hypothesis testing - how many intro stats students know the real definition of a confidence interval?
Perhaps the problem isn't hypothesis testing or interval estimation as it is the classical versions of the same; the Bayesian formulation avoids these quite nicely. | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
Hypothesis testing is a useful way to frame a lot of questions: is the effect of a treatment zero or nonzero? The ability between statements such as these and a statistical model or procedure (includi |
4,329 | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | The reason is decision making. In most decision making you either do it or not. You may keep looking at intervals all day long, but in the end there's a moment where you decide to do it or not.
Hypothesis testing fits nicely into this simple reality of YES/NO. | Why continue to teach and use hypothesis testing (when confidence intervals are available)? | The reason is decision making. In most decision making you either do it or not. You may keep looking at intervals all day long, in the end there's a moment where you decide to do it or not.
Hypothesi | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
The reason is decision making. In most decision making you either do it or not. You may keep looking at intervals all day long, but in the end there's a moment where you decide to do it or not.
Hypothesis testing fits nicely into this simple reality of YES/NO. | Why continue to teach and use hypothesis testing (when confidence intervals are available)?
The reason is decision making. In most decision making you either do it or not. You may keep looking at intervals all day long, in the end there's a moment where you decide to do it or not.
Hypothesi |
4,330 | What are some examples of anachronistic practices in statistics? | It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tables of critical values. Now good software will give $P$-values directly. Indeed, good software lets you customise your analysis and not depend on textbook tests.
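As a hedged illustration of the point about software reporting $P$-values directly (not part of the original answer; the data are simulated and a two-sample $t$-test is just a convenient stand-in), one would report the exact $P$-value rather than compare a statistic against a tabled critical value:

```python
# Sketch: report the exact p-value instead of checking a tabled critical value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=40)   # simulated "treatment" group
y = rng.normal(loc=0.0, scale=1.0, size=40)   # simulated "control" group

t, p = stats.ttest_ind(x, y)
print(f"t = {t:.3f}, p = {p:.4f}")            # quote the size of p itself,
                                              # not just "significant at 0.05"
```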
This is contentious if only because some significance testing problems do require decisions, as in quality control where accepting or rejecting a batch is the decision needed, followed by an action either way. But even there the thresholds to be used should grow out of a risk analysis, not depend on tradition. And often in the sciences, analysis of quantitative indications is more appropriate than decisions: thinking quantitatively implies attention to sizes of $P$-values and not just to a crude dichotomy, significant versus not significant.
I will flag that I here touch on an intricate and controversial issue which is the focus of entire books and probably thousands of papers, but it seems a fair example for this thread. | What are some examples of anachronistic practices in statistics? | It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tab | What are some examples of anachronistic practices in statistics?
It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tables of critical values. Now good software will give $P$-values directly. Indeed, good software lets you customise your analysis and not depend on textbook tests.
This is contentious if only because some significance testing problems do require decisions, as in quality control where accepting or rejecting a batch is the decision needed, followed by an action either way. But even there the thresholds to be used should grow out of a risk analysis, not depend on tradition. And often in the sciences, analysis of quantitative indications is more appropriate than decisions: thinking quantitatively implies attention to sizes of $P$-values and not just to a crude dichotomy, significant versus not significant.
I will flag that I here touch on an intricate and controversial issue which is the focus of entire books and probably thousands of papers, but it seems a fair example for this thread. | What are some examples of anachronistic practices in statistics?
It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tab |
4,331 | What are some examples of anachronistic practices in statistics? | One method that I think many visitors of this site will agree with me on is stepwise regression. It's still done all the time, but you don't have to search far for experts on this site deploring its use. A method like LASSO is much preferred. | What are some examples of anachronistic practices in statistics? | One method that I think many visitors of this site will agree with me on is stepwise regression. It's still done all the time, but you don't have to search far for experts on this site saying deplorin | What are some examples of anachronistic practices in statistics?
One method that I think many visitors of this site will agree with me on is stepwise regression. It's still done all the time, but you don't have to search far for experts on this site deploring its use. A method like LASSO is much preferred. | What are some examples of anachronistic practices in statistics?
One method that I think many visitors of this site will agree with me on is stepwise regression. It's still done all the time, but you don't have to search far for experts on this site saying deplorin |
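A minimal sketch of the LASSO alternative mentioned in the answer above (synthetic data; scikit-learn's LassoCV is one possible implementation, and all settings here are arbitrary illustrations):

```python
# Sketch: penalized selection with LASSO instead of a stepwise add/drop search.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                  # only 3 of 20 predictors matter
y = X @ beta + rng.normal(size=200)

model = LassoCV(cv=5).fit(X, y)              # penalty chosen by cross-validation
print("chosen alpha:", round(float(model.alpha_), 3))
print("nonzero coefficients at indices:", np.flatnonzero(model.coef_))
```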
4,332 | What are some examples of anachronistic practices in statistics? | My view is that at least in (applied) econometrics, it is more and more the norm to use the robust or empirical covariance matrix rather than the "anachronistic practice" of relying (asymptotically) on the correct specification of the covariance matrix. This is of course not without controversy: see some of the answers I linked here at CrossValidated, but it is certainly a clear trend.
Examples include heteroscedasticity-robust standard errors (Eicker-Huber-White standard errors). Some researchers such as Angrist and Pischke apparently advise always using heteroscedasticity-robust standard errors rather than following the "anachronistic" procedure of using conventional standard errors by default and checking whether the assumption $E[uu'] = \sigma^2 I_n$ is warranted.
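For concreteness, a minimal sketch of that comparison (simulated heteroskedastic data; statsmodels and the HC1 variant are assumed choices, not something the answer prescribes):

```python
# Sketch: conventional vs. heteroskedasticity-robust (HC1) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=500)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + 0.3 * x)   # error variance grows with x

X = sm.add_constant(x)
classical = sm.OLS(y, X).fit()                # relies on E[uu'] = sigma^2 I_n
robust = sm.OLS(y, X).fit(cov_type="HC1")     # sandwich / White standard errors

print("conventional SEs:", classical.bse)
print("robust SEs:      ", robust.bse)
```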
Other examples include panel data: Imbens and Wooldridge, for example, argue in their lecture slides against using the random effects variance-covariance matrix (implicitly assuming some misspecification in the variance component as the default):
Fully robust inference is available and should generally be used. (Note: The usual RE variance matrix, which depends only on $\sigma_c^2$ and $\sigma_u^2$, need not be correctly specified! It still makes sense to use it in estimation but make inference robust.)
When using generalized linear models (for distributions which belong to the exponential family), it is often advised to always use the so-called sandwich estimator rather than relying on correct distributional assumptions (the anachronistic practice here): see for example this answer, or Cameron on count data, because pseudo-maximum likelihood estimation can be quite flexible in the case of misspecification (e.g. using Poisson if negative binomial would be correct).
Such [White] standard error corrections must be made for Poisson regression, as they can make a much bigger difference than similar heteroskedasticity corrections for OLS.
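A hedged sketch of that advice (simulated overdispersed counts; the statsmodels GLM call with cov_type="HC0" is one way to obtain the sandwich correction, not the only one):

```python
# Sketch: Poisson pseudo-ML coefficients with sandwich (robust) standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed relative to Poisson

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson())
naive = pois.fit()                    # model-based standard errors (too small here)
robust = pois.fit(cov_type="HC0")     # sandwich standard errors, same coefficients

print("naive SEs: ", naive.bse)
print("robust SEs:", robust.bse)
```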
Greene, for example, writes in Chapter 14 of his textbook (available on his website) with a critical note, and goes into more detail about the advantages and disadvantages of this practice:
There is a trend in the current literature to compute this [sandwich] estimator routinely, regardless of the likelihood function.* [...] *We do emphasize once again that the sandwich estimator, in and of itself, is not necessarily of any virtue if the likelihood function is misspecified and the other conditions for the M estimator are not met. | What are some examples of anachronistic practices in statistics? | My view is that at least in (applied) econometrics, it is more and more the norm to use the robust or empirical covariance matrix rather than the "anachronistic practice" of relying (asymptotically) o | What are some examples of anachronistic practices in statistics?
My view is that at least in (applied) econometrics, it is more and more the norm to use the robust or empirical covariance matrix rather than the "anachronistic practice" of relying (asymptotically) on the correct specification of the covariance matrix. This is of course not without controversy: see some of the answers I linked here at CrossValidated, but it is certainly a clear trend.
Examples include heteroscedasticity-robust standard errors (Eicker-Huber-White standard errors). Some researchers such as Angrist and Pischke apparently advise always using heteroscedasticity-robust standard errors rather than following the "anachronistic" procedure of using conventional standard errors by default and checking whether the assumption $E[uu'] = \sigma^2 I_n$ is warranted.
Other examples include panel data: Imbens and Wooldridge, for example, argue in their lecture slides against using the random effects variance-covariance matrix (implicitly assuming some misspecification in the variance component as the default):
Fully robust inference is available and should generally be used. (Note: The usual RE variance matrix, which depends only on $\sigma_c^2$ and $\sigma_u^2$, need not be correctly specified! It still makes sense to use it in estimation but make inference robust.)
When using generalized linear models (for distributions which belong to the exponential family), it is often advised to always use the so-called sandwich estimator rather than relying on correct distributional assumptions (the anachronistic practice here): see for example this answer, or Cameron on count data, because pseudo-maximum likelihood estimation can be quite flexible in the case of misspecification (e.g. using Poisson if negative binomial would be correct).
Such [White] standard error corrections must be made for Poisson regression, as they can make a much bigger difference than similar heteroskedasticity corrections for OLS.
Greene, for example, writes in Chapter 14 of his textbook (available on his website) with a critical note, and goes into more detail about the advantages and disadvantages of this practice:
There is a trend in the current literature to compute this [sandwich] estimator routinely, regardless of the likelihood function.* [...] *We do emphasize once again that the sandwich estimator, in and of itself, is not necessarily of any virtue if the likelihood function is misspecified and the other conditions for the M estimator are not met. | What are some examples of anachronistic practices in statistics?
My view is that at least in (applied) econometrics, it is more and more the norm to use the robust or empirical covariance matrix rather than the "anachronistic practice" of relying (asymptotically) o |
4,333 | What are some examples of anachronistic practices in statistics? | Most anachronistic practices are probably due to the way statistics is taught and the fact that analyses are run by huge numbers of people who have only taken a couple of basic classes. We often teach a set of standard statistical ideas and procedures because they form a logical sequence of increasing conceptual sophistication that makes sense pedagogically (cf., How can we ever know the population variance?). I'm guilty of this myself: I occasionally teach stats 101 and 102, and I constantly say, 'there's a better way to do this, but it's beyond the scope of this class'. For those students who don't go on beyond the introductory sequence (almost all), they are left with basic, but superseded, strategies.
For a stats 101 example, probably the most common anachronistic practice is to test some assumption and then run a traditional statistical analysis because the test was not significant. A more modern / advanced / defensible approach would be to use a method robust to that assumption from the start. Some references for more information:
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Is normality testing 'essentially useless'?
For stats 102 examples, any number of modeling practices have been outmoded:
Transforming $Y$ to achieve normality of residuals for getting reliable $p$-values vs. bootstrapping (see the sketch after this list).
Transforming $Y$ to achieve homoscedasticity instead of using a sandwich estimator, etc.
Using a higher-order polynomial to capture curvature vs. cubic splines.
Assessing models intended for prediction using $p$-values and in-sample goodness of fit metrics like $R^2$ instead of cross-validation.
With repeated measures data, categorizing a continuous variable so that rmANOVA can be used or averaging multiple measurements vs. using a linear mixed model.
Etc.
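To make the bootstrapping item above concrete, here is a toy sketch (not from the original answer; skewed synthetic data and a simple percentile interval, with all settings arbitrary):

```python
# Sketch: percentile bootstrap CI for a mean, instead of transforming Y
# to chase normality of residuals.
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=2.0, size=60)      # clearly non-normal data

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```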
The point in all these cases is that people are doing what was taught first in an introductory class because they simply don't know more advanced and appropriate methods. | What are some examples of anachronistic practices in statistics? | Most anachronistic practices are probably due to the way statistics is taught and the fact that analyses are run by huge numbers of people who have only taken a couple of basic classes. We often teac | What are some examples of anachronistic practices in statistics?
Most anachronistic practices are probably due to the way statistics is taught and the fact that analyses are run by huge numbers of people who have only taken a couple of basic classes. We often teach a set of standard statistical ideas and procedures because they form a logical sequence of increasing conceptual sophistication that makes sense pedagogically (cf., How can we ever know the population variance?). I'm guilty of this myself: I occasionally teach stats 101 and 102, and I constantly say, 'there's a better way to do this, but it's beyond the scope of this class'. For those students who don't go on beyond the introductory sequence (almost all), they are left with basic, but superseded, strategies.
For a stats 101 example, probably the most common anachronistic practice is to test some assumption and then run a traditional statistical analysis because the test was not significant. A more modern / advanced / defensible approach would be to use a method robust to that assumption from the start. Some references for more information:
How to choose between t-test or non-parametric test e.g. Wilcoxon in small samples
Is normality testing 'essentially useless'?
For stats 102 examples, any number of modeling practices have been outmoded:
Transforming $Y$ to achieve normality of residuals for getting reliable $p$-values vs. bootstrapping.
Transforming $Y$ to achieve homoscedasticity instead of using a sandwich estimator, etc.
Using a higher-order polynomial to capture curvature vs. cubic splines.
Assessing models intended for prediction using $p$-values and in-sample goodness of fit metrics like $R^2$ instead of cross-validation.
With repeated measures data, categorizing a continuous variable so that rmANOVA can be used or averaging multiple measurements vs. using a linear mixed model.
Etc.
The point in all these cases is that people are doing what was taught first in an introductory class because they simply don't know more advanced and appropriate methods. | What are some examples of anachronistic practices in statistics?
Most anachronistic practices are probably due to the way statistics is taught and the fact that analyses are run by huge numbers of people who have only taken a couple of basic classes. We often teac |
4,334 | What are some examples of anachronistic practices in statistics? | A method that is unnecessarily used all the time is the Bonferroni correction to p-values. While multiple comparisons is as big an issue as it ever was, the Bonferroni correction is essentially obsolete for p-values: for any situation in which the Bonferroni correction is valid, so is the Holm-Bonferroni, which will have strictly higher power under the alternative if $m > 1$, where $m$ is the number of hypotheses tested (equality at $m = 1$).
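A small sketch of the comparison (made-up $p$-values; statsmodels' multipletests is one convenient implementation of both adjustments):

```python
# Sketch: Holm rejects at least as many hypotheses as Bonferroni, never fewer.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.012, 0.04, 0.20, 0.55])

rej_bonf, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
rej_holm, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")

print("Bonferroni rejections:", int(rej_bonf.sum()))   # 2 of 6 here
print("Holm rejections:      ", int(rej_holm.sum()))   # 3 of 6 here
```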
I think the reason for the persistence of the Bonferroni correction is the ease of mental use (i.e. p = 0.004 with $m = 30$ is easily adjusted to 0.12, while Holm-Bonferroni requires sorting of p-values). | What are some examples of anachronistic practices in statistics? | A method that is unnecessarily used all the time is the Bonferroni correction to p-values. While multiple comparisons is as big an issue as it ever was, the Bonferroni correction is essentially obsole | What are some examples of anachronistic practices in statistics?
A method that is unnecessarily used all the time is the Bonferroni correction to p-values. While multiple comparisons is as big an issue as it ever was, the Bonferroni correction is essentially obsolete for p-values: for any situation in which the Bonferroni correction is valid, so is the Holm-Bonferroni, which will have strictly higher power under the alternative if $m > 1$, where $m$ is the number of hypotheses tested (equality at $m = 1$).
I think the reason for the persistence of the Bonferroni correction is the ease of mental use (i.e. p = 0.004 with $m = 30$ is easily adjusted to 0.12, while Holm-Bonferroni requires sorting of p-values). | What are some examples of anachronistic practices in statistics?
A method that is unnecessarily used all the time is the Bonferroni correction to p-values. While multiple comparisons is as big an issue as it ever was, the Bonferroni correction is essentially obsole |
4,335 | What are some examples of anachronistic practices in statistics? | Paying licensing fees for high-quality statistical software systems. #R | What are some examples of anachronistic practices in statistics? | Paying licensing fees for high-quality statistical software systems. #R | What are some examples of anachronistic practices in statistics?
Paying licensing fees for high-quality statistical software systems. #R | What are some examples of anachronistic practices in statistics?
Paying licensing fees for high-quality statistical software systems. #R |
4,336 | What are some examples of anachronistic practices in statistics? | A very interesting example is unit root testing in econometrics. While there are plenty of choices available to test against or for a unit root in the lag polynomial of a time series (e.g., the (Augmented) Dickey Fuller Test or the KPSS test), the problem can be circumvented completely when one uses Bayesian analysis. Sims pointed this out in his provocative paper titled Understanding Unit Rooters: A Helicopter Tour from 1991.
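For reference, a hedged sketch of the frequentist tests named above, run on a simulated random walk (statsmodels' adfuller and kpss are assumed implementations; defaults are used otherwise):

```python
# Sketch: ADF and KPSS on a simulated random walk (which truly has a unit root).
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(size=300))        # pure random walk

adf_stat, adf_p, *_ = adfuller(y)
print(f"ADF  p-value: {adf_p:.3f}")        # large: cannot reject the unit root

kpss_stat, kpss_p, *_ = kpss(y, regression="c", nlags="auto")
print(f"KPSS p-value: {kpss_p:.3f}")       # small (reported value is bounded
                                           # below at 0.01): rejects stationarity
```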
Unit root tests remain valid and used in econometrics. While I personally would attribute this mostly to people being reluctant to adjust to Bayesian practices, many conservative econometricians defend the practice of unit root tests by saying that a Bayesian view of the world contradicts the premise of econometric research. (That is, economists think of the world as a place with fixed parameters, not random parameters that are governed by some hyperparameter.) | What are some examples of anachronistic practices in statistics? | A very interesting example are unit root tests in econometrics. While there are plenty of choices available to test against or for a unit root in the lag polynomial of a time series (e.g., the (Augmen | What are some examples of anachronistic practices in statistics?
A very interesting example is unit root testing in econometrics. While there are plenty of choices available to test against or for a unit root in the lag polynomial of a time series (e.g., the (Augmented) Dickey Fuller Test or the KPSS test), the problem can be circumvented completely when one uses Bayesian analysis. Sims pointed this out in his provocative paper titled Understanding Unit Rooters: A Helicopter Tour from 1991.
Unit root tests remain valid and used in econometrics. While I personally would attribute this mostly to people being reluctant to adjust to Bayesian practices, many conservative econometricians defend the practice of unit root tests by saying that a Bayesian view of the world contradicts the premise of econometric research. (That is, economists think of the world as a place with fixed parameters, not random parameters that are governed by some hyperparameter.) | What are some examples of anachronistic practices in statistics?
A very interesting example are unit root tests in econometrics. While there are plenty of choices available to test against or for a unit root in the lag polynomial of a time series (e.g., the (Augmen |
4,337 | What are some examples of anachronistic practices in statistics? | Teaching/conducting two-tailed tests for difference without simultaneously testing for equivalence in the frequentist realm of hypothesis testing is a deep commitment to confirmation bias.
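To make the combined difference-plus-equivalence idea concrete, a hedged sketch using two one-sided tests (TOST); the data are simulated and the equivalence margin of 0.5 is an arbitrary choice for illustration:

```python
# Sketch: pair the usual difference test with a TOST equivalence test.
import numpy as np
from scipy import stats
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(2)
a = rng.normal(0.05, 1.0, size=100)
b = rng.normal(0.00, 1.0, size=100)

_, p_diff = stats.ttest_ind(a, b)
p_equiv, _, _ = ttost_ind(a, b, low=-0.5, upp=0.5)   # margin is illustrative

print(f"difference test p:  {p_diff:.3f}")   # a non-significant difference alone
                                             # does not establish equivalence
print(f"equivalence test p: {p_equiv:.3f}")  # small p supports equivalence here
```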
There's some nuance, in that an appropriate power analysis with thoughtful definition of effect size can guard against this and provide more or less the same kinds of inferences, but (a) power analyses are so often ignored in presenting findings, and (b) I have never seen a power analysis for, for example, each coefficient estimated for each variable in a multiple regression, but it is straightforward to do so for combined tests for difference and tests for equivalence (i.e. relevance tests). | What are some examples of anachronistic practices in statistics? | Teaching/conducting two-tailed tests for difference without simultaneously testing for equivalence in the frequentist realm of hypothesis testing is a deep commitment to confirmation bias.
There's som | What are some examples of anachronistic practices in statistics?
Teaching/conducting two-tailed tests for difference without simultaneously testing for equivalence in the frequentist realm of hypothesis testing is a deep commitment to confirmation bias.
There's some nuance, in that an appropriate power analysis with thoughtful definition of effect size can guard against this and provide more or less the same kinds of inferences, but (a) power analyses are so often ignored in presenting findings, and (b) I have never seen a power analysis for, for example, each coefficient estimated for each variable in a multiple regression, but it is straightforward to do so for combined tests for difference and tests for equivalence (i.e. relevance tests). | What are some examples of anachronistic practices in statistics?
Teaching/conducting two-tailed tests for difference without simultaneously testing for equivalence in the frequentist realm of hypothesis testing is a deep commitment to confirmation bias.
There's som |
4,338 | What are some examples of anachronistic practices in statistics? | Using a Negative Binomial model rather than a (robust) Poisson model to identify a parameter of interest in a count variable, only because there is over-dispersion?
See as a reference: https://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/
The proof that Poisson is more robust in the fixed-effects case is quite recent; the reference often cited is: Wooldridge, J. M., “Distribution-Free Estimation of Some Nonlinear Panel Data Models,” Journal of Econometrics 90 (1999), 77–97. | What are some examples of anachronistic practices in statistics? | Using a Negative Binomial model rather than a (robust) Poisson model to identify a parameter of interest in a count variable, only because there is over-dispersion ?
See as a reference: https://blog.s | What are some examples of anachronistic practices in statistics?
Using a Negative Binomial model rather than a (robust) Poisson model to identify a parameter of interest in a count variable, only because there is over-dispersion?
See as a reference: https://blog.stata.com/2011/08/22/use-poisson-rather-than-regress-tell-a-friend/
The proof that Poisson is more robust in the fixed-effects case is quite recent; the reference often cited is: Wooldridge, J. M., “Distribution-Free Estimation of Some Nonlinear Panel Data Models,” Journal of Econometrics 90 (1999), 77–97. | What are some examples of anachronistic practices in statistics?
Using a Negative Binomial model rather than a (robust) Poisson model to identify a parameter of interest in a count variable, only because there is over-dispersion ?
See as a reference: https://blog.s |
4,339 | What are some examples of anachronistic practices in statistics? | Here are a few anachronisms:
The neoplatonic assumption that there is a single, "true" population out there in the theoretical ether that is eternal, fixed and unmoving against which our imperfect samples can be evaluated does little to advance learning and knowledge.
The reductionism inherent in mandates such as Occam's Razor is inconsistent with the times. OR can be summarized as, "Among competing hypotheses, the one with the fewest assumptions should be selected." Alternatives include Epicurus' Principle of Multiple Explanations, which roughly states, "If more than one theory is consistent with the data, keep them all."
The whole peer-review system is desperately in need of an overhaul.
* Edit *
With massive data containing tens of millions of features, there is no longer need for a variable selection phase.
In addition, inferential statistics are meaningless. | What are some examples of anachronistic practices in statistics? | Here are a few anachronisms:
The neoplatonic assumption that there is a single, "true" population out there in the theoretical ether that is eternal, fixed and unmoving against which our imperfect sa | What are some examples of anachronistic practices in statistics?
Here are a few anachronisms:
The neoplatonic assumption that there is a single, "true" population out there in the theoretical ether that is eternal, fixed and unmoving against which our imperfect samples can be evaluated does little to advance learning and knowledge.
The reductionism inherent in mandates such as Occam's Razor is inconsistent with the times. OR can be summarized as, "Among competing hypotheses, the one with the fewest assumptions should be selected." Alternatives include Epicurus' Principle of Multiple Explanations, which roughly states, "If more than one theory is consistent with the data, keep them all."
The whole peer-review system is desperately in need of an overhaul.
* Edit *
With massive data containing tens of millions of features, there is no longer need for a variable selection phase.
In addition, inferential statistics are meaningless. | What are some examples of anachronistic practices in statistics?
Here are a few anachronisms:
The neoplatonic assumption that there is a single, "true" population out there in the theoretical ether that is eternal, fixed and unmoving against which our imperfect sa |
4,340 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | I am trained as a statistician not as a biologist or medical doctor. But I do quite a bit of medical research (working with biologists and medical doctors), as part of my research I have learned quite a bit about treatment of several different diseases. Does this mean that if a friend asks me about a disease that I have researched that I can just write them a prescription for a medication that I know is commonly used for that particular disease? If I were to do this (I don't), then in many cases it would probably work out OK (since a medical doctor would just have prescribed the same medication), but there is always a possibility that they have an allergy/drug interaction/other that a doctor would know to ask about, that I do not and end up causing much more harm than good.
If you are doing statistics without understanding what you are assuming and what could go wrong (or consulting with a statistician along the way that will look for these things) then you are practicing statistical malpractice. Most of the time it will probably be OK, but what about the occasion where an important assumption does not hold, but you just ignore it?
I work with some doctors who are reasonably statistically competent and can do much of their own analysis, but they will still run it past me. Often I confirm that they did the correct thing and that they can do the analysis themselves (and they are generally grateful for the confirmation) but occasionally they will be doing something more complex and when I mention a better approach they will usually turn the analysis over to me or my team, or at least bring me in for a more active role.
So my answer to your title question is "No" we are not exaggerating, rather we should be stressing some things more so that laymen will be more likely to at least double check their procedures/results with a statistician.
Edit
This is an addition based on Adam's comment below (will be a bit long for another comment).
Adam, thanks for your comment. The short answer is "I don't know". I think that progress is being made in improving the statistical quality of articles, but things have moved so quickly in many different ways that it will take a while to catch up and guarantee the quality. Part of the solution is focusing on the assumptions and the consequences of the violations in intro stats courses. This is more likely to happen when the classes are taught by statisticians, but needs to happen in all classes.
Some journals are doing better, but I would like to see a specific statistician reviewer become the standard. There was an article a few years back (sorry, I don't have the reference handy, but it was in either JAMA or the New England Journal of Medicine) that showed a higher probability of being published (though not as big a difference as it should be) in JAMA or NEJM if a biostatistician or epidemiologist was one of the co-authors.
An interesting article that came out recently is: http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412 which discusses some of the same issues. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | I am trained as a statistician not as a biologist or medical doctor. But I do quite a bit of medical research (working with biologists and medical doctors), as part of my research I have learned quit | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
I am trained as a statistician not as a biologist or medical doctor. But I do quite a bit of medical research (working with biologists and medical doctors), as part of my research I have learned quite a bit about treatment of several different diseases. Does this mean that if a friend asks me about a disease that I have researched that I can just write them a prescription for a medication that I know is commonly used for that particular disease? If I were to do this (I don't), then in many cases it would probably work out OK (since a medical doctor would just have prescribed the same medication), but there is always a possibility that they have an allergy/drug interaction/other that a doctor would know to ask about, that I do not and end up causing much more harm than good.
If you are doing statistics without understanding what you are assuming and what could go wrong (or consulting with a statistician along the way that will look for these things) then you are practicing statistical malpractice. Most of the time it will probably be OK, but what about the occasion where an important assumption does not hold, but you just ignore it?
I work with some doctors who are reasonably statistically competent and can do much of their own analysis, but they will still run it past me. Often I confirm that they did the correct thing and that they can do the analysis themselves (and they are generally grateful for the confirmation) but occasionally they will be doing something more complex and when I mention a better approach they will usually turn the analysis over to me or my team, or at least bring me in for a more active role.
So my answer to your title question is "No" we are not exaggerating, rather we should be stressing some things more so that laymen will be more likely to at least double check their procedures/results with a statistician.
Edit
This is an addition based on Adam's comment below (will be a bit long for another comment).
Adam, thanks for your comment. The short answer is "I don't know". I think that progress is being made in improving the statistical quality of articles, but things have moved so quickly in many different ways that it will take a while to catch up and guarantee the quality. Part of the solution is focusing on the assumptions and the consequences of the violations in intro stats courses. This is more likely to happen when the classes are taught by statisticians, but needs to happen in all classes.
Some journals are doing better, but I would like to see a specific statistician reviewer become the standard. There was an article a few years back (sorry, I don't have the reference handy, but it was in either JAMA or the New England Journal of Medicine) that showed a higher probability of being published (though not as big a difference as it should be) in JAMA or NEJM if a biostatistician or epidemiologist was one of the co-authors.
An interesting article that came out recently is: http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412 which discusses some of the same issues. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
I am trained as a statistician not as a biologist or medical doctor. But I do quite a bit of medical research (working with biologists and medical doctors), as part of my research I have learned quit |
4,341 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | Well, yes, assumptions matter -- if they didn't matter at all, we wouldn't need to make them, would we?
The question is how much they matter -- this varies across procedures and assumptions and what you want to claim about your results (and also how tolerant your audience is of approximation -- even inaccuracy -- in such claims).
So for an example of a situation where an assumption is critical, consider the normality assumption in an F-test of variances; even fairly modest changes in distribution may have fairly dramatic effects on the properties (actual significance level and power) of the procedure. If you claim you're carrying out a test at the 5% level when it's really at the 28% level, you're in some sense doing the same kind of thing as lying about how you conducted your experiments. If you don't think such statistical issues are important, make arguments that don't rely on them. On the other hand, if you want to use the statistical information as support, you can't go about misrepresenting that support.
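A rough simulation sketch of that point (assumed setup: both samples drawn from a $t$ distribution with 5 df, so the variances are genuinely equal; exact numbers will vary with the seed):

```python
# Sketch: a nominal 5% variance-ratio F-test rejects far too often under heavy tails.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, alpha = 25, 20000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.standard_t(df=5, size=n)          # equal variances, heavy tails
    y = rng.standard_t(df=5, size=n)
    F = x.var(ddof=1) / y.var(ddof=1)
    p = 2 * min(stats.f.cdf(F, n - 1, n - 1), stats.f.sf(F, n - 1, n - 1))
    rejections += p < alpha
print("actual rejection rate:", rejections / reps)    # well above the nominal 0.05
```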
In other cases, particular assumptions may be much less critical. If you're estimating the coefficient in a linear regression and you don't care if it's statistically significant and you don't care about efficiency, well, it doesn't necessarily matter if the homoskedasticity assumption holds. But if you want to say it's statistically significant, or show a confidence interval, yes, it certainly can matter. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | Well, yes, assumptions matter -- if they didn't matter at all, we wouldn't need to make them, would we?
The question is how much they matter -- this varies across procedures and assumptions and what y | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
Well, yes, assumptions matter -- if they didn't matter at all, we wouldn't need to make them, would we?
The question is how much they matter -- this varies across procedures and assumptions and what you want to claim about your results (and also how tolerant your audience is of approximation -- even inaccuracy -- in such claims).
So for an example of a situation where an assumption is critical, consider the normality assumption in an F-test of variances; even fairly modest changes in distribution may have fairly dramatic effects on the properties (actual significance level and power) of the procedure. If you claim you're carrying out a test at the 5% level when it's really at the 28% level, you're in some sense doing the same kind of thing as lying about how you conducted your experiments. If you don't think such statistical issues are important, make arguments that don't rely on them. On the other hand, if you want to use the statistical information as support, you can't go about misrepresenting that support.
In other cases, particular assumptions may be much less critical. If you're estimating the coefficient in a linear regression and you don't care if it's statistically significant and you don't care about efficiency, well, it doesn't necessarily matter if the homoskedasticity assumption holds. But if you want to say it's statistically significant, or show a confidence interval, yes, it certainly can matter. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
Well, yes, assumptions matter -- if they didn't matter at all, we wouldn't need to make them, would we?
The question is how much they matter -- this varies across procedures and assumptions and what y |
4,342 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | While Glen_b gave a great answer, I would like to add a couple of cents to that.
One consideration is whether you really want to get the scientific truth, which would require polishing your results and figuring out all the details of whether your approach is defensible, vs. publishing in the "ah well, nobody checks these eigenvalues in my discipline anyway" mode. In other words, you'd have to ask your inner professional conscience whether you are doing the best job you could. Referring to the low statistical literacy and lax statistical practices in your discipline does not make a convincing argument. Reviewers are often at best half-helpful if they come from the same discipline with these lax standards, although some top outlets have explicit initiatives to bring statistical expertise into the review process.
But even if you are a cynical "publish-or-perish" salami slicer, the other consideration is basically the safety of your research reputation. If your model fails, and you don't know it, you are exposing yourself to the risk of rebuttal by those who can come and drive the ax into the cracks of the model checks with more refined instruments. Granted, the possibility of that appears to be low, as the science community, despite the nominal philosophical requirements of reputability and reproducibility, rarely engages in attempts to reproduce somebody else's research. (I was involved in writing a couple of papers that basically started with, "oh my God, did they really write that?", and offered a critique and a refinement of a peer-reviewed published semi-statistical approach.) However, the failures of statistical analyses, when exposed, often make big and unpleasant splashes. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | While Glen_b gave a great answer, I would like to add a couple of cents to that.
One consideration is whether you really want to get the scientific truth, which would require polishing your results an | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
While Glen_b gave a great answer, I would like to add a couple of cents to that.
One consideration is whether you really want to get the scientific truth, which would require polishing your results and figuring out all the details of whether your approach is defensible, vs. publishing in the "ah well, nobody checks these eigenvalues in my discipline anyway" mode. In other words, you'd have to ask your inner professional conscience whether you are doing the best job you could. Referring to the low statistical literacy and lax statistical practices in your discipline does not make a convincing argument. Reviewers are often at best half-helpful if they come from the same discipline with these lax standards, although some top outlets have explicit initiatives to bring statistical expertise into the review process.
But even if you are a cynical "publish-or-perish" salami slicer, the other consideration is basically the safety of your research reputation. If your model fails, and you don't know it, you are exposing yourself to the risk of rebuttal by those who can come and drive the ax into the cracks of the model checks with more refined instruments. Granted, the possibility of that appears to be low, as the science community, despite the nominal philosophical requirements of reputability and reproducibility, rarely engages in attempts to reproduce somebody else's research. (I was involved in writing a couple of papers that basically started with, "oh my God, did they really write that?", and offered a critique and a refinement of a peer-reviewed published semi-statistical approach.) However, the failures of statistical analyses, when exposed, often make big and unpleasant splashes. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
While Glen_b gave a great answer, I would like to add a couple of cents to that.
One consideration is whether you really want to get the scientific truth, which would require polishing your results an |
4,343 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | The nature of violations of assumptions can be an important clue for future research. For example, a violation of the proportional-hazards assumption in Cox survival analysis might be due to a variable with a large effect on short-term survival but little effect in the longer term. That's the type of unexpected but potentially important information you can get by examining the validity of your assumptions in a statistical test.
So you do yourself, not just the literature, a potential disservice if you don't test the underlying assumptions. As high-quality journals start requiring more sophisticated statistical review you will find yourself called on more frequently to do so. You don't want to be in a position where a test required by a statistical reviewer undermines what you thought had been a key point of your paper. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | The nature of violations of assumptions can be an important clue for future research. For example, a violation of the proportional-hazards assumption in Cox survival analysis might be due to a variabl | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
The nature of violations of assumptions can be an important clue for future research. For example, a violation of the proportional-hazards assumption in Cox survival analysis might be due to a variable with a large effect on short-term survival but little effect in the longer term. That's the type of unexpected but potentially important information you can get by examining the validity of your assumptions in a statistical test.
So you do yourself, not just the literature, a potential disservice if you don't test the underlying assumptions. As high-quality journals start requiring more sophisticated statistical review you will find yourself called on more frequently to do so. You don't want to be in a position where a test required by a statistical reviewer undermines what you thought had been a key point of your paper. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
The nature of violations of assumptions can be an important clue for future research. For example, a violation of the proportional-hazards assumption in Cox survival analysis might be due to a variabl |
4,344 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | I'll answer from an intermediate perspective. I'm not a statistician, I'm a chemist. However, I've spent the last 10 years specializing in chemometrics = statistical data analysis for chemistry-related data.
I simply believe that researchers are not doing their statistics well enough.
That is probably the case.
Short version:
Now about the assumptions. IMHO the situation here is far too heterogeneous to deal with it in one statement. Understanding of both what exactly the assumption is needed for and in which way it is likely to be violated by the application is necessary in order to judge whether the violation is harmless or critical. And this needs both the statistics as well as the application knowledge.
As a practitioner facing unachievable assumptions, however, I need something else as well: I'd like to have a "2nd line of defense" that e.g. allows me to judge whether the violation is actually causing trouble or whether it is harmless.
Long version:
From a practical point of view, some typical assumptions are almost never met. Sometimes I can formulate sensible assumptions about the data, but often then the problems become so complicated from a statistical point of view that solutions are not yet known. By now I believe that doing science means that you'll hit the borders of what is known likely not only in your particular discipline but maybe also in other disciplines (here: applied statistics).
There are other situations where certain violations are known to be usually harmless - e.g. multivariate normality with equal covariances is needed to show that LDA is optimal, but it is well known that the projection follows a heuristic that often performs well even if the assumption is not met. It is also known which violations are likely to cause trouble: heavy tails in the distribution, for instance, lead to problems with LDA in practice.
Unfortunately, such knowledge rarely makes it into the condensed writing of a paper, so the reader has no clue whether the authors did decide for their model after well considering the properties of the application as well as of the model or whether they just picked whatever model they came across.
Sometimes practical approaches (heuristics) evolve that turn out to be very useful from a practical point of view, even if it takes decades until their statistical properties are understood (I'm thinking of PLS).
The other thing that happens (and should happen more) is that the possible consequences of the violation can be monitored (measured), which allows one to decide whether there is a problem or not. For the application, maybe I don't care whether my model is optimal as long as it is sufficiently good.
In chemometrics, we have a rather strong focus on prediction. And this offers a very nice escape in case the modeling assumptions are not met: regardless of those assumptions, we can measure whether the model does work well. From a practitioner's point of view, I'd say that you are allowed to do whatever you like during your modeling, as long as you carry out and report an honest state-of-the-art validation.
For chemometric analysis of spectroscopic data, we're at a point where we don't look at residuals because we know that the models are easily overfit. Instead we look at test data performance (and possibly the difference from training data predictive performance).
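A toy sketch of that habit (simulated data with many more variables than samples, loosely mimicking a spectroscopic setting; ridge regression and the 25% hold-out split are arbitrary choices, not what any particular lab does):

```python
# Sketch: judge the model by held-out predictive performance, not by residuals.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(11)
X = rng.normal(size=(80, 200))            # 80 samples, 200 "wavelengths"
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

print("train MSE:", round(mean_squared_error(y_tr, model.predict(X_tr)), 3))
print("test MSE: ", round(mean_squared_error(y_te, model.predict(X_te)), 3))
```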
There are other situations where, while we're not able to predict precisely how much violation of which assumption leads to a breakdown of the model, we are able to measure the consequences of serious violations of the assumption rather directly.
Next example: the study data I typically deal with is orders of magnitude below the sample sizes that the statistical rules-of-thumb recommend for cases per variate (in order to guarantee stable estimates). But the statistics books typically don't care much about what to do in practice if this assumption cannot be met. Nor how to measure whether you actually are in trouble in this respect. But: such questions are treated in the more applied disciplines. Turns out, it is often quite easy to directly measure model stability or at least whether your predictions are unstable (read here on CV on resampling validation and model stability). And there are ways to stabilize unstable models (e.g. bagging).
As an example of the "2nd line of defense" consider resampling validation. The usual and strongest assumption is that all surrogate models are equivalent to a model trained on the whole data set. If this assumption is violated, we get the well-known pessimistic bias. The 2nd line is that at least the surrogate models are equivalent to each other, so we can pool the test results.
Last but not least, I'd like to encourage the "customer scientists" and the statisticians to speak more with each other. The statistical data analysis IMHO is not something that can be done in a one-way fashion. At some point, each side will need to acquire some knowledge of the other side. I sometimes help "translating" between statisticians and chemists and biologists. A statistician can know that the model needs regularization. But to choose, say, between LASSO and a ridge, they need to know properties of the data that only the chemist, physicist or biologist can know. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | I'll answer from an intermediate perspective. I'm not a statistician, I'm chemist. However, I've spent the last 10 years specializing in chemometrics = statistical data analysis for chemistry-related | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
I'll answer from an intermediate perspective. I'm not a statistician, I'm a chemist. However, I've spent the last 10 years specializing in chemometrics = statistical data analysis for chemistry-related data.
I simply believe that researchers are not doing their statistics well enough.
That is probably the case.
Short version:
Now about the assumptions. IMHO the situation here is far too heterogeneous to deal with it in one statement. Understanding of both what exactly the assumption is needed for and in which way it is likely to be violated by the application is necessary in order to judge whether the violation is harmless or critical. And this needs both the statistics as well as the application knowledge.
As a practitioner facing unachievable assumptions, however, I need something else as well: I'd like to have a "2nd line of defense" that e.g. allows me to judge whether the violation is actually causing trouble or whether it is harmless.
Long version:
From a practical point of view, some typical assumptions are almost never met. Sometimes I can formulate sensible assumptions about the data, but often then the problems become so complicated from a statistical point of view that solutions are not yet known. By now I believe that doing science means that you'll hit the borders of what is known likely not only in your particular discipline but maybe also in other disciplines (here: applied statistics).
There are other situations where certain violations are known to be usually harmless - e.g. multivariate normality with equal covariances is needed to show that LDA is optimal, but it is well known that the projection follows a heuristic that often performs well even if the assumption is not met. It is also known which violations are likely to cause trouble: heavy tails in the distribution, for instance, lead to problems with LDA in practice.
Unfortunately, such knowledge rarely makes it into the condensed writing of a paper, so the reader has no clue whether the authors did decide for their model after well considering the properties of the application as well as of the model or whether they just picked whatever model they came across.
Sometimes practical approaches (heuristics) evolve that turn out to be very useful from a practical point of view, even if it takes decades until their statistical properties are understood (I'm thinking of PLS).
The other thing that happens (and should happen more) is that the possible consequences of the violation can be monitored (measured), which allows one to decide whether there is a problem or not. For the application, maybe I don't care whether my model is optimal as long as it is sufficiently good.
In chemometrics, we have a rather strong focus on prediction. And this offers a very nice escape in case the modeling assumptions are not met: regardless of those assumptions, we can measure whether the model does work well. From a practitioner's point of view, I'd say that you are allowed to do whatever you like during your modeling if you do and report an honest state-of-the-art validation.
For chemometric analysis of spectroscopic data, we're at a point where we don't look at residuals because we know that the models are easily overfit. Instead we look at test data performance (and possibly the difference from the training data predictive performance).
There are other situations where, while we're not able to predict precisely how much violation of which assumption leads to a breakdown of the model, we are able to measure the consequences of serious violations of the assumption rather directly.
Next example: the study data I typically deal with is orders of magnitude below the sample sizes that the statistical rules-of-thumb recommend for cases per variate (in order to guarantee stable estimates). But the statistics books typically don't care much about what to do in practice if this assumption cannot be met. Nor how to measure whether you actually are in trouble in this respect. But: such questions are treated in the more applied disciplines. Turns out, it is often quite easy to directly measure model stability or at least whether your predictions are unstable (read here on CV on resampling validation and model stability). And there are ways to stabilize unstable models (e.g. bagging).
As an example of the "2nd line of defense" consider resampling validation. The usual and strongest assumption is that all surrogate models are equivalent to a model trained on the whole data set. If this assumption is violated, we get the well-known pessimistic bias. The 2nd line is that at least the surrogate models are equivalent to each other, so we can pool the test results.
Last but not least, I'd like to encourage the "customer scientists" and the statisticians to speak more with each other. The statistical data analysis IMHO is not something that can be done in a one-way fashion. At some point, each side will need to acquire some knowledge of the other side. I sometimes help "translating" between statisticians and chemists and biologists. A statistician can know that the model needs regularization. But to choose, say, between LASSO and a ridge, they need to know properties of the data that only the chemist, physicist or biologist can know. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
I'll answer from an intermediate perspective. I'm not a statistician, I'm chemist. However, I've spent the last 10 years specializing in chemometrics = statistical data analysis for chemistry-related |
4,345 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | Given that CV is populated by statisticians and people who are curious, if not competent, about statistics, I am not surprised about all the answers emphasizing the need to understand the assumptions. I also agree with these answers in principle.
However, when taking into account the pressure to publish and the currently low standard for statistical integrity, I have to say that these answers are quite naive. We can tell people what they should do all day long (i.e. check your assumptions), but what they will do depends solely on the institutional incentives. The OP himself states that he managed to publish 20 articles without understanding the model's assumptions. Given my own experience, I don't find this hard to believe.
Thus I want to play the devil's advocate, directly answering OP's question. This is by no means an answer that promotes "good practice," but it is one that reflects how things are practised with a hint of satire.
Is it worth the extra effort?
No, if the goal is to publish, it's not worth it to spend all the time understanding the model. Just follow the prevalent model in the literature. That way, 1) your paper will pass reviews more easily, and 2) the risk of being exposed for "statistical incompetence" is small, because exposing you means exposing the entire field, including many senior people.
Is it not likely that the majority of all published results do not respect these assumptions and perhaps have not even assessed them? This is probably a growing issue since databases grow larger every day and there is a notion that the bigger the data, the less important is the assumptions and evaluations.
Yes, it's likely that most published results are not true. The more involved I am in actual research, the more I think it is likely. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | Given that CV is populated by statisticians and people who are curious, if not competent, about statistics, I am not surprised about all the answers emphasizing the need to understand the assumptions. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
Given that CV is populated by statisticians and people who are curious, if not competent, about statistics, I am not surprised about all the answers emphasizing the need to understand the assumptions. I also agree with these answers in principle.
However, when taking into account the pressure to publish and the currently low standard for statistical integrity, I have to say that these answers are quite naive. We can tell people what they should do all day long (i.e. check your assumptions), but what they will do depends solely on the institutional incentives. The OP himself states that he managed to publish 20 articles without understanding the model's assumptions. Given my own experience, I don't find this hard to believe.
Thus I want to play the devil's advocate, directly answering OP's question. This is by no means an answer that promotes "good practice," but it is one that reflects how things are practised with a hint of satire.
Is it worth the extra effort?
No, if the goal is to publish, it's not worth it to spend all the time understanding the model. Just follow the prevalent model in the literature. That way, 1) your paper will pass reviews more easily, and 2) the risk of being exposed for "statistical incompetence" is small, because exposing you means exposing the entire field, including many senior people.
Is it not likely that the majority of all published results do not respect these assumptions and perhaps have not even assessed them? This is probably a growing issue since databases grow larger every day and there is a notion that the bigger the data, the less important is the assumptions and evaluations.
Yes, it's likely that most published results are not true. The more involved I am in actual research, the more I think it is likely. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
Given that CV is populated by statisticians and people who are curious, if not competent, about statistics, I am not surprised about all the answers emphasizing the need to understand the assumptions. |
4,346 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | The short answer is "no." Statistical methods were developed under sets of assumptions that should be met for the results to be valid. It stands to reason, then, that if the assumptions were not met, the results may not be valid. Of course, some estimates may still be robust despite violations of model assumptions. For example, multinomial logit appears to perform well despite violations of the IIA assumption (see Kropko's [2011] dissertation in the reference below).
As scientists, we have an obligation to ensure that the results we put out there are valid, even if the people in the field don't care whether assumptions have been met. This is because science is built on the assumption that scientists will do things the right way in their pursuit of the facts. We trust our colleagues to check their work before sending it out to the journals. We trust the referees to competently review a manuscript before it gets published. We assume that both the researchers and the referees know what they are doing, so that the results in papers that are published in peer-reviewed journals can be trusted. We know this is not always true in the real world based on the sheer number of articles in the literature where you end up shaking your head and rolling your eyes at the obviously cherry-picked results in respectable journals ("JAMA published this paper?!").
So no, the importance cannot be overstated, especially since people trust you--the expert--to have done your due diligence. The least you can do is talk about these violations in the "limitations" section of your paper to help people interpret the validity of your results.
Reference
Kropko, J. 2011. New Approaches to Discrete Choice and Time-Series Cross-Section Methodology for Political Research (dissertation). UNC-Chapel Hill, Chapel Hill, NC. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | The short answer is "no." Statistical methods were developed under sets of assumptions that should be met for the results to be valid. It stands to reason, then, that if the assumptions were not met, | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
The short answer is "no." Statistical methods were developed under sets of assumptions that should be met for the results to be valid. It stands to reason, then, that if the assumptions were not met, the results may not be valid. Of course, some estimates may still be robust despite violations of model assumptions. For example, multinomial logit appears to perform well despite violations of the IIA assumption (see Kropko's [2011] dissertation in the reference below).
As scientists, we have an obligation to ensure that the results we put out there are valid, even if the people in the field don't care whether assumptions have been met. This is because science is built on the assumption that scientists will do things the right way in their pursuit of the facts. We trust our colleagues to check their work before sending it out to the journals. We trust the referees to competently review a manuscript before it gets published. We assume that both the researchers and the referees know what they are doing, so that the results in papers that are published in peer-reviewed journals can be trusted. We know this is not always true in the real world based on the sheer number of articles in the literature where you end up shaking your head and rolling your eyes at the obviously cherry-picked results in respectable journals ("JAMA published this paper?!").
So no, the importance cannot be overstated, especially since people trust you--the expert--to have done your due diligence. The least you can do is talk about these violations in the "limitations" section of your paper to help people interpret the validity of your results.
Reference
Kropko, J. 2011. New Approaches to Discrete Choice and Time-Series Cross-Section Methodology for Political Research (dissertation). UNC-Chapel Hill, Chapel Hill, NC. | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
The short answer is "no." Statistical methods were developed under sets of assumptions that should be met for the results to be valid. It stands to reason, then, that if the assumptions were not met, |
4,347 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | If you need very advanced statistics it's most likely because your data is a mess, which is the case with most social sciences, not to mention psychology. In those fields where you have good data you need very little statistics. Physics is a very good example.
Consider this quote from Galileo on his famous gravitational acceleration experiment:
A piece of wooden moulding or scantling, about 12 cubits long, half a
cubit wide, and three finger-breadths thick, was taken; on its edge
was cut a channel a little more than one finger in breadth; having
made this groove very straight, smooth, and polished, and having lined
it with parchment, also as smooth and polished as possible, we rolled
along it a hard, smooth, and very round bronze ball. Having placed
this board in a sloping position, by raising one end some one or two
cubits above the other, we rolled the ball, as I was just saying,
along the channel, noting, in a manner presently to be described, the
time required to make the descent. We repeated this experiment more
than once in order to measure the time with an accuracy such that the
deviation between two observations never exceeded one-tenth of a
pulse-beat. Having performed this operation and having assured
ourselves of its reliability, we now rolled the ball only one-quarter
the length of the channel; and having measured the time of its
descent, we found it precisely one-half of the former. Next we tried
other distances, compared the time for the whole length with that for
the half, or with that for two-thirds, or three-fourths, or indeed for
any fraction; in such experiments, repeated a full hundred times, we
always found that the spaces traversed were to each other as the
squares of the times, and this was true for all inclinations of the
plane, i.e., of the channel, along which we rolled the ball. We also
observed that the times of descent, for various inclinations of the
plane, bore to one another precisely that ratio which, as we shall see
later, the Author had predicted and demonstrated for them.
For the measurement of time, we employed a large vessel of water
placed in an elevated position; to the bottom of this vessel was
soldered a pipe of small diameter giving a thin jet of water which we
collected in a small glass during the time of each descent, whether
for the whole length of the channel or for part of its length; the
water thus collected was weighed, after each descent, on a very
accurate balance; the differences and ratios of these weights gave us
the differences and ratios of the times, and this with such accuracy
that although the operation was repeated many, many times, there was
no appreciable discrepancy in the results.
Note the text I highlighted. This is what good data is. It comes from a well-planned experiment based on a good theory. You don't need statistics to extract what you're interested in. There were no statistics at that time, nor were there computers. The outcome? A pretty fundamental relationship, which still holds, and can be tested at home by a 6th grader.
I stole the quote from this awesome page.
UPDATE:
To @Silverfish comment, here's an example of statistics in experimental particle physics. Pretty basic, huh? Barely over MBA level. Note, how they love $\chi^2$ :) Take that, statisticians! | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | If you need very advanced statistics it's most likely because your data is a mess, which is the case with most social sciences, not to mention psychology. In those fields where you have good data you | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
If you need very advanced statistics it's most likely because your data is a mess, which is the case with most social sciences, not to mention psychology. In those fields where you have good data you need very little statistics. Physics is a very good example.
Consider this quote from Galileo on his famous gravitational acceleration experiment:
A piece of wooden moulding or scantling, about 12 cubits long, half a
cubit wide, and three finger-breadths thick, was taken; on its edge
was cut a channel a little more than one finger in breadth; having
made this groove very straight, smooth, and polished, and having lined
it with parchment, also as smooth and polished as possible, we rolled
along it a hard, smooth, and very round bronze ball. Having placed
this board in a sloping position, by raising one end some one or two
cubits above the other, we rolled the ball, as I was just saying,
along the channel, noting, in a manner presently to be described, the
time required to make the descent. We repeated this experiment more
than once in order to measure the time with an accuracy such that the
deviation between two observations never exceeded one-tenth of a
pulse-beat. Having performed this operation and having assured
ourselves of its reliability, we now rolled the ball only one-quarter
the length of the channel; and having measured the time of its
descent, we found it precisely one-half of the former. Next we tried
other distances, compared the time for the whole length with that for
the half, or with that for two-thirds, or three-fourths, or indeed for
any fraction; in such experiments, repeated a full hundred times, we
always found that the spaces traversed were to each other as the
squares of the times, and this was true for all inclinations of the
plane, i.e., of the channel, along which we rolled the ball. We also
observed that the times of descent, for various inclinations of the
plane, bore to one another precisely that ratio which, as we shall see
later, the Author had predicted and demonstrated for them.
For the measurement of time, we employed a large vessel of water
placed in an elevated position; to the bottom of this vessel was
soldered a pipe of small diameter giving a thin jet of water which we
collected in a small glass during the time of each descent, whether
for the whole length of the channel or for part of its length; the
water thus collected was weighed, after each descent, on a very
accurate balance; the differences and ratios of these weights gave us
the differences and ratios of the times, and this with such accuracy
that although the operation was repeated many, many times, there was
no appreciable discrepancy in the results.
Note the text I highlighted. This is what good data is. It comes from a well-planned experiment based on a good theory. You don't need statistics to extract what you're interested in. There were no statistics at that time, nor were there computers. The outcome? A pretty fundamental relationship, which still holds, and can be tested at home by a 6th grader.
I stole the quote from this awesome page.
UPDATE:
To @Silverfish comment, here's an example of statistics in experimental particle physics. Pretty basic, huh? Barely over MBA level. Note, how they love $\chi^2$ :) Take that, statisticians! | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
If you need very advanced statistics it's most likely because your data is a mess, which is the case with most social sciences, not to mention psychology. In those fields where you have good data you |
4,348 | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen | This question seems to be a case of professional integrity.
The problem seems to be that either:
(a) there isn't enough critical assessment of statistical analysis by lay persons or
(b) common knowledge is insufficient to identify statistical errors (like a Type II error)?
I know enough about my area of expertise to request an expert's input when I am near the boundary of that expertise. I have seen people use things like the F-test (and R-squared in Excel) without sufficient knowledge.
In my experience, the education systems, in our eagerness to promote statistics, have over-simplified the tools and understated the risks / limits. Is this a common theme that others have experienced and would explain the situation? | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often | This question seems to be a case of professional integrity.
The problem seems to be that either:
(a) there isn't enough critical assessment of statistical analysis by lay persons or
(b) a case of comm | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often carried out by laymen
This question seems to be a case of professional integrity.
The problem seems to be that either:
(a) there isn't enough critical assessment of statistical analysis by lay persons or
(b) common knowledge is insufficient to identify statistical errors (like a Type II error)?
I know enough about my area of expertise to request an expert's input when I am near the boundary of that expertise. I have seen people use things like the F-test (and R-squared in Excel) without sufficient knowledge.
In my experience, the education systems, in our eagerness to promote statistics, have over-simplified the tools and understated the risks / limits. Is this a common theme that others have experienced and would explain the situation? | Are we exaggerating importance of model assumption and evaluation in an era when analyses are often
This question seems to be a case of professional integrity.
The problem seems to be that either:
(a) there isn't enough critical assessment of statistical analysis by lay persons or
(b) a case of comm |
4,349 | Which has the heavier tail, lognormal or gamma? | The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution function $F$. More specifically, because $F$ must rise asymptotically to $1$ for large arguments $x$ (by the Law of Total Probability), we are interested in how rapidly it approaches that asymptote: we need to investigate the behavior of its survival function $1- F(x)$ as $x \to \infty$.
Specifically, one distribution $F$ for a random variable $X$ is "heavier" than another one $G$ provided that eventually $F$ has more probability at large values than $G$. This can be formalized: there must exist a finite number $x_0$ such that for all $x \gt x_0$, $${\Pr}_F(X\gt x) = 1 - F(x) \gt 1 - G(x) = {\Pr}_G(X\gt x).$$
The red curve in this figure is the survival function for a Poisson$(3)$ distribution. The blue curve is for a Gamma$(3)$ distribution, which has the same variance. Eventually the blue curve always exceeds the red curve, showing that this Gamma distribution has a heavier tail than this Poisson distribution. These distributions cannot readily be compared using densities, because the Poisson distribution has no density.
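The figure itself is not reproduced here, but the comparison is easy to redraw. A short R sketch, assuming "Gamma$(3)$" means shape 3 with scale 1 (which gives the same variance of 3 as the Poisson$(3)$):
x <- seq(0, 25, by = 0.1)
surv_pois  <- ppois(floor(x), lambda = 3, lower.tail = FALSE)       # P(X > x), a step function
surv_gamma <- pgamma(x, shape = 3, scale = 1, lower.tail = FALSE)
plot(x, surv_pois, type = "s", log = "y", col = "red", xlab = "x", ylab = "P(X > x)")
lines(x, surv_gamma, col = "blue")
# beyond some x0 the blue (Gamma) survival curve stays above the red (Poisson) one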
It is true that when the densities $f$ and $g$ exist and $f(x) \gt g(x)$ for $x \gt x_0$ then $F$ is heavier-tailed than $G$. However, the converse is false--and this is a compelling reason to base the definition of tail heaviness on survival functions rather than densities, even if often the analysis of tails may be more easily carried out using the densities.
Counter-examples can be constructed by taking a discrete distribution $H$ of positive unbounded support that nevertheless is no heavier-tailed than $G$ (discretizing $G$ will do the trick). Turn this into a continuous distribution by replacing the probability mass of $H$ at each of its support points $k$, written $h(k)$, by (say) a scaled Beta$(2,2)$ distribution with support on a suitable interval $[k-\varepsilon(k), k+\varepsilon(k)]$ and weighted by $h(k)$. Given a small positive number $\delta,$ choose $\varepsilon(k)$ sufficiently small to ensure that the peak density of this scaled Beta distribution exceeds $f(k)/\delta$. By construction, the mixture $\delta H + (1-\delta )G$ is a continuous distribution $G^\prime$ whose tail looks like that of $G$ (it is uniformly a tiny bit lower by an amount $\delta$) but has spikes in its density at the support of $H$ and all those spikes have points where they exceed the density of $f$. Thus $G^\prime$ is lighter-tailed than $F$ but no matter how far out in the tail we go there will be points where its density exceeds that of $F$.
The red curve is the PDF of a Gamma distribution $G$, the gold curve is the PDF of a lognormal distribution $F$, and the blue curve (with spikes) is the PDF of a mixture $G^\prime$ constructed as in the counterexample. (Notice the logarithmic density axis.) The survival function of $G^\prime$ is close to that of a Gamma distribution (with rapidly decaying wiggles): it will eventually grow less than that of $F$, even though its PDF will always spike above that of $F$ no matter how far out into the tails we look.
Discussion
Incidentally, we can perform this analysis directly on the survival functions of lognormal and Gamma distributions, expanding them around $x=\infty$ to find their asymptotic behavior, and conclude that all lognormals have heavier tails than all Gammas. But, because these distributions have "nice" densities, the analysis is more easily carried out by showing that for sufficiently large $x$, a lognormal density exceeds a Gamma density. Let us not, however, confuse this analytical convenience with the meaning of a heavy tail.
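A quick numerical illustration of that conclusion (the parameter values below are arbitrary; any lognormal/Gamma pair behaves the same way): compare the log survival functions far out in the right tail.
x <- 10^(1:8)
log_surv_lnorm <- plnorm(x, meanlog = 0, sdlog = 1, lower.tail = FALSE, log.p = TRUE)
log_surv_gamma <- pgamma(x, shape = 2, rate = 1, lower.tail = FALSE, log.p = TRUE)
cbind(x, log_ratio = log_surv_lnorm - log_surv_gamma)   # grows without bound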
Similarly, although higher moments and their variants (such as skewness and kurtosis) say a little about the tails, they do not provide sufficient information. As a simple example, we may truncate any lognormal distribution at such a large value that any given number of its moments will scarcely change--but in so doing we will have removed its tail entirely, making it lighter-tailed than any distribution with unbounded support (such as a Gamma).
A fair objection to these mathematical contortions would be to point out that behavior so far out in the tail has no practical application, because nobody would ever believe that any distributional model will be valid at such extreme (perhaps physically unattainable) values. That shows, however, that in applications we ought to take some care to identify which portion of the tail is of concern and analyze it accordingly. (Flood recurrence times, for instance, can be understood in this sense: 10-year floods, 100-year floods, and 1000-year floods characterize particular sections of the tail of the flood distribution.) The same principles apply, though: the fundamental object of analysis here is the distribution function and not its density. | Which has the heavier tail, lognormal or gamma? | The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution fu | Which has the heavier tail, lognormal or gamma?
The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution function $F$. More specifically, because $F$ must rise asymptotically to $1$ for large arguments $x$ (by the Law of Total Probability), we are interested in how rapidly it approaches that asymptote: we need to investigate the behavior of its survival function $1- F(x)$ as $x \to \infty$.
Specifically, one distribution $F$ for a random variable $X$ is "heavier" than another one $G$ provided that eventually $F$ has more probability at large values than $G$. This can be formalized: there must exist a finite number $x_0$ such that for all $x \gt x_0$, $${\Pr}_F(X\gt x) = 1 - F(x) \gt 1 - G(x) = {\Pr}_G(X\gt x).$$
The red curve in this figure is the survival function for a Poisson$(3)$ distribution. The blue curve is for a Gamma$(3)$ distribution, which has the same variance. Eventually the blue curve always exceeds the red curve, showing that this Gamma distribution has a heavier tail than this Poisson distribution. These distributions cannot readily be compared using densities, because the Poisson distribution has no density.
It is true that when the densities $f$ and $g$ exist and $f(x) \gt g(x)$ for $x \gt x_0$ then $F$ is heavier-tailed than $G$. However, the converse is false--and this is a compelling reason to base the definition of tail heaviness on survival functions rather than densities, even if often the analysis of tails may be more easily carried out using the densities.
Counter-examples can be constructed by taking a discrete distribution $H$ of positive unbounded support that nevertheless is no heavier-tailed than $G$ (discretizing $G$ will do the trick). Turn this into a continuous distribution by replacing the probability mass of $H$ at each of its support points $k$, written $h(k)$, by (say) a scaled Beta$(2,2)$ distribution with support on a suitable interval $[k-\varepsilon(k), k+\varepsilon(k)]$ and weighted by $h(k)$. Given a small positive number $\delta,$ choose $\varepsilon(k)$ sufficiently small to ensure that the peak density of this scaled Beta distribution exceeds $f(k)/\delta$. By construction, the mixture $\delta H + (1-\delta )G$ is a continuous distribution $G^\prime$ whose tail looks like that of $G$ (it is uniformly a tiny bit lower by an amount $\delta$) but has spikes in its density at the support of $H$ and all those spikes have points where they exceed the density of $f$. Thus $G^\prime$ is lighter-tailed than $F$ but no matter how far out in the tail we go there will be points where its density exceeds that of $F$.
The red curve is the PDF of a Gamma distribution $G$, the gold curve is the PDF of a lognormal distribution $F$, and the blue curve (with spikes) is the PDF of a mixture $G^\prime$ constructed as in the counterexample. (Notice the logarithmic density axis.) The survival function of $G^\prime$ is close to that of a Gamma distribution (with rapidly decaying wiggles): it will eventually grow less than that of $F$, even though its PDF will always spike above that of $F$ no matter how far out into the tails we look.
Discussion
Incidentally, we can perform this analysis directly on the survival functions of lognormal and Gamma distributions, expanding them around $x=\infty$ to find their asymptotic behavior, and conclude that all lognormals have heavier tails than all Gammas. But, because these distributions have "nice" densities, the analysis is more easily carried out by showing that for sufficiently large $x$, a lognormal density exceeds a Gamma density. Let us not, however, confuse this analytical convenience with the meaning of a heavy tail.
Similarly, although higher moments and their variants (such as skewness and kurtosis) say a little about the tails, they do not provide sufficient information. As a simple example, we may truncate any lognormal distribution at such a large value that any given number of its moments will scarcely change--but in so doing we will have removed its tail entirely, making it lighter-tailed than any distribution with unbounded support (such as a Gamma).
A fair objection to these mathematical contortions would be to point out that behavior so far out in the tail has no practical application, because nobody would ever believe that any distributional model will be valid at such extreme (perhaps physically unattainable) values. That shows, however, that in applications we ought to take some care to identify which portion of the tail is of concern and analyze it accordingly. (Flood recurrence times, for instance, can be understood in this sense: 10-year floods, 100-year floods, and 1000-year floods characterize particular sections of the tail of the flood distribution.) The same principles apply, though: the fundamental object of analysis here is the distribution function and not its density. | Which has the heavier tail, lognormal or gamma?
The (right) tail of a distribution describes its behavior at large values. The correct object to study is not its density--which in many practical cases does not exist--but rather its distribution fu |
4,350 | Which has the heavier tail, lognormal or gamma? | The gamma and the lognormal are both right skew, constant-coefficient-of-variation distributions on $(0,\infty)$, and they're often the basis of "competing" models for particular kinds of phenomena.
There are various ways to define the heaviness of a tail, but in this case I think all the usual ones show that the lognormal is heavier. (What the first person might have been talking about is what goes on not in the far tail, but a little to the right of the mode (say, around the 75th percentile on the first plot below, which for the lognormal is just below 5 and the gamma just above 5).)
However, let's just explore the question in a very simple way to begin.
Below are gamma and lognormal densities with mean 4 and variance 4 (top plot - gamma is dark green, lognormal is blue), and then the log of the density (bottom), so you can compare the trends in the tails:
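The plots are not included in this dump, but they are straightforward to reproduce. A short R sketch, with the parameters obtained by matching mean 4 and variance 4 by the method of moments:
shape <- 4^2 / 4;  scale <- 4 / 4            # gamma: shape 4, scale 1
sdlog <- sqrt(log(1 + 4 / 4^2))              # lognormal: sigma^2 = log(1.25)
meanlog <- log(4) - sdlog^2 / 2
x <- seq(0.01, 40, by = 0.01)
dg <- dgamma(x, shape = shape, scale = scale)
dl <- dlnorm(x, meanlog = meanlog, sdlog = sdlog)
op <- par(mfrow = c(2, 1))
plot(x, dg, type = "l", col = "darkgreen", ylab = "density")
lines(x, dl, col = "blue")
plot(x, log(dg), type = "l", col = "darkgreen", ylab = "log density")
lines(x, log(dl), col = "blue")
par(op)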
It's hard to see much detail in the top plot, because all the action is to the right of 10. But it's quite clear in the second plot, where the gamma is heading down much more rapidly than the lognormal.
Another way to explore the relationship is to look at the density of the logs, as in the answer here; we see that the density of the logs for the lognormal is symmetric (it's normal!), and that for the gamma is left-skew, with a light tail on the right.
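A small sketch of this view as well (same moment-matched mean-4, variance-4 parameters; the density of $\log X$ comes from the change of variables $f_{\log}(y) = f_X(e^y)\,e^y$):
shape <- 4; scale <- 1
sdlog <- sqrt(log(1.25)); meanlog <- log(4) - sdlog^2 / 2
y <- seq(-6, 4, by = 0.01)
f_log_gamma <- dgamma(exp(y), shape = shape, scale = scale) * exp(y)   # left-skew, light right tail
f_log_lnorm <- dnorm(y, mean = meanlog, sd = sdlog)                    # exactly normal
plot(y, f_log_gamma, type = "l", col = "darkgreen",
     ylim = range(f_log_gamma, f_log_lnorm), ylab = "density of log(X)")
lines(y, f_log_lnorm, col = "blue")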
We can do it algebraically, where we can look at the ratio of densities as $x\rightarrow\infty$ (or the log of the ratio). Let $g$ be a gamma density and $f$ lognormal:
$$\log(g(x)/f(x)) = \log(g(x)) - \log(f(x))$$
$$=\log\left(\frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}\right)-\log\left(\frac{1}{\sqrt{2\pi}\sigma x}e^{-\frac{(\log(x)-\mu)^2}{2\sigma^2}}\right)$$
$$=-k_1+(\alpha-1)\log(x)-x/\beta - (-k_2-\log(x)-\frac{(\log(x)-\mu)^2}{2\sigma^2})$$
$$=\left[c+\alpha\log(x)+\frac{(\log(x)-\mu)^2}{2\sigma^2}\right]-x/\beta $$
The term in the [ ] is a quadratic in $\log(x)$, while the remaining term is decreasing linearly in $x$. No matter what, that $-x/\beta$ will eventually go down faster than the quadratic increases irrespective of what the parameter values are. In the limit as $x\rightarrow\infty$, the log of the ratio of densities is decreasing toward $-\infty$, which means the gamma pdf is eventually much smaller than the lognormal pdf, and it keeps decreasing, relatively. If you take the ratio the other way (with lognormal on top), it eventually must increase beyond any bound.
That is, any given lognormal is eventually heavier tailed than any gamma.
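A quick numerical check of that conclusion, using the same mean-4, variance-4 parameters as in the sketch above (any other choice gives the same qualitative picture):
shape <- 4; scale <- 1
sdlog <- sqrt(log(1.25)); meanlog <- log(4) - sdlog^2 / 2
x <- c(10, 50, 100, 500, 1000)
log_ratio <- dlnorm(x, meanlog, sdlog, log = TRUE) -
             dgamma(x, shape = shape, scale = scale, log = TRUE)
cbind(x, log_ratio)    # the log of f_lognormal / f_gamma keeps increasing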
Other definitions of heaviness:
Some people are interested in skewness or kurtosis to measure the heaviness of the right tail. At a given coefficient of variation, the lognormal is both more skew and has higher kurtosis than the gamma.**
For example, with skewness, the gamma has a skewness of 2CV while the lognormal is 3CV + CV$^3$.
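Both formulas are easy to check numerically; a small sketch (CV = 0.5 is an arbitrary choice, and the closed-form values are compared against sample skewness from simulation):
set.seed(3)
cv <- 0.5
shape <- 1 / cv^2                  # gamma: CV = 1/sqrt(shape)
sdlog <- sqrt(log(1 + cv^2))       # lognormal: CV = sqrt(exp(sdlog^2) - 1)
skew <- function(z) mean((z - mean(z))^3) / sd(z)^3
c(gamma_formula = 2 * cv,        gamma_sim = skew(rgamma(1e6, shape = shape)),
  lnorm_formula = 3 * cv + cv^3, lnorm_sim = skew(rlnorm(1e6, sdlog = sdlog)))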
There are some technical definitions of various measures of how heavy the tails are here. You might like to try some of those with these two distributions. The lognormal is an interesting special case in the first definition - all its moments exist, but its MGF doesn't converge above 0, while the MGF for the Gamma does converge in a neighborhood around zero.
--
** As Nick Cox mentions below, the usual transformation to approximate normality for the gamma, the Wilson-Hilferty transformation, is weaker than the log - it's a cube root transformation. At small values of the shape parameter, the fourth root has been mentioned instead (see the discussion in this answer), but in either case it's a weaker transformation to achieve near-normality.
The comparison of skewness (or kurtosis) doesn't suggest any necessary relationship in the extreme tail - it instead tells us something about average behavior; but it may for that reason work better if the original point was not being made about the extreme tail.
Resources: It's easy to use programs like R or Minitab or Matlab or Excel or whatever you like to draw densities and log-densities and logs of ratios of densities ... and so on, to see how things go in particular cases. That's what I'd suggest to start with. | Which has the heavier tail, lognormal or gamma? | The gamma and the lognormal are both right skew, constant-coefficient-of-variation distributions on $(0,\infty)$, and they're often the basis of "competing" models for particular kinds of phenomena.
T | Which has the heavier tail, lognormal or gamma?
The gamma and the lognormal are both right skew, constant-coefficient-of-variation distributions on $(0,\infty)$, and they're often the basis of "competing" models for particular kinds of phenomena.
There are various ways to define the heaviness of a tail, but in this case I think all the usual ones show that the lognormal is heavier. (What the first person might have been talking about is what goes on not in the far tail, but a little to the right of the mode (say, around the 75th percentile on the first plot below, which for the lognormal is just below 5 and the gamma just above 5.)
However, let's just explore the question in a very simple way to begin.
Below are gamma and lognormal densities with mean 4 and variance 4 (top plot - gamma is dark green, lognormal is blue), and then the log of the density (bottom), so you can compare the trends in the tails:
It's hard to see much detail in the top plot, because all the action is to the right of 10. But it's quite clear in the second plot, where the gamma is heading down much more rapidly than the lognormal.
Another way to explore the relationship is to look at the density of the logs, as in the answer here; we see that the density of the logs for the lognormal is symmetric (it's normal!), and that for the gamma is left-skew, with a light tail on the right.
We can do it algebraically, where we can look at the ratio of densities as $x\rightarrow\infty$ (or the log of the ratio). Let $g$ be a gamma density and $f$ lognormal:
$$\log(g(x)/f(x)) = \log(g(x)) - \log(f(x))$$
$$=\log\left(\frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}\right)-\log\left(\frac{1}{\sqrt{2\pi}\sigma x}e^{-\frac{(\log(x)-\mu)^2}{2\sigma^2}}\right)$$
$$=-k_1+(\alpha-1)\log(x)-x/\beta - (-k_2-\log(x)-\frac{(\log(x)-\mu)^2}{2\sigma^2})$$
$$=\left[c+\alpha\log(x)+\frac{(\log(x)-\mu)^2}{2\sigma^2}\right]-x/\beta $$
The term in the [ ] is a quadratic in $\log(x)$, while the remaining term is decreasing linearly in $x$. No matter what, that $-x/\beta$ will eventually go down faster than the quadratic increases irrespective of what the parameter values are. In the limit as $x\rightarrow\infty$, the log of the ratio of densities is decreasing toward $-\infty$, which means the gamma pdf is eventually much smaller than the lognormal pdf, and it keeps decreasing, relatively. If you take the ratio the other way (with lognormal on top), it eventually must increase beyond any bound.
That is, any given lognormal is eventually heavier tailed than any gamma.
Other definitions of heaviness:
Some people are interested in skewness or kurtosis to measure the heaviness of the right tail. At a given coefficient of variation, the lognormal is both more skew and has higher kurtosis than the gamma.**
For example, with skewness, the gamma has a skewness of 2CV while the lognormal is 3CV + CV$^3$.
There are some technical definitions of various measures of how heavy the tails are here. You might like to try some of those with these two distributions. The lognormal is an interesting special case in the first definition - all its moments exist, but its MGF doesn't converge above 0, while the MGF for the Gamma does converge in a neighborhood around zero.
--
** As Nick Cox mentions below, the usual transformation to approximate normality for the gamma, the Wilson-Hilferty transformation, is weaker than the log - it's a cube root transformation. At small values of the shape parameter, the fourth root has been mentioned instead see the discussion in this answer, but in either case it's a weaker transformation to achieve near-normality.
The comparison of skewness (or kurtosis) doesn't suggest any necessary relationship in the extreme tail - it instead tells us something about average behavior; but it may for that reason work better if the original point was not being made about the extreme tail.
Resources: It's easy to use programs like R or Minitab or Matlab or Excel or whatever you like to draw densities and log-densities and logs of ratios of densities ... and so on, to see how things go in particular cases. That's what I'd suggest to start with. | Which has the heavier tail, lognormal or gamma?
The gamma and the lognormal are both right skew, constant-coefficient-of-variation distributions on $(0,\infty)$, and they're often the basis of "competing" models for particular kinds of phenomena.
T |
4,351 | Which has the heavier tail, lognormal or gamma? | Although kurtosis is a related to the heaviness of tails, it would contribute more to the notion of fat tailed distributions, and relatively less to tail heaviness itself, as the following example shows. Herein, I now regurgitate what I have learned in the posts above and below, which are really excellent comments. First, the area of a right tail is the area from x to $\infty$ of a $f(x)$ density function, A.K.A. the survival function, $1-F(t)$. For the lognormal distribution $\frac{e^{-\frac{(\log (x)-\mu )^2}{2 \sigma ^2}}}{\sqrt{2 \pi } \sigma x};x\geq 0$ and the gamma distribution $\frac{\beta ^{\alpha } x^{\alpha -1} e^{-\beta x}}{\Gamma (\alpha )};x\geq 0$, let us compare their respective survival functions $\frac{1}{2} \text{erfc}\left(\frac{ \log (x)-\mu}{\sqrt{2} \sigma}\right)$ and $Q(\alpha ,\beta x)=\frac{\Gamma (\alpha , \beta x)}{\Gamma (\alpha )}$ graphically. To do this, I arbitrarily set their respective variances $\left(e^{\sigma ^2}-1\right) e^{2 \mu +\sigma ^2}$ and $\frac{\alpha }{\beta ^2}$, as well as their respective excess kurtoses $3 e^{2 \sigma ^2}+2 e^{3 \sigma ^2}+e^{4 \sigma ^2}-6$ and $\frac{6}{\alpha }$ equal by choosing $\mu =0, \sigma =0.8$ and solved for $\alpha \to 0.19128,\beta \to 0.335421$. This shows
the survival function for the lognormal distribution (LND) in blue and the gamma distribution (GD) in orange. This brings us to our first caution. That is, if this plot were all we were to examine, we might conclude that the tail for GD is heavier than for LND. That this is not the case is shown by extending the x-axis values of the plot, thus
This plot shows that 1) even with equal kurtoses, the right tail areas of LND and GD can differ. 2) That graphic interpretation alone has its dangers, as it can only display results for fixed parameter values over a limited range. Thus, there is a need to find general expressions for the limiting survival function ratio of $\lim_{x\to \infty } \, \frac{S(\text{LND},x)}{S(\text{GD},x)}$. I was unable to do this with infinite series expansions. However, I was able to do this by using the intermediary of terminal or asymptotic functions, which are not unique functions and where for right hand tails then $\lim_{x\to \infty } \, \frac{F(x)}{G(x)}=1$ is sufficient for $F(x)$ and $G(x)$ to be mutually asymptotic. With appropriate care taken to finding these functions, this has the potential to identify a subset of simpler functions than the survival functions themselves, that can be shared or held in common with more than one density function, for example, two different density functions may share a limiting exponential tail. In the prior version of this post, this is what I was referring to as the "added complexity of comparing survival functions." Note that, $\lim_{u\to \infty } \, \frac{\text{erfc}(u)}{\frac{e^{-u^2}}{\sqrt{\pi } u}}=1$ and $\lim_{u\to \infty } \, \frac{\Gamma (\alpha ,u)}{e^{-u} u^{\alpha -1}}=1$ (Incidentally and not necessarily $\text{erfc}(u)<\frac{e^{-u^2}}{\sqrt{\pi } u}$ and $\Gamma (\alpha ,u )<e^{-u} u^{\alpha -1}$. That is, it is not necessary to choose an upper bound, just an asymptotic function). Here we write $\frac{1}{2} \text{erfc}\left(\frac{\log (x)-\mu }{\sqrt{2} \sigma }\right)<\frac{e^{-\left(\frac{\log (x)-\mu }{\sqrt{2} \sigma }\right)^2}}{\frac{2 \left(\sqrt{\pi } (\log (x)-\mu )\right)}{\sqrt{2} \sigma }}$ and $\frac{\Gamma (\alpha ,\beta x)}{\Gamma (\alpha )}<\frac{e^{-\text{$\beta $x}} (\beta x)^{\alpha -1}}{\Gamma (\alpha )}$ where the ratio of the right hand terms has the same limit as $x\to \infty$ as the left hand terms. Simplifying the limiting ratio of right hand terms yields $\lim_{x\to \infty } \, \frac{\sigma \Gamma (\alpha ) (\beta x)^{1-\alpha } e^{\beta x-\frac{(\mu -\log (x))^2}{2 \sigma ^2}}}{\sqrt{2 \pi } (\log (x)-\mu )}=\infty$ meaning that for x sufficiently large, the LND tail area is as large as we like compared to the GD tail area, irrespective of what the parameter values are. That brings up another problem, we do not always have solutions that are true for all parameter values, thus, using graphic illustrations alone can be misleading. For example, the gamma distribution right tail area is greater than the exponential distribution's tail area when $\alpha < 1$, less than exponential when $\alpha >1$ and the GD is exactly an exponential distribution when $\alpha =1$.
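The parameter matching and the late crossover of the survival functions can be verified directly. A short R sketch, reusing the stated values $\mu = 0$, $\sigma = 0.8$ (here $\beta$ is the rate parameter of the gamma density as written above):
mu <- 0; sigma <- 0.8
exkurt_ln <- exp(4 * sigma^2) + 2 * exp(3 * sigma^2) + 3 * exp(2 * sigma^2) - 6
var_ln <- (exp(sigma^2) - 1) * exp(2 * mu + sigma^2)
alpha <- 6 / exkurt_ln            # approximately 0.1913
beta <- sqrt(alpha / var_ln)      # approximately 0.3354
c(alpha = alpha, beta = beta)
x <- c(1, 5, 10, 100, 1000)
rbind(lognormal = plnorm(x, mu, sigma, lower.tail = FALSE),
      gamma     = pgamma(x, shape = alpha, rate = beta, lower.tail = FALSE))
# in the mid-range the gamma tail can sit above the lognormal one,
# but far enough out the lognormal survival function dominates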
What then is the use of taking the logarithms of the ratio of survival functions, since we obviously do not need to take logarithms to find a limiting ratio? Many distribution function contain exponential terms that look simpler when the logarithm is taken, and if the ratio goes to infinity in the limit as x increases, then the logarithm will do so as well. In our case, that would allow us to inspect $\lim_{x\to \infty } \, \left(\log \left(\frac{\sigma \Gamma (\alpha ) (\beta x)^{1-\alpha }}{\sqrt{2 \pi } (\log (x)-\mu )}\right)+\beta x-\frac{(\mu -\log (x))^2}{2 \sigma ^2}\right)=\infty$, which some people would find simpler to look at. Lastly, if the ratio of survival functions goes to zero, then the logarithm of that ratio will go to $-\infty$, and in all cases after finding the limit of a logarithm of a ratio, we have to take the antilogarithm of that value to understand its relationship to the limiting value of the ordinary ratio of survival function.
Edit 2020-02-18: BTW, there is a lot of literature on classifying tail heaviness of functions that, in effect, assumes (incorrectly) that one can compare hazard functions while ignoring the requirement for having an indeterminate form to do so. There does not seem to be much literature in support of the methods of survival function comparison outlined herein, at least that I could find. However, there is a recent publication appendix that may be cite-worthy. Any other references for the methods outlined herein would be greatly appreciated. | Which has the heavier tail, lognormal or gamma? | Although kurtosis is a related to the heaviness of tails, it would contribute more to the notion of fat tailed distributions, and relatively less to tail heaviness itself, as the following example sho | Which has the heavier tail, lognormal or gamma?
Although kurtosis is a related to the heaviness of tails, it would contribute more to the notion of fat tailed distributions, and relatively less to tail heaviness itself, as the following example shows. Herein, I now regurgitate what I have learned in the posts above and below, which are really excellent comments. First, the area of a right tail is the area from x to $\infty$ of a $f(x)$ density function, A.K.A. the survival function, $1-F(t)$. For the lognormal distribution $\frac{e^{-\frac{(\log (x)-\mu )^2}{2 \sigma ^2}}}{\sqrt{2 \pi } \sigma x};x\geq 0$ and the gamma distribution $\frac{\beta ^{\alpha } x^{\alpha -1} e^{-\beta x}}{\Gamma (\alpha )};x\geq 0$, let us compare their respective survival functions $\frac{1}{2} \text{erfc}\left(\frac{ \log (x)-\mu}{\sqrt{2} \sigma}\right)$ and $Q(\alpha ,\beta x)=\frac{\Gamma (\alpha , \beta x)}{\Gamma (\alpha )}$ graphically. To do this, I arbitrarily set their respective variances $\left(e^{\sigma ^2}-1\right) e^{2 \mu +\sigma ^2}$ and $\frac{\alpha }{\beta ^2}$, as well as their respective excess kurtoses $3 e^{2 \sigma ^2}+2 e^{3 \sigma ^2}+e^{4 \sigma ^2}-6$ and $\frac{6}{\alpha }$ equal by choosing $\mu =0, \sigma =0.8$ and solved for $\alpha \to 0.19128,\beta \to 0.335421$. This shows
the survival function for the lognormal distribution (LND) in blue and the gamma distribution (GD) in orange. This brings us to our first caution. That is, if this plot were all we were to examine, we might conclude that the tail for GD is heavier than for LND. That this is not the case is shown by extending the x-axis values of the plot, thus
This plot shows that 1) even with equal kurtoses, the right tail areas of LND and GD can differ. 2) That graphic interpretation alone has its dangers, as it can only display results for fixed parameter values over a limited range. Thus, there is a need to find general expressions for the limiting survival function ratio of $\lim_{x\to \infty } \, \frac{S(\text{LND},x)}{S(\text{GD},x)}$. I was unable to do this with infinite series expansions. However, I was able to do this by using the intermediary of terminal or asymptotic functions, which are not unique functions and where for right hand tails then $\lim_{x\to \infty } \, \frac{F(x)}{G(x)}=1$ is sufficient for $F(x)$ and $G(x)$ to be mutually asymptotic. With appropriate care taken to finding these functions, this has the potential to identify a subset of simpler functions than the survival functions themselves, that can be shared or held in common with more than one density function, for example, two different density functions may share a limiting exponential tail. In the prior version of this post, this is what I was referring to as the "added complexity of comparing survival functions." Note that, $\lim_{u\to \infty } \, \frac{\text{erfc}(u)}{\frac{e^{-u^2}}{\sqrt{\pi } u}}=1$ and $\lim_{u\to \infty } \, \frac{\Gamma (\alpha ,u)}{e^{-u} u^{\alpha -1}}=1$ (Incidentally and not necessarily $\text{erfc}(u)<\frac{e^{-u^2}}{\sqrt{\pi } u}$ and $\Gamma (\alpha ,u )<e^{-u} u^{\alpha -1}$. That is, it is not necessary to choose an upper bound, just an asymptotic function). Here we write $\frac{1}{2} \text{erfc}\left(\frac{\log (x)-\mu }{\sqrt{2} \sigma }\right)<\frac{e^{-\left(\frac{\log (x)-\mu }{\sqrt{2} \sigma }\right)^2}}{\frac{2 \left(\sqrt{\pi } (\log (x)-\mu )\right)}{\sqrt{2} \sigma }}$ and $\frac{\Gamma (\alpha ,\beta x)}{\Gamma (\alpha )}<\frac{e^{-\text{$\beta $x}} (\beta x)^{\alpha -1}}{\Gamma (\alpha )}$ where the ratio of the right hand terms has the same limit as $x\to \infty$ as the left hand terms. Simplifying the limiting ratio of right hand terms yields $\lim_{x\to \infty } \, \frac{\sigma \Gamma (\alpha ) (\beta x)^{1-\alpha } e^{\beta x-\frac{(\mu -\log (x))^2}{2 \sigma ^2}}}{\sqrt{2 \pi } (\log (x)-\mu )}=\infty$ meaning that for x sufficiently large, the LND tail area is as large as we like compared to the GD tail area, irrespective of what the parameter values are. That brings up another problem, we do not always have solutions that are true for all parameter values, thus, using graphic illustrations alone can be misleading. For example, the gamma distribution right tail area is greater than the exponential distribution's tail area when $\alpha < 1$, less than exponential when $\alpha >1$ and the GD is exactly an exponential distribution when $\alpha =1$.
What then is the use of taking the logarithms of the ratio of survival functions, since we obviously do not need to take logarithms to find a limiting ratio? Many distribution function contain exponential terms that look simpler when the logarithm is taken, and if the ratio goes to infinity in the limit as x increases, then the logarithm will do so as well. In our case, that would allow us to inspect $\lim_{x\to \infty } \, \left(\log \left(\frac{\sigma \Gamma (\alpha ) (\beta x)^{1-\alpha }}{\sqrt{2 \pi } (\log (x)-\mu )}\right)+\beta x-\frac{(\mu -\log (x))^2}{2 \sigma ^2}\right)=\infty$, which some people would find simpler to look at. Lastly, if the ratio of survival functions goes to zero, then the logarithm of that ratio will go to $-\infty$, and in all cases after finding the limit of a logarithm of a ratio, we have to take the antilogarithm of that value to understand its relationship to the limiting value of the ordinary ratio of survival function.
Edit 2020-02-18: BTW, there is a lot of literature on classifying tail heaviness of functions that, in effect, assumes (incorrectly) that one can compare hazard functions while ignoring the requirement for having an indeterminate form to do so. There does not seem to be much literature in support of the methods of survival function comparison outlined herein, at least that I could find. However, there is a recent publication appendix that may be cite-worthy. Any other references for the methods outlined herein would be greatly appreciated. | Which has the heavier tail, lognormal or gamma?
Although kurtosis is a related to the heaviness of tails, it would contribute more to the notion of fat tailed distributions, and relatively less to tail heaviness itself, as the following example sho |
4,352 | Software for drawing bayesian networks (graphical models) | I currently have a similar problem (drawing multiple path diagrams for my dissertation), and so I was examining many of the options listed here already to draw similar diagrams. Many of the listed resources for drawing such vector graphics (such as in microsoft office or google drawings) can produce really nice path diagrams, with fairly minimal effort. But, part of the reason I was unsatisfied with such programs is that I needed to produce many diagrams, with only fairly minor changes between each diagram (e.g. add another node, change a label). The point and click vector graphics tools aren't well suited for this, and take more effort than need be to make such minor changes. Also it becomes difficult to maintain a template between many drawings. So, I decided to examine options to produce such graphics programattically.
Graphviz, as was already mentioned by thias, came really close to having all the bells and whistles I wanted for my graphics (as well as quite simple code to produce them), but it fell short for my needs in two ways: 1) mathematical fonts are lacking (e.g. I'm not sure if you can label a node with the $\beta$ symbol in Graphviz); 2) curved lines are hard to draw (see this post on drawing path diagrams using Graphviz on @Stask's website). Because of these limitations I have currently settled (very happily) on using the TikZ/pgf drawing library in LaTeX. An example is below of my attempt at reproducing your graphic (the biggest pain was the labels in the lower right corners of the boxes!):
\documentclass[11pt]{report}
\usepackage{tikz}
\usetikzlibrary{fit,positioning}
\begin{document}
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{main}=[circle, minimum size = 10mm, thick, draw =black!80, node distance = 16mm]
\tikzstyle{connect}=[-latex, thick]
\tikzstyle{box}=[rectangle, draw=black!100]
\node[main, fill = white!100] (alpha) [label=below:$\alpha$] { };
\node[main] (theta) [right=of alpha,label=below:$\theta$] { };
\node[main] (z) [right=of theta,label=below:z] {};
\node[main] (beta) [above=of z,label=below:$\beta$] { };
\node[main, fill = black!10] (w) [right=of z,label=below:w] { };
\path (alpha) edge [connect] (theta)
(theta) edge [connect] (z)
(z) edge [connect] (w)
(beta) edge [connect] (w);
\node[rectangle, inner sep=0mm, fit= (z) (w),label=below right:N, xshift=13mm] {};
\node[rectangle, inner sep=4.4mm,draw=black!100, fit= (z) (w)] {};
\node[rectangle, inner sep=4.6mm, fit= (z) (w),label=below right:M, xshift=12.5mm] {};
\node[rectangle, inner sep=9mm, draw=black!100, fit = (theta) (z) (w)] {};
\end{tikzpicture}
\end{figure}
\end{document}
%note - compiled with pdflatex
Now, I am already writing up my dissertation in LaTeX, so if you just want the image without having to compile a whole LaTeX document it is slightly inconvenient, but there are some fairly minor workarounds to produce an image more directly (see this question over on stackoverflow). There are a host of other benefits to using TikZ for such a project, though:
Extensive documentation. The pgf manual holds your hand through making some similar diagrams. And once you get your feet wet...
A huge library of examples is there to demonstrate how to produce a huge variety of graphics.
And finally, the Tex stack exchange site is a good place to ask any questions about Tikz. They have some wizzes over there making some pretty fancy graphics (check out their blog for some examples).
At this time I have not considered some of the libraries for drawing the diagrams in the statistical package R directly from the specified models, but in the future I may consider them to a greater extent. There are some nice examples from the qgraph library for a proof of concept of what can be accomplished in R. | Software for drawing bayesian networks (graphical models) | I currently have a similar problem (drawing multiple path diagrams for my dissertation), and so I was examining many of the options listed here already to draw similar diagrams. Many of the listed res | Software for drawing bayesian networks (graphical models)
I currently have a similar problem (drawing multiple path diagrams for my dissertation), and so I was examining many of the options listed here already to draw similar diagrams. Many of the listed resources for drawing such vector graphics (such as in microsoft office or google drawings) can produce really nice path diagrams, with fairly minimal effort. But, part of the reason I was unsatisfied with such programs is that I needed to produce many diagrams, with only fairly minor changes between each diagram (e.g. add another node, change a label). The point and click vector graphics tools aren't well suited for this, and take more effort than need be to make such minor changes. Also it becomes difficult to maintain a template between many drawings. So, I decided to examine options to produce such graphics programattically.
Graphviz, as was already mentioned by thias, came really close to having all the bells and whistles I wanted for my graphics (as well as quite simple code to produce them), but it fell short for my needs in two ways; 1) mathematical fonts are lacking (e.g. I'm not sure if you can label a node with the $\beta$ symbol in Graphviz, 2) curved lines are hard to draw (see this post on drawing path diagrams using Graphviz on @Stask's website). Because of these limitations I have currently settled (very happily) on using the Tikz/pgf drawing library in Latex. An example is below of my attempt at reproducing your graphic (the biggest pain was the labels in the lower right corners of the boxes!);
\documentclass[11pt]{report}
\usepackage{tikz}
\usetikzlibrary{fit,positioning}
\begin{document}
\begin{figure}
\centering
\begin{tikzpicture}
\tikzstyle{main}=[circle, minimum size = 10mm, thick, draw =black!80, node distance = 16mm]
\tikzstyle{connect}=[-latex, thick]
\tikzstyle{box}=[rectangle, draw=black!100]
\node[main, fill = white!100] (alpha) [label=below:$\alpha$] { };
\node[main] (theta) [right=of alpha,label=below:$\theta$] { };
\node[main] (z) [right=of theta,label=below:z] {};
\node[main] (beta) [above=of z,label=below:$\beta$] { };
\node[main, fill = black!10] (w) [right=of z,label=below:w] { };
\path (alpha) edge [connect] (theta)
(theta) edge [connect] (z)
(z) edge [connect] (w)
(beta) edge [connect] (w);
\node[rectangle, inner sep=0mm, fit= (z) (w),label=below right:N, xshift=13mm] {};
\node[rectangle, inner sep=4.4mm,draw=black!100, fit= (z) (w)] {};
\node[rectangle, inner sep=4.6mm, fit= (z) (w),label=below right:M, xshift=12.5mm] {};
\node[rectangle, inner sep=9mm, draw=black!100, fit = (theta) (z) (w)] {};
\end{tikzpicture}
\end{figure}
\end{document}
%note - compiled with pdflatex
Now, I am already writing up my dissertation in Latex, so if you just want the image without having to compile a whole Latex document it is slightly inconveniant, but there are some fairly minor workarounds to produce an image more directly (see this question over on stackoverflow). There are a host of other benifits to using Tikz for such a project though
Extensive documentation. The pgf manual holds your hand through making some similar diagrams. And once you get your feet wet...
A huge library of examples is there to demonstrate how to produce a huge variety of graphics.
And finally, the Tex stack exchange site is a good place to ask any questions about Tikz. They have some wizzes over there making some pretty fancy graphics (check out their blog for some examples).
At this time I have not considered some of the libraries for drawing the diagrams in the statistical package R directly from the specified models, but in the future I may consider them to a greater extent. There are some nice examples from the qgraph library for a proof of concept of what can be accomplished in R. | Software for drawing bayesian networks (graphical models)
I currently have a similar problem (drawing multiple path diagrams for my dissertation), and so I was examining many of the options listed here already to draw similar diagrams. Many of the listed res |
4,353 | Software for drawing bayesian networks (graphical models) | Laura Dietz has written a very nice library for tikz that enables drawing of Bayesian Networks in latex without needing to actually use tikz directly.
To demonstrate this package, see the following example for this question:
\documentclass[11pt]{report}
\usepackage{tikz}
\usetikzlibrary{bayesnet}
\begin{document}
\begin{figure}
\centering
\tikz{ %
\node[latent] (alpha) {$\alpha$} ; %
\node[latent, right=of alpha] (theta) {$\theta$} ; %
\node[latent, right=of theta] (z) {z} ; %
\node[latent, above=of z] (beta) {$\beta$} ; %
\node[obs, right=of z] (w) {w} ; %
\plate[inner sep=0.25cm, xshift=-0.12cm, yshift=0.12cm] {plate1} {(z) (w)} {N}; %
\plate[inner sep=0.25cm, xshift=-0.12cm, yshift=0.12cm] {plate2} {(theta) (plate1)} {M}; %
\edge {alpha} {theta} ; %
\edge {theta} {z} ; %
\edge {z,beta} {w} ; %
}
\end{figure}
\end{document}
%note - compiled with pdflatex
While not exactly the same, it certainly conveys the same information and could be tweaked to better fit specific requirements. This package generates very acceptable figures without needing to learn the full tikz package. | Software for drawing bayesian networks (graphical models) | Laura Dietz has written a very nice library for tikz that enables drawing of Bayesian Networks in latex without needing to actually use tikz directly.
To demonstrate this package, see the following ex | Software for drawing bayesian networks (graphical models)
Laura Dietz has written a very nice library for tikz that enables drawing of Bayesian Networks in latex without needing to actually use tikz directly.
To demonstrate this package, see the following example for this question:
\documentclass[11pt]{report}
\usepackage{tikz}
\usetikzlibrary{bayesnet}
\begin{document}
\begin{figure}
\centering
\tikz{ %
\node[latent] (alpha) {$\alpha$} ; %
\node[latent, right=of alpha] (theta) {$\theta$} ; %
\node[latent, right=of theta] (z) {z} ; %
\node[latent, above=of z] (beta) {$\beta$} ; %
\node[obs, right=of z] (w) {w} ; %
\plate[inner sep=0.25cm, xshift=-0.12cm, yshift=0.12cm] {plate1} {(z) (w)} {N}; %
\plate[inner sep=0.25cm, xshift=-0.12cm, yshift=0.12cm] {plate2} {(theta) (plate1)} {M}; %
\edge {alpha} {theta} ; %
\edge {theta} {z} ; %
\edge {z,beta} {w} ; %
}
\end{figure}
\end{document}
%note - compiled with pdflatex
While not exactly the same, it certainly conveys the same information and could be tweaked to better fit specific requirements. This package generates very acceptable figures without needing to learn the full tikz package. | Software for drawing bayesian networks (graphical models)
Laura Dietz has written a very nice library for tikz that enables drawing of Bayesian Networks in latex without needing to actually use tikz directly.
To demonstrate this package, see the following ex |
4,354 | Software for drawing bayesian networks (graphical models) | You can't beat http://daft-pgm.org/
Daft is a Python package that uses matplotlib to render pixel-perfect probabilistic graphical models for publication in a journal or on the internet. With a short Python script and an intuitive model-building syntax you can design directed (Bayesian Networks, directed acyclic graphs) and undirected (Markov random fields) models and save them in any formats that matplotlib supports (including PDF, PNG, EPS and SVG). | Software for drawing bayesian networks (graphical models) | You can't beat http://daft-pgm.org/
Daft is a Python package that uses matplotlib to render pixel-perfect probabilistic graphical models for publication in a journal or on the internet. With a short | Software for drawing bayesian networks (graphical models)
You can't beat http://daft-pgm.org/
Daft is a Python package that uses matplotlib to render pixel-perfect probabilistic graphical models for publication in a journal or on the internet. With a short Python script and an intuitive model-building syntax you can design directed (Bayesian Networks, directed acyclic graphs) and undirected (Markov random fields) models and save them in any formats that matplotlib supports (including PDF, PNG, EPS and SVG). | Software for drawing bayesian networks (graphical models)
You can't beat http://daft-pgm.org/
Daft is a Python package that uses matplotlib to render pixel-perfect probabilistic graphical models for publication in a journal or on the internet. With a short |
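As an illustration of the short-script workflow the daft answer above describes, here is a minimal sketch that redraws the same LDA-style plate diagram as the TikZ examples earlier. The node coordinates, plate rectangles and output filename are my own choices, and exact constructor signatures can differ a little between daft versions, so treat this as a starting point rather than canonical usage:
# pip install daft   (daft draws with matplotlib under the hood)
import daft

pgm = daft.PGM(shape=[4.5, 2.5], origin=[0.0, 0.0])

pgm.add_node(daft.Node("alpha", r"$\alpha$", 0.5, 1.0))
pgm.add_node(daft.Node("theta", r"$\theta$", 1.5, 1.0))
pgm.add_node(daft.Node("z", r"$z$", 2.5, 1.0))
pgm.add_node(daft.Node("beta", r"$\beta$", 2.5, 2.0))
pgm.add_node(daft.Node("w", r"$w$", 3.5, 1.0, observed=True))   # shaded (observed) node

for parent, child in [("alpha", "theta"), ("theta", "z"), ("z", "w"), ("beta", "w")]:
    pgm.add_edge(parent, child)

pgm.add_plate(daft.Plate([2.1, 0.55, 1.85, 0.95], label=r"$N$"))   # inner plate over z, w
pgm.add_plate(daft.Plate([1.1, 0.35, 3.0, 1.35], label=r"$M$"))    # outer plate over theta, z, w

pgm.render()
pgm.figure.savefig("lda_daft.png", dpi=150)
Because the figure comes from a script, adding a node or renaming a label is a one-line change, which is exactly the maintenance advantage discussed in the TikZ answer above.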
4,355 | Software for drawing bayesian networks (graphical models) | You could try GraphViz.
This allows you to specify the graph in a text-file and it will be drawn automatically (avoiding overlapping arrows and so on). Go here (pdf) for a minimal example and a manual. | Software for drawing bayesian networks (graphical models) | You could try GraphViz.
This allows you to specify the graph in a text-file and it will be drawn automatically (avoiding overlapping arrows and so on). Go here (pdf) for a minimal example and a manual | Software for drawing bayesian networks (graphical models)
You could try GraphViz.
This allows you to specify the graph in a text-file and it will be drawn automatically (avoiding overlapping arrows and so on). Go here (pdf) for a minimal example and a manual. | Software for drawing bayesian networks (graphical models)
You could try GraphViz.
This allows you to specify the graph in a text-file and it will be drawn automatically (avoiding overlapping arrows and so on). Go here (pdf) for a minimal example and a manual |
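To make the text-specification idea above concrete while staying in Python, here is a rough sketch using the graphviz Python binding (it generates DOT text and calls the separately installed Graphviz tools). The node names, attributes and output filename are illustrative choices, and the plain-text labels reflect the limitation on math labels mentioned in the TikZ answer earlier:
# pip install graphviz   (the Graphviz binaries must also be on the PATH to render)
from graphviz import Digraph

g = Digraph("bayes_net", format="png")
g.attr(rankdir="LR")                       # lay the chain out left to right

for name in ["alpha", "theta", "z", "beta"]:
    g.node(name, shape="circle")
g.node("w", shape="circle", style="filled", fillcolor="lightgrey")   # observed node

for tail, head in [("alpha", "theta"), ("theta", "z"), ("z", "w"), ("beta", "w")]:
    g.edge(tail, head)

print(g.source)          # the DOT text, which could equally live in a plain text file
g.render("bayes_net")    # writes bayes_net.png next to the generated DOT source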
4,356 | Software for drawing bayesian networks (graphical models) | Inkscape is essentially a free version of Adobe Illustrator, and is a very strong program for doing vector graphics, like the picture you posted. It also plays rather nicely with most statistical packages for doing final edits/annotations/etc. to graphs - R, SAS, etc. can output a graph as a PDF or other vector format (like .eps), and then you can bring it in to Inkscape to mess about with colors, symbols, axis labels etc. | Software for drawing bayesian networks (graphical models) | Inkscape is essentially a free version of Adobe Illustrator, and is a very strong program for doing vector graphics, like the picture you posted. It also plays rather nicely with most statistical pack | Software for drawing bayesian networks (graphical models)
Inkscape is essentially a free version of Adobe Illustrator, and is a very strong program for doing vector graphics, like the picture you posted. It also plays rather nicely with most statistical packages for doing final edits/annotations/etc. to graphs - R, SAS, etc. can output a graph as a PDF or other vector format (like .eps), and then you can bring it in to Inkscape to mess about with colors, symbols, axis labels etc. | Software for drawing bayesian networks (graphical models)
Inkscape is essentially a free version of Adobe Illustrator, and is a very strong program for doing vector graphics, like the picture you posted. It also plays rather nicely with most statistical pack |
4,357 | Software for drawing bayesian networks (graphical models) | If you have a particular interest in using LaTeX, the LaTeXDraw program has some nice functionality for creating flow charts with embedded latex code.
It imports / exports PSTricks code and SVG, and can also export svg, pdf, eps, jpg, png, etc. It runs in Linux, Mac OS X, and Windows. | Software for drawing bayesian networks (graphical models) | If you have a particular interest in using LaTeX, the LaTeXDraw program has some nice functionality for creating flow charts with embedded latex code.
It imports / exports PSTricks code and SVG, and | Software for drawing bayesian networks (graphical models)
If you have a particular interest in using LaTeX, the LaTeXDraw program has some nice functionality for creating flow charts with embedded latex code.
It imports / exports PSTricks code and SVG, and can also export svg, pdf, eps, jpg, png, etc. It runs in Linux, Mac OS X, and Windows. | Software for drawing bayesian networks (graphical models)
If you have a particular interest in using LaTeX, the LaTeXDraw program has some nice functionality for creating flow charts with embedded latex code.
It imports / exports PSTricks code and SVG, and |
4,358 | Software for drawing bayesian networks (graphical models) | Dia is a free open source program for drawing diagrams. I find it useful and it's not too difficult to get started. | Software for drawing bayesian networks (graphical models) | Dia is a free open source program for drawing diagrams. I find it useful and it's not too difficult to get started. | Software for drawing bayesian networks (graphical models)
Dia is a free open source program for drawing diagrams. I find it useful and it's not too difficult to get started. | Software for drawing bayesian networks (graphical models)
Dia is a free open source program for drawing diagrams. I find it useful and it's not too difficult to get started. |
4,359 | Software for drawing bayesian networks (graphical models) | I have found Diagrammix to be a very flexible package, available for Mac OS X. It is a well rounded vector graphics package and does a good job at graphical models. It is fairly inexpensive and has some good add-ons that have helped improve the shapes and directions of edges. | Software for drawing bayesian networks (graphical models) | I have found Diagrammix to be a very flexible package, available for Mac OS X. It is a well rounded vector graphics package and does a good job at graphical models. It is fairly inexpensive and has | Software for drawing bayesian networks (graphical models)
I have found Diagrammix to be a very flexible package, available for Mac OS X. It is a well rounded vector graphics package and does a good job at graphical models. It is fairly inexpensive and has some good add-ons that have helped improve the shapes and directions of edges. | Software for drawing bayesian networks (graphical models)
I have found Diagrammix to be a very flexible package, available for Mac OS X. It is a well rounded vector graphics package and does a good job at graphical models. It is fairly inexpensive and has |
4,360 | Software for drawing bayesian networks (graphical models) | You could try Google Docs Draw. It looks like it will do what you want for free, right in your browser. | Software for drawing bayesian networks (graphical models) | You could try Google Docs Draw. It looks like it will do what you want for free, right in your browser. | Software for drawing bayesian networks (graphical models)
You could try Google Docs Draw. It looks like it will do what you want for free, right in your browser. | Software for drawing bayesian networks (graphical models)
You could try Google Docs Draw. It looks like it will do what you want for free, right in your browser. |
4,361 | Software for drawing bayesian networks (graphical models) | You can go for PlantUML. It is open source and quite flexible. | Software for drawing bayesian networks (graphical models) | You can go for PlantUML. It is open source and quite flexible. | Software for drawing bayesian networks (graphical models)
You can go for PlantUML. It is open source and quite flexible. | Software for drawing bayesian networks (graphical models)
You can go for PlantUML. It is open source and quite flexible. |
4,362 | Software for drawing bayesian networks (graphical models) | You can also use the Lucidchart webapp.
I've used it in the past for drawing graphs and it's free. | Software for drawing bayesian networks (graphical models) | You can also use the Lucidchart webapp.
I've used it in the past for drawing graphs and it's free. | Software for drawing bayesian networks (graphical models)
You can also use the Lucidchart webapp.
I've used it in the past for drawing graphs and it's free. | Software for drawing bayesian networks (graphical models)
You can also use the Lucidchart webapp.
I've used it in the past for drawing graphs and it's free. |
4,363 | Software for drawing bayesian networks (graphical models) | SCAVIS has a Bayesian network. Try to google "scavis baysian network".
The same program can draw different diagrams using Python (or Java) syntax. | Software for drawing bayesian networks (graphical models) | SCAVIS has a Bayesian network. Try to google "scavis baysian network".
The same program can draw different diagrams using Python (or Java) syntax. | Software for drawing bayesian networks (graphical models)
SCAVIS has a Bayesian network. Try to google "scavis baysian network".
The same program can draw different diagrams using Python (or Java) syntax. | Software for drawing bayesian networks (graphical models)
SCAVIS has a Bayesian network. Try to google "scavis baysian network".
The same program can draw different diagrams using Python (or Java) syntax. |
4,364 | Software for drawing bayesian networks (graphical models) | you can use draw.io and use one or their many templates to create these icons. It helps you create SVGs or any other format. and does not require you to install anything on your system. | Software for drawing bayesian networks (graphical models) | you can use draw.io and use one or their many templates to create these icons. It helps you create SVGs or any other format. and does not require you to install anything on your system. | Software for drawing bayesian networks (graphical models)
you can use draw.io and use one or their many templates to create these icons. It helps you create SVGs or any other format. and does not require you to install anything on your system. | Software for drawing bayesian networks (graphical models)
you can use draw.io and use one or their many templates to create these icons. It helps you create SVGs or any other format. and does not require you to install anything on your system. |
4,365 | Is it unusual for the MEAN to outperform ARIMA? | I'm a practitioner, both producer and user of forecasting and NOT a trained statistician. Below I share some of my thoughts on why your mean forecast turned out better than ARIMA, referring to research articles that rely on empirical evidence. One resource that I go back to time and time again is the Principles of Forecasting book by Armstrong and its website, which I would recommend as an excellent read for any forecaster; it provides great insight on the usage and guiding principles of extrapolation methods.
To answer your first question - What I want to know is if this is unusual?
There is a chapter called Extrapolation for Time-Series and Cross-Sectional Data, which is also available for free on the same website. The following is a quote from the chapter:
"For example, in the real-time M2-competition, which examined 29
monthly series, Box-Jenkins proved to be one of the least-accurate
methods and its overall median error was 17% greater than that for a
naive forecast"
There lies empirical evidence on why your mean forecast was better than ARIMA models.
There has also been study after study in empirical competitions, including the third (M3) competition, showing that the Box-Jenkins ARIMA approach fails to produce accurate forecasts and lacks evidence that it performs better for univariate trend extrapolation.
There is also another paper and an ongoing study by Greene and Armstrong entitled "Simple Forecasting: Avoid Tears Before Bedtime" in the same website. The authors of the paper summarize as follows:
In total we identified 29 papers incorporating 94 formal comparisons
of the accuracy of forecasts from complex methods with those from
simple—but not in all cases sophisticatedly simple—methods.
Eighty-three percent of the comparisons found that forecasts from
simple methods were more accurate than, or similarly accurate to,
those from complex methods. On average, the errors of forecasts from
complex methods were about 32 percent greater than the errors of
forecasts from simple methods in the 21 studies that provide
comparisons of errors
To answer your third question: does this indicate that I've set something up wrong?
No. I would consider ARIMA a complex method and the mean forecast a simple method. There is ample evidence that simple methods like the mean forecast outperform complex methods like ARIMA.
To answer your second question: Does this mean the time series I'm using are strange?
Below are what I considered to be experts in real world forecasting:
Makridakis (pioneered the empirical forecasting competitions called M, M2 and M3, and paved the way for evidence-based methods in forecasting)
Armstrong (Provides valuable insights in the form of books/articles on Forecasting Practice)
Gardner (invented damped trend exponential smoothing, another simple method which works surprisingly well vs. ARIMA)
All of the above researchers advocate simplicity (methods like your mean forecast) over complex methods like ARIMA. So you should feel comfortable that your forecasts are good, and always favor simplicity over complexity based on empirical evidence. These researchers have all contributed immensely to the field of applied forecasting.
In addition to Stephan's good list of simple forecasting methods, there is also the Theta forecasting method, a very simple method (basically simple exponential smoothing with a drift equal to 1/2 the slope of a linear regression) that I would add to your toolbox. The forecast package in R implements this method. | Is it unusual for the MEAN to outperform ARIMA? | I'm a practitioner, both producer and user of forecasting and NOT a trained statistician. Below I share some of my thoughts on why your mean forecast turned out better than ARIMA by referring to resea | Is it unusual for the MEAN to outperform ARIMA?
I'm a practitioner, both producer and user of forecasting and NOT a trained statistician. Below I share some of my thoughts on why your mean forecast turned out better than ARIMA by referring to research article that rely on empirical evidence. One book that time and time again I go back to refer is the Principles of Forecasting book by Armstrong and its website which I would recommend as an excellent read for any forecaster, provides great insight on usage and guiding principles of extrapolation methods.
To answer your first question - What I want to know is if this is unusual?
There is a chapter called Extrapolation for Time-Series and Cross-Sectional Data, which is also available for free on the same website. The following is a quote from the chapter:
"For example, in the real-time M2-competition, which examined 29
monthly series, Box-Jenkins proved to be one of the least-accurate
methods and its overall median error was 17% greater than that for a
naive forecast"
There lies empirical evidence on why your mean forecast was better than ARIMA models.
There has also been study after study in empirical competitions, including the third (M3) competition, showing that the Box-Jenkins ARIMA approach fails to produce accurate forecasts and lacks evidence that it performs better for univariate trend extrapolation.
There is also another paper and an ongoing study by Greene and Armstrong entitled "Simple Forecasting: Avoid Tears Before Bedtime" in the same website. The authors of the paper summarize as follows:
In total we identified 29 papers incorporating 94 formal comparisons
of the accuracy of forecasts from complex methods with those from
simple—but not in all cases sophisticatedly simple—methods.
Eighty-three percent of the comparisons found that forecasts from
simple methods were more accurate than, or similarly accurate to,
those from complex methods. On average, the errors of forecasts from
complex methods were about 32 percent greater than the errors of
forecasts from simple methods in the 21 studies that provide
comparisons of errors
To answer your third question: does this indicate that I've set something up wrong?
No. I would consider ARIMA a complex method and the mean forecast a simple method. There is ample evidence that simple methods like the mean forecast outperform complex methods like ARIMA.
To answer your second question: Does this mean the time series I'm using are strange?
Below are what I considered to be experts in real world forecasting:
Makridakis (pioneered the empirical forecasting competitions called M, M2 and M3, and paved the way for evidence-based methods in forecasting)
Armstrong (Provides valuable insights in the form of books/articles on Forecasting Practice)
Gardner (invented damped trend exponential smoothing, another simple method which works surprisingly well vs. ARIMA)
All of the above researchers advocate simplicity (methods like your mean forecast) over complex methods like ARIMA. So you should feel comfortable that your forecasts are good, and always favor simplicity over complexity based on empirical evidence. These researchers have all contributed immensely to the field of applied forecasting.
In addition to Stephan's good list of simple forecasting method. there is also another method called Theta forecasting method which is a very simple method (basically Simple Exponential smoothing with a drift that equal 1/2 the slope of linear regression) I would add this to your toolbox. Forecast package in R implements this method. | Is it unusual for the MEAN to outperform ARIMA?
I'm a practitioner, both producer and user of forecasting and NOT a trained statistician. Below I share some of my thoughts on why your mean forecast turned out better than ARIMA by referring to resea |
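To make the Theta description above a bit more concrete, here is a rough Python sketch of the idea (a simple-exponential-smoothing level plus a drift of half the linear-regression slope). It only illustrates the mechanics: the smoothing weight is fixed rather than estimated, and real implementations such as the forecast package's Theta routine do more than this:
import numpy as np

def theta_like_forecast(y, h, alpha=0.2):
    # Simple exponential smoothing level (alpha fixed here purely for illustration).
    y = np.asarray(y, dtype=float)
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    # Drift = half the slope of a straight line fitted to the data.
    slope = np.polyfit(np.arange(y.size), y, 1)[0]
    return level + 0.5 * slope * np.arange(1, h + 1)

print(theta_like_forecast([10, 12, 11, 13, 14, 13, 15], h=3))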
4,366 | Is it unusual for the MEAN to outperform ARIMA? | This is not at all surprising. In forecasting, you very often find that extremely simple methods, like
the overall mean
the naive random walk (i.e., the last observation used as a forecast)
a seasonal random walk (i.e., the observation from one year back)
Single Exponential Smoothing
outperform more complex methods. That is why you should always test your methods against these very simple benchmarks.
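For illustration, a minimal sketch of these benchmarks on a toy monthly series; the series, the smoothing weight and the use of flat forecasts over the holdout are all arbitrary simplifications made for the example:
import numpy as np

def ses_level(y, alpha):
    # Single exponential smoothing, returning the final level as a flat forecast.
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

rng = np.random.default_rng(1)
y = 10 + 0.05 * np.arange(72) + rng.normal(0.0, 1.0, 72)   # made-up monthly data
train, test = y[:-12], y[-12:]

benchmarks = {
    "overall mean": train.mean(),
    "naive (last observation)": train[-1],
    "seasonal naive (one year back)": train[-12],
    "single exponential smoothing": ses_level(train, alpha=0.3),
}
for name, fc in benchmarks.items():
    print(f"{name:32s} MAE = {np.mean(np.abs(test - fc)):.3f}")
Any more complex candidate model should at least beat numbers like these out of sample before its extra complexity is taken seriously.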
A quote from George Athanasopoulos and Rob Hyndman (who are experts in the field):
Some forecasting methods are very simple and surprisingly effective.
Note how they explicitly say they will be using some very simple methods as benchmarks.
In fact, their entire free open online textbook on forecasting is very much recommended.
EDIT: One of the better-accepted forecast error measures, the Mean Absolute Scaled Error (MASE) by Hyndman & Koehler (see also here) measures how much a given forecast improves on the (in-sample) naive random walk forecast: if MASE < 1, your forecast is better than the in-sample random walk. You'd expect this to be an easily beaten bound, right?
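Concretely, in the non-seasonal case the MASE scales the mean absolute out-of-sample error by the in-sample mean absolute first difference of the training data $y_1,\dots,y_n$, i.e., the naive random walk's in-sample error:
$$
\text{MASE} = \frac{\operatorname{mean}(|e_t|)}{\frac{1}{n-1}\sum_{i=2}^{n}|y_i - y_{i-1}|}
$$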
Not so: sometimes, even the best out of multiple standard forecasting methods like ARIMA or ETS will only yield a MASE of 1.38, i.e., be worse (out-of-sample) than the (in-sample) random walk forecast. This is sufficiently disconcerting to generate questions here. (That question is not a duplicate of this one, since the MASE compares out-of-sample accuracy to in-sample accuracy of a naive method, but it is also enlightening for the present question.) | Is it unusual for the MEAN to outperform ARIMA? | This is not at all surprising. In forecasting, you very often find that extremely simple methods, like
the overall mean
the naive random walk (i.e., the last observation used as a forecast)
a seasona | Is it unusual for the MEAN to outperform ARIMA?
This is not at all surprising. In forecasting, you very often find that extremely simple methods, like
the overall mean
the naive random walk (i.e., the last observation used as a forecast)
a seasonal random walk (i.e., the observation from one year back)
Single Exponential Smoothing
outperform more complex methods. That is why you should always test your methods against these very simple benchmarks.
A quote from George Athanosopoulos and Rob Hyndman (who are experts in the field):
Some forecasting methods are very simple and surprisingly effective.
Note how they explicitly say they will be using some very simple methods as benchmarks.
In fact, their entire free open online textbook on forecasting is very much recommended.
EDIT: One of the better-accepted forecast error measures, the Mean Absolute Scaled Error (MASE) by Hyndman & Koehler (see also here) measures how much a given forecast improves on the (in-sample) naive random walk forecast: if MASE < 1, your forecast is better than the in-sample random walk. You'd expect this to be an easily beaten bound, right?
Not so: sometimes, even the best out of multiple standard forecasting methods like ARIMA or ETS will only yield a MASE of 1.38, i.e., be worse (out-of-sample) than the (in-sample) random walk forecast. This is sufficiently disconcerting to generate questions here. (That question is not a duplicate of this one, since the MASE compares out-of-sample accuracy to in-sample accuracy of a naive method, but it is also enlightening for the present question.) | Is it unusual for the MEAN to outperform ARIMA?
This is not at all surprising. In forecasting, you very often find that extremely simple methods, like
the overall mean
the naive random walk (i.e., the last observation used as a forecast)
a seasona |
4,367 | What is the difference between GARCH and ARMA? | You are conflating the features of a process with its representation. Consider the (return) process $(Y_t)_{t=0}^\infty$.
An ARMA(p,q) model specifies the conditional mean of the process as
$$
\begin{align}
\mathbb{E}(Y_t \mid \mathcal{I}_t) &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j}+ \sum_{k=1}^q \beta_k\epsilon_{t-k}\\
\end{align}
$$
Here, $\mathcal{I}_t$ is the information set at time $t$, which is the $\sigma$-algebra generated by the lagged values of the outcome process $(Y_t)$.
The GARCH(r,s) model specifies the conditional variance of the process
$$
\begin{alignat}{2}
& \mathbb{V}(Y_t \mid \mathcal{I}_t) &{}={}& \mathbb{V}(\epsilon_t \mid \mathcal{I}_t) \\
\equiv \,& \sigma^2_t&{}={}& \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}
\end{alignat}
$$
Note in particular the first equivalence $ \mathbb{V}(Y_t \mid \mathcal{I}_t)= \mathbb{V}(\epsilon_t \mid \mathcal{I}_t)$.
Aside: Based on this representation, you can write
$$
\epsilon_t \equiv \sigma_t Z_t
$$
where $Z_t$ is a strong white noise process, but this follows from the way the process is defined.
The two models (for the conditional mean and the variance) are perfectly compatible with each other, in that the mean of the process can be modeled as ARMA, and the variances as GARCH. This leads to the complete specification of an ARMA(p,q)-GARCH(r,s) model for the process as in the following representation
$$
\begin{align}
Y_t &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j} + \sum_{k=1}^q \beta_k\epsilon_{t-k} +\epsilon_t\\
\mathbb{E}(\epsilon_t\mid \mathcal{I}_t) &=0,\, \forall t \\
\mathbb{V}(\epsilon_t \mid \mathcal{I}_t) &= \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}\, \forall t
\end{align}
$$ | What is the difference between GARCH and ARMA? | You are conflating the features of a process with its representation. Consider the (return) process $(Y_t)_{t=0}^\infty$.
An ARMA(p,q) model specifies the conditional mean of the process as
$$
\beg | What is the difference between GARCH and ARMA?
You are conflating the features of a process with its representation. Consider the (return) process $(Y_t)_{t=0}^\infty$.
An ARMA(p,q) model specifies the conditional mean of the process as
$$
\begin{align}
\mathbb{E}(Y_t \mid \mathcal{I}_t) &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j}+ \sum_{k=1}^q \beta_k\epsilon_{t-k}\\
\end{align}
$$
Here, $\mathcal{I}_t$ is the information set at time $t$, which is the $\sigma$-algebra generated by the lagged values of the outcome process $(Y_t)$.
The GARCH(r,s) model specifies the conditional variance of the process
$$
\begin{alignat}{2}
& \mathbb{V}(Y_t \mid \mathcal{I}_t) &{}={}& \mathbb{V}(\epsilon_t \mid \mathcal{I}_t) \\
\equiv \,& \sigma^2_t&{}={}& \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}
\end{alignat}
$$
Note in particular the first equivalence $ \mathbb{V}(Y_t \mid \mathcal{I}_t)= \mathbb{V}(\epsilon_t \mid \mathcal{I}_t)$.
Aside: Based on this representation, you can write
$$
\epsilon_t \equiv \sigma_t Z_t
$$
where $Z_t$ is a strong white noise process, but this follows from the way the process is defined.
The two models (for the conditional mean and the variance) are perfectly compatible with each other, in that the mean of the process can be modeled as ARMA, and the variances as GARCH. This leads to the complete specification of an ARMA(p,q)-GARCH(r,s) model for the process as in the following representation
$$
\begin{align}
Y_t &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j} + \sum_{k=1}^q \beta_k\epsilon_{t-k} +\epsilon_t\\
\mathbb{E}(\epsilon_t\mid \mathcal{I}_t) &=0,\, \forall t \\
\mathbb{V}(\epsilon_t \mid \mathcal{I}_t) &= \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}\, \forall t
\end{align}
$$ | What is the difference between GARCH and ARMA?
You are conflating the features of a process with its representation. Consider the (return) process $(Y_t)_{t=0}^\infty$.
An ARMA(p,q) model specifies the conditional mean of the process as
$$
\beg |
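As a complement to the ARMA-GARCH specification above, here is a small simulation sketch of an ARMA(1,1)-GARCH(1,1) process with normal innovations; the parameter values are arbitrary illustrations (chosen so the GARCH recursion is stationary), not estimates of anything:
import numpy as np

rng = np.random.default_rng(0)
n = 1000
phi, psi = 0.5, 0.2             # ARMA coefficients (alpha_1 and beta_1 in the notation above)
d0, g1, d1 = 0.1, 0.1, 0.85     # GARCH coefficients (delta_0, gamma_1, delta_1)

y = np.zeros(n)
eps = np.zeros(n)
sig2 = np.full(n, d0 / (1.0 - g1 - d1))    # start at the unconditional variance

for t in range(1, n):
    sig2[t] = d0 + g1 * eps[t - 1] ** 2 + d1 * sig2[t - 1]   # conditional variance, known at t-1
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()        # epsilon_t = sigma_t * Z_t
    y[t] = phi * y[t - 1] + psi * eps[t - 1] + eps[t]        # conditional mean plus new shock

print(y[:5], sig2[:5])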
4,368 | What is the difference between GARCH and ARMA? | Edit: I realized the answer was lacking and have thus provided a more precise answer (see below -- or maybe above). I have edited this one for factual mistakes and am leaving it for the record.
Different focus parameters:
ARMA is a model for the realizations of a stochastic process imposing a specific structure of the conditional mean of the process.
GARCH is a model for the realizations of a stochastic process imposing a specific structure of the conditional variance of the process. | What is the difference between GARCH and ARMA? | Edit: I realized the answer was lacking and have thus provided a more precise answer (see below -- or maybe above). I have edited this one for factual mistakes and am leaving it for the record.
Diffe | What is the difference between GARCH and ARMA?
Edit: I realized the answer was lacking and have thus provided a more precise answer (see below -- or maybe above). I have edited this one for factual mistakes and am leaving it for the record.
Different focus parameters:
ARMA is a model for the realizations of a stochastic process imposing a specific structure of the conditional mean of the process.
GARCH is a model for the realizations of a stochastic process imposing a specific structure of the conditional variance of the process. | What is the difference between GARCH and ARMA?
Edit: I realized the answer was lacking and have thus provided a more precise answer (see below -- or maybe above). I have edited this one for factual mistakes and am leaving it for the record.
Diffe |
4,369 | What is the difference between GARCH and ARMA? | ARMA
Consider $y_t$ that follows an ARMA($p,q$) process. Suppose for simplicity it has zero mean and constant variance. Conditionally on information $I_{t-1}$, $y_t$ can be partitioned into a known (predetermined) part $\mu_t$ (which is the conditional mean of $y_t$ given $I_{t-1}$) and a random part $u_t$:
$$
\begin{aligned}
y_t &= \mu_t + u_t; \\
\mu_t &= \varphi_1 y_{t-1} + \dotsc + \varphi_p y_{t-p} + \theta_1 u_{t-1} + \dotsc + \theta_q u_{t-q} \ \ \text{(known, predetermined)}; \\
u_t | I_{t-1} &~\sim D(0,\sigma^2) \ \ \text{(random)} \\
\end{aligned}
$$
where $D$ is some density.
The conditional mean $\mu_t$ itself follows a process similar to ARMA($p,q$) but without the random contemporaneous error term:
$$
\mu_t = \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}, $$
where $m:=\max(p,q)$; $\varphi_i=0$ for $i>p$; and $\theta_j=0$ for $j>q$. Note that this process has order ($p,m$) rather than ($p,q$) as does $y_t$.
We can also write the conditional distribution of $y_t$ in terms of its past conditional means (rather than past realized values) and model parameters as
$$
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}; \\
\sigma_t^2 &= \sigma^2,
\end{aligned}
$$
The latter representation makes the comparison of ARMA to GARCH and ARMA-GARCH easier.
GARCH
Consider $y_t$ that follows a GARCH($s,r$) process. Suppose for simplicity it has constant mean. Then
$$
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \mu; \\
\sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dotsc + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dotsc + \beta_r \sigma_{t-r}^2; \\
\frac{u_t}{\sigma_t} &\sim i.i.D(0,1), \\
\end{aligned}
$$
where $u_t:=y_t-\mu_t$ and $D$ is some density.
The conditional variance $\sigma_t^2$ follows a process similar to ARMA($s,r$) but without the random contemporaneous error term.
ARMA-GARCH
Consider $y_t$ that has unconditional mean zero and follows an ARMA($p,q$)-GARCH($s,r$) process. Then
$$
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}; \\
\sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dotsc + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dotsc + \beta_r \sigma_{t-r}^2; \\
\frac{u_t}{\sigma_t} &\sim i.i.D(0,1), \\
\end{aligned}
$$
where $u_t:=y_t-\mu_t$; $D$ is some density, e.g. Normal; $\varphi_i=0$ for $i>p$; and $\theta_j=0$ for $j>q$.
The conditional mean process due to ARMA has essentially the same shape as the conditional variance process due to GARCH, just the lag orders may differ (allowing for a nonzero unconditional mean of $y_t$ should not change this result significantly). Importantly, neither has random error terms once conditioned on $I_{t-1}$, thus both are predetermined. | What is the difference between GARCH and ARMA? | ARMA
Consider $y_t$ that follows an ARMA($p,q$) process. Suppose for simplicity it has zero mean and constant variance. Conditionally on information $I_{t-1}$, $y_t$ can be partitioned into a known (p | What is the difference between GARCH and ARMA?
ARMA
Consider $y_t$ that follows an ARMA($p,q$) process. Suppose for simplicity it has zero mean and constant variance. Conditionally on information $I_{t-1}$, $y_t$ can be partitioned into a known (predetermined) part $\mu_t$ (which is the conditional mean of $y_t$ given $I_{t-1}$) and a random part $u_t$:
\begin{aligned}
y_t &= \mu_t + u_t; \\
\mu_t &= \varphi_1 y_{t-1} + \dotsc + \varphi_p y_{t-p} + \theta_1 u_{t-1} + \dotsc + \theta_q u_{t-q} \ \ \text{(known, predetermined)}; \\
u_t | I_{t-1} &~\sim D(0,\sigma^2) \ \ \text{(random)} \\
\end{aligned}
where $D$ is some density.
The conditional mean $\mu_t$ itself follows a process similar to ARMA($p,q$) but without the random contemporaneous error term:
$$
\mu_t = \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}, $$
where $m:=\max(p,q)$; $\varphi_i=0$ for $i>p$; and $\theta_j=0$ for $j>q$. Note that this process has order ($p,m$) rather than ($p,q$) as does $y_t$.
We can also write the conditional distribution of $y_t$ in terms of its past conditional means (rather than past realized values) and model parameters as
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}; \\
\sigma_t^2 &= \sigma^2,
\end{aligned}
The latter representation makes the comparison of ARMA to GARCH and ARMA-GARCH easier.
GARCH
Consider $y_t$ that follows a GARCH($s,r$) process. Suppose for simplicity it has constant mean. Then
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \mu; \\
\sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dotsc + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dotsc + \beta_r \sigma_{t-r}^2; \\
\frac{u_t}{\sigma_t} &\sim i.i.D(0,1), \\
\end{aligned}
where $u_t:=y_t-\mu_t$ and $D$ is some density.
The conditional variance $\sigma_t^2$ follows a process similar to ARMA($s,r$) but without the random contemporaneous error term.
ARMA-GARCH
Consider $y_t$ that has unconditional mean zero and follows an ARMA($p,q$)-GARCH($s,r$) process. Then
\begin{aligned}
y_t &\sim D(\mu_t,\sigma_t^2); \\
\mu_t &= \varphi_1 \mu_{t-1} + \dotsc + \varphi_p \mu_{t-p} + (\varphi_1 + \theta_1) u_{t-1} + \dotsc + (\varphi_m + \theta_m) u_{t-m}; \\
\sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2 + \dotsc + \alpha_s u_{t-s}^2 + \beta_1 \sigma_{t-1}^2 + \dotsc + \beta_r \sigma_{t-r}^2; \\
\frac{u_t}{\sigma_t} &\sim i.i.D(0,1), \\
\end{aligned}
where $u_t:=y_t-\mu_t$; $D$ is some density, e.g. Normal; $\varphi_i=0$ for $i>p$; and $\theta_j=0$ for $j>q$.
The conditional mean process due to ARMA has essentially the same shape as the conditional variance process due to GARCH, just the lag orders may differ (allowing for a nonzero unconditional mean of $y_t$ should not change this result significantly). Importantly, neither has random error terms once conditioned on $I_{t-1}$, thus both are predetermined. | What is the difference between GARCH and ARMA?
ARMA
Consider $y_t$ that follows an ARMA($p,q$) process. Suppose for simplicity it has zero mean and constant variance. Conditionally on information $I_{t-1}$, $y_t$ can be partitioned into a known (p |
4,370 | What is the difference between GARCH and ARMA? | The ARMA and GARCH processes are very similar in their presentation. The dividing line between the two is very thin since we get GARCH when an ARMA process is assumed for the error variance. | What is the difference between GARCH and ARMA? | The ARMA and GARCH processes are very similar in their presentation. The dividing line between the two is very thin since we get GARCH when an ARMA process is assumed for the error variance. | What is the difference between GARCH and ARMA?
The ARMA and GARCH processes are very similar in their presentation. The dividing line between the two is very thin since we get GARCH when an ARMA process is assumed for the error variance. | What is the difference between GARCH and ARMA?
The ARMA and GARCH processes are very similar in their presentation. The dividing line between the two is very thin since we get GARCH when an ARMA process is assumed for the error variance. |
4,371 | What is the difference between GARCH and ARMA? | The difference is in the stochastic part, or the lack of it.
Notice the most recent time index of the stochastic part in your formulation of ARMA:
$$X_t=\varepsilon_t+\dots$$
Compare it to GARCH:
$$\sigma^2_t=r^2_{t-1}+\dots$$
You can immediately see that in ARMA at future time $t$ the disturbance $\varepsilon_{t}$ is not yet observed, while in GARCH $r_{t-1}$ is already in the past, i.e. observed. Hence, ARMA is stochastic when it comes to forecasting $\hat X_{t}|I_{t-1}$ and GARCH is not. At time $t-1$ you already have all information to calculate forecast for $\hat\sigma^2_t|I_{t-1}$ in GARCH | What is the difference between GARCH and ARMA? | the difference is in stochastic part or lack of it.
Notice the most recent time index of the stochastic part in your formulation of ARMA:
$$X_t=\varepsilon_t+\dots$$
Compare it to GARCH:
$$\sigma^2_t= | What is the difference between GARCH and ARMA?
The difference is in the stochastic part, or the lack of it.
Notice the most recent time index of the stochastic part in your formulation of ARMA:
$$X_t=\varepsilon_t+\dots$$
Compare it to GARCH:
$$\sigma^2_t=r^2_{t-1}+\dots$$
You can immediately see that in ARMA at future time $t$ the disturbance $\varepsilon_{t}$ is not yet observed, while in GARCH $r_{t-1}$ is already in the past, i.e. observed. Hence, ARMA is stochastic when it comes to forecasting $\hat X_{t}|I_{t-1}$ and GARCH is not. At time $t-1$ you already have all information to calculate forecast for $\hat\sigma^2_t|I_{t-1}$ in GARCH | What is the difference between GARCH and ARMA?
the difference is in stochastic part or lack of it.
Notice the most recent time index of the stochastic part in your formulation of ARMA:
$$X_t=\varepsilon_t+\dots$$
Compare it to GARCH:
$$\sigma^2_t= |
4,372 | Statistical inference when the sample "is" the population | There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is that there is an underlying data generating process responsible for the collected data, the "population" distribution.
In your particular case, this might make even more sense since you will have cohorts in the future. Then your population is really cohorts who take the test even in the future. In this way, you could account for time based variations if you have data for more than a year, or try to account for latent factors through your error model. In short, you can develop richer models with greater explanatory power. | Statistical inference when the sample "is" the population | There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is th | Statistical inference when the sample "is" the population
There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is that there is an underlying data generating process responsible for the collected data, the "population" distribution.
In your particular case, this might make even more sense since you will have cohorts in the future. Then your population is really cohorts who take the test even in the future. In this way, you could account for time based variations if you have data for more than a year, or try to account for latent factors through your error model. In short, you can develop richer models with greater explanatory power. | Statistical inference when the sample "is" the population
There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is th |
4,373 | Statistical inference when the sample "is" the population | Actually, if you're really positive you have the whole population, there's even no need to go into statistics. Then you know exactly how big the difference is, and there is no reason whatsoever to test it any more. A classical mistake is using statistical significance as "relevant" significance. If you sampled the population, the difference is what it is.
On the other hand, if you reformulate your hypothesis, then the candidates can be seen as a sample of possible candidates, which would allow for statistical testing. In this case, you'd test in general whether male and female differ on the test at hand.
As ars said, you can use tests of multiple years and add time as a random factor. But if your interest really is in the differences between these candidates on this particular test, you cannot use the generalization and testing is senseless. | Statistical inference when the sample "is" the population | Actually, if you're really positive you have the whole population, there's even no need to go into statistics. Then you know exactly how big the difference is, and there is no reason whatsoever to tes | Statistical inference when the sample "is" the population
Actually, if you're really positive you have the whole population, there's even no need to go into statistics. Then you know exactly how big the difference is, and there is no reason whatsoever to test it any more. A classical mistake is using statistical significance as "relevant" significance. If you sampled the population, the difference is what it is.
On the other hand, if you reformulate your hypothesis, then the candidates can be seen as a sample of possible candidates, which would allow for statistical testing. In this case, you'd test in general whether male and female differ on the test at hand.
As ars said, you can use tests of multiple years and add time as a random factor. But if your interest really is in the differences between these candidates on this particular test, you cannot use the generalization and testing is senseless. | Statistical inference when the sample "is" the population
Actually, if you're really positive you have the whole population, there's even no need to go into statistics. Then you know exactly how big the difference is, and there is no reason whatsoever to tes |
4,374 | Statistical inference when the sample "is" the population | Traditionally, statistical inference is taught in the context of probability samples and the nature of sampling error. This model is the basis for the test of significance. However, there are other ways to model systematic departures from chance and it turns out that our parametric (sampling based) tests tend to be good approximations of these alternatives.
Parametric tests of hypotheses rely on sampling theory to produce estimates of likely error. If a sample of a given size is taken from a population, knowledge of the systematic nature of sampling makes testing and confidence intervals meaningful. With a population, sampling theory is simply not relevant and tests are not meaningful in the traditional sense. Inference is useless, there is nothing to infer to, there is just the thing...the parameter itself.
Some get around this by appealing to super-populations that the current census represents. I find these appeals unconvincing--parametric tests are premised on probability sampling and its characteristics. A population at a given time may be a sample of a larger population over time and place. However, I don't see any way that one could legitimately argue that this is a random (or more generally any form of a probability) sample. Without a probability sample, sampling theory and the traditional logic of testing simply do not apply. You may just as well test on the basis of a convenience sample.
Clearly, to accept testing when using a population, we need to dispense with the basis of those tests in sampling procedures. One way to do this is to recognize the close connection between our sample-theoretic tests--such as t, Z, and F--and randomization procedures. Randomization tests are based on the sample at hand. If I collect data on the income of males and females, the probability model and the basis for our estimates of error are repeated random allocations of the actual data values. I could compare observed differences across groups to a distribution based on this randomization. (We do this all the time in experiments, by the way, where the random sampling from a population model is rarely appropriate).
Now, it turns out that sample-theoretic tests are often good approximations of randomization tests. So, ultimately, I think tests from populations are useful and meaningful within this framework and can help to distinguish systematic from chance variation--just like with sample-based tests. The logic used to get there is a little different, but it doesn't have much affect on the practical meaning and use of tests. Of course, it might be better to just use randomization and permutation tests directly given they are easily available with all our modern computing power. | Statistical inference when the sample "is" the population | Traditionally, statistical inference is taught in the context of probability samples and the nature of sampling error. This model is the basis for the test of significance. However, there are other | Statistical inference when the sample "is" the population
Traditionally, statistical inference is taught in the context of probability samples and the nature of sampling error. This model is the basis for the test of significance. However, there are other ways to model systematic departures from chance and it turns out that our parametric (sampling based) tests tend to be good approximations of these alternatives.
Parametric tests of hypotheses rely on sampling theory to produce estimates of likely error. If a sample of a given size is taken from a population, knowledge of the systematic
nature of sampling makes testing and confidence intervals meaningful. With a population, sampling theory is simply not relevant and tests are not meaningful in the traditional sense. Inference is useless, there is nothing to infer to, there is just the thing...the
parameter itself.
Some get around this by appealing to super-populations that the current census represents. I find these appeals unconvincing--parametric tests are premised on probability sampling and its characteristics. A population at a given time may be a sample of a larger population over time and place. However, I don't see any way that one could legitimately argue that this is a random (or more generally any form form of a probability) sample. Without a probability sample, sampling theory and the
traditional logic of testing simply do not apply. You may just as well test on the basis of a convenience sample.
Clearly, to accept testing when using a population, we need to dispense with the basis of those tests in sampling procedures. One way to do this is to recognize the close connection between our sample-theoretic tests--such as t, Z, and F--and randomization procedures. Randomization tests are based on the sample at hand. If I collect
data on the income of males and females, the probability model and the basis for our estimates of error are repeated random allocations of the actual data values. I could compare observed differences across groups to a distribution based on this randomization. (We do this all the time in experiments, by the way, where the random sampling from a population model is rarely appropriate).
Now, it turns out that sample-theoretic tests are often good approximations of randomization tests. So, ultimately, I think tests from populations are useful and meaningful within this framework and can help to distinguish systematic from chance variation--just like with sample-based tests. The logic used to get there is a little different, but it doesn't have much affect on the practical meaning and use of tests. Of course, it might be better to just use randomization and permutation tests directly given they are easily available with all our modern computing power. | Statistical inference when the sample "is" the population
Traditionally, statistical inference is taught in the context of probability samples and the nature of sampling error. This model is the basis for the test of significance. However, there are other |
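As a small illustration of the randomization logic described in the answer above, here is a sketch of a two-sided permutation test for a difference in group means; the scores are made-up numbers used purely for the example:
import numpy as np

rng = np.random.default_rng(42)
male = np.array([72.0, 81, 69, 77, 85, 74, 66, 79])      # hypothetical test scores
female = np.array([75.0, 83, 71, 88, 78, 80, 73, 84])

observed = male.mean() - female.mean()
pooled = np.concatenate([male, female])
n_male = male.size

diffs = np.empty(20000)
for i in range(diffs.size):
    perm = rng.permutation(pooled)                        # re-allocate the actual values at random
    diffs[i] = perm[:n_male].mean() - perm[n_male:].mean()

p_value = np.mean(np.abs(diffs) >= abs(observed))         # two-sided randomization p-value
print(observed, p_value)
The reference distribution is built from repeated random re-allocations of the observed values themselves, which is exactly the probability model the answer appeals to when no probability sample from a larger population exists.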
4,375 | Statistical inference when the sample "is" the population | Assume the results indicate that candidates differ along lines of gender. For example, the proportion of those who completed the tests are as follows: 40% female and 60% male. To suggest the obvious, 40% is different than 60%. Now what is important is to decide: 1) your population of interest; 2) how your observations relate to the population of interest. Here are some details about these two issues:
If your population of interest is just the candidates you have observed (e.g., the 100 candidates who applied to a university in 2016), you do not need to report statistical significance tests. This is because your population of interest was completely sampled...all you care about is the 100 candidates on which you have complete data. That is, 60% is, full stop, different than 40%. The kind of question this answers is, were there gender differences in the population of 100 that applied to the program? This is a descriptive question and the answer is yes.
However, many important questions are about what will happen in different settings. That is, many researchers want to come up with trends about the past that help us predict (and then plan for) the future. An example question in this regard would be, How likely are future tests of candidates likely to be different along lines of gender? The population of interest is then broader than in scenario #1 above. At this point, an important question to ask is: Is your observed data likely to be representative of future trends? This is an inferential question, and based on the info provided by the original poster, the answer is: we don't know.
In sum, what statistics you report depend on the type of question you want to answer.
Thinking about basic research design may be most helpful (try here: http://www.socialresearchmethods.net/kb/design.php). Thinking about superpopulations may be of help if you want more advanced info (here is an article that may help: http://projecteuclid.org/euclid.ss/1023798999#ui-tabs-1). | Statistical inference when the sample "is" the population | Assume the results indicate that candidates differ along lines of gender. For example, the proportion of those who completed the tests are as follows: 40% female and 60% male. To suggest the obvious, | Statistical inference when the sample "is" the population
Assume the results indicate that candidates differ along lines of gender. For example, the proportion of those who completed the tests are as follows: 40% female and 60% male. To suggest the obvious, 40% is different than 60%. Now what is important is to decide: 1) your population of interest; 2) how your observations relate to the population of interest. Here are some details about these two issues:
If your population of interest is just the candidates you have observed (e.g., the 100 candidates who applied to a university in 2016), you do not need to report statistical significance tests. This is because your population of interest was completely sampled...all you care about is the 100 candidates on which you have complete data. That is, 60% is, full stop, different than 40%. The kind of question this answers is, were there gender differences in the population of 100 that applied to the program? This is a descriptive question and the answer is yes.
However, many important questions are about what will happen in different settings. That is, many researchers want to come up with trends about the past that help us predict (and then plan for) the future. An example question in this regard would be, How likely are future tests of candidates to differ along lines of gender? The population of interest is then broader than in scenario #1 above. At this point, an important question to ask is: Is your observed data likely to be representative of future trends? This is an inferential question, and based on the info provided by the original poster, the answer is: we don't know.
In sum, what statistics you report depend on the type of question you want to answer.
Thinking about basic research design may be most helpful (try here: http://www.socialresearchmethods.net/kb/design.php). Thinking about superpopulations may be of help if you want more advanced info (here is an article that may help: http://projecteuclid.org/euclid.ss/1023798999#ui-tabs-1). | Statistical inference when the sample "is" the population
Assume the results indicate that candidates differ along lines of gender. For example, the proportion of those who completed the tests are as follows: 40% female and 60% male. To suggest the obvious, |
4,376 | Statistical inference when the sample "is" the population | If you consider whatever it is that you are measuring to be a random process, then yes statistical tests are relevant. Take for example, flipping a coin 10 times to see if it is fair. You get 6 heads and 4 tails - what do you conclude? | Statistical inference when the sample "is" the population | If you consider whatever it is that you are measuring to be a random process, then yes statistical tests are relevant. Take for example, flipping a coin 10 times to see if it is fair. You get 6 heads | Statistical inference when the sample "is" the population
If you consider whatever it is that you are measuring to be a random process, then yes statistical tests are relevant. Take for example, flipping a coin 10 times to see if it is fair. You get 6 heads and 4 tails - what do you conclude? | Statistical inference when the sample "is" the population
If you consider whatever it is that you are measuring to be a random process, then yes statistical tests are relevant. Take for example, flipping a coin 10 times to see if it is fair. You get 6 heads |
4,377 | Random forest computing time in R | The overall complexity of RF is something like $\text{ntree}\cdot\text{mtry}\cdot(\text{# objects})\log( \text{# objects})$; if you want to speed your computations up, you can try the following:
Use randomForest instead of party, or, even better, ranger or Rborist (although both are not yet battle-tested).
Don't use formula, i.e. call randomForest(predictors,decision) instead of randomForest(decision~.,data=input).
Use do.trace argument to see the OOB error in real-time; this way you may detect that you can lower ntree.
About factors; RF (and all tree methods) try to find an optimal subset of levels thus scanning $2^\text{(# of levels-1)}$ possibilities; given this, it is rather naive to expect that this factor can give you so much information -- not to mention that randomForest won't eat factors with more than 32 levels. Maybe you can simply treat it as an ordered one (and thus equivalent to a normal, numeric variable for RF) or cluster it in some groups, splitting this one attribute into several?
Check whether your computer has run out of RAM and is using swap space. If so, buy a bigger computer.
Finally, you can extract some random subset of objects and make some initial experiments on this. | Random forest computing time in R | The overall complexity of RF is something like $\text{ntree}\cdot\text{mtry}\cdot(\text{# objects})\log( \text{# objects})$; if you want to speed your computations up, you can try the following:
Use | Random forest computing time in R
The overall complexity of RF is something like $\text{ntree}\cdot\text{mtry}\cdot(\text{# objects})\log( \text{# objects})$; if you want to speed your computations up, you can try the following:
Use randomForest instead of party, or, even better, ranger or Rborist (although both are not yet battle-tested).
Don't use formula, i.e. call randomForest(predictors,decision) instead of randomForest(decision~.,data=input).
Use do.trace argument to see the OOB error in real-time; this way you may detect that you can lower ntree.
About factors; RF (and all tree methods) try to find an optimal subset of levels thus scanning $2^\text{(# of levels-1)}$ possibilities; given this, it is rather naive to expect that this factor can give you so much information -- not to mention that randomForest won't eat factors with more than 32 levels. Maybe you can simply treat it as an ordered one (and thus equivalent to a normal, numeric variable for RF) or cluster it in some groups, splitting this one attribute into several?
Check whether your computer has run out of RAM and is using swap space. If so, buy a bigger computer.
Finally, you can extract some random subset of objects and make some initial experiments on this. | Random forest computing time in R
The overall complexity of RF is something like $\text{ntree}\cdot\text{mtry}\cdot(\text{# objects})\log( \text{# objects})$; if you want to speed your computations up, you can try the following:
Use |
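A minimal R sketch of points 2 and 3 from the answer above (matrix interface plus do.trace); predictors (a data frame of features) and decision (a factor response) are placeholders for your own objects.

library(randomForest)

# Matrix/data-frame interface instead of the formula interface, printing the OOB
# error every 50 trees so you can see whether a smaller ntree would already do.
rf <- randomForest(x = predictors, y = decision, ntree = 500, do.trace = 50)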
4,378 | Random forest computing time in R | Because randomForest is a collection of independent carts trained upon a random subset of features and records it lends itself to parallelization. The combine() function in the randomForest package will stitch together independently trained forests. Here is a toy example. As @mpq 's answer states you should not use the formula notation, but pass in a dataframe/matrix of variables and a vector of outcomes. I shamelessly lifted these from the docs.
library("doMC")
library("randomForest")
data(iris)
registerDoMC(4) #number of cores on the machine
darkAndScaryForest <- foreach(y=seq(10), .combine=combine ) %dopar% {
set.seed(y) # not really needed
rf <- randomForest(Species ~ ., iris, ntree=50, norm.votes=FALSE)
}
I passed the randomForest combine function to the similarly named .combine parameter (which controls the function applied to the output of the loop). The downside is that you get no OOB error rate or, more tragically, no variable importance.
Edit:
After rereading the post I realize that I said nothing about the 34+ factor issue. A wholly un-thought-out answer could be to represent them as binary variables. That is, each factor level becomes a column encoded 0/1 for its presence/absence. By doing some variable selection on the unimportant indicators and removing them you could keep your feature space from growing too large. | Random forest computing time in R | Because randomForest is a collection of independent carts trained upon a random subset of features and records it lends itself to parallelization. The combine() function in the randomForest package wi | Random forest computing time in R
Because randomForest is a collection of independent carts trained upon a random subset of features and records it lends itself to parallelization. The combine() function in the randomForest package will stitch together independently trained forests. Here is a toy example. As @mpq 's answer states you should not use the formula notation, but pass in a dataframe/matrix of variables and a vector of outcomes. I shamelessly lifted these from the docs.
library("doMC")
library("randomForest")
data(iris)
registerDoMC(4) #number of cores on the machine
darkAndScaryForest <- foreach(y=seq(10), .combine=combine ) %dopar% {
set.seed(y) # not really needed
rf <- randomForest(Species ~ ., iris, ntree=50, norm.votes=FALSE)
}
I passed the randomForest combine function to the similarly named .combine parameter (which controls the function applied to the output of the loop). The downside is that you get no OOB error rate or, more tragically, no variable importance.
Edit:
After rereading the post I realize that I said nothing about the 34+ factor issue. A wholly un-thought-out answer could be to represent them as binary variables. That is, each factor level becomes a column encoded 0/1 for its presence/absence. By doing some variable selection on the unimportant indicators and removing them you could keep your feature space from growing too large.
Because randomForest is a collection of independent carts trained upon a random subset of features and records it lends itself to parallelization. The combine() function in the randomForest package wi |
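One way to try the 0/1 representation suggested in the edit above is model.matrix; dat and its factor column f are hypothetical names.

# Expand a many-level factor into one 0/1 indicator column per level
indicators   <- model.matrix(~ f - 1, data = dat)   # "-1" drops the intercept
dat_expanded <- cbind(dat[setdiff(names(dat), "f")], indicators)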
4,379 | Random forest computing time in R | I can't speak to the speed of specific algorithms in R but it should be obvious what is causing long computing time. For each tree at each branch CART is looking for the best binary split. So for each of the 34 features it must look at the splits given by each of the levels of the variables. Multiply the run time for each split in a tree by the number of branches in the tree and then multiply that by the number of trees in the forest and you have a long running time. Who knows? Maybe even with a fast computer this could take years to finish?
The best way to speed things up I think would be to lump some of the levels together so that each variable is down to maybe 3 to 5 levels instead of as many as 300. Of course this depends on being able to do this without losing important information in your data.
After that maybe you could look to see if there is some clever algorithm that can speed up the search time for splitting at each node of the individual trees. It could be that at a particular tree the split search is a repeat of a search already done for a previous tree. So if you can save the solutions of the previous split decisions and identify when you are repeating maybe that strategy could save a little on computing time. | Random forest computing time in R | I can't speak to the speed of specific algorithms in R but it should be obvious what is causing long computing time. For each tree at each branch CART is looking for the best binary split. So for e | Random forest computing time in R
I can't speak to the speed of specific algorithms in R but it should be obvious what is causing long computing time. For each tree at each branch CART is looking for the best binary split. So for each of the 34 features it must look at the splits given by each of the levels of the variables. Multiply the run time for each split in a tree by the number of branches in the tree and then multiply that by the number of trees in the forest and you have a long running time. Who knows? Maybe even with a fast computer this could take years to finish?
The best way to speed things up I think would be to lump some of the levels together so that each variable is down to maybe 3 to 5 levels instead of as many as 300. Of course this depends on being able to do this without losing important information in your data.
After that maybe you could look to see if there is some clever algorithm that can speed up the search time for splitting at each node of the individual trees. It could be that at a particular tree the split search is a repeat of a search already done for a previous tree. So if you can save the solutions of the previous split decisions and identify when you are repeating maybe that strategy could save a little on computing time. | Random forest computing time in R
I can't speak to the speed of specific algorithms in R but it should be obvious what is causing long computing time. For each tree at each branch CART is looking for the best binary split. So for e
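A small sketch of the level-lumping idea from the answer above, keeping only the most frequent levels of a hypothetical factor f and pooling the rest into a single "other" level.

keep <- names(sort(table(f), decreasing = TRUE))[1:5]            # 5 most frequent levels
f_lumped <- factor(ifelse(f %in% keep, as.character(f), "other"))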
4,380 | Random forest computing time in R | I would suggest a couple of links:
1) Shrink number of levels of a factor variable
is a link to a question on stackoverflow to deal with a similar issue while using the randomForest package. Specifically it deals with using only the most frequently occurring levels and assigning a new level to all other, less frequently occurring levels.
The idea for it came from here: 2009 KDD Cup Slow Challenge. The data for this competition had lots of factors with lots of levels and it discusses some of the methods they used to pare the data down from 50,000 rows by 15,000 columns to run on a 2-core/2GB RAM laptop.
My last suggestion would be to look at running the problem, as suggested above, in parallel on a hi-CPU Amazon EC2 instance. | Random forest computing time in R | I would suggest a couple of links:
1) Shrink number of levels of a factor variable
is a link to a question on stackoverflow to deal with a similar issue while using the randomForest package. Specif | Random forest computing time in R
I would suggest a couple of links:
1) Shrink number of levels of a factor variable
is a link to a question on stackoverflow to deal with a similar issue while using the randomForest package. Specifically it deals with using only the most frequently occurring levels and assigning a new level to all other, less frequently occurring levels.
The idea for it came from here: 2009 KDD Cup Slow Challenge. The data for this competition had lots of factors with lots of levels and it discusses some of the methods they used to pare the data down from 50,000 rows by 15,000 columns to run on a 2-core/2GB RAM laptop.
My last suggestion would be to look at running the problem, as suggested above, in parallel on a hi-CPU Amazon EC2 instance. | Random forest computing time in R
I would suggest a couple of links:
1) Shrink number of levels of a factor variable
is a link to a question on stackoverflow to deal with a similar issue while using the randomForest package. Specif |
4,381 | How to decide on the correct number of clusters? | This has been asked a couple of times on stackoverflow: here, here and here. You can take a look at what the crowd over there thinks about this question (or a small variant thereof).
Let me also copy my own answer to this question, on stackoverflow.com:
Unfortunately there is no way to automatically set the "right" K nor is there a definition of what "right" is. There isn't a principled statistical method, simple or complex that can set the "right K". There are heuristics, rules of thumb that sometimes work, sometimes don't.
The situation is more general as many clustering methods have these types of parameters, and I think this is a big open problem in the clustering/unsupervised learning research community. | How to decide on the correct number of clusters? | This has been asked a couple of times on stackoverflow: here, here and here. You can take a look at what the crowd over there thinks about this question (or a small variant thereof).
Let me also copy | How to decide on the correct number of clusters?
This has been asked a couple of times on stackoverflow: here, here and here. You can take a look at what the crowd over there thinks about this question (or a small variant thereof).
Let me also copy my own answer to this question, on stackoverflow.com:
Unfortunately there is no way to automatically set the "right" K nor is there a definition of what "right" is. There isn't a principled statistical method, simple or complex that can set the "right K". There are heuristics, rules of thumb that sometimes work, sometimes don't.
The situation is more general as many clustering methods have these types of parameters, and I think this is a big open problem in the clustering/unsupervised learning research community. | How to decide on the correct number of clusters?
This has been asked a couple of times on stackoverflow: here, here and here. You can take a look at what the crowd over there thinks about this question (or a small variant thereof).
Let me also copy |
4,382 | How to decide on the correct number of clusters? | Firstly a caveat. In clustering there is often no one "correct answer" - one clustering may be better than another by one metric, and the reverse may be true using another metric. And in some situations two different clusterings could be equally probable under the same metric.
Having said that, you might want to have a look at Dirichlet Processes. Also see this tutorial.
If you begin with a Gaussian Mixture model, you have the same problem as with k-means - that you have to choose the number of clusters. You could use model evidence, but it won't be robust in this case. So the trick is to use a Dirichlet Process prior over the mixture components, which then allows you to have a potentially infinite number of mixture components, but the model will (usually) automatically find the "correct" number of components (under the assumptions of the model).
Note that you still have to specify the concentration parameter $\alpha$ of the Dirichlet Process prior. For small values of $\alpha$, samples from a DP are likely to be composed of a small number of atomic measures with large weights. For large values, most samples are likely to be distinct (concentrated). You can use a hyper-prior on the concentration parameter and then infer its value from the data, and this hyper-prior can be suitably vague as to allow many different possible values. Given enough data, however, the concentration parameter will cease to be so important, and this hyper-prior could be dropped. | How to decide on the correct number of clusters? | Firstly a caveat. In clustering there is often no one "correct answer" - one clustering may be better than another by one metric, and the reverse may be true using another metric. And in some situatio | How to decide on the correct number of clusters?
Firstly a caveat. In clustering there is often no one "correct answer" - one clustering may be better than another by one metric, and the reverse may be true using another metric. And in some situations two different clusterings could be equally probable under the same metric.
Having said that, you might want to have a look at Dirichlet Processes. Also see this tutorial.
If you begin with a Gaussian Mixture model, you have the same problem as with k-means - that you have to choose the number of clusters. You could use model evidence, but it won't be robust in this case. So the trick is to use a Dirichlet Process prior over the mixture components, which then allows you to have a potentially infinite number of mixture components, but the model will (usually) automatically find the "correct" number of components (under the assumptions of the model).
Note that you still have to specify the concentration parameter $\alpha$ of the Dirichlet Process prior. For small values of $\alpha$, samples from a DP are likely to be composed of a small number of atomic measures with large weights. For large values, most samples are likely to be distinct (concentrated). You can use a hyper-prior on the concentration parameter and then infer its value from the data, and this hyper-prior can be suitably vague as to allow many different possible values. Given enough data, however, the concentration parameter will cease to be so important, and this hyper-prior could be dropped. | How to decide on the correct number of clusters?
Firstly a caveat. In clustering there is often no one "correct answer" - one clustering may be better than another by one metric, and the reverse may be true using another metric. And in some situatio |
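To get a feel for the concentration parameter mentioned above, here is a small base-R stick-breaking simulation of Dirichlet Process weights; it only illustrates the role of $\alpha$ and is not a full DP mixture fit.

# Stick-breaking: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
# Small alpha -> a few large weights (few clusters); large alpha -> many small ones.
stick_break <- function(alpha, K = 50) {
  v <- rbeta(K, 1, alpha)
  v * cumprod(c(1, 1 - v[-K]))
}
set.seed(2)
round(sort(stick_break(0.5), decreasing = TRUE)[1:5], 3)  # mass concentrated on few atoms
round(sort(stick_break(10),  decreasing = TRUE)[1:5], 3)  # mass spread over many atoms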
4,383 | How to decide on the correct number of clusters? | I use the Elbow method:
Start with K=2, and keep increasing it in each step by 1, calculating your clusters and the cost that comes with the training. At some value for K the cost drops dramatically, and after that it reaches a plateau when you increase it further. This is the K value you want.
The rationale is that after this, you increase the number of clusters but the new cluster is very near some of the existing. | How to decide on the correct number of clusters? | I use the Elbow method:
Start with K=2, and keep increasing it in each step by 1, calculating your clusters and the cost that comes with the training. At some value for K the cost drops dramatically, | How to decide on the correct number of clusters?
I use the Elbow method:
Start with K=2, and keep increasing it in each step by 1, calculating your clusters and the cost that comes with the training. At some value for K the cost drops dramatically, and after that it reaches a plateau when you increase it further. This is the K value you want.
The rationale is that after this, you increase the number of clusters but the new cluster is very near some of the existing. | How to decide on the correct number of clusters?
I use the Elbow method:
Start with K=2, and keep increasing it in each step by 1, calculating your clusters and the cost that comes with the training. At some value for K the cost drops dramatically, |
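A minimal R sketch of the elbow heuristic described in the answer above, using total within-cluster sum of squares as the cost; X is a placeholder for your numeric data matrix.

ks  <- 2:10
wss <- sapply(ks, function(k) kmeans(X, centers = k, nstart = 25)$tot.withinss)
plot(ks, wss, type = "b", xlab = "K", ylab = "Total within-cluster SS")  # look for the elbow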
4,384 | How to decide on the correct number of clusters? | Cluster sizes depend highly on both your data and what you're gonna use the results for. If you're using your data for splitting things into categories, try to imagine how many categories you want first. If it's for data visualization, make it configurable, so people can see both the large clusters and the smaller ones.
If you need to automate it, you might wanna add a penalty to increasing k, and calculate the optimal cluster that way. And then you just weight k depending on whether you want a ton of clusters or you want very few. | How to decide on the correct number of clusters? | Cluster sizes depend highly on both your data and what you're gonna use the results for. If you're using your data for splitting things into categories, try to imagine how many categories you want first | How to decide on the correct number of clusters?
Cluster sizes depend highly on both your data and what you're gonna use the results for. If you're using your data for splitting things into categories, try to imagine how many categories you want first. If it's for data visualization, make it configurable, so people can see both the large clusters and the smaller ones.
If you need to automate it, you might wanna add a penalty to increasing k, and calculate the optimal cluster that way. And then you just weight k depending on whether you want a ton of clusters or you want very few. | How to decide on the correct number of clusters?
Cluster sizes depend highly on both your data and what you're gonna use the results for. If you're using your data for splitting things into categories, try to imagine how many categories you want first
4,385 | How to decide on the correct number of clusters? | You can also check Unsupervised Optimal Fuzzy Clustering which deals with the problem you have mentioned (finding the number of clusters); a modified version of it is implemented here | How to decide on the correct number of clusters? | You can also check Unsupervised Optimal Fuzzy Clustering which deals with the problem you have mentioned (finding the number of clusters); a modified version of it is implemented here | How to decide on the correct number of clusters?
You can also check Unsupervised Optimal Fuzzy Clustering which deals with the problem you have mentioned (finding the number of clusters); a modified version of it is implemented here | How to decide on the correct number of clusters?
You can also check Unsupervised Optimal Fuzzy Clustering which deals with the problem you have mentioned (finding the number of clusters); a modified version of it is implemented here
4,386 | How to decide on the correct number of clusters? | I have managed to use the "L Method" to determine the number of clusters in a geographic application (ie. essentially a 2d problem although technically non-Euclidean).
The L Method is described here:
Determining the Number of Clusters/Segments in Hierarchical Clustering/Segmentation Algorithms Stan Salvador and Philip Chan
Essentially this evaluates the fit for various values of k. An "L" shaped graph is seen with the optimum k value represented by the knee in the graph. A simple dual-line least-squares fitting calculation is used to find the knee point.
I found the method very slow because the iterative k-means has to be calculated for each value of k. Also I found k-means worked best with multiple runs and choosing the best at the end. Although each data point had only two dimensions, a simple Pythagorean distance could not be used. So that's a lot of calculating.
One thought is to skip every other value of k (say) to half the calculations and/or to reduce the number of k-means iterations, and then to slightly smooth the resulting curve to produce a more accurate fit. I asked about this over at StackOverflow - IMHO, the smoothing question remains an open research question. | How to decide on the correct number of clusters? | I have managed to use the "L Method" to determine the number of clusters in a geographic application (ie. essentially a 2d problem although technically non-Euclidean).
The L Method is described here:
| How to decide on the correct number of clusters?
I have managed to use the "L Method" to determine the number of clusters in a geographic application (ie. essentially a 2d problem although technically non-Euclidean).
The L Method is described here:
Determining the Number of Clusters/Segments in Hierarchical Clustering/Segmentation Algorithms Stan Salvador and Philip Chan
Essentially this evaluates the fit for various values of k. An "L" shaped graph is seen with the optimum k value represented by the knee in the graph. A simple dual-line least-squares fitting calculation is used to find the knee point.
I found the method very slow because the iterative k-means has to be calculated for each value of k. Also I found k-means worked best with multiple runs and choosing the best at the end. Although each data point had only two dimensions, a simple Pythagorean distance could not be used. So that's a lot of calculating.
One thought is to skip every other value of k (say) to half the calculations and/or to reduce the number of k-means iterations, and then to slightly smooth the resulting curve to produce a more accurate fit. I asked about this over at StackOverflow - IMHO, the smoothing question remains an open research question. | How to decide on the correct number of clusters?
I have managed to use the "L Method" to determine the number of clusters in a geographic application (ie. essentially a 2d problem although technically non-Euclidean).
The L Method is described here:
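A rough R sketch of the dual-line least-squares knee search described in the answer above; wss is assumed to be a numeric vector of clustering cost evaluated at k = 1, 2, ..., length(wss), and the point weighting is only an approximation of the published method.

find_knee <- function(wss) {
  K <- length(wss); k <- seq_len(K)
  rmse <- function(m) sqrt(mean(residuals(m)^2))
  score <- sapply(2:(K - 2), function(c) {
    left  <- lm(wss[1:c] ~ k[1:c])              # line fitted to the left of the candidate knee
    right <- lm(wss[(c + 1):K] ~ k[(c + 1):K])  # line fitted to the right
    (c / K) * rmse(left) + ((K - c) / K) * rmse(right)
  })
  (2:(K - 2))[which.min(score)]                 # candidate knee with the smallest combined error
}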
|
4,387 | How to decide on the correct number of clusters? | You need to reconsider what k-means does. It tries to find the optimal Voronoi partitioning of the data set into $k$ cells. Voronoi cells are oddly shaped cells, the orthogonal structure of a Delaunay triangulation.
But what if your data set doesn't actually fit into the Voronoi scheme?
Most likely, the actual clusters will not be very meaningful. However, they may still work for whatever you are doing. Even if a "true" cluster is broken into two parts because your $k$ is too high, the result can work very well, for example for classification. So I'd say: the best $k$ is the one which works best for your particular task.
In fact, when you have $k$ clusters that are not equally sized and spaced (and thus don't fit into the Voronoi partitioning scheme), you may need to increase k for k-means to get better results. | How to decide on the correct number of clusters? | You need to reconsider what k-means does. It tries to find the optimal Voronoi partitioning of the data set into $k$ cells. Voronoi cells are oddly shaped cells, the orthogonal structure of a Delaunay | How to decide on the correct number of clusters?
You need to reconsider what k-means does. It tries to find the optimal Voronoi partitioning of the data set into $k$ cells. Voronoi cells are oddly shaped cells, the orthogonal structure of a Delaunay triangulation.
But what if your data set doesn't actually fit into the Voronoi scheme?
Most likely, the actual clusters will not be very meaningful. However, they may still work for whatever you are doing. Even if a "true" cluster is broken into two parts because your $k$ is too high, the result can work very well, for example for classification. So I'd say: the best $k$ is the one which works best for your particular task.
In fact, when you have $k$ clusters that are not equally sized and spaced (and thus don't fit into the Voronoi partitioning scheme), you may need to increase k for k-means to get better results. | How to decide on the correct number of clusters?
You need to reconsider what k-means does. It tries to find the optimal Voronoi partitioning of the data set into $k$ cells. Voronoi cells are oddly shaped cells, the orthogonal structure of a Delaunay |
4,388 | How to decide on the correct number of clusters? | As no one has pointed it out yet, I thought I would share this. There is a method called X-means (see this link) which estimates the proper number of clusters using the Bayesian information criterion (BIC). Essentially, this would be like trying K-means with different Ks, calculating BIC for each K and choosing the best K. This algorithm does that efficiently.
There is also a weka implementation, details of which can be found here. | How to decide on the correct number of clusters? | As no one has pointed it out yet, I thought I would share this. There is a method called X-means (see this link) which estimates the proper number of clusters using the Bayesian information criterion (BIC). Esse | How to decide on the correct number of clusters?
As no one has pointed it out yet, I thought I would share this. There is a method called X-means (see this link) which estimates the proper number of clusters using the Bayesian information criterion (BIC). Essentially, this would be like trying K-means with different Ks, calculating BIC for each K and choosing the best K. This algorithm does that efficiently.
There is also a weka implementation, details of which can be found here. | How to decide on the correct number of clusters?
As no one has pointed it out yet, I thought I would share this. There is a method called X-means (see this link) which estimates the proper number of clusters using the Bayesian information criterion (BIC). Esse
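The same "fit several K and compare BIC" idea can be tried directly in R with Gaussian mixtures via the mclust package; this is not the X-means algorithm itself, just the BIC-over-K selection it relies on, and X is a placeholder data matrix.

library(mclust)

fit <- Mclust(X, G = 1:9)   # fit mixtures with 1 to 9 components, scored by BIC
fit$G                       # number of components with the best BIC
plot(fit, what = "BIC")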
4,389 | How to decide on the correct number of clusters? | Overall, you can choose the number of clusters in two different ways.
knowledge driven: you should have some idea of how many clusters you need from a business point of view. For example, if you are clustering customers, you should ask yourself: after getting these customers, what should I do next? Maybe you will have a different treatment for different clusters (e.g., advertising by email or phone). Then how many possible treatments are you planning? In this example, selecting, say, 100 clusters will not make much sense.
Data driven: more clusters means over-fitting and fewer clusters means under-fitting. You can always split the data in half and run cross-validation to see how many clusters are good. Note, in clustering you still have a loss function, similar to the supervised setting.
Finally, you should always combine knowledge driven and data driven together in real world. | How to decide on the correct number of clusters? | Overall, you can choose the number of clusters in two different ways.
knowledge driven: you should have some idea of how many clusters you need from a business point of view. For example, if you are clust | How to decide on the correct number of clusters?
Overall, you can choose the number of clusters in two different ways.
knowledge driven: you should have some idea of how many clusters you need from a business point of view. For example, if you are clustering customers, you should ask yourself: after getting these customers, what should I do next? Maybe you will have a different treatment for different clusters (e.g., advertising by email or phone). Then how many possible treatments are you planning? In this example, selecting, say, 100 clusters will not make much sense.
Data driven: more clusters means over-fitting and fewer clusters means under-fitting. You can always split the data in half and run cross-validation to see how many clusters are good. Note, in clustering you still have a loss function, similar to the supervised setting.
Finally, you should always combine knowledge driven and data driven together in real world. | How to decide on the correct number of clusters?
Overall, you can choose the number of clusters in two different ways.
knowledge driven: you should have some idea of how many clusters you need from a business point of view. For example, if you are clust
4,390 | How to decide on the correct number of clusters? | Rather than use some statistical criteria (although those may be useful) I would base it on utility for the problem at hand.
Look at various solutions and then judge which one best answers your research question or your business need or else which one gives you insight into the data. So, in one situation, you might choose k based on some other characteristic of the observations. In another, you might have a particular hypothesis to test or question to answer. And in yet another you may be looking for hypotheses or research questions. | How to decide on the correct number of clusters? | Rather than use some statistical criteria (although those may be useful) I would base it on utility for the problem at hand.
Look at various solutions and then judge which one best answers your resear | How to decide on the correct number of clusters?
Rather than use some statistical criteria (although those may be useful) I would base it on utility for the problem at hand.
Look at various solutions and then judge which one best answers your research question or your business need or else which one gives you insight into the data. So, in one situation, you might choose k based on some other characteristic of the observations. In another, you might have a particular hypothesis to test or question to answer. And in yet another you may be looking for hypotheses or research questions. | How to decide on the correct number of clusters?
Rather than use some statistical criteria (although those may be useful) I would base it on utility for the problem at hand.
Look at various solutions and then judge which one best answers your resear |
4,391 | How to decide on the correct number of clusters? | Another approach is to use an evolutionary algorithm whose individuals have chromosomes of different lengths. Each individual is a candidate solution: each one carries the centroids coordinates. The number of centroids and their coordinates are evolved in order to reach a solution that yields the best clustering evaluation score.
This paper explains the algorithm. | How to decide on the correct number of clusters? | Another approach is to use an evolutionary algorithm whose individuals have chromosomes of different lengths. Each individual is a candidate solution: each one carries the centroids coordinates. The n | How to decide on the correct number of clusters?
Another approach is to use an evolutionary algorithm whose individuals have chromosomes of different lengths. Each individual is a candidate solution: each one carries the centroids coordinates. The number of centroids and their coordinates are evolved in order to reach a solution that yields the best clustering evaluation score.
This paper explains the algorithm. | How to decide on the correct number of clusters?
Another approach is to use an evolutionary algorithm whose individuals have chromosomes of different lengths. Each individual is a candidate solution: each one carries the centroids coordinates. The n |
4,392 | What is perplexity? | You have looked at the Wikipedia article on perplexity. It gives the perplexity of a discrete distribution as
$$2^{-\sum_x p(x)\log_2 p(x)}$$
which could also be written as
$$\exp\left({\sum_x p(x)\log_e \frac{1}{p(x)}}\right)$$
i.e. as a weighted geometric average of the inverses of the probabilities. For a continuous distribution, the sum would turn into an integral.
The article also gives a way of estimating perplexity for a model using $N$ pieces of test data
$$2^{-\sum_{i=1}^N \frac{1}{N} \log_2 q(x_i)}$$
which could also be written
$$\exp\left(\frac{{\sum_{i=1}^N \log_e \left(\dfrac{1}{q(x_i)}\right)}}{N}\right) \text{ or } \sqrt[N]{\prod_{i=1}^N \frac{1}{q(x_i)}}$$
or in a variety of other ways, and this should make it even clearer where "log-average inverse probability" comes from. | What is perplexity? | You have looked at the Wikipedia article on perplexity. It gives the perplexity of a discrete distribution as
$$2^{-\sum_x p(x)\log_2 p(x)}$$
which could also be written as
$$\exp\left({\sum_x p(x) | What is perplexity?
You have looked at the Wikipedia article on perplexity. It gives the perplexity of a discrete distribution as
$$2^{-\sum_x p(x)\log_2 p(x)}$$
which could also be written as
$$\exp\left({\sum_x p(x)\log_e \frac{1}{p(x)}}\right)$$
i.e. as a weighted geometric average of the inverses of the probabilities. For a continuous distribution, the sum would turn into an integral.
The article also gives a way of estimating perplexity for a model using $N$ pieces of test data
$$2^{-\sum_{i=1}^N \frac{1}{N} \log_2 q(x_i)}$$
which could also be written
$$\exp\left(\frac{{\sum_{i=1}^N \log_e \left(\dfrac{1}{q(x_i)}\right)}}{N}\right) \text{ or } \sqrt[N]{\prod_{i=1}^N \frac{1}{q(x_i)}}$$
or in a variety of other ways, and this should make it even clearer where "log-average inverse probability" comes from. | What is perplexity?
You have looked at the Wikipedia article on perplexity. It gives the perplexity of a discrete distribution as
$$2^{-\sum_x p(x)\log_2 p(x)}$$
which could also be written as
$$\exp\left({\sum_x p(x) |
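A small R sketch of both formulas above: the perplexity of a discrete distribution p, and the test-set estimate built from model probabilities q(x_i); the numeric values are made up.

# Perplexity of a discrete distribution (probabilities summing to 1)
p <- c(0.5, 0.25, 0.125, 0.125)
2^(-sum(p * log2(p)))          # about 3.36 for this p

# Perplexity of a model on N test points, where q[i] is the model probability q(x_i)
q <- c(0.2, 0.5, 0.1, 0.4, 0.3)
exp(mean(-log(q)))             # geometric mean of the inverse probabilities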
4,393 | What is perplexity? | I found this rather intuitive:
The perplexity of whatever you're evaluating, on the data you're
evaluating it on, sort of tells you "this thing is right about as
often as an x-sided die would be."
http://planspace.org/2013/09/23/perplexity-what-it-is-and-what-yours-is/ | What is perplexity? | I found this rather intuitive:
The perplexity of whatever you're evaluating, on the data you're
evaluating it on, sort of tells you "this thing is right about as
often as an x-sided die would be. | What is perplexity?
I found this rather intuitive:
The perplexity of whatever you're evaluating, on the data you're
evaluating it on, sort of tells you "this thing is right about as
often as an x-sided die would be."
http://planspace.org/2013/09/23/perplexity-what-it-is-and-what-yours-is/ | What is perplexity?
I found this rather intuitive:
The perplexity of whatever you're evaluating, on the data you're
evaluating it on, sort of tells you "this thing is right about as
often as an x-sided die would be. |
4,394 | What is perplexity? | I've wondered this too. The first explanation isn't bad, but here are my 2 nats for whatever that's worth.
First of all, perplexity has nothing to do with characterizing how often you guess something right. It has more to do with characterizing the complexity of a stochastic sequence.
We're looking at a quantity, $$2^{-\sum_x p(x)\log_2 p(x)}$$
Let's first cancel out the log and the exponentiation.
$$2^{-\sum_{x} p(x)\log_2 p(x)}=\frac{1}{\prod_{x} p(x)^{p(x)}}$$
I think it's worth pointing out that perplexity is invariant with the base you use to define entropy. So in this sense, perplexity is infinitely more unique/less arbitrary than entropy as a measurement.
Relationship to Dice
Let's play with this a bit. Let's say you're just looking at a coin. When the coin is fair, entropy is at a maximum, and perplexity is at a maximum of $$\frac{1}{\frac{1}{2}^\frac{1}{2}\times\frac{1}{2}^\frac{1}{2}}=2$$
Now what happens when we look at an $N$ sided dice? Perplexity is $$\frac{1}{\left(\frac{1}{N}^\frac{1}{N}\right)^N}=N$$
So perplexity represents the number of sides of a fair die that when rolled, produces a sequence with the same entropy as your given probability distribution.
Number of States
OK, so now that we have an intuitive definition of perplexity, let's take a quick look at how it is affected by the number of states in a model. Let's start with a probability distribution over $N$ states, and create a new probability distribution over $N+1$ states such that the likelihood ratio of the original $N$ states remain the same and the new state has probability $\epsilon$. In the case of starting with a fair $N$ sided die, we might imagine creating a new $N + 1$ sided die such that the new side gets rolled with probability $\epsilon$ and the original $N$ sides are rolled with equal likelihood. So in the case of an arbitrary original probability distribution, if the probability of each state $x$ is given by $p_x$, the new distribution of the original $N$ states given the new state will be $$p^\prime_x=p_x\left(1-\epsilon\right)$$, and the new perplexity will be given by:
$$\frac{1}{\epsilon^\epsilon\prod_x^N {p^\prime_x}^{p^\prime_x}}=\frac{1}{\epsilon^\epsilon\prod_x^N {\left(p_x\left(1-\epsilon\right)\right)}^{p_x\left(1-\epsilon\right)}} =
\frac{1}{\epsilon^\epsilon\prod_x^N p_x^{p_x\left(1-\epsilon\right)} {\left(1-\epsilon\right)}^{p_x\left(1-\epsilon\right)}}
=
\frac{1}{\epsilon^\epsilon{\left(1-\epsilon\right)}^{\left(1-\epsilon\right)}\prod_x^N p_x^{p_x\left(1-\epsilon\right)}}
$$
In the limit as $\epsilon\rightarrow 0$, this quantity approaches $$\frac{1}{\prod_x^N {p_x}^{p_x}}$$
So as you make rolling one side of the die increasingly unlikely, the perplexity ends up looking as though the side doesn't exist. | What is perplexity? | I've wondered this too. The first explanation isn't bad, but here are my 2 nats for whatever that's worth.
First of all, perplexity has nothing to do with characterizing how often you guess something | What is perplexity?
I've wondered this too. The first explanation isn't bad, but here are my 2 nats for whatever that's worth.
First of all, perplexity has nothing to do with characterizing how often you guess something right. It has more to do with characterizing the complexity of a stochastic sequence.
We're looking at a quantity, $$2^{-\sum_x p(x)\log_2 p(x)}$$
Let's first cancel out the log and the exponentiation.
$$2^{-\sum_{x} p(x)\log_2 p(x)}=\frac{1}{\prod_{x} p(x)^{p(x)}}$$
I think it's worth pointing out that perplexity is invariant with the base you use to define entropy. So in this sense, perplexity is infinitely more unique/less arbitrary than entropy as a measurement.
Relationship to Dice
Let's play with this a bit. Let's say you're just looking at a coin. When the coin is fair, entropy is at a maximum, and perplexity is at a maximum of $$\frac{1}{\frac{1}{2}^\frac{1}{2}\times\frac{1}{2}^\frac{1}{2}}=2$$
Now what happens when we look at an $N$ sided dice? Perplexity is $$\frac{1}{\left(\frac{1}{N}^\frac{1}{N}\right)^N}=N$$
So perplexity represents the number of sides of a fair die that when rolled, produces a sequence with the same entropy as your given probability distribution.
Number of States
OK, so now that we have an intuitive definition of perplexity, let's take a quick look at how it is affected by the number of states in a model. Let's start with a probability distribution over $N$ states, and create a new probability distribution over $N+1$ states such that the likelihood ratio of the original $N$ states remain the same and the new state has probability $\epsilon$. In the case of starting with a fair $N$ sided die, we might imagine creating a new $N + 1$ sided die such that the new side gets rolled with probability $\epsilon$ and the original $N$ sides are rolled with equal likelihood. So in the case of an arbitrary original probability distribution, if the probability of each state $x$ is given by $p_x$, the new distribution of the original $N$ states given the new state will be $$p^\prime_x=p_x\left(1-\epsilon\right)$$, and the new perplexity will be given by:
$$\frac{1}{\epsilon^\epsilon\prod_x^N {p^\prime_x}^{p^\prime_x}}=\frac{1}{\epsilon^\epsilon\prod_x^N {\left(p_x\left(1-\epsilon\right)\right)}^{p_x\left(1-\epsilon\right)}} =
\frac{1}{\epsilon^\epsilon\prod_x^N p_x^{p_x\left(1-\epsilon\right)} {\left(1-\epsilon\right)}^{p_x\left(1-\epsilon\right)}}
=
\frac{1}{\epsilon^\epsilon{\left(1-\epsilon\right)}^{\left(1-\epsilon\right)}\prod_x^N p_x^{p_x\left(1-\epsilon\right)}}
$$
In the limit as $\epsilon\rightarrow 0$, this quantity approaches $$\frac{1}{\prod_x^N {p_x}^{p_x}}$$
So as you make rolling one side of the die increasingly unlikely, the perplexity ends up looking as though the side doesn't exist. | What is perplexity?
I've wondered this too. The first explanation isn't bad, but here are my 2 nats for whatever that's worth.
First of all, perplexity has nothing to do with characterizing how often you guess something |
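A quick numerical check in R of the two claims above: a fair N-sided die has perplexity N, and an extra state with vanishing probability leaves the perplexity essentially unchanged.

perplexity <- function(p) 1 / prod(p^p)      # same quantity as 2^(-sum(p * log2(p)))

N <- 6
perplexity(rep(1 / N, N))                    # exactly 6 for a fair six-sided die

eps <- 1e-6                                  # add a nearly impossible 7th side
perplexity(c(rep((1 - eps) / N, N), eps))    # still approximately 6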
4,395 | What is perplexity? | There is actually a clear connection between perplexity and the odds of correctly guessing a value from a distribution, given by Cover's Elements of Information Theory 2ed (2.146): If $X$ and $X'$ are iid variables, then
$P(X=X') \ge 2^{-H(X)} = \frac{1}{2^{H(X)}} = \frac{1}{\text{perplexity}}$ (1)
To explain, perplexity of a uniform distribution X is just |X|, the number of elements. If we try to guess the values that iid samples from a uniform distribution X will take by simply making iid guesses from X, we will be correct 1/|X|=1/perplexity of the time. Since the uniform distribution is the hardest to guess values from, we can use 1/perplexity as a lower bound / heuristic approximation for how often our guesses will be right. | What is perplexity? | There is actually a clear connection between perplexity and the odds of correctly guessing a value from a distribution, given by Cover's Elements of Information Theory 2ed (2.146): If $X$ and $X'$ are | What is perplexity?
There is actually a clear connection between perplexity and the odds of correctly guessing a value from a distribution, given by Cover's Elements of Information Theory 2ed (2.146): If $X$ and $X'$ are iid variables, then
$P(X=X') \ge 2^{-H(X)} = \frac{1}{2^{H(X)}} = \frac{1}{\text{perplexity}}$ (1)
To explain, perplexity of a uniform distribution X is just |X|, the number of elements. If we try to guess the values that iid samples from a uniform distribution X will take by simply making iid guesses from X, we will be correct 1/|X|=1/perplexity of the time. Since the uniform distribution is the hardest to guess values from, we can use 1/perplexity as a lower bound / heuristic approximation for how often our guesses will be right. | What is perplexity?
There is actually a clear connection between perplexity and the odds of correctly guessing a value from a distribution, given by Cover's Elements of Information Theory 2ed (2.146): If $X$ and $X'$ are |
4,396 | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | In addition to @whuber's comments (*).
The book by Hastie et al., Statistical Learning with Sparsity, discusses this. They also use what is called the $L_0$ "norm" (quotation marks because this is not a norm in the strict mathematical sense (**)), which simply counts the number of nonzero components of a vector.
In that sense the $L_0$ norm is used for variable selection, but it together with the $l_q$ norms with $q<1$ is not convex, so difficult to optimize. They argue (an argument I think comes from Donoho in compressed sensing) that the $L_1$ norm, that is, the lasso, is the best convexification of the $L_0$ "norm" ("the closest convex relaxation of best subset selection"). That book also references some uses of other $L_q$ norms. The unit ball in the $l_q$-norm with $q<1$ looks like this
(image from wikipedia) while a pictorial explication of why the lasso can provide variable selection is
This image is from the above-referenced book. You can see that in the lasso case (the unit ball drawn as a diamond) it is much more probable that the ellipsoidal (sum of squares) contours will first touch the diamond at one of the corners. In the non-convex case (first unit ball figure) it is even more likely that the first touch between the ellipsoid and the unit ball will be at one of the corners, so that case will emphasize variable selection even more than the lasso.
If you try this "lasso with non-convex penalty" in google you will get a lot of papers doing lasso-like problems with non-convex penalty like $l_q$ with $q < 1$.
(*) For completeness I copy in whuber's comments here:
I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L_2$ norm is the standard. All other norms will not be differentiable at the origin and $L_1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L_1$ and $L_2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals.
(**) The $l_0$-"norm" lacks homogeneity, which is one of the axioms for norms. Homogeneity means for $\alpha \ge 0$ that $\| \alpha x \| = \alpha \| x \|$. | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | In addition to @whuber's comments (*).
The book by Hastie et al., Statistical Learning with Sparsity, discusses this. They also use what is called the $L_0$ "norm" (quotation marks because this is not | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
In addition to @whuber's comments (*).
The book by Hastie et al., Statistical Learning with Sparsity, discusses this. They also use what is called the $L_0$ "norm" (quotation marks because this is not a norm in the strict mathematical sense (**)), which simply counts the number of nonzero components of a vector.
In that sense the $L_0$ norm is used for variable selection, but it together with the $l_q$ norms with $q<1$ is not convex, so difficult to optimize. They argue (an argument I think comes from Donoho in compressed sensing) that the $L_1$ norm, that is, the lasso, is the best convexification of the $L_0$ "norm" ("the closest convex relaxation of best subset selection"). That book also references some uses of other $L_q$ norms. The unit ball in the $l_q$-norm with $q<1$ looks like this
(image from wikipedia) while a pictorial explication of why the lasso can provide variable selection is
This image is from the above-referenced book. You can see that in the lasso case (the unit ball drawn as a diamond) it is much more probable that the ellipsoidal (sum of squares) contours will first touch the diamond at one of the corners. In the non-convex case (first unit ball figure) it is even more likely that the first touch between the ellipsoid and the unit ball will be at one of the corners, so that case will emphasize variable selection even more than the lasso.
If you try this "lasso with non-convex penalty" in google you will get a lot of papers doing lasso-like problems with non-convex penalty like $l_q$ with $q < 1$.
(*) For completeness I copy in whuber's comments here:
I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L_2$ norm is the standard. All other norms will not be differentiable at the origin and $L_1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L_1$ and $L_2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals.
(**) The $l_0$-"norm" lacks homogeneity, which is one of the axioms for norms. Homogeneity means for $\alpha \ge 0$ that $\| \alpha x \| = \alpha \| x \|$. | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
In addition to @whuber's comments (*).
The book by Hastie et al., Statistical Learning with Sparsity, discusses this. They also use what is called the $L_0$ "norm" (quotation marks because this is not
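A small R sketch of the practical upshot discussed above (the $L_1$ penalty zeroes coefficients out, the $L_2$ penalty only shrinks them), using the glmnet package on simulated data.

library(glmnet)

set.seed(3)
x <- matrix(rnorm(100 * 20), 100, 20)             # 20 predictors, only 3 truly matter
y <- x[, 1] - 2 * x[, 2] + 0.5 * x[, 3] + rnorm(100)

lasso <- cv.glmnet(x, y, alpha = 1)               # L1 penalty
ridge <- cv.glmnet(x, y, alpha = 0)               # L2 penalty

sum(coef(lasso, s = "lambda.min") != 0)           # only a handful of nonzero coefficients
sum(coef(ridge, s = "lambda.min") != 0)           # essentially all coefficients nonzero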
4,397 | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | I think the answer to the question depends a lot on how you define "better." If I'm interpreting right, you want to know why it is that these norms appear so frequently as compared to other options. In this case, the answer is simplicity. The intuition behind regularization is that I have some vector, and I would like that vector to be "small" in some sense. How do you describe a vector's size? Well, you have choices:
Do you count how many elements it has $(L_0)$?
Do you add up the absolute values of all the elements $(L_1)$?
Do you measure how "long" the "arrow" is $(L_2)$?
Do you use the size of the biggest element $(L_\infty)$?
You could employ alternative norms like $L_3$, but they don't have friendly, physical interpretations like the ones above.
Within this list, the $L_2$ norm happens to have nice, closed-form analytic solutions for things like least squares problems. Before you had unlimited computing power, one wouldn't be able to make much headway otherwise. I would speculate that the "length of the arrow" visual is also more appealing to people than other measures of size. Even though the norm you choose for regularization impacts on the types of residuals you get with an optimal solution, I don't think most people are a) aware of that, or b) consider it deeply when formulating their problem. At this point, I expect most people keep using $L_2$ because it's "what everyone does."
An analogy would be the exponential function, $e^x$ - this shows up literally everywhere in physics, economics, stats, machine learning, or any other mathematically-driven field. I wondered forever why everything in life seemed to be described by exponentials, until I realized that we humans just don't have that many tricks up our sleeve. Exponentials have very handy properties for doing algebra and calculus, and so they end up being the #1 go-to function in any mathematician's toolbox when trying to model something in the real world. It may be that things like decoherence time are "better" described by a high-order polynomial, but those are relatively harder to do algebra with, and at the end of the day what matters is that your company is making money - the exponential is simpler and good enough.
Otherwise, the choice of norm has very subjective effects, and it is up to you as the person stating the problem to define what you prefer in an optimal solution. Do you care more that all of the components in your solution vector be similar in magnitude, or that the size of the biggest component be as small as possible? That choice will depend on the specific problem you're solving. | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | I think the answer to the question depends a lot on how you define "better." If I'm interpreting right, you want to know why it is that these norms appear so frequently as compared to other options. I | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
I think the answer to the question depends a lot on how you define "better." If I'm interpreting right, you want to know why it is that these norms appear so frequently as compared to other options. In this case, the answer is simplicity. The intuition behind regularization is that I have some vector, and I would like that vector to be "small" in some sense. How do you describe a vector's size? Well, you have choices:
Do you count how many elements it has $(L_0)$?
Do you add up the absolute values of the elements $(L_1)$?
Do you measure how "long" the "arrow" is $(L_2)$?
Do you use the size of the biggest element $(L_\infty)$?
You could employ alternative norms like $L_3$, but they don't have friendly, physical interpretations like the ones above.
Within this list, the $L_2$ norm happens to have nice, closed-form analytic solutions for things like least squares problems. Before unlimited computing power was available, you couldn't have made much headway otherwise. I would speculate that the "length of the arrow" visual is also more appealing to people than other measures of size. Even though the norm you choose for regularization impacts the types of residuals you get with an optimal solution, I don't think most people are a) aware of that, or b) consider it deeply when formulating their problem. At this point, I expect most people keep using $L_2$ because it's "what everyone does."
An analogy would be the exponential function, $e^x$ - this shows up literally everywhere in physics, economics, stats, machine learning, or any other mathematically-driven field. I wondered forever why everything in life seemed to be described by exponentials, until I realized that we humans just don't have that many tricks up our sleeve. Exponentials have very handy properties for doing algebra and calculus, and so they end up being the #1 go-to function in any mathematician's toolbox when trying to model something in the real world. It may be that things like decoherence time are "better" described by a high-order polynomial, but those are relatively harder to do algebra with, and at the end of the day what matters is that your company is making money - the exponential is simpler and good enough.
Otherwise, the choice of norm has very subjective effects, and it is up to you as the person stating the problem to define what you prefer in an optimal solution. Do you care more that all of the components in your solution vector be similar in magnitude, or that the size of the biggest component be as small as possible? That choice will depend on the specific problem you're solving. | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
I think the answer to the question depends a lot on how you define "better." If I'm interpreting right, you want to know why it is that these norms appear so frequently as compared to other options. I |
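The answer above lists four common ways to measure a vector's size. As a minimal, illustrative sketch (the vector `beta` below is made up and not taken from the answer), the four quantities can be computed with NumPy like this:

```python
import numpy as np

# Hypothetical coefficient vector, purely for illustration.
beta = np.array([0.0, -1.5, 0.0, 2.0, 0.5])

l0   = np.count_nonzero(beta)       # L0 "norm": how many elements are nonzero
l1   = np.sum(np.abs(beta))         # L1 norm: sum of absolute values
l2   = np.sqrt(np.sum(beta ** 2))   # L2 norm: Euclidean length of the "arrow"
linf = np.max(np.abs(beta))         # L-infinity norm: largest absolute element

print(l0, l1, l2, linf)             # 3 4.0 2.549... 2.0
```

Adding any one of these quantities (scaled by a tuning parameter) to a loss function gives the corresponding penalty; among them, only the $L_2$ penalty leads to the closed-form least-squares solutions mentioned in the answer.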
4,398 | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | The main reason for seeing mostly $L_1$ and $L_2$ norms is that they cover the majority of current applications. For example, the $L_1$ norm, also called the taxicab norm (a rectilinear, lattice-path distance), includes the absolute value norm.
The $L_2$ norm covers, in addition to least squares, Euclidean distances in $n$-space as well as the modulus of a complex variable. Moreover, Tikhonov regularization and ridge regression, i.e., applications minimizing $\|A\mathbf{x}-\mathbf{b}\|^2 + \|\Gamma \mathbf{x}\|^2$, are formulated in terms of the $L_2$ norm.
Wikipedia gives information about these and other norms. Also worth a mention are the $L_0$ norm, the generalized $L_p$ norm, and the $L_\infty$ norm, also called the uniform norm.
For higher dimensions, the $L_1$ norm, or even fractional norms such as $L_{2/3}$, may better discriminate between nearest neighbors than the $n$-space distance norm, $L_2$. | Why do we only see $L_1$ and $L_2$ regularization but not other norms? | The main reason for seeing mostly $L_1$ and $L_2$ norms is that they cover the majority of current applications. For example, the norm $L_1$ also called the taxicab norm, a lattice rectilinear connect | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
The main reason for seeing mostly $L_1$ and $L_2$ norms is that they cover the majority of current applications. For example, the $L_1$ norm, also called the taxicab norm (a rectilinear, lattice-path distance), includes the absolute value norm.
The $L_2$ norm covers, in addition to least squares, Euclidean distances in $n$-space as well as the modulus of a complex variable. Moreover, Tikhonov regularization and ridge regression, i.e., applications minimizing $\|A\mathbf{x}-\mathbf{b}\|^2 + \|\Gamma \mathbf{x}\|^2$, are formulated in terms of the $L_2$ norm.
Wikipedia gives information about these and other norms. Also worth a mention are the $L_0$ norm, the generalized $L_p$ norm, and the $L_\infty$ norm, also called the uniform norm.
For higher dimensions, the $L_1$ norm, or even fractional norms such as $L_{2/3}$, may better discriminate between nearest neighbors than the $n$-space distance norm, $L_2$. | Why do we only see $L_1$ and $L_2$ regularization but not other norms?
The main reason for seeing mostly $L_1$ and $L_2$ norms is that they cover the majority of current applications. For example, the norm $L_1$ also called the taxicab norm, a lattice rectilinear connect |
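The Tikhonov/ridge objective quoted in the answer above, $\|A\mathbf{x}-\mathbf{b}\|^2 + \|\Gamma \mathbf{x}\|^2$, has a closed-form minimizer, $\hat{\mathbf{x}} = (A^\top A + \Gamma^\top \Gamma)^{-1} A^\top \mathbf{b}$, which is much of what makes the $L_2$ norm so convenient. A small sketch on made-up data, assuming $\Gamma = \sqrt{\lambda}\, I$ (i.e., plain ridge regression; the data and $\lambda$ below are illustrative, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up regression problem, purely for illustration.
A = rng.normal(size=(50, 5))                      # design matrix
x_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])     # "true" coefficients
b = A @ x_true + 0.1 * rng.normal(size=50)        # noisy observations

lam = 0.5                                         # assumed regularization strength
Gamma = np.sqrt(lam) * np.eye(A.shape[1])         # Gamma = sqrt(lambda) * I, i.e. plain ridge

# Closed-form minimizer of ||A x - b||^2 + ||Gamma x||^2:
#   x_hat = (A^T A + Gamma^T Gamma)^{-1} A^T b
x_hat = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)
print(x_hat)
```

By contrast, an $L_1$ penalty (the lasso) has no closed form in general and is usually fit iteratively, e.g. by coordinate descent, which is part of why the $L_2$ formulation historically came first.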
4,399 | Mathematical Statistics Videos | Check out the following links. I'm not sure what exactly you are looking for.
Monte Carlo Simulation for Statistical Inference
Kernel methods and Support Vector Machines
Introduction to Support Vector Machines
Monte Carlo Simulations
Free Science and Video Lectures Online!
Video lectures on Machine Learning | Mathematical Statistics Videos | Check out the following links. I'm not sure what exactly you are looking for.
Monte Carlo Simulation for Statistical Inference
Kernel methods and Support Vector Machines
Introduction to Support Vector | Mathematical Statistics Videos
Check out the following links. I'm not sure what exactly you are looking for.
Monte Carlo Simulation for Statistical Inference
Kernel methods and Support Vector Machines
Introduction to Support Vector Machines
Monte Carlo Simulations
Free Science and Video Lectures Online!
Video lectures on Machine Learning | Mathematical Statistics Videos
Check out the following links. I'm not sure what exactly you are looking for.
Monte Carlo Simulation for Statistical Inference
Kernel methods and Support Vector Machines
Introduction to Support Vector |
4,400 | Mathematical Statistics Videos | See Videos on data analysis using R on Jeromy Anglim's blog. There are many links on that page, and he updates it. He has another post with many links to videos on probability and statistics as well as linear algebra and calculus. | Mathematical Statistics Videos | See Videos on data analysis using R on Jeromy Anglim's blog. There are many links on that page, and he updates it. He has another post with many links to videos on probability and statistics as well | Mathematical Statistics Videos
See Videos on data analysis using R on Jeromy Anglim's blog. There are many links on that page, and he updates it. He has another post with many links to videos on probability and statistics as well as linear algebra and calculus. | Mathematical Statistics Videos
See Videos on data analysis using R on Jeromy Anglim's blog. There are many links on that page, and he updates it. He has another post with many links to videos on probability and statistics as well |