4,001 | Which loss function is correct for logistic regression?

The relationship is as follows: $l(\beta) = \sum_i L(z_i)$.
Define the logistic function as $f(z) = \frac{e^{z}}{1 + e^{z}} = \frac{1}{1+e^{-z}}$. It has the property that $f(-z) = 1-f(z)$, or in other words:
$$
\frac{1}{1+e^{z}} = \frac{e^{-z}}{1+e^{-z}}.
$$
If you take the reciprocal of both sides and then take the log, you get:
$$
\ln(1+e^{z}) = \ln(1+e^{-z}) + z.
$$
Subtract $z$ from both sides and you should see this:
$$
-y_i\beta^Tx_i+\ln(1+e^{y_i\beta^Tx_i}) = L(z_i).
$$
Edit:
At the moment I am re-reading this answer and am confused about how I got $-y_i\beta^Tx_i+\ln(1+e^{\beta^Tx_i})$ to be equal to $-y_i\beta^Tx_i+\ln(1+e^{y_i\beta^Tx_i})$. Perhaps there's a typo in the original question.
Edit 2:
In the case that there wasn't a typo in the original question, @ManelMorales appears to be correct to draw attention to the fact that, when $y \in \{-1,1\}$, the probability mass function can be written as $P(Y_i=y_i) = f(y_i\beta^Tx_i)$, due to the property that $f(-z) = 1 - f(z)$. I am re-writing it differently here, because he introduces a new equivocation on the notation $z_i$. The rest follows by taking the negative log-likelihood for each $y$ coding. See his answer below for more details.
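As a quick sanity check of the two identities used above, here is a minimal Python sketch (the helper name `logistic` is just illustrative):

```python
import math

def logistic(z):
    """Logistic function f(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

for z in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    # Symmetry property used above: f(-z) = 1 - f(z)
    assert math.isclose(logistic(-z), 1.0 - logistic(z))
    # Log identity obtained by taking reciprocals and logs:
    # ln(1 + e^z) = ln(1 + e^{-z}) + z
    assert math.isclose(math.log(1.0 + math.exp(z)),
                        math.log(1.0 + math.exp(-z)) + z)

print("Both identities hold for the test values.")
```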
4,002 | Which loss function is correct for logistic regression?

OP mistakenly believes the relationship between these two functions is due to the number of samples (i.e. single vs all). However, the actual difference is simply how we select our training labels.
In the case of binary classification we may assign the labels $y=\pm1$ or $y=0,1$.
As it has already been stated, the logistic function $\sigma(z)$ is a good choice since it has the form of a probability, i.e. $\sigma(-z)=1-\sigma(z)$ and $\sigma(z)\in (0,1)$, with $\sigma(z)\to 1$ as $z\rightarrow \infty$ and $\sigma(z)\to 0$ as $z\rightarrow -\infty$. If we pick the labels $y=0,1$ we may assign
\begin{equation}
\begin{aligned}
\mathbb{P}(y=1|z) & =\sigma(z)=\frac{1}{1+e^{-z}}\\
\mathbb{P}(y=0|z) & =1-\sigma(z)=\frac{1}{1+e^{z}}\\
\end{aligned}
\end{equation}
which can be written more compactly as $\mathbb{P}(y|z) =\sigma(z)^y(1-\sigma(z))^{1-y}$.
It is easier to maximize the log-likelihood. Maximizing the log-likelihood is the same as minimizing the negative log-likelihood. For $m$ samples $\{x_i,y_i\}$, after taking the natural logarithm and some simplification, we obtain:
\begin{equation}
\begin{aligned}
l(z)=-\log\big(\prod_i^m\mathbb{P}(y_i|z_i)\big)=-\sum_i^m\log\big(\mathbb{P}(y_i|z_i)\big)=\sum_i^m\big(-y_iz_i+\log(1+e^{z_i})\big)
\end{aligned}
\end{equation}
Full derivation and additional information can be found in this Jupyter notebook. On the other hand, we may have instead used the labels $y=\pm 1$. It is pretty obvious then that we can assign
\begin{equation}
\mathbb{P}(y|z)=\sigma(yz).
\end{equation}
Note that the class labelled $y=0$ under the first coding corresponds to $y=-1$ here, and in both cases its probability is $\sigma(-z)$, i.e. $\mathbb{P}(y=0|z)=\mathbb{P}(y=-1|z)=\sigma(-z)$. Following the same steps as before we minimize in this case the loss function
\begin{equation}
\begin{aligned}
L(z)=-\log\big(\prod_j^m\mathbb{P}(y_j|z_j)\big)=-\sum_j^m\log\big(\mathbb{P}(y_j|z_j)\big)=\sum_j^m\log(1+e^{-y_jz_j})
\end{aligned}
\end{equation}
where the last step follows because the negative sign turns $-\log\sigma(y_jz_j)$ into the logarithm of the reciprocal, $\log(1+e^{-y_jz_j})$. While we should not equate these two forms, given that in each form $y$ takes different values, nevertheless these two are equivalent:
\begin{equation}
\begin{aligned}
-y_iz_i+\log(1+e^{z_i})\equiv \log(1+e^{-y_jz_j})
\end{aligned}
\end{equation}
The case $y_i=1$ is trivial to show. If $y_i \neq 1$, then $y_i=0$ on the left hand side and $y_i=-1$ on the right hand side.
While there may be fundamental reasons as to why we have two different forms (see Why there are two different logistic loss formulation / notations?), one reason to choose the former is for practical considerations. In the former we can use the property $\partial \sigma(z) / \partial z=\sigma(z)(1-\sigma(z))$ to trivially calculate $\nabla l(z)$ and $\nabla^2l(z)$, both of which are needed for convergence analysis (i.e. to determine the convexity of the loss function by calculating the Hessian).
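As a quick numerical check of the claimed equivalence, here is a minimal Python sketch (helper names are illustrative) that compares the per-sample loss under the $y\in\{0,1\}$ coding with the loss under the corresponding $y\in\{-1,1\}$ coding, and also verifies $\partial\sigma(z)/\partial z=\sigma(z)(1-\sigma(z))$ against a finite difference:

```python
import math

def sigma(z):
    """Logistic (sigmoid) function."""
    return 1.0 / (1.0 + math.exp(-z))

def loss_01(y, z):
    """Per-sample negative log-likelihood with labels y in {0, 1}."""
    return -y * z + math.log(1.0 + math.exp(z))

def loss_pm1(y, z):
    """Per-sample negative log-likelihood with labels y in {-1, +1}."""
    return math.log(1.0 + math.exp(-y * z))

for z in [-2.0, -0.3, 0.0, 0.7, 3.0]:
    for y01 in (0, 1):
        ypm = 2 * y01 - 1  # map the {0, 1} label to the {-1, +1} label
        assert math.isclose(loss_01(y01, z), loss_pm1(ypm, z))

    # Derivative property used for the gradient and Hessian:
    # d sigma(z) / dz = sigma(z) * (1 - sigma(z))
    h = 1e-6
    numeric = (sigma(z + h) - sigma(z - h)) / (2.0 * h)
    assert math.isclose(numeric, sigma(z) * (1.0 - sigma(z)), rel_tol=1e-6)

print("The two loss forms agree, and sigma' = sigma * (1 - sigma).")
```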
4,003 | Which loss function is correct for logistic regression?

I learned the loss function for logistic regression as follows.
Logistic regression performs binary classification, and so the label outputs are binary, 0 or 1. Let $P(y=1|x)$ be the probability that the binary output $y$ is 1 given the input feature vector $x$. The coefficients $w$ are the weights that the algorithm is trying to learn.
$$P(y=1|x) = \frac{1}{1 + e^{-w^{T}x}}$$
Because logistic regression is binary, the probability $P(y=0|x)$ is simply 1 minus the term above.
$$P(y=0|x) = 1- \frac{1}{1 + e^{-w^{T}x}}$$
The objective $J(w)$ sums, over the $m$ training examples, (A) the label $y^{(i)}$ multiplied by $\log P(y=1)$ and (B) the complement $1 - y^{(i)}$ multiplied by $\log P(y=0)$. (Written this way it is the log-likelihood, which is maximized; the loss that is minimized is its negative.)
$$J(w) = \sum_{i=1}^{m} y^{(i)} \log P(y=1) + (1 - y^{(i)}) \log P(y=0)$$
where $y^{(i)}$ indicates the $i^{th}$ label in your training data. If a training instance has a label of $1$, then $y^{(i)}=1$, leaving the left summand in place but making the right summand with $1-y^{(i)}$ become $0$. On the other hand, if a training instance has $y=0$, then the right summand with the term $1-y^{(i)}$ remains in place, but the left summand becomes $0$. Log probability is used for ease of calculation.
If we then replace $P(y=1)$ and $P(y=0)$ with the earlier expressions, then we get:
$$J(w) = \sum_{i=1}^{m} y^{(i)} \log \left(\frac{1}{1 + e^{-w^{T}x^{(i)}}}\right) + (1 - y^{(i)}) \log \left(1- \frac{1}{1 + e^{-w^{T}x^{(i)}}}\right)$$
You can read more about this form in these Stanford lecture notes.
4,004 | Which loss function is correct for logistic regression?

Instead of Mean Squared Error, we use a cost function called Cross-Entropy, also known as Log Loss. Cross-entropy loss can be divided into two separate cost functions: one for $y=1$ and one for $y=0$.
\begin{align}
j(\theta) &= \frac 1 m \sum_{i=1}^m {\rm Cost}(h_\theta(x^{(i)}), y^{(i)}) \\
{\rm Cost}(h_\theta(x), y) &= -\log(h_\theta(x)) && \text{if } y = 1 \\
{\rm Cost}(h_\theta(x), y) &= -\log(1-h_\theta(x)) && \text{if } y = 0
\end{align}
When we put them together we have:
$$
j(\theta) = -\frac 1 m \sum_{i=1}^m \big[y^{(i)}\log(h_\theta(x^{(i)})) + (1-y^{(i)})\log(1-h_\theta(x^{(i)})) \big]
$$
Multiplying by $y$ and $(1−y)$ in the above equation is a sneaky trick that lets us use the same equation to solve for both the $y=1$ and $y=0$ cases. If $y=0$, the first term cancels out. If $y=1$, the second term cancels out. In both cases we only perform the operation we need to perform.
If you don't want to use a for loop, you can try a vectorized form of the equation above
\begin{align}
h &= g(X\theta) \\
J(\theta) &= \frac 1 m \cdot \big(-y^T\log(h)-(1-y)^T\log(1-h)\big)
\end{align}
The entire explanation can be viewed on the Machine Learning Cheatsheet.
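For illustration, here is a direct NumPy transcription of the vectorized cost above, on made-up data (`X`, `y`, and `theta` are arbitrary, and a small epsilon guards the logarithms):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y, eps=1e-12):
    """Vectorized cross-entropy cost: J = (1/m) * (-y^T log(h) - (1-y)^T log(1-h))."""
    m = len(y)
    h = sigmoid(X @ theta)
    return (-(y @ np.log(h + eps)) - ((1.0 - y) @ np.log(1.0 - h + eps))) / m

# Tiny made-up dataset: a column of ones (intercept) plus one feature.
X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.3], [1.0, 0.1]])
y = np.array([1.0, 0.0, 1.0, 0.0])
theta = np.array([0.0, 1.0])

print(cost(theta, X, y))
```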
4,005 | Which loss function is correct for logistic regression?

They are the same functions.
In the first one, $y_i$ is either $0$ or $1$, while in the second, $y_i$ is either $-1$ or $1$.
The second one can be derived from the first one, as the probabilities in the second function can be written as a single equation, namely the sigmoid function of $z_i y_i$ ($z_i$ is the linear combination of features of observation $i$; $y_i$ is $-1$ or $1$, the observed label of observation $i$).
4,006 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

The first three points, as far as I can tell, are a variation on a single argument.
Scientists often treat uncertainty measurements ($12 \pm 1$, for instance) as probability distributions that look like this: [figure: a flat-topped, uniform-looking distribution spanning the $\pm$ interval]
When actually, they are much more likely to look like this: [figure: a peaked, Gaussian-looking distribution centred on the estimate]
As a former chemist, I can confirm that many scientists with non-mathematical backgrounds (primarily non-physical chemists and biologists) don't really understand how uncertainty (or error, as they call it) is supposed to work. They recall a time in undergrad physics where they maybe had to use them, possibly even having to calculate a compound error through several different measurements, but they never really understood them. I too was guilty of this, and assumed all measurements had to come within the $\pm$ interval. Only recently (and outside academia), did I find out that error measurements usually refer to a certain standard deviation, not an absolute limit.
So to break down the numbered points in the article:
Measurements outside the CI still have a chance of happening, because the real (likely gaussian) probability is non-zero there (or anywhere for that matter, although they become vanishingly small when you get far out). If the values after the $\pm$ do indeed represent one s.d., then there is still a 32% chance of a data point falling outside of them.
The distribution is not uniform (flat topped, as in the first graph), it is peaked. You are more likely to get a value in the middle than you are at the edges. It's like rolling a bunch of dice, rather than a single die.
95% is an arbitrary cutoff, and coincides almost exactly with two standard deviations.
This point is more of a comment on academic honesty in general. A realisation I had during my PhD is that science isn't some abstract force, it is the cumulative efforts of people attempting to do science. These are people who are trying to discover new things about the universe, but at the same time are also trying to keep their kids fed and keep their jobs, which unfortunately in modern times means some form of publish or perish is at play. In reality, scientists depend on discoveries that are both true and interesting, because uninteresting results don't result in publications.
Arbitrary thresholds such as $p < 0.05$ can often be self-perpetuating, especially among those who don't fully understand statistics and just need a pass/fail stamp on their results. As such, people do sometimes half-jokingly talk about 'running the test again until you get $p < 0.05$'. It can be very tempting, especially if a Ph.D/grant/employment is riding on the outcome, for these marginal results to be jiggled around until the desired $p = 0.0498$ shows up in the analysis.
Such practices can be detrimental to the science as a whole, especially if done widely, all in the pursuit of a number which is, in the eyes of nature, meaningless. This part in effect is exhorting scientists to be honest about their data and work, even when that honesty is to their detriment.
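The 32% figure (and the analogous tail areas for two and three standard deviations) can be checked with the normal CDF; a minimal sketch using only the standard library:

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for k in (1, 2, 3):
    outside = 2.0 * (1.0 - normal_cdf(k))  # two-sided tail area beyond k sd
    print(f"P(|X - mu| > {k} sd) = {outside:.4f}")
# Prints roughly 0.3173, 0.0455, 0.0027 -> ~32%, ~4.6%, ~0.27%.
```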
4,007 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

Much of the article and the figure you include make a very simple point:
Lack of evidence for an effect is not evidence that it does not exist.
For example,
"In our study, mice given cyanide did not die at statistically-significantly higher rates" is not evidence for the claim "cyanide has no effect on mouse deaths".
Suppose we give two mice a dose of cyanide and one of them dies. In the control group of two mice, neither dies. Since the sample size was so small, this result is not statistically significant ($p > 0.05$). So this experiment does not show a statistically significant effect of cyanide on mouse lifespan. Should we conclude that cyanide has no effect on mice? Obviously not.
But this is the mistake the authors claim scientists routinely make.
For example in your figure, the red line could arise from a study on very few mice, while the blue line could arise from the exact same study, but on many mice.
The authors suggest that, instead of reporting whether a p-value clears a significance threshold, scientists describe the range of effect sizes that are more or less compatible with their findings. In our two-mouse experiment, we would have to write that our findings are both compatible with cyanide being very poisonous, and with it not being poisonous at all. In a 100-mouse experiment, we might find a confidence interval range of $[60\%,70\%]$ fatality with a point estimate of $65\%$. Then we should write that our results would be most compatible with an assumption that this dose kills 65% of mice, but our results would also be somewhat compatible with percentages as low as 60 or as high as 70, and that our results would be less compatible with a truth outside that range. (We should also describe what statistical assumptions we make to compute these numbers.)
4,008 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

I'll try.
The confidence interval (which they rename compatibility interval) shows the values of the parameter that are most compatible with the data. But that doesn't mean the values outside the interval are absolutely incompatible with the data.
Values near the middle of the confidence (compatibility) interval are more compatible with the data than values near the ends of the interval.
95% is just a convention. You can compute 90% or 99% or any% intervals.
The confidence/compatibility intervals are only helpful if the experiment was done properly, if the analysis was done according to a preset plan, and if the data conform with the assumptions of the analysis methods. If you've got bad data analyzed badly, the compatibility interval is not meaningful or helpful.
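To make the "95% is just a convention" point concrete, here is a small sketch on made-up data: the same sample and the same standard error, with only the multiplier changing between 90%, 95% and 99% intervals.

```python
import math
import random

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(50)]  # made-up sample

n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
se = sd / math.sqrt(n)

# z multipliers for two-sided 90%, 95% and 99% intervals
for level, z in [(90, 1.645), (95, 1.960), (99, 2.576)]:
    print(f"{level}% CI: [{mean - z * se:.2f}, {mean + z * se:.2f}]")
```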
4,009 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

The great XKCD did this cartoon a while ago, illustrating the problem. If results with $P\gt0.05$ are simplistically treated as proving a hypothesis of "no effect" - and all too often they are - then real effects will routinely be declared absent simply because the study lacked the power to detect them. Similarly, if $P\lt0.05$ is taken as disproving a hypothesis, then about 1 in 20 true hypotheses will be wrongly rejected. P-values don't tell you whether a hypothesis is true or false; they tell you how surprising the data would be if the null hypothesis were true. It seems the referenced article is kicking back against the all-too-common naïve interpretation.
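A small simulation in the spirit of that cartoon (numbers are purely illustrative): run 20 tests in which the null hypothesis is true in every single case and count how many nevertheless come out "significant" at the 0.05 level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 20
significant = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution, so the null is true.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    if stats.ttest_ind(group_a, group_b).pvalue < 0.05:
        significant += 1

print(f"{significant} of {n_tests} comparisons were 'significant' at p < 0.05")
# On average about 1 in 20 will be, purely by chance.
```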
4,010 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

tl;dr- It's fundamentally impossible to prove that things are unrelated; statistics can only be used to show when things are related. Despite this well-established fact, people frequently misinterpret a lack of statistical significance to imply a lack of relationship.
A good encryption method should generate a ciphertext that, as far as an attacker can tell, doesn't bear any statistical relationship to the protected message. Because if an attacker can determine some sort of relationship, then they can get information about your protected messages by just looking at the ciphertexts – which is a Bad Thing™.
However, the ciphertext and its corresponding plaintext 100% determine each other. So even if the world's very best mathematicians can't find any significant relationship no matter how hard they try, we still obviously know that the relationship is not merely there, but that it's completely and fully deterministic. This determinism can exist even when we know it's practically impossible to find the relationship.
Despite this, we still get people who'll do stuff like:
Pick some relationship they want to "disprove".
Do some study on it that's inadequate to detect the alleged relationship.
Report the lack of a statistically significant relationship.
Twist this into a lack of relationship.
This leads to all sorts of "scientific studies" that the media will (falsely) report as disproving the existence of some relationship.
If you want to design your own study around this, there're a bunch of ways you can do it:
Lazy research: The easiest way, by far, is to just be incredibly lazy about it. It's just like the figure linked in the question [figure omitted]: you can easily get that "'Non-significant' study (high $P$ value)" curve by simply having small sample sizes, allowing a lot of noise, and other various lazy things. In fact, if you're so lazy as to not collect any data, then you're already done!
Lazy analysis: For some silly reason, some people think a Pearson correlation coefficient of $0$ means "no correlation". Which is true, in a very limited sense. But here are a few cases to observe [figure omitted: the classic gallery of scatterplots that all have $r \approx 0$]. That is, there may not be a "linear" relationship, but obviously there can be a more complex one. And it doesn't need to be "encryption"-level complex, but rather "it's actually just a bit of a squiggly line" or "there're two correlations" or whatever.
Lazy answering: In the spirit of the above, I'm going to stop here. To, ya know, be lazy!
But, seriously, the article sums it up well in:
Let’s be clear about what must stop: we should never conclude there is ‘no difference’ or ‘no association’ just because a P value is larger than a threshold such as 0.05 or, equivalently, because a confidence interval includes zero.
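To illustrate the "Lazy analysis" point above: a perfectly deterministic but non-linear relationship can still have a Pearson correlation of essentially zero (a minimal sketch, not real data):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)  # symmetric around zero
y = x ** 2                       # y is completely determined by x

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")    # ~0 despite a perfect deterministic relationship
```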
4,011 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

For a didactic introduction to the problem, Alex Reinhart wrote a book that is fully available online and published by No Starch Press (with more content):
https://www.statisticsdonewrong.com
It explains the root of the problem without sophisticated maths and has specific chapters with examples from simulated data set:
https://www.statisticsdonewrong.com/p-value.html
https://www.statisticsdonewrong.com/regression.html
In the second link, a graphical example illustrates the p-value problem. The p-value is often used as a single indicator of statistical difference between datasets, but it is clearly not enough on its own.
Edit for a more detailed answer:
In many cases, studies aim to reproduce a precise type of data, either physical measurements (say the number of particles in an accelerator during a specific experiment) or quantitative indicators (like the number of patients developing specific symptoms during drug tests). In either situation, many factors can interfere with the measurement process, like human error or system variations (people reacting differently to the same drug). This is the reason experiments are often done hundreds of times if possible, and drug testing is done, ideally, on cohorts of thousands of patients.
The data set is then reduced to its most simple values using statistics: means, standard deviations and so on. The problem in comparing models through their means is that the measured values are only indicators of the true values, and they also vary statistically depending on the number and precision of the individual measurements. We have ways to make a good guess about which measures are likely to be the same and which are not, but only with a certain certainty. The usual threshold is to say that if, assuming the two values really were the same, a difference at least this large would occur less than one time in twenty, we consider them "statistically different" (that's the meaning of $P<0.05$); otherwise we do not conclude.
This leads to the odd conclusions illustrated in Nature's article, where two studies give the same mean value but the researchers' conclusions differ due to the size of the samples. This, and other tropes from statistical vocabulary and habits, is becoming more and more important in the sciences. Another side of the problem is that people tend to forget that they are using statistical tools and draw conclusions about effects without proper verification of the statistical power of their samples.
For another illustration, the social and life sciences are recently going through a true replication crisis, due to the fact that a lot of effects were taken for granted by people who didn't check the statistical power of famous studies (while others falsified data, but that is another problem).
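As a rough illustration of statistical power (effect size, group size and seed are made up), one can estimate by simulation how often a two-sample t-test at $\alpha = 0.05$ detects a true difference of half a standard deviation with 20 subjects per group:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(effect=0.5, n_per_group=20, alpha=0.05, n_sim=2000):
    """Fraction of simulated experiments in which a real effect reaches p < alpha."""
    hits = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect, 1.0, n_per_group)
        if stats.ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(f"estimated power: {estimated_power():.2f}")  # roughly one third for these settings
```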
4,012 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

For me, the most important part was:
"...[We] urge authors to discuss the point estimate, even when they have a large P value or a wide interval, as well as discussing the limits of that interval."
In other words: place a higher emphasis on discussing estimates (center and confidence interval), and a lower emphasis on "Null-hypothesis testing".
How does this work in practice? A lot of research boils down to measuring effect sizes, for example "We measured a risk ratio of 1.20, with a 95% C.I. ranging from 0.97 to 1.33". This is a suitable summary of a study. You can immediately see the most probable effect size and the uncertainty of the measurement. Using this summary, you can quickly compare this study to other studies like it, and ideally you can combine all the findings in a weighted average.
Unfortunately, such studies are often summarized as "We did not find a statistically significant increase of the risk ratio". This is a valid conclusion of the study above. But it is not a suitable summary of the study, because you can't easily compare studies using these kinds of summaries. You don't know which study had the most precise measurement, and you can't intuit what the finding of a meta-study might be. And you don't immediately spot when studies claim a "non-significant risk ratio increase" by having confidence intervals that are so large you can hide an elephant in them.
4,013 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

It is "significant" that statisticians, not just scientists, are rising up and objecting to the loose use of "significance" and $P$ values. The most recent issue of The American Statistician is devoted entirely to this matter. See especially the lead editorial by Wasserstein, Schirm, and Lazar.
4,014 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

It is a fact that for several reasons, p-values have indeed become a problem.
However, despite their weaknesses, they have important advantages such as simplicity and intuitive theory. Therefore, while overall I agree with the Comment in Nature, I do think that rather than ditching statistical significance completely, a more balanced solution is needed. Here are a few options:
1. "Changing the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries". In my view, Benjamin et al addressed very well the most compelling arguments against adopting a higher standard of evidence.
2. Adopting second-generation p-values. These seem to be a reasonable solution to most of the problems affecting classical p-values. As Blume et al say here, second-generation p-values could help "improve rigor, reproducibility, & transparency in statistical analyses."
3. Redefining the p-value as "a quantitative measure of certainty — a “confidence index” — that an observed relationship, or claim, is true." This could help change the analysis goal from achieving significance to appropriately estimating this confidence.
Importantly, "results that do not reach the threshold for statistical significance or “confidence” (whatever it is) can still be important and merit publication in leading journals if they address important research questions with rigorous methods."
I think that could help mitigate the obsession with p-values by leading journals, which is behind the misuse of p-values.
4,015 | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)

One thing that has not been mentioned is that error or significance are statistical estimates, not actual physical measurements: they depend heavily on the data you have available and how you process it. You can only provide a precise value of error and significance if you have measured every possible event. This is usually not the case, far from it!
Therefore, every estimate of error or significance, in this case any given P-value, is by definition inaccurate and should not be trusted to describe the underlying research – let alone phenomena! – accurately. In fact, it should not be trusted to convey anything about results WITHOUT knowledge of what is being represented, how the error was estimated and what was done to quality control the data. For example, one way to reduce estimated error is to remove outliers. If this removal is also done statistically, then how can you actually know the outliers were real errors instead of unlikely real measurements that should be included in the error? How could the reduced error improve the significance of the results? What about erroneous measurements near the estimates? They shrink the estimated error and can impact statistical significance, but can lead to wrong conclusions!
For that matter, I do physical modeling and have created models myself where the 3-sigma error is completely unphysical. That is, statistically there's around one event in a thousand (well...more often than that, but I digress) that would result in a completely ridiculous value. The magnitude of a 3-sigma error in my field is roughly equivalent to having a best possible estimate of 1 cm turn out to be a meter every now and then. However, this is indeed an accepted result when providing a statistical +/- interval calculated from physical, empirical data in my field. Sure, narrowness of the uncertainty interval is respected, but often the value of the best-guess estimate is a more useful result even when the nominal error interval would be larger.
As a side note, I was once personally responsible for one of those one in a thousand outliers. I was in process of calibrating an instrument when an event happened which we were supposed to measure. Alas, that data point would have been exactly one of those 100 fold outliers, so in a sense, they DO happen and are included in the modeling error! | What does "Scientists rise up against statistical significance" mean? (Comment in Nature) | One thing that has not been mentioned is that error or significance are statistical estimates, not actual physical measurements: They depend heavily on the data you have available and how you process | What does "Scientists rise up against statistical significance" mean? (Comment in Nature)
4,016 | Mean absolute deviation vs. standard deviation | Both answer how far your values are spread around the mean of the observations.
An observation that is 1 under the mean is equally "far" from the mean as a value that is 1 above the mean. Hence you should neglect the sign of the deviation. This can be done in two ways:
Calculate the absolute value of the deviations and sum these.
Square the deviations and sum these squares. Due to the square, you give more weight to high deviations, and hence this sum will differ from the sum of the absolute deviations.
After calculating the sum of absolute deviations, you divide by the number of observations to get the "mean deviation"; for the "standard deviation", you average the squared deviations and then take the square root.
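For concreteness, here is a small sketch of both computations in Python with NumPy (the data values are made up purely for illustration):
```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
deviations = x - x.mean()

# Mean (absolute) deviation: average the absolute deviations.
mad = np.mean(np.abs(deviations))          # 1.5

# Standard deviation: average the squared deviations, then take the square root.
sd = np.sqrt(np.mean(deviations ** 2))     # 2.0

print(mad, sd, np.std(x))                  # np.std gives the same population SD
```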
The mean deviation is rarely used.
4,017 | Mean absolute deviation vs. standard deviation | Today, statistical values are predominantly calculated by computer programs (Excel, ...), not by hand-held calculators anymore . Hence, I would posit that calculating "mean deviation" is no more cumbersome than calculating "standard deviation". Although standard deviation may have "... mathematical properties that make it more useful in statistics", it is, in fact, a distortion of the concept of variance from a mean, since it gives extra weighting to data points far from the mean. It may take some time, but I, for one, hope statisticians evolve back to using "mean deviation" more often when discussing the distribution among data points -- it more accurately represents how we actually think of the distribution. | Mean absolute deviation vs. standard deviation | Today, statistical values are predominantly calculated by computer programs (Excel, ...), not by hand-held calculators anymore . Hence, I would posit that calculating "mean deviation" is no more cumb | Mean absolute deviation vs. standard deviation
4,018 | Mean absolute deviation vs. standard deviation | They both measure the same concept, but are not equal.
You are comparing $\frac{1}{n} \sum |x_i-\bar{x}|$ with $\sqrt{\frac{1}{n} \sum (x_i-\bar{x})^2}$. They aren't equal for two reasons:
Firstly the square-root operator is not linear, or $\sqrt{a+b} \neq \sqrt{a} + \sqrt{b}$. Therefore the sum of absolute deviations is not equal to the square root of the sum of squared deviations, even though the absolute function can be represented as the square function followed by a square root:
$\sum|x_i-\bar{x}| = \sum \sqrt{(x_i-\bar{x})^2} \neq \sqrt{\sum(x_i-\bar{x})^2}$
as the square root is taken after the sum has been calculated.
Secondly, $n$ is now also under the square root in the standard deviation calculation.
Try calculating $\frac{1}{n}\sum \sqrt{(x_i-\bar{x})^2}$ - it should yield the same answer as the mean deviation and help you to understand.
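A quick numerical check of these points (Python/NumPy, arbitrary illustrative data):
```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
d = x - x.mean()

mean_abs_dev = np.mean(np.abs(d))         # (1/n) * sum |x_i - xbar|         -> 1.5
std_dev      = np.sqrt(np.mean(d ** 2))   # sqrt((1/n) * sum (x_i - xbar)^2) -> 2.0
suggested    = np.mean(np.sqrt(d ** 2))   # (1/n) * sum sqrt((x_i - xbar)^2) -> 1.5, same as the mean deviation

print(mean_abs_dev, std_dev, suggested)
```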
The reason why the standard deviation is preferred is because it is mathematically easier to work with later on, when calculations become more complicated. | Mean absolute deviation vs. standard deviation | They both measure the same concept, but are not equal.
You are comparing $\frac{1}{n} \sum |x_i-\bar{x}|$ with $\sqrt{\frac{1}{n} \sum (x_i-\bar{x})^2}$. They aren't equal for two reasons:
Firstly th | Mean absolute deviation vs. standard deviation
They both measure the same concept, but are not equal.
You are comparing $\frac{1}{n} \sum |x_i-\bar{x}|$ with $\sqrt{\frac{1}{n} \sum (x_i-\bar{x})^2}$. They aren't equal for two reasons:
Firstly the square-root operator is not linear, or $\sqrt{a+b} \neq \sqrt{a} + \sqrt{b}$. Therefore the sum of absolute deviations is not equal to the square root of the sum of squared deviations, even though the absolute function can be represented as the square function followed by a square root:
$\sum|x_i-\bar{x}| = \sum \sqrt{(x_i-\bar{x})^2} \neq \sqrt{\sum(x_i-\bar{x})^2}$
as the square root is taken after the sum has been calculated.
Secondly, $n$ is now also under the square root in the standard deviation calculation.
Try calculating $\frac{1}{n}\sum \sqrt{(x_i-\bar{x})^2}$ - it should yield the same answer as the mean deviation and help you to understand.
The reason why the standard deviation is preferred is because it is mathematically easier to work with later on, when calculations become more complicated. | Mean absolute deviation vs. standard deviation
They both measure the same concept, but are not equal.
You are comparing $\frac{1}{n} \sum |x_i-\bar{x}|$ with $\sqrt{\frac{1}{n} \sum (x_i-\bar{x})^2}$. They aren't equal for two reasons:
Firstly th |
4,019 | Mean absolute deviation vs. standard deviation | Both measure the dispersion of your data by computing the distance of the data to its mean.
the mean absolute deviation uses the L1 norm (also called Manhattan or rectilinear distance)
the standard deviation uses the L2 norm (also called Euclidean distance)
The difference between the two norms is that the standard deviation is calculating the square of the difference whereas the mean absolute deviation is only looking at the absolute difference. Hence large outliers will create a higher dispersion when using the standard deviation instead of the other method.
The Euclidean distance is indeed also more often used. The main reason is that the standard deviation has nice properties when the data is normally distributed. So under this assumption, it is recommended to use it. However, people often make this assumption for data which is actually not normally distributed, which creates issues. If your data is not normally distributed, you can still use the standard deviation, but you should be careful with the interpretation of the results.
Finally you should know that both measures of dispersion are particular cases of the Minkowski distance, for p=1 and p=2. You can increase p to get other measures of the dispersion of your data.
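As a sketch of that last point (the function name and the data are invented for illustration), the whole family can be written as the p-th root of the mean of |x_i - mean|^p:
```python
import numpy as np

def minkowski_deviation(x, p):
    # Dispersion measure: p-th root of the mean of |x_i - mean|^p.
    d = np.abs(x - np.mean(x))
    return np.mean(d ** p) ** (1.0 / p)

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(minkowski_deviation(x, 1))   # mean absolute deviation (1.5)
print(minkowski_deviation(x, 2))   # standard deviation (2.0)
print(minkowski_deviation(x, 3))   # weights large deviations even more heavily
```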
4,020 | Mean absolute deviation vs. standard deviation | @itsols, I'll add to Kasper's important notion that "The mean deviation is rarely used." Why is standard deviation considered generally a better measure of variability than mean absolute deviation? Because the arithmetic mean is the locus of minimal sum of squared (and not sum of absolute) deviations from it.
Suppose you want to assess the degree of altruism. Then you probably won't ask a person about how much he is ready to give money in a "general situation" of life. Rather, you'll choose to ask how much he is ready to do it in the constrained situation, where he has minimal possible resources for his own living. I.e. what is the amount of individual altruism in the situation when that amount is the individual's minimal?
Likewise, what is the degree of variability of these data? Intuitively, the best measuring index for it is the one which is minimized (or maximized) down to the limit in this context. The context is "around the arithmetic mean". Then st. deviation is the best choice in this sense. If the context were "around the median" then mean |deviation| would be the best choice, because median is the locus of minimal sum of absolute deviations from it.
4,021 | Mean absolute deviation vs. standard deviation | One thing worth adding is that the most likely reason your 30-year-old textbook used the absolute mean deviation as opposed to standard deviation is that it is easier to calculate by hand (no squaring / square roots). Now that calculators are readily accessible to high school students, there is no reason not to ask them to calculate standard deviation.
There are still some situations where absolute deviations are used instead of standard deviations in complex model fitting. Absolute deviations are less sensitive to extreme outliers (values far from the mean/trendline) compared to standard deviations because they don't square that distance before adding it to the values from other data points. Since model fitting methods aim to reduce the total deviation from the trendline (according to whichever method the deviation is calculated), methods that use standard deviation can end up creating a trendline that diverges away from the majority of points in order to be closer to an outlier. Using absolute deviations reduces this distortion, but at the cost of making calculation of the trendline more complicated.
That's because, as others have noted, the standard deviation has mathematical properties and relationships which generally make it more useful in statistics. But "useful" should never be confused with perfect.
4,022 | Mean absolute deviation vs. standard deviation | They are similar measures that try to quantify the same notion. Typically you use st. deviation since it has nice properties, if you make some assumption about the underlying distribution.
On the other hand the absolute value in mean deviation causes some issues from a mathematical perspective since you can't differentiate it and you can't analyse it easily. Some discussion here.
4,023 | Mean absolute deviation vs. standard deviation | No. You are wrong. Just kidding. There are, however, many viable reasons why one would want to compute mean deviation rather than formal std, and in this way I am in agreement with the viewpoint of my engineering Brethren. Certainly if I am computing statistics to compare with a body of existing work which is expressing qualitative as well as quantitative conclusions, I would stick with std. But, for example, assume I am trying to run some fast anomaly-detection algorithms on binary, machine-generated data. I'm not after academic comparisons as my final goal. But I am interested in the fundamental inference about the "spread" of a particular flow of data about its mean. I'm also interested in computing this iteratively, and as efficiently as possible. In digital electronic hardware, we play dirty tricks all the time -- we distill multiplications and divisions into left and right shifts, respectively, and for "computing" absolute values, we simply drop the sign bit (and compute one's or two's complement if necessary, both easy transforms). So, my choice is to compute it in the most knuckle-dragging way I can, and apply linear thresholds to my computations for fast anomaly detection over desired time windows.
4,024 | Mean absolute deviation vs. standard deviation | Amar Sagoo has a very good article explaining this.
To add my own attempt at an intuitive understanding:
Mean deviation is a decent way of asking how far a hypothetical "average" point is from the mean, but it doesn't really work for asking how far all the points are from each other, or how "spread out" the data are.
Standard deviation is asking how far apart all the points are, so it incorporates more useful information than just the mean deviation (which is why mean deviation is usually only used as a stepping stone toward understanding standard deviation).
A good analogy is the Pythagorean Theorem.
The Pythagorean Theorem tells us the distance between points in two dimensions by taking the horizontal distance and the vertical distance, squaring them, adding the squares, and taking the square root of the total.
If you look at it closely, the formula for (population) Standard Deviation is basically the same as the Pythagorean Theorem, but with a lot more than two dimensions (and using distance from each point to the mean as the distance in each dimension).
As such it gives the most accurate picture of the "distance" between all the points in your data set.
To push that analogy a little further, the mean absolute deviation would be like taking the average of the horizontal and vertical distances, which is shorter than the total distance, while the sum absolute deviation would be adding the horizontal and vertical distances, which is longer than the actual distance.
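The analogy can be checked directly: the population standard deviation is the Euclidean (Pythagorean) length of the vector of deviations, divided by the square root of the number of points. A small NumPy sketch with made-up data:
```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
d = x - x.mean()

euclidean_length = np.linalg.norm(d)        # sqrt of the sum of squared deviations
print(euclidean_length / np.sqrt(len(x)))   # 2.0
print(np.std(x))                            # 2.0, the population standard deviation
```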
4,025 | Mean absolute deviation vs. standard deviation | The standard deviation represents dispersion due to random processes. Specifically, many physical measurements which are expected to be due to the sum of many independent processes have normal (bell curve) distributions.
The normal probability distribution is given by:
$$
Y = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}}
$$
Where $Y$ is the probability density of getting a value $x$ given a mean $\mu$ and $\sigma$, the standard deviation!
In other words, the standard deviation is a term that arises out of independent random variables being summed together. So, I disagree with some of the answers given here - standard deviation isn't just an alternative to mean deviation which "happens to be more convenient for later calculations". Standard deviation is the right way to model dispersion for normally distributed phenomena.
If you look at the equation, you can see the standard deviation more heavily weights larger deviations from the mean. Intuitively, you can think of the mean deviation as measuring the actual average deviation from the mean, whereas the standard deviation accounts for a bell shaped aka "normal" distribution around the mean. So if your data is normally distributed, the standard deviation tells you that if you sample more values, ~68% of them will be found within one standard deviation around the mean.
On the other hand, if you have a single random variable, the distribution might look like a rectangle, with an equal probability of values appearing anywhere within a range. In this case, the mean deviation might be more appropriate.
TL;DR if you have data that are due to many underlying random processes or which you simply know to be distributed normally, use the standard deviation.
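As a rough empirical check of the "~68% within one standard deviation" statement (a simulation, so the figure will wobble slightly around 0.683):
```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# Fraction of draws that land within one standard deviation of the mean.
within_one_sd = np.mean(np.abs(samples - samples.mean()) <= samples.std())
print(within_one_sd)   # approximately 0.683
```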
4,026 | Mean absolute deviation vs. standard deviation | Consider three sets of data having the same mean and MD but different ranges. It is interesting to see how the SD changes as the range of the data changes.
SET 1: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19  Range: 1-19  Mean = 10  MD = 5  SD = 6.05
SET 2: 2, 3, 5, 7, 7, 9, 13, 15, 14, 23  Range: 2-23  Mean = 10  MD = 5  SD = 6.28
SET 3: 3, 5, 5, 7, 7, 8, 10, 12, 13, 30  Range: 3-30  Mean = 10  MD = 5  SD = 7.70
It can be observed that all three sets have the same mean and MD. It is to be highlighted that while the MD does not change as the range changes, the SD changes with every change in range. This clearly establishes the supremacy of SD as compared to MD in dealing with variation in the data.
4,027 | Mean absolute deviation vs. standard deviation | The two measures differ indeed. The first is often referred to as Mean Absolute Deviation (MAD) and the second is Standard Deviation (STD). In embedded applications with severely limited computing power and limited program memory, avoiding the square root calculations can be very desirable.
From a quick rough test it seems that MAD = f * STD with f somewhere between 0.78 and 0.80 for a set of gaussian distributed random samples.
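For Gaussian data the theoretical ratio is sqrt(2/pi) ≈ 0.798, which is consistent with that 0.78-0.80 observation. A rough simulation sketch:
```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100_000)

mad = np.mean(np.abs(x - x.mean()))
std = x.std()

print(mad / std)            # close to 0.798
print(np.sqrt(2 / np.pi))   # 0.7978..., the exact ratio for a Gaussian
```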
4,028 | Mean absolute deviation vs. standard deviation | Each of the three parameters - Mean (M), Mean Absolute Deviation (MAD) and Standard Deviation (σ), calculated for a set, provides some unique information about the set which the other two parameters don't. σ loosely includes the information provided by MAD, but not vice versa. Hence, σ is conveniently used everywhere.
M => around which number the observations are centered. But a set can have its observations quite far from the mean, on an average, as compared to another set having the same mean. In order to get that information (i.e. the average distance of observations from its mean), we move to MAD.
MAD => how far each observation individually is from the mean of all observations, but it doesn't tell how the observations are arranged in relation to one another. To get that information (i.e. the average distance of the set itself from its mean, which depends upon how the observations are arranged in relation to one another), we move to σ.
σ => how far the complete set is from its mean (or, how far the observations are from each other).
If you want to go deeper, have a look at my article here.
4,029 | Intuitive explanation of the bias-variance tradeoff? | Imagine some 2D data--let's say height versus weight for students at a high school--plotted on a pair of axes.
Now suppose you fit a straight line through it. This line, which of course represents a set of predicted values, has zero statistical variance. But the bias is (probably) high--i.e., it doesn't fit the data very well.
Next, suppose you model the data with a high-degree polynomial spline. You're not satisfied with the fit, so you increase the polynomial degree until the fit improves (and it will, to arbitrary precision, in fact). Now you have a situation with bias that tends to zero, but the variance is very high.
Note that the bias-variance trade-off doesn't describe a proportional relationship--i.e., if you plot bias versus variance you won't necessarily see a straight line through the origin with slope -1. In the polynomial spline example above, reducing the degree almost certainly increases the variance much less than it decreases the bias.
The bias-variance tradeoff is also embedded in the sum-of-squares error function. Below, I have rewritten (but not altered) the usual form of this equation to emphasize this:
$$
E\left(\left(y - \dot{f}(x)\right)^2\right) = \sigma^2 + \left[f(x) - \frac{1}{\kappa}\sum_{i=1}^{\kappa}f(x_i)\right]^2+\frac{\sigma^2}{\kappa}
$$
On the right-hand side, there are three terms: the first of these is just the irreducible error (the variance in the data itself); this is beyond our control so ignore it. The second term is the square of the bias; and the third is the variance. It's easy to see that as one goes up the other goes down--they can't both vary together in the same direction. Put another way, you can think of least-squares regression as (implicitly) finding the optimal combination of bias and variance from among candidate models.
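One way to see the trade-off numerically is to refit a model on many training sets drawn from the same process and measure the squared bias and the variance of its predictions at a fixed test point. The sketch below does this for a straight line versus a degree-9 polynomial; the data-generating curve, noise level and sample sizes are invented for illustration:
```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)        # "true" underlying curve
x_test, n, sigma = 0.3, 20, 0.3

def predictions(degree, n_repeats=2000):
    preds = []
    for _ in range(n_repeats):
        x = rng.uniform(0, 1, n)
        y = f(x) + rng.normal(0, sigma, n)
        coefs = np.polyfit(x, y, degree)    # least-squares polynomial fit
        preds.append(np.polyval(coefs, x_test))
    return np.array(preds)

for degree in (1, 9):
    p = predictions(degree)
    print(degree, (p.mean() - f(x_test)) ** 2, p.var())   # squared bias, variance
# The straight line shows a larger squared bias but a smaller variance
# than the degree-9 polynomial.
```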
4,030 | Intuitive explanation of the bias-variance tradeoff? | Let's say you are considering catastrophic health insurance, and there is a 1% probability of getting sick which would cost 1 million dollars. The expected cost of getting sick is thus 10,000 dollars. The insurance company, wanting to make a profit, will charge you 15,000 for the policy.
Buying the policy gives an expected cost to you of 15,000, which has a variance of 0 but can be thought of as biased since it is 5,000 more than the real expected cost of getting sick.
Not buying the policy gives an expected cost of 10,000, which is unbiased since it is equal to the true expected cost of getting sick, but has a very high variance.
The tradeoff here is between an approach that is consistently wrong but never by much and an approach that is correct on average but is more variable.
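Written out (amounts from the example above), the two options look like this:
```python
import numpy as np

p_sick, cost_sick, premium = 0.01, 1_000_000, 15_000

# Buy the policy: a fixed cost, so zero variance, but 5,000 above the true expected cost.
buy_mean, buy_sd = premium, 0.0

# Don't buy: unbiased on average, but hugely variable.
no_buy_mean = p_sick * cost_sick                          # 10,000
no_buy_sd = cost_sick * np.sqrt(p_sick * (1 - p_sick))    # about 99,500

print(buy_mean, buy_sd)
print(no_buy_mean, round(no_buy_sd))
```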
4,031 | Intuitive explanation of the bias-variance tradeoff? | First, let's understand the meaning of bias and variance:
Imagine the center of the red bulls' eye region is the true mean value of our target random variable which we are trying to predict. Every time we take a sample set of observations and predict the value of this variable we plot a blue dot. We predicted correctly if the blue dot falls inside the red region. Bias is the measure of how far off the predicted blue dots are from the center of the red region (the true mean). Intuitively, bias is a quantification of error.
Variance is how scattered our predictions are.
The top left is the ideal condition but it is hard to achieve in practice, and the bottom right is the worst-case scenario which is easy to achieve in practice (usually the starting condition for randomly initialized models).
Our goal is to go from the bottom right (high bias high variance) situation to the top left situation (low variance low bias).
But the problem here is:
Unfortunately, achieving the lowest variance and lowest bias simultaneously is hard. (Why so? That's a deeper question).
When we try to decrease one of these parameters (either bias or variance), the other parameter increases.
Now the trade-off here is:
There is a sweet spot somewhere in between which produces the least prediction error in the long run.
These pictures are taken from http://scott.fortmann-roe.com/docs/BiasVariance.html . Check out the explanations with linear regression and K-nearest neighbors for more details.
4,032 | Intuitive explanation of the bias-variance tradeoff? | I highly recommend having a look at the Caltech ML course by Yaser Abu-Mostafa, Lecture 8 (Bias-Variance Tradeoff). Here is an outline:
Say you are trying to learn the sine function:
Our training set consists of only 2 data points.
Let's try to do it with two models, $h_0(x)=b$ and $h_1(x)=ax+b$:
For $h_0(x)=b$, when we try with many different training sets (i.e. we repeatedly select 2 data points and perform the learning on them), we obtain (the left graph represents all the learnt models, the right graph represents their mean g and their variance (grey area)):
For $h_1(x)=ax+b$, when we try with many different training sets, we obtain:
If we compare the models learnt with $h_0$ and $h_1$, we can see that $h_0$ yields simpler models than $h_1$, hence a lower variance when we consider all the models learnt with $h_0$, but the best model g (in red on the graph) learnt with $h_1$ is better than the best model g learnt with $h_0$, hence a lower bias with $h_1$:
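The experiment is easy to reproduce numerically. A rough sketch (two noise-free points per training set, as in the lecture; the grid and the number of repetitions are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(7)
x_grid = np.linspace(-np.pi, np.pi, 200)
f = np.sin(x_grid)

preds_h0, preds_h1 = [], []
for _ in range(10_000):
    x = rng.uniform(-np.pi, np.pi, 2)                 # a training set of 2 points
    y = np.sin(x)
    preds_h0.append(np.full_like(x_grid, y.mean()))   # h0: constant b
    a, b = np.polyfit(x, y, 1)                        # h1: line ax + b
    preds_h1.append(a * x_grid + b)

for name, preds in (("h0", np.array(preds_h0)), ("h1", np.array(preds_h1))):
    bias_sq = np.mean((preds.mean(axis=0) - f) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(name, bias_sq, variance)
# h0 comes out with the larger bias, h1 with the larger variance.
```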
If you look at the evolution of the cost function with respect to the size of the training set (figures from Coursera - Machine Learning by Andrew Ng):
High bias:
High variance:
4,033 | Intuitive explanation of the bias-variance tradeoff? | The basic idea is that too simple a model will underfit (high bias)
while too complex a model will overfit (high variance) and that bias
and variance trade off as model complexity is varied.
(Neal, 2019)
However, while the bias-variance tradeoff seems to hold for some simple algorithms like linear regression or $k$-NN, it's not that simple. I'll briefly summarize some of the points made in this blog entry, by Neal (2019), and Neal et al (2018).
There's a growing body of evidence that this is not generally true and that in some machine learning algorithms we observe the so-called double descent phenomenon. There is some preliminary evidence that for random forests, gradient boosting algorithms, and neural networks this might not be the case. It was observed that wider networks (more neurons) generalize better. Moreover, as discussed by Belkin et al (2019), for overparametrized neural networks and random forests, the bias-variance curve hits a certain threshold, where the model overfits, and then, as the number of parameters grows beyond the number of datapoints, the test error starts falling again with growing model complexity (see figure from the paper reproduced below).
A nice example of this was given by Neal (2019), and Neal et al (2018), using a simple, single-layer, dense neural network, trained with stochastic gradient descent on a subset of 100 samples from MNIST. Even though the number of parameters exceeds the number of samples, we do not see a tradeoff in terms of decreased test set performance.
Belkin et al (2019) give an even more striking example using random forests.
As discussed by Neal (2019), the lack of a bias-variance tradeoff for neural networks was visible even in the widely cited paper by Geman et al (1992), who did the first empirical study on this topic and popularized it. Moreover, when discussing the bias-variance tradeoff, it is often shown how squared error can be decomposed into bias and variance, even though this does not directly apply to other error metrics, and the fact that you can decompose it does not in itself prove that there is a tradeoff.
All this shows that we do not yet have a good understanding of how and why some of the modern machine learning algorithms work, and some of our commonly held intuitions may be misleading.
Belkin, M., Hsu, D., Ma, S., & Mandal, S. (2019). Reconciling modern machine learning practice and the bias-variance trade-off. stat, 1050, 10.
Neal, B. (2019). On the Bias-Variance Tradeoff: Textbooks Need an Update. arXiv preprint arXiv:1912.08286.
Neal, B., Mittal, S., Baratin, A., Tantia, V., Scicluna, M., Lacoste-Julien, S., & Mitliagkas, I. (2018). A modern take on the bias-variance tradeoff in neural networks. arXiv preprint arXiv:1810.08591.
4,034 | Intuitive explanation of the bias-variance tradeoff? | Here is a very simple explanation. Imagine you have a scatter plot of points {x_i,y_i} that were sampled from some distribution. You want to fit some model to it. You can choose a linear curve or a higher order polynomial curve or something else. Whatever you choose is going to be applied to predict new y values for a set of {x_i} points. Let's call these the validation set. Let's assume that you also know their true {y_i} values and we are using these just to test our model.
The predicted values are going to be different from the real values. We can measure the properties of their differences. Let's just consider a single validation point. Call it x_v and choose some model. Let's make a set of predictions for that one validation point by using say 100 different random samples for training the model. So we are going to get 100 y values. The difference between the mean of those values and the true value is called the bias. The variance of that distribution of 100 predictions is the variance.
Depending on what model we use we can trade off between these two. Let's consider the two extremes. The lowest variance model is one that completely ignores the data. Let's say we simply predict 42 for every x. That model has zero variance across different training samples at every point. However it is clearly biased. The bias is simply 42-y_v.
At the other extreme we can choose a model which overfits as much as possible. For example, fit a 100 degree polynomial to 100 data points. Or alternatively, linearly interpolate between nearest neighbors. This has low bias. Why? Because for any random sample the neighboring points to x_v will fluctuate widely, but they will interpolate high just about as often as they will interpolate low. So on average across the samples they will cancel out, and the bias will therefore be very low unless the true curve has lots of high frequency variation.
However these overfit models have large variance across the random samples because they are not smoothing the data. The interpolation model just uses two data points to predict the intermediate one, and these therefore create a lot of noise.
Note that the bias is measured at a single point. It doesn't matter if it is positive or negative. It is still a bias at any given x. The biases averaged over all the x values will probably be small but that doesn't make it unbiased.
One more example. Say you are trying to predict the temperature at a set of locations in the US at some time. Let's assume you have 10,000 training points. Again, you can get a low variance model by doing something simple: just return the average. But this will be biased low in the state of Florida and biased high in the state of Alaska. You'd do better if you used the average for each state. But even then, you will be biased high in the winter and low in the summer. So now you include the month in your model. But you're still going to be biased low in Death Valley and high on Mt Shasta. So now you go to the zip code level of granularity. But eventually if you keep doing this to reduce the bias, you run out of data points. Maybe for a given zip code and month, you have only one data point. Clearly this is going to create lots of variance. So you see having a more complicated model lowers the bias at the expense of variance.
So you see there is a trade off. Models that are smoother have lower variance across training samples but don't capture the real shape of the curve as well. Models that are less smooth can better capture the curve but at the expense of being noisier. Somewhere in the middle is a Goldilocks model that makes an acceptable tradeoff between the two.
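To make this concrete, here is a small R simulation sketch (the sine-shaped true curve, noise level and the two toy models are my own illustrative choices, not part of the original explanation). It refits two models on many random training samples and measures bias and variance of their predictions at a single validation point x_v:
set.seed(1)
f <- function(x) sin(2 * pi * x)              # assumed "true" curve
x_v <- 0.5                                    # the single validation point
n_train <- 25; n_rep <- 200
pred_const <- pred_lin <- numeric(n_rep)
for (i in 1:n_rep) {
  x <- runif(n_train)
  y <- f(x) + rnorm(n_train, sd = 0.3)        # a fresh noisy training sample
  pred_const[i] <- 42                         # the "ignore the data" model
  pred_lin[i]   <- predict(lm(y ~ x), newdata = data.frame(x = x_v))
}
c(bias_const = mean(pred_const) - f(x_v), var_const = var(pred_const))
c(bias_lin = mean(pred_lin) - f(x_v), var_lin = var(pred_lin))
The constant model shows zero variance but a huge bias at x_v, while the fitted model trades a little variance for a much smaller bias.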
4,035 | Intuitive explanation of the bias-variance tradeoff? | Imagine that the model-building task could be repeated for different training datasets, i.e. we train a new model on a different dataset every time (shown in the figure below). If we fix a test data point and evaluate the model prediction at this point, the predictions will vary due to randomness in the model generation process. In the figure below, P_1, P_2, …, P_n are different predictions, and random too.
Let the mean of these predictions be $\bar{P}$.
Bias error is the difference between the mean of these predictions, $\bar{P}$, and the correct value.
Variance error is the variance of these predictions, i.e. how spread out these predictions are.
This is the intuition behind bias and variance error.
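As a minimal numeric illustration (the numbers below are made up, not taken from the original post), given the n predictions for one test point and its correct value, the two error components can be computed as:
preds  <- c(2.1, 1.8, 2.4, 2.0, 1.9)             # hypothetical predictions P_1, ..., P_n
y_true <- 2.5                                    # hypothetical correct value
bias_error     <- mean(preds) - y_true           # difference between mean prediction and truth
variance_error <- mean((preds - mean(preds))^2)  # spread of the predictions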
For a detailed explanation, visit right intuition behind bias variance tradeoff.
4,036 | What is the difference between prediction and inference? | Inference: Given a set of data you want to infer how the output is generated as a function of the data.
Prediction: Given a new measurement, you want to use an existing data set to build a model that reliably chooses the correct identifier from a set of outcomes.
Inference: You want to find out what effect Age, Passenger Class, and Gender have on surviving the Titanic Disaster. You can fit a logistic regression and infer the effect each passenger characteristic has on survival rates.
Prediction: Given some information on a Titanic passenger, you want to choose from the set $\{\text{lives}, \text{dies}\}$ and be correct as often as possible. (See bias-variance tradeoff for prediction in case you wonder how to be correct as often as possible.)
Prediction doesn't revolve around establishing the most accurate relation between the input and the output; accurate prediction cares about putting new observations into the right class as often as possible.
So the 'practical example' crudely boils down to the following difference:
Given a set of passenger data for a single passenger the inference approach gives you a probability of surviving, the classifier gives you a choice between lives or dies.
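A minimal R sketch of both views, assuming a hypothetical data frame titanic with columns Survived (0/1), Age, Pclass and Sex (the data frame and variable names are illustrative, not from the original answer):
fit <- glm(Survived ~ Age + Pclass + Sex, data = titanic, family = binomial)
summary(fit)                                   # inference: effect of each passenger characteristic
new_passenger <- data.frame(Age = 30, Pclass = 2, Sex = "female")
p <- predict(fit, newdata = new_passenger, type = "response")  # estimated probability of surviving
ifelse(p > 0.5, "lives", "dies")               # prediction: commit to one of the two classes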
Tuning classifiers is a very interesting and crucial topic in the same way that correctly interpreting p-values and confidence intervals is.
4,037 | What is the difference between prediction and inference? | On page 20 of the book, the authors provide a beautiful example which made me understand the difference.
Here's the paragraph from the book: An Introduction to Statistical Learning
"
For example, in a real estate setting, one may seek to relate values of
homes to inputs such as crime rate, zoning, distance from a river, air quality, schools, income level of community, size of houses, and so forth. In this case one might be interested in how the individual input variables affect the prices—that is, how much extra will a house be worth if it has a view of the river? This is an inference problem. Alternatively, one may simply be interested in predicting the value of a home given its characteristics: is this house under- or over-valued? This is a prediction problem.
" | What is the difference between prediction and inference? | In page 20 of the book, the authors provide a beautiful example which made me understand the difference.
4,038 | What is the difference between prediction and inference? | Generally when doing data analysis we imagine that there is some kind of "data generating process" which gives rise to the data, and inference refers to learning about the structure of this process while prediction means being able to actually forecast the data that come from it. Oftentimes the two go together, but not always.
An example where the two go hand in hand would be the simple linear regression model
$$
Y_i = \beta_0 + \beta_1 x_i + \epsilon_i .
$$
Inference in this case would mean estimating the parameters of the model $\beta_0$ and $\beta_1$ and our predictions would just be computed from our estimates of these parameters. But there are other types of models where one is able to make sensible predictions, but the model doesn't necessarily lead to meaningful insights about what is happening behind the scenes. Some examples of these kinds of models would be complicated ensemble methods which can lead to good predictions but are sometimes difficult or impossible to understand.
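A short R sketch of the two uses of the same fitted model (simulated data, purely illustrative):
set.seed(42)
x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)   # simulate from the simple linear model
fit <- lm(y ~ x)
coef(fit); confint(fit)                        # inference: estimates of beta_0 and beta_1
predict(fit, newdata = data.frame(x = 1.5))    # prediction: forecast y at a new x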
4,039 | What is the difference between prediction and inference? | Prediction uses the estimated f to forecast into the future. Suppose you observe a variable $y_t$, maybe it's the revenue of the store. You want to make financial plans for your business, and need to forecast the revenue in the next quarter. You suspect that the revenue depends on the income of the population in this quarter $x_{1,t}$ and the time of the year $x_{2,t}$. So, you posit that it is a function:
$$y_t=f(x_{1,t-1},x_{2,t-1})+\varepsilon_t$$
Now, if you get the data on income, say personal disposable income series from BEA, and construct the time of year variable, you may estimate the function f, then plug the latest values of the population income and the time of the year into this function. This will yield the prediction for the next quarter of the revenue of the store.
Inference uses the estimated function f to study the impact of the factors on the outcome, and to do other things of this nature. In my earlier example you might be interested in how much the season of the year determines the revenue of the store. So, you could look at the partial derivative $\partial f/\partial x_{2,t}$ - the sensitivity to the season. If f were in fact a linear model, this would be the regression coefficient $\beta_2$ of the second variable $x_{2,t-1}$.
Prediction and inference may use the same estimation procedure to determine f, but they have different requirements on this procedure and the incoming data. A well-known case is so-called collinearity, where your input variables are highly correlated with each other. For instance, you measure weight, height and belly circumference of obese people. It is likely that these variables are strongly correlated, not necessarily linearly though. It so happens that collinearity can be a serious issue for inference, but merely an annoyance to prediction. The reason is that when predictors $x$ are correlated it's harder to separate the impact of one predictor from the impact of the other predictors. For prediction this doesn't matter; all you care about is the quality of the forecast.
4,040 | What is the difference between prediction and inference? | You are not alone here.
After reading answers, I am not confused anymore - not because I understand the difference, but because I understand it is in the eye of the beholder and verbally induced.
I am sure now those two terms are political definitions rather than scientific ones.
Take for example the explanation from the book, the one that colleges tried to use as a good one: "how much extra will a house be worth if it has a view of the river? This is an inference problem."
From my point of view, this is absolutely a prediction problem. You are a civil construction company owner, and you want to choose the best ground for building your next set of houses. You have to choose between two locations in the same town, one near the river, the other near the train station. You want to predict the prices for both locations. Or you want to infer. You are going to apply the exact same methods of statistics, but you name the process. :)
4,041 | What is the difference between prediction and inference? | Imagine you are a medical doctor on an intensive care unit. You have a patient with a strong fever and a given number of blood cells and a given body weight and a hundred other data points, and you want to predict if he or she is going to survive. If yes, he is going to conceal that story about his other kid from his wife; if not, it is important for him to reveal it while he can.
The doctor can do this prediction based on the data of former patients he had at his unit. Based on his software knowledge, he can predict using either a generalized linear model (glm) or a neural net (nn).
1. Generalized Linear Model
There are far too many correlated parameters for the glm, so to get to a result the doctor will have to make assumptions (linearity etc.) and decisions about which parameters are likely to have an influence. The glm will reward him with a t-test of significance for each of his parameters, so he might gather strong evidence that gender and fever have a significant influence, body weight not necessarily so.
2. Neural net
The neural net will swallow and digest all the information there is in the sample of former patients. It will not care whether predictors are correlated, and it will not reveal much information on whether the influence of body weight seems to be important only in the sample at hand or in general (at least not at the level of expertise that the doctor has to offer). It will just compute a result.
What's better
What method to choose depends on the angle from which you look at the problem: As a patient, I would prefer the neural net, which uses all available data for a best guess about what will happen to me, without strong and obviously wrong assumptions like linearity. The doctor, who wants to present some data in a journal, needs p-values. Medicine is very conservative: they are going to ask for p-values. So the doctor wants to report that, in such a situation, gender has a significant influence. For the patient, that does not matter; just use whatever influence the sample suggests to be most likely.
In this example, the patient wants prediction, the scientist-side of the doctor wants inference. Mostly, when you want to understand a system, then inference is good. If you need to make a decision where you cannot understand the system, prediction will have to suffice.
4,042 | What is the difference between prediction and inference? | Given a data set of $n=100$ observations, $k=50$ independent variables $x_i$, and one dependent variable $y$, inference answers questions such as:
What subset or combination of the $k$ independent variables affect $y$?
If I were able to increase the value of $x_1$ by 10%, how much would $y$ increase? (i.e. $\frac{\partial y}{\partial x_1}$)
Both of these questions are questions about the parameters in the “true model” that generated the data.
Prediction answers a much simpler question:
If we set the independent variables $x_i$ to some specific values, what is my best guess for $y$?
This question does not ask anything about the parameters in the true model. Nor does it require the existence of a "true model". Prediction simply involves a plug-and-chug to generate a value $\hat{y}$ that is ideally close to $y$.
4,043 | What is the difference between prediction and inference? | I know many answers have been posted already, but for those of you who don't read the book (Introduction to Statistical Learning), here are three exercises found in the second chapter. See if you can solve them; they helped me quite a bit to understand the difference between inference and prediction.
Explain whether each scenario is a classification or regression problem, and indicate whether we are most interested in inference or prediction.
We collect a set of data on the top 500 firms in the US. For each
firm we record profit, number of employees, industry and the CEO
salary. We are interested in understanding which factors affect CEO
salary.
We are considering launching a new product and wish to
know whether it will be a success or a failure. We collect data on 20 similar products
that were previously launched. For each product we
have recorded whether it was a success or failure, price charged for the product, marketing budget, competition price, and ten
other variables.
We are interested in predicting the % change in
the US dollar in relation to the weekly changes in the world stock
markets. Hence we collect weekly data for all of 2012. For each week
we record the % change in the dollar, the % change in the US market,
the % change in the British market, and the % change in the German
market.
If you want the answers, they can be found here. Note that the exercise above is number 2.
4,044 | What is the difference between prediction and inference? | There's good research showing that a strong predictor of whether borrowers will repay their loans is whether they use felt to protect their floors from being scratched by furniture legs. This "felt" variable will be a distinct aid to a predictive model where the outcome is repay vs. default. However, if lenders want to gain greater leverage over this outcome, they will be remiss in thinking they can do so by distributing felt as widely as they can.
"How likely is this borrower to repay?" is a prediction problem; "How can I influence the result?" is a causal inference problem. | What is the difference between prediction and inference? | There's good research showing that a strong predictor of whether borrowers will repay their loans is whether they use felt to protect their floors from being scratched by furniture legs. This "felt" | What is the difference between prediction and inference?
4,045 | What is the difference between prediction and inference? | Let y = f(x). Then:
Prediction (what is the value of Y for a given value of x): for a specific value of x, what could be the value of Y?
Inference (how Y changes with a change in x): what could be the effect on Y if x changes?
Prediction example: suppose y represents the salary of a person; if we provide inputs such as years of experience and degree, then our function predicts the salary of the employee.
Inference example: suppose the cost of living changes; how much is the change in salary?
4,046 | How to generate correlated random numbers (given means, variances and degree of correlation)? | To answer your question on "a good, ideally quick way to generate correlated random numbers":
Given a desired variance-covariance matrix $C$, which must be positive definite, its Cholesky decomposition is $C = LL^T$, with $L$ a lower triangular matrix.
If you now use this matrix $L$ to project an uncorrelated random variable vector $X$, the resulting vector $Y = LX$ will consist of correlated random variables (with covariance matrix $C$ when the components of $X$ have unit variance).
You can find a concise explanation of why this happens here.
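A minimal R sketch of the Cholesky approach (the target means, variances and correlation below are illustrative choices, not from the original answer):
set.seed(1)
n  <- 10000
mu <- c(5, 10)                                 # desired means
C  <- matrix(c(4, 4.8, 4.8, 9), 2, 2)          # variances 4 and 9, correlation 0.8
L  <- t(chol(C))                               # chol() returns the upper triangle, so transpose
X  <- matrix(rnorm(2 * n), nrow = 2)           # uncorrelated standard normal draws
Y  <- t(L %*% X + mu)                          # correlated draws with the target moments
cor(Y); colMeans(Y)
Here cor(Y) should come out close to 0.8 and colMeans(Y) close to (5, 10).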
4,047 | How to generate correlated random numbers (given means, variances and degree of correlation)? | +1 to @user11852, and @jem77bfp, these are good answers. Let me approach this from a different perspective, not because I think it's necessarily better in practice, but because I think it's instructive. Here are a few relevant facts that we know already:
$r$ is the slope of the regression line when both $X$ and $Y$ are standardized, i.e., $\mathcal N(0,1)$,
$r^2$ is the proportion of the variance in $Y$ attributable to the variance in $X$,
(also, from the rules for variances):
the variance of a random variable multiplied by a constant is the constant squared times the original variance:
$$\text{Var}[aX]=a^2\text{Var}[X]$$
variances add, i.e., the variance of the sum of two random variables (assuming they are independent) is the sum of the two variances:
$$\text{Var}[X+\varepsilon]=\text{Var}[X]+\text{Var}[\varepsilon]$$
Now, we can combine these four facts to create two standard normal variables whose populations will have a given correlation, $r$ (more properly, $\rho$), although the samples you generate will have sample correlations that vary. The idea is to create a pseudorandom variable, $X$, that is standard normal, $\mathcal N(0,1)$, and then find a coefficient, $a$, and an error variance, $v_e$, such that $Y \sim\mathcal N(0,a^2+v_e)$, where $a^2+v_e=1$. (Note that $|a|$ must be $\le 1$ for this to work, and that, moreover, $a=r$.) Thus, you start with the $r$ that you want; that's your coefficient, $a$. Then you figure out the error variance that you will need, it's $1-r^2$. (If your software requires you to use the standard deviation, take the square root of that value.) Finally, for each pseudorandom variate, $x_i$, that you have generated, generate a pseudorandom error variate, $e_i$, with the appropriate error variance $v_e$, and compute the correlated pseudorandom variate, $y_i$, by multiplying and adding.
If you wanted to do this in R, the following code might work for you:
correlatedValue = function(x, r){
  r2 = r**2
  ve = 1 - r2                       # error variance needed so that var(y) = 1
  SD = sqrt(ve)                     # error standard deviation
  e  = rnorm(length(x), mean=0, sd=SD)
  y  = r*x + e                      # slope r on the standardized x plus independent noise
  return(y)
}
set.seed(5)
x = rnorm(10000)
y = correlatedValue(x=x, r=.5)
cor(x,y)
[1] 0.4945964
(Edit: I forgot to mention:) As I've described it, this procedure gives you two standard normal correlated variables. If you don't want standard normals, but want the variables to have some specific means (not 0) and SDs (not 1), you can transform them without affecting the correlation. Thus, you would subtract the observed mean to ensure that the mean is exactly $0$, multiply the variable by the SD you want and then add the mean you want. If you want the observed mean to fluctuate normally around the desired mean, you would add the initial difference back. Essentially, this is a z-score transformation in reverse. Because this is a linear transformation, the transformed variable will have the same correlation with the other variable as before.
Again, this, in its simplest form, only lets you generate a pair of correlated variables (this could be scaled up, but gets ugly fast), and is certainly not the most convenient way to get the job done. In R, you would want to use ?mvrnorm in the MASS package, both because it's easier and because you can generate many variables with a given population correlation matrix. Nonetheless, I think it's worthwhile to have walked through this process to see how some basic principles play out in a simple way.
4,048 | How to generate correlated random numbers (given means, variances and degree of correlation)? | In general this is not a simple thing to do, but I believe there are packages for multivariate normal variable generation (at least in R, see mvrnorm in the MASS package), where you just input a covariance matrix and a mean vector.
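For the multivariate normal case this is essentially a one-liner; a short sketch with an illustrative mean vector and covariance matrix:
library(MASS)
mu    <- c(1, 2)
Sigma <- matrix(c(1, 0.6, 0.6, 2), 2, 2)       # variances 1 and 2, covariance 0.6
Z     <- mvrnorm(n = 10000, mu = mu, Sigma = Sigma)
cov(Z); colMeans(Z)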
There is also one more "constructive" approach. Let's say we want to model a random vector $(X_1,X_2)$ and we have its distribution function $F(x_1,x_2)$ with joint density $f(x_1,x_2)$. The first step is to get the marginal distribution of $X_1$: integrate the joint density over all $x_2$, which gives the marginal density and hence the marginal distribution function:
$$f_{X_1}(x_1)= \int_{-\infty}^{\infty} f(x_1,x_2)\,dx_2, \qquad F_{X_1}(x_1)= \int_{-\infty}^{x_1} f_{X_1}(t)\,dt. $$
Then we find $F^{-1}_{X_1}$ - the inverse function of $F_{X_1}$ - and plug in a random variable $\xi_1$ which is uniformly distributed on the interval $[0,1]$. In this step we generate the first coordinate $\hat{x}_1=F^{-1}_{X_1}(\xi_1)$.
Now, since we have got one coordinate, we need to plug it into the joint distribution and obtain the conditional distribution function given $x_1=\hat{x}_1$:
$$F(x_2 \mid X_1=\hat{x}_1)= \frac{\int_{-\infty}^{x_2} f(\hat{x}_1,t)\,dt}{f_{X_1}(\hat{x}_1)}, $$
where $f_{X_1}$ is a probability density function of the marginal $X_1$ distribution; i.e. $F'_{X_1}(x_1)=f_{X_1}(x_1)$.
Then again you generate a uniformly distributed variable $\xi_2$ on $[0,1]$ (independent of $\xi_1$) and plug it in to the inverse of $F(x_2 | X_1=\hat{x}_1)$. Therefore you obtain $\hat{x}_2=(F(x_2 | X_1=\hat{x}_1))^{-1}(\xi_2)$; that is, $\hat x_2$ satisfies $F(\hat x_2 | X_1=\hat{x}_1) = \xi_2$. This method can be generalized to vectors with more dimensions, but its downside is that you have to calculate, analytically or numerically, many functions. The idea can be found in this article as well: http://www.econ-pol.unisi.it/dmq/pdf/DMQ_WP_34.pdf.
If you don't understand the meaning of plugging a uniform variable into an inverse probability distribution function, try to make a sketch of the univariate case and then remember what the geometric interpretation of the inverse function is.
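As a quick univariate illustration of that last point (inverse-transform sampling; the exponential distribution is just an example choice):
set.seed(1)
u <- runif(1e5)                                # uniform draws on [0, 1]
x <- qexp(u, rate = 2)                         # plug them into the inverse CDF of Exp(rate = 2)
c(mean(x), 1 / 2)                              # sample mean is close to the theoretical mean 1/rate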
4,049 | How to generate correlated random numbers (given means, variances and degree of correlation)? | If you are ready to give up efficiency, you can use a throw-away algorithm. Its advantage is that it allows for any kind of distributions (not only Gaussian).
Start by generating two uncorrelated sequences of random numbers $\{x_i\}_{i=1}^N$ and $\{y_i\}_{i=1}^N$ with any desired distributions. Let $C$ be the desired value of the correlation coefficient. Then do the following:
1) Compute correlation coefficient $c_{old}=corr(\{x_i\},\{y_i\})$
2) Generate two random numbers $n_1$ and $n_2: 1 \leq n_{1,2} \leq N$
3) Swap numbers $x_{n_1}$ and $x_{n_2}$
4) Compute new correlation $c_{new}=corr( \{x_i\},\{y_i\})$
5) If $|C-c_{new}| < |C-c_{old}|$ then keep the swap. Else undo the swap.
6) If the current correlation $c$ satisfies $|C-c| < \epsilon$, stop; else go to 1)
Random swaps will not alter the marginal distribution of $\{x_i\}$.
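A rough R implementation of these steps (my own sketch written from the description above; N, C, epsilon and the marginal distributions are illustrative choices):
set.seed(1)
N <- 2000; C <- 0.7; eps <- 0.005
x <- rexp(N); y <- runif(N)                    # any desired marginal distributions
for (iter in 1:1e6) {                          # iteration cap instead of an unbounded loop
  if (abs(C - cor(x, y)) < eps) break          # step 6: close enough, stop
  idx   <- sample(N, 2)                        # step 2: pick two random positions
  x_new <- x
  x_new[idx] <- x_new[rev(idx)]                # step 3: swap the two x values
  if (abs(C - cor(x_new, y)) < abs(C - cor(x, y))) x <- x_new   # step 5: keep only improving swaps
}
cor(x, y)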
Good luck!
4,050 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or feature selection are relevant too), with e.g. L1, L2 (or a combination thereof, the so-called elasticnet) penalties (look for related questions on this site, or the R penalized and elasticnet package, among others).
Now, about correcting p-values for your regression coefficients (or equivalently your partial correlation coefficients) to protect against over-optimism (e.g. with Bonferroni or, better, step-down methods), it seems this would only be relevant if you are considering one model and seek those predictors that contribute a significant part of the explained variance, that is, if you don't perform model selection (with stepwise selection, or hierarchical testing). This article may be a good start: Bonferroni Adjustments in Tests for Regression Coefficients. Be aware that such a correction won't protect you against the multicollinearity issue, which affects the reported p-values.
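For the single-model case, a small sketch of adjusting the coefficient p-values in R (simulated noise data, purely illustrative; Holm is one of the step-down methods mentioned above):
set.seed(123)
d   <- data.frame(matrix(rnorm(100 * 5), ncol = 5)); d$y <- rnorm(100)
fit <- lm(y ~ ., data = d)
p   <- summary(fit)$coefficients[-1, "Pr(>|t|)"]          # coefficient p-values, intercept dropped
cbind(raw = p, bonferroni = p.adjust(p, "bonferroni"), holm = p.adjust(p, "holm"))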
Given your data, I would recommend using some kind of iterative model selection technique. In R for instance, the stepAIC function allows you to perform stepwise model selection by exact AIC. You can also estimate the relative importance of your predictors based on their contribution to $R^2$ using the bootstrap (see the relaimpo package). I think that reporting an effect size measure or the % of explained variance is more informative than a p-value, especially in a confirmatory model.
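A minimal sketch of that workflow (again on simulated data; whether such automatic selection is advisable is discussed just below):
library(MASS)
set.seed(123)
d    <- data.frame(matrix(rnorm(100 * 5), ncol = 5)); d$y <- rnorm(100)
full <- lm(y ~ ., data = d)
sel  <- stepAIC(full, direction = "both", trace = FALSE)  # stepwise selection by AIC
summary(sel)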
It should be noted that stepwise approaches also have their drawbacks (e.g., Wald tests are not adapted to the conditional hypotheses induced by the stepwise procedure), or, as indicated by Frank Harrell on the R mailing list, "stepwise variable selection based on AIC has all the problems of stepwise variable selection based on P-values. AIC is just a restatement of the P-Value" (but AIC remains useful if the set of predictors is already defined); a related question -- Is a variable significant in a linear regression model? -- raised interesting comments (@Rob, among others) about the use of AIC for variable selection. I append a couple of references at the end (including papers kindly provided by @Stephan); there are also a lot of other references on P.Mean.
Frank Harrell authored a book on Regression Modeling Strategies, which includes a lot of discussion and advice around this problem (§4.3, pp. 56-60). He also developed efficient R routines to deal with generalized linear models (see the Design or rms packages). So I think you definitely should take a look at it (his handouts are available on his homepage).
References
Whittingham, MJ, Stephens, P, Bradbury, RB, and Freckleton, RP (2006). Why do we still use stepwise modelling in ecology and behaviour? Journal of Animal Ecology, 75, 1182-1189.
Austin, PC (2008). Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study. Journal of Clinical Epidemiology, 61(10), 1009-1017.
Austin, PC and Tu, JV (2004). Automated variable selection methods for logistic regression produced unstable models for predicting acute myocardial infarction mortality. Journal of Clinical Epidemiology, 57, 1138–1146.
Greenland, S (1994). Hierarchical regression for epidemiologic analyses of multiple exposures. Environmental Health Perspectives, 102(Suppl 8), 33–39.
Greenland, S (2008). Multiple comparisons and association selection in general epidemiology. International Journal of Epidemiology, 37(3), 430-434.
Beyene, J, Atenafu, EG, Hamid, JS, To, T, and Sung L (2009). Determining relative importance of variables in developing and validating predictive models. BMC Medical Research Methodology, 9, 64.
Bursac, Z, Gauss, CH, Williams, DK, and Hosmer, DW (2008). Purposeful selection of variables in logistic regression. Source Code for Biology and Medicine, 3, 17.
Brombin, C, Finos, L, and Salmaso, L (2007). Adjusting stepwise p-values in generalized linear models. International Conference on Multiple Comparison Procedures. -- see step.adj() in the R someMTP package.
Wiegand, RE (2010). Performance of using multiple stepwise algorithms for variable selection. Statistics in Medicine, 29(15), 1647–1659.
Moons KG, Donders AR, Steyerberg EW, and Harrell FE (2004). Penalized Maximum Likelihood Estimation to predict binary outcomes. Journal of Clinical Epidemiology, 57(12), 1262–1270.
Tibshirani, R (1996). Regression shrinkage and selection via the lasso. Journal of The Royal Statistical Society B, 58(1), 267–288.
Efron, B, Hastie, T, Johnstone, I, and Tibshirani, R (2004). Least Angle Regression. Annals of Statistics, 32(2), 407-499.
Flom, PL and Cassell, DL (2007). Stopping Stepwise: Why stepwise and similar selection methods are bad, and what you should use. NESUG 2007 Proceedings.
Shtatland, E.S., Cain, E., and Barton, M.B. (2001). The perils of stepwise logistic regression and how to escape them using information criteria and the Output Delivery System. SUGI 26 Proceedings (pp. 222–226). | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or feature selection are relevant too), with e.g. L1, L2 (or a combination thereof, the so-called elasticnet) penalties (look for related questions on this site, or the R penalized and elasticnet package, among others).
Now, about correcting p-values for your regression coefficients (or equivalently your partial correlation coefficients) to protect against over-optimism (e.g. with Bonferroni or, better, step-down methods), it seems this would only be relevant if you are considering one model and seek those predictors that contribute a significant part of explained variance, that is, if you don't perform model selection (with stepwise selection, or hierarchical testing). This article may be a good start: Bonferroni Adjustments in Tests for Regression Coefficients. Be aware that such a correction won't protect you against the multicollinearity issue, which affects the reported p-values.
Given your data, I would recommend using some kind of iterative model selection technique. In R, for instance, the stepAIC function allows you to perform stepwise model selection by exact AIC. You can also estimate the relative importance of your predictors based on their contribution to $R^2$ using the bootstrap (see the relaimpo package). I think that reporting effect size measures or the % of explained variance is more informative than p-values, especially in a confirmatory model.
It should be noted that stepwise approaches also have their drawbacks (e.g., Wald tests are not adapted to the conditional hypotheses induced by the stepwise procedure), or, as indicated by Frank Harrell on the R mailing list, "stepwise variable selection based on AIC has all the problems of stepwise variable selection based on P-values. AIC is just a restatement of the P-Value" (but AIC remains useful if the set of predictors is already defined); a related question -- Is a variable significant in a linear regression model? -- raised interesting comments (@Rob, among others) about the use of AIC for variable selection. I append a couple of references at the end (including papers kindly provided by @Stephan); there are also a lot of other references on P.Mean.
Frank Harrell authored a book on Regression Modeling Strategies, which includes a lot of discussion and advice around this problem (§4.3, pp. 56-60). He also developed efficient R routines to deal with generalized linear models (see the Design or rms packages). So I think you definitely should take a look at it (his handouts are available on his homepage).
References
Whittingham, MJ, Stephens, P, Bradbury, RB, and Freckleton, RP (2006). Why do we still use stepwise modelling in ecology and behaviour? Journal of Animal Ecology, 75, 1182-1189.
Austin, PC (2008). Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study. Journal of Clinical Epidemiology, 61(10), 1009-1017.
Austin, PC and Tu, JV (2004). Automated variable selection methods for logistic regression produced unstable models for predicting acute myocardial infarction mortality. Journal of Clinical Epidemiology, 57, 1138–1146.
Greenland, S (1994). Hierarchical regression for epidemiologic analyses of multiple exposures. Environmental Health Perspectives, 102(Suppl 8), 33–39.
Greenland, S (2008). Multiple comparisons and association selection in general epidemiology. International Journal of Epidemiology, 37(3), 430-434.
Beyene, J, Atenafu, EG, Hamid, JS, To, T, and Sung L (2009). Determining relative importance of variables in developing and validating predictive models. BMC Medical Research Methodology, 9, 64.
Bursac, Z, Gauss, CH, Williams, DK, and Hosmer, DW (2008). Purposeful selection of variables in logistic regression. Source Code for Biology and Medicine, 3, 17.
Brombin, C, Finos, L, and Salmaso, L (2007). Adjusting stepwise p-values in generalized linear models. International Conference on Multiple Comparison Procedures. -- see step.adj() in the R someMTP package.
Wiegand, RE (2010). Performance of using multiple stepwise algorithms for variable selection. Statistics in Medicine, 29(15), 1647–1659.
Moons KG, Donders AR, Steyerberg EW, and Harrell FE (2004). Penalized Maximum Likelihood Estimation to predict binary outcomes. Journal of Clinical Epidemiology, 57(12), 1262–1270.
Tibshirani, R (1996). Regression shrinkage and selection via the lasso. Journal of The Royal Statistical Society B, 58(1), 267–288.
Efron, B, Hastie, T, Johnstone, I, and Tibshirani, R (2004). Least Angle Regression. Annals of Statistics, 32(2), 407-499.
Flom, PL and Cassell, DL (2007). Stopping Stepwise: Why stepwise and similar selection methods are bad, and what you should use. NESUG 2007 Proceedings.
Shtatland, E.S., Cain, E., and Barton, M.B. (2001). The perils of stepwise logistic regression and how to escape them using information criteria and the Output Delivery System. SUGI 26 Proceedings (pp. 222–226). | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or |
4,051 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | To a great degree you can do whatever you like provided you hold out enough data at random to test whatever model you come up with based on the retained data. A 50% split can be a good idea. Yes, you lose some ability to detect relationships, but what you gain is enormous; namely, the ability to replicate your work before it is published. No matter how sophisticated the statistical techniques you bring to bear, you will be shocked at how many "significant" predictors wind up being entirely useless when applied to the confirmation data.
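A hedged R sketch of that hold-out idea (dat, y, and the model formula are placeholders, not from the original answer):
set.seed(123)
n     <- nrow(dat)
train <- sample(n, size = floor(n / 2))             # random 50% for model building
fit   <- lm(y ~ x1 + x2 + x3, data = dat[train, ])  # develop the model on the training half
pred  <- predict(fit, newdata = dat[-train, ])      # then confront it with the untouched half
cor(pred, dat$y[-train])^2                          # out-of-sample squared correlation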
Bear in mind, too, that "relevant" for prediction means more than a low p-value. That, after all, only means it's likely a relationship found in this particular dataset is not due to chance. For prediction it's actually more important to find the variables that exert substantial influence on the predictand (without over-fitting the model); that is, to find the variables that are likely to be "real" and, when varied throughout a reasonable range of values (not just the values that might occur in your sample!), cause the predictand to vary appreciably. When you have hold-out data to confirm a model, you can be more comfortable provisionally retaining marginally "significant" variables that might not have low p-values.
For these reasons (and building on chl's fine answer), although I have found stepwise models, AIC comparisons, and Bonferroni corrections quite useful (especially with hundreds or thousands of possible predictors in play), these should not be the sole determinants of which variables enter your model. Do not lose sight of the guidance afforded by theory, either: variables having strong theoretical justification to be in a model usually should be kept in, even when they are not significant, provided they do not create ill-conditioned equations (e.g., collinearity).
NB: After you have settled on a model and confirmed its usefulness with the hold-out data, it's fine to recombine the retained data with the hold-out data for final estimation. Thus, nothing is lost in terms of the precision with which you can estimate model coefficients. | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | To a great degree you can do whatever you like provided you hold out enough data at random to test whatever model you come up with based on the retained data. A 50% split can be a good idea. Yes, yo | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
To a great degree you can do whatever you like provided you hold out enough data at random to test whatever model you come up with based on the retained data. A 50% split can be a good idea. Yes, you lose some ability to detect relationships, but what you gain is enormous; namely, the ability to replicate your work before it is published. No matter how sophisticated the statistical techniques you bring to bear, you will be shocked at how many "significant" predictors wind up being entirely useless when applied to the confirmation data.
Bear in mind, too, that "relevant" for prediction means more than a low p-value. That, after all, only means it's likely a relationship found in this particular dataset is not due to chance. For prediction it's actually more important to find the variables that exert substantial influence on the predictand (without over-fitting the model); that is, to find the variables that are likely to be "real" and, when varied throughout a reasonable range of values (not just the values that might occur in your sample!), cause the predictand to vary appreciably. When you have hold-out data to confirm a model, you can be more comfortable provisionally retaining marginally "significant" variables that might not have low p-values.
For these reasons (and building on chl's fine answer), although I have found stepwise models, AIC comparisons, and Bonferroni corrections quite useful (especially with hundreds or thousands of possible predictors in play), these should not be the sole determinants of which variables enter your model. Do not lose sight of the guidance afforded by theory, either: variables having strong theoretical justification to be in a model usually should be kept in, even when they are not significant, provided they do not create ill-conditioned equations (e.g., collinearity).
NB: After you have settled on a model and confirmed its usefulness with the hold-out data, it's fine to recombine the retained data with the hold-out data for final estimation. Thus, nothing is lost in terms of the precision with which you can estimate model coefficients. | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
To a great degree you can do whatever you like provided you hold out enough data at random to test whatever model you come up with based on the retained data. A 50% split can be a good idea. Yes, yo |
4,052 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | I think this is a very good question; it gets to the heart of the contentious multiple testing "problem" that plagues fields ranging from epidemiology to econometrics. After all, how can we know if the significance we find is spurious or not? How true is our multivariable model?
In terms of technical approaches to offset the likelihood of publishing noise variables, I would heartily agree with 'whuber' that using part of your sample as training data and the rest as test data is a good idea. This is an approach that gets discussed in the technical literature, so if you take the time you can probably find out some good guidelines for when and how to use it.
But to strike more directly at the philosophy of multiple testing, I suggest you read the articles I reference below, some of which support the position that adjustment for multiple testing is often harmful (costs power), unnecessary, and may even be a logical fallacy. I for one do not automatically accept the claim that our ability to investigate one potential predictor is inexorably reduced by the investigation of another. The family-wise Type 1 error rate may increase as we include more predictors in a given model, but so long as we do not go beyond the limits of our sample size, the probability of Type 1 error for each individual predictor is constant; and controlling for family-wise error does not illuminate which specific variable is noise and which is not. Of course, there are cogent counter-arguments as well.
So, as long as you limit your list of potential variables to those which are plausible (ie, would have known pathways to the outcome) then the risk of spuriousness is already handled fairly well.
However, I would add that a predictive model is not as concerned with the "truth-value" of its predictors as a causal model; there may be a great deal of confounding in the model, but so long as we explain a large degree of the variance then we aren't too concerned. This makes the job easier, at least in one sense.
Cheers,
Brenden, Biostatistical Consultant
PS: you may want to do a zero-inflated Poisson regression for the data you describe, instead of two separate regressions.
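One common implementation of that suggestion is zeroinfl() in the R package pscl; the sketch below is purely illustrative, with counts, x1, x2, and dat standing in for the poster's actual variables:
library(pscl)
zip_fit <- zeroinfl(counts ~ x1 + x2 | x1 + x2,   # count component | zero-inflation component
                    data = dat, dist = "poisson")
summary(zip_fit)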
Perneger, T.V. What's wrong with Bonferroni adjustments. BMJ 1998; 316 : 1236
Cook, R.J. & Farewell, V.T. Multiplicity considerations in the design and analysis of clinical trials. Journal of the Royal Statistical Society, Series A 1996; Vol. 159, No. 1 : 93-110
Rothman, K.J. No adjustments are needed for multiple comparisons. Epidemiology 1990; Vol. 1, No. 1 : 43-46
Marshall, J.R. Data dredging and noteworthiness. Epidemiology 1990; Vol. 1, No. 1 : 5-7
Greenland, S. & Robins, J.M. Empirical-Bayes adjustments for multiple comparisons are sometimes useful. Epidemiology 1991; Vol. 2, No. 4 : 244-251 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | I think this is a very good question; it gets to the heart of the contentious multiple testing "problem" that plagues fields ranging from epidemiology to econometrics. After all, how can we know if th | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
I think this is a very good question; it gets to the heart of the contentious multiple testing "problem" that plagues fields ranging from epidemiology to econometrics. After all, how can we know if the significance we find is spurious or not? How true is our multivariable model?
In terms of technical approaches to offset the likelihood of publishing noise variables, I would heartily agree with 'whuber' that using part of your sample as training data and the rest as test data is a good idea. This is an approach that gets discussed in the technical literature, so if you take the time you can probably find out some good guidelines for when and how to use it.
But to strike more directly at the philosophy of multiple testing, I suggest you read the articles I reference below, some of which support the position that adjustment for multiple testing is often harmful (costs power), unnecessary, and may even be a logical fallacy. I for one do not automatically accept the claim that our ability to investigate one potential predictor is inexorably reduced by the investigation of another. The family-wise Type 1 error rate may increase as we include more predictors in a given model, but so long as we do not go beyond the limits of our sample size, the probability of Type 1 error for each individual predictor is constant; and controlling for family-wise error does not illuminate which specific variable is noise and which is not. Of course, there are cogent counter-arguments as well.
So, as long as you limit your list of potential variables to those which are plausible (ie, would have known pathways to the outcome) then the risk of spuriousness is already handled fairly well.
However, I would add that a predictive model is not as concerned with the "truth-value" of its predictors as a causal model; there may be a great deal of confounding in the model, but so long as we explain a large degree of the variance then we aren't too concerned. This makes the job easier, at least in one sense.
Cheers,
Brenden, Biostatistical Consultant
PS: you may want to do a zero-inflated Poisson regression for the data you describe, instead of two separate regressions.
Perneger, T.V. What's wrong with Bonferroni adjustments. BMJ 1998; 316 : 1236
Cook, R.J. & Farewell, V.T. Multiplicity considerations in the design and analysis of clinical trials. Journal of the Royal Statistical Society, Series A 1996; Vol. 159, No. 1 : 93-110
Rothman, K.J. No adjustments are needed for multiple comparisons. Epidemiology 1990; Vol. 1, No. 1 : 43-46
Marshall, J.R. Data dredging and noteworthiness. Epidemiology 1990; Vol. 1, No. 1 : 5-7
Greenland, S. & Robins, J.M. Empirical-Bayes adjustments for multiple comparisons are sometimes useful. Epidemiology 1991; Vol. 2, No. 4 : 244-251 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
I think this is a very good question; it gets to the heart of the contentious multiple testing "problem" that plagues fields ranging from epidemiology to econometrics. After all, how can we know if th |
4,053 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | There are good answers here. Let me add a couple of small points that I don't see covered elsewhere.
First, what is the nature of your response variables? More specifically, are they understood as related to each other? You should only do two separate multiple regressions if they are understood to be independent (theoretically) / if the residuals from the two models are independent (empirically). Otherwise, you should consider a multivariate regression. ('Multivariate' means >1 response variable; 'multiple' means >1 predictor variable.)
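In R, such a multivariate regression can be sketched as follows (y1, y2, and the predictors are placeholders for your actual variables):
fit_mv <- lm(cbind(y1, y2) ~ x1 + x2 + x3, data = dat)  # one model, two response variables
summary(fit_mv)   # separate coefficient tables per response
anova(fit_mv)     # multivariate (MANOVA-type) tests of each predictor across both responses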
The other thing to bear in mind is that the model comes with a global $F$ test, which is a simultaneous test of all the predictors. It is possible that the global test is 'non-significant' while some of the individual predictors appear to be 'significant'. That should give you pause, if it occurs. On the other hand, if the global test suggests at least some of the predictors are related, that gives you some protection from the problem of multiple comparisons (i.e., it suggests not all nulls are true). | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | There are good answers here. Let me add a couple of small points that I don't see covered elsewhere.
First, what is the nature of your response variables? More specifically, are they understood as | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
There are good answers here. Let me add a couple of small points that I don't see covered elsewhere.
First, what is the nature of your response variables? More specifically, are they understood as related to each other? You should only do two separate multiple regressions if they are understood to be independent (theoretically) / if the residuals from the two models are independent (empirically). Otherwise, you should consider a multivariate regression. ('Multivariate' means >1 response variable; 'multiple' means >1 predictor variable.)
The other thing to bear in mind is that the model comes with a global $F$ test, which is a simultaneous test of all the predictors. It is possible that the global test is 'non-significant' while some of the individual predictors appear to be 'significant'. That should give you pause, if it occurs. On the other hand, if the global test suggests at least some of the predictors are related, that gives you some protection from the problem of multiple comparisons (i.e., it suggests not all nulls are true). | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
There are good answers here. Let me add a couple of small points that I don't see covered elsewhere.
First, what is the nature of your response variables? More specifically, are they understood as |
4,054 | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | You can do a seemingly unrelated regression and use an F test. Put your data in a form like this:
Out1 1 P11 P12 0 0 0
Out2 0 0 0 1 P21 P22
so that the predictors for your first outcome have their values when that outcome is the y variable and 0 otherwise and vice-versa. So your y is a list of both outcomes. P11 and P12 are the two predictors for the first outcome and P21 and P22 are the two predictors for the second outcome. If sex, say, is a predictor for both outcomes, its use to predict outcome 1 should be in a separate variable/column when predicting outcome 2. This lets your regression have different slopes/impacts for sex for each outcome.
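A minimal R sketch of this stacking (dat, y1, y2, p1, p2 are hypothetical names; note that this simple version ignores cross-equation error correlation, which a full seemingly unrelated regression, e.g. via the systemfit package, would model):
stacked <- rbind(
  data.frame(y = dat$y1, int1 = 1, p11 = dat$p1, p12 = dat$p2, int2 = 0, p21 = 0,      p22 = 0),
  data.frame(y = dat$y2, int1 = 0, p11 = 0,      p12 = 0,      int2 = 1, p21 = dat$p1, p22 = dat$p2)
)
fit <- lm(y ~ 0 + int1 + p11 + p12 + int2 + p21 + p22, data = stacked)  # equation-specific intercepts and slopes
summary(fit)   # the usual t and F machinery now covers both outcomes at once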
In this framework, you can use standard F testing procedures. | Is adjusting p-values in a multiple regression for multiple comparisons a good idea? | You can do a seemingly unrelated regression and use an F test. Put your data in a form like this:
Out1 1 P11 P12 0 0 0
Out2 0 0 0 1 P21 P22
so that the predictors for your first outcome have | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
You can do a seemingly unrelated regression and use an F test. Put your data in a form like this:
Out1 1 P11 P12 0 0 0
Out2 0 0 0 1 P21 P22
so that the predictors for your first outcome have their values when that outcome is the y variable and 0 otherwise and vice-versa. So your y is a list of both outcomes. P11 and P12 are the two predictors for the first outcome and P21 and P22 are the two predictors for the second outcome. If sex, say, is a predictor for both outcomes, its use to predict outcome 1 should be in a separate variable/column when predicting outcome 2. This lets your regression have different slopes/impacts for sex for each outcome.
In this framework, you can use standard F testing procedures. | Is adjusting p-values in a multiple regression for multiple comparisons a good idea?
You can do a seemingly unrelated regression and use an F test. Put your data in a form like this:
Out1 1 P11 P12 0 0 0
Out2 0 0 0 1 P21 P22
so that the predictors for your first outcome have |
4,055 | How can a distribution have infinite mean and variance? | The mean and variance are defined in terms of (sufficiently general) integrals. What it means for the mean or variance to be infinite is a statement about the limiting behavior for those integrals
For example, for a continuous density the mean is $\lim_{a,b\to\infty}\int_{-a}^b x f(x)\ dx$ (which might here be considered as a Riemann integral, say).
This can happen, for example, if the tail is "heavy enough"; either the upper or the lower part (or both) may not converge to a finite value. Consider the following examples for four cases of finite/infinite mean and variance:
A distribution with infinite mean and non-finite variance.
Examples: Pareto distribution with $\alpha= 1$, a zeta(2) distribution.
A distribution with infinite mean and finite variance.
Not possible.
A distribution with finite mean and infinite variance.
Examples: $t_2$ distribution. Pareto with $\alpha=\frac{3}{2}$.
A distribution with finite mean and finite variance.
Examples: Any normal. Any uniform (indeed, any bounded variable has all moments). $t_3$.
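A quick simulation sketch of these cases (the particular generators are my own standard choices, e.g. a Pareto with $\alpha=1$ obtained as the reciprocal of a uniform):
set.seed(42)
n <- 10^(2:6)
pareto1 <- function(k) 1 / runif(k)          # Pareto, alpha = 1: infinite mean
sapply(n, function(k) mean(pareto1(k)))      # sample means keep growing / jumping around
sapply(n, function(k) mean(rt(k, df = 2)))   # t_2: sample means settle near 0 ...
sapply(n, function(k) var(rt(k, df = 2)))    # ... but sample variances stay erratic
sapply(n, function(k) var(rnorm(k)))         # normal: sample variances converge to 1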
These notes by Charles Geyer talk about how to compute relevant integrals in simple terms. It looks like it's dealing with Riemann integrals there, which only covers the continuous case but more general definitions of integrals will cover all the cases you will be likely to require [Lebesgue integration is the form of integration used in measure theory (which underlies probability) but the point here works just fine with more basic methods]. It also covers (Sec 2.5, p13-14) why "2." isn't possible (the mean exists if the variance exists). | How can a distribution have infinite mean and variance? | The mean and variance are defined in terms of (sufficiently general) integrals. What it means for the mean or variance to be infinite is a statement about the limiting behavior for those integrals
For | How can a distribution have infinite mean and variance?
The mean and variance are defined in terms of (sufficiently general) integrals. What it means for the mean or variance to be infinite is a statement about the limiting behavior for those integrals
For example, for a continuous density the mean is $\lim_{a,b\to\infty}\int_{-a}^b x f(x)\ dx$ (which might here be considered as a Riemann integral, say).
This can happen, for example, if the tail is "heavy enough"; either the upper or the lower part (or both) may not converge to a finite value. Consider the following examples for four cases of finite/infinite mean and variance:
A distribution with infinite mean and non-finite variance.
Examples: Pareto distribution with $\alpha= 1$, a zeta(2) distribution.
A distribution with infinite mean and finite variance.
Not possible.
A distribution with finite mean and infinite variance.
Examples: $t_2$ distribution. Pareto with $\alpha=\frac{3}{2}$.
A distribution with finite mean and finite variance.
Examples: Any normal. Any uniform (indeed, any bounded variable has all moments). $t_3$.
These notes by Charles Geyer talk about how to compute relevant integrals in simple terms. It looks like it's dealing with Riemann integrals there, which only covers the continuous case but more general definitions of integrals will cover all the cases you will be likely to require [Lebesgue integration is the form of integration used in measure theory (which underlies probability) but the point here works just fine with more basic methods]. It also covers (Sec 2.5, p13-14) why "2." isn't possible (the mean exists if the variance exists). | How can a distribution have infinite mean and variance?
The mean and variance are defined in terms of (sufficiently general) integrals. What it means for the mean or variance to be infinite is a statement about the limiting behavior for those integrals
For |
4,056 | How can a distribution have infinite mean and variance? | It's instructive to see what goes wrong -- the integrals are all very well, but a sample average is always finite, so what is the issue?
I'll use the Cauchy distribution, which has no finite mean. The distribution is symmetric around zero, so if it had a mean, zero would be that mean. Here are cumulative averages of two samples of ten thousand Cauchy variates (in red and black). First, the first 100, then the first 1000, then all of them. The vertical scale increases over the panels (that's part of the point)
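The kind of picture described can be reproduced with a few lines of R (a single-panel version; the seed and colours are my own choices):
set.seed(1)
x1 <- rcauchy(10000); x2 <- rcauchy(10000)
cummean <- function(x) cumsum(x) / seq_along(x)   # running average after n observations
plot(cummean(x1), type = "l", col = "black", xlab = "n", ylab = "cumulative mean")
lines(cummean(x2), col = "red")
abline(h = 0, lty = 2)                            # the value the averages never settle on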
If you had a distribution with a mean, the cumulative averages would settle down to that mean (by the law of large numbers). If you had a mean and variance, they would settle down at a known rate: the standard deviation of the $n$th mean would be proportional to $1/\sqrt{n}$.
The Cauchy averages are 'trying to' settle down to zero, but every so often you get a big value and the average gets bumped away from zero again. In a distribution with finite mean this would eventually stop happening, but with the Cauchy it never does. The averages don't go off to infinity, as they would for a non-negative variable with infinite mean, they just keep being kicked around by outliers for ever. | How can a distribution have infinite mean and variance? | It's instructive to see what goes wrong -- the integrals are all very well, but a sample average is always finite, so what is the issue?
I'll use the Cauchy distribution, which has no finite mean. The | How can a distribution have infinite mean and variance?
It's instructive to see what goes wrong -- the integrals are all very well, but a sample average is always finite, so what is the issue?
I'll use the Cauchy distribution, which has no finite mean. The distribution is symmetric around zero, so if it had a mean, zero would be that mean. Here are cumulative averages of two samples of ten thousand Cauchy variates (in red and black). First, the first 100, then the first 1000, then all of them. The vertical scale increases over the panels (that's part of the point)
If you had a distribution with a mean, the cumulative averages would settle down to that mean (by the law of large numbers). If you had a mean and variance, they would settle down at a known rate: the standard deviation of the $n$th mean would be proportional to $1/\sqrt{n}$.
The Cauchy averages are 'trying to' settle down to zero, but every so often you get a big value and the average gets bumped away from zero again. In a distribution with finite mean this would eventually stop happening, but with the Cauchy it never does. The averages don't go off to infinity, as they would for a non-negative variable with infinite mean, they just keep being kicked around by outliers for ever. | How can a distribution have infinite mean and variance?
It's instructive to see what goes wrong -- the integrals are all very well, but a sample average is always finite, so what is the issue?
I'll use the Cauchy distribution, which has no finite mean. The |
4,057 | How can a distribution have infinite mean and variance? | Stable distributions provide nice, parametric examples of what you're looking for:
infinite mean and variance: $0 < \text{stability parameter} < 1$
N/A
finite mean and infinite variance: $1 \leq \text{stability parameter} < 2$
finite mean and variance: $\text{stability parameter} = 2$ (Gaussian) | How can a distribution have infinite mean and variance? | Stable distributions provide nice, parametric examples of what you're looking for:
infinite mean and variance: $0 < \text{stability parameter} < 1$
N/A
finite mean and infinite variance: $1 \leq \tex | How can a distribution have infinite mean and variance?
Stable distributions provide nice, parametric examples of what you're looking for:
infinite mean and variance: $0 < \text{stability parameter} < 1$
N/A
finite mean and infinite variance: $1 \leq \text{stability parameter} < 2$
finite mean and variance: $\text{stability parameter} = 2$ (Gaussian) | How can a distribution have infinite mean and variance?
Stable distributions provide nice, parametric examples of what you're looking for:
infinite mean and variance: $0 < \text{stability parameter} < 1$
N/A
finite mean and infinite variance: $1 \leq \tex |
4,058 | How can a distribution have infinite mean and variance? | No one has mentioned the St. Petersburg paradox here; otherwise I wouldn't post in a thread this old that already has multiple answers including one "accepted" answer.
If a coin lands "heads" you win one cent.
If "tails", the winnings double and then if "heads" on the second toss, you win two cents.
If "tails" the second time, the winnings double again and if "heads" on the third toss, you win four cents.
And so on:
$$
\begin{array}{|r|c|c|c|}
\hline \text{outcome} & \text{winnings} & \text{probability} & \text{product} \\
\hline
\text{H} & 1 & 1/2 & 1/2 \\
\text{TH} & 2 & 1/4 & 1/2 \\
\text{TTH} & 4 & 1/8 & 1/2 \\
\text{TTTH} & 8 & 1/16 & 1/2 \\
\text{TTTTH} & 16 & 1/32 & 1/2 \\
\text{TTTTTH} & 32 & 1/64 & 1/2 \\
\vdots\quad & \vdots & \vdots & \vdots
\end{array}
$$
The sum of products is $\dfrac 12 + \dfrac 12 + \dfrac 12+\cdots = +\infty,$ so that is an infinite expected value.
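A small simulation sketch (my own addition, not part of the original argument) makes this concrete; each game pays 2^(number of tails before the first head) cents:
set.seed(7)
tails   <- rgeom(1e6, prob = 0.5)   # tails before the first head in each game
payoffs <- 2^tails                  # winnings in cents: 1, 2, 4, 8, ...
mean(payoffs)                       # average over a million games; with ever more games this keeps drifting upward instead of converging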
That means if you pay $\$1$ million for each coin toss, or $\$1$ trillion, etc., then you ultimately come out ahead. How can that be, when you're unlikely to win more than a few cents each time?
The answer is that on very rare occasions, you will get a long sequence of tails, so that the winnings will compensate you for the immense expense you've incurred. That is true no matter how high the price is that you pay for each toss. | How can a distribution have infinite mean and variance? | No one has mentioned the St. Petersburg paradox here; otherwise I wouldn't post in a thread this old that already has multiple answers including one "accepted" answer.
If a coin lands "heads" you win | How can a distribution have infinite mean and variance?
No one has mentioned the St. Petersburg paradox here; otherwise I wouldn't post in a thread this old that already has multiple answers including one "accepted" answer.
If a coin lands "heads" you win one cent.
If "tails", the winnings double and then if "heads" on the second toss, you win two cents.
If "tails" the second time, the winnings double again and if "heads" on the third toss, you win four cents.
And so on:
$$
\begin{array}{|r|c|c|c|}
\hline \text{outcome} & \text{winnings} & \text{probability} & \text{product} \\
\hline
\text{H} & 1 & 1/2 & 1/2 \\
\text{TH} & 2 & 1/4 & 1/2 \\
\text{TTH} & 4 & 1/8 & 1/2 \\
\text{TTTH} & 8 & 1/16 & 1/2 \\
\text{TTTTH} & 16 & 1/32 & 1/2 \\
\text{TTTTTH} & 32 & 1/64 & 1/2 \\
\vdots\quad & \vdots & \vdots & \vdots
\end{array}
$$
The sum of products is $\dfrac 12 + \dfrac 12 + \dfrac 12+\cdots = +\infty,$ so that is an infinite expected value.
That means if you pay $\$1$ million for each coin toss, or $\$1$ trillion, etc., then you ultimately come out ahead. How can that be, when you're unlikely to win more than a few cents each time?
The answer is that on very rare occasions, you will get a long sequence of tails, so that the winnings will compensate you for the immense expense you've incurred. That is true no matter how high the price is that you pay for each toss. | How can a distribution have infinite mean and variance?
No one has mentioned the St. Petersburg paradox here; otherwise I wouldn't post in a thread this old that already has multiple answers including one "accepted" answer.
If a coin lands "heads" you win |
4,059 | How can a distribution have infinite mean and variance? | For simplicity, suppose we are dealing with an absolutely continuous distribution with density function $f_X$ with some corresponding non-negative kernel function $g_X \propto f_X$. Suppose we consider the general $k$th absolute moment, which is given by the following integral expressions:
$$\mathbb{E}(|X^k|)
= \int \limits_\mathbb{R} |x|^k f_X(x) \ dx
= \frac{\int_\mathbb{R} |x|^k g_X(x) \ dx}{\int_\mathbb{R} g_X(x) \ dx}.$$
Broadly speaking, this integral will be finite so long as the "tails" of $g_X$ decrease fast enough relative to the growth of $|x^k|$ that their product (i.e., the integrand) yields a finite integral when taken over the whole set of real numbers. (For the specific condition required in the tails, see the analysis below.)
The norming axiom of probability theory requires that the above integral is one when $k=0$, and this means that the tails of the kernel function $g_X$ integrates to a finite positive number. This imposes a requirement on how fast the tails of the kernel function decrease to zero. However, it is possible for the tails of the kernel function $g_X$ to decrease fast enough to ensure that $\int_\mathbb{R} g_X(x) \ dx$ is finite, but not fast enough to ensure that $\int_\mathbb{R} |x|^k g_X(x) \ dx$ is finite for some $k>0$. When this happens, the above integral is infinite, and you get moments that do not exist.
Summing this up in intuitive terms, the reason you can have distributions with moments that don't exist is that probability theory imposes only weak requirements on the rate at which the tails of a distribution decrease to zero. The norming axiom imposes a weak condition that requires the tails to decrease to zero fast enough for the integral of the density do exist, but this does not impose any requirement that the integral of the density multiplied by a positive power function must exist.
Sufficient condition for finite limit: A sufficient condition for a finite integral is $g_X(x) = \mathcal{O}(|x|^{-(k+1+\varepsilon)})$ for some $\varepsilon > 0$. This condition ensures that the kernel function (and thus also the density function) decreases fast enough in its tails to yield a finite integral (see explanation of Big-O notation here). To see why this condition is sufficient, suppose the condition holds and denote the corresponding limit supremum:
$$K \equiv \limsup_{|x| \rightarrow \infty} \Bigg| \frac{f_X(x)}{|x|^{-(k+1+\varepsilon)}} \Bigg| = \limsup_{|x| \rightarrow \infty} |x|^{k+1+\varepsilon} \, f_X(x).$$
The condition stated here ensures that $K < \infty$ and we then have:
$$\begin{align}
\mathbb{E}(|X^k|)
&= \int \limits_\mathbb{R} |x|^k f_X(x) \ dx \\[6pt]
&= \int \limits_{-1}^1 |x|^k f_X(x) \ dx + \int \limits_\mathbb{R} |x|^k f_X(x) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&\leqslant 2 + \int \limits_\mathbb{R} |x|^k f_X(x) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + \int \limits_\mathbb{R} |x|^k \mathcal{O}(|x|^{-(k+1+\varepsilon)}) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + \int \limits_\mathbb{R} \frac{\mathcal{O}(1)}{|x|^{1+\varepsilon}} \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&\leqslant 2 + K \times \int \limits_\mathbb{R} \frac{1}{|x|^{1+\varepsilon}} \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + K \times 2 \int \limits_1^\infty \frac{1}{x^{1+\varepsilon}} \ dx \\[6pt]
&= 2 + K \times 2 \Bigg[ - \frac{1}{\varepsilon x^{\varepsilon}} \Bigg]_{x=1}^{x \rightarrow \infty} \\[6pt]
&= 2 + K \times 2 \Bigg[ 0 + \frac{1}{\varepsilon} \Bigg] \\[6pt]
&= 2 + \frac{2K}{\varepsilon} < \infty, \\[6pt]
\end{align}$$
which establishes that the integral is finite. (Note that the weaker condition $g_X(x) = \mathcal{O}(|x|^{-(k+1)})$ is not sufficient for this result.) | How can a distribution have infinite mean and variance? | For simplicity, suppose we are dealing with an absolutely continuous distribution with density function $f_X$ with some corresponding non-negative kernel function $g_X \propto f_X$. Suppose we consid | How can a distribution have infinite mean and variance?
For simplicity, suppose we are dealing with an absolutely continuous distribution with density function $f_X$ with some corresponding non-negative kernel function $g_X \propto f_X$. Suppose we consider the general $k$th absolute moment, which is given by the following integral expressions:
$$\mathbb{E}(|X^k|)
= \int \limits_\mathbb{R} |x|^k f_X(x) \ dx
= \frac{\int_\mathbb{R} |x|^k g_X(x) \ dx}{\int_\mathbb{R} g_X(x) \ dx}.$$
Broadly speaking, this integral will be finite so long as the "tails" of $g_X$ decrease fast enough relative to the growth of $|x^k|$ that their product (i.e., the integrand) yields a finite integral when taken over the whole set of real numbers. (For the specific condition required in the tails, see the analysis below.)
The norming axiom of probability theory requires that the above integral is one when $k=0$, and this means that the tails of the kernel function $g_X$ integrates to a finite positive number. This imposes a requirement on how fast the tails of the kernel function decrease to zero. However, it is possible for the tails of the kernel function $g_X$ to decrease fast enough to ensure that $\int_\mathbb{R} g_X(x) \ dx$ is finite, but not fast enough to ensure that $\int_\mathbb{R} |x|^k g_X(x) \ dx$ is finite for some $k>0$. When this happens, the above integral is infinite, and you get moments that do not exist.
Summing this up in intuitive terms, the reason you can have distributions with moments that don't exist is that probability theory imposes only weak requirements on the rate at which the tails of a distribution decrease to zero. The norming axiom imposes a weak condition that requires the tails to decrease to zero fast enough for the integral of the density do exist, but this does not impose any requirement that the integral of the density multiplied by a positive power function must exist.
Sufficient condition for finite limit: A sufficient condition for a finite integral is $g_X(x) = \mathcal{O}(|x|^{-(k+1+\varepsilon)})$ for some $\varepsilon > 0$. This condition ensures that the kernel function (and thus also the density function) decreases fast enough in its tails to yield a finite integral (see explanation of Big-O notation here). To see why this condition is sufficient, suppose the condition holds and denote the corresponding limit supremum:
$$K \equiv \limsup_{|x| \rightarrow \infty} \Bigg| \frac{f_X(x)}{|x|^{-(k+1+\varepsilon)}} \Bigg| = \limsup_{|x| \rightarrow \infty} |x|^{k+1+\varepsilon} \, f_X(x).$$
The condition stated here ensures that $K < \infty$ and we then have:
$$\begin{align}
\mathbb{E}(|X^k|)
&= \int \limits_\mathbb{R} |x|^k f_X(x) \ dx \\[6pt]
&= \int \limits_{-1}^1 |x|^k f_X(x) \ dx + \int \limits_\mathbb{R} |x|^k f_X(x) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&\leqslant 2 + \int \limits_\mathbb{R} |x|^k f_X(x) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + \int \limits_\mathbb{R} |x|^k \mathcal{O}(|x|^{-(k+1+\varepsilon)}) \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + \int \limits_\mathbb{R} \frac{\mathcal{O}(1)}{|x|^{1+\varepsilon}} \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&\leqslant 2 + K \times \int \limits_\mathbb{R} \frac{1}{|x|^{1+\varepsilon}} \cdot \mathbb{I}(|x| \geqslant 1) \ dx \\[6pt]
&= 2 + K \times 2 \int \limits_1^\infty \frac{1}{x^{1+\varepsilon}} \ dx \\[6pt]
&= 2 + K \times 2 \Bigg[ - \frac{1}{\varepsilon x^{\varepsilon}} \Bigg]_{x=1}^{x \rightarrow \infty} \\[6pt]
&= 2 + K \times 2 \Bigg[ 0 + \frac{1}{\varepsilon} \Bigg] \\[6pt]
&= 2 + \frac{2K}{\varepsilon} < \infty, \\[6pt]
\end{align}$$
which establishes that the integral is finite. (Note that the weaker condition $g_X(x) = \mathcal{O}(|x|^{-(k+1)})$ is not sufficient for this result.) | How can a distribution have infinite mean and variance?
For simplicity, suppose we are dealing with an absolutely continuous distribution with density function $f_X$ with some corresponding non-negative kernel function $g_X \propto f_X$. Suppose we consid |
4,060 | How can a distribution have infinite mean and variance? | About the second distribution you are looking for, consider the random variable $$ X_2 = \text{number of times you can zoom in like 10cm into a fractal} $$
then the answer is infinite with probability one, and therefore the variance is zero and the mean of the distribution is infinite. | How can a distribution have infinite mean and variance? | About the second distribution you are looking for, consider the random variable $$ X_2 = \text{number of times you can zoom in like 10cm into a fractal} $$
then the answer is infinite with probability | How can a distribution have infinite mean and variance?
About the second distribution you are looking for, consider the random variable $$ X_2 = \text{number of times you can zoom in like 10cm into a fractal} $$
then the answer is infinite with probability one, and therefore the variance is zero and the mean of the distribution is infinite. | How can a distribution have infinite mean and variance?
About the second distribution you are looking for, consider the random variable $$ X_2 = \text{number of times you can zoom in like 10cm into a fractal} $$
then the answer is infinite with probability |
4,061 | Reference book for linear algebra applied to statistics? | The "big three" that I have used/heard of are:
Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics. (Amazon link).
Searle, Matrix Algebra Useful for Statistics. (Amazon link).
Harville, Matrix Algebra From a Statistician's Perspective. (Amazon link).
I have used Gentle and Harville and found both to be very helpful and quite manageable. | Reference book for linear algebra applied to statistics? | The "big three" that I have used/heard of are:
Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics. (Amazon link).
Searle, Matrix Algebra Useful for Statistics. (Amazon link) | Reference book for linear algebra applied to statistics?
The "big three" that I have used/heard of are:
Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics. (Amazon link).
Searle, Matrix Algebra Useful for Statistics. (Amazon link).
Harville, Matrix Algebra From a Statistician's Perspective. (Amazon link).
I have used Gentle and Harville and found both to be very helpful and quite manageable. | Reference book for linear algebra applied to statistics?
The "big three" that I have used/heard of are:
Gentle, Matrix Algebra: Theory, Computations, and Applications in Statistics. (Amazon link).
Searle, Matrix Algebra Useful for Statistics. (Amazon link) |
4,062 | Reference book for linear algebra applied to statistics? | The Matrix Cookbook by K. B. Petersen.
is a free resource with all sorts of useful identities involving various decompositions, forms of inverses for various commonly encountered matrix structures, formulas for differentiating matrix functions and much more. You'll probably find whatever you're looking for in the matrix cookbook. I've never found any mistakes at all there, but since the matrix cookbook is a free resource, it is not professionally edited, so there could potentially be errors there. But, it is regularly being updated, so I wouldn't worry too much about that.
Although this is a general purpose manual, there is certainly a statistics slant to it, as you will see. | Reference book for linear algebra applied to statistics? | The Matrix Cookbook by K. B. Petersen.
is a free resource with all sorts of useful identities involving various decompositions, forms of inverses for various commonly encountered matrix structures, f | Reference book for linear algebra applied to statistics?
The Matrix Cookbook by K. B. Petersen.
is a free resource with all sorts of useful identities involving various decompositions, forms of inverses for various commonly encountered matrix structures, formulas for differentiating matrix functions and much more. You'll probably find whatever you're looking for in the matrix cookbook. I've never found any mistakes at all there, but since the matrix cookbook is a free resource, it is not professionally edited, so there could potentially be errors there. But, it is regularly being updated, so I wouldn't worry too much about that.
Although this is a general purpose manual, there is certainly a statistics slant to it, as you will see. | Reference book for linear algebra applied to statistics?
The Matrix Cookbook by K. B. Petersen.
is a free resource with all sorts of useful identities involving various decompositions, forms of inverses for various commonly encountered matrix structures, f
4,063 | Reference book for linear algebra applied to statistics? | Matrix Computations by Golub and Van Loan is the standard reference for matrix computation for many. | Reference book for linear algebra applied to statistics? | Matrix Computations by Golub and Van Loan is the standard reference for matrix computation for many. | Reference book for linear algebra applied to statistics?
Matrix Computations by Golub and Van Loan is the standard reference for matrix computation for many. | Reference book for linear algebra applied to statistics?
Matrix Computations by Golub and Van Loan is the standard reference for matrix computation for many. |
4,064 | Reference book for linear algebra applied to statistics? | I've found Advanced Multivariate Statistics with Matrices by Kollo and von Rosen to be very useful when working with multivariate statistics. The first 170 pages are linear algebra. It then goes on to cover multivariate distributions, asymptotics and linear models - all in a rigorous way. It doesn't cover projection methods though. | Reference book for linear algebra applied to statistics? | I've found Advanced Multivariate Statistics with Matrices by Kollo and von Rosen to be very useful when working with multivariate statistics. The first 170 pages are linear algebra. It then goes on to | Reference book for linear algebra applied to statistics?
I've found Advanced Multivariate Statistics with Matrices by Kollo and von Rosen to be very useful when working with multivariate statistics. The first 170 pages are linear algebra. It then goes on to cover multivariate distributions, asymptotics and linear models - all in a rigorous way. It doesn't cover projection methods though. | Reference book for linear algebra applied to statistics?
I've found Advanced Multivariate Statistics with Matrices by Kollo and von Rosen to be very useful when working with multivariate statistics. The first 170 pages are linear algebra. It then goes on to |
4,065 | Reference book for linear algebra applied to statistics? | In addition to the three mentioned by @Mike Wierzbicki (all of which I use), another useful one is "Matrix Tricks for Linear Statistical Models" by Puntanen, Styan and Isotalo (2011). | Reference book for linear algebra applied to statistics? | In addition to the three mentioned by @Mike Wierzbicki (all of which I use), another useful one is "Matrix Tricks for Linear Statistical Models" by Puntanen, Styan and Isotalo (2011). | Reference book for linear algebra applied to statistics?
In addition to the three mentioned by @Mike Wierzbicki (all of which I use), another useful one is "Matrix Tricks for Linear Statistical Models" by Puntanen, Styan and Isotalo (2011). | Reference book for linear algebra applied to statistics?
In addition to the three mentioned by @Mike Wierzbicki (all of which I use), another useful one is "Matrix Tricks for Linear Statistical Models" by Puntanen, Styan and Isotalo (2011). |
4,066 | Reference book for linear algebra applied to statistics? | You could try "Numerical Methods of Statistics", by John F. Monahan. It assumes that you know linear algebra, but the author's web site provides programs coded in R. | Reference book for linear algebra applied to statistics? | You could try "Numerical Methods of Statistics", by John F. Monahan. It assumes that you know linear algebra, but the author's web site provides programs coded in R. | Reference book for linear algebra applied to statistics?
You could try "Numerical Methods of Statistics", by John F. Monahan. It assumes that you know linear algebra, but the author's web site provides programs coded in R. | Reference book for linear algebra applied to statistics?
You could try "Numerical Methods of Statistics", by John F. Monahan. It assumes that you know linear algebra, but the author's web site provides programs coded in R. |
4,067 | Reference book for linear algebra applied to statistics? | Krishnan Namboodiri's Matrix Algebra: An Introduction is a quick, bare-bones way to learn much of the linear algebra you'll need.
You can also try MIT OCW. | Reference book for linear algebra applied to statistics? | Krishnan Namboodiri's Matrix Algebra: An Introduction is a quick, bare-bones way to learn much of the linear algebra you'll need.
You can also try MIT OCW. | Reference book for linear algebra applied to statistics?
Krishnan Namboodiri's Matrix Algebra: An Introduction is a quick, bare-bones way to learn much of the linear algebra you'll need.
You can also try MIT OCW. | Reference book for linear algebra applied to statistics?
Krishnan Namboodiri's Matrix Algebra: An Introduction is a quick, bare-bones way to learn much of the linear algebra you'll need.
You can also try MIT OCW. |
4,068 | Reference book for linear algebra applied to statistics? | I have Anton's Elementary Linear Algebra, mainly for the chapters on linear equations and matrices and on determinants (I have the 7th edition). | Reference book for linear algebra applied to statistics? | I have Anton's Elementary Linear Algebra, mainly for the chapters on linear equations and matrices and on determinants (I have the 7th edition). | Reference book for linear algebra applied to statistics?
I have Anton's Elementary Linear Algebra, mainly for the chapters on linear equations and matrices and on determinants (I have the 7th edition). | Reference book for linear algebra applied to statistics?
I have Anton's Elementary Linear Algebra, mainly for the chapters on linear equations and matrices and on determinants (I have the 7th edition). |
4,069 | Reference book for linear algebra applied to statistics? | As a mathematical statistics student Rencher's book named Linear Models In Statistics was very helpful for me, especially in working with mean and variance of quadratic forms. It is available in this link. I hope it could be useful for other students and researchers too. | Reference book for linear algebra applied to statistics? | As a mathematical statistics student Rencher's book named Linear Models In Statistics was very helpful for me, especially in working with mean and variance of quadratic forms. It is available in this | Reference book for linear algebra applied to statistics?
As a mathematical statistics student Rencher's book named Linear Models In Statistics was very helpful for me, especially in working with mean and variance of quadratic forms. It is available in this link. I hope it could be useful for other students and researchers too. | Reference book for linear algebra applied to statistics?
As a mathematical statistics student Rencher's book named Linear Models In Statistics was very helpful for me, especially in working with mean and variance of quadratic forms. It is available in this |
4,070 | Reference book for linear algebra applied to statistics? | It doesn't advertise itself as "for statisticians", but many statisticians have made great use of Gil Strang's Intro to Linear Algebra, which covers all the topics you describe, and has chapters about statistical applications. | Reference book for linear algebra applied to statistics? | It doesn't advertise itself as "for statisticians", but many statisticians have made great use of Gil Strang's Intro to Linear Algebra, which covers all the topics you describe, and has chapters about | Reference book for linear algebra applied to statistics?
It doesn't advertise itself as "for statisticians", but many statisticians have made great use of Gil Strang's Intro to Linear Algebra, which covers all the topics you describe, and has chapters about statistical applications. | Reference book for linear algebra applied to statistics?
It doesn't advertise itself as "for statisticians", but many statisticians have made great use of Gil Strang's Intro to Linear Algebra, which covers all the topics you describe, and has chapters about |
4,071 | Reference book for linear algebra applied to statistics? | Mathematics for Machine Learning is another nice alternative (freely available) | Reference book for linear algebra applied to statistics? | Mathematics for Machine Learning is another nice alternative (freely available) | Reference book for linear algebra applied to statistics?
Mathematics for Machine Learning is another nice alternative (freely available) | Reference book for linear algebra applied to statistics?
Mathematics for Machine Learning is another nice alternative (freely available) |
4,072 | Reference book for linear algebra applied to statistics? | I second many of the books recommended, especially Rencher's book Linear Models In Statistics. Another book I would recommend is Hands-On Matrix Algebra Using R: Active And Motivated Learning With Applications (amazon link). It is not overly technical, and provides many examples in R, which I found useful when learning. | Reference book for linear algebra applied to statistics? | I second many of the books recommended, especially Rencher's book Linear Models In Statistics. Another book I would recommend is Hands-On Matrix Algebra Using R: Active And Motivated Learning With App | Reference book for linear algebra applied to statistics?
I second many of the books recommended, especially Rencher's book Linear Models In Statistics. Another book I would recommend is Hands-On Matrix Algebra Using R: Active And Motivated Learning With Applications (amazon link). It is not overly technical, and provides many examples in R, which I found useful when learning. | Reference book for linear algebra applied to statistics?
I second many of the books recommended, especially Rencher's book Linear Models In Statistics. Another book I would recommend is Hands-On Matrix Algebra Using R: Active And Motivated Learning With App |
4,073 | How should one interpret the comparison of means from different sample sizes? | You can use a t-test to assess if there are differences in the means. The different sample sizes don't cause a problem for the t-test, and don't require the results to be interpreted with any extra care. Ultimately, you can even compare a single observation to an infinite population with a known distribution and mean and SD; for example someone with an IQ of 130 is smarter than 97.7% of people. One thing to note though, is that for a given $N$ (i.e., total sample size), power is maximized if the group $n$'s are equal; with highly unequal group sizes, you don't get as much additional resolution with each additional observation.
To clarify my point about power, here is a very simple simulation written for R:
set.seed(9) # this makes the simulation exactly reproducible
power5050 = vector(length=10000) # these will store the p-values from each
power7525 = vector(length=10000) # simulated test to keep track of how many
power9010 = vector(length=10000) # are 'significant'
for(i in 1:10000){ # I run the following procedure 10k times
n1a = rnorm(50, mean=0, sd=1) # I'm drawing 2 samples of size 50 from 2 normal
n2a = rnorm(50, mean=.5, sd=1) # distributions w/ dif means, but equal SDs
n1b = rnorm(75, mean=0, sd=1) # this version has group sizes of 75 & 25
n2b = rnorm(25, mean=.5, sd=1)
n1c = rnorm(90, mean=0, sd=1) # this one has 90 & 10
n2c = rnorm(10, mean=.5, sd=1)
power5050[i] = t.test(n1a, n2a, var.equal=T)$p.value # here t-tests are run &
power7525[i] = t.test(n1b, n2b, var.equal=T)$p.value # the p-values are stored
power9010[i] = t.test(n1c, n2c, var.equal=T)$p.value # for each version
}
mean(power5050<.05) # this code counts how many of the p-values for
[1] 0.7019 # each of the versions are less than .05 &
mean(power7525<.05) # divides the number by 10k to compute the %
[1] 0.5648 # of times the results were 'significant'. That
mean(power9010<.05) # gives an estimate of the power
[1] 0.3261
Notice that in all cases $N=100$, but that in the first case $n_1=50$ & $n_2=50$, in the second case $n_1=75$ & $n_2=25$, and in the last case $n_1=90$ and $n_2=10$. Note further that the standardized mean difference / data generating process was the same in all cases. However, whereas the test was 'significant' 70% of the time for the 50-50 sample, power was 56% with 75-25 and only 33% when the group sizes were 90-10.
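A quick way to see where this power loss comes from (my own sketch, not part of the simulation above): with equal variances, the standard error of the difference in means is proportional to $\sqrt{1/n_1+1/n_2}$, which for a fixed total $N=n_1+n_2$ is smallest when $n_1=n_2$.
n1 <- c(50, 75, 90)                # the three splits used above, N = 100
n2 <- 100 - n1
se <- sqrt(1/n1 + 1/n2)            # SE of the mean difference, assuming sd = 1
round(se, 3)                       # 0.200 0.231 0.333 (larger SE means lower power)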
I think of this by analogy. If you want to know the area of a rectangle, and the perimeter is fixed, then the area will be maximized if the length and width are equal (i.e., if the rectangle is a square). On the other hand, as the length and width diverge (as the rectangle becomes elongated), the area shrinks. | How should one interpret the comparison of means from different sample sizes? | You can use a t-test to assess if there are differences in the means. The different sample sizes don't cause a problem for the t-test, and don't require the results to be interpreted with any extra c | How should one interpret the comparison of means from different sample sizes?
You can use a t-test to assess if there are differences in the means. The different sample sizes don't cause a problem for the t-test, and don't require the results to be interpreted with any extra care. Ultimately, you can even compare a single observation to an infinite population with a known distribution and mean and SD; for example someone with an IQ of 130 is smarter than 97.7% of people. One thing to note though, is that for a given $N$ (i.e., total sample size), power is maximized if the group $n$'s are equal; with highly unequal group sizes, you don't get as much additional resolution with each additional observation.
To clarify my point about power, here is a very simple simulation written for R:
set.seed(9) # this makes the simulation exactly reproducible
power5050 = vector(length=10000) # these will store the p-values from each
power7525 = vector(length=10000) # simulated test to keep track of how many
power9010 = vector(length=10000) # are 'significant'
for(i in 1:10000){ # I run the following procedure 10k times
n1a = rnorm(50, mean=0, sd=1) # I'm drawing 2 samples of size 50 from 2 normal
n2a = rnorm(50, mean=.5, sd=1) # distributions w/ dif means, but equal SDs
n1b = rnorm(75, mean=0, sd=1) # this version has group sizes of 75 & 25
n2b = rnorm(25, mean=.5, sd=1)
n1c = rnorm(90, mean=0, sd=1) # this one has 90 & 10
n2c = rnorm(10, mean=.5, sd=1)
power5050[i] = t.test(n1a, n2a, var.equal=T)$p.value # here t-tests are run &
power7525[i] = t.test(n1b, n2b, var.equal=T)$p.value # the p-values are stored
power9010[i] = t.test(n1c, n2c, var.equal=T)$p.value # for each version
}
mean(power5050<.05) # this code counts how many of the p-values for
[1] 0.7019 # each of the versions are less than .05 &
mean(power7525<.05) # divides the number by 10k to compute the %
[1] 0.5648 # of times the results were 'significant'. That
mean(power9010<.05) # gives an estimate of the power
[1] 0.3261
Notice that in all cases $N=100$, but that in the first case $n_1=50$ & $n_2=50$, in the second case $n_1=75$ & $n_2=25$, and in the last case $n_1=90$ and $n_2=10$. Note further that the standardized mean difference / data generating process was the same in all cases. However, whereas the test was 'significant' 70% of the time for the 50-50 sample, power was 56% with 75-25 and only 33% when the group sizes were 90-10.
I think of this by analogy. If you want to know the area of a rectangle, and the perimeter is fixed, then the area will be maximized if the length and width are equal (i.e., if the rectangle is a square). On the other hand, as the length and width diverge (as the rectangle becomes elongated), the area shrinks. | How should one interpret the comparison of means from different sample sizes?
You can use a t-test to assess if there are differences in the means. The different sample sizes don't cause a problem for the t-test, and don't require the results to be interpreted with any extra c |
4,074 | How should one interpret the comparison of means from different sample sizes? | In addition to the answer mentioned by @gung referring you to the t-test, it sounds like you might be interested in Bayesian rating systems. Websites can use such systems to rank order items that vary in the number of votes received. Essentially, such systems work by assigning a rating that is a composite of the mean rating of all items plus the mean of the sample of ratings for the specific object. As the number of ratings increases, the weight assigned to the mean for the object increases and the weight assigned to mean rating of all items decreases. Perhaps check out bayesian averages.
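As a rough sketch of that weighting idea (my own illustration, with a made-up prior weight C; not from the original answer):
bayes_avg <- function(ratings, global_mean, C = 10) {
  # shrink the item's own mean toward the global mean; C plays the role of a
  # 'prior' number of votes before the item's own ratings start to dominate
  (C * global_mean + sum(ratings)) / (C + length(ratings))
}
bayes_avg(c(5, 5, 5), global_mean = 3.5)   # few votes: about 3.85, pulled toward 3.5
bayes_avg(rep(5, 200), global_mean = 3.5)  # many votes: about 4.93, close to 5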
Of course things can get a lot more complex as you deal with a wide range of issues such as voting fraud, changes over time, etc. | How should one interpret the comparison of means from different sample sizes? | In addition to the answer mentioned by @gung referring you to the t-test, it sounds like you might be interested in Bayesian rating systems. Websites can use such systems to rank order items that vary | How should one interpret the comparison of means from different sample sizes?
In addition to the answer mentioned by @gung referring you to the t-test, it sounds like you might be interested in Bayesian rating systems. Websites can use such systems to rank order items that vary in the number of votes received. Essentially, such systems work by assigning a rating that is a composite of the mean rating of all items plus the mean of the sample of ratings for the specific object. As the number of ratings increases, the weight assigned to the mean for the object increases and the weight assigned to mean rating of all items decreases. Perhaps check out bayesian averages.
Of course things can get a lot more complex as you deal with a wide range of issues such as voting fraud, changes over time, etc. | How should one interpret the comparison of means from different sample sizes?
In addition to the answer mentioned by @gung referring you to the t-test, it sounds like you might be interested in Bayesian rating systems. Websites can use such systems to rank order items that vary |
4,075 | Period detection of a generic time series | If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first. The following R function should do the job for most series. It is far from perfect, but I've tested it on a few dozen examples and it seems to work ok. It will return 1 for data that have no strong periodicity, and the length of period otherwise.
Update: Version 2 of function. This is much faster and seems to be more robust.
find.freq <- function(x)
{
n <- length(x)
spec <- spec.ar(c(x),plot=FALSE)
if(max(spec$spec)>10) # Arbitrary threshold chosen by trial and error.
{
period <- round(1/spec$freq[which.max(spec$spec)])
if(period==Inf) # Find next local maximum
{
j <- which(diff(spec$spec)>0)
if(length(j)>0)
{
nextmax <- j[1] + which.max(spec$spec[j[1]:500])
period <- round(1/spec$freq[nextmax])
}
else
period <- 1
}
}
else
period <- 1
return(period)
} | Period detection of a generic time series | If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies w | Period detection of a generic time series
If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first. The following R function should do the job for most series. It is far from perfect, but I've tested it on a few dozen examples and it seems to work ok. It will return 1 for data that have no strong periodicity, and the length of period otherwise.
Update: Version 2 of function. This is much faster and seems to be more robust.
find.freq <- function(x)
{
n <- length(x)
spec <- spec.ar(c(x),plot=FALSE)
if(max(spec$spec)>10) # Arbitrary threshold chosen by trial and error.
{
period <- round(1/spec$freq[which.max(spec$spec)])
if(period==Inf) # Find next local maximum
{
j <- which(diff(spec$spec)>0)
if(length(j)>0)
{
nextmax <- j[1] + which.max(spec$spec[j[1]:500])
period <- round(1/spec$freq[nextmax])
}
else
period <- 1
}
}
else
period <- 1
return(period)
} | Period detection of a generic time series
If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies w |
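A quick usage sketch for the find.freq() function defined above (my own example, on a simulated series with period 12):
set.seed(1)
x <- ts(sin(2 * pi * (1:200) / 12) + rnorm(200, sd = 0.2))
find.freq(x)   # expected to return a period close to 12 for this strongly periodic series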
4,076 | Period detection of a generic time series | If you expect the process to be stationary -- the periodicity/seasonality will not change over time -- then something like a Chi-square periodogram (see e.g. Sokolove and Bushell, 1978) might be a good choice. It's commonly used in analysis of circadian data which can have extremely large amounts of noise in it, but is expected to have very stable periodicities.
This approach makes no assumption about the shape of the waveform (other than that it is consistent from cycle to cycle), but does require that any noise be of constant mean and uncorrelated to the signal.
chisq.pd <- function(x, min.period, max.period, alpha) {
N <- length(x)
variances = NULL
periods = seq(min.period, max.period)
rowlist = NULL
for(lc in periods){
ncol = lc
nrow = floor(N/ncol)
rowlist = c(rowlist, nrow)
x.trunc = x[1:(ncol*nrow)]
x.reshape = t(array(x.trunc, c(ncol, nrow)))
variances = c(variances, var(colMeans(x.reshape)))
}
Qp = (rowlist * periods * variances) / var(x)
df = periods - 1
pvals = 1-pchisq(Qp, df)
pass.periods = periods[pvals<alpha]
pass.pvals = pvals[pvals<alpha]
#return(cbind(pass.periods, pass.pvals))
return(cbind(periods[pvals==min(pvals)], pvals[pvals==min(pvals)]))
}
x = cos( (2*pi/37) * (1:1000))+rnorm(1000)
chisq.pd(x, 2, 72, .05)
The last two lines are just an example, showing that it can identify the period of a pure trigonometric function, even with lots of additive noise.
As written, the last argument (alpha) in the call is superfluous, the function simply returns the 'best' period it can find; uncomment the first return statement and comment out the second to have it return a list of all periods significant at the level alpha.
This function doesn't do any sort of sanity checking to make sure that you've put in identifiable periods, nor does it (can it) work with fractional periods, nor is there any sort of multiple comparison control built in if you decide to look at multiple periods. But other than that it should be reasonably robust. | Period detection of a generic time series | If you expect the process to be stationary -- the periodicity/seasonality will not change over time -- then something like a Chi-square periodogram (see e.g. Sokolove and Bushell, 1978) might be a goo | Period detection of a generic time series
If you expect the process to be stationary -- the periodicity/seasonality will not change over time -- then something like a Chi-square periodogram (see e.g. Sokolove and Bushell, 1978) might be a good choice. It's commonly used in analysis of circadian data which can have extremely large amounts of noise in it, but is expected to have very stable periodicities.
This approach makes no assumption about the shape of the waveform (other than that it is consistent from cycle to cycle), but does require that any noise be of constant mean and uncorrelated to the signal.
chisq.pd <- function(x, min.period, max.period, alpha) {
N <- length(x)
variances = NULL
periods = seq(min.period, max.period)
rowlist = NULL
for(lc in periods){
ncol = lc
nrow = floor(N/ncol)
rowlist = c(rowlist, nrow)
x.trunc = x[1:(ncol*nrow)]
x.reshape = t(array(x.trunc, c(ncol, nrow)))
variances = c(variances, var(colMeans(x.reshape)))
}
Qp = (rowlist * periods * variances) / var(x)
df = periods - 1
pvals = 1-pchisq(Qp, df)
pass.periods = periods[pvals<alpha]
pass.pvals = pvals[pvals<alpha]
#return(cbind(pass.periods, pass.pvals))
return(cbind(periods[pvals==min(pvals)], pvals[pvals==min(pvals)]))
}
x = cos( (2*pi/37) * (1:1000))+rnorm(1000)
chisq.pd(x, 2, 72, .05)
The last two lines are just an example, showing that it can identify the period of a pure trigonometric function, even with lots of additive noise.
As written, the last argument (alpha) in the call is superfluous, the function simply returns the 'best' period it can find; uncomment the first return statement and comment out the second to have it return a list of all periods significant at the level alpha.
This function doesn't do any sort of sanity checking to make sure that you've put in identifiable periods, nor does it (can it) work with fractional periods, nor is there any sort of multiple comparison control built in if you decide to look at multiple periods. But other than that it should be reasonably robust. | Period detection of a generic time series
If you expect the process to be stationary -- the periodicity/seasonality will not change over time -- then something like a Chi-square periodogram (see e.g. Sokolove and Bushell, 1978) might be a goo |
4,077 | Period detection of a generic time series | You may want to define what you want more clearly (to yourself, if not here). If what you're looking for is the most statistically significant stationary period contained in your noisy data, there's essentially two routes to take:
1) compute a robust autocorrelation estimate, and take the maximum coefficient
2) compute a robust power spectral density estimate, and take the maximum of the spectrum
The problem with #2 is that for any noisy time series, you will get a large amount of power in low frequencies, making it difficult to distinguish. There are some techniques for resolving this problem (i.e. pre-whiten, then estimate the PSD), but if the true period from your data is long enough, automatic detection will be iffy.
Your best bet is probably to implement a robust autocorrelation routine such as can be found in chapter 8.6, 8.7 in Robust Statistics - Theory and Methods by Maronna, Martin and Yohai. Searching Google for "robust durbin-levinson" will also yield some results.
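As a starting point, here is a naive (non-robust) sketch of route 1, simply taking the lag with the largest autocorrelation beyond lag 0; the robust methods referenced above replace the plain acf estimate:
set.seed(2)
x <- sin(2 * pi * (1:500) / 12) + rnorm(500, sd = 0.5)
a <- acf(x, lag.max = 100, plot = FALSE)
which.max(a$acf[-1])   # candidate period (drop lag 0); should come out near 12 here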
If you're just looking for a simple answer, I'm not sure that one exists. Period detection in time series can be complicated, and asking for an automated routine that can perform magic may be too much. | Period detection of a generic time series | You may want to define what you want more clearly (to yourself, if not here). If what you're looking for is the most statistically significant stationary period contained in your noisy data, there's e | Period detection of a generic time series
You may want to define what you want more clearly (to yourself, if not here). If what you're looking for is the most statistically significant stationary period contained in your noisy data, there's essentially two routes to take:
1) compute a robust autocorrelation estimate, and take the maximum coefficient
2) compute a robust power spectral density estimate, and take the maximum of the spectrum
The problem with #2 is that for any noisy time series, you will get a large amount of power in low frequencies, making it difficult to distinguish. There are some techniques for resolving this problem (i.e. pre-whiten, then estimate the PSD), but if the true period from your data is long enough, automatic detection will be iffy.
Your best bet is probably to implement a robust autocorrelation routine such as can be found in chapter 8.6, 8.7 in Robust Statistics - Theory and Methods by Maronna, Martin and Yohai. Searching Google for "robust durbin-levinson" will also yield some results.
If you're just looking for a simple answer, I'm not sure that one exists. Period detection in time series can be complicated, and asking for an automated routine that can perform magic may be too much. | Period detection of a generic time series
You may want to define what you want more clearly (to yourself, if not here). If what you're looking for is the most statistically significant stationary period contained in your noisy data, there's e |
4,078 | Period detection of a generic time series | You could use the Hilbert Transformation from DSP theory to measure the instantaneous frequency of your data. The site http://ta-lib.org/ has open source code for measuring the dominant cycle period of financial data; the relevant function is called HT_DCPERIOD; you might be able to use this or adapt the code to your purposes. | Period detection of a generic time series | You could use the Hilbert Transformation from DSP theory to measure the instantaneous frequency of your data. The site http://ta-lib.org/ has open source code for measuring the dominant cycle period o | Period detection of a generic time series
You could use the Hilbert Transformation from DSP theory to measure the instantaneous frequency of your data. The site http://ta-lib.org/ has open source code for measuring the dominant cycle period of financial data; the relevant function is called HT_DCPERIOD; you might be able to use this or adapt the code to your purposes. | Period detection of a generic time series
You could use the Hilbert Transformation from DSP theory to measure the instantaneous frequency of your data. The site http://ta-lib.org/ has open source code for measuring the dominant cycle period o |
4,079 | Period detection of a generic time series | A different approach could be Empirical Mode Decomposition. The R package is called EMD developed by the inventor of the method:
require(EMD)
ndata <- 3000
tt2 <- seq(0, 9, length = ndata)
xt2 <- sin(pi * tt2) + sin(2* pi * tt2) + sin(6 * pi * tt2) + 0.5 * tt2
try <- emd(xt2, tt2, boundary = "wave")
### Plotting the IMF's
par(mfrow = c(try$nimf + 1, 1), mar=c(2,1,2,1))
rangeimf <- range(try$imf)
for(i in 1:try$nimf) {
plot(tt2, try$imf[,i], type="l", xlab="", ylab="", ylim=rangeimf, main=paste(i, "-th IMF", sep="")); abline(h=0)
}
plot(tt2, try$residue, xlab="", ylab="", main="residue", type="l", axes=FALSE); box()
The method was branded 'Empirical' for a good reason and there is a risk that the Intrinsic Mode Functions (the individual additive components) get mixed up. On the other hand the method is very intuitive and may be helpful for a quick visual inspection of cyclicity. | Period detection of a generic time series | A different approach could be Empirical Mode Decomposition. The R package is called EMD developed by the inventor of the method:
require(EMD)
ndata <- 3000
tt2 <- seq(0, 9, length = ndata)
xt2 <- | Period detection of a generic time series
A different approach could be Empirical Mode Decomposition. The R package is called EMD developed by the inventor of the method:
require(EMD)
ndata <- 3000
tt2 <- seq(0, 9, length = ndata)
xt2 <- sin(pi * tt2) + sin(2* pi * tt2) + sin(6 * pi * tt2) + 0.5 * tt2
try <- emd(xt2, tt2, boundary = "wave")
### Plotting the IMF's
par(mfrow = c(try$nimf + 1, 1), mar=c(2,1,2,1))
rangeimf <- range(try$imf)
for(i in 1:try$nimf) {
plot(tt2, try$imf[,i], type="l", xlab="", ylab="", ylim=rangeimf, main=paste(i, "-th IMF", sep="")); abline(h=0)
}
plot(tt2, try$residue, xlab="", ylab="", main="residue", type="l", axes=FALSE); box()
The method was branded 'Empirical' for a good reason and there is a risk that the Intrinsic Mode Functions (the individual additive components) get mixed up. On the other hand the method is very intuitive and may be helpful for a quick visual inspection of cyclicity. | Period detection of a generic time series
A different approach could be Empirical Mode Decomposition. The R package is called EMD developed by the inventor of the method:
require(EMD)
ndata <- 3000
tt2 <- seq(0, 9, length = ndata)
xt2 <- |
4,080 | Period detection of a generic time series | In reference to Rob Hyndman's post above https://stats.stackexchange.com/a/1214/70282
The find.freq function works brilliantly. On the daily data set I am using, it correctly worked out the frequency to be 7.
When I tried it on only the week days, it reported the frequency as 23, which is remarkably close to 21.14 (= 29.6*5/7), the average number of work days in a month. (Or conversely, 23*7/5 is 32.2.)
Looking back at my daily data, I experimented with a hunch of taking the first period, averaging by that and then finding the next period, etc. See below:
find.freq.all=function(x){
f=find.freq(x);
freqs=c(f);
while(f>1){
start=1; #also try start=f;
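    # period.apply() below is assumed to come from the xts package (not loaded in this snippet)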
x=period.apply(x,seq(start,length(x),f),mean);
f=find.freq(x);
freqs=c(freqs,f);
}
if(length(freqs)==1){ return(freqs); }
for(i in 2:length(freqs)){
freqs[i]=freqs[i]*freqs[i-1];
}
freqs[1:(length(freqs)-1)];
}
find.freq.all(dailyts) #using daily data
The above gives (7,28) or (7,35) depending on if the seq starts with 1 or f. (See comment above.)
Which would imply that the seasonal periods for msts(...) should be (7,28) or (7,35).
The logic appears sensitive to initial conditions given the sensitivity of the algorithm parameters. The mean of 28 and 35 is 31.5 which is close to the average length of a month.
I suspect I reinvented the wheel, what is the name of this algorithm? Is there a better implementation in R somewhere?
Later, I ran the above code in trying all starts of 1 through 7 and I got 35,35,28,28,28,28,28 for the second period. The average works out to 30 which is the average number of days in a month. Interesting...
Any thoughts or comments? | Period detection of a generic time series | In reference to Rob Hyndman's post above https://stats.stackexchange.com/a/1214/70282
The find.freq function works brilliantly. On the daily data set I am using, it correctly worked out the frequency | Period detection of a generic time series
In reference to Rob Hyndman's post above https://stats.stackexchange.com/a/1214/70282
The find.freq function works brilliantly. On the daily data set I am using, it correctly worked out the frequency to be 7.
When I tried it on only the week days, it reported the frequency as 23, which is remarkably close to 21.14 (= 29.6*5/7), the average number of work days in a month. (Or conversely, 23*7/5 is 32.2.)
Looking back at my daily data, I experimented with a hunch of taking the first period, averaging by that and then finding the next period, etc. See below:
find.freq.all=function(x){
f=find.freq(x);
freqs=c(f);
while(f>1){
start=1; #also try start=f;
x=period.apply(x,seq(start,length(x),f),mean);
f=find.freq(x);
freqs=c(freqs,f);
}
if(length(freqs)==1){ return(freqs); }
for(i in 2:length(freqs)){
freqs[i]=freqs[i]*freqs[i-1];
}
freqs[1:(length(freqs)-1)];
}
find.freq.all(dailyts) #using daily data
The above gives (7,28) or (7,35) depending on if the seq starts with 1 or f. (See comment above.)
Which would imply that the seasonal periods for msts(...) should be (7,28) or (7,35).
The logic appears sensitive to initial conditions given the sensitivity of the algorithm parameters. The mean of 28 and 35 is 31.5 which is close to the average length of a month.
I suspect I reinvented the wheel, what is the name of this algorithm? Is there a better implementation in R somewhere?
Later, I ran the above code in trying all starts of 1 through 7 and I got 35,35,28,28,28,28,28 for the second period. The average works out to 30 which is the average number of days in a month. Interesting...
Any thoughts or comments? | Period detection of a generic time series
In reference to Rob Hyndman's post above https://stats.stackexchange.com/a/1214/70282
The find.freq function works brilliantly. On the daily data set I am using, it correctly worked out the frequency |
4,081 | Period detection of a generic time series | One can also use the Ljung-Box test to figure out which seasonal difference gives the best stationarity. I was working on a different subject and actually used this for the same purpose. Try different periods, such as 3 to 24 for monthly data, test each of them with Ljung-Box, store the Chi-Square results, and choose the period with the lowest Chi-Square value.
Here is some simple code to do that.
minval0 <- 5000 #assign a big number to be sure Chi values are smaller
minindex0 <- 0
periyot <- 0
Qtest_d0D1 <- list() #initialize storage for the Ljung-Box test objects
sira0 <- list() #initialize storage for the Chi-Square statistics
for (i in 3:24) { #find optimum period by Qtests over original data
d0D1 <- diff(a, lag=i) #'a' is the time series under study
#store results
Qtest_d0D1[[i]] <- Box.test(d0D1, lag=20, type = "Ljung-Box")
#store Chi-Square statistics
sira0[i] <- Qtest_d0D1[[i]][1]
}
#turn list to a data frame, then matrix
datam0 <- data.frame(matrix(unlist(sira0), nrow=length(Qtest_d0D1)-2, byrow = T))
datamtrx0 <- as.matrix(datam0[])
#get min value's index
minindex0 <- which(datamtrx0 == min(datamtrx0), arr.ind = F)
periyot <- minindex0 + 2 | Period detection of a generic time series | One can also use Ljung-Box test to figure out which seasonal difference reaches to best stationarity. I was working on a different subject and I used this actually for the same purposes. Try different | Period detection of a generic time series
One can also use the Ljung-Box test to figure out which seasonal difference gives the best stationarity. I was working on a different subject and actually used this for the same purpose. Try different periods, such as 3 to 24 for monthly data, test each of them with Ljung-Box, store the Chi-Square results, and choose the period with the lowest Chi-Square value.
Here is some simple code to do that.
minval0 <- 5000 #assign a big number to be sure Chi values are smaller
minindex0 <- 0
periyot <- 0
Qtest_d0D1 <- list() #initialize storage for the Ljung-Box test objects
sira0 <- list() #initialize storage for the Chi-Square statistics
for (i in 3:24) { #find optimum period by Qtests over original data
d0D1 <- diff(a, lag=i) #'a' is the time series under study
#store results
Qtest_d0D1[[i]] <- Box.test(d0D1, lag=20, type = "Ljung-Box")
#store Chi-Square statistics
sira0[i] <- Qtest_d0D1[[i]][1]
}
#turn list to a data frame, then matrix
datam0 <- data.frame(matrix(unlist(sira0), nrow=length(Qtest_d0D1)-2, byrow = T))
datamtrx0 <- as.matrix(datam0[])
#get min value's index
minindex0 <- which(datamtrx0 == min(datamtrx0), arr.ind = F)
periyot <- minindex0 + 2 | Period detection of a generic time series
One can also use Ljung-Box test to figure out which seasonal difference reaches to best stationarity. I was working on a different subject and I used this actually for the same purposes. Try different |
4,082 | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Recurrent Neural networks are recurring over time. For example if you have a sequence
x = ['h', 'e', 'l', 'l']
This sequence is fed to a single neuron which has a single connection to itself.
At time step 0, the letter 'h' is given as input. At time step 1, 'e' is given as input. The network when unfolded over time will look like this.
A recursive network is just a generalization of a recurrent network. In a recurrent network the weights are shared (and dimensionality remains constant) along the length of the sequence, because otherwise how would you deal with position-dependent weights when you encounter a sequence at test time of a different length to any you saw at train time? In a recursive network the weights are shared (and dimensionality remains constant) at every node for the same reason.
This means that all the W_xh weights will be equal (shared) and so will the W_hh weights. This is simply because it is a single neuron which has been unfolded in time.
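To make the weight sharing concrete, here is a tiny sketch (my own, with made-up sizes) of an unfolded recurrent step in R, reusing the same W_xh and W_hh at every time step:
set.seed(1)
W_xh <- matrix(rnorm(3 * 4), 3, 4)    # input-to-hidden weights
W_hh <- matrix(rnorm(3 * 3), 3, 3)    # hidden-to-hidden weights
x    <- matrix(rnorm(5 * 4), 5, 4)    # a sequence of 5 input vectors
h    <- rep(0, 3)                     # initial hidden state
for (t in 1:nrow(x)) {
  h <- tanh(W_xh %*% x[t, ] + W_hh %*% h)   # the same weights are reused at every t
}
h   # final hidden state after unfolding over the whole sequence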
This is what a Recursive Neural Network looks like.
It is quite simple to see why it is called a Recursive Neural Network. Each parent node's children are simply a node similar to that node.
The Neural network you want to use depends on your usage. In Karpathy's blog, he is generating characters one at a time so a recurrent neural network is good.
But if you want to generate a parse tree, then using a Recursive Neural Network is better because it helps to create better hierarchical representations.
If you want to do deep learning in C++, then use CUDA. It has a nice user base and is fast. I do not know much more about it, so I cannot comment further.
In Python, Theano is the best option because it provides automatic differentiation, which means that when you are forming big, awkward NNs, you don't have to find gradients by hand. Theano does it automatically for you. Torch7 lacks this feature.
Theano is very fast as it provides C wrappers to python code and can be implemented on GPUs. It also has an awesome user base, which is very important while learning something new. | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Recurrent Neural networks are recurring over time. For example if you have a sequence
x = ['h', 'e', 'l', 'l']
This sequence is fed to a single neuron which has a single connection to itself.
At time | Recurrent vs Recursive Neural Networks: Which is better for NLP?
Recurrent Neural networks are recurring over time. For example if you have a sequence
x = ['h', 'e', 'l', 'l']
This sequence is fed to a single neuron which has a single connection to itself.
At time step 0, the letter 'h' is given as input. At time step 1, 'e' is given as input. The network when unfolded over time will look like this.
A recursive network is just a generalization of a recurrent network. In a recurrent network the weights are shared (and dimensionality remains constant) along the length of the sequence, because otherwise how would you deal with position-dependent weights when you encounter a sequence at test time of a different length to any you saw at train time? In a recursive network the weights are shared (and dimensionality remains constant) at every node for the same reason.
This means that all the W_xh weights will be equal (shared) and so will the W_hh weights. This is simply because it is a single neuron which has been unfolded in time.
This is what a Recursive Neural Network looks like.
It is quite simple to see why it is called a Recursive Neural Network. Each parent node's children are simply a node similar to that node.
The Neural network you want to use depends on your usage. In Karpathy's blog, he is generating characters one at a time so a recurrent neural network is good.
But if you want to generate a parse tree, then using a Recursive Neural Network is better because it helps to create better hierarchical representations.
If you want to do deep learning in C++, then use CUDA. It has a nice user base and is fast. I do not know much more about it, so I cannot comment further.
In Python, Theano is the best option because it provides automatic differentiation, which means that when you are forming big, awkward NNs, you don't have to find gradients by hand. Theano does it automatically for you. Torch7 lacks this feature.
Theano is very fast as it provides C wrappers to python code and can be implemented on GPUs. It also has an awesome user base, which is very important while learning something new. | Recurrent vs Recursive Neural Networks: Which is better for NLP?
Recurrent Neural networks are recurring over time. For example if you have a sequence
x = ['h', 'e', 'l', 'l']
This sequence is fed to a single neuron which has a single connection to itself.
At time |
4,083 | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Large Recurrent Neural Networks are considered maybe the most powerful model for NLP. A great article written by A. Karpathy on Recurrent Neural Networks and character level modeling is available at http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Having tried a large number of libraries for deep learning (Theano, Caffe, etc.), I would strongly suggest using Torch7, which is considered a state-of-the-art tool for NNs and is supported by NYU, Facebook AI and Google DeepMind. Torch7 is based on Lua and there are so many examples that you can easily familiarize yourself with it. A lot of code can be found on GitHub; a good start would be https://github.com/wojzaremba/lstm.
Finally, the beauty of Lua is that LuaJIT can be embedded very easily in Java, Python, Matlab, etc. | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Large Recurrent Neural Networks are considered maybe the most powerful model for NLP. A great article written by A. Karpathy on Recurrent Neural Networks and character level modeling is available at h | Recurrent vs Recursive Neural Networks: Which is better for NLP?
Large Recurrent Neural Networks are considered maybe the most powerful model for NLP. A great article written by A. Karpathy on Recurrent Neural Networks and character level modeling is available at http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Having tried a large number of libraries for deep learning (Theano, Caffe, etc.), I would strongly suggest using Torch7, which is considered a state-of-the-art tool for NNs and is supported by NYU, Facebook AI and Google DeepMind. Torch7 is based on Lua and there are so many examples that you can easily familiarize yourself with it. A lot of code can be found on GitHub; a good start would be https://github.com/wojzaremba/lstm.
Finally, the beauty of Lua is that LuaJIT can be embedded very easily in Java, Python, Matlab, etc. | Recurrent vs Recursive Neural Networks: Which is better for NLP?
Large Recurrent Neural Networks are considered maybe the most powerful model for NLP. A great article written by A. Karpathy on Recurrent Neural Networks and character level modeling is available at h |
4,084 | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Recurrent Neural Networks (RNNs) basically unfold over time. They are used for sequential inputs where the time factor is the main differentiating factor between the elements of the sequence. For example, here is a recurrent neural network used for language modeling that has been unfolded over time. At each time step, in addition to the user input at that time step, it also accepts the output of the hidden layer that was computed at the previous time step.
A Recursive Neural Network is more like a hierarchical network where there is really no time aspect to the input sequence but the input has to be processed hierarchically in a tree fashion. Here is an example of how a recursive neural network looks. It shows the way to learn a parse tree of a sentence by recursively taking the output of the operation performed on a smaller chunk of the text.
[NOTE]:
LSTM and GRU are two extended RNN types with gating mechanisms (such as the forget gate), which are highly common in NLP.
LSTM-Cell Formula: | Recurrent vs Recursive Neural Networks: Which is better for NLP? | Recurrent Neural Networks (RNN) basically unfolds over time. It is used for sequential inputs where the time factor is the main differentiating factor between the elements of the sequence. For example | Recurrent vs Recursive Neural Networks: Which is better for NLP?
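For reference, one standard way to write the LSTM cell update (with $\sigma$ the logistic sigmoid and $\odot$ elementwise multiplication) is:
$$ i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) $$
$$ \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t) $$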
Recurrent Neural Networks (RNNs) basically unfold over time. They are used for sequential inputs where the time factor is the main differentiating factor between the elements of the sequence. For example, here is a recurrent neural network used for language modeling that has been unfolded over time. At each time step, in addition to the user input at that time step, it also accepts the output of the hidden layer that was computed at the previous time step.
A Recursive Neural Network is more like a hierarchical network where there is really no time aspect to the input sequence but the input has to be processed hierarchically in a tree fashion. Here is an example of how a recursive neural network looks. It shows the way to learn a parse tree of a sentence by recursively taking the output of the operation performed on a smaller chunk of the text.
[NOTE]:
LSTM and GRU are two extended RNN types with gating mechanisms (such as the forget gate), which are highly common in NLP.
LSTM-Cell Formula: | Recurrent vs Recursive Neural Networks: Which is better for NLP?
Recurrent Neural Networks (RNN) basically unfolds over time. It is used for sequential inputs where the time factor is the main differentiating factor between the elements of the sequence. For example |
4,085 | Recurrent vs Recursive Neural Networks: Which is better for NLP? | To answer a couple of the questions:
CNNs definitely are used for NLP tasks sometimes. They are one way to take a variable-length natural language input and reduce it to a fixed length output such as a sentence embedding. Google's Multilingual Universal Sentence Encoder (USE) is one example:
https://arxiv.org/abs/1907.04307
https://tfhub.dev/google/universal-sentence-encoder-multilingual/3
Since this question has been asked, there have been a number of new models proposed for NLP that are distinct from those mentioned above such as Transformers and pre-trained neural language models like BERT and some of the other flavors of USE. https://en.wikipedia.org/wiki/Transformer_(machine_learning_model) | Recurrent vs Recursive Neural Networks: Which is better for NLP? | To answer a couple of the questions:
CNNs definitely are used for NLP tasks sometimes. They are one way to take a variable-length natural language input and reduce it to a fixed length output such as | Recurrent vs Recursive Neural Networks: Which is better for NLP?
To answer a couple of the questions:
CNNs definitely are used for NLP tasks sometimes. They are one way to take a variable-length natural language input and reduce it to a fixed length output such as a sentence embedding. Google's Multilingual Universal Sentence Encoder (USE) is one example:
https://arxiv.org/abs/1907.04307
https://tfhub.dev/google/universal-sentence-encoder-multilingual/3
Since this question has been asked, there have been a number of new models proposed for NLP that are distinct from those mentioned above such as Transformers and pre-trained neural language models like BERT and some of the other flavors of USE. https://en.wikipedia.org/wiki/Transformer_(machine_learning_model) | Recurrent vs Recursive Neural Networks: Which is better for NLP?
To answer a couple of the questions:
CNNs definitely are used for NLP tasks sometimes. They are one way to take a variable-length natural language input and reduce it to a fixed length output such as |
4,086 | What is a difference between random effects-, fixed effects- and marginal model? | This question has been partially discussed at this site as below, and opinions seem mixed.
What is the difference between fixed effect, random effect and mixed effect models?
What is the mathematical difference between random- and fixed-effects?
Concepts behind fixed/random effects models
All terms are generally related to longitudinal / panel / clustered / hierarchical data and repeated measures (in the format of advanced regression and ANOVA), but have multiple meanings in different contexts. I would like to answer the question in formulas based on my knowledge.
Fixed-effects model
In biostatistics, fixed-effects, denoted as $\color{red}{\boldsymbol\beta}$ in Equation (*) below, usually comes together with random effects. But the fixed-effects model is also defined to assume that the observations are independent, like cross-sectional setting, as in Longitudinal Data Analysis of Hedeker and Gibbons (2006).
In econometrics, the fixed-effects model can be written as
$$ y_{ij}=\boldsymbol x_{ij}^{'}\boldsymbol\beta+\color{red}{u_i}+\epsilon_{ij}$$
where $\color{red}{u_i}$ is fixed (not random) intercept for each subject ($i$), or we can also have a fixed-effect as $u_j$ for each repeated measurement ($j$); $\boldsymbol x_{ij}$ denotes covariates.
In meta-analysis, the fixed-effect model assumes underlying effect is the same across all studies (e.g. Mantel and Haenszel, 1959).
Random-effects model
In biostatistics, the random-effects model (Laird and Ware, 1982) can be written as
$$\tag{*} y_{ij}=\boldsymbol x_{ij}^{'}\color{red}{\boldsymbol\beta}+\boldsymbol z_{ij}^{'}\color{blue}{\boldsymbol u_i}+e_{ij}$$
where $\color{blue}{\boldsymbol u_i}$ is assumed to follow a distribution. $\boldsymbol x_{ij}$ denotes covariates for fixed effects, and $\boldsymbol z_{ij}$ denotes covariates for random effects.
In econometrics, the random-effects model may only refer to random intercept model as in biostatistics, i.e. $\boldsymbol z_{ij}^{'}=1$ and $\boldsymbol u_i$ is a scalar.
In meta-analysis, the random-effect model assumes heterogeneous effects across studies (DerSimonian and Laird, 1986).
Marginal model
Marginal model is generally compared to conditional model (random-effects model), and the former focuses on the population mean (take linear model for an example) $$E(y_{ij})=\boldsymbol x_{ij}^{'}\boldsymbol\beta,$$ while the latter deals with the conditional mean $$E(y_{ij}|\boldsymbol u_i)=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i.$$ The interpretation and scale of the regression coefficients between marginal model and random-effects model would be different for nonlinear models (e.g. logistic regression). Let $h(E(y_{ij}|\boldsymbol u_i))=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i$, then $$E(y_{ij})=E(E(y_{ij}|\boldsymbol u_i))=E(h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i))\neq h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta),$$ unless trivially the link function $h$ is the identity link (linear model), or $u_i=0$ (no random-effects). Good examples include generalized estimating equations (GEE; Zeger, Liang and Albert, 1988) and marginalized multilevel models (Heagerty and Zeger, 2000). | What is a difference between random effects-, fixed effects- and marginal model? | This question has been partially discussed at this site as below, and opinions seem mixed.
What is the difference between fixed effect, random effect and mixed effect models?
What is the mathematical | What is a difference between random effects-, fixed effects- and marginal model?
This question has been partially discussed at this site as below, and opinions seem mixed.
What is the difference between fixed effect, random effect and mixed effect models?
What is the mathematical difference between random- and fixed-effects?
Concepts behind fixed/random effects models
All terms are generally related to longitudinal / panel / clustered / hierarchical data and repeated measures (in the format of advanced regression and ANOVA), but have multiple meanings in different contexts. I would like to answer the question in formulas based on my knowledge.
Fixed-effects model
In biostatistics, fixed-effects, denoted as $\color{red}{\boldsymbol\beta}$ in Equation (*) below, usually comes together with random effects. But the fixed-effects model is also defined to assume that the observations are independent, like cross-sectional setting, as in Longitudinal Data Analysis of Hedeker and Gibbons (2006).
In econometrics, the fixed-effects model can be written as
$$ y_{ij}=\boldsymbol x_{ij}^{'}\boldsymbol\beta+\color{red}{u_i}+\epsilon_{ij}$$
where $\color{red}{u_i}$ is fixed (not random) intercept for each subject ($i$), or we can also have a fixed-effect as $u_j$ for each repeated measurement ($j$); $\boldsymbol x_{ij}$ denotes covariates.
In meta-analysis, the fixed-effect model assumes underlying effect is the same across all studies (e.g. Mantel and Haenszel, 1959).
Random-effects model
In biostatistics, the random-effects model (Laird and Ware, 1982) can be written as
$$\tag{*} y_{ij}=\boldsymbol x_{ij}^{'}\color{red}{\boldsymbol\beta}+\boldsymbol z_{ij}^{'}\color{blue}{\boldsymbol u_i}+e_{ij}$$
where $\color{blue}{\boldsymbol u_i}$ is assumed to follow a distribution. $\boldsymbol x_{ij}$ denotes covariates for fixed effects, and $\boldsymbol z_{ij}$ denotes covariates for random effects.
In econometrics, the random-effects model may only refer to random intercept model as in biostatistics, i.e. $\boldsymbol z_{ij}^{'}=1$ and $\boldsymbol u_i$ is a scalar.
In meta-analysis, the random-effect model assumes heterogeneous effects across studies (DerSimonian and Laird, 1986).
Marginal model
Marginal model is generally compared to conditional model (random-effects model), and the former focuses on the population mean (take linear model for an example) $$E(y_{ij})=\boldsymbol x_{ij}^{'}\boldsymbol\beta,$$ while the latter deals with the conditional mean $$E(y_{ij}|\boldsymbol u_i)=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i.$$ The interpretation and scale of the regression coefficients between marginal model and random-effects model would be different for nonlinear models (e.g. logistic regression). Let $h(E(y_{ij}|\boldsymbol u_i))=\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i$, then $$E(y_{ij})=E(E(y_{ij}|\boldsymbol u_i))=E(h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta + \boldsymbol z_{ij}^{'}\boldsymbol u_i))\neq h^{-1}(\boldsymbol x_{ij}^{'}\boldsymbol\beta),$$ unless trivially the link function $h$ is the identity link (linear model), or $u_i=0$ (no random-effects). Good examples include generalized estimating equations (GEE; Zeger, Liang and Albert, 1988) and marginalized multilevel models (Heagerty and Zeger, 2000). | What is a difference between random effects-, fixed effects- and marginal model?
This question has been partially discussed at this site as below, and opinions seem mixed.
What is the difference between fixed effect, random effect and mixed effect models?
What is the mathematical |
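To illustrate the marginal-versus-conditional point above with a quick numeric sketch (my own illustration, assuming a normal random intercept and a logit link), the population-averaged mean is attenuated relative to the subject-specific curve evaluated at $\boldsymbol u_i=0$:
expit <- function(z) 1 / (1 + exp(-z))   # inverse logit
set.seed(1)
xb <- 1                                  # fixed-effect part of the linear predictor
u  <- rnorm(1e5, mean = 0, sd = 2)       # random intercepts u_i
mean(expit(xb + u))                      # marginal mean, roughly 0.66 (attenuated)
expit(xb)                                # conditional mean at u_i = 0, about 0.73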
4,087 | What is a difference between random effects-, fixed effects- and marginal model? | Correct me if I'm wrong here:
Conceptually, there are four possible effects: Fixed intercept, fixed coefficient, random intercept, random coefficient. Most regression models are 'random effects', so they have random intercepts and random coefficients. The term 'random effect' came into use in contrast to 'fixed effect'.
'Fixed effect' is when a variable affects some of the sample, but not all. The simplest version of a fixed effect model (conceptually) would be a dummy variable, for a fixed effect with a binary value. These models have a single random intercept, fixed effect coefficients, and random variable coefficients.
The next tier of complication (conceptually) is when the fixed effect is not binary, but nominal, with many values. In this case, what is generated is a model with many intercepts (one for each of the nominal values). This is where you get the classic 'multiple lines' of a panel data model, where each of the 'options' of a fixed effect variable gets its own effect. The virtue of throwing all the different factor-specific data series into a single regression (rather than doing each factor of the fixed effect as its own regression) is that you get to pool the variance of all the different effects in one equation, and so get better (more certain) values for all of your coefficients.
'Tier three' of complication would be when the 'fixed effect' is itself a random variable, except that its effects are 'fixed' to affect only a sub-set of the sample. At which point the model would have a random intercept, multiple fixed intercepts, and multiple random variables. I think this is what is known as a 'mixed effects' model?
'Mixed effect' models get used for multi-level modeling (MLM), as the 'fixed effects' can be used for nesting one subset of data within another. This grouping can have multiple tiers, with students nested within classrooms, nested within schools. The school is a fixed effect on the classrooms, and the classrooms on the students. (The school may or may not be a fixed effect on the student, depending on the experimental design--not sure)
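For instance, nesting of this kind is commonly written with lme4-style formula notation; a small sketch with simulated data (my own illustration, assuming the lme4 package is installed):
library(lme4)
set.seed(42)
d <- expand.grid(school = factor(1:5), classroom = factor(1:4), student = 1:10)
d$score <- rnorm(nrow(d)) +
  rnorm(5)[as.integer(d$school)] +                            # school-level effect
  rnorm(20)[as.integer(interaction(d$school, d$classroom))]   # classroom-within-school effect
fit <- lmer(score ~ 1 + (1 | school/classroom), data = d)     # nested random intercepts
summary(fit)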
Panel data models are 'mixed effect' models, but use two dimensions for grouping, typically time and some sort of category. | What is a difference between random effects-, fixed effects- and marginal model? | Correct me if I'm wrong here:
Conceptually, there are four possible effects: Fixed intercept, fixed coefficient, random intercept, random coefficient. Most regression models are 'random effects', so | What is a difference between random effects-, fixed effects- and marginal model?
Correct me if I'm wrong here:
Conceptually, there are four possible effects: Fixed intercept, fixed coefficient, random intercept, random coefficient. Most regression models are 'random effects', so they have random intercepts and random coefficients. The term 'random effect' came into use in contrast to 'fixed effect'.
'Fixed effect' is when a variable affects some of the sample, but not all. The simplest version of a fixed effect model (conceptually) would be a dummy variable, for a fixed effect with a binary value. These models have a single random intercept, fixed effect coefficients, and random variable coefficients.
The next tier of complication (conceptually) is when the fixed effect is not binary, but nominal, with many values. In this case, what is generated is a model with many intercepts (one for each of the nominal values). This is where you get the classic 'multiple lines' of a panel data model, where each of the 'options' of a fixed effect variable gets its own effect. The virtue of throwing all the different factor-specific data series into a single regression (rather than doing each factor of the fixed effect as its own regression) is that you get to pool the variance of all the different effects in one equation, and so get better (more certain) values for all of your coefficients.
'Tier three' of complication would be when the 'fixed effect' is itself a random variable, except that its effects are 'fixed' to affect only a sub-set of the sample. At which point the model would have a random intercept, multiple fixed intercepts, and multiple random variables. I think this is what is known as a 'mixed effects' model?
'Mixed effect' models get used for multi-level modeling (MLM), as the 'fixed effects' can be used for nesting one subset of data within another. This grouping can have multiple tiers, with students nested within classrooms, nested within schools. The school is a fixed effect on the classrooms, and the classrooms on the students. (The school may or may not be a fixed effect on the student, depending on the experimental design--not sure)
Panel data models are 'mixed effect' models, but use two dimensions for grouping, typically time and some sort of category. | What is a difference between random effects-, fixed effects- and marginal model?
Correct me if I'm wrong here:
Conceptually, there are four possible effects: Fixed intercept, fixed coefficient, random intercept, random coefficient. Most regression models are 'random effects', so |
4,088 | How to apply standardization/normalization to train- and testset if prediction is the goal? | The third way is correct. Exactly why is covered in wonderful detail in The Elements of Statistical Learning, see the section "The Wrong and Right Way to Do Cross-validation", and also in the final chapter of Learning From Data, in the stock market example.
Essentially, procedures 1 and 2 leak information about either the response, or from the future, from your hold out data set into the training, or evaluation, of your model. This can cause considerable optimism bias in your model evaluation.
The idea in model validation is to mimic the situation you would be in when your model is making production decisions, when you do not have access to the true response. The consequence is that you cannot use the response in the test set for anything except comparing to your predicted values.
Another way to approach it is to imagine that you only have access to one data point from your hold out at a time (a common situation for production models). Anything you cannot do under this assumption you should hold in great suspicion. Clearly, one thing you cannot do is aggregate over all new data-points past and future to normalize your production stream of data - so doing the same for model validation is invalid.
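A small sketch of the third procedure (my own illustration with simulated data): the centering and scaling constants are estimated on the training set only and then reused, unchanged, on the test set.
set.seed(123)
train <- matrix(rnorm(100 * 3, mean = 5, sd = 2), ncol = 3)
test  <- matrix(rnorm(20 * 3,  mean = 5, sd = 2), ncol = 3)
mu    <- colMeans(train)                  # estimated from the training data only
sigma <- apply(train, 2, sd)              # estimated from the training data only
train_std <- scale(train, center = mu, scale = sigma)
test_std  <- scale(test,  center = mu, scale = sigma)   # reuse the train constants
colMeans(test_std)   # not exactly zero, and that is fine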
You don't have to worry about the mean of your test set being non-zero, that's a better situation to be in than biasing your hold out performance estimates. Though, of course, if the test is truly drawn from the same underlying distribution as your train (an essential assumption in statistical learning), said mean should come out as approximately zero. | How to apply standardization/normalization to train- and testset if prediction is the goal? | The third way is correct. Exactly why is covered in wonderful detail in The Elements of Statistical Learning, see the section "The Wrong and Right Way to Do Cross-validation", and also in the final c | How to apply standardization/normalization to train- and testset if prediction is the goal?
4,089 | What is the root cause of the class imbalance problem? | An entry from the Encyclopedia of Machine Learning (https://cling.csd.uwo.ca/papers/cost_sensitive.pdf) helpfully explains that what gets called "the class imbalance problem" is better understood as three separate problems:
1. assuming that an accuracy metric is appropriate when it is not
2. assuming that the test distribution matches the training distribution when it does not
3. assuming that you have enough minority class data when you do not
The authors explain:
Class-imbalanced datasets occur in many real-world applications where the class distributions of the data are highly imbalanced. Again, without loss of generality, we assume that the minority or rare class is the positive class, and the majority class is the negative class. Often the minority class is very small, such as 1% of the dataset. If we apply most traditional (cost-insensitive) classifiers on the dataset, they will likely predict everything as negative (the majority class). This was often regarded as a problem in learning from highly imbalanced datasets.
However, as pointed out by (Provost, 2000), two fundamental assumptions are often made in the traditional cost-insensitive classifiers. The first is that the goal of the classifiers is to maximize the accuracy (or minimize the error rate); the second is that the class distribution of the training and test datasets is the same. Under these two assumptions, predicting everything as negative for a highly imbalanced dataset is often the right thing to do. (Drummond and Holte, 2005) show that it is usually very difficult to outperform this simple classifier in this situation.
Thus, the imbalanced class problem becomes meaningful only if one or both of the two assumptions above are not true; that is, if the cost of different types of error (false positive and false negative in the binary classification) is not the same, or if the class distribution in the test data is different from that of the training data. The first case can be dealt with effectively using methods in cost-sensitive meta-learning.
In the case when the misclassification cost is not equal, it is usually more expensive to misclassify a minority (positive) example into the majority (negative) class than a majority example into the minority class (otherwise it is more plausible to predict everything as negative). That is, FN > FP. Thus, given the values of FN and FP, a variety of cost-sensitive meta-learning methods can be, and have been, used to solve the class imbalance problem (Ling and Li, 1998; Japkowicz and Stephen, 2002). If the values of FN and FP are not known explicitly, FN and FP can be assigned to be proportional to p(-):p(+) (Japkowicz and Stephen, 2002).
In case the class distributions of training and test datasets are different (for example, if the training data is highly imbalanced but the test data is more balanced), an obvious approach is to sample the training data such that its class distribution is the same as the test data (by oversampling the minority class and/or undersampling the majority class)(Provost, 2000).
Note that sometimes the number of examples of the minority class is too small for classifiers to learn adequately. This is the problem of insufficient (small) training data, different from that of the imbalanced datasets.
Thus, as Murphy implies, there is nothing inherently problematic about using imbalanced classes, provided you avoid these three mistakes. Models that yield posterior probabilities make it easier to avoid error (1) than do discriminant models like SVM because they enable you to separate inference from decision-making. (See Bishop's section 1.5.4 Inference and Decision for further discussion of that last point.)
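To illustrate the point about separating inference from decision-making, here is a hedged sketch: a probabilistic classifier is fit as usual, and the imbalance is handled only at decision time by thresholding the posterior with assumed misclassification costs (the cost numbers below are invented for illustration).

```python
# Sketch: keep inference (posterior probabilities) separate from the decision rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # inference: estimate P(y=1|x)
p_pos = clf.predict_proba(X_te)[:, 1]

# Decision: with assumed costs FN=50, FP=1, minimizing expected cost means
# predicting positive whenever p * FN > (1 - p) * FP, i.e. p > FP / (FP + FN).
cost_fn, cost_fp = 50.0, 1.0
threshold = cost_fp / (cost_fp + cost_fn)
y_hat = (p_pos >= threshold).astype(int)

print("threshold:", round(threshold, 3))
print("fraction flagged positive:", y_hat.mean(), "base rate:", y_te.mean())
```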
Hope that helps.
4,090 | What is the root cause of the class imbalance problem? | Anything that involves optimization to minimize a loss function will, if sufficiently convex, give a solution that is a global minimum of that loss function. I say 'sufficiently convex' since deep networks are not on the whole convex, but give reasonable minimums in practice, with careful choices of learning rate etc.
Therefore, the behavior of such models is defined by whatever we put in the loss function.
Imagine that we have a model, $F$, that assigns some arbitrary real scalar to each example, such that more negative values tend to indicate class A, and more positive numbers tend to indicate class B.
$$y_f = f(\mathbf{x})$$
We use $F$ to create model $G$, which assigns a threshold, $b$, to the output of $F$, implicitly or explicitly, such that when $F$ outputs a value greater than $b$ then model $G$ predicts class B, else it predicts class A.
$$
y_g = \begin{cases}
B & \text{if } f(\mathbf{x}) > b \\
A & \text{otherwise}\\
\end{cases}
$$
By varying the threshold $b$ that model $G$ learns, we can vary the proportion of examples that are classified as class A or class B. We can move along a curve of precision/recall, for each class. A higher threshold gives lower recall for class B, but probably higher precision.
Imagine that the model $F$ is such that if we choose a threshold that gives equal precision and recall to either class, then the accuracy of model G is 90%, for either class (by symmetry). So, given a training example, $G$ would get the example right 90% of the time, no matter what is the ground truth, A or B. This is presumably where we want to get to? Let's call this our 'ideal threshold', or 'ideal model G', or perhaps $G^*$.
Now, let's say we have a loss function which is:
$$
\mathcal{L} = \frac{1}{N}\sum_{n=1}^N I_{y_i \ne g(x_i)}
$$
where $I_c$ is an indicator variable that is $1$ when $c$ is true, else $0$, $y_i$ is the true class for example $i$, and $g(x_i)$ is the predicted class for example $i$, by model G.
Imagine that we have a dataset with 99 times as many training examples of class A as of class B, so that 99 out of every 100 examples are A. Then we feed examples through. For every 99 examples of A, we expect to get $99*0.9 = 89.1$ examples correct, and $99*0.1=9.9$ examples incorrect. Similarly, for the 1 example of B, we expect to get $1 * 0.9=0.9$ examples correct, and $1 * 0.1=0.1$ examples incorrect. The expected loss will be:
$
\mathcal{L} = (9.9 + 0.1)/100 = 0.1
$
Now, let's look at a model $G$ where the threshold is set such that class A is systematically chosen. Now, for every 99 examples of A, all 99 will be correct. Zero loss. But each example of B will be systematically not chosen, giving a loss of $1/100$, so the expected loss over the training set will be:
$
\mathcal{L} = 0.01
$
Ten times lower than the loss when setting the threshold so as to assign equal recall and precision to each class.
Therefore, the loss function will drive model $G$ to choose a threshold which chooses A with higher probability than class B, driving up the recall for class A, but lowering that for class B. The resulting model no longer matches what we might hope, no longer matches our ideal model $G^*$.
To correct the model, we'd need to, for example, modify the loss function such that getting B wrong costs a lot more than getting A wrong. The modified loss function will then have its minimum closer to the earlier ideal model $G^*$, which assigned equal precision/recall to each class.
Alternatively, we can modify the dataset by cloning every B example 99 times, which will also cause the loss function to no longer have a minimum at a position different from our earlier ideal threshold.
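A small numeric sketch of the same stylized setup (99:1 class ratio, a 90%-accurate scorer at the balanced threshold) reproduces the two expected losses and shows how reweighting class B by 99, which is roughly what `class_weight="balanced"` does in many scikit-learn estimators, removes the advantage of the "always predict A" threshold.

```python
# Reproduce the expected 0-1 losses from the stylized 99:1 example above.
p_A, p_B = 0.99, 0.01   # class frequencies
acc = 0.9               # per-class accuracy of the balanced-threshold model G*

# Balanced threshold: 10% of each class is misclassified.
loss_balanced = p_A * (1 - acc) + p_B * (1 - acc)
# Threshold pushed so that everything is predicted as A: only class B is wrong.
loss_always_A = p_A * 0.0 + p_B * 1.0
print(loss_balanced, loss_always_A)        # ~0.1 vs 0.01 -> "always A" wins on raw loss

# Reweight (or clone) class B by 99 so both classes carry equal total weight.
w_B = 99
total = p_A + p_B * w_B
loss_balanced_w = (p_A * (1 - acc) + p_B * w_B * (1 - acc)) / total
loss_always_A_w = (p_A * 0.0 + p_B * w_B * 1.0) / total
print(loss_balanced_w, loss_always_A_w)    # ~0.1 vs 0.5 -> balanced threshold now wins
```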
4,091 | What is the root cause of the class imbalance problem? | Note that one-class classifiers don't have an imbalance problem as they look at each class independently from all other classes and they can cope with "not-classes" by just not modeling them. (They may have a problem with too small a sample size, of course.)
Many problems that would be more appropriately modeled by one-class classifiers lead to ill-defined models when discriminative approaches are used, of which "class imbalance problems" are one symptom.
As an example, consider a product that is either good enough to be sold or not. Such a situation is usually characterized by:
class | "good" | "not good"
--------------+-------------------------------+------------------------------------------
sample size | large | small
| |
feature space | single, well-delimited region | many possibilities of *something* wrong
| | (possibly well-defined sub-groups of
| | particular fault reasons/mechanisms)
| | => not a well defined region,
| | spread over large parts of feature space
| |
future cases | can be expected to end up | may show up *anywhere*
| inside modeled region | (except in good region)
Thus, class "good" is well-defined while class "not-good" is ill-defined. If such a situation is modeled by a discriminative classifier, we have a two-fold "imbalance problem": not only has the "not-good" class small sample size, it also has even lower sample density (fewer samples spread out over a larger part of the feature space).
This type of "class imbalance problem" will vanish when the task is modeled as one-class recognition of the well-defined "good" class. | What is the root cause of the class imbalance problem? | Note that one-class classifiers don't have an imbalance problem as they look at each class independently from all other classes and they can cope with "not-classes" by just not modeling them. (They ma | What is the root cause of the class imbalance problem?
4,092 | What is the root cause of the class imbalance problem? | Tongue slightly in cheek - the root cause of the class imbalance problem is calling it the class imbalance problem, which implies that the class imbalance causes a problem. This is very rarely the case (and when it does happen the only solution is likely to be to collect more data). The real problem is practitioners (and algorithm developers) not paying attention to the requirements of the application. In most cases it is a cost-sensitive learning problem in disguise (where the degree of imbalance is completely irrelevant to the solution, it depends only on the misclassification costs) or a problem of a difference in the distribution of patterns in the training set and in the test set or operational conditions (for which the degree of imbalance is again essentially irrelevant - the solution is the same as for balanced datasets).
We should stop talking about class imbalance being a problem as it obscures the real problems (e.g. cost-sensitive learning) and prevents people from addressing them.
4,093 | Bootstrap vs. jackknife | Bootstrapping is a superior technique and can be used pretty much anywhere jackknifing has been used. Jackknifing is much older (by perhaps ~20 years); its main advantage in the days when computing power was limited was that it is computationally much simpler. However, the bootstrap provides information about the whole sampling distribution, and can offer greater precision. The jackknife is still useful in outlier detection, for example in calculating dfbeta (the change in a parameter estimate when a data point is dropped).
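For concreteness, here is a small sketch (synthetic data, my own variable names) contrasting the two resampling schemes on the standard error of a sample mean; note that the bootstrap also hands you the whole resampling distribution, so percentile intervals come essentially for free.

```python
# Sketch: bootstrap vs. jackknife estimates of the standard error of the mean.
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100)
n = len(x)

# Bootstrap: resample with replacement; keep the whole sampling distribution.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(2000)])
se_boot = boot_means.std(ddof=1)
ci_95 = np.percentile(boot_means, [2.5, 97.5])   # percentile interval "for free"

# Jackknife: n leave-one-out recomputations, no randomness involved.
jack_means = np.array([np.delete(x, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))

print(f"bootstrap SE ~ {se_boot:.3f}, 95% percentile CI {np.round(ci_95, 3)}")
print(f"jackknife SE ~ {se_jack:.3f}, analytic s/sqrt(n) = {x.std(ddof=1)/np.sqrt(n):.3f}")
```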
4,094 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | By way of background, I have been doing forecasting store $\times$ SKU time series for retail sales for 12 years now. Tens of thousands of time series across hundreds or thousands of stores. I like saying that we have been doing Big Data since before the term became popular.
I have consistently found that the single most important thing is to understand your data. If you don't understand major drivers like Easter or promotions, you are doomed. Often enough, this comes down to understanding the specific business well enough to ask the correct questions and telling known unknowns from unknown unknowns.
Once you understand your data, you need to work to get clean data. I have supervised quite a number of juniors and interns, and the one thing they had never experienced in all their statistics and data science classes was how much sheer crap there can be in the data you have. Then you need to either go back to the source and try to get it to bring forth good data, or try to clean it, or even just throw some stuff away. Changing a running system to yield better data can be surprisingly hard.
Once you understand your data and actually have somewhat-clean data, you can start fiddling with it. Unfortunately, by this time, I have often found myself out of time and resources.
I personally am a big fan of model combination ("stacking"), at least in an abstract sense, less so of fancy feature engineering, which often crosses the line into overfitting territory - and even if your fancier model performs slightly better on average, one often finds that the really bad predictions get worse with a more complex model. This is a dealbreaker in my line of business. A single really bad forecast can pretty completely destroy the trust in the entire system, so robustness is extremely high in my list of priorities. Your mileage may vary.
In my experience, yes, model combination can improve accuracy. However, the really big gains are made with the first two steps: understanding your data, and cleaning it (or getting clean data in the first place).
4,095 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | I can't speak for the whole of industry, obviously, but I work in industry and have competed on Kaggle so I will share my POV.
First, you're right to suspect that Kaggle doesn't exactly match what people are doing in industry. It's a game, and subject to gamesmanship, with lots of crazy restrictions. For example, in the currently running Santander competition:
The feature names were artificially hashed to hide their meaning
The "training" set was artificially limited to have fewer rows than columns specifically so that feature selection, robustness, and regularization technique would be indispensable to success.
The so-called "test" set has a markedly different distribution than the training set and the two are clearly not random samples from the same population.
If someone gave me a data set like this at work, I would immediately offer to work with them on feature engineering so we could get features that were more useful. I would suggest we use domain knowledge to decide on likely interaction terms, thresholds, categorical variable coding strategies, etc. Approaching the problem in that way would clearly be more productive than trying to extract meaning from an exhaust file produced by a database engineer with no training in ML.
Furthermore, if you learn, say, that a particular numeric column is not numeric at all but rather a ZIP code, well, you can go and get data from 3rd-party data sources such as the US Census to augment your data. Or if you have a date, maybe you'll include the S&P 500 closing price for that day. Such external augmentation strategies require detailed knowledge of the specific data set and significant domain knowledge, but usually have much larger payoffs than pure algorithmic improvements.
So, the first big difference between industry and Kaggle is that in industry, features (in the sense of input data) are negotiable.
A second class of differences is performance. Often, models will be deployed to production in one of two ways: 1) model predictions will be pre-computed for every row in a very large database table, or 2) an application or website will pass the model a single row of data and need a prediction returned in real-time. Both use cases require good performance. For these reasons, you don't often see models that can be slow to predict or use a huge amount of memory like K-Nearest-Neighbors or Extra Random Forests. A logistic regression or neural network, in contrast, can score a batch of records with a few matrix multiplications, and matrix multiplication can be highly optimized with the right libraries. Even though I could get maybe +0.001 AUC if I stacked on yet another non-parametric model, I wouldn't because prediction throughput and latency would drop too much.
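As a toy illustration of why a linear model is cheap to serve (the shapes and numbers here are made up): scoring a large batch with a fitted logistic regression is essentially one matrix multiplication plus an elementwise sigmoid.

```python
# Toy sketch: batch scoring with a fitted logistic regression is a matrix multiply.
import numpy as np

n_records, n_features = 1_000_000, 50
X = np.random.rand(n_records, n_features).astype(np.float32)  # incoming batch
w = np.random.randn(n_features).astype(np.float32)            # "fitted" coefficients
b = -1.0                                                       # "fitted" intercept

scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # one BLAS call plus elementwise ops
print(scores.shape, scores[:3])
```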
There's a reliability dimension to this as well - stacking four different state-of-the-art 3rd-party libraries, say LightGBM, xgboost, catboost, and Tensorflow (on GPUs, of course) might get you that .01 reduction in MSE that wins Kaggle competitions, but it's four different libraries to install, deploy, and debug if something goes wrong. It's great if you can get all that stuff working on your laptop, but getting it running inside a Docker container running on AWS is a completely different story. Most companies don't want to front a small devops team just to deal with these kinds of deployment issues.
That said, stacking in itself isn't necessarily a huge deal. In fact, stacking a couple different models that all perform equally well but have very different decision boundaries is a great way to get a small bump in AUC and a big bump in robustness. Just don't go throwing so many kitchen sinks into your heterogeneous ensemble that you start to have deployment issues.
4,096 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | From my experience, more data and more features are more important than the fanciest, most stacked, most tuned model one can come up with.
Look at the online advertising competitions that took place. Winning models were so complex they ended up taking a whole week to train (on a very small dataset compared to the industry standard). On top of that, prediction with a stacked model takes longer than with a simple linear model. On the same topic, remember that Netflix never used its $1M algorithm because of engineering costs.
I would say that online data science competitions are a good way for a company to know "what is the highest accuracy (or any performance metric) that can be achieved" using the data they collect (at some point in time). Note that this actually is a hard problem which is being solved! But in industry, field knowledge, hardware and business constraints usually discourage the use of "fancy modelling".
4,097 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | Stacking significantly increases complexity and reduces interpretability. The gains are usually too small to justify it. So while ensembling is probably widely used (e.g. XGBoost), I think stacking is relatively rare in industry.
4,098 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | In my experience collecting good data and features is much more important.
The clients we worked with usually have a lot of data, and not all of it is in a format that can be readily exported or is easy to work with. The first batch of data is usually not very useful; it is our task to work with the client to figure out what data we would need to make the model more useful. This is a very iterative process.
There is a lot of experimentation going on, and we need models that are:
1. Fast to train
2. Fast to predict (this is also often a business requirement)
3. Easy to interpret
Point 3) is especially important, because models that are easy to interpret are easier to communicate to the client and it is easier to catch if we have done something wrong.
4,099 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | Here's something that doesn't come up much on Kaggle: the
more variables you have in your model, and
the more complex the relationship between those variables and the output,
the more risk you will face over the lifetime of that model. Time is typically either frozen in Kaggle competitions, or there's a short future time window where test set values come in. In industry, that model might run for years. And all it might take is for one variable to go haywire for your entire model to go to hell, even if it was built flawlessly. I get it, no one wants to watch a contest where competitors carefully balance model complexity against the risk, but out there in a job, your business and quality of life will suffer if something goes wrong with a model you're in charge of. Even extremely smart people aren't immune. Take, for instance, the Google Flu Trends prediction failure. The world changed, and they didn't see it coming.
To O.P.'s question, "In general, in your experience, how important is fancy modelling such as stacking vs simply collecting more data and more features for the data?" Well, I'm officially old, but my answer is that unless you have a really robust modeling infrastructure, it's better to have straightforward models, with a minimal set of variables, where the input-to-output relationship is relatively straightforward. If a variable barely improves your loss metric, leave it out. Remember that it's a job. Get your kicks outside of work on Kaggle contests where there is the "go big or go home" incentive.
One exception would be if the business situation demanded a certain level of model performance, for instance if your company needed to match or beat the performance of a competitor to gain some advantage (probably in marketing). But when there's a linear relationship between the model performance and business gain, the increases in complexity don't typically justify the financial gain (see "Netflix never used its $1 Million Algorithm due to Engineering costs" - apologies to @RUser4512 for citing the same article). In a Kaggle competition however, that additional gain may move you hundreds of ranks as you pass nearby solutions.
4,100 | Industry vs Kaggle challenges. Is collecting more observations and having access to more variables more important than fancy modelling? | A short answer which is a quote I like from Gary Kasparov's book Deep Thinking
A clever process beats superior knowledge and superior technology
I work mainly with time-series financial data, and the process runs from gathering data, cleaning it, and processing it, to working with the problem owners to figure out what they actually want to do, to building features and models to try to tackle the problem, and finally to retrospectively examining the process to improve for next time.
This whole process is greater than the sum of its parts. I tend to get 'acceptable' generalisation performance with a linear/logistic regression and by talking with domain experts to generate features, which is far better time spent than over-fitting my model to the data I have.