idx | question | answer
---|---|---
2,201 | How to generate uniformly distributed points on the surface of the 3-d unit sphere? | If you want to sample points uniformly distributed on the 3D sphere (i.e., the surface of a 3D ball), use simple rejection sampling, or the method of Marsaglia (Ann. Math. Statist., 43 (1972), pp. 645–646). In low dimensions the rejection rate is quite low, so rejection is perfectly usable.
If you want to generate random points from higher-dimensional spheres and balls, then it depends on the purpose and scale of the simulation. If you do not need to perform large simulations, use the method of Muller (Commun. ACM, 2 (1959), pp. 19–20) or its "ball" version (see the paper of Harman & Lacko cited below). That is:
To get a sample uniformly distributed on an n-sphere (the surface):
1) generate X from an n-dimensional standard normal distribution;
2) divide each component of X by the Euclidean norm of X.
To get a sample uniformly distributed on an n-ball (the interior):
1) generate X from an (n+2)-dimensional standard normal distribution;
2) divide each component of X by the Euclidean norm of X and take only the first n components.
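A minimal NumPy sketch of these two recipes (the function names are just illustrative):
import numpy as np

rng = np.random.default_rng()

def uniform_on_sphere(n_points, n):
    # Muller's method: normalize n-dimensional standard normal draws.
    x = rng.standard_normal((n_points, n))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def uniform_in_ball(n_points, n):
    # "Ball" version: normalize (n+2)-dimensional draws and keep the first n components.
    x = rng.standard_normal((n_points, n + 2))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x[:, :n]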
If you want to perform large simulations, then you should investigate more specialized methods. Upon request, I can send you the paper of Harman and Lacko on conditional distribution methods, which provides a classification and generalizations of some of the algorithms mentioned in this discussion. Contact details are available at my website (http://www.iam.fmph.uniba.sk/ospm/Lacko).
If you want to check whether your points are truly uniform on the surface or in the interior of a ball, look at the marginals: by rotational invariance they should all be the same, and the squared norm of a projected sample is beta distributed.
2,202 | How to generate uniformly distributed points on the surface of the 3-d unit sphere? | I had a similar problem (n-sphere) during my PhD and one of the local 'experts' suggested rejection sampling from an n-cube! This, of course, would have taken the age of the universe, as I was looking at n on the order of hundreds.
The algorithm I ended up using is very simple and published in:
W.P. Petersen and A. Bernasconi
Uniform sampling from an n-sphere: Isotropic method
Technical Report, TR-97-06, Swiss Centre for Scientific Computing
I also have this paper in my bibliography that I haven't looked at. You may find it useful.
Harman, R. & Lacko, V. (2010). On decompositional algorithms for uniform sampling from $n$-spheres and $n$-balls. Journal of Multivariate Analysis.
2,203 | How to generate uniformly distributed points on the surface of the 3-d unit sphere? | Here is the pseudocode:
$v \sim \mathcal{N}(\mathbf{0},\, \sigma^2 I)$ (the mean must be zero for the normalized vector to be uniform on the sphere)
$v = \frac{v}{ \| v \| }$
In PyTorch:
import torch
from torch.distributions import MultivariateNormal
v = MultivariateNormal(torch.zeros(10000), torch.eye(10000)).sample()  # one 10000-dimensional draw
v = v / v.norm(2)  # normalize to unit length
I don't understand this well enough but I've been told by whuber that:
v = torch.normal(torch.zeros(10000), torch.ones(10000))  # a univariate normal draw for each coordinate
v = v / v.norm(2)
is also correct, i.e., sampling from a univariate normal for each coordinate.
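For the 3-D case in the original question, a batched version of the same idea (a small added sketch, not part of the original answer) is:
import torch
V = torch.randn(1000, 3)             # 1000 rows, each ~ N(0, I_3)
V = V / V.norm(dim=1, keepdim=True)  # each row is now a uniform point on the unit sphere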
2,204 | How to generate uniformly distributed points on the surface of the 3-d unit sphere? | I have had this problem before, and here is an alternative I found.
As for the distribution itself, the approach I found that works decently is to use polar coordinates (I actually use a variation of polar coordinates that I developed), then convert to Cartesian coordinates.
The radius is, of course, the radius of the sphere on which you are plotting.
Then you have the second value, the angle in the flat plane, followed by the third value, which is the angle measured from the vertical axis.
To get a decent distribution, assume that U is a fresh uniformly distributed random number on (0, 1) each time it appears, r is the radius, a is the second polar coordinate, and b is the third polar coordinate:
a=U*360
b=U+U-1
then convert to Cartesian via
x = r*sin(b)*sin(a)
z = r*sin(b)*cos(a)
y = r*cos(b)
I recently found the following, which is better mathematically speaking:
a = 2*pi*U
b = arccos(2*U - 1)
Not much different from my original formula actually, though mine is in degrees rather than radians.
This recent version supposedly can be used for hyperspheres too, though no mention was made of how to achieve that.
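A short NumPy sketch of that recipe, using two independent uniforms and the Cartesian conversion above (the function name is just for illustration):
import numpy as np

rng = np.random.default_rng()

def sample_sphere(n, r=1.0):
    u1, u2 = rng.random(n), rng.random(n)
    a = 2.0 * np.pi * u1           # angle in the flat plane
    b = np.arccos(2.0 * u2 - 1.0)  # angle measured from the vertical axis
    x = r * np.sin(b) * np.sin(a)
    z = r * np.sin(b) * np.cos(a)
    y = r * np.cos(b)
    return np.column_stack((x, y, z))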
I check the uniformity visually by the rather cheap method of making maps for Homeworld 2 and then "playing" those maps. In fact, because the maps are made with Lua scripts, you can build your formula right into the map and thus check multiple samplings without ever leaving the game. Not scientific perhaps, but it is a good method for visually seeing the results.
2,205 | How to generate uniformly distributed points on the surface of the 3-d unit sphere? | My best guess would be to first generate a set of uniformly distributed points in 2-dimensional space and then project those points onto the surface of a sphere using some sort of projection.
You will probably have to mix and match the way you generate the points with the way that you map them. In terms of the 2D point generation, I think that scrambled low-discrepancy sequences would be a good place to start (i.e. a scrambled Sobol sequence), since they usually produce points that are not "clumped together". I'm not as sure about which type of mapping to use, but Wolfram popped up the gnomonic projection... so maybe that could work?
MATLAB has a decent implementation of low-discrepancy sequences, which you can generate using q = sobolset(2) and scramble using q = scramble(q,'MatousekAffineOwen'). There is also a mapping toolbox in MATLAB with a bunch of different projection functions you could use in case you do not want to code the mapping and graphics yourself.
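For a Python alternative, here is a hedged sketch (assuming SciPy's scipy.stats.qmc module is available) that pairs a scrambled Sobol sequence with an equal-area (Lambert cylindrical) map onto the sphere; note that this swaps out the gnomonic projection mentioned above, because only an area-preserving map keeps the points uniformly spread over the surface:
import numpy as np
from scipy.stats import qmc

uv = qmc.Sobol(d=2, scramble=True).random(1024)  # scrambled 2-D Sobol points in [0, 1)^2
theta = 2.0 * np.pi * uv[:, 0]                   # longitude
z = 2.0 * uv[:, 1] - 1.0                         # height, uniform on [-1, 1]
r_xy = np.sqrt(1.0 - z ** 2)
points = np.column_stack((r_xy * np.cos(theta), r_xy * np.sin(theta), z))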
2,206 | What is the difference between a "link function" and a "canonical link function" for GLM | The other answers are more intuitive, so here I try for more rigor.
What is a GLM?
Let $Y=(y,\mathbf{x})$ denote a pair of a response $y$ and a $p$-dimensional covariate vector $\mathbf{x}=(x_1,\dots,x_p)$, with expected value $E(y)=\mu$. For $i=1,\dots,n$ independent observations, the distribution of each $y_i$ is an exponential family with density
$$
f(y_i;\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-\gamma(\theta_i)}{\phi}+\tau(y_i,\phi)\right) = \alpha(y_i, \phi)\exp\left(\frac{y_i\theta_i-\gamma(\theta_i)}{\phi}\right)
$$
Here, the parameter of interest (the natural or canonical parameter) is $\theta_i$, $\phi$ is a scale parameter (known or treated as a nuisance), and $\gamma$ and $\tau$ are known functions. The $n$-dimensional vectors of fixed input values for the $p$ explanatory variables are denoted by $\mathbf{x}_1,\dots,\mathbf{x}_p$. We assume that the input vectors influence the density above only via a linear function, the linear predictor,
$$
\eta_i=\beta_0+\beta_1x_{i1}+\dots+\beta_px_{ip}
$$
upon which $\theta_i$ depends. Since it can be shown that $\theta=(\gamma')^{-1}(\mu)$, this dependency is established by connecting the linear predictor $\eta$ and $\theta$ via the mean. More specifically, the mean $\mu$ is seen as an invertible and smooth function of the linear predictor, i.e.
$$
g(\mu)=\eta\ \textrm{or}\ \mu=g^{-1}(\eta)
$$
Now to answer your question:
The function $g(\cdot)$ is called the link function. If the function connects $\mu$, $\eta$ and $\theta$ such that $\eta \equiv\theta$, then this link is called canonical and has the form $g=(\gamma')^{-1}$.
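As a quick worked example, take the Bernoulli (logistic regression) case with $\phi=1$ and $\gamma(\theta)=\log(1+e^{\theta})$:
$$
\mu=\gamma'(\theta)=\frac{e^{\theta}}{1+e^{\theta}}, \qquad g(\mu)=(\gamma')^{-1}(\mu)=\log\frac{\mu}{1-\mu},
$$
so the logit is the canonical link, whereas, say, the probit link $g(\mu)=\Phi^{-1}(\mu)$ is a perfectly valid link but not the canonical one.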
Beyond the definition, there are a number of desirable statistical properties of using the canonical link: the sufficient statistic is $X'y$, with components $\sum_i x_{ij} y_i$ for $j = 1, \dots, p$; the Newton method and Fisher scoring for finding the ML estimator coincide; these links simplify the derivation of the MLE; they ensure that some properties of linear regression (e.g., that the sum of the residuals is 0) carry over; and they ensure that $\mu$ stays within the range of the outcome variable.
Hence canonical links tend to be used by default. Note, however, that there is no a priori reason why the effects in the model should be additive on the scale given by this or any other link.
2,207 | What is the difference between a "link function" and a "canonical link function" for GLM | Here is a little diagram inspired by MIT's 18.650 class, which I find quite useful as it helps visualize the relationships between these functions. I have used the same notation as in @momo's post:
$\gamma(\theta)$ is the cumulant function (the log-partition function)
$g(\mu)$ is the link function
So the link function $g$ relates the linear predictor to the mean and is required to be monotone increasing, continuously differentiable and invertible.
The diagram allows to easily go from one direction to the other, for example:
$$ \eta = g \left( \gamma'(\theta)\right)$$
$$ \theta = \gamma'^{-1}\left( g^{-1}(\eta)\right)$$
Canonical link function
Another way of seeing what Momo has described rigorously is that when $g$ is the canonical link function, the function composition
$$\gamma'^{-1} \circ g^{-1}= \left( g \circ \gamma' \right)^{-1} = I$$ is the identity and so we get
$$\theta = \eta $$
2,208 | What is the difference between a "link function" and a "canonical link function" for GLM | gung quoted a good explanation: the canonical link possesses special theoretical properties of minimal sufficiency. This means that you can define a conditional logit model (which economists call a fixed effect model) by conditioning on the number of outcomes, but you cannot define a conditional probit model, because there is no sufficient statistic to use with the probit link.
2,209 | What is the difference between a "link function" and a "canonical link function" for GLM | The answers above have already covered what I want to say. Just to clarify a few points as a researcher of machine learning:
The link function is nothing but the inverse of the activation function. For example, the logit is the inverse of the sigmoid, and the probit is the inverse of the cumulative distribution function of the Gaussian (see the short numerical check after this list).
If we take the parameter of the generalized linear model to only depend on $w^T x$, with $w$ being the weight vector and $x$ as the input, then the link function is called canonical.
The discussion above has nothing to do with the exponential family, but a nice discussion can be found in Christopher Bishop's PRML book, Chapter 4.3.6.
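As a quick numerical check of the first point (assuming SciPy is available; this snippet is not part of the original answer):
import numpy as np
from scipy.special import expit, logit  # sigmoid activation and its inverse, the logit link

eta = np.linspace(-5.0, 5.0, 11)        # values of the linear predictor w^T x
mu = expit(eta)                         # activation output: mean of the Bernoulli response
print(np.allclose(logit(mu), eta))      # True: the link undoes the activation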
2,210 | Shape of confidence interval for predicted values in linear regression | I'll discuss it in intuitive terms.
Both confidence intervals and prediction intervals in regression take account of the fact that the intercept and slope are uncertain - you estimate the values from the data, but the population values may be different (if you took a new sample, you'd get different estimated values).
A regression line will pass through $(\bar x, \bar y)$, and it's best to center the discussion about changes to the fit around that point - that is to think about the line $y= a + b(x-\bar x)$ (in this formulation, $\hat a = \bar y$).
If the line went through that $(\bar x, \bar y)$ point, but the slope were a little higher or lower (i.e. if the height of the line at the mean was fixed but the slope was a little different), what would that look like?
You'd see that the new line would move further away from the current line near the ends than near the middle, making a kind of slanted X that crossed at the mean (as each of the purple lines below do with respect to the red line; the purple lines represent the estimated slope $\pm$ two standard errors of the slope).
If you drew a collection of such lines with the slope varying a little from its estimate, you'd see the distribution of predicted values near the ends 'fan out' (imagine the region between the two purple lines shaded in grey, for example, because we sampled again and drew many such slopes near the estimated one; we can get a sense of this by bootstrapping a line through the point ($\bar{x},\bar{y}$)). Here's an example using 2000 resamples with a parametric bootstrap:
If instead you take account of the uncertainty in the constant (making the line pass close to but not quite through $(\bar x, \bar y)$), that moves the line up and down, so intervals for the mean at any $x$ will sit above and below the fitted line.
(Here the purple lines are $\pm$ two standard errors of the constant term either side of the estimated line).
When you do both at once (the line may be up or down a tiny bit, and the slope may be slightly steeper or shallower), then you get some amount of spread at the mean, $\bar x$, because of the uncertainty in the constant, and you get some additional fanning out due to the slope's uncertainty, between them producing the characteristic hyperbolic shape of your plots.
That's the intuition.
Now, if you like, we can consider a little algebra (but it's not essential):
It's actually the square root of the sum of the squares of those two effects - you can see it in the confidence interval's formula. Let's build up the pieces:
The $a$ standard error with $b$ known is $\sigma /\sqrt{n}$ (remember $a$ here is the expected value of $y$ at the mean of $x$, not the usual intercept; it's just a standard error of a mean). That's the standard error of the line's position at the mean ($\bar x$).
The $b$ standard error with $a$ known is $\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$. The effect of uncertainty in slope at some value $x^*$ is multiplied by how far you are from the mean ($x^*-\bar x$) (because the change in level is the change in slope times the distance you move), giving $(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$.
Now the overall effect is just the square root of the sum of the squares of those two things (why? because variances of uncorrelated things add, and if you write your line in the $y= a + b(x-\bar x)$ form, the estimates of $a$ and $b$ are uncorrelated). So the overall standard error is the square root of the overall variance, and the variance is the sum of the variances of the components - that is, we have
$\sqrt{(\sigma /\sqrt{n})^2+ \left[(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}\right]^2 }$
A little simple manipulation gives the usual term for the standard error of the estimate of the mean value at $x^*$:
$\sigma\sqrt{\frac{1}{n}+ \frac{(x^*-\bar x)^2}{\sum_{i=1}^n (x_i-\bar{x})^2} }$
If you draw that as a function of $x^*$, you'll see it forms a curve (it looks like a smile) with a minimum at $\bar x$ that gets bigger as you move out. That's what gets added to / subtracted from the fitted line (well, a multiple of it is, in order to get a desired confidence level).
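A tiny numerical sketch of that term (with made-up x-values, purely for illustration) shows the 'smile': the standard error is smallest at $\bar x$ and grows as $x^*$ moves away from it.
import numpy as np

sigma = 1.0
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
xbar, sxx, n = x.mean(), np.sum((x - x.mean()) ** 2), len(x)

for x_star in (xbar, xbar + 1.0, xbar + 3.0):
    se = sigma * np.sqrt(1.0 / n + (x_star - xbar) ** 2 / sxx)  # SE of the fitted mean at x_star
    print(round(x_star, 1), round(se, 3))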
[With prediction intervals, there's also the variation in position due to the process variability; this adds another term that shifts the limits up and down, making a much wider spread, and because that term usually dominates the sum under the square root, the curvature is much less pronounced.]
2,211 | Shape of confidence interval for predicted values in linear regression | The accepted answer indeed brings the necessary intuition. It only misses a visualization of combining both the linear and angular uncertainties, which refers very nicely back to the plots in the question. So here it goes. Let's call a' and b' the uncertainties of a and b, respectively, quantities commonly returned by any popular statistics package. Then we have, apart from the best fit a*x + b, four possible lines to draw (in this case of one covariate x):
(a+a')*x + b+b'
(a-a')*x + b-b'
(a+a')*x + b-b'
(a-a')*x + b+b'
These are the four colored lines in the graph below. The thick black line in the middle represents the best fit without uncertainties. So to draw the "hyperbolic" shadings, one should take the maximum and minimum values of these four lines combined, which are in fact four line segments, with no curves there (I wonder how exactly these fancy plots draw the curving; it doesn't seem very accurate to me).
I hope this adds something to the already nice answer from @Glen_b.
2,212 | Shape of confidence interval for predicted values in linear regression | An even simpler answer is to look at the formula for the confidence interval around a certain point, x:
$$
\hat{y}_{ci}(x) = \hat{y} \pm t_{n-2, \alpha/2} \sqrt{\frac{\Sigma_{i=1}^n (y_i-\hat{y}_i)^2}{n-2}} \sqrt{\frac{1}{n} + \frac{(x-\bar{x})^2}{\Sigma_{i=1}^n (x_i-\bar{x})^2}}
$$
We can see that the second square-root term contains $(x-\bar{x})^2$. Therefore, the farther $x$ is from the mean of the observed x-values, the larger this quantity will be.
2,213 | How to efficiently manage a statistical analysis project? | I am compiling a quick series of guidelines I found on SO (as suggested by @Shane), Biostar (hereafter, BS), and this SE. I tried my best to acknowledge ownership for each item, and to select the first or most highly upvoted answers. I also added things of my own, and flagged items that are specific to the [R] environment.
Data management
Create a project structure for keeping all things at the right place (data, code, figures, etc., giovanni /BS)
Never modify raw data files (ideally, they should be read-only), copy/rename to new ones when making transformations, cleaning, etc.
Check data consistency (whuber /SE)
Manage script dependencies and data flow with a build automation tool, like GNU make (Karl Broman/Zachary Jones)
Coding
Organize source code in logical units or building blocks (Josh Reich/hadley/ars /SO; giovanni/Khader Shameer /BS)
Separate source code from editing stuff, especially for large projects -- partly overlapping with the previous item and with reporting
Document everything, with e.g. [R]oxygen (Shane /SO) or consistent self-annotation in the source file -- see also the good discussion on Medstats, "Documenting analyses and data edits"
[R] Custom functions can be put in a dedicated file (that can be sourced when necessary), in a new environment (so as to avoid populating the top-level namespace, Brendan OConnor /SO), or a package (Dirk Eddelbuettel/Shane /SO)
Analysis
Don't forget to set/record the seed you used when calling RNG or stochastic algorithms (e.g. k-means)
For Monte Carlo studies, it may be interesting to store specs/parameters in a separate file (sumatra may be a good candidate, giovanni /BS)
Don't limit yourself to one plot per variable, use multivariate (Trellis) displays and interactive visualization tools (e.g. GGobi)
Versioning
Use some kind of revision control for easy tracking/export, e.g. Git (Sharpie/VonC/JD Long /SO) -- this follows from nice questions asked by @Jeromy and @Tal
Backup everything, on a regular basis (Sharpie/JD Long /SO)
Keep a log of your ideas, or rely on an issue tracker, like ditz (giovanni /BS) -- partly redundant with the previous item since it is available in Git
Editing/Reporting
[R] Sweave (Matt Parker /SO) or the more up-to-date knitr
[R] Brew (Shane /SO)
[R] R2HTML or ascii
As a side note, Hadley Wickham offers a comprehensive overview of R project management, including reproducible exemplification and a unified philosophy of data.
Finally, in his R-oriented Workflow of statistical data analysis, Oliver Kirchkamp offers a very detailed overview of why adopting and obeying a specific workflow will help statisticians collaborate with each other, while ensuring data integrity and reproducibility of results. It further includes some discussion of using a weaving and version control system. Stata users might find J. Scott Long's The Workflow of Data Analysis Using Stata useful too.
2,214 | How to efficiently manage a statistical analysis project? | This doesn't specifically provide an answer, but you may want to look at these related stackoverflow questions:
"Workflow for statistical analysis and report writing"
"Organizing R Source Code"
"How to organize large R programs?"
"R and version control for the solo data analyst"
"How does software development compare with statistical programming/analysis ?"
"How do you combine “Revision Control” with “WorkFlow” for R?"
You may also be interested in John Myles White's recent project to create a statistical project template.
"Workflow for statistical analysis and report writing"
"Organizing R Source Code"
"How | How to efficiently manage a statistical analysis project?
This doesn't specifically provide an answer, but you may want to look at these related stackoverflow questions:
"Workflow for statistical analysis and report writing"
"Organizing R Source Code"
"How to organize large R programs?"
"R and version control for the solo data analyst"
"How does software development compare with statistical programming/analysis ?"
"How do you combine “Revision Control” with “WorkFlow” for R?"
You may also be interested in John Myles White's recent project to create a statistical project template. | How to efficiently manage a statistical analysis project?
This doesn't specifically provide an answer, but you may want to look at these related stackoverflow questions:
"Workflow for statistical analysis and report writing"
"Organizing R Source Code"
"How |
2,215 | How to efficiently manage a statistical analysis project? | This overlaps with Shane's answer, but in my view there are two main pillars:
Reproducibility; not only because you won't end up with results that are made "somehow", but also because you will be able to rerun the analysis faster (on other data or with slightly changed parameters) and have more time to think about the results. For huge data, you can first test your ideas on some small "playset" and then easily extend them to the whole data.
Good documentation; commented scripts under version control, some research journal, even a ticket system for more complex projects. This improves reproducibility, makes error tracking easier and makes writing final reports trivial.
2,216 | How to efficiently manage a statistical analysis project? | van Belle's Statistical Rules of Thumb is the source for the rules of successful statistical projects.
2,217 | How to efficiently manage a statistical analysis project? | Just my 2 cents. I've found Notepad++ useful for this. I can maintain separate scripts (program control, data formatting, etc.) and a .pad file for each project. The .pad file calls all the scripts associated with that project.
2,218 | How to efficiently manage a statistical analysis project? | While the other answers are great, I would add another sentiment: avoid using SPSS. I used SPSS for my master's thesis and now use it at my regular job in market research.
While working with SPSS, it was incredibly hard to develop organized statistical code, because SPSS is bad at handling multiple files (sure, you can handle multiple files, but it's not as painless as in R) and because you cannot store datasets in a variable - you have to use "dataset activate x" code, which can be a total pain. Also, the syntax is clunky and encourages shorthands, which make code even more unreadable.
2,219 | How to efficiently manage a statistical analysis project? | Jupyter Notebooks, which work with R/Python/MATLAB/etc., remove the hassle of remembering which script generates a certain figure. This post describes a tidy way of keeping the code and the figure right beside each other. Keeping all figures for a paper or thesis chapter in a single notebook makes the associated code very easy to find.
Even better, in fact, because you can scroll through, say, a dozen figures to find the one you want. The code is kept hidden until it is needed.
2,220 | Feature selection for "final" model when performing cross-validation in machine learning | Whether you use LOO or K-fold CV, you'll end up with different features, since the cross-validation iteration must be the outermost loop, as you said. You can think of some kind of voting scheme which would rate the n vectors of features you got from your LOO-CV (I can't remember the paper, but it is worth checking the work of Harald Binder or Antoine Cornuéjols). In the absence of a new test sample, what is usually done is to re-apply the ML algorithm to the whole sample once you have found its optimal cross-validated parameters. But proceeding this way, you cannot ensure that there is no overfitting (since the sample was already used for model optimization).
Or, alternatively, you can use embedded methods which provide you with a feature ranking through a measure of variable importance, e.g. as in Random Forests (RF); a minimal code sketch is given after the references below. As cross-validation is built into RFs (through the out-of-bag error), you don't have to worry about the $n\ll p$ case or the curse of dimensionality. Here are nice papers on their applications in gene expression studies:
Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer.
Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517.
Díaz-Uriarte, R., Alvarez de Andrés, S. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7:3.
Diaz-Uriarte, R. (2007). GeneSrF and varSelRF: a web-based tool and R package for gene selection and classification using random forest. BMC Bioinformatics, 8: 328
Since you are talking about SVMs, you can look into penalized SVMs.
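For the Random Forest route mentioned above, a minimal scikit-learn sketch of ranking features by variable importance (synthetic data standing in for an $n\ll p$ expression matrix; all names and settings are illustrative):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 80 samples, 500 features, only 10 of them informative
X, y = make_classification(n_samples=80, n_features=500, n_informative=10, random_state=0)
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # features ordered from most to least important
print(rf.oob_score_, ranking[:10])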
Whether you use LOO or K-fold CV, you'll end up with different features since the cross-validation iteration must be the most outer loop, as you said. You can think of some kind of voting scheme which would rate the n-vectors of features you got from your LOO-CV (can't remember the paper but it is worth checking the work of Harald Binder or Antoine Cornuéjols). In the absence of a new test sample, what is usually done is to re-apply the ML algorithm to the whole sample once you have found its optimal cross-validated parameters. But proceeding this way, you cannot ensure that there is no overfitting (since the sample was already used for model optimization).
Or, alternatively, you can use embedded methods which provide you with features ranking through a measure of variable importance, e.g. like in Random Forests (RF). As cross-validation is included in RFs, you don't have to worry about the $n\ll p$ case or curse of dimensionality. Here are nice papers of their applications in gene expression studies:
Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer.
Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517.
Díaz-Uriarte, R., Alvarez de Andrés, S. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7:3.
Diaz-Uriarte, R. (2007). GeneSrF and varSelRF: a web-based tool and R package for gene selection and classification using random forest. BMC Bioinformatics, 8: 328
Since you are talking of SVM, you can look for penalized SVM. | Feature selection for "final" model when performing cross-validation in machine learning
Whether you use LOO or K-fold CV, you'll end up with different features since the cross-validation iteration must be the most outer loop, as you said. You can think of some kind of voting scheme which |
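A minimal sketch (simulated data, randomForest package) of the embedded ranking described in the answer above: the forest is fit once, the out-of-bag error serves as the performance estimate, and a variable-importance ranking comes for free. Note that if you afterwards keep only the top-ranked variables and refit, that extra selection step should itself be validated, as the other answers stress.
library(randomForest)
set.seed(1)
n <- 100; p <- 500                                   # an n << p situation
x <- data.frame(matrix(rnorm(n * p), n, p))
y <- factor(rbinom(n, 1, plogis(2 * x[, 1] - 2 * x[, 2])))   # only 2 informative features
rf <- randomForest(x, y, importance = TRUE, ntree = 1000)
rf$err.rate[rf$ntree, "OOB"]                         # out-of-bag error estimate
sort(importance(rf)[, "MeanDecreaseAccuracy"], decreasing = TRUE)[1:10]   # top-ranked features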
2,221 | Feature selection for "final" model when performing cross-validation in machine learning | In principle:
Make your predictions using a single model trained on the entire dataset (so there is only one set of features). The cross-validation is only used to estimate the predictive performance of the single model trained on the whole dataset. It is VITAL in using cross-validation that in each fold you repeat the entire procedure used to fit the primary model, as otherwise you can end up with a substantial optimistic bias in performance.
To see why this happens, consider a binary classification problem with 1000 binary features but only 100 cases, where the cases and features are all purely random, so there is no statistical relationship between the features and the cases whatsoever. If we train a primary model on the full dataset, we can always achieve zero error on the training set as there are more features than cases. We can even find a subset of "informative" features (that happen to be correlated by chance). If we then perform cross-validation using only those features, we will get an estimate of performance that is better than random guessing. The reason is that in each fold of the cross-validation procedure there is some information about the held-out cases used for testing as the features were chosen because they were good for predicting, all of them, including those held out. Of course the true error rate will be 0.5.
If we adopt the proper procedure, and perform feature selection in each fold, there is no longer any information about the held out cases in the choice of features used in that fold. If you use the proper procedure, in this case, you will get an error rate of about 0.5 (although it will vary a bit for different realisations of the dataset).
Good papers to read are:
Christophe Ambroise, Geoffrey J. McLachlan, "Selection bias in gene extraction on the basis of microarray gene-expression data", PNAS http://www.pnas.org/content/99/10/6562.abstract
which is highly relevant to the OP and
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR 11(Jul):2079−2107, 2010 http://jmlr.csail.mit.edu/papers/v11/cawley10a.html
which demonstrates that the same thing can easily occur in model selection (e.g. tuning the hyper-parameters of an SVM, which also need to be repeated in each iteration of the CV procedure).
In practice:
I would recommend using Bagging, and using the out-of-bag error for estimating performance. You will get a committee model using many features, but that is actually a good thing. If you only use a single model, it will be likely that you will over-fit the feature selection criterion, and end up with a model that gives poorer predictions than a model that uses a larger number of features.
Alan Miller's book on subset selection in regression (Chapman and Hall monographs on statistics and applied probability, volume 95) gives the good bit of advice (page 221) that if predictive performance is the most important thing, then don't do any feature selection, just use ridge regression instead. And that is in a book on subset selection!!! ;o) | Feature selection for "final" model when performing cross-validation in machine learning | In principle:
Make your predictions using a single model trained on the entire dataset (so there is only one set of features). The cross-validation is only used to estimate the predictive performance | Feature selection for "final" model when performing cross-validation in machine learning
In principle:
Make your predictions using a single model trained on the entire dataset (so there is only one set of features). The cross-validation is only used to estimate the predictive performance of the single model trained on the whole dataset. It is VITAL in using cross-validation that in each fold you repeat the entire procedure used to fit the primary model, as otherwise you can end up with a substantial optimistic bias in performance.
To see why this happens, consider a binary classification problem with 1000 binary features but only 100 cases, where the cases and features are all purely random, so there is no statistical relationship between the features and the cases whatsoever. If we train a primary model on the full dataset, we can always achieve zero error on the training set as there are more features than cases. We can even find a subset of "informative" features (that happen to be correlated by chance). If we then perform cross-validation using only those features, we will get an estimate of performance that is better than random guessing. The reason is that in each fold of the cross-validation procedure there is some information about the held-out cases used for testing as the features were chosen because they were good for predicting, all of them, including those held out. Of course the true error rate will be 0.5.
If we adopt the proper procedure, and perform feature selection in each fold, there is no longer any information about the held out cases in the choice of features used in that fold. If you use the proper procedure, in this case, you will get an error rate of about 0.5 (although it will vary a bit for different realisations of the dataset).
Good papers to read are:
Christophe Ambroise, Geoffrey J. McLachlan, "Selection bias in gene extraction on the basis of microarray gene-expression data", PNAS http://www.pnas.org/content/99/10/6562.abstract
which is highly relevant to the OP and
Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", JMLR 11(Jul):2079−2107, 2010 http://jmlr.csail.mit.edu/papers/v11/cawley10a.html
which demonstrates that the same thing can easily occur in model selection (e.g. tuning the hyper-parameters of an SVM, which also need to be repeated in each iteration of the CV procedure).
In practice:
I would recommend using Bagging, and using the out-of-bag error for estimating performance. You will get a committee model using many features, but that is actually a good thing. If you only use a single model, it will be likely that you will over-fit the feature selection criterion, and end up with a model that gives poorer predictions than a model that uses a larger number of features.
Alan Miller's book on subset selection in regression (Chapman and Hall monographs on statistics and applied probability, volume 95) gives the good bit of advice (page 221) that if predictive performance is the most important thing, then don't do any feature selection, just use ridge regression instead. And that is in a book on subset selection!!! ;o) | Feature selection for "final" model when performing cross-validation in machine learning
In principle:
Make your predictions using a single model trained on the entire dataset (so there is only one set of features). The cross-validation is only used to estimate the predictive performance |
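A minimal simulation sketch (my own, not taken from the papers cited in the answer above; the helper names are mine) of the selection-bias point made there, using purely random data so the true error rate is 0.5. The "naive" protocol selects features on the full data set before cross-validating, whereas the "proper" protocol repeats the selection inside every fold. Exact numbers vary with the seed, but the naive estimate is typically optimistic while the proper one sits near 0.5.
set.seed(1)
n <- 100; p <- 1000; k <- 10                          # cases, pure-noise features, features kept
X <- matrix(rbinom(n * p, 1, 0.5), n, p)
y <- rbinom(n, 1, 0.5)                                # labels are pure noise too
folds <- sample(rep(1:10, length.out = n))
select_k <- function(X, y, k)                         # rank features by |correlation| with y
  order(abs(cor(X, y)), decreasing = TRUE)[1:k]
cv_error <- function(X, y, folds, k = 10, preselected = NULL) {
  errs <- sapply(sort(unique(folds)), function(f) {
    tr   <- folds != f
    keep <- if (is.null(preselected)) select_k(X[tr, ], y[tr], k) else preselected
    fit  <- glm(y[tr] ~ ., family = binomial,
                data = data.frame(X[tr, keep, drop = FALSE]))
    pred <- predict(fit, newdata = data.frame(X[!tr, keep, drop = FALSE]),
                    type = "response") > 0.5
    mean(pred != (y[!tr] == 1))
  })
  mean(errs)
}
cv_error(X, y, folds, preselected = select_k(X, y, k))  # naive: selection saw the test folds, looks better than chance
cv_error(X, y, folds)                                   # proper: selection redone in each fold, close to 0.5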
2,222 | Feature selection for "final" model when performing cross-validation in machine learning | To add to chl: When using support vector machines, a highly recommended penalization method is the elastic net. This method will shrink coefficients towards zero, and in theory retains the most stable coefficients in the model. Initially it was used in a regression framework, but it is easily extended for use with support vector machines.
The original publication : Zou and Hastie (2005) : Regularization and variable selection via the elastic net. J.R.Statist.Soc. B, 67-2,pp.301-320
Elastic net for SVM : Zhu & Zou (2007): Variable Selection for the Support Vector Machine : Trends in Neural Computation, chapter 2 (Editors: Chen and Wang)
improvements on the elastic net Jun-Tao and Ying-Min(2010): An Improved Elastic Net for Cancer Classification and Gene Selection : Acta Automatica Sinica, 36-7,pp.976-981 | Feature selection for "final" model when performing cross-validation in machine learning | To add to chl: When using support vector machines, a highly recommended penalization method is the elastic net. This method will shrink coefficients towards zero, and in theory retains the most stable | Feature selection for "final" model when performing cross-validation in machine learning
To add to chl: When using support vector machines, a highly recommended penalization method is the elastic net. This method will shrink coefficients towards zero, and in theory retains the most stable coefficients in the model. Initially it was used in a regression framework, but it is easily extended for use with support vector machines.
The original publication : Zou and Hastie (2005) : Regularization and variable selection via the elastic net. J.R.Statist.Soc. B, 67-2,pp.301-320
Elastic net for SVM : Zhu & Zou (2007): Variable Selection for the Support Vector Machine : Trends in Neural Computation, chapter 2 (Editors: Chen and Wang)
improvements on the elastic net Jun-Tao and Ying-Min(2010): An Improved Elastic Net for Cancer Classification and Gene Selection : Acta Automatica Sinica, 36-7,pp.976-981 | Feature selection for "final" model when performing cross-validation in machine learning
To add to chl: When using support vector machines, a highly recommended penalization method is the elastic net. This method will shrink coefficients towards zero, and in theory retains the most stable |
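A minimal sketch of the elastic-net idea using the glmnet package (a penalized GLM rather than the penalized SVMs of the papers cited above, but the shrinkage-plus-selection behaviour is the same): alpha = 0.5 mixes the ridge and lasso penalties, and the coefficients that survive the penalty constitute the selected features.
library(glmnet)
set.seed(1)
n <- 100; p <- 200
x <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))                  # only 2 informative features
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)  # cross-validate the penalty strength
beta  <- as.vector(coef(cvfit, s = "lambda.1se"))
rownames(coef(cvfit, s = "lambda.1se"))[beta != 0]          # intercept plus retained features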
2,223 | Feature selection for "final" model when performing cross-validation in machine learning | As step 6 (or 0) you run the feature detection algorithm on the entire data set.
The logic is the following: you have to think of cross-validation as a method for finding out the properties of the procedure you are using to select the features. It answers the question: "if I have some data and perform this procedure, then what is the error rate for classifying a new sample?". Once you know the answer, you can use the procedure (feature selection + classification rule development) on the entire data set. People like leave-one-out because the predictive properties usually depend on the sample size, and $n-1$ is usually close enough to $n$ not to matter much. | Feature selection for "final" model when performing cross-validation in machine learning | As step 6 (or 0) you run the feature detection algorithm on the entire data set.
The logic is the following: you have to think of cross-validation as a method for finding out the properties of the pro | Feature selection for "final" model when performing cross-validation in machine learning
As step 6 (or 0) you run the feature detection algorithm on the entire data set.
The logic is the following: you have to think of cross-validation as a method for finding out the properties of the procedure you are using to select the features. It answers the question: "if I have some data and perform this procedure, then what is the error rate for classifying a new sample?". Once you know the answer, you can use the procedure (feature selection + classification rule development) on the entire data set. People like leave-one-out because the predictive properties usually depend on the sample size, and $n-1$ is usually close enough to $n$ not to matter much. | Feature selection for "final" model when performing cross-validation in machine learning
As step 6 (or 0) you run the feature detection algorithm on the entire data set.
The logic is the following: you have to think of cross-validation as a method for finding out the properties of the pro |
2,224 | Feature selection for "final" model when performing cross-validation in machine learning | This is how I select features. Suppose based on certain knowledge, there are 2 models to be compared. Model A uses features no.1 to no. 10. Model B uses no.11 to no. 20. I will apply LOO CV to model A to get its out-of-sample performance. Do the same to model B and then compare them. | Feature selection for "final" model when performing cross-validation in machine learning | This is how I select features. Suppose based on certain knowledge, there are 2 models to be compared. Model A uses features no.1 to no. 10. Model B uses no.11 to no. 20. I will apply LOO CV to model A | Feature selection for "final" model when performing cross-validation in machine learning
This is how I select features. Suppose based on certain knowledge, there are 2 models to be compared. Model A uses features no.1 to no. 10. Model B uses no.11 to no. 20. I will apply LOO CV to model A to get its out-of-sample performance. Do the same to model B and then compare them. | Feature selection for "final" model when performing cross-validation in machine learning
This is how I select features. Suppose based on certain knowledge, there are 2 models to be compared. Model A uses features no.1 to no. 10. Model B uses no.11 to no. 20. I will apply LOO CV to model A |
2,225 | Feature selection for "final" model when performing cross-validation in machine learning | I'm not sure about classification problems, but in the case of feature selection for regression problems, Jun Shao showed that Leave-One-Out CV is asymptotically inconsistent, i.e. the probability of selecting the proper subset of features does not converge to 1 as the number of samples increases. From a practical point of view, Shao recommends a Monte-Carlo cross-validation, or leave-many-out procedure. | Feature selection for "final" model when performing cross-validation in machine learning | I'm not sure about classification problems, but in the case of feature selection for regression problems, Jun Shao showed that Leave-One-Out CV is asymptotically inconsistent, i.e. the probability of | Feature selection for "final" model when performing cross-validation in machine learning
I'm not sure about classification problems, but in the case of feature selection for regression problems, Jun Shao showed that Leave-One-Out CV is asymptotically inconsistent, i.e. the probability of selecting the proper subset of features does not converge to 1 as the number of samples increases. From a practical point of view, Shao recommends a Monte-Carlo cross-validation, or leave-many-out procedure. | Feature selection for "final" model when performing cross-validation in machine learning
I'm not sure about classification problems, but in the case of feature selection for regression problems, Jun Shao showed that Leave-One-Out CV is asymptotically inconsistent, i.e. the probability of |
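For reference, the leave-many-out idea recommended above is easy to sketch: repeatedly split the data at random into a large training part and a non-trivial test part, rerun the whole selection-plus-fitting procedure on the training part each time, and average the test errors. The helper below assumes a user-supplied fit_and_score(train_idx, test_idx) routine (a hypothetical name) that does exactly that and returns a test error; only the repeated random splitting is shown.
mc_cv <- function(n, fit_and_score, B = 200, test_frac = 0.3) {
  errs <- replicate(B, {
    test  <- sample(n, size = round(test_frac * n))
    train <- setdiff(seq_len(n), test)
    fit_and_score(train, test)     # must repeat feature selection on `train` only
  })
  mean(errs)
}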
2,226 | What do the residuals in a logistic regression mean? | The easiest residuals to understand are the deviance residuals as when squared these sum to -2 times the log-likelihood. In its simplest terms logistic regression can be understood in terms of fitting the function $p = \text{logit}^{-1}(X\beta)$ for known $X$ in such a way as to minimise the total deviance, which is the sum of squared deviance residuals of all the data points.
The (squared) deviance of each data point is equal to (-2 times) the logarithm of the difference between its predicted probability $\text{logit}^{-1}(X\beta)$ and the complement of its actual value (1 for a control; a 0 for a case) in absolute terms. A perfect fit of a point (which never occurs) gives a deviance of zero as log(1) is zero. A poorly fitting point has a large residual deviance as -2 times the log of a very small value is a large number.
Doing logistic regression is akin to finding a beta value such that the sum of squared deviance residuals is minimised.
This can be illustrated with a plot, but I don't know how to upload one. | What do the residuals in a logistic regression mean? | The easiest residuals to understand are the deviance residuals as when squared these sum to -2 times the log-likelihood. In its simplest terms logistic regression can be understood in terms of fitting | What do the residuals in a logistic regression mean?
The easiest residuals to understand are the deviance residuals as when squared these sum to -2 times the log-likelihood. In its simplest terms logistic regression can be understood in terms of fitting the function $p = \text{logit}^{-1}(X\beta)$ for known $X$ in such a way as to minimise the total deviance, which is the sum of squared deviance residuals of all the data points.
The (squared) deviance of each data point is equal to (-2 times) the logarithm of the difference between its predicted probability $\text{logit}^{-1}(X\beta)$ and the complement of its actual value (1 for a control; a 0 for a case) in absolute terms. A perfect fit of a point (which never occurs) gives a deviance of zero as log(1) is zero. A poorly fitting point has a large residual deviance as -2 times the log of a very small value is a large number.
Doing logistic regression is akin to finding a beta value such that the sum of squared deviance residuals is minimised.
This can be illustrated with a plot, but I don't know how to upload one. | What do the residuals in a logistic regression mean?
The easiest residuals to understand are the deviance residuals as when squared these sum to -2 times the log-likelihood. In its simplest terms logistic regression can be understood in terms of fitting |
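For illustration (my own sketch, not the answer author's plot), this is roughly what such a picture looks like in R: fit a simple logistic regression to simulated data and plot the deviance residuals against the fitted probabilities. Their squares sum to the residual deviance, which for ungrouped binary data equals -2 times the log-likelihood, as stated above.
set.seed(42)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(1.5 * x))
fit <- glm(y ~ x, family = binomial)
dev.res <- resid(fit, type = "deviance")
all.equal(sum(dev.res^2), -2 * as.numeric(logLik(fit)))   # TRUE for binary data
plot(fitted(fit), dev.res,
     xlab = "Fitted probability", ylab = "Deviance residual",
     main = "Deviance residuals vs fitted probabilities")
abline(h = 0, lty = 2)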
2,227 | What do the residuals in a logistic regression mean? | Response:
$$y_i - \hat\mu_i$$
response residuals are inadequate for assessing a fitted glm, because GLMs are based on distributions where (in general) the variance depends on the mean.
Pearson:
The most direct way to handle the non-constant variance is to divide
it out:
$$ \frac{y_i - \hat\mu_i}{\sqrt{V(\mu_i)|_{\hat\mu_i}}}$$
where $V()$ is the (GLM) variance function ($Var(y_i) = a(\phi)*V(\mu_i)$)
Under "Small dispersion asymptotics" conditions, the Pearson residuals have an approximate normal distribution.
Deviance: $$sign(y_i-\hat\mu_i)*\sqrt{d_i}$$ where $d_i$ is the unit deviance, i.e. $d_i = 2(t(y_i,y_i)-t(y_i,\hat\mu_i))$
The deviance statistic (sum of squared unit-deviances) has an approximate chi-square distribution (when the saddlepoint approximation applies and under "Small dispersion asymptotics" conditions). Under these same conditions, the deviance residuals have an approximate normal distribution.
Working:
$$z_i - \eta_i $$
where $z_i$ are the working responses $\eta_i + \frac{d\eta_i}{d\mu_i}(y_i-\hat\mu_i)$ and $\eta_i$ is the linear predictor. Meaning you get that the residual is $\frac{d\eta_i}{d\mu_i}(y_i-\hat\mu_i)$.
The model coefficients are fitted using Fisher scoring algorithm / Iterative Reweighted Least Square (IRLS). And it can be shown that each iteration of this algorithm is equivalent to doing ordinary least-squares on the working responses as defined here.
To test the link function - plotting the linear predictor against the working responses should come out linear if the right link function was used.
Partial:
$$z_i - \eta_i + X^*\beta$$
where $X^*$ is the centered $X$. Partial residuals can be used to determine if a covariate/predictor is on an inappropriate scale.
Quantile:
$$\Phi^{-1}(F(y_i))$$
Where $F(y_i)$ is the CDF of $y_i$, and $\Phi^{-1}$ is the quantile function of standard normal (inverse CDF). For discrete $y_i$'s you take $u \sim Unif(F(y_i-1), F(y_i))$ and $\Phi^{-1}(u)$.
Here is an example code to calculate these residuals:
Y = c(0,0,0,0,1,1,1,1,1)
x1 = c(1,2,3,1,2,2,3,3,3)
x2 = c(1,0,0,1,0,0,0,0,0)
fit = glm(Y ~ x1 + x2, family = 'binomial')
lp = predict(fit)
mu = exp(lp)/(1+exp(lp))
# manually calculating the 1st response residual
resid(fit, type="response")[1]
Y[1] - mu[1]
# manually calculating the 1st pearson residual
resid(fit, type="pearson")[1]
(Y[1]-mu[1]) / sqrt(mu[1]*(1-mu[1]))
# manually calculating the 1st deviance residual
resid(fit, type="deviance")[1]
sqrt(-2*log(1-mu[1]))*sign(Y[1]-mu[1])
# manually calculating the 1st working residual
resid(fit, type="working")[1]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1]))
# manually calculating the 1st partial residual
resid(fit, type="partial")[1,1]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1])) + fit$coefficients[2]*(x1[1] - mean(x1))
resid(fit, type="partial")[1,2]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1])) + fit$coefficients[3]*(x2[1] - mean(x2))
# manually calculating the 1st quantile residual
library(statmod)
qresid(fit)[1] # results are random (uniformly), so won't come the same
a = pbinom(Y[1]-1, 1, mu[1])
b = pbinom(Y[1], 1, mu[1])
qnorm(runif(1, a, b)) # results are random (uniformly), so won't come the same
n = 10000
mean(replicate(n, qresid(fit)[1]))
mean(qnorm(runif(1000, a, b))) # should be close
For more information I suggest you check this book: Generalized Linear Models With Examples in R:
working response - section 6.3, working residuals - section 6.7, response residuals - section 8.3.1, pearson residuals - section 8.3.2, deviance residuals - section 8.3.3, partial residuals - section 8.7.3
So,
will sum of squared residuals provide a meaningful measure of model fit ?
For Deviance/Pearson - I think so.
But more generally inspecting the residuals can be a bit tricky. In many cases neither the Pearson nor deviance residuals can be guaranteed to have distributions close to normal, especially for discrete distributions. "Small dispersion asymptotics" need to hold (see section 7.5 in the book), so some rule of thumbs are used. For Binomial distributions, and the deviance residual $\min(n_i y_i) > 3$ as well as $\min(n_i(1-y_i)) > 3$. There are also the Quantile Residuals that can be used when these conditions are not met. Check section 8.3.4 of the book. | What do the residuals in a logistic regression mean? | Response:
$$y_i - \hat\mu_i$$
response residuals are inadequate for assessing a fitted glm, because GLMs are based on distributions where (in general) the variance depends on the mean.
Pearson:
The mo | What do the residuals in a logistic regression mean?
Response:
$$y_i - \hat\mu_i$$
response residuals are inadequate for assessing a fitted glm, because GLMs are based on distributions where (in general) the variance depends on the mean.
Pearson:
The most direct way to handle the non-constant variance is to divide
it out:
$$ \frac{y_i - \hat\mu_i}{\sqrt{V(\mu_i)|_{\hat\mu_i}}}$$
where $V()$ is the (GLM) variance function ($Var(y_i) = a(\phi)*V(\mu_i)$)
Under "Small dispersion asymptotics" conditions, the Pearson residuals have an approximate normal distribution.
Deviance: $$sign(y_i-\hat\mu_i)*\sqrt{d_i}$$ where $d_i$ is the unit deviance, i.e. $d_i = 2(t(y_i,y_i)-t(y_i,\hat\mu_i))$
The deviance statistic (sum of squared unit-deviances) has an approximate chi-square distribution (when the saddlepoint approximation applies and under "Small dispersion asymptotics" conditions). Under these same conditions, the deviance residuals have an approximate normal distribution.
Working:
$$z_i - \eta_i $$
where $z_i$ are the working responses $\eta_i + \frac{d\eta_i}{d\mu_i}(y_i-\hat\mu_i)$ and $\eta_i$ is the linear predictor. Meaning you get that the residual is $\frac{d\eta_i}{d\mu_i}(y_i-\hat\mu_i)$.
The model coefficients are fitted using Fisher scoring algorithm / Iterative Reweighted Least Square (IRLS). And it can be shown that each iteration of this algorithm is equivalent to doing ordinary least-squares on the working responses as defined here.
To test the link function - plotting the linear predictor against the working responses should come out linear if the right link function was used.
Partial:
$$z_i - \eta_i + X^*\beta$$
where $X^*$ is the centered $X$. Partial residuals can be used to determine if a covariate/predictor is on an inappropriate scale.
Quantile:
$$\Phi^{-1}(F(y_i))$$
Where $F(y_i)$ is the CDF of $y_i$, and $\Phi^{-1}$ is the quantile function of standard normal (inverse CDF). For discrete $y_i$'s you take $u \sim Unif(F(y_i-1), F(y_i))$ and $\Phi^{-1}(u)$.
Here is an example code to calculate these residuals:
Y = c(0,0,0,0,1,1,1,1,1)
x1 = c(1,2,3,1,2,2,3,3,3)
x2 = c(1,0,0,1,0,0,0,0,0)
fit = glm(Y ~ x1 + x2, family = 'binomial')
lp = predict(fit)
mu = exp(lp)/(1+exp(lp))
# manually calculating the 1st response residual
resid(fit, type="response")[1]
Y[1] - mu[1]
# manually calculating the 1st pearson residual
resid(fit, type="pearson")[1]
(Y[1]-mu[1]) / sqrt(mu[1]*(1-mu[1]))
# manually calculating the 1st deviance residual
resid(fit, type="deviance")[1]
sqrt(-2*log(1-mu[1]))*sign(Y[1]-mu[1])
# manually calculating the 1st working residual
resid(fit, type="working")[1]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1]))
# manually calculating the 1st partial residual
resid(fit, type="partial")[1,1]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1])) + fit$coefficients[2]*(x1[1] - mean(x1))
resid(fit, type="partial")[1,2]
(Y[1]-mu[1]) / (mu[1]*(1-mu[1])) + fit$coefficients[3]*(x2[1] - mean(x2))
# manually calculating the 1st quantile residual
library(statmod)
qresid(fit)[1] # results are random (uniformly), so won't come the same
a = pbinom(Y[1]-1, 1, mu[1])
b = pbinom(Y[1], 1, mu[1])
qnorm(runif(1, a, b)) # results are random (uniformly), so won't come the same
n = 10000
mean(replicate(n, qresid(fit)[1]))
mean(qnorm(runif(1000, a, b))) # should be close
For more information I suggest you check this book: Generalized Linear Models With Examples in R:
working response - section 6.3, working residuals - section 6.7, response residuals - section 8.3.1, pearson residuals - section 8.3.2, deviance residuals - section 8.3.3, partial residuals - section 8.7.3
So,
will sum of squared residuals provide a meaningful measure of model fit ?
For Deviance/Pearson - I think so.
But more generally inspecting the residuals can be a bit tricky. In many cases neither the Pearson nor deviance residuals can be guaranteed to have distributions close to normal, especially for discrete distributions. "Small dispersion asymptotics" need to hold (see section 7.5 in the book), so some rule of thumbs are used. For Binomial distributions, and the deviance residual $\min(n_i y_i) > 3$ as well as $\min(n_i(1-y_i)) > 3$. There are also the Quantile Residuals that can be used when these conditions are not met. Check section 8.3.4 of the book. | What do the residuals in a logistic regression mean?
Response:
$$y_i - \hat\mu_i$$
response residuals are inadequate for assessing a fitted glm, because GLMs are based on distributions where (in general) the variance depends on the mean.
Pearson:
The mo |
2,228 | What do the residuals in a logistic regression mean? | On Pearsons residuals,
The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability. Therefore standardizing the residuals.
For large samples the standardized residuals should have a normal distribution.
From Menard, Scott (2002). Applied logistic regression analysis, 2nd Edition. Thousand Oaks, CA: Sage Publications. Series: Quantitative Applications in the Social Sciences, No. 106. First ed., 1995. See Chapter 4.4 | What do the residuals in a logistic regression mean? | On Pearsons residuals,
The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability. Therefore standa | What do the residuals in a logistic regression mean?
On Pearsons residuals,
The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability. Therefore standardizing the residuals.
For large samples the standardized residuals should have a normal distribution.
From Menard, Scott (2002). Applied logistic regression analysis, 2nd Edition. Thousand Oaks, CA: Sage Publications. Series: Quantitative Applications in the Social Sciences, No. 106. First ed., 1995. See Chapter 4.4 | What do the residuals in a logistic regression mean?
On Pearsons residuals,
The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability. Therefore standa |
2,229 | What do the residuals in a logistic regression mean? | The working residuals are the residuals in the final iteration of any iteratively weighted least squares method. I reckon that means the residuals when we think its the last iteration of our running of model. That can give rise to discussion that model running is an iterative exercise. | What do the residuals in a logistic regression mean? | The working residuals are the residuals in the final iteration of any iteratively weighted least squares method. I reckon that means the residuals when we think its the last iteration of our running o | What do the residuals in a logistic regression mean?
The working residuals are the residuals in the final iteration of any iteratively weighted least squares method. I reckon that means the residuals when we think its the last iteration of our running of model. That can give rise to discussion that model running is an iterative exercise. | What do the residuals in a logistic regression mean?
The working residuals are the residuals in the final iteration of any iteratively weighted least squares method. I reckon that means the residuals when we think its the last iteration of our running o |
2,230 | What do the residuals in a logistic regression mean? | Pearson Residuals
As tosonb1 points out, "The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability".
I just wanted to mention that Pearson residual is mostly useful with grouped data i.e, say, there are $n_i$ trials at setting i of the explanatory variables (many observations for the same value of predictors) and let $y_i$ denote the number of “successes” for $n_i$ trials. Also, let $\hat{π_i} $ denote the estimated probability of success for the logistic regression model we have fit.
Pearson residual = $e_i = \frac{y_i - n_i\hat{π_i}}{\sqrt {n_i \hat{π_i}(1 - \hat{π_i})}}$
For ungrouped binary data and often when explanatory variables are continuous, each $n_i$ = 1. Then, $y_i$ can equal only 0 or 1, and a residual can assume only two values and is usually uninformative. Plots of residuals also then have limited use,
consisting merely of two parallel lines of dots.
From, An Introduction to Categorical Data Analysis, 2nd Edition by Alan Agresti - vide chapter 5, section 5.2.4
Deviance Residuals
(I am not entirely sure about this one, please point out errors, if any)
The i-th deviance residual can be computed as square root of twice the difference between loglikelihood of the ith observation in the saturated model and loglikelihood of the ith observation in the fitted model. Saturated Model is the model that predicts each observation perfectly and for all purposes in logistic regression, loglikelihood of saturated model $= ln(1) = 0$. Finally, we add a sign '+' in front of the residual if the observed response is 1 and put '-' if the observed response is 0.
Hence, deviance residual for the ith observation,
$$d_i = (-1)^{y_i + 1}\sqrt{-2 (y_i ln(\hat{π_i}) + (1-y_i) ln(1 - \hat{π_i}))}$$ $y_i \in $ {0,1}
The sum of squares of deviance residuals add up to the residual deviance which is an indicator of model fit.
If a deviance residual is unusually large (which can be identified after plotting them) you might want to check if there was a mistake in labelling that data point. | What do the residuals in a logistic regression mean? | Pearson Residuals
As tosonb1 points out, "The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probabili | What do the residuals in a logistic regression mean?
Pearson Residuals
As tosonb1 points out, "The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probability".
I just wanted to mention that Pearson residual is mostly useful with grouped data i.e, say, there are $n_i$ trials at setting i of the explanatory variables (many observations for the same value of predictors) and let $y_i$ denote the number of “successes” for $n_i$ trials. Also, let $\hat{π_i} $ denote the estimated probability of success for the logistic regression model we have fit.
Pearson residual = $e_i = \frac{y_i - n_i\hat{π_i}}{\sqrt {n_i \hat{π_i}(1 - \hat{π_i})}}$
For ungrouped binary data and often when explanatory variables are continuous, each $n_i$ = 1. Then, $y_i$ can equal only 0 or 1, and a residual can assume only two values and is usually uninformative. Plots of residuals also then have limited use,
consisting merely of two parallel lines of dots.
From, An Introduction to Categorical Data Analysis, 2nd Edition by Alan Agresti - vide chapter 5, section 5.2.4
Deviance Residuals
(I am not entirely sure about this one, please point out errors, if any)
The i-th deviance residual can be computed as square root of twice the difference between loglikelihood of the ith observation in the saturated model and loglikelihood of the ith observation in the fitted model. Saturated Model is the model that predicts each observation perfectly and for all purposes in logistic regression, loglikelihood of saturated model $= ln(1) = 0$. Finally, we add a sign '+' in front of the residual if the observed response is 1 and put '-' if the observed response is 0.
Hence, deviance residual for the ith observation,
$$d_i = (-1)^{y_i + 1}\sqrt{-2 (y_i ln(\hat{π_i}) + (1-y_i) ln(1 - \hat{π_i}))}$$ $y_i \in $ {0,1}
The sum of squares of deviance residuals add up to the residual deviance which is an indicator of model fit.
If a deviance residual is unusually large (which can be identified after plotting them) you might want to check if there was a mistake in labelling that data point. | What do the residuals in a logistic regression mean?
Pearson Residuals
As tosonb1 points out, "The Pearson residual is the difference between the observed and estimated probabilities divided by the binomial standard deviation of the estimated probabili |
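A quick check of the deviance-residual formula given above against R (ungrouped binary data, so each $n_i = 1$): the hand-computed residuals match resid(fit, type = "deviance"), and their squares sum to the residual deviance, as claimed.
set.seed(7)
x <- rnorm(50)
y <- rbinom(50, 1, plogis(x))
fit <- glm(y ~ x, family = binomial)
pi.hat <- fitted(fit)
d <- (-1)^(y + 1) * sqrt(-2 * (y * log(pi.hat) + (1 - y) * log(1 - pi.hat)))
all.equal(unname(d), unname(resid(fit, type = "deviance")))   # TRUE
all.equal(sum(d^2), deviance(fit))                            # TRUE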
2,231 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | The analysis is complicated by the prospect that the game goes into "overtime" in order to win by a margin of at least two points. (Otherwise it would be as simple as the solution shown at https://stats.stackexchange.com/a/327015/919.) I will show how to visualize the problem and use that to break it down into readily-computed contributions to the answer. The result, although a bit messy, is manageable. A simulation bears out its correctness.
Let $p$ be your probability of winning a point. Assume all points are independent. The chance that you win a game can be broken down into (nonoverlapping) events according to how many points your opponent has at the end assuming you don't go into overtime ($0,1,\ldots, 19$) or you go into overtime. In the latter case it is (or will become) obvious that at some stage the score was 20-20.
There is a nice visualization. Let scores during the game be plotted as points $(x,y)$ where $x$ is your score and $y$ is your opponent's score. As the game unfolds, the scores move along the integer lattice in the first quadrant beginning at $(0,0)$, creating a game path. It ends the first time one of you has scored at least $21$ and has a margin of at least $2$. Such winning points form two sets of points, the "absorbing boundary" of this process, whereat the game path must terminate.
This figure shows part of the absorbing boundary (it extends infinitely up and to the right) along with the path of a game that went into overtime (with a loss for you, alas).
Let's count. The number of ways the game can end with $y$ points for your opponent is the number of distinct paths in the integer lattice of $(x,y)$ scores beginning at the initial score $(0,0)$ and ending at the penultimate score $(20,y)$. Such paths are determined by which of the $20+y$ points in the game you won. They correspond therefore to the subsets of size $20$ of the numbers $1,2,\ldots, 20+y$, and there are $\binom{20+y}{20}$ of them. Since in each such path you won $21$ points (with independent probabilities $p$ each time, counting the final point) and your opponent won $y$ points (with independent probabilities $1-p$ each time), the paths associated with $y$ account for a total chance of
$$f(y) = \binom{20+y}{20}p^{21}(1-p)^y.$$
Similarly, there are $\binom{20+20}{20}$ ways to arrive at $(20,20)$ representing the 20-20 tie. In this situation you don't have a definite win. We may compute the chance of your win by adopting a common convention: forget how many points have been scored so far and start tracking the point differential. The game is at a differential of $0$ and will end when it first reaches $+2$ or $-2$, necessarily passing through $\pm 1$ along the way. Let $g(i)$ be the chance you win when the differential is $i\in\{-1,0,1\}$.
Since your chance of winning in any situation is $p$, we have
$$\eqalign{
g(0) &= p g(1) + (1-p)g(-1), \\
g(1) &= p + (1-p)g(0),\\
g(-1) &= pg(0).
}$$
The unique solution to this system of linear equations for the vector $(g(-1),g(0),g(1))$ implies
$$g(0) = \frac{p^2}{1-2p+2p^2}.$$
This, therefore, is your chance of winning once $(20,20)$ is reached (which occurs with a chance of $\binom{20+20}{20}p^{20}(1-p)^{20}$).
Consequently your chance of winning is the sum of all these disjoint possibilities, equal to
$$\eqalign{
&\sum_{y=0}^{19}f(y) + g(0)p^{20}(1-p)^{20} \binom{20+20}{20} \\
= &\sum_{y=0}^{19}\binom{20+y}{20}p^{21}(1-p)^y + \frac{p^2}{1-2p+2p^2}p^{20}(1-p)^{20} \binom{20+20}{20}\\
= &\frac{p^{21}}{1-2p+2p^2}\left(\sum_{y=0}^{19}\binom{20+y}{20}(1-2p+2p^2)(1-p)^y + \binom{20+20}{20}p(1-p)^{20} \right).
}$$
The stuff inside the parentheses on the right is a polynomial in $p$. (It looks like its degree is $21$, but the leading terms all cancel: its degree is $20$.)
When $p=0.58$, the chance of a win is close to $0.855913992.$
You should have no trouble generalizing this analysis to games that terminate with any numbers of points. When the required margin is greater than $2$ the result gets more complicated but is just as straightforward.
Incidentally, with these chances of winning, you had a $(0.8559\ldots)^{15}\approx 9.7\%$ chance of winning the first $15$ games. That's not inconsistent with what you report, which might encourage us to continue supposing the outcomes of each point are independent. We would thereby project that you have a chance of
$$(0.8559\ldots)^{35}\approx 0.432\%$$
of winning all the remaining $35$ games, assuming they proceed according to all these assumptions. It doesn't sound like a good bet to make unless the payoff is large!
I like to check work like this with a quick simulation. Here is R code to generate tens of thousands of games in a second. It assumes the game will be over within 126 points (extremely few games need to continue that long, so this assumption has no material effect on the results).
n <- 21 # Points your opponent needs to win
m <- 21 # Points you need to win
margin <- 2 # Minimum winning margin
p <- .58 # Your chance of winning a point
n.sim <- 1e4 # Iterations in the simulation
sim <- replicate(n.sim, {
x <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE)
points.1 <- cumsum(x)
points.0 <- cumsum(1-x)
win.1 <- points.1 >= m & points.0 <= points.1-margin
win.0 <- points.0 >= n & points.1 <= points.0-margin
which.max(c(win.1, TRUE)) < which.max(c(win.0, TRUE))
})
mean(sim)
When I ran this, you won in 8,570 cases out of the 10,000 iterations. A Z-score (with approximately a Normal distribution) can be computed to test such results:
Z <- (mean(sim) - 0.85591399165186659) / (sd(sim)/sqrt(n.sim))
message(round(Z, 3)) # Should be between -3 and 3, roughly.
The value of $0.31$ in this simulation is perfectly consistent with the foregoing theoretical computation.
Appendix 1
In light of the update to the question, which lists the outcomes of the first 18 games, here are reconstructions of game paths consistent with these data. You can see that two or three of the games were perilously close to losses. (Any path ending on a light gray square is a loss for you.)
Potential uses of this figure include observing:
The paths concentrate around a slope given by the ratio 267:380 of total scores, equal approximately to 58.7%.
The scatter of the paths around that slope shows the variation expected when points are independent.
If points are made in streaks, then individual paths would tend to have long vertical and horizontal stretches.
In a longer set of similar games, expect to see paths that tend to stay within the colored range, but also expect a few to extend beyond it.
The prospect of a game or two whose path lies generally above this spread indicates the possibility that your opponent will eventually win a game, probably sooner rather than later.
Appendix 2
The code to create the figure was requested. Here it is (cleaned up to produce a slightly nicer graphic).
library(data.table)
library(ggplot2)
n <- 21 # Points your opponent needs to win
m <- 21 # Points you need to win
margin <- 2 # Minimum winning margin
p <- 0.58 # Your chance of winning a point
#
# Quick and dirty generation of a game that goes into overtime.
#
done <- FALSE
iter <- 0
iter.max <- 2000
while(!done & iter < iter.max) {
Y <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE)
Y <- data.table(You=c(0,cumsum(Y)), Opponent=c(0,cumsum(1-Y)))
Y[, Complete := (You >= m & You-Opponent >= margin) |
(Opponent >= n & Opponent-You >= margin)]
Y <- Y[1:which.max(Complete)]
done <- nrow(Y[You==m-1 & Opponent==n-1 & !Complete]) > 0
iter <- iter+1
}
if (iter >= iter.max) warning("Unable to find a solution. Using last.")
i.max <- max(n+margin, m+margin, max(c(Y$You, Y$Opponent))) + 1
#
# Represent the relevant part of the lattice.
#
X <- as.data.table(expand.grid(You=0:i.max,
Opponent=0:i.max))
X[, Win := (You == m & You-Opponent >= margin) |
(You > m & You-Opponent == margin)]
X[, Loss := (Opponent == n & You-Opponent <= -margin) |
(Opponent > n & You-Opponent == -margin)]
#
# Represent the absorbing boundary.
#
A <- data.table(x=c(m, m, i.max, 0, n-margin, i.max-margin),
y=c(0, m-margin, i.max-margin, n, n, i.max),
Winner=rep(c("You", "Opponent"), each=3))
#
# Plotting.
#
ggplot(X[Win==TRUE | Loss==TRUE], aes(You, Opponent)) +
geom_path(aes(x, y, color=Winner, group=Winner), inherit.aes=FALSE,
data=A, size=1.5) +
geom_point(data=X, color="#c0c0c0") +
geom_point(aes(fill=Win), size=3, shape=22, show.legend=FALSE) +
geom_path(data=Y, size=1) +
coord_equal(xlim=c(-1/2, i.max-1/2), ylim=c(-1/2, i.max-1/2),
ratio=1, expand=FALSE) +
ggtitle("Example Game Path",
paste0("You need ", m, " points to win; opponent needs ", n,
"; and the margin is ", margin, ".")) | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | The analysis is complicated by the prospect that the game goes into "overtime" in order to win by a margin of at least two points. (Otherwise it would be as simple as the solution shown at https://st | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
The analysis is complicated by the prospect that the game goes into "overtime" in order to win by a margin of at least two points. (Otherwise it would be as simple as the solution shown at https://stats.stackexchange.com/a/327015/919.) I will show how to visualize the problem and use that to break it down into readily-computed contributions to the answer. The result, although a bit messy, is manageable. A simulation bears out its correctness.
Let $p$ be your probability of winning a point. Assume all points are independent. The chance that you win a game can be broken down into (nonoverlapping) events according to how many points your opponent has at the end assuming you don't go into overtime ($0,1,\ldots, 19$) or you go into overtime. In the latter case it is (or will become) obvious that at some stage the score was 20-20.
There is a nice visualization. Let scores during the game be plotted as points $(x,y)$ where $x$ is your score and $y$ is your opponent's score. As the game unfolds, the scores move along the integer lattice in the first quadrant beginning at $(0,0)$, creating a game path. It ends the first time one of you has scored at least $21$ and has a margin of at least $2$. Such winning points form two sets of points, the "absorbing boundary" of this process, whereat the game path must terminate.
This figure shows part of the absorbing boundary (it extends infinitely up and to the right) along with the path of a game that went into overtime (with a loss for you, alas).
Let's count. The number of ways the game can end with $y$ points for your opponent is the number of distinct paths in the integer lattice of $(x,y)$ scores beginning at the initial score $(0,0)$ and ending at the penultimate score $(20,y)$. Such paths are determined by which of the $20+y$ points in the game you won. They correspond therefore to the subsets of size $20$ of the numbers $1,2,\ldots, 20+y$, and there are $\binom{20+y}{20}$ of them. Since in each such path you won $21$ points (with independent probabilities $p$ each time, counting the final point) and your opponent won $y$ points (with independent probabilities $1-p$ each time), the paths associated with $y$ account for a total chance of
$$f(y) = \binom{20+y}{20}p^{21}(1-p)^y.$$
Similarly, there are $\binom{20+20}{20}$ ways to arrive at $(20,20)$ representing the 20-20 tie. In this situation you don't have a definite win. We may compute the chance of your win by adopting a common convention: forget how many points have been scored so far and start tracking the point differential. The game is at a differential of $0$ and will end when it first reaches $+2$ or $-2$, necessarily passing through $\pm 1$ along the way. Let $g(i)$ be the chance you win when the differential is $i\in\{-1,0,1\}$.
Since your chance of winning in any situation is $p$, we have
$$\eqalign{
g(0) &= p g(1) + (1-p)g(-1), \\
g(1) &= p + (1-p)g(0),\\
g(-1) &= pg(0).
}$$
The unique solution to this system of linear equations for the vector $(g(-1),g(0),g(1))$ implies
$$g(0) = \frac{p^2}{1-2p+2p^2}.$$
This, therefore, is your chance of winning once $(20,20)$ is reached (which occurs with a chance of $\binom{20+20}{20}p^{20}(1-p)^{20}$).
Consequently your chance of winning is the sum of all these disjoint possibilities, equal to
$$\eqalign{
&\sum_{y=0}^{19}f(y) + g(0)p^{20}(1-p)^{20} \binom{20+20}{20} \\
= &\sum_{y=0}^{19}\binom{20+y}{20}p^{21}(1-p)^y + \frac{p^2}{1-2p+2p^2}p^{20}(1-p)^{20} \binom{20+20}{20}\\
= &\frac{p^{21}}{1-2p+2p^2}\left(\sum_{y=0}^{19}\binom{20+y}{20}(1-2p+2p^2)(1-p)^y + \binom{20+20}{20}p(1-p)^{20} \right).
}$$
The stuff inside the parentheses on the right is a polynomial in $p$. (It looks like its degree is $21$, but the leading terms all cancel: its degree is $20$.)
When $p=0.58$, the chance of a win is close to $0.855913992.$
You should have no trouble generalizing this analysis to games that terminate with any numbers of points. When the required margin is greater than $2$ the result gets more complicated but is just as straightforward.
Incidentally, with these chances of winning, you had a $(0.8559\ldots)^{15}\approx 9.7\%$ chance of winning the first $15$ games. That's not inconsistent with what you report, which might encourage us to continue supposing the outcomes of each point are independent. We would thereby project that you have a chance of
$$(0.8559\ldots)^{35}\approx 0.432\%$$
of winning all the remaining $35$ games, assuming they proceed according to all these assumptions. It doesn't sound like a good bet to make unless the payoff is large!
I like to check work like this with a quick simulation. Here is R code to generate tens of thousands of games in a second. It assumes the game will be over within 126 points (extremely few games need to continue that long, so this assumption has no material effect on the results).
n <- 21 # Points your opponent needs to win
m <- 21 # Points you need to win
margin <- 2 # Minimum winning margin
p <- .58 # Your chance of winning a point
n.sim <- 1e4 # Iterations in the simulation
sim <- replicate(n.sim, {
x <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE)
points.1 <- cumsum(x)
points.0 <- cumsum(1-x)
win.1 <- points.1 >= m & points.0 <= points.1-margin
win.0 <- points.0 >= n & points.1 <= points.0-margin
which.max(c(win.1, TRUE)) < which.max(c(win.0, TRUE))
})
mean(sim)
When I ran this, you won in 8,570 cases out of the 10,000 iterations. A Z-score (with approximately a Normal distribution) can be computed to test such results:
Z <- (mean(sim) - 0.85591399165186659) / (sd(sim)/sqrt(n.sim))
message(round(Z, 3)) # Should be between -3 and 3, roughly.
The value of $0.31$ in this simulation is perfectly consistent with the foregoing theoretical computation.
Appendix 1
In light of the update to the question, which lists the outcomes of the first 18 games, here are reconstructions of game paths consistent with these data. You can see that two or three of the games were perilously close to losses. (Any path ending on a light gray square is a loss for you.)
Potential uses of this figure include observing:
The paths concentrate around a slope given by the ratio 267:380 of total scores, equal approximately to 58.7%.
The scatter of the paths around that slope shows the variation expected when points are independent.
If points are made in streaks, then individual paths would tend to have long vertical and horizontal stretches.
In a longer set of similar games, expect to see paths that tend to stay within the colored range, but also expect a few to extend beyond it.
The prospect of a game or two whose path lies generally above this spread indicates the possibility that your opponent will eventually win a game, probably sooner rather than later.
Appendix 2
The code to create the figure was requested. Here it is (cleaned up to produce a slightly nicer graphic).
library(data.table)
library(ggplot2)
n <- 21 # Points your opponent needs to win
m <- 21 # Points you need to win
margin <- 2 # Minimum winning margin
p <- 0.58 # Your chance of winning a point
#
# Quick and dirty generation of a game that goes into overtime.
#
done <- FALSE
iter <- 0
iter.max <- 2000
while(!done & iter < iter.max) {
Y <- sample(1:0, 3*(m+n), prob=c(p, 1-p), replace=TRUE)
Y <- data.table(You=c(0,cumsum(Y)), Opponent=c(0,cumsum(1-Y)))
Y[, Complete := (You >= m & You-Opponent >= margin) |
(Opponent >= n & Opponent-You >= margin)]
Y <- Y[1:which.max(Complete)]
done <- nrow(Y[You==m-1 & Opponent==n-1 & !Complete]) > 0
iter <- iter+1
}
if (iter >= iter.max) warning("Unable to find a solution. Using last.")
i.max <- max(n+margin, m+margin, max(c(Y$You, Y$Opponent))) + 1
#
# Represent the relevant part of the lattice.
#
X <- as.data.table(expand.grid(You=0:i.max,
Opponent=0:i.max))
X[, Win := (You == m & You-Opponent >= margin) |
(You > m & You-Opponent == margin)]
X[, Loss := (Opponent == n & You-Opponent <= -margin) |
(Opponent > n & You-Opponent == -margin)]
#
# Represent the absorbing boundary.
#
A <- data.table(x=c(m, m, i.max, 0, n-margin, i.max-margin),
y=c(0, m-margin, i.max-margin, n, n, i.max),
Winner=rep(c("You", "Opponent"), each=3))
#
# Plotting.
#
ggplot(X[Win==TRUE | Loss==TRUE], aes(You, Opponent)) +
geom_path(aes(x, y, color=Winner, group=Winner), inherit.aes=FALSE,
data=A, size=1.5) +
geom_point(data=X, color="#c0c0c0") +
geom_point(aes(fill=Win), size=3, shape=22, show.legend=FALSE) +
geom_path(data=Y, size=1) +
coord_equal(xlim=c(-1/2, i.max-1/2), ylim=c(-1/2, i.max-1/2),
ratio=1, expand=FALSE) +
ggtitle("Example Game Path",
paste0("You need ", m, " points to win; opponent needs ", n,
"; and the margin is ", margin, ".")) | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
The analysis is complicated by the prospect that the game goes into "overtime" in order to win by a margin of at least two points. (Otherwise it would be as simple as the solution shown at https://st |
2,232 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | Using the binomial distribution and assuming every point is independent:
The probability the $58\%$ player gets to $21$ in the first $40$ points (taking account of the fact the last point must be won) is $\sum_{n=21}^{40} {n-1 \choose 20} 0.58^{21}0.42^{n-21}$ $=\sum_{k=21}^{40} {40 \choose k} 0.58^{k}0.42^{40-k}$ $\approx 0.80695$
The probability the $58\%$ player gets $20$ from $40$ points played is the binomial ${40 \choose 20} 0.58^{20}0.42^{20} \approx 0.074635$. Conditioned on that, the probability the $58\%$ player then wins with the two-point margin is $\frac{0.58^2}{0.58^2+0.42^2}\approx 0.656006$
So the overall probability the $58\%$ player wins is about $0.80695+0.074635\times 0.656006$ $\approx 0.8559$
The probability of the $58\%$ player winning the first $15$ games is then about $0.8559^{15} \approx 0.0969$ which is fairly unlikely. The probability of the $58\%$ player winning the final $35$ games is about $0.8559^{35} \approx 0.0043$ which is very unlikely. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | Using the binomial distribution and assuming every point is independent:
The probability the $58\%$ player gets to $21$ in the first $40$ points (taking account of the fact the last point must be won | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
Using the binomial distribution and assuming every point is independent:
The probability the $58\%$ player gets to $21$ in the first $40$ points (taking account of the fact the last point must be won) is $\sum_{n=21}^{40} {n-1 \choose 20} 0.58^{21}0.42^{n-21}$ $=\sum_{k=21}^{40} {40 \choose k} 0.58^{k}0.42^{40-k}$ $\approx 0.80695$
The probability the $58\%$ player gets $20$ from $40$ points played is the binomial ${40 \choose 20} 0.58^{20}0.42^{20} \approx 0.074635$. Conditioned on that, the probability the $58\%$ player then wins with the two-point margin is $\frac{0.58^2}{0.58^2+0.42^2}\approx 0.656006$
So the overall probability the $58\%$ player wins is about $0.80695+0.074635\times 0.656006$ $\approx 0.8559$
The probability of the $58\%$ player winning the first $15$ games is then about $0.8559^{15} \approx 0.0969$ which is fairly unlikely. The probability of the $58\%$ player winning the final $35$ games is about $0.8559^{35} \approx 0.0043$ which is very unlikely. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
Using the binomial distribution and assuming every point is independent:
The probability the $58\%$ player gets to $21$ in the first $40$ points (taking account of the fact the last point must be won |
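The figures in the answer above can be checked numerically in a few lines of base R:
p <- 0.58
p.win.before.tie <- sum(dbinom(21:40, 40, p))               # reach 21 within 40 points, ~0.80695
p.tie.at.20      <- dbinom(20, 40, p)                       # 20-20 after 40 points, ~0.074635
p.win.from.tie   <- p^2 / (p^2 + (1 - p)^2)                 # win from deuce, ~0.656006
p.game <- p.win.before.tie + p.tie.at.20 * p.win.from.tie   # ~0.8559
c(p.game, p.game^15, p.game^35)                             # one game, first 15 games, next 35 games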
2,233 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | I went with a computational answer. Here is an R function that simulates a ping-pong game where the winner has to win by 2. The only argument is the probability that you win a point. It will return the final score of that game:
## data simulation function ----------------------------------------------------
sim_game <- function(pt_chance) {
them <- 0
you <- 0
while (sum((them < 21 & you < 21), abs(them - you) < 2) > 0) { # keep playing while nobody has 21 yet or the lead is less than 2
if (rbinom(1, 1, pt_chance) == 1) {
you <- you + 1
them <- them + 0
} else {
you <- you + 0
them <- them + 1
}
}
return(list(them = them, you = you))
}
Let's first make sure it works by simulating 10,000 games where you have a 50% chance of winning each point. We should observe that your win percentage is about 50%:
## testing 10,000 games --------------------------------------------------------
set.seed(1839)
results <- lapply(1:10000, function(x) sim_game(.5))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
mean(results$you_win)
This returns .4955, about what we would expect. So let's plug in your 58%:
## simulate 10,000 games -------------------------------------------------------
set.seed(1839)
results <- lapply(1:10000, function(x) sim_game(.58))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
mean(results$you_win)
This returns .8606. So you have about an 86.06% chance of winning one game.
We can now simulate across 35 game batches and see how many times you would win all 35:
## how often do you win all 35? ------------------------------------------------
set.seed(1839)
won_all_35 <- c()
for (i in 1:10000) {
results <- lapply(1:35, function(x) sim_game(.58))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
won_all_35[i] <- mean(results$you_win) == 1
}
mean(won_all_35)
This returns .0037, which means you have about a 0.37% chance of winning the next 35 games. This assumes that all games and all points are independent of one another. You could program that explicitly into the function above, if you wanted to.
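For instance, here is a hypothetical sketch of one way to relax the point-independence assumption: after winning a point your chance on the next point gets a small "momentum" bump, and after losing one it drops by the same amount. The momentum parameter and its default value are invented for illustration only, not estimated from anything:
sim_game_momentum <- function(pt_chance, momentum = 0.02) {
  them <- 0
  you <- 0
  adj <- 0                                    # no adjustment on the very first point
  while (sum((them < 21 & you < 21), abs(them - you) < 2) > 0) {
    p_now <- min(max(pt_chance + adj, 0), 1)  # keep the probability inside [0, 1]
    if (rbinom(1, 1, p_now) == 1) {
      you <- you + 1
      adj <- momentum
    } else {
      them <- them + 1
      adj <- -momentum
    }
  }
  return(list(them = them, you = you))
}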
Note: I'm doing this on the fly. I'm sure there is a more computationally efficient way of programming this. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | I went with a computational answer. Here is an R function that simulates a ping-pong game where the winner has to win by 2. The only argument is the probability that you win a point. It will return th | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
I went with a computational answer. Here is an R function that simulates a ping-pong game where the winner has to win by 2. The only argument is the probability that you win a point. It will return the final score of that game:
## data simulation function ----------------------------------------------------
sim_game <- function(pt_chance) {
them <- 0
you <- 0
while (sum((them < 21 & you < 21), abs(them - you) < 2) > 0) {
if (rbinom(1, 1, pt_chance) == 1) {
you <- you + 1
them <- them + 0
} else {
you <- you + 0
them <- them + 1
}
}
return(list(them = them, you = you))
}
Let's first make sure it works by simulating 10,000 games where you have a 50% chance of winning each point. We should observe that your win percentage is about 50%:
## testing 10,000 games --------------------------------------------------------
set.seed(1839)
results <- lapply(1:10000, function(x) sim_game(.5))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
mean(results$you_win)
This returns .4955, about what we would expect. So let's plug in your 58%:
## simulate 10,000 games -------------------------------------------------------
set.seed(1839)
results <- lapply(1:10000, function(x) sim_game(.58))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
mean(results$you_win)
This returns .8606. So you have about an 86.06% chance of winning one game.
We can now simulate across 35 game batches and see how many times you would win all 35:
## how often do you win all 35? ------------------------------------------------
set.seed(1839)
won_all_35 <- c()
for (i in 1:10000) {
results <- lapply(1:35, function(x) sim_game(.58))
results <- as.data.frame(do.call(rbind, results))
results$you_win <- unlist(results$you) > unlist(results$them)
won_all_35[i] <- mean(results$you_win) == 1
}
mean(won_all_35)
This returns .0037, which means you have about a 0.37% chance of winning the next 35 games. This assumes that all games and all points are independent of one another. You could program that explicitly into the function above, if you wanted to.
Note: I'm doing this on the fly. I'm sure there is a more computationally efficient way of programming this. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
I went with a computational answer. Here is an R function that simulates a ping-pong game where the winner has to win by 2. The only argument is the probability that you win a point. It will return th |
2,234 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | Should we assume that the 58% chance of winning is fixed and that points are independent?
I believe that Whuber's answer is a good one, and beautifully written and explained, when the consideration is that every point is independent from the next one. However I believe that, in practice it is only an interesting starting point (theoretic/idealized). I imagine that in reality the points are not independent from each other, and this might make it more or less likely that your co-worker opponent gets to a win at least once out of 50.
At first I imagined that the dependence between points would be a random process, i.e. not controlled by the players (e.g. playing differently when one is winning or losing), and that this should create a greater dispersion of the results, benefiting the lesser player's chances of getting that one win out of fifty.
A second thought, however, might suggest the opposite: the fact that you already "achieved" something that had a 9.7% chance may give some (but only slight) support, from a Bayesian point of view, to mechanisms that give you more than an 85% probability of winning a game (or at least make it less likely that your opponent has a much higher probability than 15%, as argued in the previous two paragraphs). For instance, it could be that you score better when your position is less good (it is not unusual for people to score very differently on match points, in favor or against, than on regular points). You can improve the estimate of the 85% by taking these dynamics into account, and possibly you have more than an 85% probability of winning a game.
Anyway, it might be very wrong to use this simple points statistic to provide an answer. Yes, you can do it, but it won't be right, since the premises (independence of points) are not necessarily correct and highly influence the answer. The 42/58 statistic is more information, but we do not know very well how to use it (the correctness of the model), and using it might provide answers with a precision that they do not actually have.
Example: an equally reasonable model with a completely different result
So the hypothetical question (assuming independent points and known, theoretical probabilities for these points) is in itself interesting and can be answered. But, just to be annoying and skeptical/cynical: an answer to the hypothetical case does not relate that much to your underlying/original problem, and this might be why the statisticians/data-scientists at your company are reluctant to provide a straight answer.
Just to give an alternative example (not necessarily better) that provides a confusing (counter-)statement: 'Q: what is the probability to win all of the total of 50 games if I already won 15?' If we do not start to think that 'the point scores 42/58 are relevant or give us better predictions', then we would start to make predictions of your probability to win the game, and predictions to win another 35 games, solely based on your previously won 15 games:
with a Bayesian technique for your probability to win a game this would mean: $p(\text{win another 35} \mid \text{after already 15}) = \frac{\int_0^1 f(p)\, p^{50}\, dp}{\int_0^1 f(p)\, p^{15}\, dp}$, which is roughly 31% for a uniform prior $f(p) = 1$, although that might be a bit too optimistic. But still, if you consider a beta distribution with $\beta=\alpha$ between 1 and 5, then you get to the probabilities plotted by the code for graph 1 below:
which means that I would not be so pessimistic as the straightforward 0.432% prediction. The fact that you already won 15 games should elevate the probability that you win the next 35 games.
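A quick sanity check of that 31% figure (a minimal sketch; it uses the fact that $\int_0^1 p^k\, dp = 1/(k+1)$, or the same integrate() call as in the code for graph 1 below):
(1 / 51) / (1 / 16)    # = 16/51, roughly 0.314
integrate(function(p) p^50, 0, 1)$value / integrate(function(p) p^15, 0, 1)$value    # same value numerically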
Note based on the new data
Based on your data for the 18 games I tried fitting a beta-binomial model, varying $\alpha=\mu\nu$ and $\beta=(1-\mu)\nu$, calculating the probabilities of getting to a score i,21 (via i,20) or to the score 20,20, and then summing their logs to obtain a log-likelihood score.
It shows that a very high $\nu$ parameter (little dispersion in the underlying beta distribution) has a higher likelihood and thus there is probably little over-dispersion. That means that the data does not suggest that it is better to use a variable parameter for your probability of winning a point, instead of your fixed 58% chance of winning. This new data is providing extra support for Whuber's analysis, which assumes scores based on a binomial distribution. But of course, this still assumes that the model is static and also that you and your co-worker behave according to a random model (in which every game and point are independent).
Maximum likelihood estimation for parameters of beta distribution in place of fixed 58% winning chance:
Q: how do I read the "LogLikelihood for parameters mu and nu" graph?
A:
1) Maximum likelihood estimate (MLE) is a way to fit a model. Likelihood means the probability of the data given the parameters of the model and then we look for the model that maximizes this. There is a lot of philosophy and mathematics behind it.
2) The plot is a lazy computational method to get to the optimum MLE. I just compute all possible values on a grid and see what the value is. If you need to be faster you can either use a computational iterative method/algorithm that seeks the optimum, or possibly there might be a direct analytical solution.
3) The parameters $\mu$ and $\nu$ relate to the beta distribution https://en.wikipedia.org/wiki/Beta_distribution which is used as a model for the p=0.58 (to make it not fixed but instead vary from time to time). This modeled 'beta-p' is then combined with a binomial model to get to predictions of probabilities to reach certain scores. It is almost the same as the beta-binomial distribution. You can see that the optimum is around $\mu \simeq 0.6$ which is not surprising. The $\nu$ value is high (meaning low dispersion). I had imagined/expected at least some over-dispersion.
code/computation for graph 1
posterior <- sapply(seq(1,5,0.1), function(x) {
integrate(function(p) dbeta(p,x,x)*p^50,0,1)[1]$value/
integrate(function(p) dbeta(p,x,x)*p^15,0,1)[1]$value
}
)
prior <- sapply(seq(1,5,0.1), function(x) {
integrate(function(p) dbeta(p,x,x)*p^35,0,1)[1]$value
}
)
layout(t(c(1,2)))
plot( seq(1,5,0.1), posterior,
ylim = c(0,0.32),
xlab = expression(paste(alpha, " and ", beta ," values for prior beta-distribution")),
ylab = "P(win another 35| after already 15)"
)
title("posterior probability assuming beta-distribution")
plot( seq(1,5,0.1), prior,
ylim = c(0,0.32),
xlab = expression(paste(alpha, " and ", beta ," values for prior beta-distribution")),
ylab = "P(win 35)"
)
title("prior probability assuming beta-distribution")
code/computation for graph 2
library("shape")
# probability that you win and opponent has kl points
Pwl <- function(a,b,kl,kw=21) {
kt <- kl+kw-1
Pwl <- choose(kt,kw-1) * beta(kw+a,kl+b)/beta(a,b)
Pwl
}
# probability to end in the 20-20 score
Pww <- function(a,b,kl=20,kw=20) {
kt <- kl+kw
Pww <- choose(kt,kw) * beta(kw+a,kl+b)/beta(a,b)
Pww
}
# probability that you lose with kw points
Plw <- function(a,b,kl=21,kw) {
kt <- kl+kw-1
Plw <- choose(kt,kw) * beta(kw+a,kl+b)/beta(a,b)
Plw
}
# calculation of log likelihood for data consisting of 17 opponent scores and 1 tie-position
# parameterization change from mu (mean) and nu to a and b
loglike <- function(mu,nu) {
a <- mu*nu
b <- (1-mu)*nu
scores <- c(18, 17, 11, 13, 15, 15, 16, 9, 17, 17, 13, 8, 17, 11, 17, 13, 19)
ps <- sapply(scores, function(x) log(Pwl(a,b,x)))
loglike <- sum(ps,log(Pww(a,b)))
loglike
}
#vectors and matrices for plotting contour
mu <- c(1:199)/200
nu <- 2^(c(0:400)/40)
z <- matrix(rep(0,length(nu)*length(mu)),length(mu))
for (i in 1:length(mu)) {
for(j in 1:length(nu)) {
z[i,j] <- loglike(mu[i],nu[j])
}
}
#plotting
levs <- c(-900,-800,-700,-600,-500,-400,-300,-200,-100,-90,-80,-70,-60,-55,-52.5,-50,-47.5)
# contour plot
filled.contour(mu,log(nu),z,
xlab="mu",ylab="log(nu)",
#levels=c(-500,-400,-300,-200,-100,-10:-1),
color.palette=function(n) {hsv(c(seq(0.15,0.7,length.out=n),0),
c(seq(0.7,0.2,length.out=n),0),
c(seq(1,0.7,length.out=n),0.9))},
levels=levs,
plot.axes= c({
contour(mu,log(nu),z,add=1, levels=levs)
title("loglikelihood for parameters mu and nu")
axis(1)
axis(2)
},""),
xlim=range(mu)+c(-0.05,0.05),
ylim=range(log(nu))+c(-0.05,0.05)
) | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | Should we assume that the 58% chance of winning is fixed and that points are independent?
I believe that Whuber's answer is a good one, and beautifully written and explained, when the consideration is | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
Should we assume that the 58% chance of winning is fixed and that points are independent?
I believe that Whuber's answer is a good one, and beautifully written and explained, when the consideration is that every point is independent from the next one. However I believe that, in practice it is only an interesting starting point (theoretic/idealized). I imagine that in reality the points are not independent from each other, and this might make it more or less likely that your co-worker opponent gets to a win at least once out of 50.
At first I imagined that the dependence of the points would be a random process, ie not controlled by the players (e.g. when one is winning or loosing playing differently), and this should create a greater dispersion of the results benefiting the lesser player to get this one point out of fifty.
A second thought however might suggest the opposite: The fact that you already "achieved" something with a 9.7% of chance may give some (but only slight) benefit, from a Bayesian point of view, to ideas about favouring mechanisms that get you to win more than 85% probability to win a game (or at least make it less likely that your opponent has a much higher probability than 15% as argued in the previous two paragraphs). For instance, it could be that you score better when your position is less good (it is not strange for people scoring much more different on match points, in favor or against, than on regular points). You can improve estimates of the 85% by taking these dynamics into account and possibly you have more than 85% probability to win a game.
Anyway, it might be very wrong to use this simple points statistic to provide an answer. Yes you can do it, but it won't be right since the premises (independency of points) are not necessarily correct and highly influence the answer. The 42/58 statistic is more information but we do not know very well how to use it (the correctness of the model) and using the information might provide answers with high precision that it actually does not have.
Example
Example: an equally reasonable model with a completely different result
So the hypothetical question (assuming independent points and known, theoretical, probabilities for these points) is in itself interesting and can be answered, But just to be annoying and skeptical/cynical; an answer to the hypothetical case does not relate that much to your underlying/original problem, and might be why the statisticians/data-scientists at your company are reluctant to provide a straight answer.
Just to give an alternative example (not neccesarily better) that provides a confusing (counter-) statement 'Q: what is the probability to win all of the total of 50 games if I already won 15?' If we do not start to think that 'the point scores 42/58 are relevant or give us better predictions' then we would start to make predictions of your probability to win the game and predictions to win another 35 games solely based on your previously won 15 games:
with a Bayesian technique for your probability to win a game this would mean: $p(\text{win another 35 | after already 15}) = \frac{\int_0^1 f(p) p^{50}}{\int_0^1 f(p) p^{15}}$ which is roughly 31% for a uniform prior f(x) = 1, although that might be a bit too optimistic. But still if you consider a beta distribution with $\beta=\alpha$ between 1 and 5 then you get to:
which means that I would not be so pessimistic as the straightforward 0.432% prediction The fact that you already won 15 games should elevate the probability that you win the next 35 games.
Note based on the new data
Based on your data for the 18 games I tried fitting a beta-binomial model. Varying $\alpha=\mu\nu$ and $\beta=(1-\mu)\nu$ and calculating the probabilities to get to a score i,21 (via i,20) or a score 20,20 and then sum their logs to a log-likelihood score.
It shows that a very high $\nu$ parameter (little dispersion in the underlying beta distribution) has a higher likelihood and thus there is probably little over-dispersion. That means that the data does not suggest that it is better to use a variable parameter for your probability of winning a point, instead of your fixed 58% chance of winning. This new data is providing extra support for Whuber's analysis, which assumes scores based on a binomial distribution. But of course, this still assumes that the model is static and also that you and your co-worker behave according to a random model (in which every game and point are independent).
Maximum likelihood estimation for parameters of beta distribution in place of fixed 58% winning chance:
Q: how do I read the "LogLikelihood for parameters mu and nu" graph?
A:
1) Maximum likelihood estimate (MLE) is a way to fit a model. Likelihood means the probability of the data given the parameters of the model and then we look for the model that maximizes this. There is a lot of philosophy and mathematics behind it.
2) The plot is a lazy computational method to get to the optimum MLE. I just compute all possible values on a grid and see what the valeu is. If you need to be faster you can either use a computational iterative method/algorithm that seeks the optimum, or possibly there might be a direct analytical solution.
3) The parameters $\mu$ and $\nu$ relate to the beta distribution https://en.wikipedia.org/wiki/Beta_distribution which is used as a model for the p=0.58 (to make it not fixed but instead vary from time to time). This modeled 'beta-p' is than combined with a binomial model to get to predictions of probabilities to reach certain scores. It is almost the same as the beta-binomial distribution. You can see that the optimum is around $\mu \simeq 0.6$ which is not surprising. The $\nu$ value is high (meaning low dispersion). I had imagined/expected at least some over-dispersion.
code/computation for graph 1
posterior <- sapply(seq(1,5,0.1), function(x) {
integrate(function(p) dbeta(p,x,x)*p^50,0,1)[1]$value/
integrate(function(p) dbeta(p,x,x)*p^15,0,1)[1]$value
}
)
prior <- sapply(seq(1,5,0.1), function(x) {
integrate(function(p) dbeta(p,x,x)*p^35,0,1)[1]$value
}
)
layout(t(c(1,2)))
plot( seq(1,5,0.1), posterior,
ylim = c(0,0.32),
xlab = expression(paste(alpha, " and ", beta ," values for prior beta-distribution")),
ylab = "P(win another 35| after already 15)"
)
title("posterior probability assuming beta-distribution")
plot( seq(1,5,0.1), prior,
ylim = c(0,0.32),
xlab = expression(paste(alpha, " and ", beta ," values for prior beta-distribution")),
ylab = "P(win 35)"
)
title("prior probability assuming beta-distribution")
code/computation for graph 2
library("shape")
# probability that you win and opponent has kl points
Pwl <- function(a,b,kl,kw=21) {
kt <- kl+kw-1
Pwl <- choose(kt,kw-1) * beta(kw+a,kl+b)/beta(a,b)
Pwl
}
# probability to end in the 20-20 score
Pww <- function(a,b,kl=20,kw=20) {
kt <- kl+kw
Pww <- choose(kt,kw) * beta(kw+a,kl+b)/beta(a,b)
Pww
}
# probability that you lin with kw points
Plw <- function(a,b,kl=21,kw) {
kt <- kl+kw-1
Plw <- choose(kt,kw) * beta(kw+a,kl+b)/beta(a,b)
Plw
}
# calculation of log likelihood for data consisting of 17 opponent scores and 1 tie-position
# parametezation change from mu (mean) and nu to a and b
loglike <- function(mu,nu) {
a <- mu*nu
b <- (1-mu)*nu
scores <- c(18, 17, 11, 13, 15, 15, 16, 9, 17, 17, 13, 8, 17, 11, 17, 13, 19)
ps <- sapply(scores, function(x) log(Pwl(a,b,x)))
loglike <- sum(ps,log(Pww(a,b)))
loglike
}
#vectors and matrices for plotting contour
mu <- c(1:199)/200
nu <- 2^(c(0:400)/40)
z <- matrix(rep(0,length(nu)*length(mu)),length(mu))
for (i in 1:length(mu)) {
for(j in 1:length(nu)) {
z[i,j] <- loglike(mu[i],nu[j])
}
}
#plotting
levs <- c(-900,-800,-700,-600,-500,-400,-300,-200,-100,-90,-80,-70,-60,-55,-52.5,-50,-47.5)
# contour plot
filled.contour(mu,log(nu),z,
xlab="mu",ylab="log(nu)",
#levels=c(-500,-400,-300,-200,-100,-10:-1),
color.palette=function(n) {hsv(c(seq(0.15,0.7,length.out=n),0),
c(seq(0.7,0.2,length.out=n),0),
c(seq(1,0.7,length.out=n),0.9))},
levels=levs,
plot.axes= c({
contour(mu,log(nu),z,add=1, levels=levs)
title("loglikelihood for parameters mu and nu")
axis(1)
axis(2)
},""),
xlim=range(mu)+c(-0.05,0.05),
ylim=range(log(nu))+c(-0.05,0.05)
) | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
Should we assume that the 58% chance of winning is fixed and that points are independent?
I believe that Whuber's answer is a good one, and beautifully written and explained, when the consideration is |
2,235 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | Much effort could be spent on a perfect model. But sometimes a bad model is better. And nothing says bad model like the central limit theorem -- everything is a normal curve.
We'll ignore "overtime". We'll model the sum of individual points as a normal curve. We'll model playing 38 rounds and whomever has the most points win, instead of first to 20. This is quite similar game wise!
And, blindly, I'll claim we get close to the right answer.
Let $X$ be the distribution of a point. $X$ has value 1 when you get a point, and 0 when you don't.
So $E(X)$ =~ $0.58$ and $Var(X)$ = $E(X)*(1-E(X))$ =~ $0.24$.
If $X_i$ are independent points, then $\sum_{i=1}^{38}{X_i}$ is the points you get after playing 38 rounds.
$E(\sum_{i=1}^{38}{X_i})$ = $38*E(X)$ =~ $22.04$
$Var(\sum_{i=1}^{38}{X_i})$ = 38*Var($X$) =~ $9.12$
and $SD(\sum_{i=1}^{38}{X_i})$ = $\sqrt{38*Var(X)}$ =~ $3.02$
In our crude model, we lose if $\sum_{i=1}^{38}{X_i} < 19$ and win if $\sum_{i=1}^{38}{X_i} > 19$.
$\frac{22.04-19}{3.02}$ is $1.01$ standard deviations away from the mean, which works out to a $15.62\%$ chance of failure after consulting a z score chart.
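In R the table lookup is a single call to pnorm (a minimal check of that figure):
pnorm((19 - 22.04) / 3.02)    # ~0.157 using the unrounded z-score
pnorm(-1.01)                  # ~0.1562, the value read from a z table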
If we compare to the more rigorous answers, this is about $1\%$ off the correct value.
You'd generally be better off examining the reliability of the $58\%$ victory chance rather than a more rigorous model that assumes $58\%$ chance and models it perfectly. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | Much effort could be spent on a perfect model. But sometimes a bad model is better. And nothing says bad model like the central limit theorem -- everything is a normal curve.
We'll ignore "overtime" | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
Much effort could be spent on a perfect model. But sometimes a bad model is better. And nothing says bad model like the central limit theorem -- everything is a normal curve.
We'll ignore "overtime". We'll model the sum of individual points as a normal curve. We'll model playing 38 rounds and whomever has the most points win, instead of first to 20. This is quite similar game wise!
And, blindly, I'll claim we get close to the right answer.
Let $X$ be the distribution of a point. $X$ has value 1 when you get a point, and 0 when you don't.
So $E(X)$ =~ $0.58$ and $Var(X)$ = $E(X)*(1-E(X))$ =~ $0.24$.
If $X_i$ are independent points, then $\sum_{i=1}^{38}{X_i}$ is the points you get after playing 38 rounds.
$E(\sum_{i=1}^{38}{X_i})$ = $38*E(X)$ =~ $22.04$
$Var(\sum_{i=1}^{38}{X_i})$ = 38*Var($X$) =~ $9.12$
and $SD(\sum_{i=1}^{38}{X_i})$ = $\sqrt{38*Var(X))}$ =~ $3.02$
In our crude model, we lose if $\sum_{i=1}^{38}{X_i} < 19$ and win if $\sum_{i=1}^{38}{X_i} > 19$.
$\frac{22.04-19}{3.02}$ is $1.01$ standard deviations away from the mean, which works out to a $15.62\%$ chance of failure after consulting a z score chart.
If we compare to the more rigorous answers, this is about $1\%$ off the correct value.
You'd generally be better off examining the reliability of the $58\%$ victory chance rather than a more rigorous model that assumes $58\%$ chance and models it perfectly. | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
Much effort could be spent on a perfect model. But sometimes a bad model is better. And nothing says bad model like the central limit theorem -- everything is a normal curve.
We'll ignore "overtime" |
2,236 | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2? | Based on simulation, it looks like the probability of winning any given game is about 85.5%.
The probability of winning by exactly 2 (which is how I read the title, but doesn't seem to be what you're asking) is about 10.1%.
Run the code below.
set.seed(328409)
sim.game <- function(p)
{
x1 = 0
x2 = 0
while( (max(c(x1,x2)) < 21) | abs(x1-x2)<2 )
{
if(runif(1) < p) x1 = x1 + 1 else x2 = x2 + 1
}
return( c(x1,x2) )
}
S <- matrix(0, 1e5, 2)
for(k in 1:1e5) S[k,] <- sim.game(0.58)
mean( (S[,1]-S[,2]) == 2 ) #chance of winning by 2
mean(S[,1]>S[,2]) #chance of winning | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w | Based on simulation, it looks like the probability of winning any given game is about 85.5%.
The probability of winning by exactly 2 (which is how I read the title, but doesn't seem to be what you're | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, win by 2?
Based on simulation, it looks like the probability of winning any given game is about 85.5%.
The probability of winning by exactly 2 (which is how I read the title, but doesn't seem to be what you're asking) is about 10.1%.
Run the code below.
set.seed(328409)
sim.game <- function(p)
{
x1 = 0
x2 = 0
while( (max(c(x1,x2)) < 21) | abs(x1-x2)<2 )
{
if(runif(1) < p) x1 = x1 + 1 else x2 = x2 + 1
}
return( c(x1,x2) )
}
S <- matrix(0, 1e5, 2)
for(k in 1:1e5) S[k,] <- sim.game(0.58)
mean( (S[,1]-S[,2]) == 2 ) #chance of winning by 2
mean(S[,1]>S[,2]) #chance of winning | If I have a 58% chance of winning a point, what's the chance of me winning a ping pong game to 21, w
Based on simulation, it looks like the probability of winning any given game is about 85.5%.
The probability of winning by exactly 2 (which is how I read the title, but doesn't seem to be what you're |
2,237 | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity? | Since you ask for insights, I'm going to take a fairly intuitive approach rather than a more mathematical tack:
Following the concepts in my answer here, we can formulate a ridge regression as a regression with dummy data by adding $p$ (in your formulation) observations, where $y_{n+j}=0$, $x_{j,n+j}=\sqrt{\lambda}$ and $x_{i,n+j}=0$ for $i\neq j$. If you write out the new RSS for this expanded data set, you'll see the additional observations each add a term of the form $(0-\sqrt{\lambda}\beta_j)^2=\lambda\beta_j^2$, so the new RSS is the original $\text{RSS} + \lambda \sum_{j=1}^p\beta_j^2$ -- and minimizing the RSS on this new, expanded data set is the same as minimizing the ridge regression criterion.
So what can we see here? As $\lambda$ increases, the additional $x$-rows each have one component that increases, and so the influence of these points also increases. They pull the fitted hyperplane toward themselves. Then as $\lambda$ and the corresponding components of the $x$'s go off to infinity, all the involved coefficients "flatten out" to $0$.
That is, as $\lambda\to\infty$, the penalty will dominate the minimization, so the $\beta$s will go to zero. If the intercept is not penalized (the usual case) then the model shrinks more and more toward the mean of the response.
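Here is a minimal sketch of that dummy-data formulation with made-up data (the variable names and the particular $\lambda$ are mine): ordinary least squares on the augmented data set reproduces the ridge solution $(X^TX+\lambda I)^{-1}X^Ty$.
set.seed(1)
n <- 50; p <- 3; lambda <- 2
X <- scale(matrix(rnorm(n * p), n, p), center = TRUE, scale = FALSE)
y <- X %*% c(1, -2, 0.5) + rnorm(n)
y <- y - mean(y)                                  # centered response, so no intercept is needed
beta_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))  # direct ridge solution
X_aug <- rbind(X, sqrt(lambda) * diag(p))         # p dummy rows with sqrt(lambda) on the diagonal
y_aug <- c(y, rep(0, p))                          # the dummy responses are all zero
beta_aug <- coef(lm(y_aug ~ X_aug - 1))           # plain least squares on the augmented data
cbind(beta_ridge, beta_aug)                       # the two columns agree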
I'll give an intuitive sense of why we're talking about ridges first (which also suggests why it's needed), then tackle a little history. The first is adapted from my answer here:
If there's multicollinearity, you get a "ridge" in the likelihood function (likelihood is a function of the $\beta$'s). This in turn yields a long "valley" in the RSS (since RSS=$-2\log\mathcal{L}$).
Ridge regression "fixes" the ridge - it adds a penalty that turns the ridge into a nice peak in likelihood space, equivalently a nice depression in the criterion we're minimizing:
[Clearer image]
The actual story behind the name is a little more complicated. In 1959 A.E. Hoerl [1] introduced ridge analysis for response surface methodology, and it very soon [2] became adapted to dealing with multicollinearity in regression ('ridge regression'). See, for example, the discussion by R.W. Hoerl in [3], which describes A.E. Hoerl's (not R.W.'s) use of contour plots of the response surface* in identifying where to head to find local optima (where one 'heads up the ridge'). In ill-conditioned problems, the issue of a very long ridge arises, and insights and methodology from ridge analysis are adapted to the related issue with the likelihood/RSS in regression, producing ridge regression.
* examples of response surface contour plots (in the case of quadratic response) can be seen here (Fig 3.9-3.12).
That is, "ridge" actually refers to the characteristics of the function we were attempting to optimize, rather than to adding a "ridge" (+ve diagonal) to the $X^TX$ matrix (so while ridge regression does add to the diagonal, that's not why we call it 'ridge' regression).
For some additional information on the need for ridge regression, see the first link under list item 2. above.
References:
[1]: Hoerl, A.E. (1959). Optimum solution of many variables equations. Chemical Engineering Progress, 55 (11), 69-78.
[2]: Hoerl, A.E. (1962). Applications of ridge analysis to regression problems. Chemical Engineering Progress, 58 (3), 54-59.
[3]: Hoerl, R.W. (1985). Ridge Analysis 25 Years Later. American Statistician, 39 (3), 186-192.
Following the concepts in my answer here, we can formulate a ridge regression as a regr | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity?
Since you ask for insights, I'm going to take a fairly intuitive approach rather than a more mathematical tack:
Following the concepts in my answer here, we can formulate a ridge regression as a regression with dummy data by adding $p$ (in your formulation) observations, where $y_{n+j}=0$, $x_{j,n+j}=\sqrt{\lambda}$ and $x_{i,n+j}=0$ for $i\neq j$. If you write out the new RSS for this expanded data set, you'll see the additional observations each add a term of the form $(0-\sqrt{\lambda}\beta_j)^2=\lambda\beta_j^2$, so the new RSS is the original $\text{RSS} + \lambda \sum_{j=1}^p\beta_j^2$ -- and minimizing the RSS on this new, expanded data set is the same as minimizing the ridge regression criterion.
So what can we see here? As $\lambda$ increases, the additional $x$-rows each have one component that increases, and so the influence of these points also increases. They pull the fitted hyperplane toward themselves. Then as $\lambda$ and the corresponding components of the $x$'s go off to infinity, all the involved coefficients "flatten out" to $0$.
That is, as $\lambda\to\infty$, the penalty will dominate the minimization, so the $\beta$s will go to zero. If the intercept is not penalized (the usual case) then the model shrinks more and more toward the mean of the response.
I'll give an intuitive sense of why we're talking about ridges first (which also suggests why it's needed), then tackle a little history. The first is adapted from my answer here:
If there's multicollinearity, you get a "ridge" in the likelihood function (likelihood is a function of the $\beta$'s). This in turn yields a long "valley" in the RSS (since RSS=$-2\log\mathcal{L}$).
Ridge regression "fixes" the ridge - it adds a penalty that turns the ridge into a nice peak in likelihood space, equivalently a nice depression in the criterion we're minimizing:
[Clearer image]
The actual story behind the name is a little more complicated. In 1959 A.E. Hoerl [1] introduced ridge analysis for response surface methodology, and it very soon [2] became adapted to dealing with multicollinearity in regression ('ridge regression'). See for example, the discussion by R.W. Hoerl in [3], where it describes Hoerl's (A.E. not R.W.) use of contour plots of the response surface* in the identification of where to head to find local optima (where one 'heads up the ridge'). In ill-conditioned problems, the issue of a very long ridge arises, and insights and methodology from ridge analysis are adapted to the related issue with the likelihood/RSS in regression, producing ridge regression.
* examples of response surface contour plots (in the case of quadratic response) can be seen here (Fig 3.9-3.12).
That is, "ridge" actually refers to the characteristics of the function we were attempting to optimize, rather than to adding a "ridge" (+ve diagonal) to the $X^TX$ matrix (so while ridge regression does add to the diagonal, that's not why we call it 'ridge' regression).
For some additional information on the need for ridge regression, see the first link under list item 2. above.
References:
[1]: Hoerl, A.E. (1959). Optimum solution of many variables equations. Chemical Engineering Progress,
55 (11) 69-78.
[2]: Hoerl, A.E. (1962). Applications of ridge analysis to regression problems. Chemical Engineering Progress,
58 (3) 54-59.
[3] Hoerl, R.W. (1985). Ridge Analysis 25 Years Later.
American Statistician, 39 (3), 186-192 | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to in
Since you ask for insights, I'm going to take a fairly intuitive approach rather than a more mathematical tack:
Following the concepts in my answer here, we can formulate a ridge regression as a regr |
2,238 | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity? | If $\lambda \rightarrow \infty$ then our penalty term will be infinite for any $\beta$ other than $\beta = 0$, so that's the one we'll get. There is no other vector that will give us a finite value of the objective function.
(Update: Please see Glen_b's answer. This is not the correct historical reason!)
This comes from ridge regression's solution in matrix notation. The solution turns out to be
$$
\hat \beta = (X^TX + \lambda I)^{-1} X^TY.
$$
The $\lambda I$ term adds a "ridge" to the main diagonal and guarantees that the resulting matrix is invertible. This means that, unlike OLS, we'll always get a solution.
Ridge regression is useful when the predictors are correlated. In this case OLS can give wild results with huge coefficients, but if they are penalized we can get much more reasonable results. In general a big advantage to ridge regression is that the solution always exists, as mentioned above. This applies even to the case where $n < p$, for which OLS cannot provide a (unique) solution.
Ridge regression also is the result when a normal prior is put on the $\beta$ vector.
Here's the Bayesian take on ridge regression:
Suppose our prior for $\beta$ is $\beta \sim N(0, \frac{\sigma^2}{\lambda}I_p)$. Then because $(Y|X, \beta) \sim N(X\beta, \sigma^2 I_n)$ [by assumption] we have that
$$
\pi(\beta | y) \propto \pi(\beta) f(y|\beta)
$$
$$
\propto \frac{1}{(\sigma^2/\lambda)^{p/2}} \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta \right) \times \frac{1}{(\sigma^2)^{n/2}} \exp \left( \frac{-1}{2\sigma^2} ||y - X\beta||^2 \right)
$$
$$
\propto \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2 \right).
$$
Let's find the posterior mode (we could look at posterior mean or other things too but for this let's look at the mode, i.e. the most probable value).
This means we want
$$
\max_{\beta \in \mathbb R^p} \ \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2 \right)
$$
which is equivalent to
$$
\max_{\beta \in \mathbb R^p} \ -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2
$$
because $\log$ is strictly monotone and this in turn is equivalent to
$$
\min_{\beta \in \mathbb R^p} ||y - X\beta||^2 + \lambda \beta^T\beta
$$
which ought to look pretty familiar.
Thus we see that if we put a normal prior with mean 0 and variance $\frac{\sigma^2}{\lambda}$ on our $\beta$ vector, the value of $\beta$ which maximizes the posterior is the ridge estimator. Note that this treats $\sigma^2$ more as a frequentist parameter because there's no prior on it but it isn't known, so this isn't fully Bayesian.
Edit: you asked about the case where $n < p$.
We know that a hyperplane in $\mathbb R^p$ is defined by exactly $p$ points. If we are running a linear regression and $n = p$ then we exactly interpolate our data and get $||y - X\hat\beta||^2 = 0$. This is a solution, but it is a terrible one: our performance on future data will most likely be abysmal. Now suppose $n < p$: there is no longer a unique hyperplane defined by these points. We can fit a multitude of hyperplanes, each with 0 residual sum of squares.
A very simple example: suppose $n = p = 2$. Then we'll just get a line between these two points. Now suppose $n = 2$ but $p = 3$. Picture a plane with these two points in it. We can rotate this plane without changing the fact that these two points are in it, so there are uncountably many models all with a perfect value of our objective function, so even beyond the issue of overfitting it is not clear which one to pick.
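A minimal sketch of the $n < p$ situation with made-up data: the OLS normal equations are singular, while the ridge system always has a unique solution.
set.seed(2)
n <- 5; p <- 10; lambda <- 1
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
qr(crossprod(X))$rank                                      # at most n = 5, so the 10 x 10 matrix X'X is singular
# solve(crossprod(X), crossprod(X, y))                     # would throw an error: the system is singular
solve(crossprod(X) + lambda * diag(p), crossprod(X, y))    # the ridge solution exists and is unique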
As a final comment (per @gung's suggestion), the LASSO (using an $L_1$ penalty) is commonly used for high dimensional problems because it automatically performs variable selection (sets some $\beta_j = 0$). Delightfully enough, it turns out that the LASSO is equivalent to finding the posterior mode when using a double exponential (aka Laplace) prior on the $\beta$ vector. The LASSO also has some limitations, such as saturating at $n$ predictors and not necessarily handling groups of correlated predictors in an ideal fashion, so the elastic net (convex combination of $L_1$ and $L_2$ penalties) may be brought to bear. | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to in | If $\lambda \rightarrow \infty$ then our penalty term will be infinite for any $\beta$ other than $\beta = 0$, so that's the one we'll get. There is no other vector that will give us a finite value of | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity?
If $\lambda \rightarrow \infty$ then our penalty term will be infinite for any $\beta$ other than $\beta = 0$, so that's the one we'll get. There is no other vector that will give us a finite value of the objective function.
(Update: Please see Glen_b's answer. This is not the correct historical reason!)
This comes from ridge regression's solution in matrix notation. The solution turns out to be
$$
\hat \beta = (X^TX + \lambda I)^{-1} X^TY.
$$
The $\lambda I$ term adds a "ridge" to the main diagonal and guarantees that the resulting matrix is invertible. This means that, unlike OLS, we'll always get a solution.
Ridge regression is useful when the predictors are correlated. In this case OLS can give wild results with huge coefficients, but if they are penalized we can get much more reasonable results. In general a big advantage to ridge regression is that the solution always exists, as mentioned above. This applies even to the case where $n < p$, for which OLS cannot provide a (unique) solution.
Ridge regression also is the result when a normal prior is put on the $\beta$ vector.
Here's the Bayesian take on ridge regression:
Suppose our prior for $\beta$ is $\beta \sim N(0, \frac{\sigma^2}{\lambda}I_p)$. Then because $(Y|X, \beta) \sim N(X\beta, \sigma^2 I_n)$ [by assumption] we have that
$$
\pi(\beta | y) \propto \pi(\beta) f(y|\beta)
$$
$$
\propto \frac{1}{(\sigma^2/\lambda)^{p/2}} \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta \right) \times \frac{1}{(\sigma^2)^{n/2}} \exp \left( \frac{-1}{2\sigma^2} ||y - X\beta||^2 \right)
$$
$$
\propto \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2 \right).
$$
Let's find the posterior mode (we could look at posterior mean or other things too but for this let's look at the mode, i.e. the most probable value).
This means we want
$$
\max_{\beta \in \mathbb R^p} \ \exp \left( -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2 \right)
$$
which is equivalent to
$$
\max_{\beta \in \mathbb R^p} \ -{\lambda \over 2\sigma^2} \beta^T\beta - \frac{1}{2\sigma^2} ||y - X\beta||^2
$$
because $\log$ is strictly monotone and this in turn is equivalent to
$$
\min_{\beta \in \mathbb R^p} ||y - X\beta||^2 + \lambda \beta^T\beta
$$
which ought to look pretty familiar.
Thus we see that if we put a normal prior with mean 0 and variance $\frac{\sigma^2}{\lambda}$ on our $\beta$ vector, the value of $\beta$ which maximizes the posterior is the ridge estimator. Note that this treats $\sigma^2$ more as a frequentist parameter because there's no prior on it but it isn't known, so this isn't fully Bayesian.
Edit: you asked about the case where $n < p$.
We know that a hyperplane in $\mathbb R^p$ is defined by exactly $p$ points. If we are running a linear regression and $n = p$ then we exactly interpolate our data and get $||y - X\hat\beta||^2 = 0$. This is a solution, but it is a terrible one: our performance on future data will most likely be abysmal. Now suppose $n < p$: there is no longer a unique hyperplane defined by these points. We can fit a multitude of hyperplanes, each with 0 residual sum of squares.
A very simple example: suppose $n = p = 2$. Then we'll just get a line between these two points. Now suppose $n = 2$ but $p = 3$. Picture a plane with these two points in it. We can rotate this plane without changing the fact that these two points are in it, so there are uncountably many models all with a perfect value of our objective function, so even beyond the issue of overfitting it is not clear which one to pick.
As a final comment (per @gung's suggestion), the LASSO (using an $L_1$ penalty) is commonly used for high dimensional problems because it automatically performs variable selection (sets some $\beta_j = 0$). Delightfully enough, it turns out that the LASSO is equivalent to finding the posterior mode when using a double exponential (aka Laplace) prior on the $\beta$ vector. The LASSO also has some limitations, such as saturating at $n$ predictors and not necessarily handling groups of correlated predictors in an ideal fashion, so the elastic net (convex combination of $L_1$ and $L_2$ penalties) may be brought to bear. | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to in
If $\lambda \rightarrow \infty$ then our penalty term will be infinite for any $\beta$ other than $\beta = 0$, so that's the one we'll get. There is no other vector that will give us a finite value of |
2,239 | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity? | why is the term called Ridge Regression?
From Ridge Regression: Biased Estimation for Nonorthogonal Problems (1970):
A. E. Hoerl first suggested in 1962 [9] [11] that to control the inflation and
general instability associated with the least squares estimates, one can use
$$\beta^* = [X'X + kI]^{-1}X'Y; \quad k \geq 0 \qquad (2.1)$$
$$\beta^* = WX'Y \qquad (2.2)$$
The family of estimates given by $k \geq 0$ has many mathematical similarities
with the portrayal of quadratic response functions [10]. For this reason, estimation and analysis built around (2.1) has been labeled "ridge regression." | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to in | why is the term called Ridge Regression?
From Ridge Regression: Biased Estimation for Nonorthogonal Problems (1970):
A. E. Hoerl first suggested in 1962 [9] [11] that to control the inflation and
ge | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to infinity?
why is the term called Ridge Regression?
From Ridge Regression: Biased Estimation for Nonorthogonal Problems (1970):
A. E. Hoerl first suggested in 1962 [9] [11] that to control the inflation and
general instability associated with the least squares estimates, one can use
$$\beta^* = [X'X + kI]^{-1}X'Y; k \geq 0 (2.1) \\\\ = WX'Y (2.2)$$
The family of estimates given by $k \geq 0$ has many mathematical similarities
with the portrayal of quadratic response functions [10]. For this reason, estimation and analysis built around (2.1) has been labeled "ridge regression." | Why is ridge regression called "ridge", why is it needed, and what happens when $\lambda$ goes to in
why is the term called Ridge Regression?
From Ridge Regression: Biased Estimation for Nonorthogonal Problems (1970):
A. E. Hoerl first suggested in 1962 [9] [11] that to control the inflation and
ge |
2,240 | What are principal component scores? | First, let's define a score.
John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows:
Maths Science English Music
John 80 85 60 55
Mike 90 85 70 45
Kate 95 80 40 50
In this case there are 12 scores in total. Each score represents the exam results for each person in a particular subject. So a score in this case is simply a representation of where a row and column intersect.
Now let's informally define a Principal Component.
In the table above, can you easily plot the data in a 2D graph? No, because there are four subjects (which means four variables: Maths, Science, English, and Music), i.e.:
You could plot two subjects in the exact same way you would with $x$ and $y$ co-ordinates in a 2D graph.
You could even plot three subjects in the same way you would plot $x$, $y$ and $z$ in a 3D graph (though this is generally bad practice, because some distortion is inevitable in the 2D representation of 3D data).
But how would you plot 4 subjects?
At the moment we have four variables which each represent just one subject. So a method around this might be to somehow combine the subjects into maybe just two new variables which we can then plot. This is known as Multidimensional scaling.
Principal Component analysis is a form of multidimensional scaling. It is a linear transformation of the variables into a lower-dimensional space which retains the maximal amount of information about the variables. For example, this would mean we could look at the types of subjects each student is maybe more suited to.
A principal component is therefore a combination of the original variables after a linear transformation. In R, this is:
DF <- data.frame(Maths=c(80, 90, 95), Science=c(85, 85, 80),
English=c(60, 70, 40), Music=c(55, 45, 50))
prcomp(DF, scale = FALSE)
Which will give you something like this (first two Principal Components only for sake of simplicity):
PC1 PC2
Maths 0.27795606 0.76772853
Science -0.17428077 -0.08162874
English -0.94200929 0.19632732
Music 0.07060547 -0.60447104
The first column here shows coefficients of linear combination that defines principal component #1, and the second column shows coefficients for principal component #2.
So what is a Principal Component Score?
It's a score from the table at the end of this post (see below).
The above output from R means we can now plot each person's score across all subjects in a 2D graph as follows. First, we need to center the original variables by subtracting column means:
Maths Science English Music
John -8.33 1.66 3.33 5
Mike 1.66 1.66 13.33 -5
Kate 6.66 -3.33 -16.66 0
And then to form linear combinations to get PC1 and PC2 scores:
x y
John -0.28*8.33 + -0.17*1.66 + -0.94*3.33 + 0.07*5 -0.77*8.33 + -0.08*1.66 + 0.19*3.33 + -0.60*5
Mike 0.28*1.66 + -0.17*1.66 + -0.94*13.33 + -0.07*5    0.77*1.66 + -0.08*1.66 + 0.19*13.33 + 0.60*5
Kate 0.28*6.66 + 0.17*3.33 + 0.94*16.66 + 0.07*0 0.77*6.66 + 0.08*3.33 + -0.19*16.66 + -0.60*0
Which simplifies to:
x y
John -5.39 -8.90
Mike -12.74 6.78
Kate 18.13 2.12
There are six principal component scores in the table above. You can now plot the scores in a 2D graph to get a sense of the type of subjects each student is perhaps more suited to.
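This hand calculation can be checked in R (a minimal sketch reusing the DF defined earlier; note that the signs of whole components can flip between platforms):
DF_centered <- scale(DF, center = TRUE, scale = FALSE)        # subtract the column means
scores <- DF_centered %*% prcomp(DF, scale = FALSE)$rotation  # scores = centered data times loadings
round(scores[, 1:2], 2)                                       # the x and y values tabulated above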
The same output can be obtained in R by typing prcomp(DF, scale = FALSE)$x.
EDIT 1: Hmm, I probably could have thought up a better example, and there is more to it than what I've put here, but I hope you get the idea.
EDIT 2: full credit to @drpaulbrewer for his comment in improving this answer. | What are principal component scores? | First, let's define a score.
John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows:
Maths Science English Music
John 80 | What are principal component scores?
First, let's define a score.
John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows:
Maths Science English Music
John 80 85 60 55
Mike 90 85 70 45
Kate 95 80 40 50
In this case there are 12 scores in total. Each score represents the exam results for each person in a particular subject. So a score in this case is simply a representation of where a row and column intersect.
Now let's informally define a Principal Component.
In the table above, can you easily plot the data in a 2D graph? No, because there are four subjects (which means four variables: Maths, Science, English, and Music), i.e.:
You could plot two subjects in the exact same way you would with $x$ and $y$ co-ordinates in a 2D graph.
You could even plot three subjects in the same way you would plot $x$, $y$ and $z$ in a 3D graph (though this is generally bad practice, because some distortion is inevitable in the 2D representation of 3D data).
But how would you plot 4 subjects?
At the moment we have four variables which each represent just one subject. So a method around this might be to somehow combine the subjects into maybe just two new variables which we can then plot. This is known as Multidimensional scaling.
Principal Component analysis is a form of multidimensional scaling. It is a linear transformation of the variables into a lower dimensional space which retain maximal amount of information about the variables. For example, this would mean we could look at the types of subjects each student is maybe more suited to.
A principal component is therefore a combination of the original variables after a linear transformation. In R, this is:
DF <- data.frame(Maths=c(80, 90, 95), Science=c(85, 85, 80),
English=c(60, 70, 40), Music=c(55, 45, 50))
prcomp(DF, scale = FALSE)
Which will give you something like this (first two Principal Components only for sake of simplicity):
PC1 PC2
Maths 0.27795606 0.76772853
Science -0.17428077 -0.08162874
English -0.94200929 0.19632732
Music 0.07060547 -0.60447104
The first column here shows coefficients of linear combination that defines principal component #1, and the second column shows coefficients for principal component #2.
So what is a Principal Component Score?
It's a score from the table at the end of this post (see below).
The above output from R means we can now plot each person's score across all subjects in a 2D graph as follows. First, we need to center the original variables by subtracting column means:
Maths Science English Music
John -8.33 1.66 3.33 5
Mike 1.66 1.66 13.33 -5
Kate 6.66 -3.33 -16.66 0
And then to form linear combinations to get PC1 and PC2 scores:
x y
John -0.28*8.33 + -0.17*1.66 + -0.94*3.33 + 0.07*5 -0.77*8.33 + -0.08*1.66 + 0.19*3.33 + -0.60*5
Mike 0.28*1.66 + -0.17*1.66 + -0.94*13.33 + -0.07*5 0.77*1.66 + -0.08*1.66 + 0.19*13.33 + -0.60*5
Kate 0.28*6.66 + 0.17*3.33 + 0.94*16.66 + 0.07*0 0.77*6.66 + 0.08*3.33 + -0.19*16.66 + -0.60*0
Which simplifies to:
x y
John -5.39 -8.90
Mike -12.74 6.78
Kate 18.13 2.12
There are six principal component scores in the table above. You can now plot the scores in a 2D graph to get a sense of the type of subjects each student is perhaps more suited to.
The same output can be obtained in R by typing prcomp(DF, scale = FALSE)$x.
EDIT 1: Hmm, I probably could have thought up a better example, and there is more to it than what I've put here, but I hope you get the idea.
EDIT 2: full credit to @drpaulbrewer for his comment in improving this answer. | What are principal component scores?
First, let's define a score.
John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows:
Maths Science English Music
John 80 |
2,241 | What are principal component scores? | Principal component analysis (PCA) is one popular approach to analyzing variance when you are dealing with multivariate data. You have random variables X1, X2,...Xn which are all correlated (positively or negatively) to varying degrees, and you want to get a better understanding of what's going on. PCA can help.
What PCA gives you is a change of variable into Y1, Y2,..., Yn (i.e. the same number of variables) which are linear combinations of the Xs. For example, you might have Y1 = 2.1 X1 - 1.76 X2 + 0.2 X3...
The Ys have the nice property that each of them has zero correlation with the others. Better still, you get them in decreasing order of variance. So, Y1 "explains" a big chunk of the variance of the original variables, Y2 a bit less and so on. Usually after the first few Ys, the variables become somewhat meaningless. The PCA score for any of the Xi is just its coefficient in each of the Ys. In my earlier example, the score for X2 in the first principal component (Y1) is -1.76.
The way PCA does this magic is by computing eigenvectors of the covariance matrix.
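As a minimal sketch of that statement with made-up data, the loadings returned by prcomp match the eigenvectors of the sample covariance matrix (up to the sign of each column):
set.seed(3)
X <- matrix(rnorm(200), 50, 4)              # 50 observations of 4 made-up variables
ev <- eigen(cov(X))$vectors                 # eigenvectors of the covariance matrix
pr <- prcomp(X, scale = FALSE)$rotation     # PCA loadings
round(abs(ev) - abs(pr), 10)                # essentially zero: the same vectors up to sign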
To give a concrete example, imagine X1,...X10 are changes in 1 year, 2 year, ..., 10 year Treasury bond yields over some time period. When you compute PCA you generally find that the first component has scores for each bond of the same sign and about the same size. This tells you that most of the variance in bond yields comes from everything moving the same way: "parallel shifts" up or down. The second component typically shows "steepening" and "flattening" of the curve and has opposite signs for X1 and X10.
Principal component analysis (PCA) is one popular approach analyzing variance when you are dealing with multivariate data. You have random variables X1, X2,...Xn which are all correlated (positively or negatively) to varying degrees, and you want to get a better understanding of what's going on. PCA can help.
What PCA gives you is a change of variable into Y1, Y2,..., Yn (i.e. the same number of variables) which are linear combinations of the Xs. For example, you might have Y1 = 2.1 X1 - 1.76 X2 + 0.2 X3...
The Ys the nice property that each of these have zero correlation with each other. Better still, you get them in decreasing order of variance. So, Y1 "explains" a big chunk of the variance of the original variables, Y2 a bit less and so on. Usually after the first few Ys, the variables become somewhat meaningless. The PCA score for any of the Xi is just it's coefficient in each of the Ys. In my earlier example, the score for X2 in the first principal component (Y1) is 1.76.
The way PCA does this magic is by computing eigenvectors of the covariance matrix.
To give a concrete example, imagine X1,...X10 are changes in 1 year, 2 year, ..., 10 year Treasury bond yields over some time period. When you compute PCA you generally find that the first component has scores for each bond of the same sign and about the same size. This tells you that most of the variance in bond yields comes from everything moving the same way: "parallel shifts" up or down. The second component typically shows "steepening" and "flattening" of the curve and has opposite signs for X1 and X10. | What are principal component scores?
Principal component analysis (PCA) is one popular approach analyzing variance when you are dealing with multivariate data. You have random variables X1, X2,...Xn which are all correlated (positively o |
2,242 | What are principal component scores? | I like to think of principal component scores as "basically meaningless" until you actually give them some meaning. Interpreting PC scores in terms of "reality" is a tricky business - and there can really be no unique way to do it. It depends on what you know about the particular variables that are going into the PCA, and how they relate to each other in terms of interpretations.
As far as the mathematics goes, I like to interpret PC scores as the co-ordinates of each point, with respect to the principal component axes. So in the raw variables you have $\bf{}x_i$ $=(x_{1i},x_{2i},\dots,x_{pi})$ which is a "point" in p-dimensional space. In these co-ordinates, this means along the $x_{1}$ axis the point is a distance $x_{1i}$ away from the origin. Now a PCA is basically a different way to describe this "point" - with respect to its principal component axis, rather than the "raw variable" axis. So we have $\bf{}z_i$ $=(z_{1i},z_{2i},\dots,z_{pi})=\bf{}A(x_i-\overline{x})$, where $\bf{}A$ is the $p\times p$ matrix of principal component weights (i.e. eigenvectors in each row), and $\bf{}\overline{x}$ is the "centroid" of the data (or mean vector of the data points).
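A minimal R sketch of this coordinate view, under the assumption of some invented data (nothing here comes from the original answer except the notation):

set.seed(42)
X    <- matrix(rnorm(50 * 4), ncol = 4)   # 50 "points" in p = 4 dimensions
xbar <- colMeans(X)                       # the "centroid"
A    <- t(eigen(cov(X))$vectors)          # p x p weight matrix with eigenvectors in rows, as above
Z    <- t(A %*% (t(X) - xbar))            # z_i = A (x_i - xbar), one row of scores per point
head(Z)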
So you can think of the eigenvectors as describing where the "straight lines" which describe the PCs are. Then the principal component scores describe where each data point lies on each straight line, relative to the "centroid" of the data. You can also think of the PC scores in combination with the weights/eigenvectors as a series of rank 1 predictions for each of the original data points, which have the form:
$$\hat{x}_{ji}^{(k)}=\overline{x}_j+z_{ki}A_{kj}$$
Where $\hat{x}_{ji}^{(k)}$ is the prediction for the $i$th observation, for the $j$th variable using the $k$th PC. | What are principal component scores? | I like to think of principal component scores as "basically meaningless" until you actually give them some meaning. Interpretting PC scores in terms of "reality" is a tricky business - and there can | What are principal component scores?
I like to think of principal component scores as "basically meaningless" until you actually give them some meaning. Interpreting PC scores in terms of "reality" is a tricky business - and there can really be no unique way to do it. It depends on what you know about the particular variables that are going into the PCA, and how they relate to each other in terms of interpretations.
As far as the mathematics goes, I like to interpret PC scores as the co-ordinates of each point, with respect to the principal component axes. So in the raw variables you have $\bf{}x_i$ $=(x_{1i},x_{2i},\dots,x_{pi})$ which is a "point" in p-dimensional space. In these co-ordinates, this means along the $x_{1}$ axis the point is a distance $x_{1i}$ away from the origin. Now a PCA is basically a different way to describe this "point" - with respect to its principal component axis, rather than the "raw variable" axis. So we have $\bf{}z_i$ $=(z_{1i},z_{2i},\dots,z_{pi})=\bf{}A(x_i-\overline{x})$, where $\bf{}A$ is the $p\times p$ matrix of principal component weights (i.e. eigenvectors in each row), and $\bf{}\overline{x}$ is the "centroid" of the data (or mean vector of the data points).
So you can think of the eigenvectors as describing where the "straight lines" which describe the PCs are. Then the principal component scores describe where each data point lies on each straight line, relative to the "centroid" of the data. You can also think of the PC scores in combination with the weights/eigenvectors as a series of rank 1 predictions for each of the original data points, which have the form:
$$\hat{x}_{ji}^{(k)}=\overline{x}_j+z_{ki}A_{kj}$$
Where $\hat{x}_{ji}^{(k)}$ is the prediction for the $i$th observation, for the $j$th variable using the $k$th PC. | What are principal component scores?
I like to think of principal component scores as "basically meaningless" until you actually give them some meaning. Interpretting PC scores in terms of "reality" is a tricky business - and there can |
2,243 | What are principal component scores? | Say you have a cloud of N points in, say, 3D (which can be listed in a 100x3 array). Then, the principal components analysis (PCA) fits an arbitrarily oriented ellipsoid into the data. The principal component score is the length of the diameters of the ellipsoid.
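A hedged sketch of this picture with an invented 100 x 3 point cloud: the standard deviations reported by prcomp play the role of the ellipsoid's semi-axis lengths, and (as described just below) plotting the first two score columns gives the 2-D projection along the largest axes.

set.seed(7)
cloud <- matrix(rnorm(100 * 3), ncol = 3) %*% diag(c(5, 2, 0.5))  # stretched point cloud
fit   <- prcomp(cloud)
fit$sdev            # spread along each principal axis (the "diameters", up to a factor)
plot(fit$x[, 1:2])  # the cloud projected onto the two largest principal components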
In the direction in which the diameter is large, the data varies a lot, while in the direction in which the diameter is small, the data varies little. If you wanted to project N-d data into a 2-d scatter plot, you plot them along the two largest principal components, because with that approach you display most of the variance in the data. | What are principal component scores? | Say you have a cloud of N points in, say, 3D (which can be listed in a 100x3 array). Then, the principal components analysis (PCA) fits an arbitrarily oriented ellipsoid into the data. The principal c | What are principal component scores?
Say you have a cloud of N points in, say, 3D (which can be listed in a 100x3 array). Then, the principal components analysis (PCA) fits an arbitrarily oriented ellipsoid into the data. The principal component score is the length of the diameters of the ellipsoid.
In the direction in which the diameter is large, the data varies a lot, while in the direction in which the diameter is small, the data varies little. If you wanted to project N-d data into a 2-d scatter plot, you plot them along the two largest principal components, because with that approach you display most of the variance in the data. | What are principal component scores?
Say you have a cloud of N points in, say, 3D (which can be listed in a 100x3 array). Then, the principal components analysis (PCA) fits an arbitrarily oriented ellipsoid into the data. The principal c |
2,244 | What are principal component scores? | The principal components of a data matrix are the eigenvector-eigenvalue pairs of its variance-covariance matrix. In essence, they are the decorrelated pieces of the variance. Each one is a linear combination of the variables for an observation -- suppose you measure w, x, y,z on each of a bunch of subjects. Your first PC might work out to be something like
0.5w + 4x + 5y - 1.5z
The loadings (eigenvector components) here are (0.5, 4, 5, -1.5). The score for each observation is the resulting value when you substitute in the observed (w, x, y, z) and compute the total.
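A small hedged illustration of that arithmetic (the two subjects' measurements are invented; only the loading vector comes from the text, and a real PCA would centre the variables first):

loading  <- c(w = 0.5, x = 4, y = 5, z = -1.5)   # first-PC loadings quoted above
subjects <- rbind(c(1.2, 0.3, -0.5, 2.0),        # invented (w, x, y, z) for subject 1
                  c(0.4, 1.1,  0.9, -1.0))       # invented (w, x, y, z) for subject 2
subjects %*% loading                             # each row's score: 0.5w + 4x + 5y - 1.5z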
This comes in handy when you project things onto their principal components (for, say, outlier detection) because you just plot the scores on each like you would any other data. This can reveal a lot about your data if much of the variance is correlated (== in the first few PCs). | What are principal component scores? | The principal components of a data matrix are the eigenvector-eigenvalue pairs of its variance-covariance matrix. In essence, they are the decorrelated pieces of the variance. Each one is a linear c | What are principal component scores?
The principal components of a data matrix are the eigenvector-eigenvalue pairs of its variance-covariance matrix. In essence, they are the decorrelated pieces of the variance. Each one is a linear combination of the variables for an observation -- suppose you measure w, x, y,z on each of a bunch of subjects. Your first PC might work out to be something like
0.5w + 4x + 5y - 1.5z
The loadings (eigenvector components) here are (0.5, 4, 5, -1.5). The score for each observation is the resulting value when you substitute in the observed (w, x, y, z) and compute the total.
This comes in handy when you project things onto their principal components (for, say, outlier detection) because you just plot the scores on each like you would any other data. This can reveal a lot about your data if much of the variance is correlated (== in the first few PCs). | What are principal component scores?
The principal components of a data matrix are the eigenvector-eigenvalue pairs of its variance-covariance matrix. In essence, they are the decorrelated pieces of the variance. Each one is a linear c |
2,245 | What are principal component scores? | Let $i=1,\dots,N$ index the rows and $j=1,\dots,M$ index the columns. Suppose you linearize the combination of variables (columns):
$$Z_{i,1} = c_{i,1}\cdot Y_{i,1} + c_{i,2}\cdot Y_{i,2} + ... + c_{i,M}\cdot Y_{i,M}$$
The above formula basically says: for each row, multiply the elements by the corresponding values $c$ (the loadings) and sum across the columns. The resulting values (the $Y$ values weighted by the loadings) are the scores.
A principal component (PC) is a linear combination $Z_1 = (Z_{1,1}, ..., Z_{N,1}$) (values by columns which are called scores). In essence, the PC should present the most important features of variables (columns). Ergo, you can extract as many PC as there are variables (or less).
An output from R on PCA (a fake example) looks like this. PC1, PC2... are principal components 1, 2... The example below is showing only the first 8 principal components (out of 17). You can also extract other elements from PCA, like loadings and scores.
Importance of components:
PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
Standard deviation 1.0889 1.0642 1.0550 1.0475 1.0387 1.0277 1.0169 1.0105
Proportion of Variance 0.0697 0.0666 0.0655 0.0645 0.0635 0.0621 0.0608 0.0601
Cumulative Proportion 0.0697 0.1364 0.2018 0.2664 0.3298 0.3920 0.4528 0.5129 | What are principal component scores? | Let $i=1,\dots,N$ index the rows and $j=1,\dots,M$ index the columns. Suppose you linearize the combination of variables (columns):
$$Z_{i,1} = c_{i,1}\cdot Y_{i,1} + c_{i,2}\cdot Y_{i,2} + ... + c_{i | What are principal component scores?
Let $i=1,\dots,N$ index the rows and $j=1,\dots,M$ index the columns. Suppose you linearize the combination of variables (columns):
$$Z_{i,1} = c_{i,1}\cdot Y_{i,1} + c_{i,2}\cdot Y_{i,2} + ... + c_{i,M}\cdot Y_{i,M}$$
The above formula basically says to multiply row elements with a certain value $c$ (loadings) and sum them by columns. Resulting values ($Y$ values times the loading) are scores.
A principal component (PC) is a linear combination $Z_1 = (Z_{1,1}, ..., Z_{N,1}$) (values by columns which are called scores). In essence, the PC should present the most important features of variables (columns). Ergo, you can extract as many PC as there are variables (or less).
An output from R on PCA (a fake example) looks like this. PC1, PC2... are principal components 1, 2... The example below is showing only the first 8 principal components (out of 17). You can also extract other elements from PCA, like loadings and scores.
Importance of components:
PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
Standard deviation 1.0889 1.0642 1.0550 1.0475 1.0387 1.0277 1.0169 1.0105
Proportion of Variance 0.0697 0.0666 0.0655 0.0645 0.0635 0.0621 0.0608 0.0601
Cumulative Proportion 0.0697 0.1364 0.2018 0.2664 0.3298 0.3920 0.4528 0.5129 | What are principal component scores?
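An output like the "Importance of components" table above is typically produced with summary() on a prcomp fit; a hedged sketch with invented data (the numbers will not match the fake example):

set.seed(3)
dat <- matrix(rnorm(200 * 17), ncol = 17)  # invented data with 17 variables
fit <- prcomp(dat, scale. = TRUE)
summary(fit)          # prints Standard deviation / Proportion of Variance / Cumulative Proportion
fit$rotation[, 1:2]   # loadings of the first two principal components
head(fit$x[, 1:2])    # scores of the first two principal components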
Let $i=1,\dots,N$ index the rows and $j=1,\dots,M$ index the columns. Suppose you linearize the combination of variables (columns):
$$Z_{i,1} = c_{i,1}\cdot Y_{i,1} + c_{i,2}\cdot Y_{i,2} + ... + c_{i |
2,246 | What are principal component scores? | Principal component scores are a group of scores that are obtained following a Principle Components Analysis (PCA). In PCA the relationships between a group of scores is analyzed such that an equal number of new "imaginary" variables (aka principle components) are created. The first of these new imaginary variables is maximally correlated with all of the original group of variables. The next is somewhat less correlated, and so forth until the point that if you used all of the principal components scores to predict any given variable from the initial group you would be able to explain all of its variance. The way in which PCA proceeds is complex and has certain restrictions. Among these is the restriction that the correlation between any two principal components (i.e. imaginary variables) is zero; thus it doesn't make sense to try to predict one principal component with another. | What are principal component scores? | Principal component scores are a group of scores that are obtained following a Principle Components Analysis (PCA). In PCA the relationships between a group of scores is analyzed such that an equal n | What are principal component scores?
Principal component scores are a group of scores that are obtained following a Principal Components Analysis (PCA). In PCA the relationships between a group of scores are analyzed such that an equal number of new "imaginary" variables (aka principal components) are created. The first of these new imaginary variables is maximally correlated with all of the original group of variables. The next is somewhat less correlated, and so forth until the point that if you used all of the principal component scores to predict any given variable from the initial group you would be able to explain all of its variance. The way in which PCA proceeds is complex and has certain restrictions. Among these is the restriction that the correlation between any two principal components (i.e. imaginary variables) is zero; thus it doesn't make sense to try to predict one principal component with another. | What are principal component scores?
Principal component scores are a group of scores that are obtained following a Principle Components Analysis (PCA). In PCA the relationships between a group of scores is analyzed such that an equal n |
2,247 | How to apply Neural Network to time series forecasting? | Here is a simple recipe that may help you get started writing code and testing ideas...
Let's assume you have monthly data recorded over several years, so you have 36 values. Let's also assume that you only care about predicting one month (value) in advance.
Exploratory data analysis: Apply some of the traditional time series analysis methods to estimate the lag dependence in the data (e.g. auto-correlation and partial auto-correlation plots, transformations, differencing).
Let's say that you find a given month's value is correlated with the past three months' data but not much beyond that.
Partition your data into training and validation sets: Take the first 24 points as your training values and the remaining points as the validation set.
Create the neural network layout: You'll take the past three month's values as inputs and you want to predict the next month's value. So, you need a neural network with an input layer containing three nodes and an output layer containing one node. You should probably have a hidden layer with at least a couple of nodes. Unfortunately, picking the number of hidden layers, and their respective number of nodes, is not something for which there are clear guidelines. I'd start small, like 3:2:1.
Create the training patterns: Each training pattern will be four values, with the first three corresponding to the input nodes and the last one defining what the correct value is for the output node. For example, if your training data are values $$x_1,x_2\dots,x_{24}$$ then $$pattern 1: x_1,x_2,x_3,x_4$$ $$pattern 2: x_2,x_3,x_4,x_5$$ $$\dots$$ $$pattern 21: x_{21},x_{22},x_{23},x_{24}$$
Train the neural network on these patterns
Test the network on the validation set (months 25-36): Here you will pass in the three values the neural network needs for the input layer and see what the output node gets set to. So, to see how well the trained neural network can predict month 32's value you'll pass in values for months 29, 30, and 31
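A hedged end-to-end sketch of the recipe above for an invented 36-month series. The choice of the nnet package (a single-hidden-layer network, here 3:2:1) is an assumption of this sketch, not something prescribed by the answer:

library(nnet)
set.seed(123)
y <- as.numeric(arima.sim(list(ar = 0.7), n = 36))  # invented monthly series

lagged <- embed(y, 4)          # row t: value at month t+3 followed by the three previous months
target <- lagged[, 1]          # the value to predict
inputs <- lagged[, 2:4]        # the past three months' values

train <- 1:21                  # the 21 patterns built from months 1-24
fit <- nnet(inputs[train, ], target[train],
            size = 2, linout = TRUE, maxit = 500, trace = FALSE)

pred <- predict(fit, inputs[-train, ])   # one-step-ahead forecasts for months 25-36
mean((pred - target[-train])^2)          # validation mean squared error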
This recipe is obviously high level and you may scratch your head at first when trying to map your context into different software libraries/programs. But, hopefully this sketches out the main point: you need to create training patterns that reasonably contain the correlation structure of the series you are trying to forecast. And whether you do the forecasting with a neural network or an ARIMA model, the exploratory work to determine what that structure is is often the most time consuming and difficult part.
In my experience, neural networks can provide great classification and forecasting functionality but setting them up can be time consuming. In the example above, you may find that 21 training patterns are not enough; different input data transformations lead to better or worse forecasts; varying the number of hidden layers and hidden layer nodes greatly affects forecasts; etc.
I highly recommend looking at the neural_forecasting website, which contains tons of information on neural network forecasting competitions. The Motivations page is especially useful. | How to apply Neural Network to time series forecasting? | Here is a simple recipe that may help you get started writing code and testing ideas...
Let's assume you have monthly data recorded over several years, so you have 36 values. Let's also assume that yo | How to apply Neural Network to time series forecasting?
Here is a simple recipe that may help you get started writing code and testing ideas...
Let's assume you have monthly data recorded over several years, so you have 36 values. Let's also assume that you only care about predicting one month (value) in advance.
Exploratory data analysis: Apply some of the traditional time series analysis methods to estimate the lag dependence in the data (e.g. auto-correlation and partial auto-correlation plots, transformations, differencing).
Let's say that you find a given month's value is correlated with the past three months' data but not much beyond that.
Partition your data into training and validation sets: Take the first 24 points as your training values and the remaining points as the validation set.
Create the neural network layout: You'll take the past three month's values as inputs and you want to predict the next month's value. So, you need a neural network with an input layer containing three nodes and an output layer containing one node. You should probably have a hidden layer with at least a couple of nodes. Unfortunately, picking the number of hidden layers, and their respective number of nodes, is not something for which there are clear guidelines. I'd start small, like 3:2:1.
Create the training patterns: Each training pattern will be four values, with the first three corresponding to the input nodes and the last one defining what the correct value is for the output node. For example, if your training data are values $$x_1,x_2\dots,x_{24}$$ then $$pattern 1: x_1,x_2,x_3,x_4$$ $$pattern 2: x_2,x_3,x_4,x_5$$ $$\dots$$ $$pattern 21: x_{21},x_{22},x_{23},x_{24}$$
Train the neural network on these patterns
Test the network on the validation set (months 25-36): Here you will pass in the three values the neural network needs for the input layer and see what the output node gets set to. So, to see how well the trained neural network can predict month 32's value you'll pass in values for months 29, 30, and 31
This recipe is obviously high level and you may scratch your head at first when trying to map your context into different software libraries/programs. But, hopefully this sketches out the main point: you need to create training patterns that reasonably contain the correlation structure of the series you are trying to forecast. And whether you do the forecasting with a neural network or an ARIMA model, the exploratory work to determine what that structure is is often the most time consuming and difficult part.
In my experience, neural networks can provide great classification and forecasting functionality but setting them up can be time consuming. In the example above, you may find that 21 training patterns is not enough; different input data transformations lead to a better/worse forecasts; varying the number of hidden layers and hidden layer nodes greatly affects forecasts; etc.
I highly recommend looking at the neural_forecasting website, which contains tons of information on neural network forecasting competitions. The Motivations page is especially useful. | How to apply Neural Network to time series forecasting?
Here is a simple recipe that may help you get started writing code and testing ideas...
Let's assume you have monthly data recorded over several years, so you have 36 values. Let's also assume that yo |
2,248 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or equivalently the design matrix) with a single outcome (Y response), while 'multivariate' refers to a matrix of response vectors. I cannot remember the author who starts his introductory section on multivariate modeling with that consideration, but I think it is Brian Everitt in his textbook An R and S-Plus Companion to Multivariate Analysis. For a thorough discussion about this, I would suggest looking at his latest book, Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences.
For 'variate', I would say this is a common way to refer to any random variable that follows a known or hypothesized distribution, e.g. we speak of gaussian variates $X_i$ as a series of observations drawn from a normal distribution (with parameters $\mu$ and $\sigma^2$). In probabilistic terms, we said that these are some random realizations of X, with mathematical expectation $\mu$, and about 95% of them are expected to lie on the range $[\mu-2\sigma;\mu+2\sigma]$ . | Explain the difference between multiple regression and multivariate regression, with minimal use of | Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or equivalently the design matrix) with a single outcome (Y response), while 'multivariate' refers to a | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or equivalently the design matrix) with a single outcome (Y response), while 'multivariate' refers to a matrix of response vectors. Cannot remember the author who starts its introductory section on multivariate modeling with that consideration, but I think it is Brian Everitt in his textbook An R and S-Plus Companion to Multivariate Analysis. For a thorough discussion about this, I would suggest to look at his latest book, Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences.
For 'variate', I would say this is a common way to refer to any random variable that follows a known or hypothesized distribution, e.g. we speak of gaussian variates $X_i$ as a series of observations drawn from a normal distribution (with parameters $\mu$ and $\sigma^2$). In probabilistic terms, we said that these are some random realizations of X, with mathematical expectation $\mu$, and about 95% of them are expected to lie on the range $[\mu-2\sigma;\mu+2\sigma]$ . | Explain the difference between multiple regression and multivariate regression, with minimal use of
Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or equivalently the design matrix) with a single outcome (Y response), while 'multivariate' refers to a |
2,249 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | Here are two closely related examples which illustrate the ideas. The examples are somewhat US centric but the ideas can be extrapolated to other countries.
Example 1
Suppose that a university wishes to refine its admission criteria so that they admit 'better' students. Also, suppose that a student's Grade Point Average (GPA) is what the university wishes to use as a performance metric for students. They have several criteria in mind, such as high school GPA (HSGPA), SAT scores (SAT), Gender, etc., and would like to know which of these criteria matter as far as GPA is concerned.
Solution: Multiple Regression
In the above context, there is one dependent variable (GPA) and you have multiple independent variables (HSGPA, SAT, Gender etc). You want to find out which of the independent variables are good predictors for your dependent variable. You would use multiple regression to make this assessment.
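A hedged R sketch of Example 1 (the admissions data frame, its column names, and the numbers are all invented here):

set.seed(10)
n <- 200
admissions <- data.frame(
  HSGPA  = runif(n, 2, 4),
  SAT    = round(runif(n, 900, 1600)),
  Gender = factor(sample(c("F", "M"), n, replace = TRUE))
)
admissions$GPA <- with(admissions, 1 + 0.5 * HSGPA + 0.001 * SAT + rnorm(n, sd = 0.3))

fit1 <- lm(GPA ~ HSGPA + SAT + Gender, data = admissions)  # one outcome, several predictors
summary(fit1)   # which predictors matter for the single outcome GPA?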
Example 2
Instead of the above situation, suppose the admissions office wants to track student performance across time and wishes to determine which one of their criteria drives student performance across time. In other words, they have GPA scores for the four years that a student stays in school (say, GPA1, GPA2, GPA3, GPA4) and they want to know which one of the independent variables predict GPA scores better on a year-by-year basis. The admissions office hopes to find that the same independent variables predict performance across all four years so that their choice of admissions criteria ensures that student performance is consistently high across all four years.
Solution: Multivariate Regression
In example 2, we have multiple dependent variables (i.e., GPA1, GPA2, GPA3, GPA4) and multiple independent variables. In such a situation, you would use multivariate regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of | Here are two closely related examples which illustrate the ideas. The examples are somewhat US centric but the ideas can be extrapolated to other countries.
Example 1
Suppose that a university wishes | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
Here are two closely related examples which illustrate the ideas. The examples are somewhat US centric but the ideas can be extrapolated to other countries.
Example 1
Suppose that a university wishes to refine its admission criteria so that they admit 'better' students. Also, suppose that a student's grade Point Average (GPA) is what the university wishes to use as a performance metric for students. They have several criteria in mind such as high school GPA (HSGPA), SAT scores (SAT), Gender etc and would like to know which one of these criteria matter as far as GPA is concerned.
Solution: Multiple Regression
In the above context, there is one dependent variable (GPA) and you have multiple independent variables (HSGPA, SAT, Gender etc). You want to find out which one of the independent variables are good predictors for your dependent variable. You would use multiple regression to make this assessment.
Example 2
Instead of the above situation, suppose the admissions office wants to track student performance across time and wishes to determine which one of their criteria drives student performance across time. In other words, they have GPA scores for the four years that a student stays in school (say, GPA1, GPA2, GPA3, GPA4) and they want to know which one of the independent variables predict GPA scores better on a year-by-year basis. The admissions office hopes to find that the same independent variables predict performance across all four years so that their choice of admissions criteria ensures that student performance is consistently high across all four years.
Solution: Multivariate Regression
In example 2, we have multiple dependent variables (i.e., GPA1, GPA2, GPA3, GPA4) and multiple independent variables. In such a situation, you would use multivariate regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of
Here are two closely related examples which illustrate the ideas. The examples are somewhat US centric but the ideas can be extrapolated to other countries.
Example 1
Suppose that a university wishes |
2,250 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | Simple regression pertains to one dependent variable ($y$) and one independent variable ($x$): $y = f(x)$
Multiple regression (aka multivariable regression) pertains to one dependent variable and multiple independent variables: $y = f(x_1, x_2, ..., x_n)$
Multivariate regression pertains to multiple dependent variables and multiple independent variables: $y_1, y_2, ..., y_m = f(x_1, x_2, ..., x_n)$. You may encounter problems where both the dependent and independent variables are arranged as matrices of variables (e.g. $y_{11}, y_{12}, ...$ and $x_{11}, x_{12}, ...$), so the expression may be written as $Y = f(X)$, where capital letters indicate matrices.
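A hedged R sketch of the three cases with invented data; R's lm() fits a multivariate regression when the left-hand side is a matrix built with cbind():

set.seed(2)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 + rnorm(n)
y1 <- 1 + 2 * x1 - x2 + rnorm(n)
y2 <- 3 - x1 + 0.5 * x2 + rnorm(n)

lm(y ~ x1)                   # simple regression: one y, one x
lm(y1 ~ x1 + x2)             # multiple regression: one y, several x's
lm(cbind(y1, y2) ~ x1 + x2)  # multivariate regression: a matrix of y's, several x's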
Further reading:
"R Cookbook" by P. Teetor, O'Reilly publisher, 2011, Chapter 11 on "Linear Regression and ANOVA".
Quora question "What is the difference between a multiple linear regression and a multivariate regression?"
Mathworks (Matlab) tutorial on linear regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of | Simple regression pertains to one dependent variable ($y$) and one independent variable ($x$): $y = f(x)$
Multiple regression (aka multivariable regression) pertains to one dependent variable and mult | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
Simple regression pertains to one dependent variable ($y$) and one independent variable ($x$): $y = f(x)$
Multiple regression (aka multivariable regression) pertains to one dependent variable and multiple independent variables: $y = f(x_1, x_2, ..., x_n)$
Multivariate regression pertains to multiple dependent variables and multiple independent variables: $y_1, y_2, ..., y_m = f(x_1, x_2, ..., x_n)$. You may encounter problems where both the dependent and independent variables are arranged as matrices of variables (e.g. $y_{11}, y_{12}, ...$ and $x_{11}, x_{12}, ...$), so the expression may be written as $Y = f(X)$, where capital letters indicate matrices.
Further reading:
"R Cookbook" by P. Teetor, O'Reilly publisher, 2011, Chapter 11 on "Linear Regression and ANOVA".
Quora question "What is the difference between a multiple linear regression and a multivariate regression?"
Mathworks (Matlab) tutorial on linear regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of
Simple regression pertains to one dependent variable ($y$) and one independent variable ($x$): $y = f(x)$
Multiple regression (aka multivariable regression) pertains to one dependent variable and mult |
2,251 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | I think the key insight (and differentiator) here aside from the number of variables on either side of the equation is that for the case of multivariate regression, the goal is to utilize the fact that there is (generally) correlation between response variables (or outcomes). For example, in a medical trial, predictors might be weight, age, and race, and outcome variables are blood pressure and cholesterol. We could, in theory, create two "multiple regression" models, one regressing blood pressure on weight, age, and race, and a second model regressing cholesterol on those same factors. However, alternatively, we could create a single multivariate regression model that predicts both blood pressure and cholesterol simultaneously based on the three predictor variables. The idea being that the multivariate regression model may be better (more predictive) to the extent that it can learn more from the correlation between blood pressure and cholesterol in patients. | Explain the difference between multiple regression and multivariate regression, with minimal use of | I think the key insight (and differentiator) here aside from the number of variables on either side of the equation is that for the case of multivariate regression, the goal is to utilize the fact tha | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
I think the key insight (and differentiator) here aside from the number of variables on either side of the equation is that for the case of multivariate regression, the goal is to utilize the fact that there is (generally) correlation between response variables (or outcomes). For example, in a medical trial, predictors might be weight, age, and race, and outcome variables are blood pressure and cholesterol. We could, in theory, create two "multiple regression" models, one regressing blood pressure on weight, age, and race, and a second model regressing cholesterol on those same factors. However, alternatively, we could create a single multivariate regression model that predicts both blood pressure and cholesterol simultaneously based on the three predictor variables. The idea being that the multivariate regression model may be better (more predictive) to the extent that it can learn more from the correlation between blood pressure and cholesterol in patients. | Explain the difference between multiple regression and multivariate regression, with minimal use of
I think the key insight (and differentiator) here aside from the number of variables on either side of the equation is that for the case of multivariate regression, the goal is to utilize the fact tha |
2,252 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | In multivariate regression there is more than one dependent variable, each with its own variance (or distribution). There may also be more than one predictor variable. So it may be viewed as a multiple regression with a matrix of dependent variables, i.e. multiple variances.
But when we say multiple regression, we mean only one dependent variable with a single distribution or variance. The predictor variables are more than one.
To summarise, 'multiple' refers to more than one predictor variable, while 'multivariate' refers to more than one dependent variable. | Explain the difference between multiple regression and multivariate regression, with minimal use of | In multivariate regression there are more than one dependent variable with different variances (or distributions). The predictor variables may be more than one or multiple. So it is may be a multiple | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
In multivariate regression there is more than one dependent variable, each with its own variance (or distribution). There may also be more than one predictor variable. So it may be viewed as a multiple regression with a matrix of dependent variables, i.e. multiple variances.
But when we say multiple regression, we mean only one dependent variable with a single distribution or variance. The predictor variables are more than one.
To summarise, 'multiple' refers to more than one predictor variable, while 'multivariate' refers to more than one dependent variable. | Explain the difference between multiple regression and multivariate regression, with minimal use of
In multivariate regression there are more than one dependent variable with different variances (or distributions). The predictor variables may be more than one or multiple. So it is may be a multiple |
2,253 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | There is no difference. This is because the maximum likelihood solution of the parameters of the joint problem $Y = W^T φ(x)$ with K target variables decouples to K independent regression problems, assuming a conditional distribution of the target vector to be an isotropic Gaussian of the form $p(t|φ(x),W, β) = N (t|W^T φ(x), β^{-1} I)$. Refer to section '3.1.5 Multiple outputs' from the book 'Pattern Recognition and Machine Learning', Bishop for details. | Explain the difference between multiple regression and multivariate regression, with minimal use of | There is no difference. This is because the maximum likelihood solution of the parameters of the joint problem $Y = W^T φ(x)$ with K target variables decouples to K independent regression problems, as | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
There is no difference. This is because the maximum likelihood solution of the parameters of the joint problem $Y = W^T φ(x)$ with K target variables decouples to K independent regression problems, assuming a conditional distribution of the target vector to be an isotropic Gaussian of the form $p(t|φ(x),W, β) = N (t|W^T φ(x), β^{-1} I)$. Refer to section '3.1.5 Multiple outputs' from the book 'Pattern Recognition and Machine Learning', Bishop for details. | Explain the difference between multiple regression and multivariate regression, with minimal use of
There is no difference. This is because the maximum likelihood solution of the parameters of the joint problem $Y = W^T φ(x)$ with K target variables decouples to K independent regression problems, as |
2,254 | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math | There ain’t no difference between multiple regression and multivariate regression in that, they both constitute a system with 2 or more independent variables and 1 or more dependent variables. As long as the outcome doesn’t depend on lag obs or a single predictor, it’s called multiple or multivariate regression otherwise it is termed univariate regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of | There ain’t no difference between multiple regression and multivariate regression in that, they both constitute a system with 2 or more independent variables and 1 or more dependent variables. As long | Explain the difference between multiple regression and multivariate regression, with minimal use of symbols/math
There ain’t no difference between multiple regression and multivariate regression in that, they both constitute a system with 2 or more independent variables and 1 or more dependent variables. As long as the outcome doesn’t depend on lag obs or a single predictor, it’s called multiple or multivariate regression otherwise it is termed univariate regression. | Explain the difference between multiple regression and multivariate regression, with minimal use of
There ain’t no difference between multiple regression and multivariate regression in that, they both constitute a system with 2 or more independent variables and 1 or more dependent variables. As long |
2,255 | Understanding the role of the discount factor in reinforcement learning | TL;DR.
The fact that the discount rate is bounded to be smaller than 1 is a mathematical trick to make an infinite sum finite. This helps proving the convergence of certain algorithms.
In practice, the discount factor could be used to model the fact that the decision maker is uncertain about if in the next decision instant the world (e.g., environment / game / process ) is going to end.
For example:
If the decision maker is a robot, the discount factor could be the probability that the robot is switched off in the next time instant (the world ends, in the previous terminology). That is the reason why the robot is short-sighted and does not optimize the sum reward but the discounted sum reward.
Discount factor smaller than 1 (In Detail)
In order to answer more precisely, why the discount rate has to be smaller than one I will first introduce the Markov Decision Processes (MDPs).
Reinforcement learning techniques can be used to solve MDPs. An MDP provides a mathematical framework for modeling decision-making situations where outcomes are partly random and partly under the control of the decision maker. An MDP is defined via a state space $\mathcal{S}$, an action space $\mathcal{A}$, a function of transition probabilities between states (conditioned to the action taken by the decision maker), and a reward function.
In its basic setting, the decision maker takes an action, gets a reward from the environment, and the environment changes its state. Then the decision maker senses the state of the environment, takes an action, gets a reward, and so on and so forth. The state transitions are probabilistic and depend solely on the actual state and the action taken by the decision maker. The reward obtained by the decision maker depends on the action taken, and on both the original and the new state of the environment.
A reward $R_{a_i}(s_j,s_k)$ is obtained when taking action $a_i$ in state $s_j$ and the environment/system changes to state $s_k$ after the decision maker takes action $a_i$. The decision maker follows a policy, $\pi$ $\pi(\cdot):\mathcal{S}\rightarrow\mathcal{A}$, that for each state $s_j \in \mathcal{S}$ takes an action $a_i \in \mathcal{A}$. So that the policy is what tells the decision maker which actions to take in each state. The policy $\pi$ may be randomized as well but it does not matter for now.
The objective is to find a policy $\pi$ such that
\begin{equation} \label{eq:1}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\} (1),
\end{equation}
where $\beta$ is the discount factor and $\beta<1$.
Note that the optimization problem above, has infinite time horizon ($T\rightarrow \infty $), and the objective is to maximize the sum $discounted$ reward (the reward $R$ is multiplied by $\beta^n$).
This is usually called an MDP problem with a infinite horizon discounted reward criteria.
The problem is called discounted because $\beta<1$. If it were not a discounted problem ($\beta=1$), the sum would not converge: any policy that obtains on average a positive reward at each time instant sums up to infinity. That would be an infinite horizon sum reward criterion, which is not a good optimization criterion.
Here is a toy example to show you what I mean:
Assume that there are only two possible actions $a={0,1}$ and that the reward function $R$ is equal to $1$ if $a=1$, and $0$ if $a=0$ (reward does not depend on the state).
It is clear that the policy that gets more reward is to always take action $a=1$ and never take action $a=0$.
I'll call this policy $\pi^*$. I'll compare $\pi^*$ to another policy $\pi'$ that takes action $a=1$ with small probability $\alpha << 1$, and action $a=0$ otherwise.
In the infinite horizon discounted reward criteria equation (1) becomes $\frac{1}{1-\beta}$ (the sum of a geometric series) for policy $\pi^*$ while for policy $\pi '$ equation (1) becomes $\frac{\alpha}{1-\beta}$. Since $\frac{1}{1-\beta} > \frac{\alpha}{1-\beta}$, we say that $\pi^*$ is a better policy than $\pi '$. Actually $\pi^*$ is the optimal policy.
In the infinite horizon sum reward criterion ($\beta=1$) equation (1) does not converge for any of the policies (it sums up to infinity). So whereas policy $\pi^*$ achieves higher rewards than $\pi'$, both policies are equal according to this criterion. That is one reason why the infinite horizon sum reward criterion is not useful.
As I mentioned before, $\beta<1$ does the trick of making the sum in equation (1) converge.
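A hedged numerical check of the toy example (the values of $\beta$ and $\alpha$ are invented, and the first reward is taken as undiscounted, i.e. the sum runs over $\beta^n$ from $n=0$):

beta  <- 0.9    # discount factor
alpha <- 0.1    # probability that pi' takes a = 1
n     <- 0:200  # time steps

sum(beta^n * 1);     1 / (1 - beta)       # pi*: reward 1 each step, converges to 10
sum(beta^n * alpha); alpha / (1 - beta)   # pi': expected reward alpha each step, converges to 1

# With beta = 1 the partial sums just keep growing instead of converging:
sum(rep(1, 201)); sum(rep(alpha, 201))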
Other optimality criteria
There are other optimality criteria that do not impose that $\beta<1$:
In the finite horizon criterion case, the objective is to maximize the discounted reward until the time horizon $T$
\begin{equation} \label{eq:2}
\max_{\pi:S(n)\rightarrow a_i} E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\},
\end{equation}
for $\beta \leq 1$ and $T$ finite.
In the infinite horizon average reward criteria the objective is
\begin{equation}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \frac{1}{T} R_{x_i}(S(n),S(n+1)) \right\},
\end{equation}
End note
Depending on the optimality criterion one would use a different algorithm to find the optimal policy. For instance, the optimal policies of the finite horizon problems would depend on both the state and the actual time instant. Most Reinforcement Learning algorithms (such as SARSA or Q-learning) converge to the optimal policy only for the discounted reward infinite horizon criterion (the same happens for the Dynamic programming algorithms). For the average reward criterion there is no algorithm that has been shown to converge to the optimal policy; however, one can use R-learning, which has good performance albeit not good theoretical convergence. | Understanding the role of the discount factor in reinforcement learning | TL;DR.
The fact that the discount rate is bounded to be smaller than 1 is a mathematical trick to make an infinite sum finite. This helps proving the convergence of certain algorithms.
In practice, th | Understanding the role of the discount factor in reinforcement learning
TL;DR.
The fact that the discount rate is bounded to be smaller than 1 is a mathematical trick to make an infinite sum finite. This helps proving the convergence of certain algorithms.
In practice, the discount factor could be used to model the fact that the decision maker is uncertain about if in the next decision instant the world (e.g., environment / game / process ) is going to end.
For example:
If the decision maker is a robot, the discount factor could be the probability that the robot is switched off in the next time instant (the world ends, in the previous terminology). That is the reason why the robot is short-sighted and does not optimize the sum reward but the discounted sum reward.
Discount factor smaller than 1 (In Detail)
In order to answer more precisely, why the discount rate has to be smaller than one I will first introduce the Markov Decision Processes (MDPs).
Reinforcement learning techniques can be used to solve MDPs. An MDP provides a mathematical framework for modeling decision-making situations where outcomes are partly random and partly under the control of the decision maker. An MDP is defined via a state space $\mathcal{S}$, an action space $\mathcal{A}$, a function of transition probabilities between states (conditioned to the action taken by the decision maker), and a reward function.
In its basic setting, the decision maker takes an action, gets a reward from the environment, and the environment changes its state. Then the decision maker senses the state of the environment, takes an action, gets a reward, and so on and so forth. The state transitions are probabilistic and depend solely on the actual state and the action taken by the decision maker. The reward obtained by the decision maker depends on the action taken, and on both the original and the new state of the environment.
A reward $R_{a_i}(s_j,s_k)$ is obtained when taking action $a_i$ in state $s_j$ and the environment/system changes to state $s_k$ after the decision maker takes action $a_i$. The decision maker follows a policy, $\pi$ $\pi(\cdot):\mathcal{S}\rightarrow\mathcal{A}$, that for each state $s_j \in \mathcal{S}$ takes an action $a_i \in \mathcal{A}$. So that the policy is what tells the decision maker which actions to take in each state. The policy $\pi$ may be randomized as well but it does not matter for now.
The objective is to find a policy $\pi$ such that
\begin{equation} \label{eq:1}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\} (1),
\end{equation}
where $\beta$ is the discount factor and $\beta<1$.
Note that the optimization problem above, has infinite time horizon ($T\rightarrow \infty $), and the objective is to maximize the sum $discounted$ reward (the reward $R$ is multiplied by $\beta^n$).
This is usually called an MDP problem with a infinite horizon discounted reward criteria.
The problem is called discounted because $\beta<1$. If it were not a discounted problem ($\beta=1$), the sum would not converge: any policy that obtains on average a positive reward at each time instant sums up to infinity. That would be an infinite horizon sum reward criterion, which is not a good optimization criterion.
Here is a toy example to show you what I mean:
Assume that there are only two possible actions $a={0,1}$ and that the reward function $R$ is equal to $1$ if $a=1$, and $0$ if $a=0$ (reward does not depend on the state).
It is clear that the policy that gets more reward is to always take action $a=1$ and never take action $a=0$.
I'll call this policy $\pi^*$. I'll compare $\pi^*$ to another policy $\pi'$ that takes action $a=1$ with small probability $\alpha << 1$, and action $a=0$ otherwise.
In the infinite horizon discounted reward criteria equation (1) becomes $\frac{1}{1-\beta}$ (the sum of a geometric series) for policy $\pi^*$ while for policy $\pi '$ equation (1) becomes $\frac{\alpha}{1-\beta}$. Since $\frac{1}{1-\beta} > \frac{\alpha}{1-\beta}$, we say that $\pi^*$ is a better policy than $\pi '$. Actually $\pi^*$ is the optimal policy.
In the infinite horizon sum reward criterion ($\beta=1$) equation (1) does not converge for any of the policies (it sums up to infinity). So whereas policy $\pi^*$ achieves higher rewards than $\pi'$, both policies are equal according to this criterion. That is one reason why the infinite horizon sum reward criterion is not useful.
As I mentioned before, $\beta<1$ does the trick of making the sum in equation (1) converge.
Other optimality criteria
There are other optimality criteria that do not impose that $\beta<1$:
In the finite horizon criterion case, the objective is to maximize the discounted reward until the time horizon $T$
\begin{equation} \label{eq:2}
\max_{\pi:S(n)\rightarrow a_i} E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\},
\end{equation}
for $\beta \leq 1$ and $T$ finite.
In the infinite horizon average reward criteria the objective is
\begin{equation}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \frac{1}{T} R_{x_i}(S(n),S(n+1)) \right\},
\end{equation}
End note
Depending on the optimality criteria one would use a different algorithm to find the optimal policy. For instances the optimal policies of the finite horizon problems would depend on both the state and the actual time instant. Most Reinforcement Learning algorithms (such as SARSA or Q-learning) converge to the optimal policy only for the discounted reward infinite horizon criteria (the same happens for the Dynamic programming algorithms). For the average reward criteria there is no algorithm that has been shown to converge to the optimal policy, however one can use R-learning which have good performance albeit not good theoretical convergence. | Understanding the role of the discount factor in reinforcement learning
TL;DR.
The fact that the discount rate is bounded to be smaller than 1 is a mathematical trick to make an infinite sum finite. This helps proving the convergence of certain algorithms.
In practice, th |
2,256 | Understanding the role of the discount factor in reinforcement learning | TL;DR: Discount factors are associated with time horizons. Longer time horizons have much more variance as they include more irrelevant information, while short time horizons are biased towards only short-term gains.
The discount factor essentially determines how much the reinforcement learning agent cares about rewards in the distant future relative to those in the immediate future. If $\gamma = 0$, the agent will be completely myopic and only learn about actions that produce an immediate reward. If $\gamma = 1$, the agent will evaluate each of its actions based on the sum total of all of its future rewards.
So why wouldn't you always want to make $\gamma$ as high as possible? Well, most actions don't have long-lasting repercussions. For example, suppose that on the first day of every month you decide to treat yourself to a smoothie, and you have to decide whether you'll get a blueberry smoothie or a strawberry smoothie. As a good reinforcement learner, you judge the quality of your decision by how big your subsequent rewards are. If your time horizon is very short, you'll only factor in the immediate rewards, like how tasty your smoothie is. With a longer time horizon, like a few hours, you might also factor in things like whether or not you got an upset stomach. But if your time horizon lasts for the entire month, then every single thing that makes you feel good or bad for the entire month will factor into your judgement on whether or not you made the right smoothie decision. You'll be factoring in lots of irrelevant information, and therefore your judgement will have a huge variance and it'll be hard to learn.
Picking a particular value of $\gamma$ is equivalent to picking a time horizon. It helps to rewrite an agent's discounted reward $G$ as
$$
G_t = R_{t} + \gamma R_{t+1} + \gamma^2 R_{t+2} + \cdots \\
= \sum_{k=0}^{\infty} \gamma^k R_{t+k} = \sum_{\Delta t=0}^{\infty} e^{-\Delta t / \tau} R_{t+\Delta t}
$$
where I identify $\gamma = e^{-1/\tau}$ and $k \rightarrow \Delta t$. The value $\tau$ explicitly shows the time horizon associated with a discount factor; $\gamma = 1$ corresponds to $\tau = \infty$, and any rewards that are much more than $\tau$ time steps in the future are exponentially suppressed. You should generally pick a discount factor such that the time horizon contains all of the relevant rewards for a particular action, but not any more. | Understanding the role of the discount factor in reinforcement learning | TL;DR: Discount factors are associated with time horizons. Longer time horizons have have much more variance as they include more irrelevant information, while short time horizons are biased towards o | Understanding the role of the discount factor in reinforcement learning
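A hedged sketch of that identification (the value of $\gamma$ is invented): the weight placed on a reward $\Delta t$ steps ahead is $\gamma^{\Delta t} = e^{-\Delta t/\tau}$, with $\tau = -1/\log \gamma$:

gamma <- 0.95             # invented discount factor
tau   <- -1 / log(gamma)  # implied time horizon, about 19.5 steps here
dt      <- 0:100
weights <- gamma^dt                 # identical to exp(-dt / tau)
max(abs(weights - exp(-dt / tau)))  # numerically zero
weights[c(0, 20, 60) + 1]           # roughly 1, exp(-1) and exp(-3): far rewards are suppressed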
TL;DR: Discount factors are associated with time horizons. Longer time horizons have much more variance as they include more irrelevant information, while short time horizons are biased towards only short-term gains.
The discount factor essentially determines how much the reinforcement learning agent cares about rewards in the distant future relative to those in the immediate future. If $\gamma = 0$, the agent will be completely myopic and only learn about actions that produce an immediate reward. If $\gamma = 1$, the agent will evaluate each of its actions based on the sum total of all of its future rewards.
So why wouldn't you always want to make $\gamma$ as high as possible? Well, most actions don't have long-lasting repercussions. For example, suppose that on the first day of every month you decide to treat yourself to a smoothie, and you have to decide whether you'll get a blueberry smoothie or a strawberry smoothie. As a good reinforcement learner, you judge the quality of your decision by how big your subsequent rewards are. If your time horizon is very short, you'll only factor in the immediate rewards, like how tasty your smoothie is. With a longer time horizon, like a few hours, you might also factor in things like whether or not you got an upset stomach. But if your time horizon lasts for the entire month, then every single thing that makes you feel good or bad for the entire month will factor into your judgement on whether or not you made the right smoothie decision. You'll be factoring in lots of irrelevant information, and therefore your judgement will have a huge variance and it'll be hard to learn.
Picking a particular value of $\gamma$ is equivalent to picking a time horizon. It helps to rewrite an agent's discounted reward $G$ as
$$
G_t = R_{t} + \gamma R_{t+1} + \gamma^2 R_{t+2} + \cdots \\
= \sum_{k=0}^{\infty} \gamma^k R_{t+k} = \sum_{\Delta t=0}^{\infty} e^{-\Delta t / \tau} R_{t+\Delta t}
$$
where I identify $\gamma = e^{-1/\tau}$ and $k \rightarrow \Delta t$. The value $\tau$ explicitly shows the time horizon associated with a discount factor; $\gamma = 1$ corresponds to $\tau = \infty$, and any rewards that are much more than $\tau$ time steps in the future are exponentially suppressed. You should generally pick a discount factor such that the time horizon contains all of the relevant rewards for a particular action, but not any more. | Understanding the role of the discount factor in reinforcement learning
TL;DR: Discount factors are associated with time horizons. Longer time horizons have have much more variance as they include more irrelevant information, while short time horizons are biased towards o |
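To make the time-horizon identification above concrete, here is a minimal R sketch (the reward stream and the particular values of $\gamma$ are my own, purely for illustration): it computes the effective horizon $\tau = -1/\log(\gamma)$ and shows how a reward arriving 50 steps in the future is almost invisible unless $\tau$ is comparable to 50.
# Effective time horizon implied by a discount factor, from gamma = exp(-1/tau)
effective_horizon <- function(gamma) -1 / log(gamma)
# Discounted return G_t for a vector of rewards R_t, R_{t+1}, ...
discounted_return <- function(rewards, gamma) {
  sum(gamma^(seq_along(rewards) - 1) * rewards)
}
gammas <- c(0.5, 0.9, 0.99)
round(sapply(gammas, effective_horizon), 1)   # roughly 1.4, 9.5 and 99.5 steps
# A reward of 1 that arrives 50 steps from now barely contributes unless tau ~ 50:
late_reward <- c(rep(0, 50), 1)
sapply(gammas, function(g) discounted_return(late_reward, g))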
2,257 | Understanding the role of the discount factor in reinforcement learning | You're right that the discount factor (so-called $\gamma$ — note that this is different than $\lambda$ from TD-$\lambda$) acts like an "urgency of life" and is therefore part of the problem — just like it is in human lives: Some people live as if they'll live forever; some people live as if they're going to die tomorrow. | Understanding the role of the discount factor in reinforcement learning | You're right that the discount factor (so-called $\gamma$ — note that this is different than $\lambda$ from TD-$\lambda$) acts like an "urgency of life" and is therefore part of the problem — just lik | Understanding the role of the discount factor in reinforcement learning
You're right that the discount factor (so-called $\gamma$ — note that this is different than $\lambda$ from TD-$\lambda$) acts like an "urgency of life" and is therefore part of the problem — just like it is in human lives: Some people live as if they'll live forever; some people live as if they're going to die tomorrow. | Understanding the role of the discount factor in reinforcement learning
You're right that the discount factor (so-called $\gamma$ — note that this is different than $\lambda$ from TD-$\lambda$) acts like an "urgency of life" and is therefore part of the problem — just lik |
2,258 | Understanding the role of the discount factor in reinforcement learning | Inspired by "PolBM"'s answer, an intuitive example helps to illustrate the usefulness of the discount factor. Imagine that there are two stocks we can purchase.
Stock A: Rising ten dollars on Monday of every week and falling ten dollars on Tuesday of every week.
Stock B: Falling ten dollars on Monday of every week and rising ten dollars on Tuesday of every week.
Both stocks are unchanged on the other days of the week. Now, we want to design a policy for purchasing a stock on Sunday. In the long term (without a discount factor), both stocks have zero expected reward. Therefore, it seems as though we could purchase either of the stocks mentioned above.
However, Stock A is better than Stock B, because we can never lose money by purchasing Stock A.
For example, if we buy Stock A on Sunday and sell it on Monday, we earn ten dollars, and if we sell it on any other day of the week, we get no revenue. Similarly, if we buy Stock B on Sunday and sell it on Monday, we lose ten dollars, and if we sell it on any other day of the week, we get no revenue.
This scenario is quite common in the reinforcement learning domain, such as the classical Multi-armed bandit problem. Even though the expectations of two slot machines are the same, the real rewards may be significantly different. Therefore, in these scenarios, the discount factor is necessary. | Understanding the role of the discount factor in reinforcement learning | Inspired by "PolBM"'s answer, an intuitive example helps understand the usefulness of the discount factor. Imagining that there are two stocks we can purchase.
Stock A: Rising ten dollars on Monday o | Understanding the role of the discount factor in reinforcement learning
Inspired by "PolBM"'s answer, an intuitive example helps to illustrate the usefulness of the discount factor. Imagine that there are two stocks we can purchase.
Stock A: Rising ten dollars on Monday of every week and falling ten dollars on Tuesday of every week.
Stock B: Falling ten dollars on Monday of every week and rising ten dollars on Tuesday of every week.
Both stocks are unchanged on the other days of the week. Now, we want to design a policy for purchasing a stock on Sunday. In the long term (without a discount factor), both stocks have zero expected reward. Therefore, it seems as though we could purchase either of the stocks mentioned above.
However, Stock A is better than Stock B, because we can never lose money by purchasing Stock A.
For example, if we buy Stock A on Sunday and sell it on Monday, we earn ten dollars, and if we sell it on any other day of the week, we get no revenue. Similarly, if we buy Stock B on Sunday and sell it on Monday, we lose ten dollars, and if we sell it on any other day of the week, we get no revenue.
This scenario is quite common in the reinforcement learning domain, such as the classical Multi-armed bandit problem. Even though the expectations of two slot machines are the same, the real rewards may be significantly different. Therefore, in these scenarios, the discount factor is necessary. | Understanding the role of the discount factor in reinforcement learning
Inspired by "PolBM"'s answer, an intuitive example helps understand the usefulness of the discount factor. Imagining that there are two stocks we can purchase.
Stock A: Rising ten dollars on Monday o |
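The comparison above can also be checked numerically; here is a small R sketch (the hand-written weekly reward streams below are mine, simply mirroring the description): with $\gamma = 1$ both stocks are worth zero, while any $\gamma < 1$ prefers Stock A because its +10 arrives before its -10.
# Daily rewards starting on Monday, repeated for a few weeks
weeks <- 4
stock_a <- rep(c(+10, -10, 0, 0, 0, 0, 0), weeks)   # rises Monday, falls Tuesday
stock_b <- rep(c(-10, +10, 0, 0, 0, 0, 0), weeks)   # falls Monday, rises Tuesday
discounted_return <- function(rewards, gamma) {
  sum(gamma^(seq_along(rewards) - 1) * rewards)
}
# Undiscounted: both streams sum to zero, so the two policies look equivalent
c(A = discounted_return(stock_a, 1), B = discounted_return(stock_b, 1))
# With gamma = 0.95 the earlier gain of Stock A dominates and Stock B looks negative
c(A = discounted_return(stock_a, 0.95), B = discounted_return(stock_b, 0.95))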
2,259 | Understanding the role of the discount factor in reinforcement learning | According to the paper:
Markov games as a framework for multi-agent reinforcement learning by Michael Littman, 1994, the notion of the discount factor is defined in terms of the probability that the game will be allowed to continue. Even though this paper is talking about Markov games, I believe the abstract can be used to get a more general intuition about the importance of the discount factor.
Here it is:
"As in MDP’s, the discount factor can be thought of as the
probability that the game will be allowed to continue after the
current move. It is possible to define a notion of undiscounted
rewards [Schwartz, 1993], but not all Markov games have optimal
strategies in the undiscounted case [Owen, 1982]. This is because, in
many games, it is best to postpone risky actions indefinitely. For
current purposes, the discount factor has the desirable effect of
goading the players into trying to win sooner rather than later." | Understanding the role of the discount factor in reinforcement learning | According to the paper:
Markov games as a framework for multi-agent reinforcement learning by Michael Littman, 1994, the notion of discount factor is defined in terms of the probability that the game | Understanding the role of the discount factor in reinforcement learning
According to the paper:
Markov games as a framework for multi-agent reinforcement learning by Michael Littman, 1994, the notion of the discount factor is defined in terms of the probability that the game will be allowed to continue. Even though this paper is talking about Markov games, I believe the abstract can be used to get a more general intuition about the importance of the discount factor.
Here it is:
"As in MDP’s, the discount factor can be thought of as the
probability that the game will be allowed to continue after the
current move. It is possible to define a notion of undiscounted
rewards [Schwartz, 1993], but not all Markov games have optimal
strategies in the undiscounted case [Owen, 1982]. This is because, in
many games, it is best to postpone risky actions indefinitely. For
current purposes, the discount factor has the desirable effect of
goading the players into trying to win sooner rather than later." | Understanding the role of the discount factor in reinforcement learning
According to the paper:
Markov games as a framework for multi-agent reinforcement learning by Michael Littman, 1994, the notion of discount factor is defined in terms of the probability that the game |
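A brief R sketch of the "probability that the game will be allowed to continue" reading of $\gamma$ quoted above (the simulation setup is mine, for illustration only): if play stops after each move with probability $1-\gamma$, episode lengths are geometric with mean $1/(1-\gamma)$, which matches the discounted sum of a constant reward.
set.seed(42)
gamma <- 0.9
# Number of moves played when the game continues with probability gamma after each move
episode_lengths <- rgeom(1e5, prob = 1 - gamma) + 1
mean(episode_lengths)      # close to 1 / (1 - gamma) = 10
# With a constant reward of 1 per move, the discounted sum gives the same value
sum(gamma^(0:1000))        # also about 10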
2,260 | Can bootstrap be seen as a "cure" for the small sample size? | I remember reading that using the percentile confidence interval for bootstrapping is equivalent to using a Z interval instead of a T interval and using $n$ instead of $n-1$ for the denominator. Unfortunately I don't remember where I read this and could not find a reference in my quick searches. These differences don't matter much when n is large (and the advantages of the bootstrap outweigh these minor problems when $n$ is large), but with small $n$ this can cause problems. Here is some R code to simulate and compare:
simfun <- function(n=5) {
x <- rnorm(n)
m.x <- mean(x)
s.x <- sd(x)
z <- m.x/(1/sqrt(n))
t <- m.x/(s.x/sqrt(n))
b <- replicate(10000, mean(sample(x, replace=TRUE)))
c( t=abs(t) > qt(0.975,n-1), z=abs(z) > qnorm(0.975),
z2 = abs(t) > qnorm(0.975),
b= (0 < quantile(b, 0.025)) | (0 > quantile(b, 0.975))
)
}
out <- replicate(10000, simfun())
rowMeans(out)
My results for one run are:
t z z2 b.2.5%
0.0486 0.0493 0.1199 0.1631
So we can see that using the t-test and the z-test (with the true population standard deviation) both give a type I error rate that is essentially $\alpha$ as designed. The improper z test (dividing by sample standard deviation, but using Z critical value instead of T) rejects the null more than twice as often as it should. Now to the bootstrap, it is rejecting the null 3 times as often as it should (looking if 0, the true mean, is in the interval or not), so for this small sample size the simple bootstrap is not sized properly and therefore does not fix problems (and this is when the data is optimally normal). The improved bootstrap intervals (BCa etc.) will probably do better, but this should raise some concern about using bootstrapping as a panacea for small sample sizes. | Can bootstrap be seen as a "cure" for the small sample size? | I remember reading that using the percentile confidence interval for bootstrapping is equivalent to using a Z interval instead of a T interval and using $n$ instead of $n-1$ for the denominator. Unfo | Can bootstrap be seen as a "cure" for the small sample size?
I remember reading that using the percentile confidence interval for bootstrapping is equivalent to using a Z interval instead of a T interval and using $n$ instead of $n-1$ for the denominator. Unfortunately I don't remember where I read this and could not find a reference in my quick searches. These differences don't matter much when n is large (and the advantages of the bootstrap outweigh these minor problems when $n$ is large), but with small $n$ this can cause problems. Here is some R code to simulate and compare:
simfun <- function(n=5) {
x <- rnorm(n)
m.x <- mean(x)
s.x <- sd(x)
z <- m.x/(1/sqrt(n))
t <- m.x/(s.x/sqrt(n))
b <- replicate(10000, mean(sample(x, replace=TRUE)))
c( t=abs(t) > qt(0.975,n-1), z=abs(z) > qnorm(0.975),
z2 = abs(t) > qnorm(0.975),
b= (0 < quantile(b, 0.025)) | (0 > quantile(b, 0.975))
)
}
out <- replicate(10000, simfun())
rowMeans(out)
My results for one run are:
t z z2 b.2.5%
0.0486 0.0493 0.1199 0.1631
So we can see that using the t-test and the z-test (with the true population standard deviation) both give a type I error rate that is essentially $\alpha$ as designed. The improper z test (dividing by sample standard deviation, but using Z critical value instead of T) rejects the null more than twice as often as it should. Now to the bootstrap, it is rejecting the null 3 times as often as it should (looking if 0, the true mean, is in the interval or not), so for this small sample size the simple bootstrap is not sized properly and therefore does not fix problems (and this is when the data is optimally normal). The improved bootstrap intervals (BCa etc.) will probably do better, but this should raise some concern about using bootstrapping as a panacea for small sample sizes. | Can bootstrap be seen as a "cure" for the small sample size?
I remember reading that using the percentile confidence interval for bootstrapping is equivalent to using a Z interval instead of a T interval and using $n$ instead of $n-1$ for the denominator. Unfo |
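The answer above speculates that the improved intervals (BCa etc.) will probably do better; here is a hedged sketch of how one might check that with boot::boot.ci (the helper function, sample size and replication counts are my own choices, not part of the original simulation):
library(boot)
# Fraction of simulated datasets in which a 95% bootstrap interval misses the true mean of 0;
# type = "perc" is the simple percentile interval, type = "bca" the adjusted one
miss_rate <- function(n = 5, type = "perc", nsim = 1000, R = 999) {
  misses <- replicate(nsim, {
    x <- rnorm(n)
    b <- boot(x, function(d, i) mean(d[i]), R = R)
    ci <- boot.ci(b, conf = 0.95, type = type)
    lims <- if (type == "bca") ci$bca[4:5] else ci$percent[4:5]
    (0 < lims[1]) | (0 > lims[2])
  })
  mean(misses)
}
# Both miss rates should ideally be near 0.05; this is slow, so start with a modest nsim:
# miss_rate(type = "perc")
# miss_rate(type = "bca")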
2,261 | Can bootstrap be seen as a "cure" for the small sample size? | Other answers criticise the performance of bootstrap confidence intervals, not bootstrap itself. This is a different problem.
If your context satisfies the regularity conditions for the convergence of the bootstrap distribution (convergence in terms of the number of bootstrap samples), then the method will work if you use a large enough bootstrap sample.
In case you really want to find issues of using nonparametric bootstrap, here are two problems:
(1) Issues with resampling.
One of the problems with bootstrap, either for small or large samples, is the resampling step. It is not always possible to resample while keeping the structure (dependence, temporal, ...) of the sample. An example of this is a superposed process.
Suppose that there are a number of independent sources at each of which events occur from time to time. The intervals between successive events at any one source are assumed to be independent random variables all with the same distribution, so that each source constitutes a renewal process of a familiar type. The outputs of the sources are combined into one pooled output.
How would you resample while preserving the unknown dependence structure?
(2) Narrow bootstrap samples and bootstrap confidence intervals for small samples.
In small samples, the minimum and maximum of the estimator over the resamples may define a narrow interval, and then the right and left end points of any confidence interval will be very narrow (which is counterintuitive given the small sample!) in some models.
Suppose that $x_1,x_2\sim \text{Exp}(\lambda)$, where $\lambda>0$ is the rate. Using the profile likelihood you can obtain an approximate confidence interval (the 95% approximate confidence interval is the 0.147-level profile likelihood interval) as follows:
set.seed(1)
x <- rexp(2,1)
# Maximum likelihood estimator
1/mean(x)
# Profile likelihood: provides a confidence interval with right-end point beyond the maximum inverse of the mean
Rp <- Vectorize(function(l) exp(sum(dexp(x,rate=l,log=T))-sum(dexp(x,rate=1/mean(x),log=T))))
curve(Rp,0,5)
lines(c(0,5),c(0.147,0.147),col="red")
This method produces a continuous curve from which you can extract the confidence interval. The maximum likelihood estimator of $\lambda$ is $\hat{\lambda}=2/(x_1+x_2)$. By resampling, there are only three possible values that we can obtain for this estimator, whose maximum and minimum define the bounds for the corresponding bootstrap confidence intervals. This may look odd even for large bootstrap samples (you don't gain much by increasing this number):
library(boot)
set.seed(1)
x <- rexp(2,1)
1/mean(x)
# Bootstrap interval: limited to the maximum inverse of the mean
f.boot <- function(data,ind) 1/mean(data[ind])
b.b <- boot(data=x, statistic=f.boot, R=100000)
boot.ci(b.b, conf = 0.95, type = "all")
hist(b.b$t)
In this case, the closer $x_1$ and $x_2$ are, the narrower the bootstrap distribution is, and consequently the narrower the confidence interval (which might be located far from the real value). This example is, in fact, related to the example presented by @GregSnow, although his argument was more empirical. The bounds I mention explain the bad performance of all the bootstrap confidence intervals analysed by @Wolfgang. | Can bootstrap be seen as a "cure" for the small sample size? | Other answers criticise the performance of bootstrap confidence intervals, not bootstrap itself. This is a different problem.
If your context satisfy the regularity conditions for the convergence of t | Can bootstrap be seen as a "cure" for the small sample size?
Other answers criticise the performance of bootstrap confidence intervals, not bootstrap itself. This is a different problem.
If your context satisfies the regularity conditions for the convergence of the bootstrap distribution (convergence in terms of the number of bootstrap samples), then the method will work if you use a large enough bootstrap sample.
In case you really want to find issues of using nonparametric bootstrap, here are two problems:
(1) Issues with resampling.
One of the problems with bootstrap, either for small or large samples, is the resampling step. It is not always possible to resample while keeping the structure (dependence, temporal, ...) of the sample. An example of this is a superposed process.
Suppose that there are a number of independent sources at each of which events occur from time to time. The intervals between successive events at any one source are assumed to be independent random variables all with the same distribution, so that each source constitutes a renewal process of a familiar type. The outputs of the sources are combined into one pooled output.
How would you resample while preserving the unknown dependence structure?
(2) Narrow bootstrap samples and bootstrap confidence intervals for small samples.
In small samples, the minimum and maximum of the estimator over the resamples may define a narrow interval, and then the right and left end points of any confidence interval will be very narrow (which is counterintuitive given the small sample!) in some models.
Suppose that $x_1,x_2\sim \text{Exp}(\lambda)$, where $\lambda>0$ is the rate. Using the profile likelihood you can obtain an approximate confidence interval (the 95% approximate confidence interval is the 0.147-level profile likelihood interval) as follows:
set.seed(1)
x <- rexp(2,1)
# Maximum likelihood estimator
1/mean(x)
# Profile likelihood: provides a confidence interval with right-end point beyond the maximum inverse of the mean
Rp <- Vectorize(function(l) exp(sum(dexp(x,rate=l,log=T))-sum(dexp(x,rate=1/mean(x),log=T))))
curve(Rp,0,5)
lines(c(0,5),c(0.147,0.147),col="red")
This method produces a continuous curve from which you can extract the confidence interval. The maximum likelihood estimator of $\lambda$ is $\hat{\lambda}=2/(x_1+x_2)$. By resampling, there are only three possible values that we can obtain for this estimator, whose maximum and minimum define the bounds for the corresponding bootstrap confidence intervals. This may look odd even for large bootstrap samples (you don't gain much by increasing this number):
library(boot)
set.seed(1)
x <- rexp(2,1)
1/mean(x)
# Bootstrap interval: limited to the maximum inverse of the mean
f.boot <- function(data,ind) 1/mean(data[ind])
b.b <- boot(data=x, statistic=f.boot, R=100000)
boot.ci(b.b, conf = 0.95, type = "all")
hist(b.b$t)
In this case, the closer $x_1$ and $x_2$ are, the narrower the bootstrap distribution is, and consequently the narrower the confidence interval (which might be located far from the real value). This example is, in fact, related to the example presented by @GregSnow, although his argument was more empirical. The bounds I mention explain the bad performance of all the bootstrap confidence intervals analysed by @Wolfgang. | Can bootstrap be seen as a "cure" for the small sample size?
Other answers criticise the performance of bootstrap confidence intervals, not bootstrap itself. This is a different problem.
If your context satisfy the regularity conditions for the convergence of t |
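To make the "only three possible values" point above explicit, the resamples of a two-point sample can be enumerated by hand (same seed and estimator $1/\bar{x}$ as in the code above):
set.seed(1)
x <- rexp(2, 1)
# With n = 2 there are only 2^2 = 4 equally likely resamples, giving three distinct
# values of the estimator 1/mean(x*)
resamples <- list(c(x[1], x[1]), c(x[1], x[2]), c(x[2], x[1]), c(x[2], x[2]))
vals <- sapply(resamples, function(s) 1 / mean(s))
unique(round(vals, 6))
# Any bootstrap percentile interval is trapped between min(vals) and max(vals),
# i.e. between 1/max(x) and 1/min(x), no matter how large R is
range(vals)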
2,262 | Can bootstrap be seen as a "cure" for the small sample size? | If you are provided with a small sample size (as a sidelight, what counts as "small" seems to depend on some underlying customary rule in each research field), no bootstrap will do the magic. Assuming a database contains three observations for each of the two variables under investigation, no inference will make sense. In my experience, the non-parametric bootstrap (1,000 or 10,000 replications) works well in replacing the t-test when the sample distributions (at least 10-15 observations each) are skewed and therefore the prerequisites for the usual t-test are not satisfied. Besides, regardless of the number of observations, the non-parametric bootstrap may be a mandatory choice when data are positively skewed, as is always the case with health care costs.
Other interesting applications for non-parametric bootstrap relate to standard errors calculation for coefficients included in regressions and panel datasets. | Can bootstrap be seen as a "cure" for the small sample size? | If you are provided with small sample size (as a sidelight, what is "small" seems to depend on some underlying customary rule in each research field), no bootstrap will do the magic. Assuming a databa | Can bootstrap be seen as a "cure" for the small sample size?
If you are provided with a small sample size (as a sidelight, what counts as "small" seems to depend on some underlying customary rule in each research field), no bootstrap will do the magic. Assuming a database contains three observations for each of the two variables under investigation, no inference will make sense. In my experience, the non-parametric bootstrap (1,000 or 10,000 replications) works well in replacing the t-test when the sample distributions (at least 10-15 observations each) are skewed and therefore the prerequisites for the usual t-test are not satisfied. Besides, regardless of the number of observations, the non-parametric bootstrap may be a mandatory choice when data are positively skewed, as is always the case with health care costs.
Other interesting applications for non-parametric bootstrap relate to standard errors calculation for coefficients included in regressions and panel datasets. | Can bootstrap be seen as a "cure" for the small sample size?
If you are provided with small sample size (as a sidelight, what is "small" seems to depend on some underlying customary rule in each research field), no bootstrap will do the magic. Assuming a databa |
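A minimal sketch of the kind of non-parametric bootstrap described above, applied to skewed cost-like data (the simulated lognormal samples and group sizes are my own, purely illustrative):
set.seed(123)
# Two positively skewed samples of 15 observations each, mimicking cost data
cost_a <- rlnorm(15, meanlog = 7.0, sdlog = 1)
cost_b <- rlnorm(15, meanlog = 7.3, sdlog = 1)
# Non-parametric bootstrap of the difference in mean costs (10,000 replications)
boot_diff <- replicate(10000, {
  mean(sample(cost_b, replace = TRUE)) - mean(sample(cost_a, replace = TRUE))
})
# Percentile 95% interval; checking whether it excludes 0 plays the role of the t-test
quantile(boot_diff, c(0.025, 0.975))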
2,263 | Can bootstrap be seen as a "cure" for the small sample size? | Bootstrap works well in small sample sizes by ensuring the correctness of tests (e.g. that the nominal 0.05 significance level is close to the actual size of the test), however the bootstrap does not magically grant you extra power. If you have a small sample, you have little power, end of story.
Parametric (linear models) and semiparametric (GEE) regressions tend to have poor small sample properties... the former as a consequence of large dependence on parametric assumptions, the latter because of magnification of robust standard error estimates in small samples. Bootstrapping (and other resampling based tests) performs really well in those circumstances.
For prediction, bootstrapping will give you better (more honest) estimates of internal validity than split sample validation.
Bootstrapping often times gives you less power as a consequence of inadvertently correcting mean imputation procedures / hotdecking (such as in fuzzy matching). Bootstrapping has been erroneously purported to give more power in matched analyses where individuals were resampled to meet the sufficient cluster size, giving bootstrapped matched datasets with a greater $n$ than the analysis dataset. | Can bootstrap be seen as a "cure" for the small sample size? | Bootstrap works well in small sample sizes by ensuring the correctness of tests (e.g. that the nominal 0.05 significance level is close to the actual size of the test), however the bootstrap does not | Can bootstrap be seen as a "cure" for the small sample size?
Bootstrap works well in small sample sizes by ensuring the correctness of tests (e.g. that the nominal 0.05 significance level is close to the actual size of the test), however the bootstrap does not magically grant you extra power. If you have a small sample, you have little power, end of story.
Parametric (linear models) and semiparametric (GEE) regressions tend to have poor small sample properties... the former as a consequence of large dependence on parametric assumptions, the latter because of magnification of robust standard error estimates in small samples. Bootstrapping (and other resampling based tests) performs really well in those circumstances.
For prediction, bootstrapping will give you better (more honest) estimates of internal validity than split sample validation.
Bootstrapping often times gives you less power as a consequence of inadvertently correcting mean imputation procedures / hotdecking (such as in fuzzy matching). Bootstrapping has been erroneously purported to give more power in matched analyses where individuals were resampled to meet the sufficient cluster size, giving bootstrapped matched datasets with a greater $n$ than the analysis dataset. | Can bootstrap be seen as a "cure" for the small sample size?
Bootstrap works well in small sample sizes by ensuring the correctness of tests (e.g. that the nominal 0.05 significance level is close to the actual size of the test), however the bootstrap does not |
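One concrete way to read the claim about internal validity above is the optimism-corrected bootstrap; here is a hedged sketch in base R (the toy linear model and squared correlation as the performance measure are my own choices):
set.seed(7)
n <- 60
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- 1 + 0.5 * d$x1 + rnorm(n)
# Squared correlation between predictions and outcome as a simple performance measure
rsq <- function(fit, data) cor(predict(fit, newdata = data), data$y)^2
apparent <- rsq(lm(y ~ x1 + x2, data = d), d)
# Optimism = average of (performance on the bootstrap sample - performance on the original data)
optimism <- mean(replicate(500, {
  db <- d[sample(nrow(d), replace = TRUE), ]
  fb <- lm(y ~ x1 + x2, data = db)
  rsq(fb, db) - rsq(fb, d)
}))
c(apparent = apparent, corrected = apparent - optimism)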
2,264 | What is the single most influential book every statistician should read? [closed] | Here are two to put on the list:
Tufte. The visual display of quantitative information
Tukey. Exploratory data analysis | What is the single most influential book every statistician should read? [closed] | Here are two to put on the list:
Tufte. The visual display of quantitative information
Tukey. Exploratory data analysis | What is the single most influential book every statistician should read? [closed]
Here are two to put on the list:
Tufte. The visual display of quantitative information
Tukey. Exploratory data analysis | What is the single most influential book every statistician should read? [closed]
Here are two to put on the list:
Tufte. The visual display of quantitative information
Tukey. Exploratory data analysis |
2,265 | What is the single most influential book every statistician should read? [closed] | The Elements of Statistical Learning from Hastie, Tibshirani and Friedman http://www-stat.stanford.edu/~tibs/ElemStatLearn/ should be in any statistician's library ! | What is the single most influential book every statistician should read? [closed] | The Elements of Statistical Learning from Hastie, Tibshirani and Friedman http://www-stat.stanford.edu/~tibs/ElemStatLearn/ should be in any statistician's library ! | What is the single most influential book every statistician should read? [closed]
The Elements of Statistical Learning from Hastie, Tibshirani and Friedman http://www-stat.stanford.edu/~tibs/ElemStatLearn/ should be in any statistician's library ! | What is the single most influential book every statistician should read? [closed]
The Elements of Statistical Learning from Hastie, Tibshirani and Friedman http://www-stat.stanford.edu/~tibs/ElemStatLearn/ should be in any statistician's library ! |
2,266 | What is the single most influential book every statistician should read? [closed] | I am no statistician, and I haven't read that much on the topic, but perhaps
Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century
should be mentioned? It is no textbook, but still worth reading. | What is the single most influential book every statistician should read? [closed] | I am no statistician, and I haven't read that much on the topic, but perhaps
Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century
should be mentioned? It is no textbook, b | What is the single most influential book every statistician should read? [closed]
I am no statistician, and I haven't read that much on the topic, but perhaps
Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century
should be mentioned? It is no textbook, but still worth reading. | What is the single most influential book every statistician should read? [closed]
I am no statistician, and I haven't read that much on the topic, but perhaps
Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century
should be mentioned? It is no textbook, b |
2,267 | What is the single most influential book every statistician should read? [closed] | Probability Theory: The Logic of Science | What is the single most influential book every statistician should read? [closed] | Probability Theory: The Logic of Science | What is the single most influential book every statistician should read? [closed]
Probability Theory: The Logic of Science | What is the single most influential book every statistician should read? [closed]
Probability Theory: The Logic of Science |
2,268 | What is the single most influential book every statistician should read? [closed] | Darrell Huff -- How to Lie with Statistics | What is the single most influential book every statistician should read? [closed] | Darrell Huff -- How to Lie with Statistics | What is the single most influential book every statistician should read? [closed]
Darrell Huff -- How to Lie with Statistics | What is the single most influential book every statistician should read? [closed]
Darrell Huff -- How to Lie with Statistics |
2,269 | What is the single most influential book every statistician should read? [closed] | Not a book, but I recently discovered an article by Jacob Cohen in American Psychologist entitled "Things I have learned (so far)." It's available as a pdf here. | What is the single most influential book every statistician should read? [closed] | Not a book, but I recently discovered an article by Jacob Cohen in American Psychologist entitled "Things I have learned (so far)." It's available as a pdf here. | What is the single most influential book every statistician should read? [closed]
Not a book, but I recently discovered an article by Jacob Cohen in American Psychologist entitled "Things I have learned (so far)." It's available as a pdf here. | What is the single most influential book every statistician should read? [closed]
Not a book, but I recently discovered an article by Jacob Cohen in American Psychologist entitled "Things I have learned (so far)." It's available as a pdf here. |
2,270 | What is the single most influential book every statistician should read? [closed] | Long ago, Jack Kiefer's little monograph "Introduction to Statistical Inference" peeled away the mystery of a great deal of classical statistics and helped me get started with the rest of the literature. I still refer to it and warmly recommend it to strong students in second-year stats courses. | What is the single most influential book every statistician should read? [closed] | Long ago, Jack Kiefer's little monograph "Introduction to Statistical Inference" peeled away the mystery of a great deal of classical statistics and helped me get started with the rest of the literatu | What is the single most influential book every statistician should read? [closed]
Long ago, Jack Kiefer's little monograph "Introduction to Statistical Inference" peeled away the mystery of a great deal of classical statistics and helped me get started with the rest of the literature. I still refer to it and warmly recommend it to strong students in second-year stats courses. | What is the single most influential book every statistician should read? [closed]
Long ago, Jack Kiefer's little monograph "Introduction to Statistical Inference" peeled away the mystery of a great deal of classical statistics and helped me get started with the rest of the literatu |
2,271 | What is the single most influential book every statistician should read? [closed] | I wouldn't argue that either of these should be considered "the most influential book... [for] statistician[s]", but for those who are just starting to learn about the topic, two helpful books are:
Robert Abelson, Statistics as Principled Argument
Paul Murrell, Introduction to Data Technologies | What is the single most influential book every statistician should read? [closed] | I wouldn't argue that either of these should be considered "the most influential book... [for] statistician[s]", but for those who are just starting to learn about the topic, two helpful books are:
R | What is the single most influential book every statistician should read? [closed]
I wouldn't argue that either of these should be considered "the most influential book... [for] statistician[s]", but for those who are just starting to learn about the topic, two helpful books are:
Robert Abelson, Statistics as Principled Argument
Paul Murrell, Introduction to Data Technologies | What is the single most influential book every statistician should read? [closed]
I wouldn't argue that either of these should be considered "the most influential book... [for] statistician[s]", but for those who are just starting to learn about the topic, two helpful books are:
R |
2,272 | What is the single most influential book every statistician should read? [closed] | I think every statistician should read Stigler's The History of Statistics: The Measurement of Uncertainty before 1900
It is beautifully written, thorough and it isn't a historian's perspective but a mathematician's, hence it doesn't avoid the technical details. | What is the single most influential book every statistician should read? [closed] | I think every statistician should read Stigler's The History of Statistics: The Measurement of Uncertainty before 1900
It is beautifully written, thorough and it isn't a historian's perspective but a | What is the single most influential book every statistician should read? [closed]
I think every statistician should read Stigler's The History of Statistics: The Measurement of Uncertainty before 1900
It is beautifully written, thorough and it isn't a historian's perspective but a mathematician's, hence it doesn't avoid the technical details. | What is the single most influential book every statistician should read? [closed]
I think every statistician should read Stigler's The History of Statistics: The Measurement of Uncertainty before 1900
It is beautifully written, thorough and it isn't a historian's perspective but a |
2,273 | What is the single most influential book every statistician should read? [closed] | William Cleveland's book "The Elements of Graphing Data" or his book "Visualizing Data" | What is the single most influential book every statistician should read? [closed] | William Cleveland's book "The Elements of Graphing Data" or his book "Visualizing Data" | What is the single most influential book every statistician should read? [closed]
William Cleveland's book "The Elements of Graphing Data" or his book "Visualizing Data" | What is the single most influential book every statistician should read? [closed]
William Cleveland's book "The Elements of Graphing Data" or his book "Visualizing Data" |
2,274 | What is the single most influential book every statistician should read? [closed] | I say the visual display of quantitative information by Tufte, and Freakonomics for something fun. | What is the single most influential book every statistician should read? [closed] | I say the visual display of quantitative information by Tufte, and Freakonomics for something fun. | What is the single most influential book every statistician should read? [closed]
I say the visual display of quantitative information by Tufte, and Freakonomics for something fun. | What is the single most influential book every statistician should read? [closed]
I say the visual display of quantitative information by Tufte, and Freakonomics for something fun. |
2,275 | What is the single most influential book every statistician should read? [closed] | Andrew Gelman's interesting book recommendations are here:
http://thebrowser.com/interviews/andrew-gelman-on-statistics | What is the single most influential book every statistician should read? [closed] | Andrew Gelman's interesting book recommendations are here:
http://thebrowser.com/interviews/andrew-gelman-on-statistics | What is the single most influential book every statistician should read? [closed]
Andrew Gelman's interesting book recommendations are here:
http://thebrowser.com/interviews/andrew-gelman-on-statistics | What is the single most influential book every statistician should read? [closed]
Andrew Gelman's interesting book recommendations are here:
http://thebrowser.com/interviews/andrew-gelman-on-statistics |
2,276 | What is the single most influential book every statistician should read? [closed] | In addition to "The History of Statistics" suggested by Graham, another Stigler book worth reading is
Statistics on the Table: The History of Statistical Concepts and Methods | What is the single most influential book every statistician should read? [closed] | In addition to "The History of Statistics" suggested by Graham, another Stigler book worth reading is
Statistics on the Table: The History of Statistical Concepts and Methods | What is the single most influential book every statistician should read? [closed]
In addition to "The History of Statistics" suggested by Graham, another Stigler book worth reading is
Statistics on the Table: The History of Statistical Concepts and Methods | What is the single most influential book every statistician should read? [closed]
In addition to "The History of Statistics" suggested by Graham, another Stigler book worth reading is
Statistics on the Table: The History of Statistical Concepts and Methods |
2,277 | What is the single most influential book every statistician should read? [closed] | On the math/foundations side: Harald Cramér's Mathematical Methods of Statistics. | What is the single most influential book every statistician should read? [closed] | On the math/foundations side: Harald Cramér's Mathematical Methods of Statistics. | What is the single most influential book every statistician should read? [closed]
On the math/foundations side: Harald Cramér's Mathematical Methods of Statistics. | What is the single most influential book every statistician should read? [closed]
On the math/foundations side: Harald Cramér's Mathematical Methods of Statistics. |
2,278 | What is the single most influential book every statistician should read? [closed] | For a clear exposition of what should be in social science journal articles (assistance if you're writing or peer reviewing) I like The Reviewer's Guide to Quantitative Methods in the Social Sciences. In particular I like the desiderata table as a synopsis of the minimum that a paper (article, thesis, dissertation) should contain. The chapters are separated by analysis technique, which is nice. I think the book has wider applications than "just" the social sciences as the techniques covered are used across many fields.
Quite early on, so perhaps not covered by the question, I was introduced to Ott's Introduction to Statistical Methods and Data Analysis. It's quite expensive, but is a wonderful resource for showing the underlying statistical models for various GLM methods. I dream of the day that journals require articles to show the formula of the statistical model tested.
For checking test assumptions, looking at the effects of various options within a test, and so forth, this is the one book I wish I had when I was studying. I have the previous edition and it is one of the best general resources I have purchased because of the clear and consistent manner in which information about the tests is laid out. It contains nice examples illustrating the test(s), and does not require the reader to have a particular statistical package in order to follow the expositions. | What is the single most influential book every statistician should read? [closed] | For a clear exposition of what should be in social science journal articles (assistance if you're writing or peer reviewing) I like The Reviewer's Guide to Quantitative Methods in the Social Sciences. | What is the single most influential book every statistician should read? [closed]
For a clear exposition of what should be in social science journal articles (assistance if you're writing or peer reviewing) I like The Reviewer's Guide to Quantitative Methods in the Social Sciences. In particular I like the desiderata table as a synopsis of the minimum that a paper (article, thesis, dissertation) should contain. The chapters are separated by analysis technique, which is nice. I think the book has wider applications than "just" the social sciences as the techniques covered are used across many fields.
Quite early on, so perhaps not covered by the question, I was introduced to Ott's Introduction to Statistical Methods and Data Analysis. It's quite expensive, but is a wonderful resource for showing the underlying statistical models for various GLM methods. I dream of the day that journals require articles to show the formula of the statistical model tested.
For checking test assumptions, looking at the effects of various options within a test, and so forth, this is the one book I wish I had when I was studying. I have the previous edition and it is one of the best general resources I have purchased because of the clear and consistent manner in which information about the tests is laid out. It contains nice examples illustrating the test(s), and does not require the reader to have a particular statistical package in order to follow the expositions. | What is the single most influential book every statistician should read? [closed]
For a clear exposition of what should be in social science journal articles (assistance if you're writing or peer reviewing) I like The Reviewer's Guide to Quantitative Methods in the Social Sciences. |
2,279 | What is the single most influential book every statistician should read? [closed] | Fooled By Randomness by Taleb
Taleb is a professor at Columbia and an options trader. He made about $800 million dollars in 2008 betting against the market. He also wrote Black Swan. He discusses the absurdity of using the normal distribution to model markets, and philosophizes on our ability to use induction. | What is the single most influential book every statistician should read? [closed] | Fooled By Randomness by Taleb
Taleb is a professor at Columbia and an options trader. He made about $800 million dollars in 2008 betting against the market. He also wrote Black Swan. He discusses the | What is the single most influential book every statistician should read? [closed]
Fooled By Randomness by Taleb
Taleb is a professor at Columbia and an options trader. He made about $800 million dollars in 2008 betting against the market. He also wrote Black Swan. He discusses the absurdity of using the normal distribution to model markets, and philosophizes on our ability to use induction. | What is the single most influential book every statistician should read? [closed]
Fooled By Randomness by Taleb
Taleb is a professor at Columbia and an options trader. He made about $800 million dollars in 2008 betting against the market. He also wrote Black Swan. He discusses the |
2,280 | What is the single most influential book every statistician should read? [closed] | I have read the above recommendations and was surprised to find that most of the people who answered the question were people who are not statisticians themselves. With 2 or 3 exceptions ...
As an industrial statistician who also happened to work with social scientists and health professionals I would say that if I could take only one book with me to a desert island it would be George E.P Box, Statistics for Experimenters (Wiley). In his inimitable humorous and lucid style he explains the essence and the philosophy of building mathematical models for real data. Rigorous thinking, no mathematical frivolities, no nonsense, teaches us to think statistically, plot and visualize whatever you can. A masterpiece of a competent applied scientist (chemical engineer turned statistician). Always fun to read again. | What is the single most influential book every statistician should read? [closed] | I have read the above recommendations and was surprised to find that most of the people who answered the question were people who are not statisticians themselves. With 2 or 3 exceptions ...
As an in | What is the single most influential book every statistician should read? [closed]
I have read the above recommendations and was surprised to find that most of the people who answered the question were people who are not statisticians themselves. With 2 or 3 exceptions ...
As an industrial statistician who also happened to work with social scientists and health professionals I would say that if I could take only one book with me to a desert island it would be George E.P Box, Statistics for Experimenters (Wiley). In his inimitable humorous and lucid style he explains the essence and the philosophy of building mathematical models for real data. Rigorous thinking, no mathematical frivolities, no nonsense, teaches us to think statistically, plot and visualize whatever you can. A masterpiece of a competent applied scientist (chemical engineer turned statistician). Always fun to read again. | What is the single most influential book every statistician should read? [closed]
I have read the above recommendations and was surprised to find that most of the people who answered the question were people who are not statisticians themselves. With 2 or 3 exceptions ...
As an in |
2,281 | What is the single most influential book every statistician should read? [closed] | Michael Oakes' Statistical Inference: A Commentary for the Social and Behavioral Sciences.
Elazar Pedhazur's Multiple Regression in Behavioral Research. If you can stand the immense detail and the self-important tone.
In case you're interested, I've reviewed both on Amazon and at https://yellowbrickstats.com/favorites.htm | What is the single most influential book every statistician should read? [closed] | Michael Oakes' Statistical Inference: A Commentary for the Social and Behavioral Sciences.
Elazar Pedhazur's Multiple Regression in Behavioral Research. If you can stand the immense detail and the s | What is the single most influential book every statistician should read? [closed]
Michael Oakes' Statistical Inference: A Commentary for the Social and Behavioral Sciences.
Elazar Pedhazur's Multiple Regression in Behavioral Research. If you can stand the immense detail and the self-important tone.
In case you're interested, I've reviewed both on Amazon and at https://yellowbrickstats.com/favorites.htm | What is the single most influential book every statistician should read? [closed]
Michael Oakes' Statistical Inference: A Commentary for the Social and Behavioral Sciences.
Elazar Pedhazur's Multiple Regression in Behavioral Research. If you can stand the immense detail and the s |
2,282 | What is the single most influential book every statistician should read? [closed] | Rice: Mathematical Statistics and Data Analysis | What is the single most influential book every statistician should read? [closed] | Rice: Mathematical Statistics and Data Analysis | What is the single most influential book every statistician should read? [closed]
Rice: Mathematical Statistics and Data Analysis | What is the single most influential book every statistician should read? [closed]
Rice: Mathematical Statistics and Data Analysis |
2,283 | What is the single most influential book every statistician should read? [closed] | Lots of good books already suggested. But here is another: Gerd Gigerenzer's "Reckoning With Risk" because understanding how statistics affect decisions is more important than getting all the theory right. In fact number one sin of statisticians is failing to communicate clearly. His book talks about the consequences of poor communication and how to avoid it. | What is the single most influential book every statistician should read? [closed] | Lots of good books already suggested. But here is another: Gerd Gigerenzer's "Reckoning With Risk" because understanding how statistics affect decisions is more important than getting all the theory r | What is the single most influential book every statistician should read? [closed]
Lots of good books already suggested. But here is another: Gerd Gigerenzer's "Reckoning With Risk" because understanding how statistics affect decisions is more important than getting all the theory right. In fact number one sin of statisticians is failing to communicate clearly. His book talks about the consequences of poor communication and how to avoid it. | What is the single most influential book every statistician should read? [closed]
Lots of good books already suggested. But here is another: Gerd Gigerenzer's "Reckoning With Risk" because understanding how statistics affect decisions is more important than getting all the theory r |
2,284 | What is the single most influential book every statistician should read? [closed] | I learned a great deal from the Bible of Bayesian statistics:
Jose Bernardo and Adrian Smith (2000) Bayesian Theory. | What is the single most influential book every statistician should read? [closed] | I learned a great deal from the Bible of Bayesian statistics:
Jose Bernardo and Adrian Smith (2000) Bayesian Theory. | What is the single most influential book every statistician should read? [closed]
I learned a great deal from the Bible of Bayesian statistics:
Jose Bernardo and Adrian Smith (2000) Bayesian Theory. | What is the single most influential book every statistician should read? [closed]
I learned a great deal from the Bible of Bayesian statistics:
Jose Bernardo and Adrian Smith (2000) Bayesian Theory. |
2,285 | What is the single most influential book every statistician should read? [closed] | It would probably be Bayesian Data Analysis by Gelman or Deep Learning with Python. But that's a bit like taking streptomycin to the middle ages. These were not written when I started my career and quite a few things from the books would have been big news back then. Some of the most influential things everyone should know are in no single source though (perhaps they should be, but...). | What is the single most influential book every statistician should read? [closed] | It would probably be Bayesian Data Analysis by Gelman or Deep Learning with Python. But that's a bit like taking streptomycin to the middle ages. These were not written when I started my career and qu | What is the single most influential book every statistician should read? [closed]
It would probably be Bayesian Data Analysis by Gelman or Deep Learning with Python. But that's a bit like taking streptomycin to the middle ages. These were not written when I started my career and quite a few things from the books would have been big news back then. Some of the most influential things everyone should know are in no single source though (perhaps they should be, but...). | What is the single most influential book every statistician should read? [closed]
It would probably be Bayesian Data Analysis by Gelman or Deep Learning with Python. But that's a bit like taking streptomycin to the middle ages. These were not written when I started my career and qu |
2,286 | What is the single most influential book every statistician should read? [closed] | I am going to go ahead and propose a standard textbook in the field. I am talking about Probability and Statistics by DeGroot and Schervish, first published in 1975.
This book has served as a textbook for many students and is considered a classic, rightfully so in my opinion. It covers topics such as combinatorics, distributions, Bayesian statistics, likelihood inference and regression analysis. As far as I know no other textbook is so thorough so I believe it is a must-have. | What is the single most influential book every statistician should read? [closed] | I am going to go ahead and propose a standard textbook in the field. I am talking about Probability and Statistics by DeGroot and Schervish, first published in 1975.
This book has served as a textbook | What is the single most influential book every statistician should read? [closed]
I am going to go ahead and propose a standard textbook in the field. I am talking about Probability and Statistics by DeGroot and Schervish, first published in 1975.
This book has served as a textbook for many students and is considered a classic, rightfully so in my opinion. It covers topics such as combinatorics, distributions, Bayesian statistics, likelihood inference and regression analysis. As far as I know no other textbook is so thorough so I believe it is a must-have. | What is the single most influential book every statistician should read? [closed]
I am going to go ahead and propose a standard textbook in the field. I am talking about Probability and Statistics by DeGroot and Schervish, first published in 1975.
This book has served as a textbook |
2,287 | What is the single most influential book every statistician should read? [closed] | The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results by Paul D. Ellis
This book is a "must have" for everyone conducting any scientific research, especially for those who do not come from pure stats/maths. The book below extends the first one regarding confidence intervals.
Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis by Geoff Cumming | What is the single most influential book every statistician should read? [closed] | The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results by Paul D. Ellis
This book if a "must have" for everyone conducting any scientific res | What is the single most influential book every statistician should read? [closed]
The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results by Paul D. Ellis
This book is a "must have" for everyone conducting any scientific research, especially for those who do not come from pure stats/maths. The book below extends the first one regarding confidence intervals.
Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis by Geoff Cumming | What is the single most influential book every statistician should read? [closed]
The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results by Paul D. Ellis
This book if a "must have" for everyone conducting any scientific res |
2,288 | What is the single most influential book every statistician should read? [closed] | Kennedy's A Guide to Econometrics contains a wealth of practical advice about a wide range of statistical analysis. It's somehow both incredibly information-dense and easy to read, and I still learn something new every time I pick it up.
Wooldridge's Introductory Econometrics has a good amount of this kind of discussion too, but as an introductory textbook it is more self-contained. I wish I'd had a course based around it. | What is the single most influential book every statistician should read? [closed] | Kennedy's A Guide to Econometrics contains a wealth of practical advice about a wide range of statistical analysis. It's somehow both incredibly information-dense and easy to read, and I still learn | What is the single most influential book every statistician should read? [closed]
Kennedy's A Guide to Econometrics contains a wealth of practical advice about a wide range of statistical analysis. It's somehow both incredibly information-dense and easy to read, and I still learn something new every time I pick it up.
Wooldridge's Introductory Econometrics has a good amount of this kind of discussion too, but as an introductory textbook it is more self-contained. I wish I'd had a course based around it. | What is the single most influential book every statistician should read? [closed]
Kennedy's A Guide to Econometrics contains a wealth of practical advice about a wide range of statistical analysis. It's somehow both incredibly information-dense and easy to read, and I still learn |
2,289 | What is the single most influential book every statistician should read? [closed] | "Most influential" is a very different notion from "everyone should read". I am not qualified to answer the first - you'd need someone who is an historian of statistics - but for the second, here are some:
Statistics as Principled Argument by Robert Abelson should be read by anyone doing or using statistics in the pursuit of science, humanities, etc.
William S. Cleveland's two books on graphics: The elements of graphing data and Visualizing Data. For statisticians, I'd put these ahead of even Tufte's work, not because Tufte isn't worthwhile but because a) Cleveland wrote with statisticians as his intended audience and b) Cleveland based his recommendations on experimental data about how people look at graphs, rather than intuition.
Exploratory Data Analysis by John Tukey. It's dated but valuable - you can do a lot with a pencil and paper and a brain (at least, if your brain is as good as Tukey's!)
2,290 | What is the meaning of "All models are wrong, but some are useful" | I think its meaning is best analyzed by looking at it in two parts:
"All models are wrong" that is, every model is wrong because it is a simplification of reality. Some models, especially in the "hard" sciences, are only a little wrong. They ignore things like friction or the gravitational effect of tiny bodies. Other models are a lot wrong - they ignore bigger things. In the social sciences, we ignore a lot.
"But some are useful" - simplifications of reality can be quite useful. They can help us explain, predict and understand the universe and all its various components.
This isn't just true in statistics! Maps are a type of model; they are wrong. But good maps are very useful. Examples of other useful but wrong models abound.
2,291 | What is the meaning of "All models are wrong, but some are useful" | It means useful insights can be provided by models which are not a perfect representation of the phenomena they model.
A statistical model is a description of a system using mathematical concepts. As such, in many cases you add a certain layer of abstraction to facilitate your inferential procedure (e.g. normality of measurement errors, compound symmetry in correlation structures, etc.). It is almost impossible for a single model to describe perfectly a real-world phenomenon, given that we ourselves have a subjective view of the world (our sensory system is not perfect); nevertheless, successful statistical inference does happen because our world does have a certain degree of consistency that we exploit. So our almost always wrong models do prove useful.
(I am sure you'll get a big bold answer soon but I tried to be concise on this one!)
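To make the "wrong but useful" point concrete, here is a minimal Python sketch (an editor's illustration, not part of the original answer; the t distribution, sample size, and seed are arbitrary assumptions). A normal-theory confidence interval is built for data that are not actually normal, yet its coverage stays close to the nominal level:

    import numpy as np

    # Deliberately "wrong" model: assume normal errors to form a 95% CI for the mean,
    # while the data actually come from a heavier-tailed t(5) distribution.
    rng = np.random.default_rng(1)
    true_mean, n, reps = 5.0, 50, 2000
    covered = 0
    for _ in range(reps):
        x = true_mean + rng.standard_t(df=5, size=n)    # reality: t(5) noise, not normal
        m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
        lo, hi = m - 1.96 * se, m + 1.96 * se           # normal-theory interval (the "wrong" model)
        covered += (lo <= true_mean <= hi)
    print(f"empirical coverage: {covered / reps:.3f}")  # typically close to the nominal 0.95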
2,292 | What is the meaning of "All models are wrong, but some are useful" | I found this 2009 JSA talk by Thad Tarpey to provide a useful explanation and commentary on the Box passage. He argues that if we regard models as approximations to the truth, we could just as easily call all models right.
Here’s the abstract:
Students of statistics are often introduced to George Box’s famous quote: “all models are wrong, some are useful.” In this talk I argue that this quote, although useful, is wrong. A different and more positive perspective is to acknowledge that a model is simply a means of extracting information of interest from data. The truth is infinitely complex and a model is merely an approximation to the truth. If the approximation is poor or misleading, then the model is useless. In this talk I give examples of correct models that are not true models. I illustrate how the notion of a “wrong” model can lead to wrong conclusions.
2,293 | What is the meaning of "All models are wrong, but some are useful" | For me the actual insight lies in the following aspect:
A model doesn't have to be correct to be useful.
Unfortunately in many sciences it is often forgotten that models don't necessarily need to be exact representations of reality to allow new discoveries and predictions!
So don't waste your time building a complicated model that needs accurate measurements of a myriad of variables. The true genius invents a simple model that does the job.
2,294 | What is the meaning of "All models are wrong, but some are useful" | Because no one has added it, George Box used the phrase quoted to introduce the following section in a book. I believe he does the best job of explaining what he meant:
Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law $PV = RT$ relating pressure $P$, volume $V$ and temperature $T$ of an "ideal" gas via a constant $R$ is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.
For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
Box, G. E. P. (1979), "Robustness in the strategy of scientific model building", in Launer, R. L.; Wilkinson, G. N., Robustness in Statistics, Academic Press, pp. 201–236.
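As a quick numerical illustration of Box's gas example (an editor's sketch, not part of the original answer; the van der Waals constants below are approximate literature values for CO2 and should be treated as assumptions), the ideal gas law tracks a more detailed model well at low densities and drifts away as the gas is compressed:

    # Ideal gas law P = RT/V versus a van der Waals correction for one mole of CO2.
    R = 8.314                  # gas constant, J/(mol K)
    a, b = 0.3640, 4.267e-5    # approximate van der Waals constants for CO2 (SI units)
    T = 300.0                  # temperature, K
    for V in (1e-3, 1e-4, 5e-5):               # molar volume, m^3/mol
        p_ideal = R * T / V                    # the "wrong but useful" model
        p_vdw = R * T / (V - b) - a / V**2     # a less wrong, more complicated model
        print(f"V={V:.0e}  ideal={p_ideal/1e5:8.1f} bar  van der Waals={p_vdw/1e5:8.1f} bar")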
2,295 | What is the meaning of "All models are wrong, but some are useful" | If I may, then just one more comment may be useful. The version of the phrase that I prefer is
(...) all models are approximations. Essentially, all models are wrong, but some are useful (...)
taken from Response Surfaces, Mixtures, and Ridge Analyses by Box and Draper (2007, p. 414, Wiley). Looking at the extended quote, it is clearer what Box meant -- statistical modeling is about approximating reality, and an approximation is never exact, so it is about finding the most appropriate approximation. What is appropriate for your purpose is a subjective matter, which is why there is not a single model that is useful; rather, possibly several are, depending on the purpose of the modeling.
2,296 | What is the meaning of "All models are wrong, but some are useful" | You might think of it this way. The maximum complexity (i.e., entropy) of an object obeys some form of the Bekenstein bound:
$$
I \le \frac{2\pi RE}{\hbar c\ln 2}
$$
where $E$ is the total mass-energy of the object, including rest mass, and $R$ is the radius of a sphere that encloses the object.
That's a big number, in most cases:
The Bekenstein bound for an average human brain would be $2.58991\cdot10^{42}$ bits and represents an upper bound on the information needed to perfectly recreate the average human brain down to the quantum level. This implies that the number of different states ($\Omega=2^I$) of the human brain (and of the mind, if physicalism is true) is at most $10^{7.79640\cdot10^{41}}$.
So do you want to use "the best map", i.e. the territory itself, with all of the wave equations for all the particles in every cell? Absolutely not. Not only would it be a computational disaster, but you would be modeling things that may have essentially nothing to do with what you care about. If all you want to do is, say, identify whether or not I'm awake, you don't need to know what electron #32458 is doing in neuron #844030 ribosome #2305 molecule #2. If you don't model that, your model is indeed "wrong," but if you can identify whether or not I'm awake, your model is definitely useful.
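For readers who want to reproduce the quoted order of magnitude, here is a rough sanity check (an editor's sketch, not part of the original answer; the brain mass of about 1.5 kg and enclosing radius of about 6.8 cm are crude assumptions):

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J s
    c = 2.99792458e8         # speed of light, m/s
    m, R = 1.5, 0.068        # assumed brain mass (kg) and enclosing radius (m)

    E = m * c**2                                              # total mass-energy, J
    I_bits = 2 * math.pi * R * E / (hbar * c * math.log(2))   # Bekenstein bound, bits
    print(f"I <= {I_bits:.2e} bits")                          # on the order of 10^42 bits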
2,297 | What is the meaning of "All models are wrong, but some are useful" | A model cannot provide 100% accurate predictions if there is any randomness in the outcomes. If there were no uncertainty, no randomness, and no error, then it would be considered a fact rather than a model.
The first is very important, because models are frequently used for modelling expectations of events that have not occurred. This almost guarantees that there is some uncertainty about the real events.
Given perfect information, in theory it might be possible to create a model which gives perfect predictions for such precisely known events. However, even given these unlikely circumstances, such a model may be so complex as to be computationally infeasible to use, and may only be accurate at a particular moment in time as other factors change how values change with events.
Since uncertainty and randomness are present in most real-world data, efforts to obtain a perfect model are a futile exercise. Instead, it is more valuable to look at obtaining a sufficiently accurate model that is simple enough to be usable in terms of both the data and the computation required for its use. While these models are known to be imperfect, some of these flaws are well known and can be considered for decision-making based on the models.
Simpler models may be imperfect, but they are also easier to reason about, to compare to one another, and may be easier to work with because they are likely to be less computationally demanding.
2,298 | What is the meaning of "All models are wrong, but some are useful" | I think Peter and user11852 gave great answers. I would also add (by negation) that if a model were really good - in the sense of fitting the data at hand almost perfectly - it would probably be useless because of overfitting (hence, not generalizable).
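To see what "too good on the data at hand" looks like, here is a minimal Python sketch (an editor's illustration, not part of the original answer; the quadratic "truth", noise level, and polynomial degrees are arbitrary assumptions). The near-interpolating fit wins on the training sample and loses on fresh data:

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):                                    # the "truth" is quadratic
        return 1.0 + 2.0 * x - 1.5 * x**2

    x_train = np.linspace(-1, 1, 15)
    x_test = np.linspace(-1, 1, 200)
    y_train = f(x_train) + rng.normal(scale=0.3, size=x_train.size)
    y_test = f(x_test) + rng.normal(scale=0.3, size=x_test.size)

    for degree in (2, 14):                       # simple fit vs. near-interpolating fit
        coeffs = np.polyfit(x_train, y_train, degree)   # high degree may warn about conditioning
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")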
2,299 | What is the meaning of "All models are wrong, but some are useful" | My acid interpretation is: Believing that a mathematical model describes exactly all the factors, and their interactions, governing a phenomenon of interest would be too simplistic and arrogant. We do not even know if the logic we use is enough to understand our universe. However, some mathematical models are a good enough approximation (in terms of the scientific method) to be useful for drawing conclusions about such a phenomenon.
2,300 | What is the meaning of "All models are wrong, but some are useful" | As an astrostatistician (a rare breed perhaps), I find the fame of Box's dictum to be unfortunate. In the physical sciences, we often have a strong consensus for understanding the processes underlying an observed phenomenon, and these processes can often be expressed by mathematical models arising from the laws of gravitation, quantum mechanics, thermodynamics, etc. The statistical goals are to estimate the best-fit model parameters (the physical properties), as well as model selection and validation. A dramatic recent case arose from the March 2013 release of papers reporting the European Space Agency's Planck satellite's measurements of the cosmic microwave background, which convincingly establish a simple 6-parameter `LambdaCDM' model for the Big Bang. I doubt that Box's dictum would apply anywhere within the wide range of advanced statistical methods used in these 29 papers.