3,101 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
In general, to make your sample mean and variance exactly equal to a pre-specified value, you can appropriately shift and scale the variable. Specifically, if $X_1, X_2, ..., X_n$ is a sample, then the new variables
$$ Z_i = \sqrt{c_{1}} \left( \frac{X_i-\overline{X}}{s_{X}} \right) + c_{2} $$
where $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is the sample mean and $ s^{2}_{X} = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2$ is the sample variance are such that the sample mean of the $Z_{i}$'s is exactly $c_2$ and their sample variance is exactly $c_1$.
A similarly constructed transformation can restrict the range:
$$ B_i = a + (b-a) \left( \frac{ X_i - \min (\{X_1, ..., X_n\}) }{\max (\{X_1, ..., X_n\}) - \min (\{X_1, ..., X_n\}) } \right) $$
will produce a data set $B_1, ..., B_n$ that is restricted to the interval $[a,b]$.
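For instance, a quick sketch in R (the targets $c_1=4$ and $c_2=10$, the interval $[0,1]$ and the starting sample are all arbitrary choices here):
set.seed(1)
x <- rexp(50, rate = 2)                      # any starting sample
z <- sqrt(4) * (x - mean(x)) / sd(x) + 10    # target variance 4, target mean 10
c(mean(z), var(z))                           # exactly 10 and 4 (up to floating point)
b <- 0 + (1 - 0) * (x - min(x)) / (max(x) - min(x))
range(b)                                     # exactly 0 and 1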
Note: These types of shifting/scaling will, in general, change the distributional family of the data, even if the original data comes from a location-scale family.
Within the context of the normal distribution, the mvrnorm function in R (from the MASS package) allows you to simulate normal (or multivariate normal) data with a pre-specified sample mean/covariance by setting empirical=TRUE. Specifically, this function simulates data from the conditional distribution of a normally distributed variable, given that the sample mean and (co)variance are equal to a pre-specified value. Note that the resulting marginal distributions are not normal, as pointed out by @whuber in a comment to the main question.
Here is a simple univariate example where the sample mean (from a sample of $n=4$) is constrained to be 0 and the sample standard deviation is 1. We can see that the first element is far more similar to a uniform distribution than a normal distribution:
library(MASS)
z = rep(0,10000)
for(i in 1:10000)
{
x = mvrnorm(n = 4, rep(0,1), 1, tol = 1e-6, empirical = TRUE)
z[i] = x[1]
}
hist(z, col="blue")
3,102 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
Regarding your request for papers, there is:
Chatterjee, S. & Firat, A. (2007). Generating data with identical statistics but dissimilar graphics: A follow up to the Anscombe dataset. The American Statistician, 61, 3, pp. 248-254.
This isn't quite what you're looking for, but might serve as grist for the mill.
There is another strategy that no one seems to have mentioned. It is possible to generate $N-k$ (pseudo) random data out of a set of size $N$ such that the whole set meets $k$ constraints, so long as the remaining $k$ data are fixed at appropriate values. The required values should be solvable with a system of $k$ equations, algebra, and some elbow grease.
For example, to generate a set of $N$ data from a normal distribution that will have a given sample mean, $\bar x$, and variance, $s^2$, you will need to fix the values of two points: $y$ and $z$. Since the sample mean is:
$$
\bar x = \frac{\sum_{i=1}^{N-2}x_i\; + \;y\!+\!z}{N}
$$
$y$ must be:
$$
y = N\bar x\; - \;\left(\sum_{i=1}^{N-2}x_i\!+\!z\right)
$$
The sample variance is:
$$
s^2 = \frac{\sum_{i=1}^{N-2}(x_i - \bar x)^2\; + \;(y - \bar x)^2\!+\!(z - \bar x)^2}{N-1}
$$
thus (after substituting the above for $y$, foiling / distributing, & rearranging...) we get:
$$
2(N\bar{x}\! - \!\sum_{i=1}^{N-2}x_i)z - 2z^2 = N\bar{x}^2(N\!-\!1) + \sum_{i=1}^{N-2}x_i^2 + \left[\sum_{i=1}^{N-2}x_i\right]^2 - 2N\bar{x}\sum_{i=1}^{N-2}x_i - (N\!-\!1)s^2
$$
If we take $a=-2$, $b=2(N\bar{x} - \sum_{i=1}^{N-2}x_i)$, and $c$ as the negation of the RHS, we can solve for $z$ using the quadratic formula. For example, in R, the following code could be used:
find.yz = function(x, xbar, s2){
N = length(x) + 2
sumx = sum(x)
sx2 = as.numeric(x%*%x) # this is the sum of x^2
a = -2
b = 2*(N*xbar - sumx)
c = -N*xbar^2*(N-1) - sx2 - sumx^2 + 2*N*xbar*sumx + (N-1)*s2
rt = sqrt(b^2 - 4*a*c)
z = (-b + rt)/(2*a)
y = N*xbar - (sumx + z)
newx = c(x, y, z)
return(newx)
}
set.seed(62)
x = rnorm(2)
newx = find.yz(x, xbar=0, s2=1)
newx # [1] 0.8012701 0.2844567 0.3757358 -1.4614627
mean(newx) # [1] 0
var(newx) # [1] 1
There are some things to understand about this approach. First, it's not guaranteed to work. For example, it is possible that your initial $N-2$ data are such that no values $y$ and $z$ exist that will make the variance of the resulting set equal $s^2$. Consider:
set.seed(22)
x = rnorm(2)
newx = find.yz(x, xbar=0, s2=1)
Warning message:
In sqrt(b^2 - 4 * a * c) : NaNs produced
newx # [1] -0.5121391 2.4851837 NaN NaN
var(c(x, mean(x), mean(x))) # [1] 1.497324
Second, whereas standardizing makes the marginal distributions of all your variates more uniform, this approach only affects the last two values, but makes their marginal distributions skewed:
set.seed(82)
xScaled = matrix(NA, ncol=4, nrow=10000)
for(i in 1:10000){
x = rnorm(4)
xScaled[i,] = scale(x)
}
set.seed(82)
xDf = matrix(NA, ncol=4, nrow=10000)
i = 1
while(i<10001){
x = rnorm(2)
xDf[i,] = try(find.yz(x, xbar=0, s2=2), silent=TRUE) # keeps the code from crashing
if(!is.nan(xDf[i,4])){ i = i+1 } # increments if worked
}
Third, the resulting sample may not look very normal; it might look like it has 'outliers' (i.e., points that come from a different data generating process than the rest), since that is essentially the case. This is less likely to be a problem with larger sample sizes, as the sample statistics from the generated data should converge to the required values and thus need less adjustment. With smaller samples, you could always combine this approach with an accept / reject algorithm that tries again if the generated sample has shape statistics (e.g., skewness and kurtosis) that are outside of acceptable bounds (cf., @cardinal's comment), or extend this approach to generate a sample with a fixed mean, variance, skewness, and kurtosis (I'll leave the algebra up to you, though). Alternatively, you could generate a small number of samples and use the one with the smallest (say) Kolmogorov-Smirnov statistic.
library(moments)
set.seed(7900)
x = rnorm(18)
newx.ss7900 = find.yz(x, xbar=0, s2=1)
skewness(newx.ss7900) # [1] 1.832733
kurtosis(newx.ss7900) - 3 # [1] 4.334414
ks.test(newx.ss7900, "pnorm")$statistic # 0.1934226
set.seed(200)
x = rnorm(18)
newx.ss200 = find.yz(x, xbar=0, s2=1)
skewness(newx.ss200) # [1] 0.137446
kurtosis(newx.ss200) - 3 # [1] 0.1148834
ks.test(newx.ss200, "pnorm")$statistic # 0.1326304
set.seed(4700)
x = rnorm(18)
newx.ss4700 = find.yz(x, xbar=0, s2=1)
skewness(newx.ss4700) # [1] 0.3258491
kurtosis(newx.ss4700) - 3 # [1] -0.02997377
ks.test(newx.ss4700, "pnorm")$statistic # 0.07707929
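A rough sketch of such an accept/reject wrapper, reusing find.yz and the moments package from above (the shape bounds and sample size are arbitrary):
find.yz.ar = function(n, xbar, s2, skew.max = 0.5, exkurt.max = 0.5, max.tries = 1000){
  for(i in 1:max.tries){
    newx = suppressWarnings(find.yz(rnorm(n - 2), xbar = xbar, s2 = s2))
    if(any(is.nan(newx))) next                               # no real solution; redraw
    if(abs(skewness(newx)) < skew.max &&
       abs(kurtosis(newx) - 3) < exkurt.max) return(newx)    # shape statistics acceptable
  }
  stop("no acceptable sample found")
}
newx = find.yz.ar(20, xbar = 0, s2 = 1)
c(mean(newx), var(newx))   # 0 and 1 (up to floating point)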
3,103 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
Are there any programs in R that do this?
The Runuran R package contains many methods for generating random variates. It uses C libraries from the UNU.RAN (Universal Non-Uniform RAndom Number generator) project. My own knowledge of the field of random variate generation is limited, but the Runuran vignette provides a nice overview. Below are the available methods in the Runuran package, taken from the vignette:
Continuous distributions:
Adaptive Rejection Sampling
Inverse Transformed Density Rejection
Polynomial Interpolation of Inverse CDF
Simple Ratio-of-Uniforms Method
Transformed Density Rejection
Discrete distributions:
Discrete Automatic Rejection Inversion
Alias-Urn Method
Guide-Table Method for Discrete Inversion
Multivariate distributions:
Hit-and-Run algorithm with Ratio-of-Uniforms Method
Multivariate Naive Ratio-of-Uniforms Method
Example:
For a quick example, suppose you wanted to generate a Normal distribution bounded between 0 and 100:
require("Runuran")
## Normal distribution bounded between 0 and 100
d1 <- urnorm(n = 1000, mean = 50, sd = 25, lb = 0, ub = 100)
summary(d1)
sd(d1)
hist(d1)
The urnorm() function is a convenient wrapper function. I believe that behind the scenes it uses the Polynomial Interpolation of Inverse CDF method but am not sure. For something more complex, say, a discrete Normal distribution bounded between 0 and 100:
require("Runuran")
## Discrete normal distribution bounded between 0 and 100
# Create UNU.RAN discrete distribution object
discrete <- unuran.discr.new(pv = dnorm(0:100, mean = 50, sd = 25), lb = 0, ub = 100)
# Create UNU.RAN object using the Guide-Table Method for Discrete Inversion
unr <- unuran.new(distr = discrete, method = "dgt")
# Generate random variates from the UNU.RAN object
d2 <- ur(unr = unr, n = 1000)
summary(d2)
sd(d2)
head(d2)
hist(d2)
3,104 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
The general technique is the 'Rejection Method', where you just reject results that don't meet your constraints. Unless you have some sort of guidance (like MCMC), you could be generating a lot of cases (depending on your scenario) which are rejected!
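A bare-bones sketch of that rejection idea, with targets mean 0 and sd 1 and an arbitrary tolerance (tighten the tolerance and the rejection rate climbs quickly):
set.seed(42)
tol   <- 0.05
tries <- 0
repeat {
  x <- rnorm(100)
  tries <- tries + 1
  if (abs(mean(x)) < tol && abs(sd(x) - 1) < tol) break   # accept only if close enough
}
c(mean(x), sd(x), tries)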
Where you're looking for something like a mean and standard deviation and you can create a distance metric of some kind to say how far you are away from your goal, you can use optimisation to search for the input variables which give you the desired output values.
As an ugly example, we will look for a random uniform vector of length 100 which has mean=0 and standard deviation=1.
# simplistic optimisation example
# I am looking for a mean of zero and a standard deviation of one
# but starting from a plain uniform(0,1) distribution :-)
# create a function to optimise
fun <- function(xvec, N=100) {
xmin <- xvec[1]
xmax <- xvec[2]
x <- runif(N, xmin, xmax)
xdist <- (mean(x) - 0)^2 + (sd(x) - 1)^2
xdist
}
xr <- optim(c(0,1), fun)
# now lets test those results
X <- runif(100, xr$par[1], xr$par[2])
mean(X) # approx 0
sd(X) # approx 1
3,105 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
It seems that there is an R package meeting your requirement published just yesterday!
simstudy, by Keith Goldfeld
Simulates data sets in order to explore modeling techniques or better understand data generating processes. The user specifies a set of relationships between covariates, and generates data based on these specifications. The final data sets can represent data from randomized control trials, repeated measure (longitudinal) designs, and cluster randomized trials. Missingness can be generated using various mechanisms (MCAR, MAR, NMAR).
3,106 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
This is an answer coming so late it is presumably meaningless, but there is always an MCMC solution to the question. Namely, to project the joint density of the sample $$\prod_{i=1}^n f(x_i)$$ on the manifold defined by the constraints, for instance
$$\sum_{i=1}^n x_i=\mu_0\qquad\sum_{i=1}^n x_i^2=\sigma_0^2$$
The only issue is then in simulating values over that manifold, i.e., finding a parameterisation of the correct dimension. A 2015 paper by Bornn, Shephard and Solgi studies this very problem (with an interesting if not ultimate answer).
3,107 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
This answer considers another approach to the case where you want to force the variates to lie in a specified range and additionally dictate the mean and/or variance.
Restrict our attention to the unit interval $[0,1]$. Let's use a weighted mean for generality, so fix some weights $w_k\in[0,1]$ with $\sum_{k=1}^Nw_k=1$, or set $w_k=1/N$ if you want standard weighting. Suppose the quantities $\mu\in(0,1)$ and $0<\sigma^2<\mu(1-\mu)$ represent the desired (weighted) mean and (weighted) variance, respectively. The upper bound on $\sigma^2$ is necessary because that's the maximum variance possible on a unit interval. We are interested in drawing some variates $x_1,...,x_N$ from $[0,1]$ with these moment restrictions.
First we draw some variates $y_1,...,y_N$ from any distribution, like $N(0,1)$. This distribution will affect the shape of the final distribution. Then we constrain them to the unit interval $[0,1]$ using a logistic function:
$$
x_k=\frac{1}{1+e^{-(y_k v-h)}}
$$
Before we do that, however, as seen in the equation above, we transform the $y_k$'s with the translation $h$ and scale $v$.
This is analogous to the first equation in @Macro's answer.
The trick is now to choose $h$ and $v$ so that the transformed variables $x_1,...,x_N$ have the desired moment(s). That is, we require one or both of the following to hold:
$$
\mu=\sum_{k=1}^N \frac{w_k}{1+e^{-(y_k v-h)}} \\
\sigma^2=\sum_{k=1}^N \frac{w_k}{(1+e^{-(y_k v-h)})^2} - \left( \sum_{k=1}^N \frac{w_k}{1+e^{-(y_k v-h)}} \right)^2
$$
Inverting these equations for $v$ and $h$ analytically is not feasible, but doing so numerically is straightforward, especially since derivatives with respect to $v$ and $h$ are easy to compute; it only takes a few iterations of Newton's method.
As a first example, let's say we only care about constraining the weighted mean and not the variance. Fix $\mu=0.8$, $v=1$, $w_k=1/N$, $N=200000$.
Then for the underlying distributions $N(0,1)$, $N(0,0.1)$ and $\text{Unif}(0,1)$ we end up with the following histograms, respectively, and such that the mean of the variates is exactly $0.8$ (even for small $N$):
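For that first, mean-only example with the $N(0,1)$ draws, a minimal sketch of the numerical solve (a plain Newton iteration on $h$ with $v$ fixed at 1; the object names are just illustrative):
set.seed(1)
y  <- rnorm(200000)                      # underlying N(0,1) draws
v  <- 1; mu <- 0.8; h <- 0
for (iter in 1:25) {
  s  <- 1 / (1 + exp(-(y * v - h)))      # current transformed values in (0,1)
  f  <- mean(s) - mu                     # residual of the mean constraint
  fp <- -mean(s * (1 - s))               # derivative of mean(s) with respect to h
  h  <- h - f / fp                       # Newton step
  if (abs(f) < 1e-12) break
}
x <- 1 / (1 + exp(-(y * v - h)))
mean(x)                                  # 0.8 (to numerical precision)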
Next, let's constrain both the mean and variance.
Take $\mu=0.2$, $w_k=1/N$, $N=2000$ and consider the three desired standard deviations $\sigma=0.1,0.05,0.01$.
Using the same underlying distribution $N(0,1)$, here are the histograms for each:
Note that these may look a bit beta-distributed, but they are not.
3,108 | How to simulate data that satisfy specific constraints such as having specific mean and standard deviation?
In my answer here, I listed three R packages for doing this:
SimCorrMix
SimMultiCorrData
simrel
3,109 | Clustering with a distance matrix
There are a number of options.
k-medoids clustering
First, you could try partitioning around medoids (pam) instead of using k-means clustering. This one is more robust, and could give better results. Van der Laan reworked the algorithm. If you're going to implement it yourself, his article is worth a read.
There is a specific k-medoids clustering algorithm for large datasets. The algorithm is called Clara in R, and is described in chapter 3 of Finding Groups in Data: An Introduction to Cluster Analysis. by Kaufman, L and Rousseeuw, PJ (1990).
hierarchical clustering
Instead of UPGMA, you could try some other hierarchical clustering options. First of all, when you use hierarchical clustering, be sure you define the partitioning method properly. This partitioning method is essentially how the distances between observations and clusters are calculated. I mostly use Ward's method or complete linkage, but other options might be the choice for you.
Don't know if you tried it yet, but the single linkage method or neighbour joining is often preferred above UPGMA in phylogenetic applications. If you didn't try it yet, you could give it a shot as well, as it often gives remarkably good results.
In R you can take a look at the package cluster. All described algorithms are implemented there. See ?pam, ?clara, ?hclust, ... Check also the different implementations of the algorithm in ?kmeans. Sometimes choosing another algorithm can improve the clustering substantially.
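For instance, assuming your distances are in a symmetric matrix called dist.matrix (a made-up name here), a k-medoids run on the dissimilarities directly could look like:
library(cluster)
d   <- as.dist(dist.matrix)          # dist.matrix: your symmetric distance matrix
fit <- pam(d, k = 3, diss = TRUE)    # partitioning around medoids, 3 clusters
fit$clustering                       # cluster membership per node
fit$medoids                          # the medoid of each cluster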
EDIT: Just thought about something: if you work with graphs, nodes and the like, you should take a look at the Markov clustering algorithm as well. That one is used for example in grouping sequences based on blast similarities, and performs incredibly well. It can do the clustering for you, or give you some ideas on how to solve the research problem you're focusing on. Without knowing anything about your problem in fact, I guess his results are definitely worth looking at. If I may say so, I still consider this method of Stijn van Dongen one of the nicest results in clustering I've ever encountered.
http://www.micans.org/mcl/
3,110 | Clustering with a distance matrix
One way to highlight clusters on your distance matrix is by way of Multidimensional scaling. When projecting individuals (here what you call your nodes) in a 2D space, it provides a comparable solution to PCA. This is unsupervised, so you won't be able to specify a priori the number of clusters, but I think it may help to quickly summarize a given distance or similarity matrix.
Here is what you would get with your data:
tmp <- matrix(c(0,20,20,20,40,60,60,60,100,120,120,120,
20,0,20,20,60,80,80,80,120,140,140,140,
20,20,0,20,60,80,80,80,120,140,140,140,
20,20,20,0,60,80,80,80,120,140,140,140,
40,60,60,60,0,20,20,20,60,80,80,80,
60,80,80,80,20,0,20,20,40,60,60,60,
60,80,80,80,20,20,0,20,60,80,80,80,
60,80,80,80,20,20,20,0,60,80,80,80,
100,120,120,120,60,40,60,60,0,20,20,20,
120,140,140,140,80,60,80,80,20,0,20,20,
120,140,140,140,80,60,80,80,20,20,0,20,
120,140,140,140,80,60,80,80,20,20,20,0),
nr=12, dimnames=list(LETTERS[1:12], LETTERS[1:12]))
d <- as.dist(tmp)
mds.coor <- cmdscale(d)
plot(mds.coor[,1], mds.coor[,2], type="n", xlab="", ylab="")
text(jitter(mds.coor[,1]), jitter(mds.coor[,2]),
rownames(mds.coor), cex=0.8)
abline(h=0,v=0,col="gray75")
I added a small jittering on the x and y coordinates to allow distinguishing cases. Replace tmp by 1-tmp if you'd prefer working with dissimilarities, but this yields essentially the same picture. However, here is the hierarchical clustering solution, with the single agglomeration criterion:
plot(hclust(dist(1-tmp), method="single"))
You might further refine the selection of clusters based on the dendrogram, or more robust methods, see e.g. this related question: What stop-criteria for agglomerative hierarchical clustering are used in practice?
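If you want the group labels themselves rather than just the dendrogram, cutting a single-linkage tree built on the distances in tmp gives them directly:
hc <- hclust(as.dist(tmp), method = "single")
cutree(hc, k = 3)    # A-D, E-H and I-L end up in three separate groups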
3,111 | Clustering with a distance matrix
Spectral Clustering [1] requires an affinity matrix, clustering being defined by the $K$ first eigenfunctions of the decomposition of
$$\textbf{L} = \textbf{D}^{-1/2} \textbf{A} \textbf{D}^{-1/2}$$
With $\textbf{A}$ being the affinity matrix of the data and $\textbf{D}$ being the diagonal matrix defined as (edit: sorry for being unclear, but you can generate an affinity matrix from a distance matrix provided you know the maximum possible/reasonable distance as $A_{ij}=1-d_{ij}/\max(d)$, though other schemes exist as well)
$$\textbf{D}_{i,i}=\sum_{j}{\textbf{A}_{i,j}}, \qquad \textbf{D}_{i,j}=0 \ \text{ for } i \neq j$$
With $\textbf{X}$ being the eigendecomposition of $\textbf{L}$, with eigenfunctions stacked as columns, keeping only the $K$ largest eigenvectors in $\textbf{X}$, we define the row normalized matrix
$$\textbf{Y}_{ij}=\frac{\textbf{X}_{ij}}{\left(\sum_{j}{\left( \textbf{X}_{ij} \right)^{2}}\right)^{1/2}}$$
Each row of $\textbf{Y}$ is a point in $\mathbb{R}^{k}$ and can be clustered with an ordinary clustering algorithm (like K-means).
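A compact sketch of these steps in R, assuming A is an affinity matrix built as described (for instance A <- 1 - d/max(d) from a distance matrix d) and K is the desired number of clusters:
spectral_cluster <- function(A, K) {
  Dm12 <- diag(1 / sqrt(rowSums(A)))                               # D^(-1/2)
  L    <- Dm12 %*% A %*% Dm12                                      # normalized matrix L
  X    <- eigen(L, symmetric = TRUE)$vectors[, 1:K, drop = FALSE]  # K largest eigenvectors
  Y    <- X / sqrt(rowSums(X^2))                                   # row-normalize
  kmeans(Y, centers = K, nstart = 25)$cluster                      # cluster the rows of Y
}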
Look at my answer here to see an example: https://stackoverflow.com/a/37933688/2874779
[1] Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2, 849-856. Pg.2
3,112 | Clustering with a distance matrix
What you're doing is trying to cluster together nodes of a graph, or network, that are close to each other.
There is an entire field of research dedicated to this problem which is sometimes called community detection in networks.
Looking at your problem from this point of view can probably clarify things.
You will find many algorithms dedicated to this problem and in fact some of them are based on the same idea that you had, which is to measure distances between nodes with random walks.
The problem is often formulated as modularity optimization [1], where the modularity of a clustering measures how well the clustering separates the network into densely connected clusters (i.e. clusters where nodes are close to each other).
Actually, you can show that the modularity is equal to the probability that a random walker stays, after one step, in the same cluster as it started in, minus the same probability for two independent random walkers [2].
If you allow for more steps of the random walkers, you are looking for a coarser clustering of the network. The number of steps of the random walk therefore plays the role of a resolution parameter that allows you to recover a hierarchy of clusters. In this case, the quantity that expresses the tendency of random walkers to stay in their initial cluster after t steps is called the Markov stability of a partition at time t [2], and it is equivalent to the modularity when t=1.
You can therefore solve your problem by finding the clustering of your graph that optimizes the stability at a given time t, where t is the resolution parameter (larger t will give you larger clusters). One of the most used methods to optimize the stability (or modularity with a resolution parameter) is the Louvain algorithm [3].
You can find an implementation here: https://github.com/michaelschaub/generalizedLouvain.
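As a rough sketch of that pipeline with the igraph package (dist.matrix is a placeholder for your distance matrix, and turning distances into edge weights this way is just one simple choice; this is plain modularity, i.e. t = 1):
library(igraph)
W <- 1 - dist.matrix / max(dist.matrix)   # similarities: larger weight = closer nodes
diag(W) <- 0
g  <- graph_from_adjacency_matrix(W, mode = "undirected", weighted = TRUE, diag = FALSE)
cl <- cluster_louvain(g)                  # Louvain modularity optimization
membership(cl)                            # community label for each node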
[1] Newman, M. E. J. & Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 69, 026113 (2004).
[2] Delvenne, J.-C., Yaliraki, S. N. & Barahona, M. Stability of graph communities across time scales. Proc. Natl. Acad. Sci. 107, 12755–12760 (2010).
[3] Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, P10008 (2008).
3,113 | Clustering with a distance matrix
Well, it is possible to perform K-means clustering on a given similarity matrix: first you need to center the matrix and then take its eigenvalues. The final and most important step is multiplying the first two eigenvectors by the square roots of the corresponding eigenvalues to get the coordinate vectors, and then moving on with K-means. The code below shows how to do it; fpdist is the similarity matrix, which you can replace with your own.
mds.tau <- function(H)
{
n <- nrow(H)
P <- diag(n) - 1/n
return(-0.5 * P %*% H %*% P)
}
B<-mds.tau(fpdist)
eig <- eigen(B, symmetric = TRUE)
v <- eig$values[1:2]
#convert negative values to 0.
v[v < 0] <- 0
X <- eig$vectors[, 1:2] %*% diag(sqrt(v))
library(vegan)
km <- kmeans(X, centers=5, iter.max=1000, nstart=10000)
#embedding using MDS
cmd <- cmdscale(fpdist)
3,114 | Clustering with a distance matrix
Before you try running the clustering on the matrix you can try doing one of the factor analysis techniques, and keep just the most important variables to compute the distance matrix.
Another thing you can do is to try fuzzy methods, which tend to work better (at least in my experience) in this kind of case; try first Cmeans, Fuzzy K-medoids, and especially GKCmeans.
3,115 | Clustering with a distance matrix
Co-clustering is one of the answers, I think, though I'm no expert here. Co-clustering isn't a newborn method, so you can find some algorithms in R; the Wikipedia article presents the concepts well. Another method that isn't mentioned is graph partitioning (but I see that the graph wouldn't be sparse; graph partitioning would be useful if your matrix were dominated by values meaning maximum distance, i.e. no similarity between the nodes).
3,116 | Clustering with a distance matrix
You can also use Kruskal's algorithm for finding minimum spanning trees, stopping as soon as you get the three clusters. I tried this and it produces the clusters you mentioned: {ABCD}, {EFGH} and {IJKL}.
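One way to get this without hand-coding Kruskal: build the minimum spanning tree and drop its two heaviest edges, which is what stopping Kruskal early amounts to. A sketch assuming the igraph package and reusing the tmp distance matrix from an earlier answer:
library(igraph)
g    <- graph_from_adjacency_matrix(tmp, mode = "undirected", weighted = TRUE, diag = FALSE)
tree <- mst(g)                                           # minimum spanning tree on the distances
k    <- 3
pruned <- delete_edges(tree, order(E(tree)$weight, decreasing = TRUE)[1:(k - 1)])
components(pruned)$membership                            # {A-D}, {E-H}, {I-L}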
3,117 | Why bother with the dual problem when fitting SVM?
Based on the lecture notes referenced in @user765195's answer (thanks!), the most apparent reasons seem to be:
Solving the primal problem, we obtain the optimal $w$, but know nothing about the $\alpha_i$. In order to classify a query point $x$ we need to explicitly compute the scalar product $w^Tx$, which may be expensive if $d$ is large.
Solving the dual problem, we obtain the $\alpha_i$ (where $\alpha_i = 0$ for all but a few points - the support vectors). In order to classify a query point $x$, we calculate
$$ w^Tx + w_0 = \left(\sum_{i=1}^{n}{\alpha_i y_i x_i} \right)^T x + w_0 = \sum_{i=1}^{n}{\alpha_i y_i \langle x_i, x \rangle} + w_0 $$
This term is very efficiently calculated if there are only a few support vectors. Further, since we now have a scalar product only involving data vectors, we may apply the kernel trick. | Why bother with the dual problem when fitting SVM? | Based on the lecture notes referenced in @user765195's answer (thanks!), the most apparent reasons seem to be:
Solving the primal problem, we obtain the optimal $w$, but know nothing about the $\alpha | Why bother with the dual problem when fitting SVM?
Based on the lecture notes referenced in @user765195's answer (thanks!), the most apparent reasons seem to be:
Solving the primal problem, we obtain the optimal $w$, but know nothing about the $\alpha_i$. In order to classify a query point $x$ we need to explicitly compute the scalar product $w^Tx$, which may be expensive if $d$ is large.
Solving the dual problem, we obtain the $\alpha_i$ (where $\alpha_i = 0$ for all but a few points - the support vectors). In order to classify a query point $x$, we calculate
$$ w^Tx + w_0 = \left(\sum_{i=1}^{n}{\alpha_i y_i x_i} \right)^T x + w_0 = \sum_{i=1}^{n}{\alpha_i y_i \langle x_i, x \rangle} + w_0 $$
This term is very efficiently calculated if there are only a few support vectors. Further, since we now have a scalar product only involving data vectors, we may apply the kernel trick. | Why bother with the dual problem when fitting SVM?
Based on the lecture notes referenced in @user765195's answer (thanks!), the most apparent reasons seem to be:
Solving the primal problem, we obtain the optimal $w$, but know nothing about the $\alpha |
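To make the classification step concrete, here is a small illustrative R function (the function name, tolerance and toy linear kernel are mine, not from the lecture notes) that evaluates the dual decision value using only the support vectors, so a nonlinear kernel can be slotted in without ever forming $w$ explicitly:
decision_dual <- function(x, X, y, alpha, w0,
                          kernel = function(a, b) sum(a * b)) {   # linear kernel by default
  sv <- which(alpha > 1e-8)                             # only support vectors have nonzero alpha
  k  <- apply(X[sv, , drop = FALSE], 1, kernel, b = x)  # kernel value between each SV and the query x
  sum(alpha[sv] * y[sv] * k) + w0
}
# With the linear kernel this equals w^T x + w0 with w = colSums(alpha * y * X);
# swapping in, say, an RBF kernel is exactly the kernel trick mentioned above.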
3,118 | Why bother with the dual problem when fitting SVM? | Read the second paragraph on page 13 and the surrounding discussion in these notes:
Tengyu Ma and Andrew Ng. Part V: Kernel Methods. CS229 Lecture Notes. 2020 October 7. | Why bother with the dual problem when fitting SVM? | Read the second paragraph on page 13 and the surrounding discussion in these notes:
Tengyu Ma and Andrew Ng. Part V: Kernel Methods. CS229 Lecture Notes. 2020 October 7. | Why bother with the dual problem when fitting SVM?
Read the second paragraph on page 13 and the surrounding discussion in these notes:
Tengyu Ma and Andrew Ng. Part V: Kernel Methods. CS229 Lecture Notes. 2020 October 7. | Why bother with the dual problem when fitting SVM?
Read the second paragraph on page 13 and the surrounding discussion in these notes:
Tengyu Ma and Andrew Ng. Part V: Kernel Methods. CS229 Lecture Notes. 2020 October 7. |
3,119 | Why bother with the dual problem when fitting SVM? | Here's one reason why the dual formulation is attractive from a numerical optimization point of view. You can find the details in the following paper:
Hsieh, C.-J., Chang, K.-W., Lin, C.-J., Keerthi, S. S., and Sundararajan, S., “A dual coordinate descent method for large-scale linear SVM”, Proceedings of the 25th International Conference on Machine Learning, Helsinki, 2008.
The dual formulation involves a single affine equality constraint and n bound constraints.
1. The affine equality constraint can be "eliminated" from the dual formulation.
This can be done by simply looking at your data in $R^{d+1}$ via the embedding of $R^d$ in $R^{d+1}$ resulting from adding a single "$1$" coordinate to each data point, i.e. $R^d \to R^{d+1}: (a_1,..., a_d) \mapsto (a_1, ..., a_d, 1)$.
Doing this for all points in the training set recasts the linear separability problem in $R^{d+1}$ and eliminates the constant term $w_0$ from your classifier, which in turn eliminates the affine equality constraint from the dual.
2. By point 1, the dual can be easily cast as a convex quadratic optimization problem whose constraints are only bound constraints.
3. The dual problem can now be solved efficiently, i.e. via a dual coordinate descent algorithm that yields an epsilon-optimal solution in $O(\log(\frac{1}{\varepsilon}))$.
This is done by noting that fixing all alphas except one yields a closed-form solution. You can then cycle through all alphas one by one (e.g. choosing one at random, fixing all other alphas, calculating the closed form solution). One can show that you'll thus obtain a near-optimal solution "rather quickly" (see Theorem 1 in the aforementioned paper).
There are many other reasons why the dual problem is attractive from an optimization point of view, some of which exploit the fact that it has only one affine equality constraint (the remaining constraints are all bound constraints) while others exploit the observation that at the solution of the dual problem "often most alphas" are zero (non-zero alphas corresponding to support vectors).
You can get a good overview of numerical optimization considerations for SVMs from Stephen Wright's presentation at the Computational Learning Workshop (2009).
P.S.: I'm new here. Apologies for not being good at using mathematical notation on this website. | Why bother with the dual problem when fitting SVM? | Here's one reason why the dual formulation is attractive from a numerical optimization point of view. You can find the details in the following paper:
Hsieh, C.-J., Chang, K.-W., Lin, C.-J., Keerthi, | Why bother with the dual problem when fitting SVM?
Here's one reason why the dual formulation is attractive from a numerical optimization point of view. You can find the details in the following paper:
Hsieh, C.-J., Chang, K.-W., Lin, C.-J., Keerthi, S. S., and Sundararajan, S., “A dual coordinate descent method for large-scale linear SVM”, Proceedings of the 25th International Conference on Machine Learning, Helsinki, 2008.
The dual formulation involves a single affine equality constraint and n bound constraints.
1. The affine equality constraint can be "eliminated" from the dual formulation.
This can be done by simply looking at your data in $R^{d+1}$ via the embedding of $R^d$ in $R^{d+1}$ resulting from adding a single "$1$" coordinate to each data point, i.e. $R^d \to R^{d+1}: (a_1,..., a_d) \mapsto (a_1, ..., a_d, 1)$.
Doing this for all points in the training set recasts the linear separability problem in $R^{d+1}$ and eliminates the constant term $w_0$ from your classifier, which in turn eliminates the affine equality constraint from the dual.
2. By point 1, the dual can be easily cast as a convex quadratic optimization problem whose constraints are only bound constraints.
3. The dual problem can now be solved efficiently, i.e. via a dual coordinate descent algorithm that yields an epsilon-optimal solution in $O(\log(\frac{1}{\varepsilon}))$.
This is done by noting that fixing all alphas except one yields a closed-form solution. You can then cycle through all alphas one by one (e.g. choosing one at random, fixing all other alphas, calculating the closed form solution). One can show that you'll thus obtain a near-optimal solution "rather quickly" (see Theorem 1 in the aforementioned paper).
There are many other reasons why the dual problem is attractive from an optimization point of view, some of which exploit the fact that it has only one affine equality constraint (the remaining constraints are all bound constraints) while others exploit the observation that at the solution of the dual problem "often most alphas" are zero (non-zero alphas corresponding to support vectors).
You can get a good overview of numerical optimization considerations for SVMs from Stephen Wright's presentation at the Computational Learning Workshop (2009).
P.S.: I'm new here. Apologies for not being good at using mathematical notation on this website. | Why bother with the dual problem when fitting SVM?
Here's one reason why the dual formulation is attractive from a numerical optimization point of view. You can find the details in the following paper:
Hsieh, C.-J., Chang, K.-W., Lin, C.-J., Keerthi, |
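For illustration only, here is a compact R sketch of the coordinate-descent idea summarised in points 1-3 above (it assumes a soft-margin linear SVM with cost parameter C and the bias folded in via a constant column; it is a teaching toy, not the optimised algorithm of the paper):
dcd_linear_svm <- function(X, y, C = 1, epochs = 50) {
  X <- cbind(X, 1)                       # the extra "1" coordinate removes the equality constraint
  n <- nrow(X)
  alpha <- numeric(n)                    # feasible start: all alphas at zero
  w <- rep(0, ncol(X))                   # maintained as w = sum_i alpha_i y_i x_i
  Qii <- rowSums(X^2)                    # diagonal of Q, where Q_ij = y_i y_j x_i' x_j
  for (e in seq_len(epochs)) {
    for (i in sample.int(n)) {           # sweep the coordinates in random order
      G <- y[i] * sum(w * X[i, ]) - 1    # partial derivative of the dual objective w.r.t. alpha_i
      a_new <- min(max(alpha[i] - G / Qii[i], 0), C)   # closed-form update, clipped to [0, C]
      w <- w + (a_new - alpha[i]) * y[i] * X[i, ]      # keep w consistent with the updated alpha_i
      alpha[i] <- a_new
    }
  }
  list(w = w, alpha = alpha)
}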
3,120 | Why bother with the dual problem when fitting SVM? | In my opinion, the prediction time argument (that predictions from the dual solution are faster than from the primal solution) is nonsense.
A comparison only makes sense if you use a linear kernel in the first place, because otherwise you cannot make predictions with the primal (or at least it is not clear to me how that would work).
But if you have a linear kernel, then computing the kernel is the dot product, and this is strictly slower for the dual than for the primal, because you need to compute it for at least two (and typically many more) points. It would then be much faster to rely on the simple dot product $\mathbf w^Tx$.
In fact, if you have a linear kernel, you would just compute $\mathbf w$ explicitly and use it for predictions, just as you would do with the primal solution.
The two true arguments are:
The kernel trick can be applied (already mentioned by others)
The optimization process is much more straightforward for the dual problem and can be easily done with gradient descent
I will shed some light on the second point, since little has been said about that.
The main problem with the primal problem is that it cannot be easily optimized using standard gradient descent in spite of its convexity.
You can easily detect this if you try to implement a solution to both problems (primal and dual) with a gradient-descent approach.
By the way, the dual you formulated also has an issue, but this can be resolved as described below.
There are in fact three annoying things about optimizing the primal, at least when trying to solve it using gradient descent:
there is no trivial valid initial solution. In fact, even the initial solution must already be a perfect separator. You can find one with the perceptron algorithm. In the dual, you can initialize all the $\alpha_i := 0$, which is also common practice. It is noteworthy that this corresponds to setting $w = 0$, which is not a feasible solution to the primal. However, in the dual this is not a problem, because only the optimal solution needs to be feasible in the primal.
the standard gradient update rule $\mathbf w' \leftarrow \mathbf w + \eta \nabla$ does not make a lot of sense and will not help you to converge to the optimum even though the problem is convex.
The issue is that the gradient $\nabla$ is $\mathbf w$ itself, so you would in fact only shorten or stretch $\mathbf w$ but not change its orientation.
But the latter is typically necessary unless your initial solution already had the correct slope.
the threshold $w_0$ does not occur in the objective function and its gradient is 0, but it should be updated in accordance with changes in $\mathbf w$.
Putting all this together, my personal reason not to optimize the primal is that, even though it is convex, there is no straightforward way of doing this with gradient descent.
Note that this issue does not arise in the soft margin classifier, because here you can initialize $\mathbf w = 0$ and also have more sensible gradient steps.
The problem is simply that it is annoying to deal with the linear constraints.
The dual problem as posed by you also is annoying when being solved with GD, because you still have the balance constraint $\sum y_i\alpha_i = 0$. You can get rid of that when adding the $\mathbf 1$ column to the data and treating it as part of $\mathbf w$. If you do that, you do not have the condition on optimality anymore for the Lagrangian, and optimization can be easily and efficiently done with simple batch gradient descent. | Why bother with the dual problem when fitting SVM? | In my opinion, the prediction time argument (that predictions from the dual solution are faster than from the primal solution) is nonsense.
A comparison makes only sense if you use a linear kernel in | Why bother with the dual problem when fitting SVM?
In my opinion, the prediction time argument (that predictions from the dual solution are faster than from the primal solution) is nonsense.
A comparison only makes sense if you use a linear kernel in the first place, because otherwise you cannot make predictions with the primal (or at least it is not clear to me how that would work).
But if you have a linear kernel, then computing the kernel is the dot product, and this is strictly slower for the dual than for the primal, because you need to compute it for at least two (and typically many more) points. It would then be much faster to rely on the simple dot product $\mathbf w^Tx$.
In fact, if you have a linear kernel, you would just compute $\mathbf w$ explicitly and use it for predictions, just as you would do with the primal solution.
The two true arguments are:
The kernel trick can be applied (already mentioned by others)
The optimization process is much more straightforward for the dual problem and can be easily done with gradient descent
I will shed some light on the second point, since little has been said about that.
The main problem with the primal problem is that it cannot be easily optimized using standard gradient descent in spite of its convexity.
You can easily detect this if you try to implement a solution to both problems (primal and dual) with a gradient-descent approach.
By the way, the dual you formulated also has an issue, but this can be resolved as described below.
There are in fact three annoying things about optimizing the primal, at least when trying to solve it using gradient descent:
there is no trivial valid initial solution. In fact, even the initial solution must already be a perfect separator. You can find one with the perceptron algorithm. In the dual, you can initialize all the $\alpha_i := 0$, which is also common practice. It is noteworthy that this corresponds to setting $w = 0$, which is not a feasible solution to the primal. However, in the dual this is not a problem, because only the optimal solution needs to be feasible in the primal.
the standard gradient update rule $\mathbf w' \leftarrow \mathbf w + \eta \nabla$ does not make a lot of sense and will not help you to converge to the optimum even though the problem is convex.
The issue is that the gradient $\nabla$ is $\mathbf w$ itself, so you would in fact only shorten or stretch $\mathbf w$ but not change its orientation.
But the latter is typically necessary unless your initial solution already had the correct slope.
the threshold $w_0$ does not occur in the objective function and its gradient is 0, but it should be updated in accordance with changes in $\mathbf w$.
Putting all this together, my personal reason not to optimize the primal is that, even though it is convex, there is no straightforward way of doing this with gradient descent.
Note that this issue does not arise in the soft margin classifier, because here you can initialize $\mathbf w = 0$ and also have more sensible gradient steps.
The problem is simply that it is annoying to deal with the linear constraints.
The dual problem as posed by you also is annoying when being solved with GD, because you still have the balance constraint $\sum y_i\alpha_i = 0$. You can get rid of that when adding the $\mathbf 1$ column to the data and treating it as part of $\mathbf w$. If you do that, you do not have the condition on optimality anymore for the Lagrangian, and optimization can be easily and efficiently done with simple batch gradient descent. | Why bother with the dual problem when fitting SVM?
In my opinion, the prediction time argument (that predictions from the dual solution are faster than from the primal solution) is nonsense.
A comparison makes only sense if you use a linear kernel in |
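As a sketch of the last paragraph of the answer above (hard-margin dual, threshold handled by appending a column of ones; the step size, iteration count and the assumption of linearly separable data are mine), batch gradient ascent with a projection onto $\alpha_i \ge 0$ can be written in R as:
svm_dual_gd <- function(X, y, eta = 0.01, iters = 2000) {
  X <- cbind(X, 1)                         # fold the threshold into w: no balance constraint remains
  Q <- (y %o% y) * (X %*% t(X))            # Q_ij = y_i y_j <x_i, x_j>
  alpha <- numeric(nrow(X))                # alpha = 0 is a valid starting point for the dual
  for (t in seq_len(iters)) {
    grad  <- 1 - as.numeric(Q %*% alpha)   # gradient of sum(alpha) - 0.5 * alpha' Q alpha
    alpha <- pmax(alpha + eta * grad, 0)   # ascent step followed by projection onto alpha >= 0
  }
  w <- colSums(alpha * y * X)              # recover w; its last entry plays the role of w_0
  list(alpha = alpha, w = w)
}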
3,121 | Intuitive explanation of Fisher Information and Cramer-Rao bound | Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information.
Statistical inference proceeds with the use of a likelihood function $\mathcal{L}(\theta)$ which you construct from the data. The point estimate $\hat{\theta}$ is the value which maximizes $\mathcal{L}(\theta)$. The estimator $\hat{\theta}$ is a random variable, but it helps to realize that the likelihood function $\mathcal{L}(\theta)$ is a "random curve".
Here we assume iid data drawn from a distribution $f(x|\theta)$, and we define the likelihood
$$
\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \log f(x_i|\theta)
$$
The parameter $\theta$ has the property that it maximizes the value of the "true" likelihood, $\mathbb{E}\mathcal{L}(\theta)$. However, the "observed" likelihood function $\mathcal{L}(\theta)$ which is constructed from the data is slightly "off" from the true likelihood. Yet as you can imagine, as the sample size increases, the "observed" likelihood converges to the shape of the true likelihood curve. The same applies to the derivative of the likelihood with respect to the parameter, the score function $\partial \mathcal{L}/\partial \theta$. (Long story short, the Fisher information determines how quickly the observed score function converges to the shape of the true score function.)
At a large sample size, we assume that our maximum likelihood estimate $\hat{\theta}$ is very close to $\theta$. We zoom into a small neighborhood around $\theta$ and $\hat{\theta}$ so that the likelihood function is "locally quadratic".
There, $\hat{\theta}$ is the point at which the score function $\partial \mathcal{L}/\partial \theta$ intersects the origin. In this small region, we treat the score function as a line, one with slope $a$ and random intercept $b$ at $\theta$. We know from the equation for a line that
$$a(\hat{\theta} - \theta) + b = 0$$
or
$$
\hat{\theta} = \theta - b/a .
$$
From the consistency of the MLE estimator, we know that
$$
\mathbb{E}(\hat{\theta}) = \theta
$$
in the limit.
Therefore, asymptotically
$$
nVar(\hat{\theta}) = nVar(b/a)
$$
It turns out that the slope varies much less than the intercept, and asymptotically, we can treat the score function as having a constant slope in a small neighborhood around $\theta$. Thus we can write
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b)
$$
So, what are the values of $a$ and $nVar(b)$? It turns out that due to a marvelous mathematical coincidence, they are the very same quantity (modulo a minus sign), the Fisher information.
$$-a = \mathbb{E}\left[-\frac{\partial^2 \mathcal{L}}{\partial \theta^2}\right] = I(\theta)$$
$$nVar(b) = nVar\left[\frac{\partial \mathcal{L}}{\partial \theta}\right] = I(\theta)$$
Thus,
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b) = (1/I(\theta)^2)I(\theta) = 1/I(\theta)
$$
asymptotically: the Cramer-Rao lower bound. (Showing that $1/I(\theta)$ is a lower bound on the variance of an unbiased estimator is another matter.) | Intuitive explanation of Fisher Information and Cramer-Rao bound | Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information.
S | Intuitive explanation of Fisher Information and Cramer-Rao bound
Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information.
Statistical inference proceeds with the use of a likelihood function $\mathcal{L}(\theta)$ which you construct from the data. The point estimate $\hat{\theta}$ is the value which maximizes $\mathcal{L}(\theta)$. The estimator $\hat{\theta}$ is a random variable, but it helps to realize that the likelihood function $\mathcal{L}(\theta)$ is a "random curve".
Here we assume iid data drawn from a distribution $f(x|\theta)$, and we define the likelihood
$$
\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \log f(x_i|\theta)
$$
The parameter $\theta$ has the property that it maximizes the value of the "true" likelihood, $\mathbb{E}\mathcal{L}(\theta)$. However, the "observed" likelihood function $\mathcal{L}(\theta)$ which is constructed from the data is slightly "off" from the true likelihood. Yet as you can imagine, as the sample size increases, the "observed" likelihood converges to the shape of the true likelihood curve. The same applies to the derivative of the likelihood with respect to the parameter, the score function $\partial \mathcal{L}/\partial \theta$. (Long story short, the Fisher information determines how quickly the observed score function converges to the shape of the true score function.)
At a large sample size, we assume that our maximum likelihood estimate $\hat{\theta}$ is very close to $\theta$. We zoom into a small neighborhood around $\theta$ and $\hat{\theta}$ so that the likelihood function is "locally quadratic".
There, $\hat{\theta}$ is the point at which the score function $\partial \mathcal{L}/\partial \theta$ intersects the origin. In this small region, we treat the score function as a line, one with slope $a$ and random intercept $b$ at $\theta$. We know from the equation for a line that
$$a(\hat{\theta} - \theta) + b = 0$$
or
$$
\hat{\theta} = \theta - b/a .
$$
From the consistency of the MLE estimator, we know that
$$
\mathbb{E}(\hat{\theta}) = \theta
$$
in the limit.
Therefore, asymptotically
$$
nVar(\hat{\theta}) = nVar(b/a)
$$
It turns out that the slope varies much less than the intercept, and asymptotically, we can treat the score function as having a constant slope in a small neighborhood around $\theta$. Thus we can write
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b)
$$
So, what are the values of $a$ and $nVar(b)$? It turns out that due to a marvelous mathematical coincidence, they are the very same quantity (modulo a minus sign), the Fisher information.
$$-a = \mathbb{E}\left[-\frac{\partial^2 \mathcal{L}}{\partial \theta^2}\right] = I(\theta)$$
$$nVar(b) = nVar\left[\frac{\partial \mathcal{L}}{\partial \theta}\right] = I(\theta)$$
Thus,
$$
nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b) = (1/I(\theta)^2)I(\theta) = 1/I(\theta)
$$
asymptotically: the Cramer-Rao lower bound. (Showing that $1/I(\theta)$ is a lower bound on the variance of an unbiased estimator is another matter.) | Intuitive explanation of Fisher Information and Cramer-Rao bound
Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information.
S |
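A quick numerical check of this conclusion (a sketch; the distribution, $\theta$, $n$ and the number of replications are arbitrary choices): for an exponential distribution with rate $\theta$, $I(\theta) = 1/\theta^{2}$, so $n\,Var(\hat{\theta})$ should be close to $\theta^{2}$ for large $n$.
set.seed(42)
theta <- 2; n <- 500; reps <- 20000
theta_hat <- replicate(reps, 1 / mean(rexp(n, rate = theta)))  # MLE of the rate in each simulated sample
n * var(theta_hat)   # simulated n * Var(theta_hat)
theta^2              # 1 / I(theta), the Cramer-Rao limit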
3,122 | Intuitive explanation of Fisher Information and Cramer-Rao bound | One way that I understand the fisher information is by the following definition:
$$I(\theta)=\int_{\cal{X}} \frac{\partial^{2}f(x|\theta)}{\partial \theta^{2}}dx-\int_{\cal{X}} f(x|\theta)\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x|\theta)]dx$$
The Fisher Information can be written this way whenever the density $f(x|\theta)$ is twice differentiable. If the sample space $\cal{X}$ does not depend on the parameter $\theta$, then we can use the Leibniz integral formula to show that the first term is zero (differentiate both sides of $\int_{\cal{X}} f(x|\theta)dx=1$ twice and you get zero), and the second term is the "standard" definition. I will take the case when the first term is zero. The cases when it isn't zero aren't much use for understanding Fisher Information.
Now when you do maximum likelihood estimation (insert "regularity conditions" here) you set
$$\frac{\partial}{\partial \theta}\log[f(x|\theta)]=0$$
And solve for $\theta$. So the second derivative says how quickly the gradient is changing, and in a sense "how far" $\theta$ can depart from the MLE without making an appreciable change in the right hand side of the above equation. Another way you can think of it is to imagine a "mountain" drawn on the paper - this is the log-likelihood function. Solving the MLE equation above tells you where the peak of this mountain is located as a function of the random variable $x$. The second derivative tells you how steep the mountain is - which in a sense tells you how easy it is to find the peak of the mountain. Fisher information comes from taking the expected steepness of the peak, and so it has a bit of a "pre-data" interpretation.
One thing that I still find curious is that it's how steep the log-likelihood is and not how steep some other monotonic function of the likelihood is (perhaps related to "proper" scoring functions in decision theory? or maybe to the consistency axioms of entropy?).
The Fisher information also "shows up" in many asymptotic analyses due to what is known as the Laplace approximation. This is basically due to the fact that any function with a "well-rounded" single maximum, raised to a higher and higher power, goes into a Gaussian function $\exp(-ax^{2})$ (similar to the Central Limit Theorem, but slightly more general). So when you have a large sample you are effectively in this position and you can write:
$$f(data|\theta)=\exp(\log[f(data|\theta)])$$
And when you taylor expand the log-likelihood about the MLE:
$$f(data|\theta)\approx [f(data|\theta)]_{\theta=\theta_{MLE}}\exp\left(-\frac{1}{2}\left[-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]\right]_{\theta=\theta_{MLE}}(\theta-\theta_{MLE})^{2}\right)$$
and that second derivative of the log-likelihood shows up (but in "observed" instead of "expected" form). What is usually done here is to make a further approximation:
$$-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]=n\left(-\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x_{i}|\theta)]\right)\approx nI(\theta)$$
Which amounts to the usually good approximation of replacing a sum by an integral, but this requires that the data be independent. So for large independent samples (given $\theta$) you can see that the Fisher information is how variable the MLE is, for various values of the MLE. | Intuitive explanation of Fisher Information and Cramer-Rao bound | One way that I understand the fisher information is by the following definition:
$$I(\theta)=\int_{\cal{X}} \frac{\partial^{2}f(x|\theta)}{\partial \theta^{2}}dx-\int_{\cal{X}} f(x|\theta)\frac{\parti | Intuitive explanation of Fisher Information and Cramer-Rao bound
One way that I understand the fisher information is by the following definition:
$$I(\theta)=\int_{\cal{X}} \frac{\partial^{2}f(x|\theta)}{\partial \theta^{2}}dx-\int_{\cal{X}} f(x|\theta)\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x|\theta)]dx$$
The Fisher Information can be written this way whenever the density $f(x|\theta)$ is twice differentiable. If the sample space $\cal{X}$ does not depend on the parameter $\theta$, then we can use the Leibniz integral formula to show that the first term is zero (differentiate both sides of $\int_{\cal{X}} f(x|\theta)dx=1$ twice and you get zero), and the second term is the "standard" definition. I will take the case when the first term is zero. The cases when it isn't zero aren't much use for understanding Fisher Information.
Now when you do maximum likelihood estimation (insert "regularity conditions" here) you set
$$\frac{\partial}{\partial \theta}\log[f(x|\theta)]=0$$
And solve for $\theta$. So the second derivative says how quickly the gradient is changing, and in a sense "how far" $\theta$ can depart from the MLE without making an appreciable change in the right hand side of the above equation. Another way you can think of it is to imagine a "mountain" drawn on the paper - this is the log-likelihood function. Solving the MLE equation above tells you where the peak of this mountain is located as a function of the random variable $x$. The second derivative tells you how steep the mountain is - which in a sense tells you how easy it is to find the peak of the mountain. Fisher information comes from taking the expected steepness of the peak, and so it has a bit of a "pre-data" interpretation.
One thing that I still find curious is that it's how steep the log-likelihood is and not how steep some other monotonic function of the likelihood is (perhaps related to "proper" scoring functions in decision theory? or maybe to the consistency axioms of entropy?).
The Fisher information also "shows up" in many asymptotic analyses due to what is known as the Laplace approximation. This is basically due to the fact that any function with a "well-rounded" single maximum, raised to a higher and higher power, goes into a Gaussian function $\exp(-ax^{2})$ (similar to the Central Limit Theorem, but slightly more general). So when you have a large sample you are effectively in this position and you can write:
$$f(data|\theta)=\exp(\log[f(data|\theta)])$$
And when you taylor expand the log-likelihood about the MLE:
$$f(data|\theta)\approx [f(data|\theta)]_{\theta=\theta_{MLE}}\exp\left(-\frac{1}{2}\left[-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]\right]_{\theta=\theta_{MLE}}(\theta-\theta_{MLE})^{2}\right)$$
and that second derivative of the log-likelihood shows up (but in "observed" instead of "expected" form). What is usually done here is to make a further approximation:
$$-\frac{\partial^{2}}{\partial \theta^{2}}\log[f(data|\theta)]=n\left(-\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x_{i}|\theta)]\right)\approx nI(\theta)$$
Which amounts to the usually good approximation of replacing a sum by an integral, but this requires that the data be independent. So for large independent samples (given $\theta$) you can see that the Fisher information is how variable the MLE is, for various values of the MLE. | Intuitive explanation of Fisher Information and Cramer-Rao bound
One way that I understand the fisher information is by the following definition:
$$I(\theta)=\int_{\cal{X}} \frac{\partial^{2}f(x|\theta)}{\partial \theta^{2}}dx-\int_{\cal{X}} f(x|\theta)\frac{\parti |
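As a concrete instance of the "steepness" reading of this definition (not part of the original answer), take a single Bernoulli($\theta$) observation:
$$\log f(x|\theta)=x\log\theta+(1-x)\log(1-\theta),\qquad -\frac{\partial^{2}}{\partial \theta^{2}}\log[f(x|\theta)]=\frac{x}{\theta^{2}}+\frac{1-x}{(1-\theta)^{2}}$$
$$I(\theta)=E\left[\frac{X}{\theta^{2}}+\frac{1-X}{(1-\theta)^{2}}\right]=\frac{1}{\theta}+\frac{1}{1-\theta}=\frac{1}{\theta(1-\theta)}$$
so the expected peak of the log-likelihood is steepest (each observation is most informative) when $\theta$ is near 0 or 1, and flattest at $\theta=1/2$.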
3,123 | Intuitive explanation of Fisher Information and Cramer-Rao bound | Although the explanations provided above are very interesting and I've enjoyed going through them, I feel that the nature of the Cramer-Rao Lower Bound was best explained to me from a geometric perspective. This intuition is a summary of the concept of concentration ellipses from Chapter 6 of Scharf's book on Statistical Signal Processing.
Consider any unbiased estimator of ${\boldsymbol\theta}$. Additionally, assume that the estimator $\hat{\boldsymbol\theta}$ has a Gaussian distribution with covariance ${\boldsymbol\Sigma}$. Under these conditions, the distribution of $\hat{\boldsymbol\theta}$ is proportional to:
$f(\hat{\boldsymbol\theta})\propto \exp(-\frac{1}{2}(\hat{\boldsymbol\theta}-{\boldsymbol\theta})^T{\boldsymbol\Sigma}^{-1}(\hat{\boldsymbol\theta}-{\boldsymbol\theta}))$.
Now think of the contour plots of this distribution for ${\boldsymbol\theta}\in R^2$. Any upper bound constraint on the probability of $\hat{\boldsymbol\theta}$ (i.e., $\int f(\hat{\boldsymbol\theta})d{\boldsymbol\theta} \le P_r$) will result in an ellipsoid centered at ${\boldsymbol\theta}$ with fixed radius $r$. It's easy to show that there is a one-to-one relationship between the radius $r$ of the ellipsoid and the desired probability $P_r$. In other words, $\hat{\boldsymbol\theta}$ is close to ${\boldsymbol\theta}$ within an ellipsoid determined by radius $r$ with probability $P_r$. This ellipsoid is called a concentration ellipsoid.
Considering the description above, we can say the following about the CRLB. Among all unbiased estimators, the CRLB represents an estimator $\hat{\boldsymbol\theta}_{crlb}$ with covariance $\boldsymbol\Sigma_{crlb}$ that, for fixed probability of "closeness" $P_r$ (as defined above), has the smallest concentration ellipsoid. The Figure below provides a 2D illustration (inspired by illustration in Scharf's book). | Intuitive explanation of Fisher Information and Cramer-Rao bound | Although the explanations provided above are very interesting and I've enjoyed going through them, I feel that the nature of the Cramer-Rao Lower Bound was best explained to me from a geometric perspe | Intuitive explanation of Fisher Information and Cramer-Rao bound
Although the explanations provided above are very interesting and I've enjoyed going through them, I feel that the nature of the Cramer-Rao Lower Bound was best explained to me from a geometric perspective. This intuition is a summary of the concept of concentration ellipses from Chapter 6 of Scharf's book on Statistical Signal Processing.
Consider any unbiased estimator of ${\boldsymbol\theta}$. Additionally, assume that the estimator $\hat{\boldsymbol\theta}$ has a Gaussian distribution with covariance ${\boldsymbol\Sigma}$. Under these conditions, the distribution of $\hat{\boldsymbol\theta}$ is proportional to:
$f(\hat{\boldsymbol\theta})\propto \exp(-\frac{1}{2}(\hat{\boldsymbol\theta}-{\boldsymbol\theta})^T{\boldsymbol\Sigma}^{-1}(\hat{\boldsymbol\theta}-{\boldsymbol\theta}))$.
Now think of the contour plots of this distribution for ${\boldsymbol\theta}\in R^2$. Any upper bound constraint on the probability of $\hat{\boldsymbol\theta}$ (i.e., $\int f(\hat{\boldsymbol\theta})d{\boldsymbol\theta} \le P_r$) will result in an ellipsoid centered at ${\boldsymbol\theta}$ with fixed radius $r$. It's easy to show that there is a one-to-one relationship between the radius $r$ of the ellipsoid and the desired probability $P_r$. In other words, $\hat{\boldsymbol\theta}$ is close to ${\boldsymbol\theta}$ within an ellipsoid determined by radius $r$ with probability $P_r$. This ellipsoid is called a concentration ellipsoid.
Considering the description above, we can say the following about the CRLB. Among all unbiased estimators, the CRLB represents an estimator $\hat{\boldsymbol\theta}_{crlb}$ with covariance $\boldsymbol\Sigma_{crlb}$ that, for fixed probability of "closeness" $P_r$ (as defined above), has the smallest concentration ellipsoid. The Figure below provides a 2D illustration (inspired by illustration in Scharf's book). | Intuitive explanation of Fisher Information and Cramer-Rao bound
Although the explanations provided above are very interesting and I've enjoyed going through them, I feel that the nature of the Cramer-Rao Lower Bound was best explained to me from a geometric perspe |
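Since the figure referred to in the answer above is not reproduced here, the following small R sketch (with invented covariance matrices) draws two concentration ellipses of equal probability content; the dashed, smaller ellipse plays the role of the CRLB estimator's ellipse.
ellipse_pts <- function(mu, Sigma, r, n = 200) {
  t <- seq(0, 2 * pi, length.out = n)
  u <- rbind(cos(t), sin(t))                   # points on the unit circle
  t(mu + r * t(chol(Sigma)) %*% u)             # mapped so that (x - mu)' Sigma^{-1} (x - mu) = r^2
}
r      <- sqrt(qchisq(0.95, df = 2))           # 95% region for a bivariate Gaussian estimator
S_any  <- matrix(c(2.0, 0.8, 0.8, 1.5), 2)     # covariance of some unbiased estimator (made up)
S_crlb <- matrix(c(0.8, 0.3, 0.3, 0.6), 2)     # a smaller, CRLB-like covariance (made up)
plot(ellipse_pts(c(0, 0), S_any, r), type = "l", asp = 1,
     xlab = expression(theta[1]), ylab = expression(theta[2]))
lines(ellipse_pts(c(0, 0), S_crlb, r), lty = 2)
points(0, 0, pch = 3)                          # the true parameter at the centre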
3,124 | Intuitive explanation of Fisher Information and Cramer-Rao bound | This is the most intuitive article that I have seen so far:
The Cramér-Rao Lower Bound on Variance: Adam and Eve’s “Uncertainty Principle” by
Michael R. Powers, Journal of Risk Finance, Vol. 7, No. 3, 2006
The bound is explained by an analogy of Adam and Eve in the Garden of Eden tossing a coin to see who gets to eat the fruit and they then ask themselves just how big a sample is necessary to achieve a certain level of accuracy in their estimate, and they then discover this bound...
Nice story with a profound message about reality indeed. | Intuitive explanation of Fisher Information and Cramer-Rao bound | This is the most intuitive article that I have seen so far:
The Cramér-Rao Lower Bound on Variance: Adam and Eve’s “Uncertainty Principle” by
Michael R. Powers, Journal of Risk Finance, Vol. 7, No. 3 | Intuitive explanation of Fisher Information and Cramer-Rao bound
This is the most intuitive article that I have seen so far:
The Cramér-Rao Lower Bound on Variance: Adam and Eve’s “Uncertainty Principle” by
Michael R. Powers, Journal of Risk Finance, Vol. 7, No. 3, 2006
The bound is explained by an analogy of Adam and Eve in the Garden of Eden tossing a coin to see who gets to eat the fruit and they then ask themselves just how big a sample is necessary to achieve a certain level of accuracy in their estimate, and they then discover this bound...
Nice story with a profound message about reality indeed. | Intuitive explanation of Fisher Information and Cramer-Rao bound
This is the most intuitive article that I have seen so far:
The Cramér-Rao Lower Bound on Variance: Adam and Eve’s “Uncertainty Principle” by
Michael R. Powers, Journal of Risk Finance, Vol. 7, No. 3 |
3,125 | Is this the solution to the p-value problem? | I've been advocating for my own new approach to statistical decision making called RADD: Roll A Damn Die. It also addresses all the key points.
1) RADD can indicate how compatible the data are with a specified statistical model.
If you roll a higher number, clearly the evidence is more in favor of your model! An extra benefit is that, if we desire even more confidence, we can roll a die with more sides. You can even find 100 sided dice if you search enough!
2) RADD can decide whether a hypothesis is true or not.
You only have to roll a 2 sided die, i.e., flip a coin.
3) RADD can be used to make business or policy decisions
Get a bunch of policy makers in a room, and have them all roll dice! Highest wins!
4) RADD is transparent.
The result can be recorded, and the die itself can be kept for further research*
5) RADD measures the importance of the result.
Obviously, rolling higher signifies a very important event has occurred.
6) RADD provides a good measure of evidence.
Didn't we say higher rolls are better?
So, no, STOP is not the answer. The answer is RADD. | Is this the solution to the p-value problem? | I've been advocating for my own new approach to statistical decision making called RADD: Roll A Damn Die. It also addresses all the key points.
1) RADD can indicate how compatible the data are with a | Is this the solution to the p-value problem?
I've been advocating for my own new approach to statistical decision making called RADD: Roll A Damn Die. It also addresses all the key points.
1) RADD can indicate how compatible the data are with a specified statistical model.
If you roll a higher number, clearly the evidence is more in favor of your model! An extra benefit is that, if we desire even more confidence, we can roll a die with more sides. You can even find 100 sided dice if you search enough!
2) RADD can decide whether a hypothesis is true or not.
You only have to roll a 2 sided die, i.e., flip a coin.
3) RADD can be used to make business or policy decisions
Get a bunch of policy makers in a room, and have them all roll dice! Highest wins!
4) RADD is transparent.
The result can be recorded, and the die itself can be kept for further research*
5) RADD measures the importance of the result.
Obviously, rolling higher signifies a very important event has occurred.
6) RADD provides a good measure of evidence.
Didn't we say higher rolls are better?
So, no, STOP is not the answer. The answer is RADD. | Is this the solution to the p-value problem?
I've been advocating for my own new approach to statistical decision making called RADD: Roll A Damn Die. It also addresses all the key points.
1) RADD can indicate how compatible the data are with a |
3,126 | Is this the solution to the p-value problem? | I must say from my experience that in business reality STOP is the default decision-making criterion, preferred to $p$-values and other frequentist or Bayesian methods. From a business perspective STOP provides simple and definitive answers, which makes it more reliable than uncertain "probabilistic" methods. Moreover, in the vast majority of cases it is simpler to implement and easier to adapt to changing reality than other methods. The Yes/No decisions are more convincing for middle and senior management. The "STOP reports" in most cases are shorter and easier to read than the data-based ones. Moreover, adopting this method enables your employer to cut costs on data scientists and SAS licenses. I would say that the only problem with STOP is that it is harder to make a PowerPoint presentation of STOP results, but this is a dynamically developing field, so in the future better visualization methods may be proposed. | Is this the solution to the p-value problem? | I must say from my experience that in business reality STOP is the default decision making criteria, preferred to $p$-values and other frequentist, or Bayesian methods. From business perspective STOP | Is this the solution to the p-value problem?
I must say from my experience that in business reality STOP is the default decision-making criterion, preferred to $p$-values and other frequentist or Bayesian methods. From a business perspective STOP provides simple and definitive answers, which makes it more reliable than uncertain "probabilistic" methods. Moreover, in the vast majority of cases it is simpler to implement and easier to adapt to changing reality than other methods. The Yes/No decisions are more convincing for middle and senior management. The "STOP reports" in most cases are shorter and easier to read than the data-based ones. Moreover, adopting this method enables your employer to cut costs on data scientists and SAS licenses. I would say that the only problem with STOP is that it is harder to make a PowerPoint presentation of STOP results, but this is a dynamically developing field, so in the future better visualization methods may be proposed. | Is this the solution to the p-value problem?
I must say from my experience that in business reality STOP is the default decision making criteria, preferred to $p$-values and other frequentist, or Bayesian methods. From business perspective STOP |
3,127 | Is this the solution to the p-value problem? | This fine adjunct to the p-value debate, interesting but also somewhat stale in my opinion, reminds me of a unique paper published some years ago in the Christmas issue of the British Medical Journal (BMJ), which every Christmas publishes real yet funny research articles.
In particular, this work by Isaacs and Fitzgerald highlighted seven key alternatives to evidence based medicine (ie the practice of medicine based on actual clinical and statistical evidence):
Eminence based medicine
Vehemence based medicine
Eloquence based medicine
Providence based medicine
Diffidence based medicine
Nervousness based medicine
Confidence based medicine
Most interestingly, you must look at the columns highlighting the measuring devices and units of measurements for the items above (eg audiometer and decibels for vehemence based medicine!). | Is this the solution to the p-value problem? | This fine adjunct to the p-value debate, interesting but also somewhat stale in my opinion, reminds me of a unique paper published some years ago in the Christmas issue of the British Medical Journal | Is this the solution to the p-value problem?
This fine adjunct to the p-value debate, interesting but also somewhat stale in my opinion, reminds me of a unique paper published some years ago in the Christmas issue of the British Medical Journal (BMJ), which every Christmas publishes real yet funny research articles.
In particular, this work by Isaacs and Fitzgerald highlighted seven key alternatives to evidence based medicine (ie the practice of medicine based on actual clinical and statistical evidence):
Eminence based medicine
Vehemence based medicine
Eloquence based medicine
Providence based medicine
Diffidence based medicine
Nervousness based medicine
Confidence based medicine
Most interestingly, you must look at the columns highlighting the measuring devices and units of measurements for the items above (eg audiometer and decibels for vehemence based medicine!). | Is this the solution to the p-value problem?
This fine adjunct to the p-value debate, interesting but also somewhat stale in my opinion, reminds me of a unique paper published some years ago in the Christmas issue of the British Medical Journal |
3,128 | What are some valuable Statistical Analysis open source projects? | The R-project
http://www.r-project.org/
R is valuable and significant because it was the first widely-accepted Open-Source alternative to big-box packages. It's mature, well supported, and a standard within many scientific communities.
Some reasons why it is useful and valuable
There are some nice tutorials here. | What are some valuable Statistical Analysis open source projects? | The R-project
http://www.r-project.org/
R is valuable and significant because it was the first widely-accepted Open-Source alternative to big-box packages. It's mature, well supported, and a standard | What are some valuable Statistical Analysis open source projects?
The R-project
http://www.r-project.org/
R is valuable and significant because it was the first widely-accepted Open-Source alternative to big-box packages. It's mature, well supported, and a standard within many scientific communities.
Some reasons why it is useful and valuable
There are some nice tutorials here. | What are some valuable Statistical Analysis open source projects?
The R-project
http://www.r-project.org/
R is valuable and significant because it was the first widely-accepted Open-Source alternative to big-box packages. It's mature, well supported, and a standard |
3,129 | What are some valuable Statistical Analysis open source projects? | For doing a variety of MCMC tasks in Python, there's PyMC, which I've gotten quite a bit of use out of. I haven't run across anything that I can do in BUGS that I can't do in PyMC, and the way you specify models and bring in data seems to be a lot more intuitive to me. | What are some valuable Statistical Analysis open source projects? | For doing a variety of MCMC tasks in Python, there's PyMC, which I've gotten quite a bit of use out of. I haven't run across anything that I can do in BUGS that I can't do in PyMC, and the way you sp | What are some valuable Statistical Analysis open source projects?
For doing a variety of MCMC tasks in Python, there's PyMC, which I've gotten quite a bit of use out of. I haven't run across anything that I can do in BUGS that I can't do in PyMC, and the way you specify models and bring in data seems to be a lot more intuitive to me. | What are some valuable Statistical Analysis open source projects?
For doing a variety of MCMC tasks in Python, there's PyMC, which I've gotten quite a bit of use out of. I haven't run across anything that I can do in BUGS that I can't do in PyMC, and the way you sp |
3,130 | What are some valuable Statistical Analysis open source projects? | This may get downvoted to oblivion, but I happily used the Matlab clone Octave for many years. There are fairly good libraries in octave forge for generation of random variables from different distributions, statistical tests, etc, though clearly it is dwarfed by R. One possible advantage over R is that Matlab/octave is the lingua franca among numerical analysts, optimization researchers, and some subset of applied mathematicians (at least when I was in school), whereas nobody in my department, to my knowledge, used R. my loss. learn both if possible! | What are some valuable Statistical Analysis open source projects? | This may get downvoted to oblivion, but I happily used the Matlab clone Octave for many years. There are fairly good libraries in octave forge for generation of random variables from different distrib | What are some valuable Statistical Analysis open source projects?
This may get downvoted to oblivion, but I happily used the Matlab clone Octave for many years. There are fairly good libraries in octave forge for generation of random variables from different distributions, statistical tests, etc, though clearly it is dwarfed by R. One possible advantage over R is that Matlab/octave is the lingua franca among numerical analysts, optimization researchers, and some subset of applied mathematicians (at least when I was in school), whereas nobody in my department, to my knowledge, used R. my loss. learn both if possible! | What are some valuable Statistical Analysis open source projects?
This may get downvoted to oblivion, but I happily used the Matlab clone Octave for many years. There are fairly good libraries in octave forge for generation of random variables from different distrib |
3,131 | What are some valuable Statistical Analysis open source projects? | Two projects spring to mind:
Bugs - taking (some of) the pain out of Bayesian statistics. It allows the user to focus more on the model and a bit less on MCMC.
Bioconductor - perhaps the most popular statistical tool in Bioinformatics. I know it's a R repository, but there are a large number of people who want to learn R, just for Bioconductor. The number of packages available for cutting edge analysis, make it second to none. | What are some valuable Statistical Analysis open source projects? | Two projects spring to mind:
Bugs - taking (some of) the pain out of Bayesian statistics. It allows the user to focus more on the model and a bit less on MCMC.
Bioconductor - perhaps the most popular | What are some valuable Statistical Analysis open source projects?
Two projects spring to mind:
Bugs - taking (some of) the pain out of Bayesian statistics. It allows the user to focus more on the model and a bit less on MCMC.
Bioconductor - perhaps the most popular statistical tool in Bioinformatics. I know it's a R repository, but there are a large number of people who want to learn R, just for Bioconductor. The number of packages available for cutting edge analysis, make it second to none. | What are some valuable Statistical Analysis open source projects?
Two projects spring to mind:
Bugs - taking (some of) the pain out of Bayesian statistics. It allows the user to focus more on the model and a bit less on MCMC.
Bioconductor - perhaps the most popular |
3,132 | What are some valuable Statistical Analysis open source projects? | Incanter is a Clojure-based, R-like platform (environment + libraries) for statistical computing and graphics. | What are some valuable Statistical Analysis open source projects? | Incanter is a Clojure-based, R-like platform (environment + libraries) for statistical computing and graphics. | What are some valuable Statistical Analysis open source projects?
Incanter is a Clojure-based, R-like platform (environment + libraries) for statistical computing and graphics. | What are some valuable Statistical Analysis open source projects?
Incanter is a Clojure-based, R-like platform (environment + libraries) for statistical computing and graphics. |
3,133 | What are some valuable Statistical Analysis open source projects? | Weka for data mining - contains many classification and clustering algorithms in Java. | What are some valuable Statistical Analysis open source projects? | Weka for data mining - contains many classification and clustering algorithms in Java. | What are some valuable Statistical Analysis open source projects?
Weka for data mining - contains many classification and clustering algorithms in Java. | What are some valuable Statistical Analysis open source projects?
Weka for data mining - contains many classification and clustering algorithms in Java. |
3,134 | What are some valuable Statistical Analysis open source projects? | ggobi "is an open source visualization program for exploring high-dimensional data."
Mat Kelcey has a good 5 minute intro to ggobi. | What are some valuable Statistical Analysis open source projects? | ggobi "is an open source visualization program for exploring high-dimensional data."
Mat Kelcey has a good 5 minute intro to ggobi. | What are some valuable Statistical Analysis open source projects?
ggobi "is an open source visualization program for exploring high-dimensional data."
Mat Kelcey has a good 5 minute intro to ggobi. | What are some valuable Statistical Analysis open source projects?
ggobi "is an open source visualization program for exploring high-dimensional data."
Mat Kelcey has a good 5 minute intro to ggobi. |
3,135 | What are some valuable Statistical Analysis open source projects? | There are also those projects initiated by the FSF or redistributed under GNU General Public License, like:
PSPP, which aims to be a free alternative to SPSS
GRETL, mostly dedicated to regression and econometrics
There are even applications that were released just as companion software for a textbook, like JMulTi, but are still in use by a few people.
I am still playing with xlispstat, from time to time, although Lisp has been largely superseded by R (see Jan de Leeuw's overview on Lisp vs. R in the Journal of Statistical Software). Interestingly, one of the cofounders of the R language, Ross Ihaka, argued on the contrary that the future of statistical software is... Lisp: Back to the Future: Lisp as a Base for a Statistical Computing System. @Alex already pointed to the Clojure-based statistical environment Incanter, so maybe we will see a revival of Lisp-based software in the near future? :-) | What are some valuable Statistical Analysis open source projects? | There are also those projects initiated by the FSF or redistributed under GNU General Public License, like:
PSPP, which aims to be a free alternative to SPSS
GRETL, mostly dedicated to regression and | What are some valuable Statistical Analysis open source projects?
There are also those projects initiated by the FSF or redistributed under GNU General Public License, like:
PSPP, which aims to be a free alternative to SPSS
GRETL, mostly dedicated to regression and econometrics
There are even applications that were released just as companion software for a textbook, like JMulTi, but are still in use by a few people.
I am still playing with xlispstat, from time to time, although Lisp has been largely superseded by R (see Jan de Leeuw's overview on Lisp vs. R in the Journal of Statistical Software). Interestingly, one of the cofounders of the R language, Ross Ihaka, argued on the contrary that the future of statistical software is... Lisp: Back to the Future: Lisp as a Base for a Statistical Computing System. @Alex already pointed to the Clojure-based statistical environment Incanter, so maybe we will see a revival of Lisp-based software in the near future? :-) | What are some valuable Statistical Analysis open source projects?
There are also those projects initiated by the FSF or redistributed under GNU General Public License, like:
PSPP, which aims to be a free alternative to SPSS
GRETL, mostly dedicated to regression and |
3,136 | What are some valuable Statistical Analysis open source projects? | RapidMiner for data and text mining | What are some valuable Statistical Analysis open source projects? | RapidMiner for data and text mining | What are some valuable Statistical Analysis open source projects?
RapidMiner for data and text mining | What are some valuable Statistical Analysis open source projects?
RapidMiner for data and text mining |
3,137 | What are some valuable Statistical Analysis open source projects? | First of all let me tell you that in my opinion the best tool of all by far is R, which has tons of libraries and utilities I am not going to enumerate here.
Let me expand the discussion about Weka.
There is a library for R called RWeka, which you can easily install in R and use to access many of the functionalities of this great program along with the ones in R. Let me give you a code example for training a simple classifier (IBk, Weka's k-nearest-neighbour learner) on a standard dataset that comes with this package; RWeka also provides decision-tree learners such as J48, and its documentation shows how to draw the resulting tree:
library(RWeka)
iris <- read.arff(system.file("arff", "iris.arff", package = "RWeka"))
classifier <- IBk(class ~., data = iris)
summary(classifier)
There are also several Python libraries for doing this (Python is very easy to learn).
First let me enumerate the packages you can use; I am not going to go into detail about them:
Weka (yes, there is a library for Python), NLTK (the most famous open-source package for text mining, besides data mining), statPy, scikits, and SciPy.
There is also Orange, which is excellent (I will also talk about it later). Here is a code example for building a tree from the data in the table cmpart1 that also performs 10-fold validation; you can also graph the tree:
import orange, orngMySQL, orngTree
# load the data set and keep its domain (attributes + class variable)
data = orange.ExampleTable("c:\\python26\\orange\\cmpart1.tab")
domain = data.domain
# split the data into n buckets for n-fold cross-validation
n = 10
buck = len(data) / n
l2 = []
for i in range(n):
    if i == n - 1:
        tmp = data[buck * i:]                # the last bucket takes whatever is left
    else:
        tmp = data[buck * i:buck * (i + 1)]
    l2.append(tmp)
# counts of actual/predicted class-value pairs (the classes here are 'y' and 'n')
di = {'yy': 0, 'yn': 0, 'ny': 0, 'nn': 0}
for i in range(n):
    train = []
    test = []
    for j in range(n):
        if j == i:
            test = l2[i]                     # hold bucket i out for testing
        else:
            train.extend(l2[j])              # pool the remaining buckets for training
    print "-----"
    tree = orngTree.TreeLearner(train)       # induce a classification tree on the training pool
    for ins in test:
        d1 = ins.getclass()                  # actual class
        d2 = tree(ins)                       # predicted class
        print d1
        print d2
        ind = str(d1) + str(d2)
        di[ind] = di[ind] + 1
print di                                     # actual/predicted counts accumulated over all folds
To end with, here are some other packages I used and found interesting:
Orange: data visualization and analysis for novices and experts. Data mining through visual programming or Python scripting. Components for machine learning. Extensions for bioinformatics and text mining. (I personally recommend this; I used it a lot, integrating it in Python, and it was excellent.) I can send you some Python code if you want me to.
ROSETTA: toolkit for analyzing tabular data within the framework of rough set theory. ROSETTA is designed to support the overall data mining and knowledge discovery process: From initial browsing and preprocessing of the data, via computation of minimal attribute sets and generation of if-then rules or descriptive patterns, to validation and analysis of the induced rules or patterns.(This I also enjoyed using very much)
KEEL: assesses evolutionary algorithms for Data Mining problems including regression, classification, clustering, pattern mining and so on. It allows us to perform a complete analysis of any learning model in comparison to existing ones, including a statistical test module for comparison.
DataPlot: for scientific visualization, statistical analysis, and non-linear modeling. The target Dataplot user is the researcher and analyst engaged in the characterization, modeling, visualization, analysis, monitoring, and optimization of scientific and engineering processes.
Openstats: Includes A Statistics and Measurement Primer, Descriptive Statistics, Simple Comparisons, Analyses of Variance, Correlation, Multiple Regression, Interrupted Time Series, Multivariate Statistics, Non-Parametric Statistics, Measurement, Statistical Process Control, Financial Procedures, Neural Networks,
Simulation. | What are some valuable Statistical Analysis open source projects? | First of all let me tell you that in my opinion the best tool of all by far is R, which has tons of libraries and utilities I am not going to enumerate here.
Let me expand the discussion about weka
Th | What are some valuable Statistical Analysis open source projects?
First of all let me tell you that in my opinion the best tool of all by far is R, which has tons of libraries and utilities I am not going to enumerate here.
Let me expand the discussion about Weka.
There is an R package called RWeka, which you can easily install in R and use to access many of the functionalities of this great program alongside those in R. Let me give you a code example for fitting a simple decision tree on a standard dataset that ships with the package (it is also very easy to draw the resulting tree, but I will let you look up how to do that in the RWeka documentation):
library(RWeka)
# Load one of the example ARFF files shipped with RWeka
iris <- read.arff(system.file("arff", "iris.arff", package = "RWeka"))
# J48 is Weka's C4.5 decision tree learner (the original post used IBk,
# a k-nearest-neighbour classifier, which does not produce a tree)
classifier <- J48(class ~ ., data = iris)
summary(classifier)
There are also several Python libraries for doing this (Python is very easy to learn).
First let me enumerate the packages you can use; I am not going to go into detail about them:
Weka (yes, there is a Python library for it), NLTK (the most famous open-source package for text mining besides data mining), statPy, scikits, and SciPy.
There is also Orange, which is excellent (I will also talk about it later). Here is a code example for building a tree from the data in the table cmpart1 that also performs 10-fold cross-validation; you can also graph the tree:
import orange, orngMySQL, orngTree
# Load the tab-delimited Orange data file
data = orange.ExampleTable("c:\\python26\\orange\\cmpart1.tab")
domain = data.domain
# Split the data into n roughly equal buckets for 10-fold cross-validation
n = 10
buck = len(data) / n
l2 = []
for i in range(n):
    if i == n - 1:
        tmp = data[buck * i:]              # last bucket keeps the remainder
    else:
        tmp = data[buck * i:buck * (i + 1)]
    l2.append(tmp)
# Confusion-matrix counts (assumes the class takes the two values 'y' and 'n')
di = {'yy': 0, 'yn': 0, 'ny': 0, 'nn': 0}
for i in range(n):
    train = []
    test = []
    for j in range(n):
        if j == i:
            test = l2[i]
        else:
            train.extend(l2[j])
    print "-----"
    # Build a training table from the pooled buckets and learn a tree on it
    trai = orange.ExampleTable(domain, train)
    tree = orngTree.TreeLearner(trai)
    for ins in test:
        d1 = ins.getclass()                # true class
        d2 = tree(ins)                     # predicted class
        print d1
        print d2
        ind = str(d1) + str(d2)
        di[ind] = di[ind] + 1
print di
To end with some other packages I used and found interesting
Orange: data visualization and analysis for novices and experts. Data mining through visual programming or Python scripting. Components for machine learning. Extensions for bioinformatics and text mining. (I personally recommend this; I used it a lot, integrating it in Python, and it was excellent.) I can send you some Python code if you want me to.
ROSETTA: toolkit for analyzing tabular data within the framework of rough set theory. ROSETTA is designed to support the overall data mining and knowledge discovery process: From initial browsing and preprocessing of the data, via computation of minimal attribute sets and generation of if-then rules or descriptive patterns, to validation and analysis of the induced rules or patterns.(This I also enjoyed using very much)
KEEL: assesses evolutionary algorithms for Data Mining problems including regression, classification, clustering, pattern mining and so on. It allows us to perform a complete analysis of any learning model in comparison to existing ones, including a statistical test module for comparison.
DataPlot: for scientific visualization, statistical analysis, and non-linear modeling. The target Dataplot user is the researcher and analyst engaged in the characterization, modeling, visualization, analysis, monitoring, and optimization of scientific and engineering processes.
Openstats: Includes A Statistics and Measurement Primer, Descriptive Statistics, Simple Comparisons, Analyses of Variance, Correlation, Multiple Regression, Interrupted Time Series, Multivariate Statistics, Non-Parametric Statistics, Measurement, Statistical Process Control, Financial Procedures, Neural Networks,
Simulation. | What are some valuable Statistical Analysis open source projects?
First of all let me tell you that in my opinion the best tool of all by far is R, which has tons of libraries and utilities I am not going to enumerate here.
Let me expand the discussion about weka
Th |
3,138 | What are some valuable Statistical Analysis open source projects? | Colin Gillespie mentioned BUGS, but a better option for Gibbs Sampling, etc, is JAGS.
If all you want to do is ARIMA, you can't beat X12-ARIMA, which is a gold-standard in the field and open source. It doesn't do real graphs (I use R to do that), but the diagnostics are a lesson on their own.
Venturing a bit farther afield to something I recently discovered and have just begun to learn...
ADMB (AD Model Builder), which does non-linear modeling based on the AUTODIF library, with MCMC and a few other features thrown in. It preprocesses and compiles the model down to a C++ executable and compiles it as a standalone app, which is supposed to be way faster than equivalent models implemented in R, MATLAB, etc. ADMB Project.
It started and is still most popular in the fisheries world, but looks quite interesting for other purposes. It does not have graphing or other features of R, and would most likely be used in conjunction with R.
If you want to work with Bayesian Networks in a GUI: SamIam is a nice tool. R has a couple of packages that also do this, but SamIam is very nice. | What are some valuable Statistical Analysis open source projects? | Colin Gillespie mentioned BUGS, but a better option for Gibbs Sampling, etc, is JAGS.
If all you want to do is ARIMA, you can't beat X12-ARIMA, which is a gold-standard in the field and open source. I | What are some valuable Statistical Analysis open source projects?
Colin Gillespie mentioned BUGS, but a better option for Gibbs Sampling, etc, is JAGS.
If all you want to do is ARIMA, you can't beat X12-ARIMA, which is a gold-standard in the field and open source. It doesn't do real graphs (I use R to do that), but the diagnostics are a lesson on their own.
Venturing a bit farther afield to something I recently discovered and have just begun to learn...
ADMB (AD Model Builder), which does non-linear modeling based on the AUTODIF library, with MCMC and a few other features thrown in. It preprocesses and compiles the model down to a C++ executable and compiles it as a standalone app, which is supposed to be way faster than equivalent models implemented in R, MATLAB, etc. ADMB Project.
It started and is still most popular in the fisheries world, but looks quite interesting for other purposes. It does not have graphing or other features of R, and would most likely be used in conjunction with R.
If you want to work with Bayesian Networks in a GUI: SamIam is a nice tool. R has a couple of packages that also do this, but SamIam is very nice. | What are some valuable Statistical Analysis open source projects?
Colin Gillespie mentioned BUGS, but a better option for Gibbs Sampling, etc, is JAGS.
If all you want to do is ARIMA, you can't beat X12-ARIMA, which is a gold-standard in the field and open source. I |
3,139 | What are some valuable Statistical Analysis open source projects? | I really enjoy working with RooFit for easy proper fitting of signal and background distributions and TMVA for quick principal component analyses and modelling of multivariate problems with some standard tools (like genetic algorithms and neural networks, also does BDTs). They are both part of the ROOT C++ libraries which have a pretty heavy bias towards particle physics problems though. | What are some valuable Statistical Analysis open source projects? | I really enjoy working with RooFit for easy proper fitting of signal and background distributions and TMVA for quick principal component analyses and modelling of multivariate problems with some stand | What are some valuable Statistical Analysis open source projects?
I really enjoy working with RooFit for easy proper fitting of signal and background distributions and TMVA for quick principal component analyses and modelling of multivariate problems with some standard tools (like genetic algorithms and neural networks, also does BDTs). They are both part of the ROOT C++ libraries which have a pretty heavy bias towards particle physics problems though. | What are some valuable Statistical Analysis open source projects?
I really enjoy working with RooFit for easy proper fitting of signal and background distributions and TMVA for quick principal component analyses and modelling of multivariate problems with some stand |
3,140 | What are some valuable Statistical Analysis open source projects? | GSL for those of you who wish to program in C / C++ is a valuable resource as it provides several routines for random generators, linear algebra etc. While GSL is primarily available for Linux there are also ports for Windows (See: this and this). | What are some valuable Statistical Analysis open source projects? | GSL for those of you who wish to program in C / C++ is a valuable resource as it provides several routines for random generators, linear algebra etc. While GSL is primarily available for Linux there a | What are some valuable Statistical Analysis open source projects?
GSL for those of you who wish to program in C / C++ is a valuable resource as it provides several routines for random generators, linear algebra etc. While GSL is primarily available for Linux there are also ports for Windows (See: this and this). | What are some valuable Statistical Analysis open source projects?
GSL for those of you who wish to program in C / C++ is a valuable resource as it provides several routines for random generators, linear algebra etc. While GSL is primarily available for Linux there a |
3,141 | What are some valuable Statistical Analysis open source projects? | Few more on top of already mentioned:
KNIME together with R, Python and Weka integration extensions for data mining
Mondrian for quick EDA
And from spatial perspective:
GeoDa for spatial EDA and clustering of areal data
SaTScan for clustering of point data | What are some valuable Statistical Analysis open source projects? | Few more on top of already mentioned:
KNIME together with R, Python and Weka integration extensions for data mining
Mondrian for quick EDA
And from spatial perspective:
GeoDa for spatial EDA and cl | What are some valuable Statistical Analysis open source projects?
Few more on top of already mentioned:
KNIME together with R, Python and Weka integration extensions for data mining
Mondrian for quick EDA
And from spatial perspective:
GeoDa for spatial EDA and clustering of areal data
SaTScan for clustering of point data | What are some valuable Statistical Analysis open source projects?
Few more on top of already mentioned:
KNIME together with R, Python and Weka integration extensions for data mining
Mondrian for quick EDA
And from spatial perspective:
GeoDa for spatial EDA and cl |
3,142 | What are some valuable Statistical Analysis open source projects? | I second that Jay. Why is R valuable? Here's a short list of reasons. http://www.inside-r.org/why-use-r. Also check out ggplot2 - a very nice graphics package for R. Some nice tutorials here. | What are some valuable Statistical Analysis open source projects? | I second that Jay. Why is R valuable? Here's a short list of reasons. http://www.inside-r.org/why-use-r. Also check out ggplot2 - a very nice graphics package for R. Some nice tutorials here. | What are some valuable Statistical Analysis open source projects?
I second that Jay. Why is R valuable? Here's a short list of reasons. http://www.inside-r.org/why-use-r. Also check out ggplot2 - a very nice graphics package for R. Some nice tutorials here. | What are some valuable Statistical Analysis open source projects?
I second that Jay. Why is R valuable? Here's a short list of reasons. http://www.inside-r.org/why-use-r. Also check out ggplot2 - a very nice graphics package for R. Some nice tutorials here. |
3,143 | What are some valuable Statistical Analysis open source projects? | This falls on the outer limits of 'statistical analysis', but Eureqa is a very user friendly program for data-mining nonlinear relationships in data via genetic programming. Eureqa is not as general purpose, but it does what it does fairly well, and the GUI is quite intuitive. It can also take advantage of the available computing power via the eureqa server. | What are some valuable Statistical Analysis open source projects? | This falls on the outer limits of 'statistical analysis', but Eureqa is a very user friendly program for data-mining nonlinear relationships in data via genetic programming. Eureqa is not as general p | What are some valuable Statistical Analysis open source projects?
This falls on the outer limits of 'statistical analysis', but Eureqa is a very user friendly program for data-mining nonlinear relationships in data via genetic programming. Eureqa is not as general purpose, but it does what it does fairly well, and the GUI is quite intuitive. It can also take advantage of the available computing power via the eureqa server. | What are some valuable Statistical Analysis open source projects?
This falls on the outer limits of 'statistical analysis', but Eureqa is a very user friendly program for data-mining nonlinear relationships in data via genetic programming. Eureqa is not as general p |
3,144 | What are some valuable Statistical Analysis open source projects? | Symbolic mathematics software can be a good support for statistics, too. Here are a few GPL ones I use from time to time:
sympy is python-based and very small, but can still do a lot: derivatives, integrals, symbolic sums, combinatorics, series expansions, tensor manipulations, etc. There is an R package to call it from R.
sage is python-based and HUGE! If sympy can't do what you want, try sage (but there is no native windows version).
maxima is lisp-based and very classical, intermediate in size between (1) and (2).
All three are in active development. | What are some valuable Statistical Analysis open source projects? | Symbolic mathematics software can be a good support for statistics, too. Here are a few GPL ones I use from time to time:
sympy is python-based and very small, but can still do a lot: derivatives, | What are some valuable Statistical Analysis open source projects?
Symbolic mathematics software can be a good support for statistics, too. Here are a few GPL ones I use from time to time:
sympy is python-based and very small, but can still do a lot: derivatives, integrals, symbolic sums, combinatorics, series expansions, tensor manipulations, etc. There is an R package to call it from R.
sage is python-based and HUGE! If sympy can't do what you want, try sage (but there is no native windows version).
maxima is lisp-based and very classical, intermediate in size between (1) and (2).
All three are in active development. | What are some valuable Statistical Analysis open source projects?
Symbolic mathematics software can be a good support for statistics, too. Here are a few GPL ones I use from time to time:
sympy is python-based and very small, but can still do a lot: derivatives, |
3,145 | What are some valuable Statistical Analysis open source projects? | Meta.Numerics is a .NET library with good support for statistical analysis.
Unlike R (an S clone) and Octave (a Matlab clone), it does not have a "front end". It is more like GSL, in that it is a library that you link to when you are writing your own application that needs to do statistical analysis. C# and Visual Basic are more common programming languages than C/C++ for line-of-business apps, and Meta.Numerics has more extensive support for statistical constructs and tests than GSL. | What are some valuable Statistical Analysis open source projects? | Meta.Numerics is a .NET library with good support for statistical analysis.
Unlike R (an S clone) and Octave (a Matlab clone), it does not have a "front end". It is more like GSL, in that it is a libr | What are some valuable Statistical Analysis open source projects?
Meta.Numerics is a .NET library with good support for statistical analysis.
Unlike R (an S clone) and Octave (a Matlab clone), it does not have a "front end". It is more like GSL, in that it is a library that you link to when you are writing your own application that needs to do statistical analysis. C# and Visual Basic are more common programming languages than C/C++ for line-of-business apps, and Meta.Numerics has more extensive support for statistical constructs and tests than GSL. | What are some valuable Statistical Analysis open source projects?
Meta.Numerics is a .NET library with good support for statistical analysis.
Unlike R (an S clone) and Octave (a Matlab clone), it does not have a "front end". It is more like GSL, in that it is a libr |
3,146 | What are some valuable Statistical Analysis open source projects? | clusterPy for analytical
regionalization or geospatial
clustering
PySal for spatial data analysis. | What are some valuable Statistical Analysis open source projects? | clusterPy for analytical
regionalization or geospatial
clustering
PySal for spatial data analysis. | What are some valuable Statistical Analysis open source projects?
clusterPy for analytical
regionalization or geospatial
clustering
PySal for spatial data analysis. | What are some valuable Statistical Analysis open source projects?
clusterPy for analytical
regionalization or geospatial
clustering
PySal for spatial data analysis. |
3,147 | Removing duplicated rows data frame in R [closed] | unique() indeed answers your question, but another related and interesting function to achieve the same end is duplicated().
It gives you the possibility to look up which rows are duplicated.
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
duplicated(df)
[1] FALSE TRUE FALSE FALSE FALSE TRUE FALSE TRUE
> df[duplicated(df), ]
a b
2 A 1
6 B 1
8 C 2
> df[!duplicated(df), ]
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 | Removing duplicated rows data frame in R [closed] | unique() indeed answers your question, but another related and interesting function to achieve the same end is duplicated().
It gives you the possibility to look up which rows are duplicated.
a <- c(r | Removing duplicated rows data frame in R [closed]
unique() indeed answers your question, but another related and interesting function to achieve the same end is duplicated().
It gives you the possibility to look up which rows are duplicated.
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
duplicated(df)
[1] FALSE TRUE FALSE FALSE FALSE TRUE FALSE TRUE
> df[duplicated(df), ]
a b
2 A 1
6 B 1
8 C 2
> df[!duplicated(df), ]
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 | Removing duplicated rows data frame in R [closed]
unique() indeed answers your question, but another related and interesting function to achieve the same end is duplicated().
It gives you the possibility to look up which rows are duplicated.
a <- c(r |
3,148 | Removing duplicated rows data frame in R [closed] | You are looking for unique().
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
unique(df)
> unique(df)
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 | Removing duplicated rows data frame in R [closed] | You are looking for unique().
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
unique(df)
> unique(df)
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 | Removing duplicated rows data frame in R [closed]
You are looking for unique().
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
unique(df)
> unique(df)
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 | Removing duplicated rows data frame in R [closed]
You are looking for unique().
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
unique(df)
> unique(df)
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2 |
3,149 | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | In an unpenalized regression, you can often get a ridge* in parameter space, where many different values along the ridge all do as well or nearly as well on the least squares criterion.
* (at least, it's a ridge in the likelihood function -- they're actually valleys in the RSS criterion, but I'll continue to call it a ridge, as this seems to be conventional -- or even, as Alexis points out in comments, I could call that a thalweg, being the valley's counterpart of a ridge)
In the presence of a ridge in the least squares criterion in parameter space, the penalty you get with ridge regression gets rid of those ridges by pushing the criterion up as the parameters head away from the origin:
[Clearer image]
In the first plot, a large change in parameter values (along the ridge) produces a minuscule change in the RSS criterion. This can cause numerical instability; it's very sensitive to small changes (e.g. a tiny change in a data value, even truncation or rounding error). The parameter estimates are almost perfectly correlated. You may get parameter estimates that are very large in magnitude.
By contrast, by lifting up the thing that ridge regression minimizes (by adding the $L_2$ penalty) when the parameters are far from 0, small changes in conditions (such as a little rounding or truncation error) can't produce gigantic changes in the resulting estimates. The penalty term results in shrinkage toward 0 (resulting in some bias). A small amount of bias can buy a substantial improvement in the variance (by eliminating that ridge).
The uncertainty of the estimates are reduced (the standard errors are inversely related to the second derivative, which is made larger by the penalty).
Correlation in parameter estimates is reduced. You now won't get parameter estimates that are very large in magnitude if the RSS for small parameters would not be much worse. | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | In an unpenalized regression, you can often get a ridge* in parameter space, where many different values along the ridge all do as well or nearly as well on the least squares criterion.
* (at least, | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
In an unpenalized regression, you can often get a ridge* in parameter space, where many different values along the ridge all do as well or nearly as well on the least squares criterion.
* (at least, it's a ridge in the likelihood function -- they're actually valleys in the RSS criterion, but I'll continue to call it a ridge, as this seems to be conventional -- or even, as Alexis points out in comments, I could call that a thalweg, being the valley's counterpart of a ridge)
In the presence of a ridge in the least squares criterion in parameter space, the penalty you get with ridge regression gets rid of those ridges by pushing the criterion up as the parameters head away from the origin:
[Clearer image]
In the first plot, a large change in parameter values (along the ridge) produces a minuscule change in the RSS criterion. This can cause numerical instability; it's very sensitive to small changes (e.g. a tiny change in a data value, even truncation or rounding error). The parameter estimates are almost perfectly correlated. You may get parameter estimates that are very large in magnitude.
By contrast, by lifting up the thing that ridge regression minimizes (by adding the $L_2$ penalty) when the parameters are far from 0, small changes in conditions (such as a little rounding or truncation error) can't produce gigantic changes in the resulting estimates. The penalty term results in shrinkage toward 0 (resulting in some bias). A small amount of bias can buy a substantial improvement in the variance (by eliminating that ridge).
The uncertainty of the estimates are reduced (the standard errors are inversely related to the second derivative, which is made larger by the penalty).
Correlation in parameter estimates is reduced. You now won't get parameter estimates that are very large in magnitude if the RSS for small parameters would not be much worse. | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
In an unpenalized regression, you can often get a ridge* in parameter space, where many different values along the ridge all do as well or nearly as well on the least squares criterion.
* (at least, |
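A minimal R sketch of the instability described in the answer above: two nearly collinear predictors are simulated (the data, the degree of collinearity and the choice lambda = 1 are arbitrary illustrations, not part of the original answer), and the spread of the OLS slopes is compared with the spread of the ridge slopes from MASS::lm.ridge.
library(MASS)
set.seed(1)
sims <- t(replicate(500, {
  x1 <- rnorm(30)
  x2 <- x1 + rnorm(30, sd = 0.01)                 # almost perfectly collinear with x1
  y  <- x1 + x2 + rnorm(30)
  c(ols   = coef(lm(y ~ x1 + x2))[-1],
    ridge = coef(lm.ridge(y ~ x1 + x2, lambda = 1))[-1])
}))
round(apply(sims, 2, sd), 2)   # OLS slopes swing wildly; ridge slopes are far more stable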
3,150 | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | +1 on Glen_b's illustration and the stats comments on the Ridge estimator. I would just like to add a purely mathematical (linear algebra) pov on Ridge regression which answers OPs questions 1) and 2).
First note that $X'X$ is a $p \times p$ symmetric positive semidefinite matrix - $n$ times the sample covariance matrix. Hence it has the eigen-decomposition
$$
X'X = V D V', \quad D = \begin{bmatrix}
d_1 & & \\
& \ddots & \\
& & d_p
\end{bmatrix}, d_i \geq 0
$$
Now since matrix inversion corresponds to inversion of the eigenvalues, the OLS estimator requires $(X'X)^{-1} = V D^{-1} V'$ (note that $V' = V^{-1}$). Obviously this only works if all eigenvalues are strictly greater than zero, $d_i > 0$. For $p \gg n$ this is impossible; for $n \gg p$ it is in general true - this is where we are usually concerned with multicollinearity.
As statisticians we also want to know how small perturbations in the data $X$ change the estimates. It is clear that a small change in any $d_i$ leads to huge variation in $1 / d_i$ if $d_i$ is very small.
So what Ridge regression does is move all eigenvalues further away from zero as
$$
X'X + \lambda I_p = V D V' + \lambda I_p = V D V' + \lambda V V' = V (D + \lambda I_p) V',
$$
which now has eigenvalues $d_i + \lambda \geq \lambda \geq 0$. This is why choosing a positive penalty parameter makes the matrix invertible -- even in the $p \gg n$ case. For Ridge regression a small variation in the data $X$ does not have anymore the extremely unstable effect it has on the matrix inversion.
The numerical stability is related to shrinkage to zero as they both are a consequence of adding a positive constant to the eigenvalues: it makes it more stable because a small perturbation in $X$ does not change the inverse too much; it shrinks it close to $0$ since now the $V^{-1} X'y$ term is multiplied by $1 / (d_i + \lambda)$ which is closer to zero than the OLS solution with inverse eigenvalues $1 / d$. | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | +1 on Glen_b's illustration and the stats comments on the Ridge estimator. I would just like to add a purely mathematical (linear algebra) pov on Ridge regression which answers OPs questions 1) and 2 | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
+1 on Glen_b's illustration and the stats comments on the Ridge estimator. I would just like to add a purely mathematical (linear algebra) pov on Ridge regression which answers OPs questions 1) and 2).
First note that $X'X$ is a $p \times p$ symmetric positive semidefinite matrix - $n$ times the sample covariance matrix. Hence it has the eigen-decomposition
$$
X'X = V D V', \quad D = \begin{bmatrix}
d_1 & & \\
& \ddots & \\
& & d_p
\end{bmatrix}, d_i \geq 0
$$
Now since matrix inversion corresponds to inversion of the eigenvalues, the OLS estimator requires $(X'X)^{-1} = V D^{-1} V'$ (note that $V' = V^{-1}$). Obviously this only works if all eigenvalues are strictly greater than zero, $d_i > 0$. For $p \gg n$ this is impossible; for $n \gg p$ it is in general true - this is where we are usually concerned with multicollinearity.
As statisticians we also want to know how small perturbations in the data $X$ change the estimates. It is clear that a small change in any $d_i$ leads to huge variation in $1 / d_i$ if $d_i$ is very small.
So what Ridge regression does is move all eigenvalues further away from zero as
$$
X'X + \lambda I_p = V D V' + \lambda I_p = V D V' + \lambda V V' = V (D + \lambda I_p) V',
$$
which now has eigenvalues $d_i + \lambda \geq \lambda \geq 0$. This is why choosing a positive penalty parameter makes the matrix invertible -- even in the $p \gg n$ case. For Ridge regression a small variation in the data $X$ does not have anymore the extremely unstable effect it has on the matrix inversion.
The numerical stability is related to shrinkage to zero as they both are a consequence of adding a positive constant to the eigenvalues: it makes it more stable because a small perturbation in $X$ does not change the inverse too much; it shrinks it close to $0$ since now the $V^{-1} X'y$ term is multiplied by $1 / (d_i + \lambda)$ which is closer to zero than the OLS solution with inverse eigenvalues $1 / d$. | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
+1 on Glen_b's illustration and the stats comments on the Ridge estimator. I would just like to add a purely mathematical (linear algebra) pov on Ridge regression which answers OPs questions 1) and 2 |
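A small numeric illustration of the eigenvalue argument above, on simulated near-collinear data (the sample size, the amount of collinearity and lambda = 0.1 are arbitrary choices): the smallest eigenvalue of $X'X$ is essentially zero, its inverse has enormous entries, and adding $\lambda I$ to the diagonal tames it.
set.seed(1)
x1  <- rnorm(50)
x2  <- x1 + rnorm(50, sd = 1e-4)            # nearly a copy of x1
X   <- cbind(x1, x2)
XtX <- crossprod(X)                         # X'X
eigen(XtX)$values                           # one eigenvalue is essentially zero
max(abs(solve(XtX)))                        # entries of (X'X)^{-1} are enormous
lambda <- 0.1
max(abs(solve(XtX + lambda * diag(2))))     # orders of magnitude smaller after ridging the diagonal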
3,151 | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | @Glen_b's demonstration is wonderful. I would just add that aside from the exact cause of the problem and description about how quadratic penalized regression works, there is the bottom line that penalization has the net effect of shrinking the coefficients other than the intercept towards zero. This provides a direct solution to the problem of overfitting that is inherent in most regression analyses when the sample size is not enormous in relation to the number of parameters to be estimated. Almost any penalization towards zero for non-intercepts is going to improve predictive accuracy over an un-penalized model. | Why does ridge estimate become better than OLS by adding a constant to the diagonal? | @Glen_b's demonstration is wonderful. I would just add that aside from the exact cause of the problem and description about how quadratic penalized regression works, there is the bottom line that pen | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
@Glen_b's demonstration is wonderful. I would just add that aside from the exact cause of the problem and description about how quadratic penalized regression works, there is the bottom line that penalization has the net effect of shrinking the coefficients other than the intercept towards zero. This provides a direct solution to the problem of overfitting that is inherent in most regression analyses when the sample size is not enormous in relation to the number of parameters to be estimated. Almost any penalization towards zero for non-intercepts is going to improve predictive accuracy over an un-penalized model. | Why does ridge estimate become better than OLS by adding a constant to the diagonal?
@Glen_b's demonstration is wonderful. I would just add that aside from the exact cause of the problem and description about how quadratic penalized regression works, there is the bottom line that pen |
3,152 | What is a "saturated" model? | A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data left to estimate variance.
For example, if you have 6 data points and fit a 5th-order polynomial to the data, you would have a saturated model (one parameter for each of the 5 powers of your independent variable plus one for the constant term). | What is a "saturated" model? | A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data le | What is a "saturated" model?
A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data left to estimate variance.
For example, if you have 6 data points and fit a 5th-order polynomial to the data, you would have a saturated model (one parameter for each of the 5 powers of your independent variable plus one for the constant term). | What is a "saturated" model?
A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data le |
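A quick R check of the polynomial example above (the responses are simulated only so the snippet runs): with 6 points and a 5th-order polynomial the fit is exact and there are no residual degrees of freedom left for estimating the error variance.
set.seed(1)
x   <- 1:6
y   <- rnorm(6)
fit <- lm(y ~ poly(x, 5))      # 6 parameters for 6 observations
max(abs(residuals(fit)))       # essentially zero: a "perfect" fit
df.residual(fit)               # 0 -- nothing left over to estimate the error variance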
3,153 | What is a "saturated" model? | A saturated model is a model that is overparameterized to the point that it is basically just interpolating the data. In some settings, such as image compression and reconstruction, this isn't necessarily a bad thing, but if you're trying to build a predictive model it's very problematic.
In short, saturated models lead to extremely high-variance predictors that are being pushed around by the noise more than the actual data.
As a thought experiment, imagine you've got a saturated model, and there is noise in the data, then imagine fitting the model a few hundred times, each time with a different realization of the noise, and then predicting a new point. You're likely to get radically different results each time, both for your fit and your prediction (and polynomial models are especially egregious in this regard); in other words the variance of the fit and the predictor are extremely high.
By contrast a model that is not saturated will (if constructed reasonably) give fits that are more consistent with each other even under different noise realization, and the variance of the predictor will also be reduced. | What is a "saturated" model? | A saturated model is a model that is overparameterized to the point that it is basically just interpolating the data. In some settings, such as image compression and reconstruction, this isn't necess | What is a "saturated" model?
A saturated model is a model that is overparameterized to the point that it is basically just interpolating the data. In some settings, such as image compression and reconstruction, this isn't necessarily a bad thing, but if you're trying to build a predictive model it's very problematic.
In short, saturated models lead to extremely high-variance predictors that are being pushed around by the noise more than the actual data.
As a thought experiment, imagine you've got a saturated model, and there is noise in the data, then imagine fitting the model a few hundred times, each time with a different realization of the noise, and then predicting a new point. You're likely to get radically different results each time, both for your fit and your prediction (and polynomial models are especially egregious in this regard); in other words the variance of the fit and the predictor are extremely high.
By contrast a model that is not saturated will (if constructed reasonably) give fits that are more consistent with each other even under different noise realization, and the variance of the predictor will also be reduced. | What is a "saturated" model?
A saturated model is a model that is overparameterized to the point that it is basically just interpolating the data. In some settings, such as image compression and reconstruction, this isn't necess |
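The thought experiment above is easy to run in R (everything here is simulated purely for illustration; the straight-line fit is included only as a non-saturated comparison): refit both models a few hundred times under fresh noise and compare how much their predictions at a new x value move around.
set.seed(1)
x  <- 1:6
x0 <- data.frame(x = 5.5)                        # a new point to predict
pred <- replicate(500, {
  y <- x + rnorm(6)                              # same truth, new noise each time
  c(saturated = predict(lm(y ~ poly(x, 5, raw = TRUE)), x0),
    line      = predict(lm(y ~ x), x0))
})
apply(pred, 1, sd)    # the saturated model's predictions are considerably less stable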
3,154 | What is a "saturated" model? | As everybody else said before, it means that you have as much parameters have you have data points. So, no goodness of fit testing. But this does not mean that "by definition", the model can perfectly fit any data point. I can tell you by personal experience of working with some saturated models that could not predict specific data points. It is quite rare, but possible.
Another important issue is that saturated does not mean useless. For instance, in mathematical models of human cognition, model parameters are associated with specific cognitive processes that have a theoretical background. If a model is saturated, you can test its adequacy by doing focused experiments with manipulations that should affect only specific parameters. If the theoretical predictions match the observed differences (or lack of) in parameter estimates, then one can say that the model is valid.
An example: Imagine for instance a model that has two sets of parameters, one for cognitive processing, and another for motor responses. Imagine now that you have an experiment with two conditions, one in which the participants ability to respond is impaired (they can only use one hand instead of two), and in the other condition there is no impairment. If the model is valid, differences in parameter estimates for both conditions should only occur for the motor response parameters.
Also, be aware that even if one model is non-saturated, it might still be non-identifiable, which means that different combinations of parameter values produce the same result, which compromises any model fit.
If you wanna find more information on these issues in general, you might wanna take look at these papers:
Bamber, D., & van Santen, J. P. H. (1985). How many parameters can a model have and still be testable? Journal of Mathematical Psychology, 29, 443-473.
Bamber, D., & van Santen, J. P. H. (2000). How to Assess a Model's Testability and Identifiability. Journal of Mathematical Psychology, 44, 20-40.
cheers | What is a "saturated" model? | As everybody else said before, it means that you have as much parameters have you have data points. So, no goodness of fit testing. But this does not mean that "by definition", the model can perfectly | What is a "saturated" model?
As everybody else said before, it means that you have as many parameters as you have data points. So, no goodness of fit testing. But this does not mean that "by definition", the model can perfectly fit any data point. I can tell you by personal experience of working with some saturated models that could not predict specific data points. It is quite rare, but possible.
Another important issue is that saturated does not mean useless. For instance, in mathematical models of human cognition, model parameters are associated with specific cognitive processes that have a theoretical background. If a model is saturated, you can test its adequacy by doing focused experiments with manipulations that should affect only specific parameters. If the theoretical predictions match the observed differences (or lack of) in parameter estimates, then one can say that the model is valid.
An example: Imagine for instance a model that has two sets of parameters, one for cognitive processing, and another for motor responses. Imagine now that you have an experiment with two conditions, one in which the participants ability to respond is impaired (they can only use one hand instead of two), and in the other condition there is no impairment. If the model is valid, differences in parameter estimates for both conditions should only occur for the motor response parameters.
Also, be aware that even if one model is non-saturated, it might still be non-identifiable, which means that different combinations of parameter values produce the same result, which compromises any model fit.
If you wanna find more information on these issues in general, you might wanna take look at these papers:
Bamber, D., & van Santen, J. P. H. (1985). How many parameters can a model have and still be testable? Journal of Mathematical Psychology, 29, 443-473.
Bamber, D., & van Santen, J. P. H. (2000). How to Assess a Model's Testability and Identifiability. Journal of Mathematical Psychology, 44, 20-40.
cheers | What is a "saturated" model?
As everybody else said before, it means that you have as much parameters have you have data points. So, no goodness of fit testing. But this does not mean that "by definition", the model can perfectly |
3,155 | What is a "saturated" model? | A model is saturated if and only if it has as many parameters as it has data points (observations). Or put otherwise, in non-saturated models the degrees of freedom are bigger than zero.
This basically means that this model is useless, because it does not describe the data more parsimoniously than the raw data does (and describing data parsimoniously is generally the idea behind using a model). Furthermore, saturated models can (but don't necessarily) provide a (useless) perfect fit because they just interpolate or iterate the data.
Take for example the mean as a model for some data. If you have only one data point (e.g., 5) using the mean (i.e., 5; note that the mean is a saturated model for only one data point) does not help at all. However if you already have two data points (e.g., 5 and 7) using the mean (i.e., 6) as a model provides you with a more parsimonious description than the original data. | What is a "saturated" model? | A model is saturated if and only if it has as many parameters as it has data points (observations). Or put otherwise, in non-saturated models the degrees of freedom are bigger than zero.
This basicall | What is a "saturated" model?
A model is saturated if and only if it has as many parameters as it has data points (observations). Or put otherwise, in non-saturated models the degrees of freedom are bigger than zero.
This basically means that this model is useless, because it does not describe the data more parsimoniously than the raw data does (and describing data parsimoniously is generally the idea behind using a model). Furthermore, saturated models can (but don't necessarily) provide a (useless) perfect fit because they just interpolate or iterate the data.
Take for example the mean as a model for some data. If you have only one data point (e.g., 5) using the mean (i.e., 5; note that the mean is a saturated model for only one data point) does not help at all. However if you already have two data points (e.g., 5 and 7) using the mean (i.e., 6) as a model provides you with a more parsimonious description than the original data. | What is a "saturated" model?
A model is saturated if and only if it has as many parameters as it has data points (observations). Or put otherwise, in non-saturated models the degrees of freedom are bigger than zero.
This basicall |
3,156 | What is a "saturated" model? | In regression, a common use of the term "saturated model" is as follows. A saturated model has as many independent variables as there are unique levels (combinations) of the covariates. Of course this is only possible with categorical covariates. So if you have two dummy variables X1 and X2, a regression is saturated if the independent variables you include are X1, X2, and X1*X2.
This is advantageous because the conditional expectation function of Y given X1 and X2 is necessarily linear in parameters when the model is saturated (it is linear in X1, X2, X1*X2). Importantly, this model does not generally have "as many estimated parameters as data points," nor does it generally have a "perfect fit."
Here is one source for this, there are many others: "When would we expect the CEF to be linear? Two cases. One is if the data (the outcome and covariates) are multivariate Normal. The other is if the linear regression is saturated. A saturated regression model is one in which there is a parameter for each unique combination of the covariates. In this case, the regression fits the CEF perfectly because the CEF is a linear function of the dummy categories." Prof. Blackwell's lecture notes, page 2. | What is a "saturated" model? | In regression, a common use of the term "saturated model" is as follows. A saturated model has as many independent variables as there are unique levels (combinations) of the covariates. Of course this | What is a "saturated" model?
In regression, a common use of the term "saturated model" is as follows. A saturated model has as many independent variables as there are unique levels (combinations) of the covariates. Of course this is only possible with categorical covariates. So if you have two dummy variables X1 and X2, a regression is saturated if the independent variables you include are X1, X2, and X1*X2.
This is advantageous because the conditional expectation function of Y given X1 and X2 is necessarily linear in parameters when the model is saturated (it is linear in X1, X2, X1*X2). Importantly, this model does not generally have "as many estimated parameters as data points," nor does it generally have a "perfect fit."
Here is one source for this, there are many others: "When would we expect the CEF to be linear? Two cases. One is if the data (the outcome and covariates) are multivariate Normal. The other is if the linear regression is saturated. A saturated regression model is one in which there is a parameter for each unique combination of the covariates. In this case, the regression fits the CEF perfectly because the CEF is a linear function of the dummy categories." Prof. Blackwell's lecture notes, page 2. | What is a "saturated" model?
In regression, a common use of the term "saturated model" is as follows. A saturated model has as many independent variables as there are unique levels (combinations) of the covariates. Of course this |
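A small R illustration of the point above (the data and coefficients are made up for the example): with two dummies and their interaction the regression is saturated, and its fitted values reproduce the four cell means of the outcome exactly.
set.seed(1)
x1 <- rbinom(200, 1, 0.5)
x2 <- rbinom(200, 1, 0.5)
y  <- rnorm(200, mean = 1 + 2 * x1 + 3 * x2 + 1.5 * x1 * x2)
fit   <- lm(y ~ x1 * x2)                           # saturated: X1, X2 and X1*X2
cells <- aggregate(y, by = list(x1 = x1, x2 = x2), FUN = mean)   # the CEF, cell by cell
cbind(cells, fitted = predict(fit, newdata = cells[, c("x1", "x2")]))   # fitted = cell means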
3,157 | What is a "saturated" model? | It is also useful if you need to calculate AIC for a quasi-likelihood model. The estimate of dispersion should come from the saturated model. You would divide the LL you are fitting by the estimated dispersion from the saturated model in the AIC calculation. | What is a "saturated" model? | It is also useful if you need to calculate AIC for a quasi-likelihood model. The estimate of dispersion should come from the saturated model. You would divide the LL you are fitting by the estimated | What is a "saturated" model?
It is also useful if you need to calculate AIC for a quasi-likelihood model. The estimate of dispersion should come from the saturated model. You would divide the LL you are fitting by the estimated dispersion from the saturated model in the AIC calculation. | What is a "saturated" model?
It is also useful if you need to calculate AIC for a quasi-likelihood model. The estimate of dispersion should come from the saturated model. You would divide the LL you are fitting by the estimated |
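For reference, the quantity this answer describes is usually written as the quasi-AIC below, where $\hat{c}$ is the estimated dispersion (taken from the saturated model, as the answer suggests), $\mathcal{L}$ the likelihood of the fitted model and $k$ the number of estimated parameters; the formula is supplied here as a standard reference, the answer itself gives only the verbal recipe.
$$ \mathrm{QAIC} = -\frac{2 \log \mathcal{L}}{\hat{c}} + 2k $$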
3,158 | What is a "saturated" model? | In the context of SEM (or path analysis), a saturated model or a just-identified model is a model in which the number of free parameters exactly equals the number of variances and unique covariances. For example the following model is a saturated model because there are 3*4/2 data points (variances and unique covariances) and also 6 free parameters to be estimated: | What is a "saturated" model? | In the context of SEM (or path analysis), a saturated model or a just-identified model is a model in which the number of free parameters exactly equals the number of variances and unique covariances. | What is a "saturated" model?
In the context of SEM (or path analysis), a saturated model or a just-identified model is a model in which the number of free parameters exactly equals the number of variances and unique covariances. For example the following model is a saturated model because there are 3*4/2 data points (variances and unique covariances) and also 6 free parameters to be estimated: | What is a "saturated" model?
In the context of SEM (or path analysis), a saturated model or a just-identified model is a model in which the number of free parameters exactly equals the number of variances and unique covariances. |
3,159 | Look and you shall find (a correlation) | This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing.
There are standard methods to adjust the p-values of multiple correlation coefficients (or, equivalently, to broaden their confidence intervals), such as the Bonferroni and Sidak methods (q.v.). However, these are far too conservative with large correlation matrices due to the inherent mathematical relationships that must hold among correlation coefficients in general. (For some examples of such relationships see the recent question and the ensuing thread.) One of the best approaches for dealing with this situation is to conduct a permutation (or resampling) test. It's easy to do this with correlations: in each iteration of the test, just randomly scramble the order of values of each of the fields (thereby destroying any inherent correlation) and recompute the full correlation matrix. Do this for several thousand iterations (or more), then summarize the distributions of the entries of the correlation matrix by, for instance, giving their 97.5 and 2.5 percentiles: these would serve as mutual symmetric two-sided 95% confidence intervals under the null hypothesis of no correlation. (The first time you do this with a large number of variables you will be astonished at how high some of the correlation coefficients can be even when there is no inherent correlation.)
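A minimal R sketch of that permutation scheme (the matrix X below is pure simulated noise so the snippet is self-contained; with real data you would substitute your own fields; because every entry has the same null distribution under permutation, the off-diagonal values are pooled here before taking percentiles):
set.seed(1)
X <- matrix(rnorm(40 * 50), nrow = 40, ncol = 50)    # 40 rows, 50 fields (placeholder data)
null.r <- replicate(500, {
  Xp <- apply(X, 2, sample)                          # scramble each field independently
  r  <- cor(Xp)
  r[upper.tri(r)]                                    # keep the unique off-diagonal entries
})
quantile(null.r, c(0.025, 0.975))   # mutual 95% null band for the correlation coefficients
max(abs(null.r))                    # note how large "pure noise" correlations can get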
When reporting the results, no matter what computations you do, you should include the following:
The size of the correlation matrix (i.e., how many variables you have looked at).
How you determined the p-values or "significance" of any of the correlation coefficients (e.g., left them as-is, applied a Bonferroni correction, did a permutation test, or whatever).
Whether you looked at alternative measures of correlation, such as Spearman rank correlation. If you did, also indicate why you chose the method you are actually reporting on and using. | Look and you shall find (a correlation) | This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing.
There are standard methods to adjust the p | Look and you shall find (a correlation)
This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing.
There are standard methods to adjust the p-values of multiple correlation coefficients (or, equivalently, to broaden their confidence intervals), such as the Bonferroni and Sidak methods (q.v.). However, these are far too conservative with large correlation matrices due to the inherent mathematical relationships that must hold among correlation coefficients in general. (For some examples of such relationships see the recent question and the ensuing thread.) One of the best approaches for dealing with this situation is to conduct a permutation (or resampling) test. It's easy to do this with correlations: in each iteration of the test, just randomly scramble the order of values of each of the fields (thereby destroying any inherent correlation) and recompute the full correlation matrix. Do this for several thousand iterations (or more), then summarize the distributions of the entries of the correlation matrix by, for instance, giving their 97.5 and 2.5 percentiles: these would serve as mutual symmetric two-sided 95% confidence intervals under the null hypothesis of no correlation. (The first time you do this with a large number of variables you will be astonished at how high some of the correlation coefficients can be even when there is no inherent correlation.)
When reporting the results, no matter what computations you do, you should include the following:
The size of the correlation matrix (i.e., how many variables you have looked at).
How you determined the p-values or "significance" of any of the correlation coefficients (e.g., left them as-is, applied a Bonferroni correction, did a permutation test, or whatever).
Whether you looked at alternative measures of correlation, such as Spearman rank correlation. If you did, also indicate why you chose the method you are actually reporting on and using. | Look and you shall find (a correlation)
This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing.
There are standard methods to adjust the p |
3,160 | Look and you shall find (a correlation) | From your follow up response to Peter Flom's question, it sounds like you might be better served by techniques that look at higher level structure in your correlation matrix.
Techniques like factor analysis, PCA, multidimensional scaling, and cluster analysis of variables can be used to group your variables into sets of relatively more related variables.
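One possible way to do the variable grouping in R (a sketch only; X stands in for your data matrix, one column per variable, and is simulated here just so the code runs):
set.seed(1)
X <- matrix(rnorm(50 * 20), nrow = 50, ncol = 20)               # placeholder for your variables
d <- as.dist(1 - abs(cor(X, use = "pairwise.complete.obs")))    # correlation-based distance
hc <- hclust(d, method = "average")
plot(hc)             # dendrogram: variables that correlate strongly join early
cutree(hc, k = 4)    # e.g. cut into four groups of related variables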
Also, you may want to think theoretically about what kind of structure should be present. When your number of variables is large and the number of observations is small, you are often better relying more on prior expectations. | Look and you shall find (a correlation) | From your follow up response to Peter Flom's question, it sounds like you might be better served by techniques that look at higher level structure in your correlation matrix.
Techniques like factor an | Look and you shall find (a correlation)
From your follow up response to Peter Flom's question, it sounds like you might be better served by techniques that look at higher level structure in your correlation matrix.
Techniques like factor analysis, PCA, multidimensional scaling, and cluster analysis of variables can be used to group your variables into sets of relatively more related variables.
Also, you may want to think theoretically about what kind of structure should be present. When your number of variables is large and the number of observations is small, you are often better relying more on prior expectations. | Look and you shall find (a correlation)
From your follow up response to Peter Flom's question, it sounds like you might be better served by techniques that look at higher level structure in your correlation matrix.
Techniques like factor an |
3,161 | Look and you shall find (a correlation) | Perhaps you could do a preliminary analysis on a random subset of the data to form hypotheses, and then test those few hypotheses of interest using the rest of the data. That way you would not have to correct for nearly as many multiple tests. (I think...)
Of course, if you use such a procedure you will be reducing the size of the dataset used for the final analysis and so reduce your power to find real effects. However, corrections for multiple comparisons reduce power as well and so I'm not sure that you would necessarily lose anything. | Look and you shall find (a correlation) | Perhaps you could do a preliminary analysis on a random subset of the data to form hypotheses, and then test those few hypotheses of interest using the rest of the data. That way you would not have to | Look and you shall find (a correlation)
Perhaps you could do a preliminary analysis on a random subset of the data to form hypotheses, and then test those few hypotheses of interest using the rest of the data. That way you would not have to correct for nearly as many multiple tests. (I think...)
Of course, if you use such a procedure you will be reducing the size of the dataset used for the final analysis and so reduce your power to find real effects. However, corrections for multiple comparisons reduce power as well and so I'm not sure that you would necessarily lose anything. | Look and you shall find (a correlation)
Perhaps you could do a preliminary analysis on a random subset of the data to form hypotheses, and then test those few hypotheses of interest using the rest of the data. That way you would not have to |
3,162 | Look and you shall find (a correlation) | This is an example of multiple comparisons. There's a large literature on this.
If you have, say, 100 variables, then you will have 100*99/2 =4950 correlations.
If the data are just noise, then you would expect 1 in 20 of these to be significant at p = .05. That's 247.5
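That expectation is easy to verify by simulation in R (pure noise; the dimensions are chosen only to match the example above):
set.seed(1)
X <- matrix(rnorm(50 * 100), nrow = 50, ncol = 100)   # 50 observations of 100 noise variables
combs <- combn(100, 2)
p <- apply(combs, 2, function(ij) cor.test(X[, ij[1]], X[, ij[2]])$p.value)
ncol(combs)      # 4950 pairwise correlations
sum(p < 0.05)    # close to the expected 247.5 under pure noise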
Before going farther, though, it would be good if you could say WHY you are doing this. What are these variables, why are you correlating them, what is your substantive idea?
Or, are you just fishing for high correlations?
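As a quick numeric check of that arithmetic (an added sketch, not part of the original answer), one can simulate pure noise and count how many of the 4950 pairwise correlations come out "significant":
set.seed(1)
noise <- matrix(rnorm(50 * 100), nrow = 50, ncol = 100)   # 50 observations, 100 noise variables
pvals <- apply(combn(100, 2), 2, function(ix)
  cor.test(noise[, ix[1]], noise[, ix[2]])$p.value)
length(pvals)       # 4950 pairwise tests
sum(pvals < 0.05)   # typically near 0.05 * 4950 = 247.5, from chance alone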
3,163 | What problem do shrinkage methods solve? | I suspect you want a deeper answer, and I'll have to let someone else provide that, but I can give you some thoughts on ridge regression from a loose, conceptual perspective.
OLS regression yields parameter estimates that are unbiased (i.e., if such samples are gathered and parameters are estimated indefinitely, the sampling distribution of parameter estimates will be centered on the true value). Moreover, the sampling distribution will have the lowest variance of all possible unbiased estimates (this means that, on average, an OLS parameter estimate will be closer to the true value than an estimate from some other unbiased estimation procedure will be). This is old news (and I apologize, I know you know this well); however, the fact that the variance is lower does not mean that it is terribly low. Under some circumstances, the variance of the sampling distribution can be so large as to make the OLS estimator essentially worthless. (One situation where this could occur is when there is a high degree of multicollinearity.)
What is one to do in such a situation? Well, a different estimator could be found that has lower variance (although, obviously, it must be biased, given what was stipulated above). That is, we are trading off unbiasedness for lower variance. For example, we get parameter estimates that are likely to be substantially closer to the true value, albeit probably a little below the true value. Whether this tradeoff is worthwhile is a judgment the analyst must make when confronted with this situation. At any rate, ridge regression is just such a technique. The following (completely fabricated) figure is intended to illustrate these ideas.
This provides a short, simple, conceptual introduction to ridge regression. I know less about lasso and LAR, but I believe the same ideas could be applied. More information about the lasso and least angle regression can be found here; the "simple explanation..." link is especially helpful. This provides much more information about shrinkage methods.
I hope this is of some value.
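A small simulation along these lines (an added sketch, not part of the original answer; the value lambda = 5 is an arbitrary choice): with two nearly collinear predictors, the OLS coefficient estimate varies wildly across samples, while the ridge estimate is biased but far more stable.
library(MASS)   # for lm.ridge
set.seed(1)
one_draw <- function() {
  x1 <- rnorm(30); x2 <- x1 + rnorm(30, sd = 0.05)    # highly collinear predictors
  y  <- x1 + x2 + rnorm(30)                           # both true coefficients equal 1
  c(ols   = unname(coef(lm(y ~ x1 + x2))[2]),         # [2] is the x1 coefficient
    ridge = unname(coef(lm.ridge(y ~ x1 + x2, lambda = 5))[2]))
}
est <- replicate(2000, one_draw())
apply(est, 1, sd)   # sampling SD of the x1 estimate: far larger for OLS than for ridge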
3,164 | What problem do shrinkage methods solve? | The error of an estimator is a combination of (squared) bias and variance components. However, in practice we want to fit a model to a particular finite sample of data, and we want to minimise the total error of the estimator evaluated on the particular sample of data we actually have, rather than zero error on average over some population of samples (that we don't have). Thus we want to reduce both the bias and the variance to minimise the error, which often means sacrificing unbiasedness to make a greater reduction in the variance component. This is especially true when dealing with small datasets, where the variance is likely to be high.
I think the difference in focus depends on whether one is interested in the properties of a procedure, or in getting the best results on a particular sample. Frequentists typically find the former easier to deal with within that framework; Bayesians are often more focussed on the latter.
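A toy illustration of that trade-off (an added sketch, not part of the original answer; the true mean, sample size and shrinkage factor are arbitrary values for which shrinkage happens to win):
set.seed(1)
theta    <- 0.5                                      # true mean
est_unb  <- replicate(1e4, mean(rnorm(5, theta)))    # unbiased estimator from n = 5
est_shrk <- 0.8 * est_unb                            # shrink the estimate towards zero
c(mse_unbiased = mean((est_unb  - theta)^2),         # close to 1/5 = 0.2
  mse_shrunken = mean((est_shrk - theta)^2))         # close to 0.64*0.2 + 0.2^2*0.25 = 0.138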
3,165 | What problem do shrinkage methods solve? | I guess that there are a few answers that may be applicable:
1. Ridge regression can provide identification when the matrix of predictors is not full column rank.
2. Lasso and LAR can be used when the number of predictors is greater than the number of observations (another variant of the non-singular issue).
3. Lasso and LAR are automatic variable selection algorithms.
I'm not sure that the first point regarding ridge regression is really a feature; I think that I'd rather change my model to deal with non-identification. Even without a modeling change, OLS provides unique (and unbiased/consistent) predictions of the outcome in this case.
I could see how the second point could be helpful, but forward selection can also work in the case of the number of parameters exceeding the number of observations while yielding unbiased/consistent estimates.
On the last point, forward/backward selection, as examples, are easily automated.
So I still don't see the real advantages.
3,166 | What problem do shrinkage methods solve? | Here's a basic applied example from Biostatistics
Let's assume that I am studying possible relationships between the presence of ovarian cancer and a set of genes.
My dependent variable is binary (coded as a zero or a one).
My independent variables code data from a proteomic database.
As is common in many genetics studies, my data is much wider than it is tall. I have 216 different observations but 4000 or so possible predictors.
Ordinary linear regression is right out (with far more predictors than observations, the system is hopelessly underdetermined).
Classical feature selection techniques really aren't feasible: with 4,000+ independent variables, all-possible-subsets approaches are completely out of the question, and even sequential feature selection is dubious.
The best option is probably to use logistic regression with an elastic net.
I want to do feature selection (identify which independent variables are important) so ridge regression really isn't appropriate.
It's entirely possible that there are more than 216 independent variables that have significant influence, so I probably shouldn't use a lasso (Lasso can't identify more predictors than you have observations)...
Enter the elastic net...
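A minimal sketch of what that could look like with the glmnet package (an added illustration, not from the original answer; simulated data stands in for the proteomic predictors, and alpha = 0.5 is just one possible mix of the ridge and lasso penalties):
library(glmnet)
set.seed(1)
n <- 216; p <- 4000
x <- matrix(rnorm(n * p), n, p)                            # stand-in for the proteomic data
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))                 # binary outcome driven by 2 predictors here
fit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)   # elastic-net logistic regression
sum(coef(fit, s = "lambda.min")[-1] != 0)                  # how many of the 4000 predictors were kept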
3,167 | What problem do shrinkage methods solve? | Another problem which linear regression shrinkage methods can address is obtaining a low variance (possibly unbiased) estimate of an average treatment effect (ATE) in high-dimensional case-control studies on observational data.
Specifically, in cases where 1) there are a large number of variables (making it difficult to select variables for exact matching), 2) propensity score matching fails to eliminate imbalance in the treatment and control samples, and 3) multicollinearity is present, there are several techniques, such as the adaptive lasso (Zou, 2006), that obtain asymptotically unbiased estimates. There have been several papers that discuss using lasso regression for causal inference and generating confidence intervals on coefficient estimates (see the following post: Inference after using Lasso for variable selection).
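One common way the adaptive lasso is implemented in practice (a hedged sketch, not the exact procedure from Zou 2006) is to weight the lasso penalty by the inverse of initial ridge estimates, using glmnet's penalty.factor argument:
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 50), 200, 50)
y <- x[, 1] - x[, 2] + rnorm(200)                                # two truly relevant predictors
ridge_init <- as.numeric(coef(cv.glmnet(x, y, alpha = 0))[-1])   # initial ridge coefficients, intercept dropped
w   <- 1 / abs(ridge_init)                                       # adaptive penalty weights
fit <- cv.glmnet(x, y, alpha = 1, penalty.factor = w)            # weighted (adaptive) lasso
which(coef(fit, s = "lambda.min")[-1] != 0)                      # indices of the selected variables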
3,168 | Is PCA followed by a rotation (such as varimax) still PCA? | This question is largely about definitions of PCA/FA, so opinions might differ. My opinion is that PCA+varimax should not be called either PCA or FA, but rather explicitly referred to, e.g., as "varimax-rotated PCA".
I should add that this is quite a confusing topic. In this answer I want to explain what a rotation actually is; this will require some mathematics. A casual reader can skip directly to the illustration. Only then we can discuss whether PCA+rotation should or should not be called "PCA".
One reference is Jolliffe's book "Principal Component Analysis", section 11.1 "Rotation of Principal Components", but I find it could be clearer.
Let $\mathbf X$ be an $n \times p$ data matrix which we assume is centered. PCA amounts (see my answer here) to a singular-value decomposition: $\mathbf X=\mathbf{USV}^\top$. There are two equivalent but complementary views on this decomposition: a more PCA-style "projection" view and a more FA-style "latent variables" view.
According to the PCA-style view, we found a bunch of orthogonal directions $\mathbf V$ (these are eigenvectors of the covariance matrix, also called "principal directions" or "axes"), and "principal components" $\mathbf{US}$ (also called principal component "scores") are projections of the data on these directions. Principal components are uncorrelated, the first one has maximally possible variance, etc. We can write: $$\mathbf X = \mathbf{US}\cdot \mathbf V^\top = \text{Scores} \cdot \text{Principal directions}.$$
According to the FA-style view, we found some uncorrelated unit-variance "latent factors" that give rise to the observed variables via "loadings". Indeed, $\widetilde{\mathbf U}=\sqrt{n-1}\mathbf{U}$ are standardized principal components (uncorrelated and with unit variance), and if we define loadings as $\mathbf L = \mathbf{VS}/\sqrt{n-1}$, then $$\mathbf X= \sqrt{n-1}\mathbf{U}\cdot (\mathbf{VS}/\sqrt{n-1})^\top =\widetilde{\mathbf U}\cdot \mathbf L^\top = \text{Standardized scores} \cdot \text{Loadings}.$$ (Note that $\mathbf{S}^\top=\mathbf{S}$.) Both views are equivalent. Note that loadings are eigenvectors scaled by the square roots of the respective eigenvalues (the eigenvalues of the covariance matrix are given by $\mathbf{S}^2/(n-1)$).
(I should add in brackets that PCA$\ne$FA; FA explicitly aims at finding latent factors that are linearly mapped to the observed variables via loadings; it is more flexible than PCA and yields different loadings. That is why I prefer to call the above "FA-style view on PCA" and not FA, even though some people take it to be one of FA methods.)
Now, what does a rotation do? E.g. an orthogonal rotation, such as varimax. First, it considers only $k<p$ components, i.e.: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_k \mathbf L^\top_k.$$ Then it takes a square orthogonal $k \times k$ matrix $\mathbf T$, and plugs $\mathbf T\mathbf T^\top=\mathbf I$ into this decomposition: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \mathbf U_k \mathbf T \mathbf T^\top \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_\mathrm{rot} \mathbf L^\top_\mathrm{rot},$$ where rotated loadings are given by $\mathbf L_\mathrm{rot} = \mathbf L_k \mathbf T$, and rotated standardized scores are given by $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U}_k \mathbf T$. (The purpose of this is to find $\mathbf T$ such that $\mathbf L_\mathrm{rot}$ became as close to being sparse as possible, to facilitate its interpretation.)
Note that what is rotated are: (1) standardized scores, (2) loadings. But not the raw scores and not the principal directions! So the rotation happens in the latent space, not in the original space. This is absolutely crucial.
From the FA-style point of view, nothing much happened. (A) The latent factors are still uncorrelated and standardized. (B) They are still mapped to the observed variables via (rotated) loadings. (C) The amount of variance captured by each component/factor is given by the sum of squared values of the corresponding loadings column in $\mathbf L_\mathrm{rot}$. (D) Geometrically, loadings still span the same $k$-dimensional subspace in $\mathbb R^p$ (the subspace spanned by the first $k$ PCA eigenvectors). (E) The approximation to $\mathbf X$ and the reconstruction error did not change at all. (F) The covariance matrix is still approximated equally well:$$\boldsymbol \Sigma \approx \mathbf L_k\mathbf L_k^\top = \mathbf L_\mathrm{rot}\mathbf L_\mathrm{rot}^\top.$$
But the PCA-style point of view has practically collapsed. Rotated loadings do not correspond to orthogonal directions/axes in $\mathbb R^p$ anymore, i.e. columns of $\mathbf L_\mathrm{rot}$ are not orthogonal! Worse, if you [orthogonally] project the data onto the directions given by the rotated loadings, you will get correlated (!) projections and will not be able to recover the scores. [Instead, to compute the standardized scores after rotation, one needs to multiply the data matrix with the pseudo-inverse of loadings $\widetilde{\mathbf U}_\mathrm{rot} = \mathbf X (\mathbf L_\mathrm{rot}^+)^\top$. Alternatively, one can simply rotate the original standardized scores with the rotation matrix: $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U} \mathbf T$.] Also, the rotated components do not successively capture the maximal amount of variance: the variance gets redistributed among the components (even though all $k$ rotated components capture exactly as much variance as all $k$ original principal components).
Here is an illustration. The data is a 2D ellipse stretched along the main diagonal. First principal direction is the main diagonal, the second one is orthogonal to it. PCA loading vectors (eigenvectors scaled by the eigenvalues) are shown in red -- pointing in both directions and also stretched by a constant factor for visibility. Then I applied an orthogonal rotation by $30^\circ$ to the loadings. Resulting loading vectors are shown in magenta. Note how they are not orthogonal (!).
An FA-style intuition here is as follows: imagine a "latent space" where points fill a small circle (come from a 2D Gaussian with unit variances). This distribution of points is then stretched along the PCA loadings (red) to become the data ellipse that we see on this figure. However, the same distribution of points can be rotated and then stretched along the rotated PCA loadings (magenta) to become the same data ellipse.
[To actually see that an orthogonal rotation of loadings is a rotation, one needs to look at a PCA biplot; there the vectors/rays corresponding to original variables will simply rotate.]
Let us summarize. After an orthogonal rotation (such as varimax), the "rotated-principal" axes are not orthogonal, and orthogonal projections on them do not make sense. So one should rather drop this whole axes/projections point of view. It would be weird to still call it PCA (which is all about projections with maximal variance etc.).
From FA-style point of view, we simply rotated our (standardized and uncorrelated) latent factors, which is a valid operation. There are no "projections" in FA; instead, latent factors generate the observed variables via loadings. This logic is still preserved. However, we started with principal components, which are not actually factors (as PCA is not the same as FA). So it would be weird to call it FA as well.
Instead of debating whether one "should" rather call it PCA or FA, I would suggest to be meticulous in specifying the exact used procedure: "PCA followed by a varimax rotation".
Postscriptum. It is possible to consider an alternative rotation procedure, where $\mathbf{TT}^\top$ is inserted between $\mathbf{US}$ and $\mathbf V^\top$. This would rotate raw scores and eigenvectors (instead of standardized scores and loadings). The biggest problem with this approach is that after such a "rotation", scores will not be uncorrelated anymore, which is pretty fatal for PCA. One can do it, but it is not how rotations are usually being understood and applied.
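A numeric check of several points above (an added sketch, not part of the original answer; the built-in USArrests data is used purely as an example): varimax-rotate the loadings of the first k principal components and verify that the rotated loading columns are no longer orthogonal, while the covariance matrix is approximated exactly as well as before.
X  <- scale(as.matrix(USArrests), center = TRUE, scale = FALSE)
n  <- nrow(X); k <- 2
sv <- svd(X)
L  <- sv$v[, 1:k] %*% diag(sv$d[1:k]) / sqrt(n - 1)    # loadings of the first k components
Lr <- L %*% varimax(L)$rotmat                          # varimax-rotated loadings
crossprod(L)                          # diagonal: original loading columns are orthogonal
crossprod(Lr)                         # off-diagonal entries appear: rotated columns are not
all.equal(L %*% t(L), Lr %*% t(Lr))   # TRUE: the same low-rank approximation of the covariance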
3,169 | Is PCA followed by a rotation (such as varimax) still PCA? | Principal Components Analysis (PCA) and Common Factor Analysis (CFA) are distinct methods. Often, they produce similar results and PCA is used as the default extraction method in the SPSS Factor Analysis routines. This undoubtedly results in a lot of confusion about the distinction between the two.
The bottom line is, these are two different models, conceptually. In PCA, the components are actual orthogonal linear combinations that maximize the total variance. In FA, the factors are linear combinations that maximize the shared portion of the variance--underlying "latent constructs". That's why FA is often called "common factor analysis". FA uses a variety of optimization routines and the result, unlike PCA, depends on the optimization routine used and starting points for those routines. Simply put, there is not a single unique solution.
In R, the factanal() function provides CFA with a maximum likelihood extraction. So, you shouldn't expect it to reproduce an SPSS result which is based on a PCA extraction. It's simply not the same model or logic. I'm not sure if you would get the same result if you used SPSS's Maximum Likelihood extraction either, as they may not use the same algorithm.
For better or for worse in R, you can, however, reproduce the mixed-up "factor analysis" that SPSS provides as its default. Here's the process in R. With this code, I'm able to reproduce the SPSS Principal Component "Factor Analysis" result using this dataset (with the exception of the sign, which is indeterminate). That result could also then be rotated using any of R's available rotation methods.
# Load the base dataset attitude to work with.
data(attitude)
# Compute eigenvalues and eigenvectors of the correlation matrix.
pfa.eigen <- eigen(cor(attitude))
# Print and note that the eigenvalues are those produced by SPSS.
# Also note that SPSS will extract 2 components, since 2 eigenvalues are > 1.
pfa.eigen$values
# Set a value for the number of factors (for clarity).
factors <- 2
# Extract and transform two components (eigenvectors rescaled into loadings).
pfa.eigen$vectors[, 1:factors] %*%
  diag(sqrt(pfa.eigen$values[1:factors]), factors, factors)
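Continuing that sketch (an addition, not in the original answer): store those loadings and varimax-rotate them, which parallels the common SPSS workflow of a principal-component extraction followed by a varimax rotation.
pfa.loadings <- pfa.eigen$vectors[, 1:factors] %*%
  diag(sqrt(pfa.eigen$values[1:factors]), factors, factors)
varimax(pfa.loadings)$loadings   # rotated loadings; stats::varimax applies Kaiser normalization by default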
3,170 | Is PCA followed by a rotation (such as varimax) still PCA? | This answer is to present, in a path chart form, things about which @amoeba reasoned in his deep (but slightly complicated) answer on this thread (I agree with it by about 95%) and how they appear to me.
PCA in its proper, minimal form is the specific orthogonal rotation of correlated data to its uncorrelated form, with the principal components skimming off sequentially less and less of the overall variability. If dimensionality reduction is all we want, we usually don't compute loadings or anything they bring along. We're happy with the (raw) principal component scores $\bf P$. [Please note that the notation on the chart doesn't precisely follow @amoeba's; I stick to what I adopt in some of my other answers.]
On the chart, I take a simple example of two variables, p=2, and use both extracted principal components. Though we usually keep only the first few m<p components, for the theoretical question we're considering ("Is PCA with rotation a PCA or what?") it makes no difference whether we keep m or all p of them, at least in my particular answer.
The trick of loadings is to pull scale (magnitude, variability, inertia $\bf L$) off the components (raw scores) and onto the coefficients $\bf V$ (eigenvectors) leaving the former to be bare "framework" $\bf P_z$ (standardized pr. component scores) and the latter to be fleshy $\bf A$ (loadings). You restore the data equally well with both: $\bf X=PV'=P_zA'$. But loadings open prospects: (i) to interpret the components; (ii) to be rotated; (iii) to restore correlations/covariances of the variables. This is all due to the fact that the variability of the data has been written in loadings, as their load.
And they can return that load back to the data points at any time - now or after rotation. If we conceive of an orthogonal rotation such as varimax, that means we want the components to remain uncorrelated after the rotation is done. Only data with a spherical covariance matrix preserve uncorrelatedness when rotated orthogonally. And voila, the standardized principal components (which in machine learning are often called "PCA-whitened data") $\bf P_z$ are that magic data ($\bf P_z$ are actually proportional to the left, i.e. row, eigenvectors of the data). While we search for the varimax rotation matrix $\bf Q$ that facilitates interpretation of the loadings, the data points passively await in their chaste sphericity & identity (or "whiteness").
After $\bf Q$ is found, rotating $\bf P_z$ by it is equivalent to the usual computation of standardized principal component scores via the generalized inverse of the loading matrix - this time, of the rotated loadings, $\bf A_r$ (see the chart). The resultant varimax-rotated principal components $\bf C_z$ are uncorrelated, as we wanted, and the data are restored by them as nicely as before the rotation: $\bf X=P_zA'=C_zA_r'$. We may then give them back the scale deposited (and accordingly rotated) in $\bf A_r$ - to unstandardize them: $\bf C$.
We should be aware that "varimax-rotated principal components" are not principal components anymore: I used the notation Cz, C instead of Pz, P to stress this. They are just "components". Principal components are unique, but components can be many. Rotations other than varimax will yield other new variables, also called components and also uncorrelated, besides our $\bf C$ ones.
Note also that varimax-rotated (or otherwise orthogonally rotated) principal components (now just "components"), while remaining uncorrelated (orthogonal), do not imply that their loadings are also still orthogonal. The columns of $\bf A$ are mutually orthogonal (as were the eigenvectors $\bf V$), but not the columns of $\bf A_r$ (see also the footnote here).
And finally - rotating the raw principal components $\bf P$ with our $\bf Q$ isn't a useful action. We'll get some correlated variables $\bf "C"$ with problematic meaning. $\bf Q$ appeared so as to optimize (in some specific way) the configuration of loadings, which had absorbed all the scale into them. $\bf Q$ was never trained to rotate data points with all the scale left on them. Rotating $\bf P$ with $\bf Q$ is equivalent to rotating the eigenvectors $\bf V$ with $\bf Q$ (into $\bf V_r$) and then computing the raw component scores as $\bf "C"=XV_r$. These are the "paths" @amoeba noted in the Postscriptum.
These last-outlined actions (pointless for the most part) remind us that eigenvectors, and not only loadings, can in general be rotated. For example, the varimax procedure could be applied to them to simplify their structure. But since eigenvectors are not as helpful in interpreting the meaning of the components as the loadings are, rotation of eigenvectors is rarely done.
So, PCA with subsequent varimax (or other) rotation is
still PCA,
which along the way abandoned principal components for just components,
that are potentially more interpretable (than the PCs) as "latent traits",
but were not modeled statistically as such (PCA is not a proper factor analysis).
I did not refer to factor analysis in this answer. It seems to me that @amoeba's usage of the term "latent space" is a bit risky in the context of the question asked. I will, however, concur that PCA + analytic rotation might be called an "FA-style view on PCA".
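A quick check of the standardized-versus-raw-scores point (an added sketch, not part of the original answer): an orthogonal rotation keeps the standardized PC scores uncorrelated, but applying the same rotation to the raw PC scores produces correlated variables.
X  <- scale(as.matrix(USArrests), center = TRUE, scale = FALSE)
pc <- prcomp(X)
P  <- pc$x[, 1:2]                     # raw principal component scores
Pz <- scale(P)                        # standardized ("whitened") scores
ang <- pi / 6                         # an arbitrary 30-degree rotation
Q   <- matrix(c(cos(ang), -sin(ang), sin(ang), cos(ang)), 2, 2)
round(cor(Pz %*% Q), 3)               # still the identity matrix
round(cor(P %*% Q), 3)                # off-diagonal correlations appear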
3,171 | Is PCA followed by a rotation (such as varimax) still PCA? | In psych::principal() you can do different types of rotations/transformations to your extracted Principal Component(s), or "PCs", using the rotate= argument, like:
"none", "varimax" (the default), "quartimax", "promax", "oblimin", "simplimax", and "cluster". You have to empirically decide which one should make sense in your case, if needed, depending on your own appraisal and knowledge of the subject matter under investigation. A key question which might give you a hint: which one is more interpretable (again, if needed)?
In the help you might find the following also helpful:
It is important to recognize that rotated principal components are not principal components (the axes associated with the eigen value decomposition) but are merely components. To point this out, unrotated principal components are labelled as PCi, while rotated PCs are now labeled as RCi (for rotated components) and obliquely transformed components as TCi (for transformed components). (Thanks to Ulrike Gromping for this suggestion.)
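A minimal usage sketch (an addition, assuming the psych package is installed; the built-in attitude data is used just as an example):
library(psych)
fit <- principal(attitude, nfactors = 2, rotate = "varimax")
fit$loadings   # the rotated components are labelled RC1, RC2, as described above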
"none", "varimax" (Default), "quatim | Is PCA followed by a rotation (such as varimax) still PCA?
In psych::principal() you can do different types of rotations/transformations to your extracted Principal Component(s) or ''PCs'' using the rotate= argument, like:
"none", "varimax" (Default), "quatimax", "promax", "oblimin", "simplimax", and "cluster". You have to empirically decide which one should make sense in your case, if needed, depending on your own appraisal and knowledge of the subject matter under investigation. A key question which might give you a hint: which one is more interpretable (again if needed)?
In the help you might find the following also helpful:
It is important to recognize that rotated principal components are not principal components (the axes associated with the eigen value decomposition) but are merely components. To point this out, unrotated principal components are labelled as PCi, while rotated PCs are now labeled as RCi (for rotated components) and obliquely transformed components as TCi (for transformed components). (Thanks to Ulrike Gromping for this suggestion.) | Is PCA followed by a rotation (such as varimax) still PCA?
In psych::principal() you can do different types of rotations/transformations to your extracted Principal Component(s) or ''PCs'' using the rotate= argument, like:
"none", "varimax" (Default), "quatim |
3,172 | Is PCA followed by a rotation (such as varimax) still PCA? | My understanding is that the distinction between PCA and factor analysis is primarily in whether there is an error term. Thus PCA can, and will, faithfully represent the data, whereas factor analysis is less faithful to the data it is trained on but attempts to represent underlying trends or communality in the data. Under a standard approach PCA is not rotated, but it is mathematically possible to do so, so people do it from time to time. I agree with the commenters that the "meaning" of these methods is somewhat up for grabs and that it probably is wise to be sure the function you are using does what you intend - for example, as you note, R has some functions that perform a different sort of PCA than users of SPSS are familiar with.
3,173 | Is PCA followed by a rotation (such as varimax) still PCA? | Thanks to the chaos in the definitions of both, they are effectively synonyms. Don't trust the words; look deep into the docs to find the equations.
3,174 | Is PCA followed by a rotation (such as varimax) still PCA? | Although this question already has an accepted answer, I'd like to add something to the point of the question.
"PCA", if I recall correctly, means "principal components analysis"; so as long as you're analyzing the principal components, be it without rotation or with rotation, we are still in the analysis of the "principal components" (which were found by the appropriate initial matrix decomposition).
I'd formulate it so: after a "varimax" rotation of the first two principal components, we have the "varimax solution of the first two PCs" (or something else), but we are still in the framework of the analysis of principal components, or, in short, in the framework of "PCA".
To make my point even clearer: I don't feel that the simple question of rotation introduces the problem of distinguishing between EFA and CFA (the latter mentioned/introduced into the problem, for instance, in the answer by Brett).
"PCA" -if I recall correctly - means "principal components analysis"; so as | Is PCA followed by a rotation (such as varimax) still PCA?
Although this question has already an accepted answer I'd like to add something to the point of the question.
"PCA" -if I recall correctly - means "principal components analysis"; so as long as you're analyzing the principal components, may it be without rotation or with rotation, we are still in the analysis of the "principal components" (which were found by the appropriate initial matrix-decomposition).
I'd formulate that after "varimax"-rotation on the first two principal components, that we have the "varimax-solution of the two first pc's" (or something else), but still are in the framework of analysis of principal components, or shorter, are in the framework of "pca".
To make my point even clearer: I don't feel that the simple question of rotation introduces the problem of distinguishing between EFA and CFA (the latter mentioned /introduced into the problem for instance in the answer of Brett) | Is PCA followed by a rotation (such as varimax) still PCA?
Although this question has already an accepted answer I'd like to add something to the point of the question.
"PCA" -if I recall correctly - means "principal components analysis"; so as |
3,175 | Is PCA followed by a rotation (such as varimax) still PCA? | I found this to be the most helpful: Abdi & Williams, 2010, Principal component analysis.
ROTATION
After the number of components has been determined, and in order to facilitate the interpretation, the analysis often involves a rotation of the components that were retained [see, e.g., Ref 40 and 67, for more details]. Two main types of rotation are used: orthogonal when the new axes are also orthogonal to each other, and oblique when the new axes are not required to be orthogonal. Because the rotations are always performed in a subspace, the new axes will always explain less inertia than the original components (which are computed to be optimal). However, the part of the inertia explained by the total subspace after rotation is the same as it was before rotation (only the partition of the inertia has changed). It is also important to note that because rotation always takes place in a subspace (i.e., the space of the retained components), the choice of this subspace strongly influences the result of the rotation. Therefore, it is strongly recommended to try several sizes for the subspace of the retained components in order to assess the robustness of the interpretation of the rotation. When performing a rotation, the term loadings almost always refer to the elements of matrix Q.
(see paper for definition of Q). | Is PCA followed by a rotation (such as varimax) still PCA? | I found this to be the most helpful: Abdi & Williams, 2010, Principal component analysis.
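To make the invariance of the explained inertia concrete, here is a small R sketch (not from the paper; the simulated data, the choice of two retained components, and the use of stats::varimax() are my own illustrative assumptions):
set.seed(1)
X <- scale(matrix(rnorm(100 * 6), 100, 6) %*% matrix(runif(36), 6, 6))  # made-up correlated data
pca <- prcomp(X)
L <- pca$rotation[, 1:2] %*% diag(pca$sdev[1:2])  # loadings of the two retained components
L_rot <- L %*% varimax(L)$rotmat                  # varimax-rotated loadings
sum(L^2)      # variance explained by the retained subspace before rotation
sum(L_rot^2)  # the same total after rotation; only its split between the two axes changes
The per-component sums of squared loadings (colSums(L^2) versus colSums(L_rot^2)) do change, which is the "partition of the inertia" the quoted passage refers to.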
ROTATION
After the number of components has been determined,
and in order to facilitate the interpretation, t | Is PCA followed by a rotation (such as varimax) still PCA?
I found this to be the most helpful: Abdi & Williams, 2010, Principal component analysis.
ROTATION
After the number of components has been determined, and in order to facilitate the interpretation, the analysis often involves a rotation of the components that were retained [see, e.g., Ref 40 and 67, for more details]. Two main types of rotation are used: orthogonal when the new axes are also orthogonal to each other, and oblique when the new axes are not required to be orthogonal. Because the rotations are always performed in a subspace, the new axes will always explain less inertia than the original components (which are computed to be optimal). However, the part of the inertia explained by the total subspace after rotation is the same as it was before rotation (only the partition of the inertia has changed). It is also important to note that because rotation always takes place in a subspace (i.e., the space of the retained components), the choice of this subspace strongly influences the result of the rotation. Therefore, it is strongly recommended to try several sizes for the subspace of the retained components in order to assess the robustness of the interpretation of the rotation. When performing a rotation, the term loadings almost always refer to the elements of matrix Q.
(see paper for definition of Q). | Is PCA followed by a rotation (such as varimax) still PCA?
I found this to be the most helpful: Abdi & Williams, 2010, Principal component analysis.
ROTATION
After the number of components has been determined,
and in order to facilitate the interpretation, t |
3,176 | How to interpret type I, type II, and type III ANOVA and MANOVA? | What you are calling type II SS, I would call type III SS. Lets imagine that there are just two factors A and B (and we'll throw in the A*B interaction later to distinguish type II SS). Further, lets imagine that there are different $n$s in the four cells (e.g., $n_{11}$=11, $n_{12}$=9, $n_{21}$=9, and $n_{22}$=11). Now your two factors are correlated with each other. (Try this yourself, make 2 columns of 1's and 0's and correlate them, $r=.1$; n.b. it doesn't matter if $r$ is 'significant', this is the whole population that you care about). The problem with your factors being correlated is that there are sums of squares that are associated with both A and B. When computing an ANOVA (or any other linear regression), we want to partition the sums of squares. A partition puts all sums of squares into one and only one of several subsets. (For example, we might want to divide the SS up into A, B and error.) However, since your factors (still only A and B here) are not orthogonal there is no unique partition of these SS. In fact, there can be very many partitions, and if you are willing to slice your SS up into fractions (e.g., "I'll put .5 into this bin and .5 into that one"), there are infinite partitions. A way to visualize this is to imagine the MasterCard symbol: The rectangle represents the total SS, and each of the circles represents the SS that are attributable to that factor, but notice the overlap between the circles in the center, those SS could be given to either circle.
The question is: How are we to choose the 'right' partition out of all of these possibilities? Let's bring the interaction back in and discuss some possibilities:
Type I SS:
SS(A)
SS(B|A)
SS(A*B|A,B)
Type II SS:
SS(A|B)
SS(B|A)
SS(A*B|A,B)
Type III SS:
SS(A|B,A*B)
SS(B|A,A*B)
SS(A*B|A,B)
Notice how these different possibilities work. Only type I SS actually uses those SS in the overlapping portion between the circles in the MasterCard symbol. That is, the SS that could be attributed to either A or B, are actually attributed to one of them when you use type I SS (specifically, the one you entered into the model first). In both of the other approaches, the overlapping SS are not used at all. Thus, type I SS gives to A all the SS attributable to A (including those that could also have been attributed elsewhere), then gives to B all of the remaining SS that are attributable to B, then gives to the A*B interaction all of the remaining SS that are attributable to A*B, and leaves the left-overs that couldn't be attributed to anything to the error term.
Type III SS only gives A those SS that are uniquely attributable to A; likewise, it only gives to B and the interaction those SS that are uniquely attributable to them. The error term only gets those SS that couldn't be attributed to any of the factors. Thus, those 'ambiguous' SS that could be attributed to 2 or more possibilities are not used. If you sum the type III SS in an ANOVA table, you will notice that they do not equal the total SS. In other words, this analysis must be wrong, but errs in a kind of epistemically conservative way. Many statisticians find this approach egregious; however, government funding agencies (I believe the FDA) require their use.
The type II approach is intended to capture what might be worthwhile about the idea behind type III, but to mitigate its excesses. Specifically, it only adjusts the SS for A and B for each other, not for the interaction. However, in practice type II SS is essentially never used. You would need to know about all of this and be savvy enough with your software to get these estimates, and the analysts typically think this is bunk.
There are more types of SS (I believe IV and V). They were suggested in the late 60's to deal with certain situations, but it was later shown that they do not do what was thought. Thus, at this point they are just a historical footnote.
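For readers who want to see these three flavours side by side, here is a hedged R sketch (the unbalanced cell counts follow the example above, but the data themselves, and the use of the car package with sum-to-zero contrasts, are my own assumptions rather than part of the original answer):
set.seed(42)
d <- data.frame(A = factor(rep(c(1, 1, 2, 2), c(11, 9, 9, 11))),   # n11=11, n12=9, n21=9, n22=11
                B = factor(rep(c(1, 2, 1, 2), c(11, 9, 9, 11))))
d$y <- rnorm(40) + as.numeric(d$A) + as.numeric(d$B)               # arbitrary response
fit <- lm(y ~ A * B, data = d, contrasts = list(A = contr.sum, B = contr.sum))
anova(fit)               # type I (sequential) SS: SS(A), SS(B|A), SS(A*B|A,B)
library(car)             # assumes the car package is installed
Anova(fit, type = "II")  # type II SS
Anova(fit, type = "III") # type III SS (only meaningful with sum-to-zero contrasts)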
As for what questions these are answering, you basically have that right already in your question:
Estimates using type I SS tell you how much of the variability in Y can be explained by A, how much of the residual variability can be explained by B, how much of the remaining residual variability can be explained by the interaction, and so on, in order.
Estimates based on type III SS tell you how much of the residual variability in Y can be accounted for by A after having accounted for everything else, and how much of the residual variability in Y can be accounted for by B after having accounted for everything else as well, and so on. (Note that both go both first and last simultaneously; if this makes sense to you, and accurately reflects your research question, then use type III SS.) | How to interpret type I, type II, and type III ANOVA and MANOVA? | What you are calling type II SS, I would call type III SS. Lets imagine that there are just two factors A and B (and we'll throw in the A*B interaction later to distinguish type II SS). Further, let | How to interpret type I, type II, and type III ANOVA and MANOVA?
What you are calling type II SS, I would call type III SS. Lets imagine that there are just two factors A and B (and we'll throw in the A*B interaction later to distinguish type II SS). Further, lets imagine that there are different $n$s in the four cells (e.g., $n_{11}$=11, $n_{12}$=9, $n_{21}$=9, and $n_{22}$=11). Now your two factors are correlated with each other. (Try this yourself, make 2 columns of 1's and 0's and correlate them, $r=.1$; n.b. it doesn't matter if $r$ is 'significant', this is the whole population that you care about). The problem with your factors being correlated is that there are sums of squares that are associated with both A and B. When computing an ANOVA (or any other linear regression), we want to partition the sums of squares. A partition puts all sums of squares into one and only one of several subsets. (For example, we might want to divide the SS up into A, B and error.) However, since your factors (still only A and B here) are not orthogonal there is no unique partition of these SS. In fact, there can be very many partitions, and if you are willing to slice your SS up into fractions (e.g., "I'll put .5 into this bin and .5 into that one"), there are infinite partitions. A way to visualize this is to imagine the MasterCard symbol: The rectangle represents the total SS, and each of the circles represents the SS that are attributable to that factor, but notice the overlap between the circles in the center, those SS could be given to either circle.
The question is: How are we to choose the 'right' partition out of all of these possibilities? Let's bring the interaction back in and discuss some possibilities:
Type I SS:
SS(A)
SS(B|A)
SS(A*B|A,B)
Type II SS:
SS(A|B)
SS(B|A)
SS(A*B|A,B)
Type III SS:
SS(A|B,A*B)
SS(B|A,A*B)
SS(A*B|A,B)
Notice how these different possibilities work. Only type I SS actually uses those SS in the overlapping portion between the circles in the MasterCard symbol. That is, the SS that could be attributed to either A or B, are actually attributed to one of them when you use type I SS (specifically, the one you entered into the model first). In both of the other approaches, the overlapping SS are not used at all. Thus, type I SS gives to A all the SS attributable to A (including those that could also have been attributed elsewhere), then gives to B all of the remaining SS that are attributable to B, then gives to the A*B interaction all of the remaining SS that are attributable to A*B, and leaves the left-overs that couldn't be attributed to anything to the error term.
Type III SS only gives A those SS that are uniquely attributable to A; likewise, it only gives to B and the interaction those SS that are uniquely attributable to them. The error term only gets those SS that couldn't be attributed to any of the factors. Thus, those 'ambiguous' SS that could be attributed to 2 or more possibilities are not used. If you sum the type III SS in an ANOVA table, you will notice that they do not equal the total SS. In other words, this analysis must be wrong, but errs in a kind of epistemically conservative way. Many statisticians find this approach egregious; however, government funding agencies (I believe the FDA) require their use.
The type II approach is intended to capture what might be worthwhile about the idea behind type III, but to mitigate its excesses. Specifically, it only adjusts the SS for A and B for each other, not for the interaction. However, in practice type II SS is essentially never used. You would need to know about all of this and be savvy enough with your software to get these estimates, and the analysts typically think this is bunk.
There are more types of SS (I believe IV and V). They were suggested in the late 60's to deal with certain situations, but it was later shown that they do not do what was thought. Thus, at this point they are just a historical footnote.
As for what questions these are answering, you basically have that right already in your question:
Estimates using type I SS tell you how much of the variability in Y can be explained by A, how much of the residual variability can be explained by B, how much of the remaining residual variability can be explained by the interaction, and so on, in order.
Estimates based on type III SS tell you how much of the residual variability in Y can be accounted for by A after having accounted for everything else, and how much of the residual variability in Y can be accounted for by B after having accounted for everything else as well, and so on. (Note that both go both first and last simultaneously; if this makes sense to you, and accurately reflects your research question, then use type III SS.) | How to interpret type I, type II, and type III ANOVA and MANOVA?
What you are calling type II SS, I would call type III SS. Lets imagine that there are just two factors A and B (and we'll throw in the A*B interaction later to distinguish type II SS). Further, let |
3,177 | How to interpret type I, type II, and type III ANOVA and MANOVA? | For illustration I assume a two dimensional ANOVA model specified by y ~ A * B
Type I ANOVA
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A                    y~ 1
  B                          y~ A+B                  y~ A
  A:B                        y~ A*B                  y~ A+B
The from-model of every line is the to-model of the line below. The to-model is the from-model without the line term.
Type II ANOVA
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A+B                  y~ B
  B                          y~ A+B                  y~ A
  A:B                        y~ A*B                  y~ A+B
The from-model is the full model without all interactions involving the line term. The to-model is the from-model without the line term. This means that the from-model in line B is the full model A*B, but without A*B - that is A+B. The to-model is then A+B without B - that is A.
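As a sanity check, here is a short sketch of my own (assuming d is any data frame with factors A and B and a response y, e.g. the unbalanced example further up, and that the car package is available): the B line of the type II table is literally the comparison of its from-model and to-model.
add <- lm(y ~ A + B, data = d)
anova(lm(y ~ A, data = d), add)   # from-model y~ A+B vs to-model y~ A: the type II test for B
car::Anova(add, type = "II")      # its B row should reproduce the same sum of squares and F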
Type III ANOVA
In the Anova III model interactions are parameterized such that they are orthogonal to all lower-level interactions. As a consequence it is meaningful to remove a main term from a model even though an interaction involving that term is still present in the model formula. R doesn't have a good formula notation for this, so I define o(A,B) as the part of the interaction A:B that is orthogonal to both A and B
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A*B                  y~ B + o(A,B)
  B                          y~ A*B                  y~ A + o(A,B)
  A:B                        y~ A*B                  y~ A+B
The from-model is always the full model. The to-model is the from model without the line term (but keeping all higher-order orthogonal components of the interactions). | How to interpret type I, type II, and type III ANOVA and MANOVA? | For illustration I assume a two dimensional ANOVA model specified by y ~ A * B
Type I ANOVA
Line term in ANOVA table
Hypothesis from model
Hypothesis to model
A
y~ A
y~ 1
B
y~ A+B
y~ A
A:B | How to interpret type I, type II, and type III ANOVA and MANOVA?
For illustration I assume a two dimensional ANOVA model specified by y ~ A * B
Type I ANOVA
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A                    y~ 1
  B                          y~ A+B                  y~ A
  A:B                        y~ A*B                  y~ A+B
The from-model of every line is the to-model of the line below. The to-model is the from-model without the line term.
Type II ANOVA
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A+B                  y~ B
  B                          y~ A+B                  y~ A
  A:B                        y~ A*B                  y~ A+B
The from-model is the full model without all interactions involving the line term. The to-model is the from-model without the line term. This means that the from-model in line B is the full model A*B, but without A*B - that is A+B. The to-model is then A+B without B - that is A.
Type III ANOVA
In the Anova III model interactions are parameterized such that they are orthogonal to all lower-level interactions. As a consequence it is meaningful to remove a main term from a model even though an interaction involving that term is still present in the model formula. R doesn't have a good formula notation for this, so I define o(A,B) as the part of the interaction A:B that is orthogonal to both A and B
  Line term in ANOVA table   Hypothesis from model   Hypothesis to model
  A                          y~ A*B                  y~ B + o(A,B)
  B                          y~ A*B                  y~ A + o(A,B)
  A:B                        y~ A*B                  y~ A+B
The from-model is always the full model. The to-model is the from model without the line term (but keeping all higher-order orthogonal components of the interactions). | How to interpret type I, type II, and type III ANOVA and MANOVA?
For illustration I assume a two dimensional ANOVA model specified by y ~ A * B
Type I ANOVA
Line term in ANOVA table
Hypothesis from model
Hypothesis to model
A
y~ A
y~ 1
B
y~ A+B
y~ A
A:B |
3,178 | Linear model with log-transformed response vs. generalized linear model with log link | Although it may appear that the mean of the log-transformed variables is preferable (since this is how log-normal is typically parameterised), from a practical point of view, the log of the mean is typically much more useful.
This is particularly true when your model is not exactly correct, and to quote George Box: "All models are wrong, but some are useful."
Suppose some quantity is log-normally distributed, blood pressure say (I'm not a medic!), and we have two populations, men and women. One might hypothesise that the average blood pressure is higher in women than in men. This exactly corresponds to asking whether the log of the average blood pressure is higher in women than in men. It is not the same as asking whether the average of the log blood pressure is higher in women than in men.
Don't get confused by the text book parameterisation of a distribution - it doesn't have any "real" meaning. The log-normal distribution is parameterised by the mean of the log ($\mu_{\ln}$) because of mathematical convenience, but equally we could choose to parameterise it by its actual mean and variance
$\mu = e^{\mu_{\ln} + \sigma_{\ln}^2/2}$
$\sigma^2 = (e^{\sigma^2_{\ln}} -1)e^{2 \mu_{\ln} + \sigma_{\ln}^2}$
Obviously, doing so makes the algebra horribly complicated, but it still works and means the same thing.
Looking at the above formula, we can see an important difference between transforming the variables and transforming the mean. The log of the mean, $\ln(\mu)$, increases as $\sigma^2_{\ln}$ increases, while the mean of the log, $\mu_{\ln}$ doesn't.
This means that women could, on average, have higher blood pressure than men, even though the mean parameter of the log-normal distribution ($\mu_{\ln}$) is the same, simply because the variance parameter is larger. This fact would get missed by a test that used log(Blood Pressure).
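A quick numerical illustration of this point (the meanlog/sdlog values below are invented for illustration; they are not real blood-pressure figures):
set.seed(1)
men   <- rlnorm(1e6, meanlog = 4.8, sdlog = 0.10)
women <- rlnorm(1e6, meanlog = 4.8, sdlog = 0.25)
mean(log(men)); mean(log(women))  # both about 4.8: a test on the log scale sees no difference
mean(men); mean(women)            # the means differ, because E[Y] = exp(mu_ln + sigma_ln^2 / 2)
exp(4.8 + 0.25^2 / 2)             # theoretical mean for the higher-variance group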
So far, we have assumed that blood pressure genuinely is log-normal. If the true distributions are not quite log-normal, then transforming the data will (typically) make things even worse than above - since we won't quite know what our "mean" parameter actually means. I.e., we won't know that those two equations for mean and variance I gave above are correct. Using those to transform back and forth will then introduce additional errors. | Linear model with log-transformed response vs. generalized linear model with log link | Although it may appear that the mean of the log-transformed variables is preferable (since this is how log-normal is typically parameterised), from a practical point of view, the log of the mean is ty | Linear model with log-transformed response vs. generalized linear model with log link
Although it may appear that the mean of the log-transformed variables is preferable (since this is how log-normal is typically parameterised), from a practical point of view, the log of the mean is typically much more useful.
This is particularly true when your model is not exactly correct, and to quote George Box: "All models are wrong, but some are useful."
Suppose some quantity is log-normally distributed, blood pressure say (I'm not a medic!), and we have two populations, men and women. One might hypothesise that the average blood pressure is higher in women than in men. This exactly corresponds to asking whether the log of the average blood pressure is higher in women than in men. It is not the same as asking whether the average of the log blood pressure is higher in women than in men.
Don't get confused by the text book parameterisation of a distribution - it doesn't have any "real" meaning. The log-normal distribution is parameterised by the mean of the log ($\mu_{\ln}$) because of mathematical convenience, but equally we could choose to parameterise it by its actual mean and variance
$\mu = e^{\mu_{\ln} + \sigma_{\ln}^2/2}$
$\sigma^2 = (e^{\sigma^2_{\ln}} -1)e^{2 \mu_{\ln} + \sigma_{\ln}^2}$
Obviously, doing so makes the algebra horribly complicated, but it still works and means the same thing.
Looking at the above formula, we can see an important difference between transforming the variables and transforming the mean. The log of the mean, $\ln(\mu)$, increases as $\sigma^2_{\ln}$ increases, while the mean of the log, $\mu_{\ln}$ doesn't.
This means that women could, on average, have higher blood pressure than men, even though the mean parameter of the log-normal distribution ($\mu_{\ln}$) is the same, simply because the variance parameter is larger. This fact would get missed by a test that used log(Blood Pressure).
So far, we have assumed that blood pressure genuinely is log-normal. If the true distributions are not quite log-normal, then transforming the data will (typically) make things even worse than above - since we won't quite know what our "mean" parameter actually means. I.e., we won't know that those two equations for mean and variance I gave above are correct. Using those to transform back and forth will then introduce additional errors. | Linear model with log-transformed response vs. generalized linear model with log link
Although it may appear that the mean of the log-transformed variables is preferable (since this is how log-normal is typically parameterised), from a practical point of view, the log of the mean is ty |
3,179 | Linear model with log-transformed response vs. generalized linear model with log link | Here are my two cents from an advanced data analysis course I took while studying biostatistics (although I don't have any references other than my professor's notes):
It boils down to whether or not you need to address linearity and heteroscedasticity (unequal variances) in your data, or just linearity.
She notes that transforming the data affects both the linearity and variance assumptions of a model. For example, if your residuals exhibit issues with both, you could consider transforming the data, which potentially could fix both. The transformation transforms the errors and thus their variance.
In contrast, using the link function only affects the linearity assumption, not the variance. The log is taken of the mean (expected value), and thus the variance of the residuals is not affected.
In summary, if you don't have an issue with non-constant variance, she suggests using the link function over transformation, because you don't want to change your variance in that case (you're already meeting the assumption). | Linear model with log-transformed response vs. generalized linear model with log link | Here are my two cents from an advanced data analysis course I took while studying biostatistics (although I don't have any references other than my professor's notes):
It boils down to whether or not | Linear model with log-transformed response vs. generalized linear model with log link
Here are my two cents from an advanced data analysis course I took while studying biostatistics (although I don't have any references other than my professor's notes):
It boils down to whether or not you need to address linearity and heteroscedasticity (unequal variances) in your data, or just linearity.
She notes that transforming the data affects both the linearity and variance assumptions of a model. For example, if your residuals exhibit issues with both, you could consider transforming the data, which potentially could fix both. The transformation transforms the errors and thus their variance.
In contrast, using the link function only affects the linearity assumption, not the variance. The log is taken of the mean (expected value), and thus the variance of the residuals is not affected.
In summary, if you don't have an issue with non-constant variance, she suggests using the link function over transformation, because you don't want to change your variance in that case (you're already meeting the assumption). | Linear model with log-transformed response vs. generalized linear model with log link
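Here is a small simulated sketch of the two options discussed above (the data, coefficients and error SD are all made up; gaussian(link = "log") is the standard R way to request a Gaussian GLM with a log link):
set.seed(2)
x <- runif(200)
y <- exp(1 + 2 * x) * exp(rnorm(200, sd = 0.3))          # positive response with multiplicative error
fit_lm  <- lm(log(y) ~ x)                                # transforms the response (and the errors)
fit_glm <- glm(y ~ x, family = gaussian(link = "log"))   # only the mean is modelled on the log scale
head(exp(fitted(fit_lm)))  # back-transformed lm fit: estimates the median/geometric mean of y given x
head(fitted(fit_glm))      # glm fit on the response scale: estimates the mean of y given x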
Here are my two cents from an advanced data analysis course I took while studying biostatistics (although I don't have any references other than my professor's notes):
It boils down to whether or not |
3,180 | Linear model with log-transformed response vs. generalized linear model with log link | In the following I try to give some additional details to @Meg's answer with some mathematical notation.
The fixed part is the same for both, transformation and GLM. However, the transformation also affects the random part, while this is not the case for the link in the GLM.
Transformation
When we speak of a gaussian linear model with log-transformed response, we usually mean the following model
$$
\log(y) = \pmb x^T \pmb \beta + \varepsilon \qquad \text{with} \quad \varepsilon \sim N(0, \sigma^2)
$$
which can also be written on the original scale of $y$ as
$$
y = \exp(\pmb x^T \pmb \beta) \exp(\varepsilon)
$$
On the original scale we have
a multiplicative error
the error follows a $\log$-normal distribution
GLM
When we speak of a gaussian GLM with $\log$-link we usually assume the following model
$$
y \sim N(\mu, \sigma^2) \\
\log(\mu) = \pmb x^T \pmb \beta
$$
which can also be written as
$$
y = \exp(\pmb x^T \pmb \beta) + \varepsilon \qquad \text{with} \quad \varepsilon \sim N(0, \sigma^2)
$$
On the original scale we have
an additive error
the error follows a normal distribution | Linear model with log-transformed response vs. generalized linear model with log link | In the following I try to give some additional details to @Meg's answer with some mathematical notation.
The fixed part is the same for both, transformation and GLM. However, the transformation also a | Linear model with log-transformed response vs. generalized linear model with log link
In the following I try to give some additional details to @Meg's answer with some mathematical notation.
The fixed part is the same for both, transformation and GLM. However, the transformation also affects the random part, while this is not the case for the link in the GLM.
Transformation
When we speak of a gaussian linear model with log-transformed response, we usually mean the following model
$$
\log(y) = \pmb x^T \pmb \beta + \varepsilon \qquad \text{with} \quad \varepsilon \sim N(0, \sigma^2)
$$
which can also be written on the original scale of $y$ as
$$
y = \exp(\pmb x^T \pmb \beta) \exp(\varepsilon)
$$
On the original scale we have
a multiplicative error
the error follows a $\log$-normal distribution
GLM
When we speak of a gaussian GLM with $\log$-link we usually assume the following model
$$
y \sim N(\mu, \sigma^2) \\
\log(\mu) = \pmb x^T \pmb \beta
$$
which can also be written as
$$
y = \exp(\pmb x^T \pmb \beta) + \varepsilon \qquad \text{with} \quad \varepsilon \sim N(0, \sigma^2)
$$
On the original scale we have
an additive error
the error follows a normal distribution | Linear model with log-transformed response vs. generalized linear model with log link
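A minimal simulation sketch of the two data-generating processes written out above (the coefficients and error SD are illustrative assumptions only):
set.seed(3)
n <- 500; x <- runif(n); eta <- 0.5 + 1.5 * x
y_transform <- exp(eta) * exp(rnorm(n, sd = 0.4))  # log-transformed LM: multiplicative log-normal error
y_glm       <- exp(eta) + rnorm(n, sd = 0.4)       # Gaussian GLM with log link: additive normal error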
In the following I try to give some additional details to @Meg's answer with some mathematical notation.
The fixed part is the same for both, transformation and GLM. However, the transformation also a |
3,181 | Linear model with log-transformed response vs. generalized linear model with log link | Corvus pretty much answered the question. I can add:
Transformation introduces 'bias' such that the mean on the transformed scale is not consistent with that on the original scale (see the first formula in the answer from Corvus).
The log-transform can be useful when effects are nonlinear and multiplicative (Pek et al. 2017). The geometric mean is equal to the exponential of the arithmetic mean of the log-transformed values, i.e. exp(mu). For a log-normal distribution, the geometric mean equals the median (a quick numerical check is sketched after this list).
Applying the central limit theorem to the log domain, the geometric mean of a large number of independent random variables is approximately log-normally distributed around the true population geometric mean (this is sometimes called the ‘Multiplicative Central Limit Theorem’). Contrary to the answer from Corvus, the true distribution does not have to be quite log normal if the sample size is 'large enough.'
Please note that transformation of the response variable should not be used for the sole purpose of satisfying LM assumptions without consideration of changes to inference. Transformation should be guided by theory, should enhance interpretation and then estimation and interpretation should be done on the transformed scale. (Box & Cox 1964; Pek et al. 2017). Following these recommendations would limit the use of data transformation in applied statistics.
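To illustrate the point about the geometric mean above, a quick numerical check (the meanlog/sdlog values are arbitrary):
set.seed(4)
y <- rlnorm(1e5, meanlog = 1, sdlog = 0.8)
exp(mean(log(y)))  # geometric mean, approximately exp(1)
median(y)          # approximately the same for log-normal data
mean(y)            # larger: approximately exp(1 + 0.8^2 / 2)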
Pek, J., Wong, O. and Wong, A.C. (2017) Data transformations for inference with linear regression: clarifications and recommendations. Practical Assessment, Research and Evaluation, 22, 9. doi: https://doi.org/10.7275/2w3n-0f07
Box, G.E. & Cox, D.R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B, 26, 211–243. | Linear model with log-transformed response vs. generalized linear model with log link | Corvus pretty much answered the question. I can add:
Transformation introduces 'bias' such that the mean on the transformed scale is not consistent with that on the original scale (see the first form | Linear model with log-transformed response vs. generalized linear model with log link
Corvus pretty much answered the question. I can add:
Transformation introduces 'bias' such that the mean on the transformed scale is not consistent with that on the original scale (see the first formula in the answer from Corvus).
The log-transform can be useful when effects are nonlinear and multiplicative (Pek et al. 2017). The geometric mean is equal to the exponential of the arithmetic mean of log-transformed values = exp(mu). For a log-normal distribution, the geometric mean equals the median.
Applying the central limit theorem to the log domain, the geometric mean of a large number of independent random variables is approximately log-normally distributed around the true population geometric mean (this is sometimes called the ‘Multiplicative Central Limit Theorem’). Contrary to the answer from Corvus, the true distribution does not have to be quite log normal if the sample size is 'large enough.'
Please note that transformation of the response variable should not be used for the sole purpose of satisfying LM assumptions without consideration of changes to inference. Transformation should be guided by theory, should enhance interpretation and then estimation and interpretation should be done on the transformed scale. (Box & Cox 1964; Pek et al. 2017). Following these recommendations would limit the use of data transformation in applied statistics.
Pek, J., Wong, O. and Wong, A.C. (2017) Data transformations for inference with linear regression: clarifications and recommendations. Practical Assessment, Research and Evaluation, 22, 9. doi: https://doi.org/10.7275/2w3n-0f07
Box, G.E. & Cox, D.R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B, 26, 211–243. | Linear model with log-transformed response vs. generalized linear model with log link
Corvus pretty much answered the question. I can add:
Transformation introduces 'bias' such that the mean on the transformed scale is not consistent with that on the original scale (see the first form |
3,182 | What is the relationship between independent component analysis and factor analysis? | FA, PCA, and ICA, are all 'related', in as much as all three of them seek basis vectors that the data is projected against, such that you maximize insert-criteria-here. Think of the basis vectors as just encapsulating linear combinations.
For example, lets say your data matrix $\mathbf Z$ was a $2$ x $N$ matrix, that is, you have two random variables, and $N$ observations of them each. Then lets say you found a basis vector of $\mathbf w = \begin{bmatrix}0.1 \\-4 \end{bmatrix}$. When you extract (the first) signal, (call it the vector $\mathbf y$), it is done as so:
$$
\mathbf {y = w^{\mathrm T}Z}
$$
This just means "Multiply 0.1 by the first row of your data, and subtract 4 times the second row of your data". Then this gives $\mathbf y$, which is of course a $1$ x $N$ vector that has the property that you maximized its insert-criteria-here.
So what are those criteria?
Second-Order Criteria:
In PCA, you are finding basis vectors that 'best explain' the variance of your data. The first (ie highest ranked) basis vector is going to be one that best fits all the variance from your data. The second one also has this criterion, but must be orthogonal to the first, and so on and so forth. (Turns out those basis vectors for PCA are nothing but the eigenvectors of your data's covariance matrix).
In FA, there is a difference between it and PCA, because FA is generative, whereas PCA is not. I have seen FA described as 'PCA with noise', where the 'noise' terms are called 'specific factors'. All the same, the overall conclusion is that PCA and FA are based on second-order statistics (covariance), and nothing above.
Higher Order Criteria:
In ICA, you are again finding basis vectors, but this time, you want basis vectors that give a result such that this resulting vector is one of the independent components of the original data. You can do this by maximization of the absolute value of normalized kurtosis - a 4th-order statistic. That is, you project your data on some basis vector, and measure the kurtosis of the result. You change your basis vector a little (usually through gradient ascent), and then measure the kurtosis again, and so on. Eventually you will happen upon a basis vector that gives you a result that has the highest possible kurtosis, and this is your independent component.
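A toy R sketch of that last idea, using a crude grid search over directions instead of gradient ascent (the uniform sources and the mixing matrix are made up; uniform signals have negative excess kurtosis, which is why the absolute value matters):
set.seed(5)
S <- cbind(runif(2000, -1, 1), runif(2000, -1, 1))   # two independent non-Gaussian sources
Z <- t(S %*% matrix(c(1, 0.6, 0.4, 1), 2, 2))        # mixed signals, a 2 x N data matrix
exkurt <- function(v) { v <- as.numeric(v); m <- mean(v); mean((v - m)^4) / mean((v - m)^2)^2 - 3 }
theta <- seq(0, pi, length.out = 360)                # candidate unit basis vectors (cos a, sin a)
k <- sapply(theta, function(a) abs(exkurt(c(cos(a), sin(a)) %*% Z)))
a_best <- theta[which.max(k)]
w <- c(cos(a_best), sin(a_best))                     # direction with maximal |excess kurtosis|
y <- w %*% Z                                         # one recovered (approximately) independent component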
The top diagram above can help you visualize it. You can clearly see how the ICA vectors correspond to the axes of the data, (independent of each other), whereas the PCA vectors try to find directions where variance is maximized. (Somewhat like resultant).
If in the top diagram the PCA vectors look like they almost correspond to the ICA vectors, that is just coincidental. Here is another instance on different data and mixing matrix where they are very different. ;-) | What is the relationship between independent component analysis and factor analysis? | FA, PCA, and ICA, are all 'related', in as much as all three of them seek basis vectors that the data is projected against, such that you maximize insert-criteria-here. Think of the basis vectors as j | What is the relationship between independent component analysis and factor analysis?
FA, PCA, and ICA, are all 'related', in as much as all three of them seek basis vectors that the data is projected against, such that you maximize insert-criteria-here. Think of the basis vectors as just encapsulating linear combinations.
For example, lets say your data matrix $\mathbf Z$ was a $2$ x $N$ matrix, that is, you have two random variables, and $N$ observations of them each. Then lets say you found a basis vector of $\mathbf w = \begin{bmatrix}0.1 \\-4 \end{bmatrix}$. When you extract (the first) signal, (call it the vector $\mathbf y$), it is done as so:
$$
\mathbf {y = w^{\mathrm T}Z}
$$
This just means "Multiply 0.1 by the first row of your data, and subtract 4 times the second row of your data". Then this gives $\mathbf y$, which is of course a $1$ x $N$ vector that has the property that you maximized its insert-criteria-here.
So what are those criteria?
Second-Order Criteria:
In PCA, you are finding basis vectors that 'best explain' the variance of your data. The first (ie highest ranked) basis vector is going to be one that best fits all the variance from your data. The second one also has this criterion, but must be orthogonal to the first, and so on and so forth. (Turns out those basis vectors for PCA are nothing but the eigenvectors of your data's covariance matrix).
In FA, there is a difference between it and PCA, because FA is generative, whereas PCA is not. I have seen FA described as 'PCA with noise', where the 'noise' terms are called 'specific factors'. All the same, the overall conclusion is that PCA and FA are based on second-order statistics (covariance), and nothing above.
Higher Order Criteria:
In ICA, you are again finding basis vectors, but this time, you want basis vectors that give a result such that this resulting vector is one of the independent components of the original data. You can do this by maximization of the absolute value of normalized kurtosis - a 4th-order statistic. That is, you project your data on some basis vector, and measure the kurtosis of the result. You change your basis vector a little (usually through gradient ascent), and then measure the kurtosis again, and so on. Eventually you will happen upon a basis vector that gives you a result that has the highest possible kurtosis, and this is your independent component.
The top diagram above can help you visualize it. You can clearly see how the ICA vectors correspond to the axes of the data, (independent of each other), whereas the PCA vectors try to find directions where variance is maximized. (Somewhat like resultant).
If in the top diagram the PCA vectors look like they almost correspond to the ICA vectors, that is just coincidental. Here is another instance on different data and mixing matrix where they are very different. ;-) | What is the relationship between independent component analysis and factor analysis?
FA, PCA, and ICA, are all 'related', in as much as all three of them seek basis vectors that the data is projected against, such that you maximize insert-criteria-here. Think of the basis vectors as j |
3,183 | What is the relationship between independent component analysis and factor analysis? | Not quite. Factor analysis operates with the second moments, and really hopes that the data are Gaussian so that the likelihood ratios and stuff like that is not affected by non-normality. ICA, on the other hand, is motivated by the idea that when you add things up, you get something normal, due to CLT, and really hopes that the data are non-normal, so that the non-normal components can be extracted from them. To exploit non-normality, ICA tries to maximize the fourth moment of a linear combination of the inputs:
$$\max_{{\bf a}: \| {\bf a}\| =1} \frac1n \sum_i \bigl[ {\bf a}'({\bf x}_i-\bar {\bf x})\bigr]^4 $$
If anything, ICA should be compared to PCA, which maximizes the second moment (variance) of a standardized combination of inputs. | What is the relationship between independent component analysis and factor analysis? | Not quite. Factor analysis operates with the second moments, and really hopes that the data are Gaussian so that the likelihood ratios and stuff like that is not affected by non-normality. ICA, on the | What is the relationship between independent component analysis and factor analysis?
Not quite. Factor analysis operates with the second moments, and really hopes that the data are Gaussian so that the likelihood ratios and stuff like that is not affected by non-normality. ICA, on the other hand, is motivated by the idea that when you add things up, you get something normal, due to CLT, and really hopes that the data are non-normal, so that the non-normal components can be extracted from them. To exploit non-normality, ICA tries to maximize the fourth moment of a linear combination of the inputs:
$$\max_{{\bf a}: \| {\bf a}\| =1} \frac1n \sum_i \bigl[ {\bf a}'({\bf x}_i-\bar {\bf x})\bigr]^4 $$
If anything, ICA should be compared to PCA, which maximizes the second moment (variance) of a standardized combination of inputs. | What is the relationship between independent component analysis and factor analysis?
Not quite. Factor analysis operates with the second moments, and really hopes that the data are Gaussian so that the likelihood ratios and stuff like that is not affected by non-normality. ICA, on the |
3,184 | Difference between "kernel" and "filter" in CNN | In the context of convolutional neural networks, kernel = filter = feature detector.
Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz).
The filter is the yellow sliding window, and its value is:
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 1
\end{bmatrix} | Difference between "kernel" and "filter" in CNN | In the context of convolutional neural networks, kernel = filter = feature detector.
Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz).
The | Difference between "kernel" and "filter" in CNN
In the context of convolutional neural networks, kernel = filter = feature detector.
Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz).
The filter is the yellow sliding window, and its value is:
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 1
\end{bmatrix} | Difference between "kernel" and "filter" in CNN
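For readers who want to reproduce the sliding-window arithmetic in plain R, here is a hand-rolled sketch (the 5x5 binary input is made up; strictly this is cross-correlation, which is what CNN libraries call "convolution"):
conv2d <- function(img, k) {                       # "valid" convolution, stride 1, no padding
  out <- matrix(0, nrow(img) - nrow(k) + 1, ncol(img) - ncol(k) + 1)
  for (i in seq_len(nrow(out)))
    for (j in seq_len(ncol(out)))
      out[i, j] <- sum(img[i:(i + nrow(k) - 1), j:(j + ncol(k) - 1)] * k)
  out
}
kernel <- matrix(c(1, 0, 1,
                   0, 1, 0,
                   1, 0, 1), 3, 3, byrow = TRUE)    # the filter shown above
img <- matrix(sample(0:1, 25, replace = TRUE), 5, 5)
conv2d(img, kernel)                                 # the 3x3 "convolved feature" map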
In the context of convolutional neural networks, kernel = filter = feature detector.
Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz).
The |
3,185 | Difference between "kernel" and "filter" in CNN | How about we use the term "kernel" for a 2D array of weights, and the term "filter" for the 3D structure of multiple kernels stacked together? The dimension of a filter is $k \times k \times C$ (assuming square kernels). Each one of the $C$ kernels that compose a filter will be convolved with one of the $C$ channels of the input (input dimensions $H_{in} \times H_{in} \times C$, for example a $32 \times 32$ RGB image). It makes sense to use a different word to describe a 2D array of weights and a different one for the 3D structure of the weights, since the multiplication happens between 2D arrays and then the results are summed to calculate the 3D operation.
Currently there is a problem with the nomenclature in this field. There are many terms describing the same thing and even terms used interchangeably for different concepts! Take as an example the terminology used to describe the output of a convolution layer: feature maps, channels, activations, tensors, planes, etc...
Based on wikipedia, "In image processing, a kernel, is a small matrix".
Based on wikipedia, "A matrix is a rectangular array arranged in rows and columns".
If a kernel is a rectangular array, then it cannot be the 3D structure of the weights, which in general is of $k_1 \times k_2 \times C$ dimensions.
Well, I can't argue that this is the best terminology, but it is better than just using the terms "kernel" and "filter" interchangeably. Moreover, we do need a word to describe the concept of the distinct 2D arrays that form a filter. | Difference between "kernel" and "filter" in CNN | How about we use the term "kernel" for a 2D array of weights, and the term "filter" for the 3D structure of multiple kernels stacked together? The dimension of a filter is $k \times k \times C$ (assum | Difference between "kernel" and "filter" in CNN
How about we use the term "kernel" for a 2D array of weights, and the term "filter" for the 3D structure of multiple kernels stacked together? The dimension of a filter is $k \times k \times C$ (assuming square kernels). Each one of the $C$ kernels that compose a filter will be convolved with one of the $C$ channels of the input (input dimensions $H_{in} \times H_{in} \times C$, for example a $32 \times 32$ RGB image). It makes sense to use a different word to describe a 2D array of weights and a different one for the 3D structure of the weights, since the multiplication happens between 2D arrays and then the results are summed to calculate the 3D operation.
Currently there is a problem with the nomenclature in this field. There are many terms describing the same thing and even terms used interchangeably for different concepts! Take as an example the terminology used to describe the output of a convolution layer: feature maps, channels, activations, tensors, planes, etc...
Based on wikipedia, "In image processing, a kernel, is a small matrix".
Based on wikipedia, "A matrix is a rectangular array arranged in rows and columns".
If a kernel is a rectangular array, then it cannot be the 3D structure of the weights, which in general is of $k_1 \times k_2 \times C$ dimensions.
Well, I can't argue that this is the best terminology, but it is better than just using the terms "kernel" and "filter" interchangeably. Moreover, we do need a word to describe the concept of the distinct 2D arrays that form a filter.
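A tiny base-R sketch of that distinction (the sizes and values are invented): one filter is a k x k x C array of C kernels, and a single output value is the sum over channels of the per-channel elementwise products.
k <- 3; C <- 3
filt  <- array(rnorm(k * k * C), dim = c(k, k, C))  # one filter = C stacked 2-D kernels
patch <- array(runif(k * k * C), dim = c(k, k, C))  # one k x k x C patch of the input
sum(sapply(seq_len(C), function(ch) sum(patch[, , ch] * filt[, , ch])))  # one scalar output (bias omitted)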
How about we use the term "kernel" for a 2D array of weights, and the term "filter" for the 3D structure of multiple kernels stacked together? The dimension of a filter is $k \times k \times C$ (assum |
3,186 | Difference between "kernel" and "filter" in CNN | Filter consists of kernels. This means, in 2D convolutional neural network, filter is 3D. Check this gif from CS231n Convolutional Neural Networks for Visual Recognition:
Those three 3x3 kernels in the second column of this gif form a filter, and so do the ones in the third column. The number of filters is always equal to the number of feature maps in the next layer, while the number of kernels in each filter is always equal to the number of feature maps in this layer.
Those three 3x3 kernels in s | Difference between "kernel" and "filter" in CNN
Filter consists of kernels. This means, in 2D convolutional neural network, filter is 3D. Check this gif from CS231n Convolutional Neural Networks for Visual Recognition:
Those three 3x3 kernels in the second column of this gif form a filter, and so do the ones in the third column. The number of filters is always equal to the number of feature maps in the next layer, while the number of kernels in each filter is always equal to the number of feature maps in this layer.
Filter consists of kernels. This means, in 2D convolutional neural network, filter is 3D. Check this gif from CS231n Convolutional Neural Networks for Visual Recognition:
Those three 3x3 kernels in s |
3,187 | Difference between "kernel" and "filter" in CNN | A feature map is the same as a filter or "kernel" in this particular context.
The weights of the filter determine what specific features are detected.
So for example, Franck has provided a great visual. Notice that his filter/feature-detector has x1 along the diagonal elements and x0 along all the other elements. This kernel weighting would thus detect pixels in the image that have a value of 1 along the image's diagonals.
Observe that the resulting convolved feature shows values of 4 wherever the image has a "1" along the diagonal values of the 3x3 filter (thus detecting the filter in that specific 3x3 section of the image), and lower values of 2 in the areas of the image where that filter didn't match as strongly. | Difference between "kernel" and "filter" in CNN | A feature map is the same as a filter or "kernel" in this particular context.
The weights of the filter determine what specific features are detected.
So for example, Franck has provided a great visua | Difference between "kernel" and "filter" in CNN
A feature map is the same as a filter or "kernel" in this particular context.
The weights of the filter determine what specific features are detected.
So for example, Franck has provided a great visual. Notice that his filter/feature-detector has x1 along the diagonal elements and x0 along all the other elements. This kernel weighting would thus detect pixels in the image that have a value of 1 along the image's diagonals.
Observe that the resulting convolved feature shows values of 4 wherever the image has a "1" along the diagonal values of the 3x3 filter (thus detecting the filter in that specific 3x3 section of the image), and lower values of 2 in the areas of the image where that filter didn't match as strongly. | Difference between "kernel" and "filter" in CNN
A feature map is the same as a filter or "kernel" in this particular context.
The weights of the filter determine what specific features are detected.
So for example, Franck has provided a great visua |
3,188 | Difference between "kernel" and "filter" in CNN | The existing answers are excellent and comprehensively answer the question. Just want to add that filters in Convolutional networks are shared across the entire image (i.e., the input is convolved with the filter, as visualized in Franck's answer). The receptive field of a particular neuron is the set of all input units that affect the neuron in question. The receptive field of a neuron in a Convolutional network is generally smaller than the receptive field of a neuron in a Dense network courtesy of shared filters (also called parameter sharing).
Parameter sharing confers a certain benefit on CNNs, namely a property termed equivariance to translation. This is to say that if the input is perturbed or translated, the output is also modified in the same manner. Ian Goodfellow provides a great example in the Deep Learning Book regarding how practitioners can capitalize on equivariance in CNNs:
When processing time-series data, this means that convolution produces a sort of timeline that shows when different features appear in the input. If we move an event later in time in the input, the exact same representation of it will appear in the output, just later. Similarly with images, convolution creates a 2-D map of where certain features appear in the input. If we move the object in the input, its representation will move the same amount in the output. This is useful for when we know that some function of a small number of neighboring pixels is useful when applied to multiple input locations. For example, when processing images, it is useful to detect edges in the first layer of a convolutional network. The same edges appear more or less everywhere in the image, so it is practical to share parameters across the entire image.
The existing answers are excellent and comprehensively answer the question. Just want to add that filters in Convolutional networks are shared across the entire image (i.e., the input is convolved with the filter, as visualized in Franck's answer). The receptive field of a particular neuron is the set of all input units that affect the neuron in question. The receptive field of a neuron in a Convolutional network is generally smaller than the receptive field of a neuron in a Dense network courtesy of shared filters (also called parameter sharing).
Parameter sharing confers a certain benefit on CNNs, namely a property termed equivariance to translation. This is to say that if the input is perturbed or translated, the output is also modified in the same manner. Ian Goodfellow provides a great example in the Deep Learning Book regarding how practitioners can capitalize on equivariance in CNNs:
When processing time-series data, this means that convolution produces a sort of timeline that shows when different features appear in the input. If we move an event later in time in the input, the exact same representation of it will appear in the output, just later. Similarly with images, convolution creates a 2-D map of where certain features appear in the input. If we move the object in the input, its representation will move the same amount in the output. This is useful for when we know that some function of a small number of neighboring pixels is useful when applied to multiple input locations. For example, when processing images, it is useful to detect edges in the first layer of a convolutional network. The same edges appear more or less everywhere in the image, so it is practical to share parameters across the entire image.
The existing answers are excellent and comprehensively answer the question. Just want to add that filters in Convolutional networks are shared across the entire image (i.e., the input is convolved wit |
3,189 | Difference between "kernel" and "filter" in CNN | To be straightforward:
A filter is a collection of kernels, although we use filter and kernel interchangeably.
Example:
Let's say you want to apply P 3x3xN filters to a K x K x N input with stride = 1 and pad = 0. Each of the 3 x 3 matrices in a 3 x 3 x N filter is a kernel, and your output will be K-2 x K-2 x P. | Difference between "kernel" and "filter" in CNN | To be straightforward:
A filter is a collection of kernels, although we use filter and kernel interchangeably.
Example:
Let's say you want to apply P 3x3xN filter to a K x K x N input with stride =1 a | Difference between "kernel" and "filter" in CNN
To be straightforward:
A filter is a collection of kernels, although we use filter and kernel interchangeably.
Example:
Let's say you want to apply P 3x3xN filters to a K x K x N input with stride = 1 and pad = 0. Each of the 3 x 3 matrices in a 3 x 3 x N filter is a kernel, and your output will be K-2 x K-2 x P. | Difference between "kernel" and "filter" in CNN
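A hypothetical helper to check that arithmetic (the stride/padding formula is the usual one for "valid" convolutions; K = 28 is just an example value):
out_side <- function(K, k, stride = 1, pad = 0) (K + 2 * pad - k) %/% stride + 1
out_side(K = 28, k = 3)   # 26, i.e. K - 2 for a 3x3 kernel, stride 1, no padding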
To be straightforward:
A filter is a collection of kernels, although we use filter and kernel interchangeably.
Example:
Let's say you want to apply P 3x3xN filter to a K x K x N input with stride =1 a |
3,190 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. When using minimum norm OLS, you fit the noise with many small independent components and this has a regularizing effect equivalent to Ridge regularization. This regularization is often too strong, and it is possible to compensate for it using "anti-regularization" known as negative Ridge. In that case, you will see that the minimum of the MSE curve appears for negative values of $\lambda$.
By theoretical PCA, I mean:
Let $x\sim N(0,\Sigma)$ follow a multivariate normal distribution. There is a linear isometry $f$ such that $u=f(x)\sim N(0,D)$ where $D$ is diagonal: the components of $u$ are independent. $D$ is simply obtained by diagonalizing $\Sigma$. Now the model $y=\beta.x+\epsilon$ can be written $y=f(\beta).f(x)+\epsilon$ (a linear isometry preserves the dot product). If you write $\gamma=f(\beta)$, the model can be written $y=\gamma.u+\epsilon$. Furthermore, $\|\beta\|=\|\gamma\|$; hence fitting methods like Ridge or minimum norm OLS are perfectly isomorphic: the estimator of $y=\gamma.u+\epsilon$ is the image by $f$ of the estimator of $y=\beta.x+\epsilon$.
Theoretical PCA transforms non independent predictors into independent predictors. It is only loosely related to empirical PCA where you use the empirical covariance matrix (that differs a lot from the theoretical one with small sample size). Theoretical PCA is not practically computable but is only used here to interpret the model in an orthogonal predictor space.
Let's see what happens when we append many small variance independent predictors to a model:
Theorem
Ridge regularization with coefficient $\lambda$ is equivalent (when $p\rightarrow\infty$) to:
adding $p$ fake independent predictors (centred and identically distributed) each with variance $\frac{\lambda}{p}$
fitting the enriched model with minimum norm OLS estimator
keeping only the parameters for the true predictors
(sketch of) Proof
We are going to prove that the cost functions are asymptotically
equal. Let's split the model into real and fake predictors: $y=\beta x+\beta'x'+\epsilon$. The cost function of Ridge (for the true
predictors) can be written:
$$\mathrm{cost}_\lambda=\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2$$
When using minimum norm OLS, the response is fitted perfectly: the
error term is 0. The cost function is only about the norm of the
parameters. It can be split into the true parameters and the fake
ones:
$$\mathrm{cost}_{\lambda,p}=\|\beta\|^2+\inf\{\|\beta'\|^2 \mid X'\beta'=y-X\beta\}$$
In the right expression, the minimum norm solution is given by:
$$\beta'=X'^+(y-X\beta )$$
Now using SVD for $X'$:
$$X'=U\Sigma V$$
$$X'^{+}=V^\top\Sigma^{+} U^\top$$
We see that the norm of $\beta'$ essentially depends on the singular
values of $X'^+$ that are the reciprocals of the singular values of
$X'$. The normalized version of $X'$ is $\sqrt{p/\lambda} X'$. I've
looked at literature and singular values of large random matrices are
well known. For $p$ and $n$ large enough, minimum $s_\min$ and maximum
$s_\max$ singular values are approximated by (see theorem 1.1):
$$s_\min(\sqrt{p/\lambda}X')\approx \sqrt p\left(1-\sqrt{n/p}\right)$$
$$s_\max(\sqrt{p/\lambda}X')\approx \sqrt p \left(1+\sqrt{n/p}\right)$$
Since, for large $p$, $\sqrt{n/p}$ tends towards 0, we can just say
that all singular values are approximated by $\sqrt p$. Thus:
$$\|\beta'\|\approx\frac{1}{\sqrt\lambda}\|y-X\beta\|$$
Finally:
$$\mathrm{cost}_{\lambda,p}\approx\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2=\mathrm{cost}_\lambda$$
Note: it does not matter if you keep the coefficients of the fake
predictors in your model. The variance introduced by $\beta'x'$ is
$\frac{\lambda}{p}\|\beta'\|^2\approx\frac{1}{p}\|y-X\beta\|^2\approx\frac{n}{p}MSE(\beta)$.
Thus you increase your MSE by a factor $1+n/p$ only which tends
towards 1 anyway. Somehow you don't need to treat the
fake predictors differently than the real ones.
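A quick numerical check of this theorem in R (a sketch only; the sizes n and p, the penalty $\lambda$ and the number of fake predictors are arbitrary choices): the ridge estimate with penalty $\lambda$ should roughly agree with the first $p$ coefficients of the minimum-norm OLS fit on the model enriched with many fake predictors, each of variance $\lambda$ divided by their number.
# Ridge(lambda) vs. truncated min-norm OLS with many fake predictors of variance lambda/p_fake
library(MASS)                                   # for ginv (Moore-Penrose pseudoinverse)
set.seed(1)
n <- 30; p <- 10; lambda <- 5
X <- matrix(rnorm(n * p), n, p)
y <- X %*% rnorm(p) + rnorm(n)
beta_ridge <- drop(solve(crossprod(X) + lambda * diag(p), crossprod(X, y)))
p_fake <- 20000
Xfake <- matrix(rnorm(n * p_fake, sd = sqrt(lambda / p_fake)), n, p_fake)
beta_trunc <- drop(ginv(cbind(X, Xfake)) %*% y)[1:p]   # keep only the true predictors
round(cbind(ridge = beta_ridge, min_norm_trunc = beta_trunc), 3)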
Now, back to @amoeba's data. After applying theoretical PCA to $x$ (assumed to be normal), $x$ is transformed by a linear isometry into a variable $u$ whose components are independent and sorted in decreasing variance order. The problem $y=\beta x+\epsilon$ is equivalent to the transformed problem $y=\gamma u+\epsilon$.
Now imagine the variance of the components look like:
Consider a large number $p$ of the last components and call the sum of their variances $\lambda$. They each have a variance approximately equal to $\lambda/p$ and are independent. They play the role of the fake predictors in the theorem.
This fact is clearer in @jonny's model: only the first component of theoretical PCA is correlated with $y$ (it is proportional to $\overline{x}$) and has huge variance. All the other components (proportional to $x_i-\overline{x}$) have comparatively very small variance (write the covariance matrix and diagonalize it to see this) and play the role of fake predictors. I calculated that the regularization here corresponds (approx.) to a prior $N(0,\frac{1}{p^2})$ on $\gamma_1$ while the true $\gamma_1^2=\frac{1}{p}$. This definitely over-shrinks. This is visible in the fact that the final MSE is much larger than the ideal MSE. The regularization effect is too strong.
It is sometimes possible to improve this natural regularization by Ridge. First you sometimes need $p$ in the theorem really big (1000, 10000...) to seriously rival Ridge and the finiteness of $p$ is like an imprecision. But it also shows that Ridge is an additional regularization over a naturally existing implicit regularization and can thus have only a very small effect. Sometimes this natural regularization is already too strong and Ridge may not even be an improvement. More than this, it is better to use anti-regularization: Ridge with negative coefficient. This shows MSE for @jonny's model ($p=1000$), using $\lambda\in\mathbb{R}$: | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. W | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. When using minimum norm OLS, you fit the noise with many small independent components and this has a regularizing effect equivalent to Ridge regularization. This regularization is often too strong, and it is possible to compensate for it using "anti-regularization" known as negative Ridge. In that case, you will see that the minimum of the MSE curve appears for negative values of $\lambda$.
By theoretical PCA, I mean:
Let $x\sim N(0,\Sigma)$ be a multivariate normal distribution. There is a
linear isometry $f$ such that $u=f(x)\sim N(0,D)$ where $D$ is diagonal:
the components of $u$ are independent. $D$ is simply obtained by diagonalizing $\Sigma$.
Now the model $y=\beta.x+\epsilon$ can be written
$y=f(\beta).f(x)+\epsilon$ (a linear isometry preserves dot product).
If you write $\gamma=f(\beta)$, the model can be written
$y=\gamma.u+\epsilon$. Furthermore $\|\beta\|=\|\gamma\|$ hence
fitting methods like Ridge or minimum norm OLS are perfectly
isomorphic: the estimator of $y=\gamma.u+\epsilon$ is the image by $f$
of the estimator of $y=\beta.x+\epsilon$.
Theoretical PCA transforms non independent predictors into independent predictors. It is only loosely related to empirical PCA where you use the empirical covariance matrix (that differs a lot from the theoretical one with small sample size). Theoretical PCA is not practically computable but is only used here to interpret the model in an orthogonal predictor space.
Let's see what happens when we append many small variance independent predictors to a model:
Theorem
Ridge regularization with coefficient $\lambda$ is equivalent (when $p\rightarrow\infty$) to:
adding $p$ fake independent predictors (centred and identically distributed) each with variance $\frac{\lambda}{p}$
fitting the enriched model with minimum norm OLS estimator
keeping only the parameters for the true predictors
(sketch of) Proof
We are going to prove that the cost functions are asymptotically
equal. Let's split the model into real and fake predictors: $y=\beta x+\beta'x'+\epsilon$. The cost function of Ridge (for the true
predictors) can be written:
$$\mathrm{cost}_\lambda=\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2$$
When using minimum norm OLS, the response is fitted perfectly: the
error term is 0. The cost function is only about the norm of the
parameters. It can be split into the true parameters and the fake
ones:
$$\mathrm{cost}_{\lambda,p}=\|\beta\|^2+\inf\{\|\beta'\|^2 \mid X'\beta'=y-X\beta\}$$
In the right expression, the minimum norm solution is given by:
$$\beta'=X'^+(y-X\beta )$$
Now using SVD for $X'$:
$$X'=U\Sigma V$$
$$X'^{+}=V^\top\Sigma^{+} U^\top$$
We see that the norm of $\beta'$ essentially depends on the singular
values of $X'^+$ that are the reciprocals of the singular values of
$X'$. The normalized version of $X'$ is $\sqrt{p/\lambda} X'$. I've
looked at literature and singular values of large random matrices are
well known. For $p$ and $n$ large enough, minimum $s_\min$ and maximum
$s_\max$ singular values are approximated by (see theorem 1.1):
$$s_\min(\sqrt{p/\lambda}X')\approx \sqrt p\left(1-\sqrt{n/p}\right)$$
$$s_\max(\sqrt{p/\lambda}X')\approx \sqrt p \left(1+\sqrt{n/p}\right)$$
Since, for large $p$, $\sqrt{n/p}$ tends towards 0, we can just say
that all singular values are approximated by $\sqrt p$. Thus:
$$\|\beta'\|\approx\frac{1}{\sqrt\lambda}\|y-X\beta\|$$
Finally:
$$\mathrm{cost}_{\lambda,p}\approx\|\beta\|^2+\frac{1}{\lambda}\|y-X\beta\|^2=\mathrm{cost}_\lambda$$
Note: it does not matter if you keep the coefficients of the fake
predictors in your model. The variance introduced by $\beta'x'$ is
$\frac{\lambda}{p}\|\beta'\|^2\approx\frac{1}{p}\|y-X\beta\|^2\approx\frac{n}{p}MSE(\beta)$.
Thus you increase your MSE by a factor $1+n/p$ only which tends
towards 1 anyway. Somehow you don't need to treat the
fake predictors differently than the real ones.
Now, back to @amoeba's data. After applying theoretical PCA to $x$ (assumed to be normal), $x$ is transformed by a linear isometry into a variable $u$ whose components are independent and sorted in decreasing variance order. The problem $y=\beta x+\epsilon$ is equivalent to the transformed problem $y=\gamma u+\epsilon$.
Now imagine the variance of the components look like:
Consider a large number $p$ of the last components and call the sum of their variances $\lambda$. They each have a variance approximately equal to $\lambda/p$ and are independent. They play the role of the fake predictors in the theorem.
This fact is clearer in @jonny's model: only the first component of theoretical PCA is correlated with $y$ (it is proportional to $\overline{x}$) and has huge variance. All the other components (proportional to $x_i-\overline{x}$) have comparatively very small variance (write the covariance matrix and diagonalize it to see this) and play the role of fake predictors. I calculated that the regularization here corresponds (approx.) to a prior $N(0,\frac{1}{p^2})$ on $\gamma_1$ while the true $\gamma_1^2=\frac{1}{p}$. This definitely over-shrinks. This is visible in the fact that the final MSE is much larger than the ideal MSE. The regularization effect is too strong.
It is sometimes possible to improve this natural regularization by Ridge. First you sometimes need $p$ in the theorem really big (1000, 10000...) to seriously rival Ridge and the finiteness of $p$ is like an imprecision. But it also shows that Ridge is an additional regularization over a naturally existing implicit regularization and can thus have only a very small effect. Sometimes this natural regularization is already too strong and Ridge may not even be an improvement. More than this, it is better to use anti-regularization: Ridge with negative coefficient. This shows MSE for @jonny's model ($p=1000$), using $\lambda\in\mathbb{R}$: | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
A natural regularization happens because of the presence of many small components in the theoretical PCA of $x$. These small components are implicitly used to fit the noise using small coefficients. W |
3,191 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | Thanks everybody for the great ongoing discussion. The crux of the matter seems to be that minimum-norm OLS is effectively performing shrinkage that is similar to the ridge regression. This seems to occur whenever $p\gg n$. Ironically, adding pure noise predictors can even be used as a very weird form of regularization.
Part I. Demonstration with artificial data and analytical CV
@Jonny (+1) came up with a really simple artificial example that I will slightly adapt here. $X$ of $n\times p$ size and $y$ are generated such that all variables are Gaussian with unit variance, and correlation between each predictor and the response is $\rho$. I will fix $\rho=.2$.
I will use leave-one-out CV because there is analytical expression for the squared error: it is known as PRESS, "predicted sum of squares". $$\text{PRESS} = \sum_i \left( \frac{e_i}{1-H_{ii}}\right)^2,$$ where $e_i$ are residuals $$e = y - \hat y = y - Hy,$$ and $H$ is the hat matrix $$H = X (X^\top X + \lambda I)^{-1} X^\top=U\frac{S^2}{S^2+\lambda} U^\top$$ in terms of SVD $X=USV^\top$. This allows to replicate @Jonny's results without using glmnet and without actually performing cross-validation (I am plotting the ratio of PRESS to the sum of squares of $y$):
This analytical approach allows to compute the limit at $\lambda\to 0$. Simply plugging in $\lambda=0$ into the PRESS formula does not work: when $n<p$ and $\lambda=0$, the residuals are all zero and hat matrix is the identity matrix with ones on the diagonal, meaning that the fractions in the PRESS equation are undefined. But if we compute the limit at $\lambda \to 0$, then it will correspond to the minimum-norm OLS solution with $\lambda=0$.
The trick is to do Taylor expansion of the hat matrix when $\lambda\to 0$: $$H=U\frac{1}{1+\lambda/S^2} U^\top\approx U(1-\lambda/S^2) U^\top = I - \lambda US^{-2}U^\top = I-\lambda G^{-1}.$$ Here I introduced Gram matrix $G=XX^\top = US^2U^\top$.
We are almost done: $$\text{PRESS} = \sum_i\Big( \frac{\lambda [G^{-1}y]_i}{\lambda G^{-1}_{ii}}\Big)^2 = \sum_i\Big( \frac{ [G^{-1}y]_i}{G^{-1}_{ii}}\Big)^2.$$ Lambda got canceled out, so here we have the limiting value. I plotted it with a big black dot on the figure above (on the panels where $p>n$), and it matches perfectly.
Update Feb 21. The above formula is exact, but we can gain some insight by doing further approximations. It looks like $G^{-1}$ has approximately equal values on the diagonal even if $S$ has very unequal values (probably because $U$ mixes up all the eigenvalues pretty well). So for each $i$ we have that $G^{-1}_{ii}\approx \langle S^{-2} \rangle$ where angular brackets denote averaging. Using this approximation, we can rewrite: $$\text{PRESS}\approx \Big\lVert \frac{S^{-2}}{\langle S^{-2} \rangle}U^\top y\Big\rVert^2.$$ This approximation is shown on the figure above with red open circles.
Whether this will be larger or smaller than $\lVert y \rVert^2 = \lVert U^\top y \rVert^2$ depends on the singular values $S$. In this simulation $y$ is correlated with the first PC of $X$ so $U_1^\top y$ is large and all other terms are small. (In my real data, $y$ is also well predicted by the leading PCs.) Now, in the $p\gg n$ case, if the columns of $X$ are sufficiently random, then all singular values will be rather close to each other (rows approximately orthogonal). The "main" term $U_1^\top y$ will be multiplied by a factor less than 1. The terms towards the end will get multiplied by factors larger than 1 but not much larger. Overall the norm decreases. In contrast, in the $p\gtrsim n$ case, there will be some very small singular values. After inversion they will become large factors that will increase the overall norm.
[This argument is very hand-wavy; I hope it can be made more precise.]
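A quick side check of the singular-value claim, in R for brevity (the code in this answer is MATLAB; the sizes below are arbitrary): for an $n\times p$ Gaussian matrix with $p\gg n$ all singular values cluster around $\sqrt p$, roughly within a factor $1\pm\sqrt{n/p}$, whereas for $p$ close to $n$ the smallest ones are much smaller.
# Spread of singular values of an n x p standard Gaussian matrix
set.seed(1)
n <- 80
for (p in c(100, 1000, 10000)) {
  s <- svd(matrix(rnorm(n * p), n, p))$d
  cat(sprintf("p = %5d: min(s)/sqrt(p) = %.2f, max(s)/sqrt(p) = %.2f\n",
              p, min(s) / sqrt(p), max(s) / sqrt(p)))
}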
As a sanity check, if I swap the order of singular values by S = diag(flipud(diag(S))); then the predicted MSE is above $1$ everywhere on the 2nd and the 3rd panels.
figure('Position', [100 100 1000 300])
ps = [10, 100, 1000];
for pnum = 1:length(ps)
rng(42)
n = 80;
p = ps(pnum);
rho = .2;
y = randn(n,1);
X = repmat(y, [1 p])*rho + randn(n,p)*sqrt(1-rho^2);
lambdas = exp(-10:.1:20);
press = zeros(size(lambdas));
[U,S,V] = svd(X, 'econ');
% S = diag(flipud(diag(S))); % sanity check
for i = 1:length(lambdas)
H = U * diag(diag(S).^2./(diag(S).^2 + lambdas(i))) * U';
e = y - H*y;
press(i) = sum((e ./ (1-diag(H))).^2);
end
subplot(1, length(ps), pnum)
plot(log(lambdas), press/sum(y.^2))
hold on
title(['p = ' num2str(p)])
plot(xlim, [1 1], 'k--')
if p > n
Ginv = U * diag(diag(S).^-2) * U';
press0 = sum((Ginv*y ./ diag(Ginv)).^2);
plot(log(lambdas(1)), press0/sum(y.^2), 'ko', 'MarkerFaceColor', [0,0,0]);
press0approx = sum((diag(diag(S).^-2/mean(diag(S).^-2)) * U' * y).^2);
plot(log(lambdas(1)), press0approx/sum(y.^2), 'ro');
end
end
Part II. Adding pure noise predictors as a form of regularization
Good arguments were made by @Jonny, @Benoit, @Paul, @Dikran, and others that increasing the number of predictors will shrink the minimum-norm OLS solution. Indeed, once $p>n$, any new predictor can only decrease the norm of the minimum-norm solution. So adding predictors will push the norm down, somewhat similar to how ridge regression is penalizing the norm.
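A direct check of this claim, in R for brevity (a sketch with arbitrary sizes and numbers of appended noise columns): the $\ell_2$ norm of the minimum-norm solution never increases, and in practice decreases, as pure-noise columns are appended.
# Norm of the min-norm OLS solution as pure-noise columns are appended (p0 > n)
library(MASS)                                  # for ginv
set.seed(42)
n <- 80; p0 <- 100
X <- matrix(rnorm(n * p0), n, p0)
y <- rnorm(n)
norms <- sapply(c(0, 100, 400, 900), function(q) {
  XX <- cbind(X, matrix(rnorm(n * q), n, q))   # append q pure-noise predictors
  sqrt(sum((ginv(XX) %*% y)^2))                # L2 norm of the min-norm solution
})
norms                                          # non-increasing as columns are added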
So can this be used as a regularization strategy? We start with $n=80$ and $p=40$ and then keep adding $q$ pure noise predictors as a regularization attempt. I will do LOOCV and compare it with LOOCV for the ridge (computed as above). Note that after obtaining $\hat\beta$ on the $p+q$ predictors, I am "truncating" it at $p$ because I am only interested in the original predictors.
IT WORKS!!!
In fact, one does not need to "truncate" the beta; even if I use the full beta and the full $p+q$ predictors, I can get good performance (dashed line on the right subplot). This I think mimics my actual data in the question: only few predictors are truly predicting $y$, most of them are pure noise, and they serve as a regularization. In this regime additional ridge regularization does not help at all.
rng(42)
n = 80;
p = 40;
rho = .2;
y = randn(n,1);
X = repmat(y, [1 p])*rho + randn(n,p)*sqrt(1-rho^2);
lambdas = exp(-10:.1:20);
press = zeros(size(lambdas));
[U,S,V] = svd(X, 'econ');
for i = 1:length(lambdas)
H = U * diag(diag(S).^2./(diag(S).^2 + lambdas(i))) * U';
e = y - H*y;
press(i) = sum((e ./ (1-diag(H))).^2);
end
figure('Position', [100 100 1000 300])
subplot(121)
plot(log(lambdas), press/sum(y.^2))
hold on
xlabel('Ridge penalty (log)')
plot(xlim, [1 1], 'k--')
title('Ridge regression (n=80, p=40)')
ylim([0 2])
ps = [0 20 40 60 80 100 200 300 400 500 1000];
error = zeros(n, length(ps));
error_trunc = zeros(n, length(ps));
for fold = 1:n
indtrain = setdiff(1:n, fold);
for pi = 1:length(ps)
XX = [X randn(n,ps(pi))];
if size(XX,2) < size(XX,1)
beta = XX(indtrain,:) \ y(indtrain,:);
else
beta = pinv(XX(indtrain,:)) * y(indtrain,:);
end
error(fold, pi) = y(fold) - XX(fold,:) * beta;
error_trunc(fold, pi) = y(fold) - XX(fold,1:size(X,2)) * beta(1:size(X,2));
end
end
subplot(122)
hold on
plot(ps, sum(error.^2)/sum(y.^2), 'k.--')
plot(ps, sum(error_trunc.^2)/sum(y.^2), '.-')
legend({'Entire beta', 'Truncated beta'}, 'AutoUpdate','off')
legend boxoff
xlabel('Number of extra predictors')
title('Extra pure noise predictors')
plot(xlim, [1 1], 'k--')
ylim([0 2]) | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | Thanks everybody for the great ongoing discussion. The crux of the matter seems to be that minimum-norm OLS is effectively performing shrinkage that is similar to the ridge regression. This seems to o | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
Thanks everybody for the great ongoing discussion. The crux of the matter seems to be that minimum-norm OLS is effectively performing shrinkage that is similar to the ridge regression. This seems to occur whenever $p\gg n$. Ironically, adding pure noise predictors can even be used as a very weird form of regularization.
Part I. Demonstration with artificial data and analytical CV
@Jonny (+1) came up with a really simple artificial example that I will slightly adapt here. $X$ of $n\times p$ size and $y$ are generated such that all variables are Gaussian with unit variance, and correlation between each predictor and the response is $\rho$. I will fix $\rho=.2$.
I will use leave-one-out CV because there is analytical expression for the squared error: it is known as PRESS, "predicted sum of squares". $$\text{PRESS} = \sum_i \left( \frac{e_i}{1-H_{ii}}\right)^2,$$ where $e_i$ are residuals $$e = y - \hat y = y - Hy,$$ and $H$ is the hat matrix $$H = X (X^\top X + \lambda I)^{-1} X^\top=U\frac{S^2}{S^2+\lambda} U^\top$$ in terms of SVD $X=USV^\top$. This allows to replicate @Jonny's results without using glmnet and without actually performing cross-validation (I am plotting the ratio of PRESS to the sum of squares of $y$):
This analytical approach allows to compute the limit at $\lambda\to 0$. Simply plugging in $\lambda=0$ into the PRESS formula does not work: when $n<p$ and $\lambda=0$, the residuals are all zero and hat matrix is the identity matrix with ones on the diagonal, meaning that the fractions in the PRESS equation are undefined. But if we compute the limit at $\lambda \to 0$, then it will correspond to the minimum-norm OLS solution with $\lambda=0$.
The trick is to do Taylor expansion of the hat matrix when $\lambda\to 0$: $$H=U\frac{1}{1+\lambda/S^2} U^\top\approx U(1-\lambda/S^2) U^\top = I - \lambda US^{-2}U^\top = I-\lambda G^{-1}.$$ Here I introduced Gram matrix $G=XX^\top = US^2U^\top$.
We are almost done: $$\text{PRESS} = \sum_i\Big( \frac{\lambda [G^{-1}y]_i}{\lambda G^{-1}_{ii}}\Big)^2 = \sum_i\Big( \frac{ [G^{-1}y]_i}{G^{-1}_{ii}}\Big)^2.$$ Lambda got canceled out, so here we have the limiting value. I plotted it with a big black dot on the figure above (on the panels where $p>n$), and it matches perfectly.
Update Feb 21. The above formula is exact, but we can gain some insight by doing further approximations. It looks like $G^{-1}$ has approximately equal values on the diagonal even if $S$ has very unequal values (probably because $U$ mixes up all the eigenvalues pretty well). So for each $i$ we have that $G^{-1}_{ii}\approx \langle S^{-2} \rangle$ where angular brackets denote averaging. Using this approximation, we can rewrite: $$\text{PRESS}\approx \Big\lVert \frac{S^{-2}}{\langle S^{-2} \rangle}U^\top y\Big\rVert^2.$$ This approximation is shown on the figure above with red open circles.
Whether this will be larger or smaller than $\lVert y \rVert^2 = \lVert U^\top y \rVert^2$ depends on the singular values $S$. In this simulation $y$ is correlated with the first PC of $X$ so $U_1^\top y$ is large and all other terms are small. (In my real data, $y$ is also well predicted by the leading PCs.) Now, in the $p\gg n$ case, if the columns of $X$ are sufficiently random, then all singular values will be rather close to each other (rows approximately orthogonal). The "main" term $U_1^\top y$ will be multiplied by a factor less than 1. The terms towards the end will get multiplied by factors larger than 1 but not much larger. Overall the norm decreases. In contrast, in the $p\gtrsim n$ case, there will be some very small singular values. After inversion they will become large factors that will increase the overall norm.
[This argument is very hand-wavy; I hope it can be made more precise.]
As a sanity check, if I swap the order of singular values by S = diag(flipud(diag(S))); then the predicted MSE is above $1$ everywhere on the 2nd and the 3rd panels.
figure('Position', [100 100 1000 300])
ps = [10, 100, 1000];
for pnum = 1:length(ps)
rng(42)
n = 80;
p = ps(pnum);
rho = .2;
y = randn(n,1);
X = repmat(y, [1 p])*rho + randn(n,p)*sqrt(1-rho^2);
lambdas = exp(-10:.1:20);
press = zeros(size(lambdas));
[U,S,V] = svd(X, 'econ');
% S = diag(flipud(diag(S))); % sanity check
for i = 1:length(lambdas)
H = U * diag(diag(S).^2./(diag(S).^2 + lambdas(i))) * U';
e = y - H*y;
press(i) = sum((e ./ (1-diag(H))).^2);
end
subplot(1, length(ps), pnum)
plot(log(lambdas), press/sum(y.^2))
hold on
title(['p = ' num2str(p)])
plot(xlim, [1 1], 'k--')
if p > n
Ginv = U * diag(diag(S).^-2) * U';
press0 = sum((Ginv*y ./ diag(Ginv)).^2);
plot(log(lambdas(1)), press0/sum(y.^2), 'ko', 'MarkerFaceColor', [0,0,0]);
press0approx = sum((diag(diag(S).^-2/mean(diag(S).^-2)) * U' * y).^2);
plot(log(lambdas(1)), press0approx/sum(y.^2), 'ro');
end
end
Part II. Adding pure noise predictors as a form of regularization
Good arguments were made by @Jonny, @Benoit, @Paul, @Dikran, and others that increasing the number of predictors will shrink the minimum-norm OLS solution. Indeed, once $p>n$, any new predictor can only decrease the norm of the minimum-norm solution. So adding predictors will push the norm down, somewhat similar to how ridge regression is penalizing the norm.
So can this be used as a regularization strategy? We start with $n=80$ and $p=40$ and then keep adding $q$ pure noise predictors as a regularization attempt. I will do LOOCV and compare it with LOOCV for the ridge (computed as above). Note that after obtaining $\hat\beta$ on the $p+q$ predictors, I am "truncating" it at $p$ because I am only interested in the original predictors.
IT WORKS!!!
In fact, one does not need to "truncate" the beta; even if I use the full beta and the full $p+q$ predictors, I can get good performance (dashed line on the right subplot). This I think mimics my actual data in the question: only few predictors are truly predicting $y$, most of them are pure noise, and they serve as a regularization. In this regime additional ridge regularization does not help at all.
rng(42)
n = 80;
p = 40;
rho = .2;
y = randn(n,1);
X = repmat(y, [1 p])*rho + randn(n,p)*sqrt(1-rho^2);
lambdas = exp(-10:.1:20);
press = zeros(size(lambdas));
[U,S,V] = svd(X, 'econ');
for i = 1:length(lambdas)
H = U * diag(diag(S).^2./(diag(S).^2 + lambdas(i))) * U';
e = y - H*y;
press(i) = sum((e ./ (1-diag(H))).^2);
end
figure('Position', [100 100 1000 300])
subplot(121)
plot(log(lambdas), press/sum(y.^2))
hold on
xlabel('Ridge penalty (log)')
plot(xlim, [1 1], 'k--')
title('Ridge regression (n=80, p=40)')
ylim([0 2])
ps = [0 20 40 60 80 100 200 300 400 500 1000];
error = zeros(n, length(ps));
error_trunc = zeros(n, length(ps));
for fold = 1:n
indtrain = setdiff(1:n, fold);
for pi = 1:length(ps)
XX = [X randn(n,ps(pi))];
if size(XX,2) < size(XX,1)
beta = XX(indtrain,:) \ y(indtrain,:);
else
beta = pinv(XX(indtrain,:)) * y(indtrain,:);
end
error(fold, pi) = y(fold) - XX(fold,:) * beta;
error_trunc(fold, pi) = y(fold) - XX(fold,1:size(X,2)) * beta(1:size(X,2));
end
end
subplot(122)
hold on
plot(ps, sum(error.^2)/sum(y.^2), 'k.--')
plot(ps, sum(error_trunc.^2)/sum(y.^2), '.-')
legend({'Entire beta', 'Truncated beta'}, 'AutoUpdate','off')
legend boxoff
xlabel('Number of extra predictors')
title('Extra pure noise predictors')
plot(xlim, [1 1], 'k--')
ylim([0 2]) | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
Thanks everybody for the great ongoing discussion. The crux of the matter seems to be that minimum-norm OLS is effectively performing shrinkage that is similar to the ridge regression. This seems to o |
3,192 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | Here is an artificial situation where this occurs. Suppose each predictor variable is a copy of the target variable with a large amount of gaussian noise applied. The best possible model is an average of all predictor variables.
library(glmnet)
set.seed(1846)
noise <- 10
N <- 80
num.vars <- 100
target <- runif(N,-1,1)
training.data <- matrix(nrow = N, ncol = num.vars)
for(i in 1:num.vars){
training.data[,i] <- target + rnorm(N,0,noise)
}
plot(cv.glmnet(training.data, target, alpha = 0,
lambda = exp(seq(-10, 10, by = 0.1))))
100 variables behave in a "normal" way: Some positive value of lambda minimizes out of sample error.
But increase num.vars in the above code to 1000, and here is the new MSE path. (I extended to log(Lambda) = -100 to convince myself.)
What I think is happening
When fitting a lot of parameters with low regularization, the coefficients are randomly distributed around their true value with high variance.
As the number of predictors becomes very large, the "average error" tends towards zero, and it becomes better to just let the coefficients fall where they may and sum everything up than to bias them toward 0.
I'm sure this situation of the true prediction being an average of all predictors isn't the only time this occurs, but I don't know how to begin to pinpoint the biggest necessary condition here.
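To see the averaging intuition directly, here is a small sketch reusing the simulation settings above (same noise level, N and seed): the plain average of the noisy copies already tracks the target, and it tracks it better as num.vars grows.
# How well does the plain average of the noisy copies track the target?
set.seed(1846)
noise <- 10
N <- 80
target <- runif(N, -1, 1)
for (num.vars in c(100, 1000, 10000)) {
  training.data <- matrix(target, N, num.vars) + rnorm(N * num.vars, 0, noise)
  cat(sprintf("num.vars = %5d: cor(rowMeans(X), target) = %.2f\n",
              num.vars, cor(rowMeans(training.data), target)))
}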
EDIT:
The "flat" behavior for very low lambda will always happen, since the solution is converging to the minimum-norm OLS solution. Similarly the curve will be flat for very high lambda as the solution converges to 0. There will be no minimum iff one of those two solution is optimal.
Why is the minimum-norm OLS solution so (comparably) good in this case? I think it is related to the following behavior that I found very counter-intuitive, but on reflection makes a lot of sense.
max.beta.random <- function(num.vars){
num.vars <- round(num.vars)
set.seed(1846)
noise <- 10
N <- 80
target <- runif(N,-1,1)
training.data <- matrix(nrow = N, ncol = num.vars)
for(i in 1:num.vars){
training.data[,i] <- rnorm(N,0,noise)
}
udv <- svd(training.data)
U <- udv$u
S <- diag(udv$d)
V <- udv$v
beta.hat <- V %*% solve(S) %*% t(U) %*% target
max(abs(beta.hat))
}
curve(Vectorize(max.beta.random)(x), from = 10, to = 1000, n = 50,
xlab = "Number of Predictors", y = "Max Magnitude of Coefficients")
abline(v = 80)
With randomly generated predictors unrelated to the response, as p increases the coefficients become larger, but once p is much bigger than N they shrink toward zero. This also happens in my example. So very loosely, the unregularized solutions for those problems don't need shrinkage because they are already very small!
This happens for a trivial reason. $y$ can be expressed exactly as a linear combination of columns of $X$. $\hat{\beta}$ is the minimum-norm vector of coefficients. As more columns are added the norm of $\hat{\beta}$ must decrease or remain constant, because a possible linear combination is to keep the previous coefficients the same and set the new coefficients to $0$. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | Here is an artificial situation where this occurs. Suppose each predictor variable is a copy of the target variable with a large amount of gaussian noise applied. The best possible model is an average | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
Here is an artificial situation where this occurs. Suppose each predictor variable is a copy of the target variable with a large amount of gaussian noise applied. The best possible model is an average of all predictor variables.
library(glmnet)
set.seed(1846)
noise <- 10
N <- 80
num.vars <- 100
target <- runif(N,-1,1)
training.data <- matrix(nrow = N, ncol = num.vars)
for(i in 1:num.vars){
training.data[,i] <- target + rnorm(N,0,noise)
}
plot(cv.glmnet(training.data, target, alpha = 0,
lambda = exp(seq(-10, 10, by = 0.1))))
100 variables behave in a "normal" way: Some positive value of lambda minimizes out of sample error.
But increase num.vars in the above code to 1000, and here is the new MSE path. (I extended to log(Lambda) = -100 to convince myself.)
What I think is happening
When fitting a lot of parameters with low regularization, the coefficients are randomly distributed around their true value with high variance.
As the number of predictors becomes very large, the "average error" tends towards zero, and it becomes better to just let the coefficients fall where they may and sum everything up than to bias them toward 0.
I'm sure this situation of the true prediction being an average of all predictors isn't the only time this occurs, but I don't know how to begin to pinpoint the biggest necessary condition here.
EDIT:
The "flat" behavior for very low lambda will always happen, since the solution is converging to the minimum-norm OLS solution. Similarly the curve will be flat for very high lambda as the solution converges to 0. There will be no minimum iff one of those two solution is optimal.
Why is the minimum-norm OLS solution so (comparably) good in this case? I think it is related to the following behavior that I found very counter-intuitive, but on reflection makes a lot of sense.
max.beta.random <- function(num.vars){
num.vars <- round(num.vars)
set.seed(1846)
noise <- 10
N <- 80
target <- runif(N,-1,1)
training.data <- matrix(nrow = N, ncol = num.vars)
for(i in 1:num.vars){
training.data[,i] <- rnorm(N,0,noise)
}
udv <- svd(training.data)
U <- udv$u
S <- diag(udv$d)
V <- udv$v
beta.hat <- V %*% solve(S) %*% t(U) %*% target
max(abs(beta.hat))
}
curve(Vectorize(max.beta.random)(x), from = 10, to = 1000, n = 50,
xlab = "Number of Predictors", y = "Max Magnitude of Coefficients")
abline(v = 80)
With randomly generated predictors unrelated to the response, as p increases the coefficients become larger, but once p is much bigger than N they shrink toward zero. This also happens in my example. So very loosely, the unregularized solutions for those problems don't need shrinkage because they are already very small!
This happens for a trivial reason. $y$ can be expressed exactly as a linear combination of columns of $X$. $\hat{\beta}$ is the minimum-norm vector of coefficients. As more columns are added the norm of $\hat{\beta}$ must decrease or remain constant, because a possible linear combination is to keep the previous coefficients the same and set the new coefficients to $0$. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
Here is an artificial situation where this occurs. Suppose each predictor variable is a copy of the target variable with a large amount of gaussian noise applied. The best possible model is an average |
3,193 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | So I decided to run nested cross-validation using the specialized mlr package in R to see what's actually coming from the modelling approach.
Code (it takes a few minutes to run on an ordinary notebook)
library(mlr)
daf = read.csv("https://pastebin.com/raw/p1cCCYBR", sep = " ", header = FALSE)
tsk = list(
tsk1110 = makeRegrTask(id = "tsk1110", data = daf, target = colnames(daf)[1]),
tsk500 = makeRegrTask(id = "tsk500", data = daf[, c(1,sample(ncol(daf)-1, 500)+1)], target = colnames(daf)[1]),
tsk100 = makeRegrTask(id = "tsk100", data = daf[, c(1,sample(ncol(daf)-1, 100)+1)], target = colnames(daf)[1]),
tsk50 = makeRegrTask(id = "tsk50", data = daf[, c(1,sample(ncol(daf)-1, 50)+1)], target = colnames(daf)[1]),
tsk10 = makeRegrTask(id = "tsk10", data = daf[, c(1,sample(ncol(daf)-1, 10)+1)], target = colnames(daf)[1])
)
rdesc = makeResampleDesc("CV", iters = 10)
msrs = list(mse, rsq)
configureMlr(on.par.without.desc = "quiet")
bm3 = benchmark(learners = list(
makeLearner("regr.cvglmnet", alpha = 0, lambda = c(0, exp(seq(-10, 10, length.out = 150))),
makeLearner("regr.glmnet", alpha = 0, lambda = c(0, exp(seq(-10, 10, length.out = 150))), s = 151)
), tasks = tsk, resamplings = rdesc, measures = msrs)
Results
getBMRAggrPerformances(bm3, as.df = TRUE)
# task.id learner.id mse.test.mean rsq.test.mean
#1 tsk10 regr.cvglmnet 1.0308055 -0.224534550
#2 tsk10 regr.glmnet 1.3685799 -0.669473387
#3 tsk100 regr.cvglmnet 0.7996823 0.031731316
#4 tsk100 regr.glmnet 1.3092522 -0.656879104
#5 tsk1110 regr.cvglmnet 0.8236786 0.009315037
#6 tsk1110 regr.glmnet 0.6866745 0.117540454
#7 tsk50 regr.cvglmnet 1.0348319 -0.188568886
#8 tsk50 regr.glmnet 2.5468091 -2.423461744
#9 tsk500 regr.cvglmnet 0.7210185 0.173851634
#10 tsk500 regr.glmnet 0.6171841 0.296530437
They do basically the same across tasks.
So, what about the optimal lambdas?
sapply(lapply(getBMRModels(bm3, task.ids = "tsk1110")[[1]][[1]], "[[", 2), "[[", "lambda.min")
# [1] 4.539993e-05 4.539993e-05 2.442908e-01 1.398738e+00 4.539993e-05
# [6] 0.000000e+00 4.539993e-05 3.195187e-01 2.793841e-01 4.539993e-05
Notice the lambdas are already transformed. Some fold even picked the minimal lambda $\lambda = 0$.
I fiddled a bit more with glmnet and discovered that there, too, the minimal lambda is not picked. Check:
EDIT:
After comments by amoeba, it became clear the regularization path is an important step in the glmnet estimation, so the code now reflects it. This way, most discrepancies vanished.
cvfit = cv.glmnet(x = x, y = y, alpha = 0, lambda = exp(seq(-10, 10, length.out = 150)))
plot(cvfit)
Conclusion
So, basically, $\lambda>0$ really improves the fit (edit: but not by much!).
How is it possible and what does it say about my dataset? Am I missing something obvious or is it indeed counter-intuitive?
We are likely nearer the true distribution of the data setting $\lambda$ to a small value larger than zero. There's nothing counter-intuitive about it though.
Edit: Keep in mind, though, the ridge regularization path makes use of previous parameter estimates when we call glmnet, but this is beyond my expertise. If we set a really low lambda in isolation, it'll likely degrade performance.
EDIT: The lambda selection does say something more about your data. As larger lambdas decrease performance, it means there are preferential, i.e. larger, coefficients in your model, as large lambdas shrink all coefficients towards zero. Though $\lambda\neq0$ means that the effective degrees of freedom in your model is smaller than the apparent degrees of freedom, $p$.
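For reference, the effective degrees of freedom of a ridge fit can be computed from the singular values $d_i$ of the design matrix as $\mathrm{df}(\lambda)=\sum_i d_i^2/(d_i^2+\lambda)$. A small R sketch on simulated data (only to illustrate the formula, not run on the dataset above):
# Effective degrees of freedom of ridge: df(lambda) = sum(d_i^2 / (d_i^2 + lambda))
set.seed(1)
n <- 80; p <- 1000
Xsim <- matrix(rnorm(n * p), n, p)        # simulated stand-in for the design matrix
d <- svd(Xsim)$d                          # singular values
df_ridge <- function(lambda) sum(d^2 / (d^2 + lambda))
sapply(c(0, 0.1, 1, 10, 100), df_ridge)   # equals min(n, p) at lambda = 0, shrinks as lambda grows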
How can there be any qualitative difference between p=100 and p=1000 given that both are larger than n?
$p=1000$ invariably contains at least as much information as $p=100$, or even more.
Comments
It seems you are getting a tiny minimum for some non-zero lambda (I am looking at your figure), but the curve is still really really flat to the left of it. So my main question remains as to why λ→0 does not noticeably overfit. I don't see an answer here yet. Do you expect this to be a general phenomenon? I.e. for any data with n≪p, lambda=0 will perform [almost] as good as optimal lambda? Or is it something special about these data? If you look above in the comments, you'll see that many people did not even believe me that it's possible.
I think you're conflating validation performance with test performance, and such comparison is not warranted.
Edit: notice though when we set lambda to 0 after running the whole regularization path performance doesn't degrade as such, therefore the regularization path is key to understand what's going on!
Also, I don't quite understand your last line. Look at the cv.glmnet output for p=100. It will have very different shape. So what affects this shape (asymptote on the left vs. no asymptote) when p=100 or p=1000?
Let's compare the regularization paths for both:
fit1000 = glmnet(x, y, alpha = 0, lambda = exp(seq(-10,10, length.out = 1001)))
fit100 = glmnet(x[, sample(1000, 100)], y, alpha = 0, lambda = exp(seq(-10,10, length.out = 1001)))
plot(fit1000, "lambda")
x11()
plot(fit100, "lambda")
It becomes clear $p=1000$ affords larger coefficients at increasing $\lambda$, even though it has smaller coefficients for asymptotically-OLS ridge, at the left of both plots. So, basically, $p=100$ overfits at the left of the graph, and that probably explains the difference in behavior between them.
It's harder for $p=1000$ to overfit because, even though Ridge shrinks coefficients towards zero, they never reach zero. This means that the predictive power of the model is shared between many more components, making it easier to predict around the mean instead of being carried away by noise. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | So I decided to run nested cross-validation using the specialized mlr package in R to see what's actually coming from the modelling approach.
Code (it takes a few minutes to run on an ordinary noteboo | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
So I decided to run nested cross-validation using the specialized mlr package in R to see what's actually coming from the modelling approach.
Code (it takes a few minutes to run on an ordinary notebook)
library(mlr)
daf = read.csv("https://pastebin.com/raw/p1cCCYBR", sep = " ", header = FALSE)
tsk = list(
tsk1110 = makeRegrTask(id = "tsk1110", data = daf, target = colnames(daf)[1]),
tsk500 = makeRegrTask(id = "tsk500", data = daf[, c(1,sample(ncol(daf)-1, 500)+1)], target = colnames(daf)[1]),
tsk100 = makeRegrTask(id = "tsk100", data = daf[, c(1,sample(ncol(daf)-1, 100)+1)], target = colnames(daf)[1]),
tsk50 = makeRegrTask(id = "tsk50", data = daf[, c(1,sample(ncol(daf)-1, 50)+1)], target = colnames(daf)[1]),
tsk10 = makeRegrTask(id = "tsk10", data = daf[, c(1,sample(ncol(daf)-1, 10)+1)], target = colnames(daf)[1])
)
rdesc = makeResampleDesc("CV", iters = 10)
msrs = list(mse, rsq)
configureMlr(on.par.without.desc = "quiet")
bm3 = benchmark(learners = list(
makeLearner("regr.cvglmnet", alpha = 0, lambda = c(0, exp(seq(-10, 10, length.out = 150))),
makeLearner("regr.glmnet", alpha = 0, lambda = c(0, exp(seq(-10, 10, length.out = 150))), s = 151)
), tasks = tsk, resamplings = rdesc, measures = msrs)
Results
getBMRAggrPerformances(bm3, as.df = TRUE)
# task.id learner.id mse.test.mean rsq.test.mean
#1 tsk10 regr.cvglmnet 1.0308055 -0.224534550
#2 tsk10 regr.glmnet 1.3685799 -0.669473387
#3 tsk100 regr.cvglmnet 0.7996823 0.031731316
#4 tsk100 regr.glmnet 1.3092522 -0.656879104
#5 tsk1110 regr.cvglmnet 0.8236786 0.009315037
#6 tsk1110 regr.glmnet 0.6866745 0.117540454
#7 tsk50 regr.cvglmnet 1.0348319 -0.188568886
#8 tsk50 regr.glmnet 2.5468091 -2.423461744
#9 tsk500 regr.cvglmnet 0.7210185 0.173851634
#10 tsk500 regr.glmnet 0.6171841 0.296530437
They do basically the same across tasks.
So, what about the optimal lambdas?
sapply(lapply(getBMRModels(bm3, task.ids = "tsk1110")[[1]][[1]], "[[", 2), "[[", "lambda.min")
# [1] 4.539993e-05 4.539993e-05 2.442908e-01 1.398738e+00 4.539993e-05
# [6] 0.000000e+00 4.539993e-05 3.195187e-01 2.793841e-01 4.539993e-05
Notice the lambdas are already transformed. Some fold even picked the minimal lambda $\lambda = 0$.
I fiddled a bit more with glmnet and discovered that there, too, the minimal lambda is not picked. Check:
EDIT:
After comments by amoeba, it became clear the regularization path is an important step in the glmnet estimation, so the code now reflects it. This way, most discrepancies vanished.
cvfit = cv.glmnet(x = x, y = y, alpha = 0, lambda = exp(seq(-10, 10, length.out = 150)))
plot(cvfit)
Conclusion
So, basically, $\lambda>0$ really improves the fit (edit: but not by much!).
How is it possible and what does it say about my dataset? Am I missing something obvious or is it indeed counter-intuitive?
We are likely nearer the true distribution of the data setting $\lambda$ to a small value larger than zero. There's nothing counter-intuitive about it though.
Edit: Keep in mind, though, the ridge regularization path makes use of previous parameter estimates when we call glmnet, but this is beyond my expertise. If we set a really low lambda in isolation, it'll likely degrade performance.
EDIT: The lambda selection does say something more about your data. As larger lambdas decrease performance, it means there are preferential, i.e. larger, coefficients in your model, as large lambdas shrink all coefficients towards zero. Though $\lambda\neq0$ means that the effective degrees of freedom in your model is smaller than the apparent degrees of freedom, $p$.
How can there be any qualitative difference between p=100 and p=1000 given that both are larger than n?
$p=1000$ invariably contains at least as much information as $p=100$, or even more.
Comments
It seems you are getting a tiny minimum for some non-zero lambda (I am looking at your figure), but the curve is still really really flat to the left of it. So my main question remains as to why λ→0 does not noticeably overfit. I don't see an answer here yet. Do you expect this to be a general phenomenon? I.e. for any data with n≪p, lambda=0 will perform [almost] as good as optimal lambda? Or is it something special about these data? If you look above in the comments, you'll see that many people did not even believe me that it's possible.
I think you're conflating validation performance with test performance, and such comparison is not warranted.
Edit: notice though when we set lambda to 0 after running the whole regularization path performance doesn't degrade as such, therefore the regularization path is key to understand what's going on!
Also, I don't quite understand your last line. Look at the cv.glmnet output for p=100. It will have very different shape. So what affects this shape (asymptote on the left vs. no asymptote) when p=100 or p=1000?
Let's compare the regularization paths for both:
fit1000 = glmnet(x, y, alpha = 0, lambda = exp(seq(-10,10, length.out = 1001)))
fit100 = glmnet(x[, sample(1000, 100)], y, alpha = 0, lambda = exp(seq(-10,10, length.out = 1001)))
plot(fit1000, "lambda")
x11()
plot(fit100, "lambda")
It becomes clear $p=1000$ affords larger coefficients at increasing $\lambda$, even though it has smaller coefficients for asymptotically-OLS ridge, at the left of both plots. So, basically, $p=100$ overfits at the left of the graph, and that probably explains the difference in behavior between them.
It's harder for $p=1000$ to overfit because, even though Ridge shrinks coefficients towards zero, they never reach zero. This means that the predictive power of the model is shared between many more components, making it easier to predict around the mean instead of being carried away by noise. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
So I decided to run nested cross-validation using the specialized mlr package in R to see what's actually coming from the modelling approach.
Code (it takes a few minutes to run on an ordinary noteboo |
3,194 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | How can (minimal norm) OLS fail to overfit?
In short:
Experimental parameters that correlate with the (unknown) parameters in the true model will be more likely to be estimated with high values in a minimal norm OLS fitting procedure. That is because they will fit the 'model+noise' whereas the other parameters will only fit the 'noise' (thus they will fit a larger part of the model with a lower value of the coefficient and be more likely to have a high value in the minimal norm OLS).
This effect will reduce the amount of overfitting in a minimal norm OLS fitting procedure. The effect is more pronounced if more parameters are available since then it becomes more likely that a larger portion of the 'true model' is being incorporated in the estimate.
Longer part: (I am not sure what to place here since the issue is not entirely clear to me, or I do not know to what precision an answer needs to address the question)
Below is an example that can be easily constructed and demonstrates the problem. The effect is not so strange and examples are easy to make.
I took $p=200$ sin-functions (because they are perpendicular) as variables
created a random model with $n=50$ measurements.
The model is
constructed with only $tm=10$ of the variables so 190 of the 200
variables are creating the possibility to generate over-fitting.
model coefficients are randomly determined
In this example case we observe that there is some over-fitting but the coefficients of the parameters that belong to the true model have a higher value. Thus the R^2 may have some positive value.
The image below (and the code to generate it) demonstrates that the over-fitting is limited. The dots relate to the 200 estimated parameters of the model. The red dots relate to those parameters that are also present in the 'true model', and we see that they have a higher value. Thus, there is some degree of approaching the real model and getting the R^2 above 0.
Note that I used a model with orthogonal variables (the sine-functions). If parameters are correlated then they may occur in the model with relatively very high coefficient and become more penalized in the minimal norm OLS.
Note that the 'orthogonal variables' are not orthogonal when we consider the data. The inner product of $sin(ax)$ and $sin(bx)$ is only zero when we integrate over the entire range of $x$, not when we only have a few samples of $x$. The consequence is that even with zero noise the over-fitting will occur (and the R^2 value seems to depend on many factors, aside from noise; of course there is the relation between $n$ and $p$, but also important is how many variables are in the true model and how many of them are in the fitting model).
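A quick R illustration of this point (using the same sine construction as the code below, with two arbitrary frequencies): the correlation, a normalized inner product, of two of these sine vectors is essentially zero over the full grid of $x$ values but is typically noticeably non-zero over a random subsample of 50 points.
# Sample (non-)orthogonality of two sine predictors
set.seed(1)
l <- 24000
t <- 1:l
s1 <- sin(3 * t / l * 2 * pi)
s2 <- sin(7 * t / l * 2 * pi)
xv <- sample(t, 50)                     # a small random subsample, as in the fit below
c(full_grid = cor(s1, s2), subsample = cor(s1[xv], s2[xv]))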
library(MASS)
par(mar=c(5.1, 4.1, 9.1, 4.1), xpd=TRUE)
p <- 200
l <- 24000
n <- 50
tm <- 10
# generate i sinus vectors as possible parameters
t <- c(1:l)
xm <- sapply(c(0:(p-1)), FUN = function(x) sin(x*t/l*2*pi))
# generate random model by selecting only tm parameters
sel <- sample(1:p, tm)
coef <- rnorm(tm, 2, 0.5)
# generate random data xv and yv with n samples
xv <- sample(t, n)
yv <- xm[xv, sel] %*% coef + rnorm(n, 0, 0.1)
# generate model
M <- ginv(t(xm[xv,]) %*% xm[xv,])
Bsol <- M %*% t(xm[xv,]) %*% yv
ysol <- xm[xv,] %*% Bsol
# plotting comparison of model with true model
plot(1:p, Bsol, ylim=c(min(Bsol,coef),max(Bsol,coef)))
points(sel, Bsol[sel], col=1, bg=2, pch=21)
points(sel,coef,pch=3,col=2)
title("comparing overfitted model (circles) with true model (crosses)",line=5)
legend(0,max(coef,Bsol)+0.55,c("all 200 estimated coefficients","the 10 estimated coefficients corresponding to true model","true coefficient values"),pch=c(21,21,3),pt.bg=c(0,2,0),col=c(1,1,2))
Truncated beta technique in relation to ridge regression
I have transformed the python code from Amoeba into R and combined the two graphs together. For each minimal norm OLS estimate with added noise variables I match a ridge regression estimate with the same (approximately) $l_2$-norm for the $\beta$ vector.
It seems like the truncated noise model does much the same (it is only a bit slower to compute, and perhaps slightly worse a bit more often).
However without the truncation the effect is much less strong.
This correspondence between adding parameters and ridge penalty is not necessarily the strongest mechanism behind the absence of
over-fitting. This can be seen especially in the 1000p curve (in the
image of the question) going to almost 0.3 while the other curves,
with different p, don't reach this level, no matter what the ridge
regression parameter is. The additional parameters, in that practical case, are not the same as a shift of the ridge parameter (and I guess that this is because the extra parameters will create a better, more complete, model).
The noise parameters reduce the norm on the one hand (just like ridge regression) but also introduce additional noise. Benoit Sanchez shows that in the limit, adding many, many noise parameters with smaller deviation, it will eventually become the same as ridge regression (the growing number of noise parameters cancel each other out). But at the same time, it requires much more computation (if we increase the deviation of the noise, to allow using fewer parameters and speed up computation, the difference becomes larger).
Rho = 0.2
Rho = 0.4
Rho = 0.2 increasing the variance of the noise parameters to 2
code example
# prepare the data
set.seed(42)
n = 80
p = 40
rho = .2
y = rnorm(n,0,1)
X = matrix(rep(y,p), ncol = p)*rho + rnorm(n*p,0,1)*(1-rho^2)
# range of variables to add
ps = c(0, 5, 10, 15, 20, 40, 45, 50, 55, 60, 70, 80, 100, 125, 150, 175, 200, 300, 400, 500, 1000)
#ps = c(0, 5, 10, 15, 20, 40, 60, 80, 100, 150, 200, 300) #,500,1000)
# variables to store output (the sse)
error = matrix(0,nrow=n, ncol=length(ps))
error_t = matrix(0,nrow=n, ncol=length(ps))
error_s = matrix(0,nrow=n, ncol=length(ps))
# adding a progression bar
pb <- txtProgressBar(min = 0, max = n, style = 3)
# training set by leaving out measurement 1, repeat n times
for (fold in 1:n) {
indtrain = c(1:n)[-fold]
# ridge regression
beta_s <- glmnet(X[indtrain,],y[indtrain],alpha=0,lambda = 10^c(seq(-4,2,by=0.01)))$beta
# calculate l2-norm to compare with adding variables
l2_bs <- colSums(beta_s^2)
for (pi in 1:length(ps)) {
XX = cbind(X, matrix(rnorm(n*ps[pi],0,1), nrow=80))
XXt = XX[indtrain,]
if (p+ps[pi] < n) {
beta = solve(t(XXt) %*% (XXt)) %*% t(XXt) %*% y[indtrain]
}
else {
beta = ginv(t(XXt) %*% (XXt)) %*% t(XXt) %*% y[indtrain]
}
# pickout comparable ridge regression with the same l2 norm
l2_b <- sum(beta[1:p]^2)
beta_shrink <- beta_s[,which.min((l2_b-l2_bs)^2)]
# compute errors
error[fold, pi] = y[fold] - XX[fold,1:p] %*% beta[1:p]
error_t[fold, pi] = y[fold] - XX[fold,] %*% beta[]
error_s[fold, pi] = y[fold] - XX[fold,1:p] %*% beta_shrink[]
}
setTxtProgressBar(pb, fold) # update progression bar
}
# plotting
plot(ps,colSums(error^2)/sum(y^2) ,
ylim = c(0,2),
xlab ="Number of extra predictors",
ylab ="relative sum of squared error")
lines(ps,colSums(error^2)/sum(y^2))
points(ps,colSums(error_t^2)/sum(y^2),col=2)
lines(ps,colSums(error_t^2)/sum(y^2),col=2)
points(ps,colSums(error_s^2)/sum(y^2),col=4)
lines(ps,colSums(error_s^2)/sum(y^2),col=4)
title('Extra pure noise predictors')
legend(200,2,c("complete model with p + extra predictors",
"truncated model with p + extra predictors",
"ridge regression with similar l2-norm",
"idealized model uniform beta with 1/p/rho"),
pch=c(1,1,1,NA), col=c(2,1,4,1),lt=c(1,1,1,2))
# idealized model (if we put all beta to 1/rho/p we should theoretically have a reasonable good model)
error_op <- rep(0,n)
for (fold in 1:n) {
beta = rep(1/rho/p,p)
error_op[fold] = y[fold] - X[fold,] %*% beta
}
id <- sum(error_op^2)/sum(y^2)
lines(range(ps),rep(id,2),lty=2) | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | How can (minimal norm) OLS fail to overfit?
In short:
Experimental parameters that correlate with the (unknown) parameters in the true model will be more likely to be estimated with high values in a | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
How can (minimal norm) OLS fail to overfit?
In short:
Experimental parameters that correlate with the (unknown) parameters in the true model will be more likely to be estimated with high values in a minimal norm OLS fitting procedure. That is because they will fit the 'model+noise' whereas the other parameters will only fit the 'noise' (thus they will fit a larger part of the model with a lower value of the coefficient and be more likely to have a high value in the minimal norm OLS).
This effect will reduce the amount of overfitting in a minimal norm OLS fitting procedure. The effect is more pronounced if more parameters are available since then it becomes more likely that a larger portion of the 'true model' is being incorporated in the estimate.
Longer part: (I am not sure what to place here since the issue is not entirely clear to me, or I do not know to what precision an answer needs to address the question)
Below is an example that can be easily constructed and demonstrates the problem. The effect is not so strange and examples are easy to make.
I took $p=200$ sin-functions (because they are perpendicular) as variables
created a random model with $n=50$ measurements.
The model is
constructed with only $tm=10$ of the variables so 190 of the 200
variables are creating the possibility to generate over-fitting.
model coefficients are randomly determined
In this example case we observe that there is some over-fitting but the coefficients of the parameters that belong to the true model have a higher value. Thus the R^2 may have some positive value.
The image below (and the code to generate it) demonstrates that the over-fitting is limited. The dots relate to the 200 estimated parameters of the model. The red dots relate to those parameters that are also present in the 'true model', and we see that they have a higher value. Thus, there is some degree of approaching the real model and getting the R^2 above 0.
Note that I used a model with orthogonal variables (the sine-functions). If parameters are correlated then they may occur in the model with relatively very high coefficient and become more penalized in the minimal norm OLS.
Note that the 'orthogonal variables' are not orthogonal when we consider the data. The inner product of $sin(ax)$ and $sin(bx)$ is only zero when we integrate over the entire range of $x$, not when we only have a few samples of $x$. The consequence is that even with zero noise the over-fitting will occur (and the R^2 value seems to depend on many factors, aside from noise; of course there is the relation between $n$ and $p$, but also important is how many variables are in the true model and how many of them are in the fitting model).
library(MASS)
par(mar=c(5.1, 4.1, 9.1, 4.1), xpd=TRUE)
p <- 200
l <- 24000
n <- 50
tm <- 10
# generate i sinus vectors as possible parameters
t <- c(1:l)
xm <- sapply(c(0:(p-1)), FUN = function(x) sin(x*t/l*2*pi))
# generate random model by selecting only tm parameters
sel <- sample(1:p, tm)
coef <- rnorm(tm, 2, 0.5)
# generate random data xv and yv with n samples
xv <- sample(t, n)
yv <- xm[xv, sel] %*% coef + rnorm(n, 0, 0.1)
# generate model
M <- ginv(t(xm[xv,]) %*% xm[xv,])
Bsol <- M %*% t(xm[xv,]) %*% yv
ysol <- xm[xv,] %*% Bsol
# plotting comparison of model with true model
plot(1:p, Bsol, ylim=c(min(Bsol,coef),max(Bsol,coef)))
points(sel, Bsol[sel], col=1, bg=2, pch=21)
points(sel,coef,pch=3,col=2)
title("comparing overfitted model (circles) with true model (crosses)",line=5)
legend(0,max(coef,Bsol)+0.55,c("all 200 estimated coefficients","the 10 estimated coefficients corresponding to true model","true coefficient values"),pch=c(21,21,3),pt.bg=c(0,2,0),col=c(1,1,2))
Truncated beta technique in relation to ridge regression
I have transformed the Python code from Amoeba into R and combined the two graphs together. For each minimal norm OLS estimate with added noise variables, I match a ridge regression estimate with (approximately) the same $l_2$-norm for the $\beta$ vector.
It seems that the truncated noise model does much the same (it only computes a bit more slowly, and is perhaps a bit more often slightly worse).
However, without the truncation the effect is much less strong.
This correspondence between adding parameters and ridge penalty is not necessarily the strongest mechanism behind the absence of
over-fitting. This can be seen especially in the 1000p curve (in the
image of the question) going to almost 0.3 while the other curves,
with different p, don't reach this level, no matter what the ridge
regression parameter is. The additional parameters, in that practical case, are not the same as a shift of the ridge parameter (and I guess that this is because the extra parameters will create a better, more complete, model).
The noise parameters reduce the norm on the one hand (just like ridge regression), but they also introduce additional noise. Benoit Sanchez shows that in the limit, adding many, many noise parameters with a smaller deviation eventually becomes the same as ridge regression (the growing number of noise parameters cancel each other out). But at the same time it requires many more computations (if we increase the deviation of the noise, to allow using fewer parameters and to speed up the computation, the difference becomes larger).
[Figures not shown: Rho = 0.2; Rho = 0.4; Rho = 0.2 with the variance of the noise parameters increased to 2]
code example
library(MASS)    # for ginv()
library(glmnet)  # for the ridge regression fits
# prepare the data
set.seed(42)
n = 80
p = 40
rho = .2
y = rnorm(n,0,1)
X = matrix(rep(y,p), ncol = p)*rho + rnorm(n*p,0,1)*(1-rho^2)
# range of variables to add
ps = c(0, 5, 10, 15, 20, 40, 45, 50, 55, 60, 70, 80, 100, 125, 150, 175, 200, 300, 400, 500, 1000)
#ps = c(0, 5, 10, 15, 20, 40, 60, 80, 100, 150, 200, 300) #,500,1000)
# variables to store output (the sse)
error = matrix(0,nrow=n, ncol=length(ps))
error_t = matrix(0,nrow=n, ncol=length(ps))
error_s = matrix(0,nrow=n, ncol=length(ps))
# adding a progression bar
pb <- txtProgressBar(min = 0, max = n, style = 3)
# training set by leaving out measurement 1, repeat n times
for (fold in 1:n) {
  indtrain = c(1:n)[-fold]
  # ridge regression
  beta_s <- glmnet(X[indtrain,],y[indtrain],alpha=0,lambda = 10^c(seq(-4,2,by=0.01)))$beta
  # calculate l2-norm to compare with adding variables
  l2_bs <- colSums(beta_s^2)
  for (pi in 1:length(ps)) {
    XX = cbind(X, matrix(rnorm(n*ps[pi],0,1), nrow=80))
    XXt = XX[indtrain,]
    if (p+ps[pi] < n) {
      beta = solve(t(XXt) %*% (XXt)) %*% t(XXt) %*% y[indtrain]
    } else {
      beta = ginv(t(XXt) %*% (XXt)) %*% t(XXt) %*% y[indtrain]
    }
    # pick out comparable ridge regression with the same l2-norm
    l2_b <- sum(beta[1:p]^2)
    beta_shrink <- beta_s[,which.min((l2_b-l2_bs)^2)]
    # compute errors
    error[fold, pi]   = y[fold] - XX[fold,1:p] %*% beta[1:p]
    error_t[fold, pi] = y[fold] - XX[fold,] %*% beta[]
    error_s[fold, pi] = y[fold] - XX[fold,1:p] %*% beta_shrink[]
  }
  setTxtProgressBar(pb, fold) # update progression bar
}
# plotting
plot(ps,colSums(error^2)/sum(y^2) ,
ylim = c(0,2),
xlab ="Number of extra predictors",
ylab ="relative sum of squared error")
lines(ps,colSums(error^2)/sum(y^2))
points(ps,colSums(error_t^2)/sum(y^2),col=2)
lines(ps,colSums(error_t^2)/sum(y^2),col=2)
points(ps,colSums(error_s^2)/sum(y^2),col=4)
lines(ps,colSums(error_s^2)/sum(y^2),col=4)
title('Extra pure noise predictors')
legend(200,2,c("complete model with p + extra predictors",
"truncated model with p + extra predictors",
"ridge regression with similar l2-norm",
"idealized model uniform beta with 1/p/rho"),
pch=c(1,1,1,NA), col=c(2,1,4,1),lt=c(1,1,1,2))
# idealized model (if we put all beta to 1/rho/p we should theoretically have a reasonable good model)
error_op <- rep(0,n)
for (fold in 1:n) {
  beta = rep(1/rho/p,p)
  error_op[fold] = y[fold] - X[fold,] %*% beta
}
id <- sum(error_op^2)/sum(y^2)
lines(range(ps),rep(id,2),lty=2) | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
How can (minimal norm) OLS fail to overfit?
In short:
Experimental parameters that correlate with the (unknown) parameters in the true model will be more likely to be estimated with high values in a |
3,195 | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | If you're familiar with linear operators then you may like my answer as most direct path to understanding the phenomenon: why doesn't least norm regression fail outright? The reason is that your problem ($n\ll p$) is the ill posed inverse problem and pseudo-inverse is one of the ways of solving it. Regularization is an improvement though.
This paper is probably the most compact and relevant explanation: Lorenzo Rosasco et al., Learning, Regularization and Ill-Posed Inverse Problems. They set up your regression problem as a learning problem, see their Eq. 3, where the number of parameters exceeds the number of observations:
$$Ax=g_\delta,$$ where $A$ is a linear operator on a Hilbert space and $g_\delta$ is the noisy data.
Obviously, this is an ill-posed inverse problem. So you can solve it with the SVD or the Moore-Penrose inverse, which would indeed render the least-norm solution. Thus it should not be surprising that your least-norm solution is not failing outright.
However, if you follow the paper you can see that ridge regression would be an improvement upon the above. The improvement is really a better behavior of the estimator, since the Moore-Penrose solution is not necessarily bounded.
UPDATE
I realized that I wasn't making it clear that ill-posed problems lead to overfitting. Here's the quote from the paper Gábor A, Banga JR. Robust and efficient parameter estimation in dynamic models of biological systems. BMC Systems Biology. 2015;9:74. doi:10.1186/s12918-015-0219-2:
The ill-conditioning of these problems typically arise from (i) models
with large number of parameters (over-parametrization), (ii)
experimental data scarcity and (iii) significant measurement errors
[19, 40]. As a consequence, we often obtain overfitting of such
kinetic models, i.e. calibrated models with reasonable fits to the
available data but poor capability for generalization (low predictive
value)
So, my argument can be stated as follows:
ill posed problems lead to overfitting
the $n < p$ case is an extremely ill-posed inverse problem
the Moore-Penrose pseudo-inverse (or other tools like the SVD), which you refer to in the question as $X^+$, solves an ill-posed problem
therefore, it takes care of overfitting at least to some extent, and it shouldn't be surprising that it doesn't completely fail, unlike regular OLS would
Again, regularization is a more robust solution still. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit? | If you're familiar with linear operators then you may like my answer as most direct path to understanding the phenomenon: why doesn't least norm regression fail outright? The reason is that your probl | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
If you're familiar with linear operators then you may like my answer as the most direct path to understanding the phenomenon: why doesn't least-norm regression fail outright? The reason is that your problem ($n\ll p$) is an ill-posed inverse problem, and the pseudo-inverse is one of the ways of solving it. Regularization is an improvement though.
This paper is probably the most compact and relevant explanation: Lorenzo Rosasco et al., Learning, Regularization and Ill-Posed Inverse Problems. They set up your regression problem as a learning problem, see their Eq. 3, where the number of parameters exceeds the number of observations:
$$Ax=g_\delta,$$ where $A$ is a linear operator on a Hilbert space and $g_\delta$ is the noisy data.
Obviously, this is an ill-posed inverse problem. So you can solve it with the SVD or the Moore-Penrose inverse, which would indeed render the least-norm solution. Thus it should not be surprising that your least-norm solution is not failing outright.
However, if you follow the paper you can see that ridge regression would be an improvement upon the above. The improvement is really a better behavior of the estimator, since the Moore-Penrose solution is not necessarily bounded.
UPDATE
I realized that I wasn't making it clear that ill-posed problems lead to overfitting. Here's the quote from the paper Gábor A, Banga JR. Robust and efficient parameter estimation in dynamic models of biological systems. BMC Systems Biology. 2015;9:74. doi:10.1186/s12918-015-0219-2:
The ill-conditioning of these problems typically arise from (i) models
with large number of parameters (over-parametrization), (ii)
experimental data scarcity and (iii) significant measurement errors
[19, 40]. As a consequence, we often obtain overfitting of such
kinetic models, i.e. calibrated models with reasonable fits to the
available data but poor capability for generalization (low predictive
value)
So, my argument can be stated as follows:
ill posed problems lead to overfitting
the $n < p$ case is an extremely ill-posed inverse problem
the Moore-Penrose pseudo-inverse (or other tools like the SVD), which you refer to in the question as $X^+$, solves an ill-posed problem
therefore, it takes care of overfitting at least to some extent, and it shouldn't be surprising that it doesn't completely fail, unlike regular OLS would
Again, regularization is a more robust solution still. | Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?
If you're familiar with linear operators then you may like my answer as most direct path to understanding the phenomenon: why doesn't least norm regression fail outright? The reason is that your probl |
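As a small illustration of the point made in the answer above (this sketch is an addition, not part of the original answer; the data and variable names are made up): for an $n \ll p$ problem, the Moore-Penrose pseudo-inverse gives the minimum-norm solution, and a ridge solution with $\lambda > 0$ shrinks the coefficients even further, i.e. it is bounded more strongly:
library(MASS)  # for ginv()
set.seed(1)
n <- 20; p <- 200
X <- matrix(rnorm(n*p), n, p)
beta_true <- c(rnorm(10), rep(0, p-10))  # only 10 non-zero coefficients
y <- X %*% beta_true + rnorm(n, 0, 0.5)
beta_mn <- ginv(X) %*% y  # Moore-Penrose / minimum-norm solution
lambda <- 1
beta_ridge <- solve(t(X) %*% X + lambda*diag(p), t(X) %*% y)  # ridge solution
c(norm_min_norm = sum(beta_mn^2), norm_ridge = sum(beta_ridge^2))  # the ridge norm is smaller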
3,196 | Why is logistic regression a linear classifier? | Logistic regression is linear in the sense that the predictions can be written as
$$ \hat{p} = \frac{1}{1 + e^{-\hat{\mu}}}, \text{ where } \hat{\mu} = \hat{\theta} \cdot x. $$
Thus, the prediction can be written in terms of $\hat{\mu}$, which is a linear function of $x$. (More precisely, the predicted log-odds is a linear function of $x$.)
Conversely, there is no way to summarize the output of a neural network in terms of a linear function of $x$, and that is why neural networks are called non-linear.
Also, for logistic regression, the decision boundary $\{x:\hat{p} = 0.5\}$ is linear: it's the solution to $\hat{\theta} \cdot x = 0$. The decision boundary of a neural network is in general not linear. | Why is logistic regression a linear classifier? | Logistic regression is linear in the sense that the predictions can be written as
$$ \hat{p} = \frac{1}{1 + e^{-\hat{\mu}}}, \text{ where } \hat{\mu} = \hat{\theta} \cdot x. $$
Thus, the prediction ca | Why is logistic regression a linear classifier?
Logistic regression is linear in the sense that the predictions can be written as
$$ \hat{p} = \frac{1}{1 + e^{-\hat{\mu}}}, \text{ where } \hat{\mu} = \hat{\theta} \cdot x. $$
Thus, the prediction can be written in terms of $\hat{\mu}$, which is a linear function of $x$. (More precisely, the predicted log-odds is a linear function of $x$.)
Conversely, there is no way to summarize the output of a neural network in terms of a linear function of $x$, and that is why neural networks are called non-linear.
Also, for logistic regression, the decision boundary $\{x:\hat{p} = 0.5\}$ is linear: it's the solution to $\hat{\theta} \cdot x = 0$. The decision boundary of a neural network is in general not linear. | Why is logistic regression a linear classifier?
Logistic regression is linear in the sense that the predictions can be written as
$$ \hat{p} = \frac{1}{1 + e^{-\hat{\mu}}}, \text{ where } \hat{\mu} = \hat{\theta} \cdot x. $$
Thus, the prediction ca |
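A small R sketch of the point made in the answer above (an addition, not part of the original answer; the simulated data is made up): the fitted log-odds of a logistic regression are linear in $x$, so the decision boundary $\hat{\theta} \cdot x = 0$ is a straight line in two dimensions:
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y <- rbinom(n, 1, plogis(1 + 2*x1 - 3*x2))  # true log-odds are linear in (x1, x2)
fit <- glm(y ~ x1 + x2, family = binomial)
theta <- coef(fit)
# decision boundary: theta[1] + theta[2]*x1 + theta[3]*x2 = 0, a straight line in the (x1, x2) plane
plot(x1, x2, col = ifelse(y == 1, 2, 4), pch = 16)
abline(a = -theta[1]/theta[3], b = -theta[2]/theta[3])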
3,197 | Why is logistic regression a linear classifier? | As Stefan Wagner notes, the decision boundary for a logistic classifier is linear. (The classifier needs the inputs to be linearly separable.) I wanted to expand on the math for this in case it's not obvious.
The decision boundary is the set of x such that
$${1 \over {1 + e^{-{\theta \cdot x}}}} = 0.5$$
A little bit of algebra shows that this is equivalent to
$${1 = e^{-{\theta \cdot x}}}$$
and, taking the natural log of both sides,
$$0 = -\theta \cdot x = -\sum\limits_{i=0}^{n} \theta_i x_i$$
so the decision boundary is linear.
The reason the decision boundary for a neural network is not linear is because there are two layers of sigmoid functions in the neural network: one in each of the output nodes plus an additional sigmoid function to combine and threshold the results of each output node. | Why is logistic regression a linear classifier? | As Stefan Wagner notes, the decision boundary for a logistic classifier is linear. (The classifier needs the inputs to be linearly separable.) I wanted to expand on the math for this in case it's no | Why is logistic regression a linear classifier?
As Stefan Wagner notes, the decision boundary for a logistic classifier is linear. (The classifier needs the inputs to be linearly separable.) I wanted to expand on the math for this in case it's not obvious.
The decision boundary is the set of x such that
$${1 \over {1 + e^{-{\theta \cdot x}}}} = 0.5$$
A little bit of algebra shows that this is equivalent to
$${1 = e^{-{\theta \cdot x}}}$$
and, taking the natural log of both sides,
$$0 = -\theta \cdot x = -\sum\limits_{i=0}^{n} \theta_i x_i$$
so the decision boundary is linear.
The reason the decision boundary for a neural network is not linear is because there are two layers of sigmoid functions in the neural network: one in each of the output nodes plus an additional sigmoid function to combine and threshold the results of each output node. | Why is logistic regression a linear classifier?
As Stefan Wagner notes, the decision boundary for a logistic classifier is linear. (The classifier needs the inputs to be linearly separable.) I wanted to expand on the math for this in case it's no |
3,198 | Why is logistic regression a linear classifier? | If we have two classes, $C_{0}$ and $C_{1}$, then we can express the conditional probability as,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x)}
$$
applying the Bayes' theorem,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x|C_{0})P(C_{0})+P(x|C_{1})P(C_{1})}
= \frac{1}{1+ \exp\left(-\log\frac{P(x|C_{0})}{P(x|C_{1})}-\log \frac{P(C_{0})}{P(C_{1})}\right)}
$$
the denominator is expressed as $1+e^{\omega x}$.
Under which conditions does the first expression reduce to a linear term?
If you consider the exponential family (a canonical form for exponential-family distributions such as the Gaussian or the Poisson),
$$
P(x|C_{i}) = \exp \left(\frac{\theta_{i} x -b(\theta_{i})}{a(\phi)}+c(x,\phi)\right)
$$
then you end up having a linear form,
$$
\log\frac{P(x|C_{0})}{P(x|C_{1})} = \left[ (\theta_{0}-\theta_{1})x - b(\theta_{0})+b(\theta_{1}) \right]/a(\phi)
$$
Notice that we assume that both distributions belong to the same family and have the same dispersion parameters. But, under that assumption, the logistic regression can model the probabilities for the whole family of exponential distributions. | Why is logistic regression a linear classifier? | It we have two classes, $C_{0}$ and $C_{1}$, then we can express the conditional probability as,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x)}
$$
applying the Bayes' theorem,
$$
P(C_{0}|x) = \frac{P | Why is logistic regression a linear classifier?
If we have two classes, $C_{0}$ and $C_{1}$, then we can express the conditional probability as,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x)}
$$
applying the Bayes' theorem,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x|C_{0})P(C_{0})+P(x|C_{1})P(C_{1})}
= \frac{1}{1+ \exp\left(-\log\frac{P(x|C_{0})}{P(x|C_{1})}-\log \frac{P(C_{0})}{P(C_{1})}\right)}
$$
the denominator is expressed as $1+e^{\omega x}$.
Under which conditions does the first expression reduce to a linear term?
If you consider the exponential family (a canonical form for exponential-family distributions such as the Gaussian or the Poisson),
$$
P(x|C_{i}) = \exp \left(\frac{\theta_{i} x -b(\theta_{i})}{a(\phi)}+c(x,\phi)\right)
$$
then you end up having a linear form,
$$
\log\frac{P(x|C_{0})}{P(x|C_{1})} = \left[ (\theta_{0}-\theta_{1})x - b(\theta_{0})+b(\theta_{1}) \right]/a(\phi)
$$
Notice that we assume that both distributions belong to the same family and have the same dispersion parameters. But, under that assumption, the logistic regression can model the probabilities for the whole family of exponential distributions. | Why is logistic regression a linear classifier?
If we have two classes, $C_{0}$ and $C_{1}$, then we can express the conditional probability as,
$$
P(C_{0}|x) = \frac{P(x|C_{0})P(C_{0})}{P(x)}
$$
applying the Bayes' theorem,
$$
P(C_{0}|x) = \frac{P |
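A small simulation of a special case of the derivation above (an addition, not part of the original answer; the numbers are made up): for the Gaussian member of the exponential family with equal dispersion in both classes and equal priors, the theory gives a log-likelihood-ratio slope of $(\mu_0-\mu_1)/\sigma^2$, and logistic regression should recover it:
set.seed(1)
n <- 1e5
mu0 <- 1; mu1 <- -1; sigma <- 2
cls <- rbinom(n, 1, 0.5)  # 1 = class C0, 0 = class C1 (equal priors)
x <- rnorm(n, mean = ifelse(cls == 1, mu0, mu1), sd = sigma)
fit <- glm(cls ~ x, family = binomial)
c(estimated_slope = unname(coef(fit)["x"]),
  theoretical_slope = (mu0 - mu1)/sigma^2)  # both close to 0.5 here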
3,199 | Why is the validation accuracy fluctuating? | If I understand the definition of accuracy correctly, accuracy (% of data points classified correctly) is less cumulative than let's say MSE (mean squared error). That's why you see that your loss is rapidly increasing, while accuracy is fluctuating.
Intuitively, this basically means that some portion of the examples is classified randomly, which produces fluctuations, as the number of correct random guesses always fluctuates (imagine the accuracy when a coin should always return "heads"). Basically, sensitivity to noise (when classification produces a random result) is a common definition of overfitting (see Wikipedia):
In statistics and machine learning, one of the most common tasks is to
fit a "model" to a set of training data, so as to be able to make
reliable predictions on general untrained data. In overfitting, a
statistical model describes random error or noise instead of the
underlying relationship
Another piece of evidence for overfitting is that your loss is increasing. Loss is measured more precisely: it is more sensitive to noisy predictions if it is not squashed by sigmoids/thresholds (which seems to be the case for your loss itself). Intuitively, you can imagine a situation where the network is too sure about an output (when it is wrong), so it gives a value far away from the threshold in case of a random misclassification.
Regarding your case, your model is not properly regularized; possible reasons:
not enough data-points, too much capacity
ordering
no/wrong feature scaling/normalization
learning rate: $\alpha$ is too large, so SGD jumps too far and misses the area near the local minima. This would be an extreme case of "under-fitting" (insensitivity to the data itself), but it might generate (a kind of) "low-frequency" noise on the output by scrambling the data from the input - contrary to the overfitting intuition, it would be like always guessing heads when predicting a coin. As @JanKukacka pointed out, arriving at an area "too close to" a minimum might cause overfitting, so if $\alpha$ is too small it would get sensitive to "high-frequency" noise in your data. $\alpha$ should be somewhere in between.
Possible solutions:
obtain more data-points (or artificially expand the set of existing ones)
play with hyper-parameters (increase/decrease capacity or regularization term for instance)
regularization: try dropout, early-stopping, so on | Why is the validation accuracy fluctuating? | If I understand the definition of accuracy correctly, accuracy (% of data points classified correctly) is less cumulative than let's say MSE (mean squared error). That's why you see that your loss is | Why is the validation accuracy fluctuating?
If I understand the definition of accuracy correctly, accuracy (% of data points classified correctly) is less cumulative than let's say MSE (mean squared error). That's why you see that your loss is rapidly increasing, while accuracy is fluctuating.
Intuitively, this basically means that some portion of the examples is classified randomly, which produces fluctuations, as the number of correct random guesses always fluctuates (imagine the accuracy when a coin should always return "heads"). Basically, sensitivity to noise (when classification produces a random result) is a common definition of overfitting (see Wikipedia):
In statistics and machine learning, one of the most common tasks is to
fit a "model" to a set of training data, so as to be able to make
reliable predictions on general untrained data. In overfitting, a
statistical model describes random error or noise instead of the
underlying relationship
Another piece of evidence for overfitting is that your loss is increasing. Loss is measured more precisely: it is more sensitive to noisy predictions if it is not squashed by sigmoids/thresholds (which seems to be the case for your loss itself). Intuitively, you can imagine a situation where the network is too sure about an output (when it is wrong), so it gives a value far away from the threshold in case of a random misclassification.
Regarding your case, your model is not properly regularized; possible reasons:
not enough data-points, too much capacity
ordering
no/wrong feature scaling/normalization
learning rate: $\alpha$ is too large, so SGD jumps too far and misses the area near the local minima. This would be an extreme case of "under-fitting" (insensitivity to the data itself), but it might generate (a kind of) "low-frequency" noise on the output by scrambling the data from the input - contrary to the overfitting intuition, it would be like always guessing heads when predicting a coin. As @JanKukacka pointed out, arriving at an area "too close to" a minimum might cause overfitting, so if $\alpha$ is too small it would get sensitive to "high-frequency" noise in your data. $\alpha$ should be somewhere in between.
Possible solutions:
obtain more data-points (or artificially expand the set of existing ones)
play with hyper-parameters (increase/decrease capacity or regularization term for instance)
regularization: try dropout, early-stopping, so on | Why is the validation accuracy fluctuating?
If I understand the definition of accuracy correctly, accuracy (% of data points classified correctly) is less cumulative than let's say MSE (mean squared error). That's why you see that your loss is |
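A tiny sketch of the first point in the answer above (an addition, with made-up numbers): accuracy is a thresholded 0/1 metric, so small changes in predicted probabilities near 0.5 make it jump around, while the log-loss moves smoothly:
set.seed(1)
y <- rbinom(20, 1, 0.5)
acc <- function(p, y) mean((p > 0.5) == y)
logloss <- function(p, y) -mean(y*log(p) + (1-y)*log(1-p))
for (eps in c(-0.02, -0.01, 0.01, 0.02)) {
  p <- rep(0.5 + eps, 20)  # all predictions sit just below/above the 0.5 threshold
  cat(sprintf("eps = %+.2f  accuracy = %.2f  logloss = %.4f\n", eps, acc(p, y), logloss(p, y)))
}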
3,200 | Why is the validation accuracy fluctuating? | This question is old but posting this as it hasn't been pointed out yet:
Possibility 1: You're applying some sort of preprocessing (zero-centering, normalizing, etc.) to either your training set or validation set, but not the other.
Possibility 2: If you built from scratch some layers that behave differently during training and inference, your model might be incorrectly implemented (e.g. are the moving mean and moving standard deviation for batch normalization getting updated during training? If using dropout, are the weights scaled properly during inference?). This might be the case if your code implements these things from scratch and does not use TensorFlow/PyTorch's built-in functions.
Possibility 3: Overfitting, as everybody has pointed out. I find the other two options more likely in your specific situation as your validation accuracy is stuck at 50% from epoch 3. Generally, I would be more concerned about overfitting if this was happening in a later stage (unless you have a very specific problem at hand). | Why is the validation accuracy fluctuating? | This question is old but posting this as it hasn't been pointed out yet:
Possibility 1: You're applying some sort of preprocessing (zero meaning, normalizing, etc.) to either your training set or vali | Why is the validation accuracy fluctuating?
This question is old but posting this as it hasn't been pointed out yet:
Possibility 1: You're applying some sort of preprocessing (zero meaning, normalizing, etc.) to either your training set or validation set, but not the other.
Possibility 2: If you built some layers that perform differently during training and inference from scratch, your model might be incorrectly implemented (e.g. are moving mean and moving standard deviation for batch normalization getting updated during training? If using dropout, are weights scaled properly during inference?). This might be the case if your code implements these things from scratch and does not use Tensorflow/Pytorch's builtin functions.
Possibility 3: Overfitting, as everybody has pointed out. I find the other two options more likely in your specific situation as your validation accuracy is stuck at 50% from epoch 3. Generally, I would be more concerned about overfitting if this was happening in a later stage (unless you have a very specific problem at hand). | Why is the validation accuracy fluctuating?
This question is old but posting this as it hasn't been pointed out yet:
Possibility 1: You're applying some sort of preprocessing (zero meaning, normalizing, etc.) to either your training set or vali |
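A minimal sketch of the preprocessing point in Possibility 1 (an addition, not part of the original answer; the data here is made up): compute the scaling statistics on the training set only and re-use them on the validation set, so the two sets are preprocessed identically:
set.seed(1)
X_train <- matrix(rnorm(100*5, mean = 10, sd = 3), 100, 5)
X_valid <- matrix(rnorm(40*5, mean = 10, sd = 3), 40, 5)
mu <- colMeans(X_train)
s  <- apply(X_train, 2, sd)
X_train_scaled <- sweep(sweep(X_train, 2, mu, "-"), 2, s, "/")
X_valid_scaled <- sweep(sweep(X_valid, 2, mu, "-"), 2, s, "/")  # same mu and s, not recomputed on the validation set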