How does saddlepoint approximation work?
The saddlepoint approximation to a probability density function (it works likewise for mass functions, but I will only talk here in terms of densities) is a surprisingly accurate approximation that can be seen as a refinement of the central limit theorem. So it will only work in settings where there is a central limit theorem, but it needs stronger assumptions. We start with the assumption that the moment generating function exists and is twice differentiable. This implies in particular that all moments exist. Let $X$ be a random variable with moment generating function (mgf) $$ \DeclareMathOperator{\E}{\mathbb{E}} M(t) = \E e^{t X} $$ and cumulant generating function (cgf) $K(t)=\log M(t)$ (where $\log$ denotes the natural logarithm). In the development I will follow closely Ronald W. Butler, "Saddlepoint Approximations with Applications" (CUP).

We will develop the saddlepoint approximation using the Laplace approximation to a certain integral. Write $$ e^{K(t)} = \int_{-\infty}^\infty e^{t x} f(x) \; dx =\int_{-\infty}^\infty \exp(tx+\log f(x) ) \; dx \\ = \int_{-\infty}^\infty \exp(-h(t,x)) \; dx $$ where $h(t,x) = -tx - \log f(x)$. Now we Taylor expand $h(t,x)$ in $x$, treating $t$ as a constant. This gives $$ h(t,x)=h(t,x_0) + h'(t,x_0)(x-x_0) +\frac12 h''(t,x_0) (x-x_0)^2 +\dotsm $$ where $'$ denotes differentiation with respect to $x$. Note that $$ h'(t,x)=-t-\frac{\partial}{\partial x}\log f(x) \\ h''(t,x)= -\frac{\partial^2}{\partial x^2} \log f(x) > 0 $$ (the last inequality by assumption, as it is needed for the approximation to work). Let $x_t$ be the solution to $h'(t,x_t)=0$. We will assume that this gives a minimum of $h(t,x)$ as a function of $x$. Using this expansion in the integral and dropping the $\dotsm$ part gives $$ e^{K(t)} \approx \int_{-\infty}^\infty \exp(-h(t,x_t)-\frac12 h''(t,x_t) (x-x_t)^2 ) \; dx \\ = e^{-h(t,x_t)} \int_{-\infty}^\infty e^{-\frac12 h''(t,x_t) (x-x_t)^2} \; dx $$ which is a Gaussian integral, giving $$ e^{K(t)} \approx e^{-h(t,x_t)} \sqrt{\frac{2\pi}{h''(t,x_t)}}. $$ This gives a first version of the saddlepoint approximation as $$ f(x_t) \approx \sqrt{\frac{h''(t,x_t)}{2\pi}} \exp(K(t) -t x_t) \\ \tag{*} \label{*} $$ Note that the approximation has the form of an exponential family.

Now we need to do some work to get this into a more useful form. From $h'(t,x_t)=0$ we get $$ t = -\frac{\partial}{\partial x_t} \log f(x_t). $$ Differentiating this with respect to $x_t$ gives $$ \frac{\partial t}{\partial x_t} = -\frac{\partial^2}{\partial x_t^2} \log f(x_t) > 0$$ (by our assumptions), so the relationship between $t$ and $x_t$ is monotone and $x_t$ is well defined. We need an approximation to $\frac{\partial}{\partial x_t} \log f(x_t)$. To that end, solving \eqref{*} for $\log f(x_t)$ gives $$ \log f(x_t) = K(t) -t x_t -\frac12 \log \frac{2\pi}{-\frac{\partial^2}{\partial x_t^2} \log f(x_t)}. \tag{**} \label{**} $$ Assuming the last term above depends only weakly on $x_t$, so that its derivative with respect to $x_t$ is approximately zero (we will come back to comment on this), we get $$ \frac{\partial \log f(x_t)}{\partial x_t} \approx (K'(t)-x_t) \frac{\partial t}{\partial x_t} - t $$ Up to this approximation we then have that $$ 0 \approx t + \frac{\partial \log f(x_t)}{\partial x_t} = (K'(t)-x_t) \frac{\partial t}{\partial x_t} $$ so that $t$ and $x_t$ must be related through the equation $$ K'(t) - x_t=0, \\ \tag{§} \label{§} $$ which is called the saddlepoint equation.
What we still miss in determining \eqref{*} is $$ h''(t,x_t) = -\frac{\partial^2 \log f(x_t)}{\partial x_t^2} \\ = -\frac{\partial}{\partial x_t} \left(\frac{\partial \log f(x_t)}{\partial x_t} \right) \\ = -\frac{\partial}{\partial x_t}(-t)= \left(\frac{\partial x_t}{\partial t}\right)^{-1} $$ and that we can find by implicit differentiation of the saddlepoint equation $K'(t)=x_t$: $$ \frac{\partial x_t}{\partial t} = K''(t). $$ The result is that (up to our approximation) $$ h''(t,x_t) = \frac1{K''(t)} $$ Putting everything together, we have the final saddlepoint approximation of the density $f(x)$ as $$ f(x_t) \approx e^{K(t)- t x_t} \sqrt{\frac1{2\pi K''(t)}}. $$ To use this in practice, to approximate the density at a specific point $x_t$, we solve the saddlepoint equation for that $x_t$ to find $t$.

The saddlepoint approximation is often stated as an approximation to the density of the mean of $n$ iid observations $X_1, X_2, \dotsc, X_n$. The cumulant generating function of the mean is $n K(t/n)$, and working through the same argument with this cgf gives the saddlepoint approximation for the mean as $$ f(\bar{x}_t) \approx e^{nK(t) - n t \bar{x}_t} \sqrt{\frac{n}{2\pi K''(t)}} $$ where $t$ solves $K'(t)=\bar{x}_t$.

Let us look at a first example. What do we get if we try to approximate the standard normal density $$ f(x)=\frac1{\sqrt{2\pi}} e^{-\frac12 x^2} ? $$ The mgf is $M(t)=\exp(\frac12 t^2)$, so $$ K(t)=\frac12 t^2 \\ K'(t)=t \\ K''(t)=1 $$ so the saddlepoint equation is $t=x_t$ and the saddlepoint approximation gives $$ f(x_t) \approx e^{\frac12 t^2 -t x_t} \sqrt{\frac1{2\pi \cdot 1}} = \frac1{\sqrt{2\pi}} e^{-\frac12 x_t^2} $$ so in this case the approximation is exact.

Let us look at a very different application: bootstrapping in the transform domain. We can do the bootstrap analytically using the saddlepoint approximation to the bootstrap distribution of the mean! Assume we have $X_1, X_2, \dotsc, X_n$ iid from some density $f$ (in the simulated example we will use a unit exponential distribution). From the sample we calculate the empirical moment generating function $$ \hat{M}(t)= \frac1{n} \sum_{i=1}^n e^{t x_i} $$ and then the empirical cgf $\hat{K}(t) = \log \hat{M}(t)$. We need the empirical mgf for the mean, which is $\hat{M}(t/n)^n$, and the empirical cgf for the mean $$ \hat{K}_{\bar{X}}(t) = n \log \hat{M}(t/n) $$ which we use to construct a saddlepoint approximation. In the following, some R code (R version 3.2.3):

set.seed(1234)
x <- rexp(10)

require(Deriv)   ### From CRAN
drule[["sexpmean"]] <- alist(t=sexpmean1(t))   # adding diff rules to Deriv
drule[["sexpmean1"]] <- alist(t=sexpmean2(t))
###
make_ecgf_mean <- function(x) {
    n <- length(x)
    sexpmean  <- function(t) mean(exp(t*x))      # empirical mgf
    sexpmean1 <- function(t) mean(x*exp(t*x))    # ... its first derivative
    sexpmean2 <- function(t) mean(x*x*exp(t*x))  # ... and its second derivative
    emgf  <- function(t) sexpmean(t)
    ecgf  <- function(t) n * log( emgf(t/n) )    # empirical cgf of the mean
    ecgf1 <- Deriv(ecgf)
    ecgf2 <- Deriv(ecgf1)
    return( list(ecgf  = Vectorize(ecgf),
                 ecgf1 = Vectorize(ecgf1),
                 ecgf2 = Vectorize(ecgf2)) )
}

### Now we need a function solving the saddlepoint equation and constructing
### the approximation:
make_spa <- function(cumgenfun_list) {
    K  <- cumgenfun_list[[1]]
    K1 <- cumgenfun_list[[2]]
    K2 <- cumgenfun_list[[3]]
    # local function for solving the saddlepoint equation:
    solve_speq <- function(x) {
        # Returns the saddlepoint!
        uniroot(function(s) K1(s)-x, lower=-100, upper=100, extendInt="yes")$root
    }
    # Function finding fhat for one specific x:
    fhat0 <- function(x) {
        # Solve the saddlepoint equation:
        s <- solve_speq(x)
        # Calculate the saddlepoint density value:
        (1/sqrt(2*pi*K2(s)))*exp(K(s)-s*x)
    }
    # Return a vectorized version:
    return(Vectorize(fhat0))
}  # end make_spa

(I have tried to write this as general code which can easily be modified for other cgfs, but it is still not very robust ...)

Then we use this for a sample of ten independent observations from a unit exponential distribution. We do the usual nonparametric bootstrap "by hand", plot the resulting bootstrap histogram for the mean, and overplot the saddlepoint approximation:

> ECGF <- make_ecgf_mean(x)
> fhat <- make_spa(ECGF)
> fhat
function (x)
{
    args <- lapply(as.list(match.call())[-1L], eval, parent.frame())
    names <- if (is.null(names(args)))
        character(length(args))
    else names(args)
    dovec <- names %in% vectorize.args
    do.call("mapply", c(FUN = FUN, args[dovec], MoreArgs = list(args[!dovec]),
        SIMPLIFY = SIMPLIFY, USE.NAMES = USE.NAMES))
}
<environment: 0x4e5a598>
> boots <- replicate(10000, mean(sample(x, length(x), replace=TRUE)), simplify=TRUE)
> hist(boots, prob=TRUE)
> plot(fhat, from=0.001, to=2, col="red", add=TRUE)

This gives the resulting plot: a bootstrap histogram for the mean with the saddlepoint approximation overlaid in red. The approximation seems to be rather good! We could get an even better approximation by integrating the saddlepoint approximation and rescaling:

> integrate(fhat, lower=0.1, upper=2)
1.026476 with absolute error < 9.7e-07

The cumulative distribution function based on this approximation could now be found by numerical integration, but it is also possible to make a direct saddlepoint approximation for that. That is for another post; this one is long enough.

Finally, some comments left out of the development above. In \eqref{**} we made an approximation essentially ignoring the third term. Why can we do that? One observation is that for the normal density function the left-out term contributes nothing, so there the approximation is exact. Since the saddlepoint approximation is a refinement of the central limit theorem, we are in any case somewhat close to the normal, so this should work well. One can also look at specific examples: for the saddlepoint approximation to the Poisson distribution, the left-out third term becomes a trigamma function, which indeed is rather flat when its argument is not too close to zero. Finally, why the name? It comes from an alternative derivation using complex-analysis techniques. We can look into that later, in another post!
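To complement the empirical-cgf bootstrap above, here is a minimal sketch of my own (not part of the original answer) that applies the formula for the mean with a known cgf. For the unit exponential, $K(t) = -\log(1-t)$, so the saddlepoint equation $K'(t)=\bar{x}$ has the closed-form solution $t = 1 - 1/\bar{x}$, and the approximation can be compared with the exact $\text{Gamma}(n, n)$ density of the mean:

n <- 10
K  <- function(t) -log(1 - t)    # cgf of the unit exponential (t < 1)
K2 <- function(t) 1/(1 - t)^2    # second derivative of the cgf
spa_mean <- function(xbar) {
    t <- 1 - 1/xbar              # closed-form solution of K'(t) = xbar
    exp(n*K(t) - n*t*xbar) * sqrt(n/(2*pi*K2(t)))
}
xbar <- seq(0.2, 2.5, by = 0.01)
plot(xbar, spa_mean(xbar), type = "l", col = "red",
     xlab = expression(bar(x)), ylab = "density")
lines(xbar, dgamma(xbar, shape = n, rate = n))   # exact density of the mean

The two curves should be very close: for the gamma family the saddlepoint approximation is exact up to a constant factor.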
How does saddlepoint approximation work?
Here I expand upon kjetil's answer, focusing on those situations where the cumulant generating function (CGF) is unknown but can be estimated from the data $x_1,\dots,x_n$, where $x_i\in \mathbb{R}^d$. The simplest CGF estimator is probably that of Davison and Hinkley (1988), $$ \hat{K}(\lambda) = \log\left\{\frac{1}{n}\sum_{i=1}^{n}e^{\lambda^Tx_i}\right\}, $$ which is the one used in kjetil's bootstrap example. This estimator has the drawback that the resulting saddlepoint equation $$ \hat{K}'(\lambda) = y $$ can be solved only if $y$, the point at which we want to evaluate the saddlepoint density, falls within the convex hull of $x_1,\dots,x_n$. Wong (1992) and Fasiolo et al. (2016) addressed this problem by proposing two alternative CGF estimators, designed in such a way that the saddlepoint equation can be solved for any $y$. The solution of Fasiolo et al. (2016), called the extended Empirical Saddlepoint Approximation (ESA), is implemented in the esaddle R package, and here I give a couple of examples.

As a simple univariate example, consider using ESA to approximate a $\text{Gamma}(2, 1)$ density.

library("devtools")
install_github("mfasiolo/esaddle")
library("esaddle")

########## Simulating data
x <- rgamma(1000, 2, 1)

# Fixing tuning parameter of ESA
decay <- 0.05

# Evaluating ESA at several points
xSeq <- seq(-2, 8, length.out = 200)
tmp <- dsaddle(y = xSeq, X = x, decay = decay, log = TRUE)

# Plotting true density, ESA and normal approximation
plot(xSeq, exp(tmp$llk), type = 'l', ylab = "Density", xlab = "x")
lines(xSeq, dgamma(xSeq, 2, 1), col = 3)
lines(xSeq, dnorm(xSeq, mean(x), sd(x)), col = 2)
suppressWarnings( rug(x) )
legend("topright", c("ESA", "Truth", "Gaussian"), col = c(1, 3, 2), lty = 1)

This is the fit (ESA in black, the true Gamma density in green, the Gaussian approximation in red). Looking at the rug, it is clear that we evaluated the ESA density outside the range of the data.

A more challenging example is the following warped bivariate Gaussian.

# Function that evaluates the true density
dwarp <- function(x, alpha) {
  d <- length(alpha) + 1
  lik <- dnorm(x[ , 1], log = TRUE)
  tmp <- x[ , 1]^2
  for(ii in 2:d)
    lik <- lik + dnorm(x[ , ii] - alpha[ii-1]*tmp, log = TRUE)
  lik
}

# Function that simulates from the true distribution
rwarp <- function(n = 1, alpha) {
  d <- length(alpha) + 1
  z <- matrix(rnorm(n*d), n, d)
  tmp <- z[ , 1]^2
  for(ii in 2:d)
    z[ , ii] <- z[ , ii] + alpha[ii-1]*tmp
  z
}

set.seed(64141)

# Creating 2d grid
m <- 50
expansion <- 1
x1 <- seq(-2, 3, length=m) * expansion
x2 <- seq(-3, 3, length=m) * expansion
x <- expand.grid(x1, x2)

# Evaluating true density on grid
alpha <- 1
dw <- dwarp(x, alpha = alpha)

# Simulate random variables
X <- rwarp(1000, alpha = alpha)

# Evaluating ESA density
dwa <- dsaddle(as.matrix(x), X, decay = 0.1, log = FALSE)$llk

# Plotting true density
par(mfrow = c(1, 2))
plot(X, pch=".", col=1, ylim = c(min(x2), max(x2)), xlim = c(min(x1), max(x1)),
     main = "True density", xlab = expression(X[1]), ylab = expression(X[2]))
contour(x1, x2, matrix(dw, m, m),
        levels = quantile(as.vector(dw), seq(0.8, 0.995, length.out = 10)),
        col=2, add=T)

# Plotting ESA density
plot(X, pch=".", col=2, ylim = c(min(x2), max(x2)), xlim = c(min(x1), max(x1)),
     main = "ESA density", xlab = expression(X[1]), ylab = expression(X[2]))
contour(x1, x2, matrix(dwa, m, m),
        levels = quantile(as.vector(dwa), seq(0.8, 0.995, length.out = 10)),
        col=2, add=T)

The fit is pretty good.
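To make the convex-hull limitation mentioned above concrete, here is a small univariate sketch of my own (not from the original answer, and not using esaddle): the derivative of the Davison-Hinkley estimator is a weighted average of the observations, so it can never reach a value outside the sample range, and the empirical saddlepoint equation has no solution there.

set.seed(1)
x <- rgamma(50, 2, 1)                        # univariate sample
Khat1 <- function(lambda) {                  # derivative of the empirical cgf
  w <- exp(lambda * x - max(lambda * x))     # normalised weights (avoids overflow)
  sum(x * w) / sum(w)
}
range(x)                                     # the attainable range of Khat1
sapply(c(-50, 50), Khat1)                    # approaches min(x) and max(x)
# Solvable for y inside the data range:
uniroot(function(l) Khat1(l) - mean(x), c(-50, 50))$root
# ... but not for y outside it (uniroot finds no sign change):
try(uniroot(function(l) Khat1(l) - (max(x) + 1), c(-50, 50)))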
How does saddlepoint approximation work?
Thanks to Kjetil's great answer, I am trying to come up with a little example myself, which I would like to discuss because it seems to raise a relevant point. Consider the $\chi^2(m)$ distribution. $K(t)$ and its derivatives are standard results and are reproduced in the functions in the code below.

x <- seq(0.01, 20, by=.1)
m <- 5

K  <- function(t,m) -1/2*m*log(1-2*t)
K1 <- function(t,m) m/(1-2*t)
K2 <- function(t,m) 2*m/(1-2*t)^2

saddlepointapproximation <- function(x) {
    t <- .5 - m/(2*x)   # solves the saddlepoint equation K1(t,m) = x
    exp( K(t,m)-t*x )*sqrt( 1/(2*pi*K2(t,m)) )
}

plot( x, saddlepointapproximation(x), type="l", col="salmon", lwd=2)
lines(x, dchisq(x,df=m), col="lightgreen", lwd=2)

This produces the plot below (saddlepoint approximation in salmon, exact density in light green). The approximation gets the qualitative features of the density right but, as confirmed in Kjetil's comment, it is not a proper density: it lies above the exact density everywhere. Rescaling the approximation as follows gives the almost negligible approximation error plotted below.

scalingconstant <- integrate(saddlepointapproximation, x[1], x[length(x)])$value

approximationerror_unscaled <- dchisq(x,df=m) - saddlepointapproximation(x)
approximationerror_scaled   <- dchisq(x,df=m) - saddlepointapproximation(x) / scalingconstant

plot( x, approximationerror_unscaled, type="l", col="salmon", lwd=2)
lines(x, approximationerror_scaled, col="blue", lwd=2)
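A quick follow-up check of my own (not in the original answer) makes the point sharper: for the $\chi^2$ the saddlepoint approximation is off by a constant factor only, namely Stirling's approximation of $\Gamma(m/2)$ in place of $\Gamma(m/2)$ itself, which is why a single rescaling removes almost all of the error. Continuing with the objects defined above:

ratio <- dchisq(x, df = m) / saddlepointapproximation(x)
range(ratio)    # essentially constant over x
# The constant is Stirling's approximation to gamma(m/2) divided by gamma(m/2):
sqrt(2*pi/(m/2)) * ((m/2)/exp(1))^(m/2) / gamma(m/2)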
Box-Cox like transformation for independent variables?
John Tukey advocated his "three point method" for finding re-expressions of variables to linearize relationships. I will illustrate with an exercise from his book, Exploratory Data Analysis. These are mercury vapor pressure data from an experiment in which temperature was varied and vapor pressure was measured.

pressure <- c(0.0004, 0.0013, 0.006, 0.03, 0.09, 0.28,
              0.8, 1.85, 4.4, 9.2, 18.3, 33.7,
              59, 98, 156, 246, 371, 548, 790)    # mm Hg
temperature <- seq(0, 360, 20)                    # Degrees C

The relation is strongly nonlinear: see the left panel in the illustration. Because this is an exploratory exercise, we expect it to be interactive. The analyst is asked to begin by identifying three "typical" points in the plot: one near each end and one in the middle. I have done so here and marked them in red. (When I first did this exercise long ago, I used a different set of points but arrived at the same results.)

In the three point method, one searches--by brute force or otherwise--for a Box-Cox transformation that, when applied to one of the coordinates--either y or x--will (a) place the typical points approximately on a line and (b) use a "nice" power, usually chosen from a "ladder" of powers that might be interpretable by the analyst. For reasons that will become apparent later, I have extended the Box-Cox family by allowing an "offset" so that the transformations are of the form $$x \to \frac{(x + \alpha)^\lambda - 1}{\lambda}.$$

Here's a quick and dirty R implementation. It first finds an optimal $(\lambda,\alpha)$ solution, then rounds $\lambda$ to the nearest value on the ladder and, subject to that restriction, optimizes $\alpha$ (within reasonable limits). It's incredibly quick because all the calculations are based on just those three typical points out of the original dataset. (You could even do them with pencil and paper, which is exactly what Tukey did.)

box.cox <- function(x, parms=c(1,0)) {
  lambda <- parms[1]
  offset <- parms[2]
  if (lambda==0) log(x+offset) else ((x+offset)^lambda - 1)/lambda
}
threepoint <- function(x, y, ladder=c(1, 1/2, 1/3, 0, -1/2, -1)) {
  # x and y are length-three samples from a dataset.
  dx <- diff(x)
  f <- function(parms) (diff(diff(box.cox(y, parms)) / dx))^2
  fit <- nlm(f, c(1,0))
  parms <- fit$estimate
  lambda <- ladder[which.min(abs(parms[1] - ladder))]
  if (lambda==0) offset = 0 else {
    do <- diff(range(y))
    offset <- optimize(function(x) f(c(lambda, x)),
                       c(max(-min(x), parms[2]-do), parms[2]+do))$minimum
  }
  c(lambda, offset)
}

When the three-point method is applied to the pressure (y) values in the mercury vapor dataset, we obtain the middle panel of the plots.

data <- cbind(temperature, pressure)
n <- dim(data)[1]
i3 <- c(2, floor((n+1)/2), n-1)
parms <- threepoint(temperature[i3], pressure[i3])
y <- box.cox(pressure, parms)

In this case, parms turns out to equal $(0,0)$: the method elects to log-transform the pressure. We have reached a point analogous to the context of the question: for whatever reason (usually to stabilize residual variance), we have re-expressed the dependent variable, but we find that the relation with an independent variable is nonlinear. So now we turn to re-expressing the independent variable in an effort to linearize the relation.

This is done in the same way, merely reversing the roles of x and y:

parms <- threepoint(y[i3], temperature[i3])
x <- box.cox(temperature, parms)

The values of parms for the independent variable (temperature) are found to be $(-1, 253.75)$: in other words, we should express the temperature as degrees Celsius above $-254$C and use its reciprocal (the $-1$ power). (For technical reasons, the Box-Cox transformation further adds $1$ to the result.) The resulting relation is shown in the right panel.

By now, anybody with the least science background has recognized that the data are "telling" us to use absolute temperatures--where the offset is $273$ instead of $254$--because those will be physically meaningful. (When the last plot is re-drawn using an offset of $273$ instead of $254$, there is little visible change. A physicist would then label the x-axis with $1/(1-x)$: that is, reciprocal absolute temperature.) This is a nice example of how statistical exploration needs to interact with understanding of the subject of investigation. In fact, reciprocal absolute temperatures show up all the time in physical laws. Consequently, using simple EDA methods alone to explore this century-old, simple dataset, we have rediscovered the Clausius-Clapeyron relation: the logarithm of the vapor pressure is a linear function of the reciprocal absolute temperature. Not only that: we have a not very bad estimate of absolute zero ($-254$ degrees C); from the slope of the righthand plot we can calculate the specific enthalpy of vaporization; and--as it turns out--a careful analysis of the residuals identifies an outlier (the value at a temperature of $0$ degrees C), shows us how the enthalpy of vaporization varies (very slightly) with temperature (thereby violating the Ideal Gas Law), and ultimately can give us accurate information about the effective radius of the mercury gas molecules! All that from 19 data points and some basic skills in EDA.
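For readers reproducing this in R, here is a short addendum of my own (not part of the original exercise) that draws the linearized relation corresponding to the right panel, reusing box.cox and the parameters found above:

y <- log(pressure)                          # lambda = 0 chosen for pressure
x <- box.cox(temperature, c(-1, 253.75))    # lambda = -1, offset ~ 254 for temperature
plot(x, y, xlab = "Re-expressed temperature", ylab = "Log pressure")
abline(lm(y ~ x), col = "red")              # the relation is now very nearly linear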
Box-Cox like transformation for independent variables?
Take a look at the slides on "Regression diagnostics" by John Fox (available online, complete with references), which briefly discuss the issue of transforming nonlinearity. They cover Tukey's "bulging rule" for selecting power transformations (addressed by the accepted answer), but also mention the Box-Cox and Yeo-Johnson families of transformations. See Section 3.6 of the slides. For a more formal take by the same author, see J. Fox, Applied Regression Analysis and Generalized Linear Models, Second Edition (Sage, 2008).

As for actual R packages that help with this, absolutely take a look at the car package, authored by J. Fox and S. Weisberg. This package accompanies J. Fox and S. Weisberg, An R Companion to Applied Regression, Second Edition (Sage, 2011), another must-read. Using that package you can start off from basicPower() (simple power transformations), bcPower() (Box-Cox transformations) and yjPower() (Yeo-Johnson transformations). There is also powerTransform():

The function powerTransform is used to estimate normalizing transformations of a univariate or a multivariate random variable.

Check both books for more details on the theory behind these transformations and on computational approaches.
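As a brief, hedged sketch (my own example with simulated data, not from the original answer), the basic workflow with those car functions might look like this:

library(car)
set.seed(1)
u <- rgamma(200, shape = 2, rate = 1)   # a positive, right-skewed variable
pt <- powerTransform(u)                 # ML estimate of the Box-Cox lambda
summary(pt)                             # includes tests of lambda = 0 and lambda = 1
u_bc <- bcPower(u, coef(pt))            # apply the estimated Box-Cox transformation
u_yj <- yjPower(u - 2, lambda = 0.5)    # Yeo-Johnson also handles zero/negative values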
Box-Cox like transformation for independent variables?
There are many advantages to making estimation of covariate transformations a formal part of the estimation process. This recognizes the number of parameters involved, produces good confidence interval coverage, and preserves the type I error. Regression splines are among the best approaches, and splines work with zero and negative values of $X$, unlike logarithmic approaches.
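Since this answer is brief, here is a minimal illustrative sketch (my own simulated example, not the author's code) of a regression-spline fit in R, with a covariate that takes zero and negative values, where a log transform would fail:

library(splines)
set.seed(42)
x <- runif(200, -2, 2)                  # covariate with negative, zero-adjacent values
y <- sin(2 * x) + rnorm(200, sd = 0.3)  # nonlinear truth
fit <- lm(y ~ ns(x, df = 4))            # natural cubic spline basis, coefficients
                                        # estimated as part of the model
plot(x, y)
xs <- seq(-2, 2, length.out = 200)
lines(xs, predict(fit, newdata = data.frame(x = xs)), col = "red", lwd = 2)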
Box-Cox like transformation for independent variables?
The method of fractional polynomials due to Royston and Altman (1994) https://doi.org/10.2307/2986270 (paywalled) may be just such a method for handling the case of an optimal power law relating $X$ to $Y$. In this case, you believe that $Y$ (or an appropriate transformation) is truly normally distributed about the conditional mean response given $X$ (one or more predictors), but the exact functional form of the mean is unknown: $$ Y = p(X) + \epsilon $$ with $p(X)$ an unknown mapping of a linear combination of $X$, and $\epsilon \sim \mathcal{N}(0, \sigma^2)$. Note that in this particular case it does not suffice to transform the $Y$ variable according to Box-Cox, because the error term loses normality and becomes correlated with the response.

Polynomials give us good local approximations to smooth functions; that is why we are often interested (as with splines, or the Box-Cox transformation) in expressing the functional relationship according to a power law. The advantage of fractional polynomials over splines is that the procedure gives an estimate of the optimal power law that may be used generally and with relative simplicity. I consider this an optimal process when the point of the analysis is to communicate a simple approximation, as would be the case in physics when we might estimate simple functional relationships (like stopping distance varying with the square of the pre-braking speed). Splines, however, are far more flexible, and the extra degrees of freedom are well spent in terms of predictive ability, allowing for breakpoints and higher-order terms where necessary.

Fractional polynomials are an interesting and underutilized procedure. Stata implements them in fracpoly; in R, the mfp package provides an implementation.
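As a rough illustration of the idea only (a hand-rolled sketch with simulated data, not the Stata fracpoly or R mfp implementation), a first-degree fractional polynomial can be selected by profiling the residual sum of squares over the standard power set:

set.seed(7)
x <- runif(200, 0.5, 10)
y <- 3 * sqrt(x) + rnorm(200, sd = 0.5)        # the truth uses the power 0.5
powers <- c(-2, -1, -0.5, 0, 0.5, 1, 2, 3)     # standard FP1 power set (0 means log)
rss <- sapply(powers, function(p) {
  xp <- if (p == 0) log(x) else x^p
  sum(resid(lm(y ~ xp))^2)
})
data.frame(power = powers, RSS = rss)
powers[which.min(rss)]                         # should select 0.5 here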
Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
Traditionally, the null hypothesis is a point value. (It is typically $0$, but can in fact be any point value.) The alternative hypothesis is that the true value is any value other than the null value. Because a continuous variable (such as a mean difference) can take on a value which is indefinitely close to the null value but still not quite equal, and thus make the null hypothesis false, a traditional point null hypothesis cannot be proven.

Imagine your null hypothesis is $0$, and the mean difference you observe is $0.01$. Is it reasonable to assume the null hypothesis is true? You don't know yet; it would be helpful to know what our confidence interval looks like. Let's say that your 95% confidence interval is $(-4.99,\ 5.01)$. Now, should we conclude that the true value is $0$? I would not feel comfortable saying that, because the CI is very wide and there are many large non-zero values that we might reasonably suspect are consistent with our data. So let's say we gather much, much more data, and now our observed mean difference is $0.01$, but the 95% CI is $(0.005,\ 0.015)$. The observed mean difference has stayed the same (which would be amazing if it really happened), but the confidence interval now excludes the null value. Of course, this is just a thought experiment, but it should make the basic ideas clear. We can never prove that the true value is any particular point value; we can only (possibly) disprove that it is some point value. In statistical hypothesis testing, the fact that the p-value is $> 0.05$ (and that the 95% CI includes zero) means that we are not sure if the null hypothesis is true.

As for your concrete case, you cannot construct a test where the alternative hypothesis is that the mean difference is $0$ and the null hypothesis is anything other than zero. This violates the logic of hypothesis testing. It is perfectly reasonable that it is your substantive, scientific hypothesis, but it cannot be your alternative hypothesis in a hypothesis-testing situation.

So what can you do? In this situation, you use equivalence testing. (You might want to read through some of our threads on this topic by clicking on the equivalence tag.) The typical strategy is to use the two one-sided tests (TOST) approach. Very briefly, you select an interval within which you would consider that the true mean difference might as well be $0$ for all you care, then you perform a one-sided test to determine whether the true value is less than the upper bound of that interval, and another one-sided test to see whether it is greater than the lower bound. If both of these tests are significant, then you have rejected the hypothesis that the true value is outside the interval you care about. If one (or both) are non-significant, you fail to reject the hypothesis that the true value is outside the interval.

For example, suppose anything within the interval $(-0.02,\ 0.02)$ is so close to zero that you think it is essentially the same as zero for your purposes, so you use that as your substantive hypothesis. Now imagine that you get the first result described above. Although $0.01$ falls within that interval, you would not be able to reject the null hypothesis on either one-sided t-test, so you would fail to reject the null hypothesis. On the other hand, imagine that you got the second result described above. Now you find that the observed value falls within the designated interval, and it can be shown to be both significantly less than the upper bound and significantly greater than the lower bound, so you can reject the null.
(It is worth noting that you can reject both the hypothesis that the true value is $0$, and the hypothesis that the true value lies outside of the interval $(-0.02,\ 0.02)$, which may seem perplexing at first, but is fully consistent with the logic of hypothesis testing.)
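To make the two one-sided tests concrete, here is a hedged sketch of my own (simulated data, made-up sample size) using the equivalence bounds $(-0.02,\ 0.02)$ from the example above:

set.seed(123)
d <- rnorm(20000, mean = 0.01, sd = 0.5)                   # observed differences
lower <- -0.02; upper <- 0.02                              # equivalence bounds
t_upper <- t.test(d, mu = upper, alternative = "less")     # H0: true mean >= upper
t_lower <- t.test(d, mu = lower, alternative = "greater")  # H0: true mean <= lower
c(p_upper = t_upper$p.value, p_lower = t_lower$p.value)
# Equivalence is declared only if BOTH p-values fall below alpha;
# with this much data both should, mirroring the second scenario above.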
Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
Consider the case where the null hypothesis is that a coin is two-headed, i.e. the probability of heads is 1. Now the data are the result of flipping the coin a single time and seeing heads. This gives a p-value of 1.0, which is greater than every reasonable alpha. Does this mean the coin is two-headed? It could be, but it could also be a fair coin and we saw heads by chance (which would happen 50% of the time with a fair coin). So the high p-value in this case says that the observed data are perfectly consistent with the null, but they are also consistent with other possibilities. Just as a "Not Guilty" verdict in court can mean the defendant is innocent, it can also mean the defendant is guilty but there was not enough evidence. The same holds for a null hypothesis we fail to reject: the null could be true, or we may simply not have enough evidence to reject it even though it is false.
4,110
Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
Absence of evidence is not evidence of absence (the title of an Altman and Bland paper in the BMJ). P-values only give us evidence when we consider them significant; otherwise, they tell us nothing. Hence, absence of evidence. In other words: we don't know, and more data may help.
4,111
Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
The null hypothesis, $H_0$, is usually taken to be the thing you have reason to assume. Oftentimes it is the "current state of knowledge" that you wish to show is statistically unlikely. The usual set-up for hypothesis testing is to minimize Type I error, that is, to minimize the chance that we reject the null hypothesis in favor of the alternative $H_1$ even though $H_0$ is true. This is the error we choose to minimize first because we don't want to overturn common knowledge when that common knowledge is indeed true. You should always design your test bearing in mind that $H_0$ should be what you expect. If we have two samples we expect to be identically distributed, then our null hypothesis is that the samples are the same. If we have two samples that we would expect to be (wildly) different, our null hypothesis is that they are different.
4,112
Practical hyperparameter optimization: Random vs. grid search
Random search has a 95% probability of finding a combination of parameters within the best-performing 5% of the search space with only 60 iterations. Also, compared to other methods, it doesn't bog down in local optima.

Check this great blog post at Dato by Alice Zheng, specifically the section Hyperparameter tuning algorithms.

I love movies where the underdog wins, and I love machine learning papers where simple solutions are shown to be surprisingly effective. This is the storyline of “Random search for hyperparameter optimization” by Bergstra and Bengio. [...] Random search wasn’t taken very seriously before. This is because it doesn’t search over all the grid points, so it cannot possibly beat the optimum found by grid search. But then came along Bergstra and Bengio. They showed that, in surprisingly many instances, random search performs about as well as grid search. All in all, trying 60 random points sampled from the grid seems to be good enough.

In hindsight, there is a simple probabilistic explanation for the result: for any distribution over a sample space with a finite maximum, the maximum of 60 random observations lies within the top 5% of the true maximum, with 95% probability. That may sound complicated, but it’s not. Imagine the 5% interval around the true maximum. Now imagine that we sample points from this space and see if any of them lands within that interval. Each random draw has a 5% chance of landing in that interval; if we draw $n$ points independently, then the probability that all of them miss the desired interval is $\left(1-0.05\right)^{n}$. So the probability that at least one of them succeeds in hitting the interval is 1 minus that quantity. We want at least a 0.95 probability of success. To figure out the number of draws we need, just solve for $n$ in the equation: $$1-\left(1-0.05\right)^{n}>0.95$$ We get $n\geqslant 60$. Ta-da!

The moral of the story is: if the close-to-optimal region of hyperparameters occupies at least 5% of the grid surface, then random search with 60 trials will find that region with high probability. You can improve that chance with a higher number of trials.

All in all, if you have too many parameters to tune, grid search may become infeasible. That's when I try random search.
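As a sanity check on that arithmetic, here is a minimal R sketch (not part of the original answer), assuming the "good" region covers 5% of the search space and the draws are independent:

p_hit <- 0.05                              # chance one random draw lands in the top-5% region
1 - (1 - p_hit)^60                         # chance at least one of 60 draws lands there: about 0.954
ceiling(log(1 - 0.95) / log(1 - p_hit))    # smallest n giving at least 0.95: 59, which the post rounds up to 60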
4,113
Practical hyperparameter optimization: Random vs. grid search
Look again at the graphic from the paper (Figure 1). Say that you have two parameters: with a 3x3 grid search you check only three different values of each parameter (three rows and three columns on the plot on the left), while with random search you check nine (!) different values of each parameter (nine distinct rows and nine distinct columns). Obviously, random search, by chance, may not be representative of the full range of each parameter, but as the sample size grows, the chances of this get smaller and smaller.
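That counting argument is easy to reproduce in code. The following is a toy R sketch (the parameter names and ranges are made up, not taken from the paper): nine evaluations arranged as a 3x3 grid probe only three distinct values of each parameter, while nine random draws almost surely probe nine distinct values of each.

set.seed(42)
grid_design   <- expand.grid(param1 = c(0.1, 0.5, 0.9),
                             param2 = c(0.1, 0.5, 0.9))   # 9 evaluations, 3 distinct values per parameter
random_design <- data.frame(param1 = runif(9),
                            param2 = runif(9))            # 9 evaluations, 9 distinct values per parameter
length(unique(grid_design$param1))    # 3
length(unique(random_design$param1))  # 9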
4,114
Practical hyperparameter optimization: Random vs. grid search
If you can write a function to do grid search, it's probably even easier to write a function to do random search, because you don't have to pre-specify and store the grid up front. Setting that aside, methods like LIPO, particle swarm optimization and Bayesian optimization make intelligent choices about which hyperparameters are likely to be better, so if you need to keep the number of models fit to an absolute minimum (say, because it's expensive to fit a model), these tools are promising options. They're also global optimizers, so they have a high probability of locating the global maximum. Some of the acquisition functions of BO methods have provable regret bounds, which make them even more attractive. More information can be found in these questions: What are some of the disavantage of bayesian hyper parameter optimization? Optimization when Cost Function Slow to Evaluate
4,115
Practical hyperparameter optimization: Random vs. grid search
By default, random search and grid search are terrible algorithms unless one of the following holds:

- Your problem does not have a global structure, e.g., the problem is multimodal and the number of local optima is huge.
- Your problem is noisy, i.e., evaluating the same solution twice leads to different objective function values.
- The budget of objective function calls is very small compared to the number of variables, e.g., smaller than 1x or 10x.
- The number of variables is very small, e.g., smaller than 5 (in practice).
- A few other conditions.

Most people claim that random search is better than grid search. However, note that when the total number of function evaluations is predefined, grid search will lead to a good coverage of the search space, which is not worse than random search with the same budget, and the difference between the two is negligible, if any. If you start to add some assumptions, e.g., that your problem is separable or almost separable, then you will find arguments to support grid search. Overall, both are comparably terrible except in very few cases. Thus, there is no need to distinguish between them unless some additional assumptions about the problem are considered.
4,116
Practical hyperparameter optimization: Random vs. grid search
Finding a spot within 95% of the maximum in a 2D topography with only one maximum narrows the search area in successive steps to 25%, 6.25%, 1.5625%, ... of the original, or about 16 observations, so long as each batch of four observations correctly determines which quadrant the maximum (extremum) is in. A 1D topography narrows as 100/2 = 50, 25, 12.5, 6.25, 3.125, or 5*2 observations. I guess people searching for multiple far-flung local maxima use a big initial grid search and then regression or some other prediction method. A grid of 60 observations should have one observation within 100/60 = 1.66% of the extremum. See the Wikipedia article on global optimization. I still think there is always a better method than randomness.
4,117
Practical hyperparameter optimization: Random vs. grid search
As Tim showed, you can test more parameter values with random search than with grid search. This is especially efficient if some of the parameters you test end up not being impactful for your problem, like the 'Unimportant parameter' in Fig. 1 of the article. I did a post about hyperparameter tuning where I explain the differences between grid search, random search and Bayesian optimization. You can check it out (and let me know if it was useful; feedback is appreciated!).
4,118
Confidence interval for Bernoulli sampling
If the average, $\hat{p}$, is not near $1$ or $0$, and the sample size $n$ is sufficiently large (i.e. $n\hat{p}>5$ and $n(1-\hat{p})>5$), the confidence interval can be estimated by a normal distribution and constructed thus:

$$\hat{p}\pm z_{1-\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$

If $\hat{p} = 0$ and $n>30$, the $95\%$ confidence interval is approximately $[0,\frac{3}{n}]$ (Jovanovic and Levy, 1997); the opposite holds for $\hat{p}=1$. The reference also discusses using $n+1$ and $n+b$ (the latter to incorporate prior information).

Otherwise, Wikipedia provides a good overview and points to Agresti and Coull (1998) and Ross (2003) for details about alternatives to the normal approximation, such as the Wilson score, Clopper-Pearson, or Agresti-Coull intervals. These can be more accurate when the above assumptions about $n$ and $\hat{p}$ are not met.

R provides the functions binconf {Hmisc} and binom.confint {binom}, which can be used in the following manner:

set.seed(0)
p <- runif(1, 0, 1)
X <- sample(c(0, 1), size = 100, replace = TRUE, prob = c(1 - p, p))
library(Hmisc)
binconf(sum(X), length(X), alpha = 0.05, method = 'all')
library(binom)
binom.confint(sum(X), length(X), conf.level = 0.95, method = 'all')

Agresti, Alan; Coull, Brent A. (1998). "Approximate is better than 'exact' for interval estimation of binomial proportions". The American Statistician 52: 119–126.
Jovanovic, B. D. and Levy, P. S. (1997). "A look at the rule of three". The American Statistician 51(2): 137–139.
Ross, T. D. (2003). "Accurate confidence intervals for binomial proportion and Poisson rate estimation". Computers in Biology and Medicine 33: 509–531.
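For a concrete sense of the two approximations described above, here is a small hand-rolled R sketch (the counts are made up, and this is not a substitute for binconf/binom.confint):

# Normal-approximation (Wald) interval for x successes in n trials
wald_ci <- function(x, n, conf = 0.95) {
  p_hat <- x / n
  z     <- qnorm(1 - (1 - conf) / 2)
  p_hat + c(-1, 1) * z * sqrt(p_hat * (1 - p_hat) / n)
}
wald_ci(x = 40, n = 100)   # hypothetical: 40 successes out of 100

# "Rule of three": with 0 successes in n > 30 trials, an approximate 95% CI is [0, 3/n]
c(0, 3 / 100)              # here [0, 0.03] for n = 100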
4,119
Confidence interval for Bernoulli sampling
Maximum likelihood confidence intervals

The normal approximation to the Bernoulli sample relies on having a relatively large sample size and sample proportions far from the tails. The maximum likelihood estimate focuses on the log-transformed odds, and this provides non-symmetric, efficient intervals for $p$ that should be used instead. Define the log-odds as $\hat{\beta}_0 = \log(\hat{p}/(1-\hat{p}))$. A $1-\alpha$ CI for $\beta_0$ is given by:

$$\text{CI}(\beta_0)_\alpha = \hat{\beta}_0 \pm \mathcal{Z}_{\alpha/2} \sqrt{1/(n\hat{p}(1-\hat{p}))}$$

And this is back-transformed into a (non-symmetric) interval for $p$ with:

$$\text{CI}(p)_\alpha = 1/(1+\exp(-\text{CI}(\beta_0)_\alpha))$$

This CI has the added benefit that proportions lie in the interval between 0 and 1, and the CI is always narrower than the normal interval while being of the correct level. You can get this very easily in R by specifying:

set.seed(123)
y <- rbinom(100, 1, 0.35)
plogis(confint(glm(y ~ 1, family = binomial)))
    2.5 %    97.5 %
0.2795322 0.4670450

Exact binomial confidence intervals

In small samples, the normal approximation to the MLE--while better than the normal approximation to the sample proportion--may not be reliable. That is okay. $Y = n\hat{p}$ can be taken to follow a binomial$(n,p)$ density. Bounds for $\hat{p}$ can be found by taking the 2.5th and 97.5th percentiles of this distribution:

$$\text{CI}_\alpha = (F^{-1}_{\hat{p}}(0.025),\ F^{-1}_{\hat{p}}(0.975))$$

Rarely feasible by hand, an exact binomial confidence interval for $p$ can be obtained using computational methods:

qbinom(p = c(0.025, 0.975), size = length(y), prob = mean(y)) / length(y)
[1] 0.28 0.47

Median unbiased confidence intervals

And if $\hat{p}$ is 0 or 1 exactly, a median unbiased estimator can be used to obtain non-singular interval estimates based on the median unbiased probability function. You can trivially take the lower bound of the all-0 case as 0, WLOG. The upper bound is any proportion $p_{1-\alpha/2}$ that satisfies:

$$p_{1-\alpha/2} : P(Y = 0)/2 + P(Y > y) > 0.975$$

This is also a computational routine:

set.seed(12345)
y <- rbinom(100, 1, 0.01)  ## all 0
cil <- 0
mupfun <- function(p) {
  ## for y = 0 successes out of n = 100 trials
  0.5 * dbinom(0, 100, p) + pbinom(1, 100, p, lower.tail = F) - 0.975
}
ciu <- uniroot(mupfun, c(0, 1))$root
c(cil, ciu)
[1] 0.00000000 0.05357998  ## includes the 0.01 actual probability

The last two methods are implemented in the epitools package in R.
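As a footnote to the maximum likelihood interval above, the formula can also be written out by hand. The sketch below uses made-up counts; note as a caveat that confint() on a glm uses profile likelihood rather than the Wald formula stated above, so this hand computation should be close to, but not identical to, the plogis(confint(...)) output:

logit_ci <- function(x, n, conf = 0.95) {
  p_hat <- x / n
  beta0 <- log(p_hat / (1 - p_hat))             # log-odds
  se    <- sqrt(1 / (n * p_hat * (1 - p_hat)))  # standard error of the log-odds
  z     <- qnorm(1 - (1 - conf) / 2)
  plogis(beta0 + c(-1, 1) * z * se)             # back-transform to the probability scale
}
logit_ci(x = 35, n = 100)   # hypothetical data: 35 successes out of 100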
4,120
Confidence interval for Bernoulli sampling
The Wilson score interval performs well in general for inference for the binomial probability parameter. The performance of various confidence intervals is examined in Brown, Cai and DasGupta (2001) and the Wilson score interval performs well compared to other intervals; in particular, it performs better than the Wald interval. The Wilson score interval has a number of useful consistency properties and it can be extended to handle both finite and infinite populations (see O'Neill 2021 for details).

The Wilson score interval can be implemented for finite or infinite populations in R using the CONF.prop function in the stat.extend package. In the code below we give a simple example of a 95% confidence interval for the probability parameter for an infinite superpopulation. To obtain a confidence interval for the proportion quantity in a finite population (either the full population or the unsampled part) you can add inputs for the population size N and the logical value unsampled.

#Set parameters
n <- 40
p <- 0.15

#Generate some binary data
set.seed(1)
x <- sample(c(0,1), size = n, replace = TRUE, prob = c(1-p, p))

#Generate a 95% confidence interval for probability parameter
library(stat.extend)
CONF.prop(x, alpha = 0.05)

        Confidence Interval (CI)
95.00% CI for proportion parameter for infinite population
Interval uses 40 binary data points from data x with sample proportion = 0.1500
[0.0706118771732036, 0.290723243664897]
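For reference, the Wilson score interval is also easy to compute by hand. The sketch below (not part of the original answer) writes out the usual formula; for the simulated data above (6 successes in 40 trials, sample proportion 0.15) it should reproduce the CONF.prop interval up to rounding:

wilson_ci <- function(x, n, conf = 0.95) {
  p_hat  <- x / n
  z      <- qnorm(1 - (1 - conf) / 2)
  centre <- (p_hat + z^2 / (2 * n)) / (1 + z^2 / n)
  half   <- z * sqrt(p_hat * (1 - p_hat) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
  centre + c(-1, 1) * half
}
wilson_ci(x = 6, n = 40)   # approximately (0.0706, 0.2907)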
4,121
Confidence interval for Bernoulli sampling
Suppose $X_1,...,X_n$ is a sample of successes and failures from a Bernoulli population with probability of success $p$, and we are asked to find a 75% confidence interval for $p$.

One solution is to invert the CDF of a binomial distribution. Since $Y=\sum X_i\sim \text{Binomial}(n,p)$, we can define a $100(1-\alpha)\%$ confidence interval for $p$ as $$\bigg\{p:F_Y(y;n,p)\ge\alpha/2 \text{ and } 1-F_Y(y-1;n,p)\ge\alpha/2 \bigg\}$$ where $F_Y(y;n,p)=P(Y\le y)$ is the CDF of $Y$.

An approximate solution is to note that $\text{Var}[\sum X_i]=np(1-p)\implies\text{Var}[\bar{X}]=p(1-p)/n$, where $\bar{X}=\frac{1}{n}\sum X_i$, and construct a $100(1-\alpha)\%$ confidence interval for $p$ by inverting a Wald test $$\bigg\{p: \Phi\bigg(\frac{\bar{x}-p}{\hat{\text{se}}}\bigg)\ge \alpha/2 \text{ and } 1-\Phi\bigg(\frac{\bar{x}-p}{\hat{\text{se}}}\bigg)\ge \alpha/2\bigg\} = \bar{x}\pm z_{1-\alpha/2}\,\hat{\text{se}}$$ where $\hat{\text{se}}=\sqrt{\bar{x}(1-\bar{x})/n}$, $\Phi(\cdot)$ is the CDF of a standard normal distribution, and $z_{1-\alpha/2}$ is the $(1-\alpha/2)^{th}$ percentile of the standard normal distribution. The Wald test approximates the binomial sampling distribution of $\sum X_i$, or equivalently the distribution of $\bar{X}$, using a normal distribution and expresses it in terms of the standard normal CDF.

For example, if you observed $y=\sum x_i=6$ out of $n=10$ trials, then $\bar{x}=0.6$. The $75\%$ confidence interval from inverting the binomial CDF is (0.37, 0.8). If $p$ is truly $0.37$, we would observe a result as or more extreme than $\bar{x}=0.6$ only $12.5\%$ of the time. Likewise, if $p$ is truly $0.8$, we would observe a result as or more extreme than $\bar{x}=0.6$ only $12.5\%$ of the time. In this way we are $75\%$ confident that the unknown fixed true $p$ is in $(0.37, 0.8)$. Intervals constructed in this manner will cover the unknown fixed true $p$ $75\%$ of the time in repeated sampling, and ours is one such sample. The confidence interval from inverting a Wald test is (0.42, 0.78).

[Figure: inverting the binomial sampling distribution CDF (PMF depicted)]
[Figure: inverting the normal CDF approximating the sampling distribution of $\bar{X}$ (normal density depicted)]
[Figure: one-sided p-value from inverting the binomial CDF]

The confidence curve in the last figure shows p-values and confidence intervals of all levels from inverting the binomial CDF.
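The quoted intervals are easy to reproduce. The R sketch below (not part of the original answer) relies on the fact that inverting the binomial CDF as defined above yields the Clopper-Pearson-type interval that binom.test() reports:

binom.test(x = 6, n = 10, conf.level = 0.75)$conf.int   # roughly (0.37, 0.80), as quoted above

# Wald interval from the normal approximation
x_bar <- 0.6
se    <- sqrt(x_bar * (1 - x_bar) / 10)
x_bar + c(-1, 1) * qnorm(1 - 0.25 / 2) * se             # roughly (0.42, 0.78), as quoted above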
4,122
Introduction to statistics for mathematicians
As you said, it's not necessarily the case that a mathematician will want a rigorous book. Maybe the goal is to get some intuition for the concepts quickly, and then fill in the details. I recommend two books from CMU professors, both published by Springer: "All of Statistics" by Larry Wasserman is quick and informal. "Theory of Statistics" by Mark Schervish is rigorous and relatively complete; it has decision theory, finite-sample theory, some asymptotics, and sequential analysis. Added 7/28/10: There is one additional reference that is orthogonal to the other two: very rigorous, focused on learning theory, and short. It's by Smale (Stephen Smale!) and Cucker, "On the Mathematical Foundations of Learning". Not an easy read, but the best crash course on the theory.
4,123
Introduction to statistics for mathematicians
Mathematical Methods of Statistics by Harald Cramér is really great if you're coming to statistics from the mathematical side. It's a bit dated, but still relevant for all the basic mathematical statistics. Two other noteworthy books come to mind for inference and estimation theory:

- Theory of Point Estimation, E. L. Lehmann
- Theory of Statistics, Schervish

Not entirely sure if this is what you wanted, but you can check out the reviews and see if they meet your expectations.
4,124
Introduction to statistics for mathematicians
I loved the Freedman, Pisani, and Purves Statistics text because it is extremely non-mathematical. As a mathematician, you will find it to be such a clear guide to the statistical concepts that you will be able to develop all the mathematical theory as an exercise: that's a rewarding thing to do. (The first edition of this text was my initiation to statistics after I completed a PhD in pure mathematics, and I still enjoy re-reading it.)
4,125
Introduction to statistics for mathematicians
I think you should take a look at the similar post on MathOverflow. My answer to that post was Asymptotic Statistics by Van der Vaart.
4,126
Introduction to statistics for mathematicians
You will find many applications of Mathematical Statistics in 'Mathematical Statistics and Data Analysis' by John A. Rice. The 'Application Index' lists all applications discussed in the text.
4,127
Introduction to statistics for mathematicians
For you I would suggest: Introduction to the Mathematical and Statistical Foundations of Econometrics by Herman J. Bierens, CUP. The word "Introduction" in the title is a sick joke for most PhD econometrics students. Markov Chain Monte Carlo by Dani Gamerman, Chapman & Hall is also concise.
4,128
Apply word embeddings to entire document, to get a feature vector
One simple technique that seems to work reasonably well for short texts (e.g., a sentence or a tweet) is to compute the vector for each word in the document, and then aggregate them using the coordinate-wise mean, min, or max. Based on results in one recent paper, it seems that using the min and the max works reasonably well. It's not optimal, but it's simple and about as good as, or better than, other simple techniques.

In particular, if the vectors for the $n$ words in the document are $v^1,v^2,\dots,v^n \in \mathbb{R}^d$, then you compute $\min(v^1,\dots,v^n)$ and $\max(v^1,\dots,v^n)$. Here we're taking the coordinate-wise minimum, i.e., the minimum is a vector $u$ such that $u_i = \min(v^1_i, \dots, v^n_i)$, and similarly for the max. The feature vector is the concatenation of these two vectors, so we obtain a feature vector in $\mathbb{R}^{2d}$. I don't know if this is better or worse than a bag-of-words representation, but for short documents I suspect it might perform better than bag-of-words, and it allows using pre-trained word embeddings.

TL;DR: Surprisingly, the concatenation of the min and max works reasonably well.

Reference: Representation learning for very short texts using weighted word embedding aggregation. Cedric De Boom, Steven Van Canneyt, Thomas Demeester, Bart Dhoedt. Pattern Recognition Letters; arxiv:1607.00570. abstract, pdf. See especially Tables 1 and 2.

Credits: Thanks to @user115202 for bringing this paper to my attention.
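Concretely, the aggregation is only a couple of lines. Here is a toy R sketch (the "embeddings" are random numbers standing in for real word vectors, so only the shapes matter):

set.seed(1)
n <- 4; d <- 5                                     # 4 words, 5-dimensional embeddings
word_vectors <- matrix(rnorm(n * d), nrow = n)     # one row per word

doc_min     <- apply(word_vectors, 2, min)         # coordinate-wise minimum, length d
doc_max     <- apply(word_vectors, 2, max)         # coordinate-wise maximum, length d
doc_feature <- c(doc_min, doc_max)                 # fixed-length feature vector in R^(2d)
length(doc_feature)                                # 2 * d = 10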
4,129
Apply word embeddings to entire document, to get a feature vector
You can use doc2vec, similarly to word2vec, with a pre-trained model from a large corpus, then use something like .infer_vector() in gensim to construct a document vector. The doc2vec training doesn't necessarily need to come from the training set.

Another method is to use an RNN, CNN or feed-forward network to classify; this effectively combines the word vectors into a document vector.

You can also combine sparse features (words) with dense (word vector) features to complement each other, so your feature matrix would be a concatenation of the sparse bag-of-words matrix with the average of word vectors. https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html

Another interesting method is to use an algorithm similar to word2vec, but instead of predicting a target word, you predict a target label. This directly tunes the word vectors to the classification task. http://arxiv.org/pdf/1607.01759v2.pdf

For more ad hoc methods, you might try weighting the words differently depending on syntax. For example, you can weight verbs more heavily than determiners.
4,130
Apply word embeddings to entire document, to get a feature vector
If you are working with English text and want pre-trained word embeddings to begin with, then please see this: https://code.google.com/archive/p/word2vec/ This is the original C version of word2vec. Along with this release, the authors also released a model trained on 100 billion words taken from Google News articles (see the subsection titled "Pre-trained word and phrase vectors").

In my opinion and experience of working with word embeddings, for document classification a model like doc2vec (with CBOW) works much better than bag of words. Since you have a small corpus, I suggest you initialize your word embedding matrix with the pre-trained embeddings mentioned above, then train the paragraph vector in the doc2vec code. If you are comfortable with Python, you can check out the gensim version of it, which is very easy to modify.

Also check this paper that details the inner workings of word2vec/doc2vec: http://arxiv.org/abs/1411.2738. This will make understanding the gensim code very easy.
4,131
Apply word embeddings to entire document, to get a feature vector
I'm surprised no one has mentioned it, but another best practice is to pad the sentences to a fixed size, initialize an embedding layer with the weights of word2vec, and feed it into an LSTM. So it is basically what the OP mentioned here, but including padding for handling the different lengths: "Concatenating the vectors for all the words doesn't work, because it doesn't lead to a fixed-size feature vector." Example Consider the following sentence (taken from the Toxic Comment Classification Challenge): "Explanation Why the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27" First, we clean the sentence: "explanation why the edits made under my username hardcore metallica fan were reverted ? they weren ' t vandalisms , just closure on some gas after i voted at new york dolls fac . and please don ' t remove the template from the talk page since i ' m retired now . ipaddress" Next, we encode its words as integers: 776 92 2 161 153 212 44 754 4597 9964 1290 104 399 34 57 2292 10 29 14515 3 66 6964 22 75 2730 173 5 2952 47 136 1298 16686 2615 1 8 67 73 10 29 290 2 398 45 2 60 43 164 5 10 81 4030 107 1 216 And finally, if we perform padding with a length of 200, it looks like this: array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 776, 92, 2, 161, 153, 212, 44, 754, 4597, 9964, 1290, 104, 399, 34, 57, 2292, 10, 29, 14515, 3, 66, 6964, 22, 75, 2730, 173, 5, 2952, 47, 136, 1298, 16686, 2615, 1, 8, 67, 73, 10, 29, 290, 2, 398, 45, 2, 60, 43, 164, 5, 10, 81, 4030, 107, 1, 216], dtype=int32) We force all sentences to have a maximum of 200 words: we fill with zeros if they have fewer, and cut the words that come later if they have more. Next, we initialize an embedding layer with the weights of word2vec; here's an example using Keras: model.add(Embedding(nb_words, WV_DIM, weights=[wv_matrix], input_length=MAX_SEQUENCE_LENGTH, trainable=False)) wv_matrix is a matrix in $\mathbb{R}^{n \times d}$ (number of unique words by embedding dimension). And finally we add an LSTM layer after that, for example: embedded_sequences = SpatialDropout1D(0.2)(embedded_sequences) x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(embedded_sequences) References The full implementation of the above example is available in this Kaggle kernel. The code comes from this post.
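The padding step itself is a one-liner in Keras; a minimal sketch with toy sequences (variable names are illustrative):

    from keras.preprocessing.sequence import pad_sequences   # tensorflow.keras.preprocessing.sequence in newer versions

    encoded = [[776, 92, 2, 161], [10, 29, 14515]]            # toy integer-encoded sentences
    padded = pad_sequences(encoded, maxlen=200, padding="pre", truncating="post")
    print(padded.shape)   # (2, 200): zeros are prepended, words beyond 200 would be cut from the end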
4,132
Apply word embeddings to entire document, to get a feature vector
I would suggest a window-based approach. Given a window size of 1024 tokens and a pre-defined number of windows (say 10), you embed each window and concatenate the resulting window vectors. This is similar to your solution 2, but using window vectors rather than word vectors. With this approach you can also try other embeddings such as BERT, which have a limited input length in tokens. If you use Word2Vec or other word vectors, consider a linear combination of the word vectors with word weightings such as TF-IDF; in my experience this outperforms unweighted word vectors.
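A sketch of the TF-IDF-weighted combination of word vectors, assuming scikit-learn and a dictionary word_vectors mapping each word to a numpy array (the corpus, the dictionary and the dimensionality are placeholders, not part of the original answer):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["first toy document", "second toy document about windows"]
    tfidf = TfidfVectorizer()
    weights = tfidf.fit_transform(docs)          # sparse matrix, shape (n_docs, n_vocab)
    vocab = tfidf.get_feature_names_out()

    dim = 300                                    # dimensionality of the word vectors

    def doc_vector(doc_idx):
        row = weights[doc_idx].toarray().ravel()
        vec, total = np.zeros(dim), 0.0
        for j, word in enumerate(vocab):
            if row[j] > 0 and word in word_vectors:   # word_vectors: assumed dict word -> np.array of length dim
                vec += row[j] * word_vectors[word]
                total += row[j]
        return vec / total if total > 0 else vec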
4,133
What is the difference between a Normal and a Gaussian Distribution
Wikipedia is right. The Gaussian is the same as the normal. Wikipedia can usually be trusted on this sort of question.
4,134
What is the difference between a Normal and a Gaussian Distribution
At http://mathworld.wolfram.com/NormalDistribution.html there is a mention of the standard normal distribution, which looks like the one you were describing, with mean = 0 and std = 1. But the normal distribution is the same as the Gaussian; any normal distribution can be converted to the standard normal distribution by using the variable z = (x - mean)/std.
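A quick numerical illustration of that transformation using scipy (the numbers are arbitrary):

    from scipy.stats import norm

    mean, std, x = 10.0, 2.0, 13.0
    z = (x - mean) / std                       # z = 1.5

    # The same probability from the general normal and from the standard normal:
    print(norm.cdf(x, loc=mean, scale=std))    # 0.9331...
    print(norm.cdf(z))                         # 0.9331...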
4,135
How does LSTM prevent the vanishing gradient problem?
The vanishing gradient is best explained in the one-dimensional case. The multi-dimensional case is more complicated but essentially analogous. You can review it in this excellent paper [1]. Assume we have a hidden state $h_t$ at time step $t$. If we make things simple and remove biases and inputs, we have $$h_t = \sigma(w h_{t-1}).$$ Then you can show that \begin{align} \frac{\partial h_{t'}}{\partial h_t} &= \prod_{k=1}^{t' - t} w \sigma'(w h_{t'-k})\\ &= \underbrace{w^{t' - t}}_{!!!}\prod_{k=1}^{t' - t} \sigma'(w h_{t'-k}) \end{align} The factor marked with !!! is the crucial one. If the weight is not equal to 1, it will either decay to zero exponentially fast in $t'-t$, or grow exponentially fast. In LSTMs, you have the cell state $s_t$. The derivative there is of the form $$\frac{\partial s_{t'}}{\partial s_t} = \prod_{k=1}^{t' - t} \sigma(v_{t+k}).$$ Here $v_t$ is the input to the forget gate. As you can see, there is no exponentially fast decaying factor involved. Consequently, there is at least one path where the gradient does not vanish. For the complete derivation, see [2]. [1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." ICML (3) 28 (2013): 1310-1318. [2] Bayer, Justin Simon. Learning Sequence Representations. PhD thesis, Technische Universität München, 2015.
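A small numerical illustration of the two products (values chosen purely for illustration): the vanilla-RNN factor $w\,\sigma'(\cdot)$ is typically well below 1, while the LSTM cell-state path multiplies forget-gate activations that can sit close to 1.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    T = 50                                       # number of steps between t and t'
    w, h = 1.0, 0.5                              # toy weight and hidden state for the vanilla RNN
    rnn_factor = w * sigmoid(w * h) * (1 - sigmoid(w * h))   # w * sigma'(w h)
    print(rnn_factor ** T)                       # about 3e-32: the gradient vanishes

    v = 5.0                                      # toy forget-gate pre-activation in the LSTM
    lstm_factor = sigmoid(v)                     # about 0.9933
    print(lstm_factor ** T)                      # about 0.71: the gradient survives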
4,136
How does LSTM prevent the vanishing gradient problem?
I'd like to add some detail to the accepted answer, because I think it's a bit more nuanced and the nuance may not be obvious to someone first learning about RNNs. For the vanilla RNN, $$\frac{\partial h_{t'}}{\partial h_{t}} = \prod _{k=1} ^{t'-t} w \sigma'(w h_{t'-k}).$$ For the LSTM, $$\frac{\partial s_{t'}}{\partial s_{t}} = \prod _{k=1} ^{t'-t} \sigma(v_{t+k}).$$ A natural question to ask is: don't both products have a sigmoid term that, when multiplied together $t'-t$ times, can vanish? The answer is yes, which is why the LSTM suffers from vanishing gradients as well, but not nearly as much as the vanilla RNN. The difference is that for the vanilla RNN the gradient decays with $w \sigma'(\cdot)$, while for the LSTM the gradient decays with $\sigma (\cdot)$. For the LSTM, there is a set of weights which can be learned such that $$\sigma (\cdot) \approx 1.$$ Suppose $v_{t+k} = wx$ for some weight $w$ and input $x$. Then the neural network can learn a large $w$ to prevent gradients from vanishing. E.g. in the 1D case, if $x=1$ and $w=10$, then $v_{t+k}=10$ and the decay factor is $\sigma (\cdot) = 0.99995$, so the gradient dies as $$(0.99995)^{t'-t}.$$ For the vanilla RNN, there is no set of weights which can be learned such that $$w \sigma'(w h_{t'-k}) \approx 1. $$ E.g. in the 1D case, suppose $h_{t'-k}=1$. The function $w \sigma'(w\cdot 1)$ achieves a maximum of $0.224$ at $w=1.5434$. This means the gradient will decay as $$(0.224)^{t'-t}.$$
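Both numbers are easy to verify numerically; a quick sketch:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # LSTM-style decay factor for v = w * x = 10:
    print(sigmoid(10.0))                       # 0.9999546..., i.e. about 0.99995

    # Maximum of w * sigma'(w * 1) for the vanilla RNN:
    w = np.linspace(0.01, 10.0, 100000)
    f = w * sigmoid(w) * (1.0 - sigmoid(w))    # sigma'(z) = sigma(z) (1 - sigma(z))
    print(f.max(), w[np.argmax(f)])            # about 0.224, attained near w = 1.54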
4,137
How does LSTM prevent the vanishing gradient problem?
http://www.felixgers.de/papers/phd.pdf Please refer to sections 2.2 and 3.2.2, where the truncated error is explained. The error is not propagated once it leaks out of the cell memory (i.e. through a closed/activated input gate); instead, the gate weights are updated based on the error only for that time instant, and the error is then set to zero during further backpropagation. This is a kind of hack, but the justification is that the error flow along the gates decays over time anyway.
4,138
How does LSTM prevent the vanishing gradient problem?
The picture of the LSTM block in Greff et al. (2015) describes a variant that the authors call vanilla LSTM. It's a bit different from the original definition in Hochreiter & Schmidhuber (1997). The original definition did not include the forget gate or the peephole connections. The term Constant Error Carousel (CEC) was used in the original paper to denote the recurrent connection of the cell state. Consider the original definition, where the cell state is changed only by addition, when the input gate opens. The gradient of the cell state with respect to the cell state at an earlier time step is then one, so the error is carried unchanged. Error may still enter the CEC through the output gate and the activation function. The activation function reduces the magnitude of the error a little before it is added to the CEC. The CEC is the only place where the error can flow unchanged. Again, when the input gate opens, the error exits through the input gate, the activation function, and an affine transformation, reducing the magnitude of the error. Thus the error is reduced when it is backpropagated through an LSTM layer, but only when it enters and exits the CEC. The important thing is that it does not change in the CEC, no matter how long a distance it travels. This solves the problem of the basic RNN that every time step applies an affine transformation and a nonlinearity, meaning that the longer the time distance between the input and the output, the smaller the error gets.
4,139
Neural networks vs support vector machines: are the second definitely superior?
It is a matter of trade-offs. SVMs are in right now, NNs used to be in. You'll find a rising number of papers that claim Random Forests, Probabilistic Graphical Models or Nonparametric Bayesian methods are in. Someone should publish a forecasting model in the Annals of Improbable Research on what models will be considered hip. Having said that, for many famously difficult supervised problems the best performing single models are some type of NN, some type of SVM, or a problem-specific stochastic gradient descent method implemented using signal processing methods. Pros of NN: They are extremely flexible in the types of data they can support. NNs do a decent job at learning the important features from basically any data structure, without having to manually derive features. NNs still benefit from feature engineering, e.g. you should have an area feature if you have a length and a width; the model will perform better for the same computational effort. Most of supervised machine learning requires you to have your data structured as an observations-by-features matrix, with the labels as a vector of length equal to the number of observations. This restriction is not necessary with NNs. There is fantastic work with structured SVMs, but it is unlikely they will ever be as flexible as NNs. Pros of SVM: Fewer hyperparameters. Generally SVMs require less grid-searching to get a reasonably accurate model. An SVM with an RBF kernel usually performs quite well. Global optimum guaranteed. Cons of NN and SVM: For most purposes they are both black boxes. There is some research on interpreting SVMs, but I doubt it will ever be as intuitive as GLMs. This is a serious problem in some problem domains. If you're going to accept a black box, then you can usually squeeze out quite a bit more accuracy by bagging/stacking/boosting many models with different trade-offs. Random forests are attractive because they can produce out-of-bag predictions (leave-one-out predictions) with no extra effort, they are very interpretable, they have a good bias-variance trade-off (great for bagging models) and they are relatively robust to selection bias. Stupidly simple to write a parallel implementation of. Probabilistic graphical models are attractive because they can incorporate domain-specific knowledge directly into the model and are interpretable in this regard. Nonparametric (or really extremely parametric) Bayesian methods are attractive because they produce confidence intervals directly. They perform very well on small sample sizes and very well on large sample sizes. Stupidly simple to write a linear algebra implementation of.
4,140
Neural networks vs support vector machines: are the second definitely superior?
The answer to your question is, in my experience, "no": SVMs are not definitely superior, and which works best depends on the nature of the dataset at hand and on the relative skill of the operator with each set of tools. In general SVMs are good because the training algorithm is efficient and there is a regularisation parameter, which forces you to think about regularisation and over-fitting. However, there are datasets where MLPs give much better performance than SVMs (as they are allowed to decide their own internal representation, rather than having it pre-specified by the kernel function). A good implementation of MLPs (e.g. NETLAB) with regularisation or early stopping or architecture selection (or, better still, all three) can often give very good results and be reproducible (at least in terms of performance). Model selection is the major problem with SVMs: choosing the kernel and optimising the kernel and regularisation parameters can often lead to severe over-fitting if you over-optimise the model selection criterion. While the theory underpinning the SVM is a comfort, most of it only applies to a fixed kernel, so as soon as you try to optimise the kernel parameters it no longer applies (for instance, the optimisation problem to be solved in tuning the kernel is generally non-convex and may well have local minima).
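To make the model-selection point concrete, here is a sketch of a typical grid search over the RBF kernel width and the regularisation parameter with scikit-learn (the data are synthetic placeholders; the warning above still applies, i.e. the cross-validation score used for tuning is an optimistically biased estimate of performance):

    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)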
4,141
Neural networks vs support vector machines: are the second definitely superior?
I will just try to explain my opinion, which appears to be shared by most of my friends. I have the following concerns about NNs that do not apply to SVMs at all: In a classic NN, the number of parameters is enormously high. Let's say you have vectors of length 100 that you want to classify into two classes. One hidden layer of the same size as the input layer already leads to more than 10,000 free parameters. Just imagine how badly you can overfit (how easy it is to fall into a local minimum in such a space), how many training points you will need to prevent that, and how much time you will need to train. Usually you have to be a real expert to choose the topology at a glance. That means that if you want to get good results you have to perform lots of experiments. That's why it's easier to use an SVM and say that you couldn't get similar results with an NN. Usually NN results are not reproducible. Even if you run your NN training twice, you will probably get different results due to the randomness of the learning algorithm. Usually you have no interpretation of the results at all. That is a small concern, but still. That doesn't mean that you should not use NNs, just that you should use them carefully. For example, convolutional NNs can be extremely good for image processing, and other deep NNs have proved to be good for other problems as well. Hope this helps.
4,142
Neural networks vs support vector machines: are the second definitely superior?
I am using neural networks for most problems. The point is that in most cases it's more about the experience of the user than about the model. Here are some reasons why I like NNs. They are flexible. I can throw whatever loss I want at them: hinge loss, squared loss, cross entropy, you name it. As long as it is differentiable, I can even design a loss which fits my needs exactly. They can be treated probabilistically: Bayesian neural networks, variational Bayes, MLE/MAP, everything is there. (But in some cases more difficult.) They are fast. Most MLPs are two matrix multiplications with one nonlinearity applied component-wise in between. Beat that with an SVM. I will go through your other points step by step. Have a strong founding theory I'd say NNs are equally strong in that case, since you can train them in a probabilistic framework. That makes the use of priors and a Bayesian treatment (e.g. with variational techniques or approximations) possible. Reach the global optimum due to quadratic programming For one set of hyperparameters. However, the search for good hyperparameters is non-convex, and you won't know whether you found the global optimum either. Have no issue for choosing a proper number of parameters With SVMs, you have to select hyperparameters as well. Needs less memory to store the predictive model You need to store the support vectors. SVMs will not in general be cheaper to store than MLPs; it depends on the case. Yield more readable results and a geometrical interpretation The top layer of an MLP is a logistic regression in the case of classification. Thus, there is a geometrical interpretation (separating hyperplane) and a probabilistic interpretation as well.
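The "two matrix multiplications and one nonlinearity" remark, written out as a minimal numpy sketch (toy shapes, forward pass only):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 100))            # batch of 32 inputs with 100 features
    W1 = 0.1 * rng.normal(size=(100, 50))     # input -> hidden weights
    W2 = 0.1 * rng.normal(size=(50, 2))       # hidden -> output weights

    hidden = np.tanh(X @ W1)                  # first matmul, component-wise nonlinearity
    logits = hidden @ W2                      # second matmul; a softmax/logistic layer sits on top
    print(logits.shape)                       # (32, 2)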
4,143
Neural networks vs support vector machines: are the second definitely superior?
In some ways these two broad categories of machine learning techniques are related. Though not perfect, two papers I have found helpful in showing the similarities in these techniques are below Ronan Collobert and Samy Bengio. 2004. Links between perceptrons, MLPs and SVMs. In Proceedings of the twenty-first international conference on Machine learning (ICML '04). ACM, New York, NY, USA, 23-. DOI: https://doi.org/10.1145/1015330.1015415 and Andras, Peter. (2002). The Equivalence of Support Vector Machine and Regularization Neural Networks. Neural Processing Letters. 15. 97-104. 10.1023/A:1015292818897.
4,144
Examples where method of moments can beat maximum likelihood in small samples?
This may be considered... cheating, but the OLS estimator is a MoM estimator. Consider a standard linear regression specification (with $K$ stochastic regressors, so magnitudes are conditional on the regressor matrix), and a sample of size $n$. Denote by $s^2$ the OLS estimator of the variance $\sigma^2$ of the error term. It is unbiased, so $$ MSE(s^2) = \operatorname {Var}(s^2) = \frac {2\sigma^4}{n-K} $$ Consider now the MLE of $\sigma^2$. It is $$\hat \sigma^2_{ML} = \frac {n-K}{n}s^2$$ It is biased. Its MSE is $$MSE (\hat \sigma^2_{ML}) = \operatorname {Var}(\hat \sigma^2_{ML}) + \Big[E(\hat \sigma^2_{ML})-\sigma^2\Big]^2$$ Expressing the MLE in terms of the OLS estimator and using the expression for the OLS estimator's variance, we obtain $$MSE (\hat \sigma^2_{ML}) = \left(\frac {n-K}{n}\right)^2\frac {2\sigma^4}{n-K} + \left(\frac {K}{n}\right)^2\sigma^4$$ $$\Rightarrow MSE (\hat \sigma^2_{ML}) = \frac {2(n-K)+K^2}{n^2}\sigma^4$$ We want the conditions (if they exist) under which $$MSE (\hat \sigma^2_{ML}) > MSE (s^2) \Rightarrow \frac {2(n-K)+K^2}{n^2} > \frac {2}{n-K}$$ $$\Rightarrow 2(n-K)^2+K^2(n-K)> 2n^2$$ $$ 2n^2 -4nK + 2K^2 +nK^2 - K^3 > 2n^2 $$ Simplifying, we obtain $$ -4n + 2K +nK - K^2 > 0 \Rightarrow K^2 - (n+2)K + 4n < 0 $$ Is it feasible for this quadratic in $K$ to take negative values? We need its discriminant to be positive. We have $$\Delta_K = (n+2)^2 -16n = n^2 + 4n + 4 - 16n = n^2 -12n + 4$$ which is another quadratic, in $n$ this time. Its discriminant is $$\Delta_n = 12^2 - 4^2 = 8\cdot 16$$ so $$n_1,n_2 = \frac {12\pm \sqrt{8\cdot 16}}{2} = 6 \pm 4\sqrt2 \Rightarrow n_1,n_2 = \{1, 12\}$$ taking into account the fact that $n$ is an integer. If $n$ is inside this interval we have $\Delta_K <0$, and the quadratic in $K$ always takes positive values, so we cannot obtain the required inequality. So: we need a sample size larger than 12. Given this, the roots of the $K$-quadratic are $$K_1, K_2 = \frac {(n+2)\pm \sqrt{n^2 -12n + 4}}{2} = \frac n2 +1 \pm \sqrt{\left(\frac n2\right)^2 +1 -3n}$$ Overall: for sample size $n>12$ and number of regressors $K$ such that $\lceil K_1\rceil <K<\lfloor K_2\rfloor $ we have $$MSE (\hat \sigma^2_{ML}) > MSE (s^2)$$ For example, if $n=50$ one finds that the number of regressors must satisfy $5<K<47$ for the inequality to hold. It is interesting that for small numbers of regressors the MLE is better in the MSE sense. ADDENDUM The equation for the roots of the $K$-quadratic can be written $$K_1, K_2 = \left(\frac n2 +1\right) \pm \sqrt{\left(\frac n2 +1\right)^2 -4n}$$ which, by a quick look, I think implies that the lower root will always be $5$ (taking into account the "integer-value" restriction), so the MLE will be MSE-efficient when the regressors are up to $5$, for any (finite) sample size.
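A quick numerical check of the $n=50$ example (a sketch; the quadratic and its roots are exactly the ones derived above):

    import numpy as np

    n = 50
    a = n / 2 + 1                              # the midpoint (n/2 + 1) from the root formula
    disc = a * a - 4 * n                       # (n/2 + 1)^2 - 4n
    K1, K2 = a - np.sqrt(disc), a + np.sqrt(disc)
    print(K1, K2)                              # roughly 4.18 and 47.82

    # Sign of K^2 - (n+2)K + 4n for a few K; negative means MSE(MLE) > MSE(s^2).
    for K in (2, 10, 30, 48):
        print(K, K**2 - (n + 2) * K + 4 * n)   # positive at K=2 and K=48, negative in between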
4,145
Examples where method of moments can beat maximum likelihood in small samples?
"In this article, we consider a new parametrization of the two-parameter Inverse Gaussian distribution. We find the estimators for parameters of the Inverse Gaussian distribution by the method of moments and the method of maximum likelihood. Then, we compare the efficiency of the estimators for the two methods based on their bias and mean square error (MSE). For this we fix values of parameters, run simulations, and report MSE and bias for estimates obtained by both methods. The conclusion is that when sample sizes are 10, the method of moments tends to be more efficient than the maximum likelihood method for estimates of both parameters (lambda and theta)...." read more Nowadays one cannot (or should not) trust everything published, but the paper's last page appears promising. I hope this addresses your note added retrospectively.
4,146
Examples where method of moments can beat maximum likelihood in small samples?
I found one: For the asymmetric exponential power distribution $$f(x) = \frac{\alpha}{\sigma\Gamma(\frac{1}{\alpha})} \frac{\kappa}{1+\kappa^2}\exp\left(-\frac{\kappa^\alpha}{\sigma^\alpha}[(x-\theta)^+]^\alpha -\frac{1}{\kappa^\alpha \sigma^\alpha}[(x-\theta)^-]^\alpha\right)\,,\quad \alpha,\sigma,\kappa>0, \text{ and } x,\theta\in \mathbb R$$ the simulation results of Delicado and Goria (2008) suggest that for some of the parameters at the smaller sample sizes, method of moments can outperform MLE; for example in the known-$\theta$ case at sample size 10, when estimating $\sigma$, the MSE of MoM is smaller than for ML. Delicado and Goria (2008), A small sample comparison of maximum likelihood, moments and L-moments methods for the asymmetric exponential power distribution, Journal Computational Statistics & Data Analysis Volume 52 Issue 3, January, pp 1661-1673 (also see http://www-eio.upc.es/~delicado/my-public-files/LmomAEP.pdf)
4,147
Examples where method of moments can beat maximum likelihood in small samples?
The method of moments (MM) can beat the maximum likelihood (ML) approach when it is possible to specify only some population moments. If the distribution is ill-defined, the ML estimators will not be consistent. Assuming finite moments and i.i.d. observations, the MM can provide good estimators with nice asymptotic properties. Example: Let $X_1, \ldots, X_n$ be an i.i.d. sample of $X \sim f$, where $f: \mathbb{R} \to \mathbb{R}_+$ is an unknown probability density function. Define $\nu_k = \int_{\mathbb{R}} x^k f(x)dx$, the $k$th moment, and suppose the interest is in estimating the fourth moment $\nu_4$. Let $\bar{X_k} = \frac{1}{n}\sum_{i=1}^n X_i^k$; then, assuming that $\nu_8 < \infty$, the central limit theorem guarantees that $$ \sqrt{n}(\bar{X_4} - \nu_4) \stackrel{d}{\to} N(0, \nu_8 - \nu_4^2), $$ where "$\stackrel{d}{\to}$" means "converges in distribution to". Moreover, by Slutsky's theorem, $$ \frac{\sqrt{n}(\bar{X_4} - \nu_4)}{\sqrt{\bar{X_8} - \bar{X_4}^2}} \stackrel{d}{\to} N(0, 1) $$ since $\bar{X_8} - \bar{X_4}^2 \stackrel{P}{\to} \nu_8 - \nu_4^2$ (convergence in probability). That is, we can draw (approximate) inferences for $\nu_4$ by using the moment approach (for large samples); we just have to make some assumptions on the population moments of interest. Here, the maximum likelihood estimator cannot even be defined without knowing the shape of $f$. A simulation study: Patriota et al. (2009) conducted some simulation studies to verify the rejection rates of hypothesis tests in an errors-in-variables model. The results suggest that the MM approach produces error rates under the null hypothesis closer to the nominal level than the ML one for small samples. Historical note: The method of moments was proposed by K. Pearson in 1894, "Contributions to the Mathematical Theory of Evolution". The method of maximum likelihood was proposed by R.A. Fisher in 1922, "On the Mathematical Foundations of Theoretical Statistics". Both papers were published in the Philosophical Transactions of the Royal Society of London, Series A. References: Fisher, RA (1922). On the Mathematical Foundations of Theoretical Statistics, Philosophical Transactions of the Royal Society of London, Series A, 222, 309-368. Patriota, AG, Bolfarine, H, de Castro, M (2009). A heteroscedastic structural errors-in-variables model with equation error, Statistical Methodology 6 (4), 408-423 (pdf) Pearson, K (1894). Contributions to the Mathematical Theory of Evolution, Philosophical Transactions of the Royal Society of London, Series A, 185, 71-110.
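A small simulation sketch of this moment-based inference for $\nu_4$ (the underlying distribution and sample size are arbitrary choices for illustration; the method itself only uses sample moments):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.normal(size=n)               # the density f is treated as unknown by the method

    m4 = np.mean(x ** 4)                 # estimate of nu_4 (true value is 3 for N(0,1))
    m8 = np.mean(x ** 8)                 # estimate of nu_8, needed for the asymptotic variance
    se = np.sqrt((m8 - m4 ** 2) / n)

    # Approximate 95% confidence interval for nu_4 based on the CLT above:
    print(m4 - 1.96 * se, m4 + 1.96 * se)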
4,148
Examples where method of moments can beat maximum likelihood in small samples?
According to simulations run by Hosking and Wallis (1987) in "Parameter and Quantile Estimation for the Generalized Pareto Distribution", the parameters of the two-parameter generalized Pareto distribution given by the cdf $G(y)= \begin{cases} 1-\left(1+ \frac{\xi y}{\beta} \right)^{-\frac{1}{\xi}} & \xi \neq 0 \\ 1-\exp\left(-\frac{y}{\beta}\right) & \xi=0 \end{cases}$ or the density $g(y)= \begin{cases} \frac{1}{\beta} \left( 1+\frac{\xi y}{\beta} \right)^{-1-\frac{1}{\xi}} & \xi \neq 0 \\ \frac{1}{\beta} \exp\left(-\frac{y}{\beta} \right) & \xi=0 \end{cases}$ are more reliable if they are estimated by means of MOM as opposed to ML. This holds for samples up to size 500. The MOM estimates are given by $\widehat\beta = \frac{\overline y \overline{y^2}}{2(\overline{y^2} - (\overline y)^2)}$ and $\widehat\xi = \frac{1}{2} - \frac{(\overline y)^2}{2(\overline{y^2} - (\overline y)^2)}$ with $\overline{y^2} = \frac{1}{n} \sum_{i=1}^n y_i^2$ The paper contains quite a few typos (at least my version does). Results for the MOM estimators given above were kindly provided by "heropup" in this thread.
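As a rough check of the formulas quoted above, here is a small R sketch (my own, not from the paper) that computes the MOM estimates from a sample; the sampler uses the quantile function of the cdf $G$ above for $\xi \neq 0$:

    gpd_mom <- function(y) {
      m1 <- mean(y)                 # first sample moment
      m2 <- mean(y^2)               # second sample moment
      v  <- m2 - m1^2               # (biased) sample variance
      c(beta = m1 * m2 / (2 * v), xi = 0.5 - m1^2 / (2 * v))
    }
    set.seed(42)
    xi <- 0.2; beta <- 1
    u <- runif(200)
    y <- beta * (u^(-xi) - 1) / xi  # inverse-CDF draws from the GPD above
    gpd_mom(y)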
4,149
Examples where method of moments can beat maximum likelihood in small samples?
Additional sources in favor of MOM: Hong, H. P., and W. Ye. 2014. Analysis of extreme ground snow loads for Canada using snow depth records. Natural Hazards 73 (2):355-371. The use of MML could give unrealistic predictions if the sample size is small (Hosking et al. 1985; Martin and Stedinger 2000). Martins, E. S., and J. R. Stedinger. 2000. Generalized maximum-likelihood generalized extreme-value quantile estimators for hydrologic data. Water Resources Research 36 (3):737-744. Abstract: The three-parameter generalized extreme-value (GEV) distribution has found wide application for describing annual floods, rainfall, wind speeds, wave heights, snow depths, and other maxima. Previous studies show that small-sample maximum-likelihood estimators (MLE) of parameters are unstable and recommend L moment estimators. More recent research shows that method of moments quantile estimators have for −0.25 < κ < 0.30 smaller root-mean-square error than L moments and MLEs. Examination of the behavior of MLEs in small samples demonstrates that absurd values of the GEV-shape parameter κ can be generated. Use of a Bayesian prior distribution to restrict κ values to a statistically/physically reasonable range in a generalized maximum likelihood (GML) analysis eliminates this problem. In our examples the GML estimator did substantially better than moment and L moment quantile estimators for − 0.4 ≤ κ ≤ 0. In the Introduction and Literature Review sections they cite additional papers which concluded that MOM in some cases outperforms MLE (again in extreme value modelling), e.g.: Hosking et al. [1985a] show that small-sample MLE parameter estimators are very unstable and recommend probability-weighted moment (PWM) estimators which are equivalent to L moment estimators [Hosking, 1990]. [...] Hosking et al. [1985a] showed that the probability-weighted moment (PWM) or equivalent L moment (LM) estimators for the GEV distribution are better than maximum-likelihood estimators (MLE) in terms of bias and variance for sample sizes varying from 15 to 100. More recently, Madsen et al. [1997a] showed that the method of moments (MOM) quantile estimators have smaller RMSE (root-mean-square error) for -0.25 < K < 0.30 than LM and MLE when estimating the 100-year event for sample sizes of 10-50. MLEs are preferable only when K > 0.3 and the sample sizes are modest (n >= 50). K (kappa) is the shape parameter of the GEV. Papers which appear in the quotes: Hosking J, Wallis J, Wood E (1985) Estimation of the generalized extreme-value distribution by the method of probability-weighted moments. Technometrics 27:251–261. Madsen, H., P. F. Rasmussen and D. Rosbjerg (1997) Comparison of annual maximum series and partial duration series methods for modeling extreme hydrologic events, 1, At-site modeling, Water Resour. Res., 33(4), 747-758. Hosking, J. R. M., L-moments: Analysis and estimation of distributions using linear combinations of order statistics, J. R. Stat. Soc., Ser. B, 52, 105-124, 1990. Additionally, my own experience matches the conclusions of the papers above: when modeling extreme events with small to moderate sample sizes (<50-100, which is typical), MLE can give unrealistic results, while simulation shows that MOM is more robust and has smaller RMSE.
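To illustrate the kind of small-sample instability these papers describe, here is a self-contained R sketch (my own, not taken from any of the cited papers) that fits the GEV shape by maximum likelihood on repeated samples of size 15; the sampler and the likelihood use the standard GEV formulas for $\xi \neq 0$:

    set.seed(11)
    rgev_inv <- function(n, mu = 0, sigma = 1, xi = 0.2)       # inverse-CDF sampler
      mu + sigma * ((-log(runif(n)))^(-xi) - 1) / xi
    gev_nll <- function(par, y) {                               # GEV negative log-likelihood
      mu <- par[1]; sigma <- exp(par[2]); xi <- par[3]
      if (abs(xi) < 1e-6) return(1e10)                          # avoid the xi = 0 branch
      z <- 1 + xi * (y - mu) / sigma
      if (any(z <= 0)) return(1e10)                             # outside the support
      val <- sum(log(sigma) + (1 + 1/xi) * log(z) + z^(-1/xi))
      if (is.finite(val)) val else 1e10
    }
    xi_hat <- replicate(200, {
      y <- rgev_inv(15)                                         # true shape xi = 0.2
      optim(c(mean(y), log(sd(y)), 0.1), gev_nll, y = y)$par[3]
    })
    summary(xi_hat)   # small-sample ML shape estimates scatter widely around 0.2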
4,150
Examples where method of moments can beat maximum likelihood in small samples?
In the process of answering this: Estimating parameters for a binomial, I stumbled over this paper: Ingram Olkin, A. John Petkau, James V. Zidek: A comparison of N estimators for the binomial distribution. JASA, 1981, which gives an example where the method of moments, at least in some cases, beats maximum likelihood. The problem is estimation of $N$ in the binomial distribution $\text{Bin}(N,p)$ when both parameters are unknown. It appears, for example, when trying to estimate animal abundance when you cannot see all the animals and the sighting probability $p$ is also unknown.
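For reference, the simplest moment estimator studied in that literature equates the sample mean and variance to $Np$ and $Np(1-p)$; a quick R sketch (mine, not from the paper) shows both how it is computed and how unstable it can be when the sample variance approaches the sample mean:

    binom_mom <- function(x) {
      m <- mean(x); v <- var(x)
      p_hat <- 1 - v / m            # breaks down (<= 0) when v >= m
      c(N = m / p_hat, p = p_hat)
    }
    set.seed(7)
    x <- rbinom(20, size = 30, prob = 0.3)   # 20 counts, true N = 30, p = 0.3
    binom_mom(x)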
4,151
Examples where method of moments can beat maximum likelihood in small samples?
An example that is admittedly connected with the James-Stein phenomenon, albeit in dimension one. In the case of estimating the squared norm $\theta=||\mu||^2$ of a Gaussian mean vector, when observing $X\sim\mathcal N_p(\mu,\mathbf I_p)$, the MLE $$\hat\theta^\text{MLE}=||x||^2$$ is doing quite poorly [in terms of squared error loss] when compared with the moment estimator $$\hat\theta^\text{MM}=||x||^2-p$$ itself outperformed by the left-truncated version $$\hat\theta^\text{TMM}=(||x||^2-p)^+$$ Surprisingly, the MLE of $\theta$ based on the original MLE distribution $$\hat\theta^\text{MLE}\sim\chi^2_p(\theta)$$ is different and apparently admissible, standing between $\hat\theta^\text{TMM}$ and $$\hat\theta^\text{JS}=(||x||^2-p+1)^+$$
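A small Monte Carlo comparison of the squared-error risk of these estimators is easy to run; the sketch below (my own, with an arbitrary choice of $p$ and $\mu$) gives an empirical version of the ordering described above:

    set.seed(3)
    p    <- 10
    mu   <- rep(1, p); theta <- sum(mu^2)
    nrep <- 1e4
    x2   <- colSums((matrix(rnorm(nrep * p), p) + mu)^2)    # draws of ||x||^2
    est  <- cbind(MLE = x2, MM = x2 - p, TMM = pmax(x2 - p, 0),
                  JS = pmax(x2 - p + 1, 0))
    colMeans((est - theta)^2)    # Monte Carlo squared-error risk of each estimator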
4,152
How does centering the data get rid of the intercept in regression and PCA?
Can these pictures help? The first two pictures are about regression. Centering the data does not alter the slope of the regression line, but it makes the intercept equal to 0. The pictures below are about PCA. PCA is a regressional model without intercept$^1$. Thus, principal components inevitably come through the origin. If you forget to center your data, the 1st principal component may pierce the cloud not along its main direction, and will be (for statistical purposes) misleading. $^1$ PCA isn't a regression analysis, of course. It does, however, formally share the same linear equation (linear combination) with linear regression. The PCA equation is like a linear regression equation without an intercept - because PCA is a rotation operation.
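Since the pictures cannot be reproduced here, the same two facts can be checked numerically; a short sketch using only base R:

    set.seed(1)
    x <- rnorm(50); y <- 2 + 3 * x + rnorm(50)
    coef(lm(y ~ x))                               # intercept near 2, slope near 3
    coef(lm(I(y - mean(y)) ~ I(x - mean(x))))     # intercept essentially 0, same slope
    X <- cbind(x, y)
    pc_raw      <- prcomp(X, center = FALSE)      # axes forced through (0, 0)
    pc_centered <- prcomp(X, center = TRUE)       # axes through the centroid
    rbind(raw = pc_raw$rotation[, 1], centered = pc_centered$rotation[, 1])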
4,153
How does centering the data get rid of the intercept in regression and PCA?
At least two references that I can find, an earlier edition of which I have been familiar with for about thirty years, state that there are four basic variants of PCA, using: (1) covariance about the origin, (2) covariance about the mean (this is the variant most commonly referred to as 'PCA' by e.g. sklearn), (3) correlation about the origin, and (4) correlation about the mean. PCA 'about the origin' is performed without mean-centring the data, and for 'correlation about', the correlation matrix is used instead of the covariance matrix. Which variant you use will obviously affect downstream calculations, particularly forms of factor analysis. From Applications of Factor Analysis to Spectroscopic Methods (Brockwell, 1992): Correlation about the mean is the traditional form of pre-processing applied before factor analysis; it maintains the spatial information contained in the data but loses both the origin and the magnitude of the original information. Correlation about the origin maintains the zero point of the data but still loses the relative size information. Covariance about the mean maintains the relative size information but loses the zero point of the data. Covariance about the origin does not alter the data in any way, thus preserving the magnitude and origin information. The different techniques find uses depending upon the characteristics of the data; mass spectroscopy data has both an absolute zero point and a common scale for magnitude. The use of the four pre-treatments above was studied by Rozett and Petersen using the mass spectra of 22 alkyl benzenes, and they concluded that with both R and Q analysis (R analysis has the data with rows composed of the samples and columns of spectra; Q analysis is the opposite) the use of covariance about the origin was the best method of pre-treatment, as it preserved both the origin of the factor space at zero and also the relative sizes of the components. References: Malinowski (2002). Factor Analysis in Chemistry, Third Edition. Crawford and Hanson (1970). BSR 2949 - Signature Data Processing Final Report - Volume II: Equations and Flow Diagrams, NASA.
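Computationally, the four variants simply correspond to eigendecomposing four different cross-product matrices. A hedged R sketch follows, using one common set of definitions; in particular, 'correlation about the origin' is taken here to be the cosine (normalized second-moment) matrix, which is an assumption on my part rather than something fixed by the references above:

    set.seed(2)
    X  <- matrix(rexp(100 * 3), ncol = 3)        # strictly positive "spectra"
    C0 <- crossprod(X) / (nrow(X) - 1)           # covariance about the origin
    Cm <- cov(X)                                 # covariance about the mean
    d  <- 1 / sqrt(diag(crossprod(X)))
    R0 <- diag(d) %*% crossprod(X) %*% diag(d)   # correlation (cosine) about the origin
    Rm <- cor(X)                                 # correlation about the mean
    sapply(list(cov0 = C0, covm = Cm, cor0 = R0, corm = Rm),
           function(M) eigen(M)$vectors[, 1])    # leading component of each variant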
4,154
Alternatives to logistic regression in R
Popular right now are randomForest and gbm (called MART or Gradient Boosting in the machine learning literature), and rpart for simple trees. Also popular is bayesglm, which uses MAP estimation with priors for regularization.

    install.packages(c("randomForest", "gbm", "rpart", "arm"))
    library(randomForest)
    library(gbm)
    library(rpart)
    library(arm)

    # y is the 0/1 response and x the predictor from the question; xx is the prediction grid
    r1 <- randomForest(y ~ x)
    r2 <- gbm(y ~ x, distribution = "bernoulli")   # MART / gradient boosting
    r3 <- rpart(y ~ x)
    r4 <- bayesglm(y ~ x, family = binomial)

    yy1 <- predict(r1, data.frame(x = xx))
    yy2 <- predict(r2, data.frame(x = xx), n.trees = r2$n.trees, type = "response")
    yy3 <- predict(r3, data.frame(x = xx))
    yy4 <- predict(r4, data.frame(x = xx), type = "response")
4,155
Alternatives to logistic regression in R
Actually, that depends on what you want to obtain. If you use logistic regression only for the predictions, you can use any supervised classification method suited to your data. Another possibility: discriminant analysis (lda() and qda() from package MASS).

    library(MASS)
    r <- lda(y ~ x)  # use qda() for quadratic discriminant analysis
    xx <- seq(min(x), max(x), length = 100)
    pred <- predict(r, data.frame(x = xx))   # posterior probabilities on the grid
    yy <- pred$posterior[, 2]
    cls <- predict(r)$class                  # predicted classes for the training data
    color <- c("red", "blue")
    plot(y ~ x, pch = 19, col = color[cls])
    abline(lm(y ~ x), col = 'red', lty = 2)
    lines(xx, yy, col = 'blue', lwd = 5, lty = 2)
    title(main = 'lda implementation')

On the other hand, if you need confidence intervals around your predictions or standard errors on your estimates, most classification algorithms aren't going to help you. You could use generalized additive (mixed) models, for which a number of packages are available. I often use the mgcv package of Simon Wood. Generalized additive models allow more flexibility than logistic regression, as you can use splines for modelling your predictors.

    set.seed(55)
    require(mgcv)
    n <- 100
    x1 <- c(rnorm(n), 1 + rnorm(n))
    x2 <- sqrt(c(rnorm(n, 4), rnorm(n, 6)))
    y <- c(rep(0, n), rep(1, n))
    r <- gam(y ~ s(x1) + s(x2), family = binomial)
    xx <- seq(min(x1), max(x1), length = 100)
    xxx <- seq(min(x2), max(x2), length = 100)
    yy <- predict(r, data.frame(x1 = xx, x2 = xxx), type = 'response')
    color <- c("red", "blue")
    clustering <- ifelse(r$fitted.values < 0.5, 1, 2)
    plot(y ~ x1, pch = 19, col = color[clustering])
    abline(lm(y ~ x1), col = 'red', lty = 2)
    lines(xx, yy, col = 'blue', lwd = 5, lty = 2)
    title(main = 'gam implementation')

There's a whole lot more you can do:

    op <- par(mfrow = c(2, 1))
    plot(r, all.terms = T)
    par(op)
    summary(r)
    anova(r)
    r2 <- gam(y ~ s(x1), family = binomial)
    anova(r, r2, test = "Chisq")
    ...

I'd recommend Simon Wood's book on generalized additive models.
4,156
Alternatives to logistic regression in R
I agree with Joe, and would add: any classification method could in principle be used, although it will depend on the data/situation. For instance, you could also use an SVM, possibly with the popular C-SVM model. Here's an example from kernlab using a radial basis kernel function:

    library(kernlab)
    x <- rbind(matrix(rnorm(120), , 2), matrix(rnorm(120, mean = 3), , 2))
    y <- matrix(c(rep(1, 60), rep(-1, 60)))
    svp <- ksvm(x, y, type = "C-svc")   # C-classification, default radial basis kernel
    plot(svp, data = x)
4,157
Alternatives to logistic regression in R
There are around 100 classification and regression models trainable via the caret package. Any of the classification models will be an option for you (as opposed to regression models, which require a continuous response). For example, to train a random forest:

    library(caret)
    # the response should be a factor so that caret treats this as classification
    train(response ~ ., data, method = "rf")

See the caret model training vignette which comes with the distribution for a full list of the models available. It is split into dual-use and classification models (both of which you can use) and regression-only models (which you can't). caret will automatically tune the hyperparameters of your chosen model for you.
4,158
Alternatives to logistic regression in R
Naive Bayes is a good simple method for training on data to predict a binary response.

    library(e1071)
    # naiveBayes wants a data frame and a factor response (x, y from the question)
    d <- data.frame(x = x, y = factor(y))
    fitNB <- naiveBayes(y ~ x, data = d)
    predict(fitNB, newdata = d)                    # predicted classes
    predict(fitNB, newdata = d, type = "raw")      # class probabilities
4,159
Alternatives to logistic regression in R
There are two variations of logistic regression which have not yet been outlined. First, logistic regression estimates probabilities using a logistic function, which is the cumulative distribution function of the logistic distribution (also known as a sigmoid). You can also estimate probabilities using functions derived from other distributions. The most common alternative to logistic regression is probit regression, which is derived from the normal distribution. For a more detailed discussion of the differences between probit and logit, please visit the following site: Difference between logit and probit models.

    set.seed(55)
    n <- 100
    x <- c(rnorm(n), 1 + rnorm(n))
    y <- c(rep(0, n), rep(1, n))
    r <- glm(y ~ x, family = binomial(link = "probit"))
    plot(y ~ x)
    abline(lm(y ~ x), col = 'red', lty = 2)
    xx <- seq(min(x), max(x), length = 100)
    yy <- predict(r, data.frame(x = xx), type = 'response')
    lines(xx, yy, col = 'red', lwd = 5, lty = 2)
    title(main = 'Probit regression with the "glm" function')

The second alternative addresses a weakness of the logistic regression you implemented: with a small sample size and/or sparse cells it is not advisable, and an exact logistic regression is a better model. The log odds of the outcome are modeled as a linear combination of the predictor variables.

    # from the elrm package; note that elrm expects a collapsed dataset and a
    # formula of the form successes/trials ~ predictors
    elrm(formula = y ~ x)

Furthermore, there are other alternatives worth mentioning: two-way contingency tables, two-group discriminant function analysis, Hotelling's T2. Final remark: a logistic regression is the same as a small neural network without hidden layers and with a single unit in the output layer. Therefore you can use implementations from neural network packages such as nnet in R. Edit: Some weeks later I realized that there are also the Winnow and the Perceptron algorithms. Both are classifiers that also work for classification into two groups, but both have fallen out of favor in the last 15 years.
4,160
Who are frequentists?
Some existing answers talk about statistical inference and some about interpretation of probability, and none clearly makes the distinction. The main purpose of this answer is to make this distinction. The word "frequentism" (and "frequentist") can refer to TWO DIFFERENT THINGS: One is the question about what is the definition or the interpretation of "probability". There are multiple interpretations, "frequentist interpretation" being one of them. Frequentists would be the people adhering to this interpretation. Another is statistical inference about model parameters based on observed data. There are Bayesian and frequentist approaches to statistical inference, and frequentists would be the people preferring to use the frequentist approach. Now comes a speculation: I think there are almost no frequentists of the first kind (P-frequentists), but there are lots of frequentists of the second kind (S-frequentists).

Frequentist interpretation of probability

The question of what is probability is a subject of intense ongoing debate with 100+ years of history. It belongs to philosophy. I refer anybody not familiar with this debate to the Interpretations of Probability article in the Stanford Encyclopedia of Philosophy, which contains a section on frequentist interpretation(s). Another very readable account that I happen to know of is this paper: Appleby, 2004, Probability is single-case or nothing -- which is written in the context of foundations of quantum mechanics, but contains sections focusing on what probability is. Appleby writes: Frequentism is the position that a probability statement is equivalent to a frequency statement about some suitably chosen ensemble. For instance, according to von Mises [21, 22] the statement “the probability of this coin coming up heads is 0.5” is equivalent to the statement “in an infinite sequence of tosses this coin will come up heads with limiting relative frequency 0.5”. This might seem reasonable, but there are so many philosophical problems with this definition that one hardly knows where to start. What is the probability that it will rain tomorrow? Meaningless question, because how would we have an infinite sequence of trials? What is the probability of the coin in my pocket coming up heads? A relative frequency of heads in an infinite sequence of tosses, you say? But the coin will wear out and the Sun will go supernova before the infinite sequence can be finished. So we should be talking about a hypothetical infinite sequence. This brings one to the discussion of reference classes etc. etc. In philosophy one does not get away so easily. And by the way, why should the limit exist at all? Furthermore, what if my coin were to come up heads 50% of the time during the first billion years but then would start coming up heads only 25% of the time (thought experiment from Appleby)? This means that $P(\mathrm{Heads})=1/4$ by definition. But we will always be observing $\mathrm{Frequency}(\mathrm{Heads})\approx 1/2$ during the next billion years. Do you think such a situation is not really possible? Sure, but why? Because the $P(\mathrm{Heads})$ cannot suddenly change? But this sentence is meaningless for a P-frequentist. I want to keep this answer short so I stop here; see above for the references. I think it is really difficult to be a die-hard P-frequentist. (Update: In the comments below, @mpiktas insists that it is because the frequentist definition is mathematically meaningless. My opinion expressed above is rather that the frequentist definition is philosophically problematic.)

Frequentist approach to statistics

Consider a probabilistic model $P(X\mid\theta)$ that has some parameters $\theta$ and allows one to compute the probability of observing data $X$. You did an experiment and observed some data $X$. What can you say about $\theta$? S-frequentism is the position that $\theta$ is not a random variable; its true values in the Real World are what they are. We can try to estimate them as some $\hat \theta$, but we cannot meaningfully talk about the probability of $\theta$ being in some interval (e.g. being positive). The only thing we can do, is to come up with a procedure of constructing some interval around our estimate such that this procedure succeeds in encompassing true $\theta$ with a particular long-run success frequency (particular probability). Most of the statistics used in natural sciences today is based on this approach, so there certainly are lots of S-frequentists around today. (Update: if you look for an example of a philosopher of statistics, as opposed to practitioners of statistics, defending the S-frequentist point of view, then read Deborah Mayo's writings; +1 to @NRH's answer.)

UPDATE: On the relationship between P-frequentism and S-frequentism

@fcop and others ask about the relationship between P-frequentism and S-frequentism. Does one of these positions imply the other? There is no doubt that historically S-frequentism was developed based on the P-frequentist stance; but do they logically imply one another? Before approaching this question I should say the following. When I wrote above that there are almost no P-frequentists I did not mean that almost everybody is P-subjective-bayesian-a-la-de-finetti or P-propensitist-a-la-popper. In fact, I believe that most statisticians (or data-scientists, or machine-learners) are P-nothing-at-all, or P-shut-up-and-calculate (to borrow Mermin's famous phrase). Most people tend to ignore foundational problems. And it is fine. We do not have a good definition of free will, or intelligence, or time, or love. But this should not stop us from working on neuroscience, or on AI, or on physics, or from falling in love. Personally, I am not a P-frequentist, but neither do I have any coherent view on foundations of probability. In contrast, almost everybody who did some practical statistical analysis is either an S-frequentist or an S-Bayesian (or perhaps a mixture). Personally, I published papers containing $p$-values and I have never (so far) published papers containing priors and posteriors over model parameters, so this makes me an S-frequentist, at least in practice. It is therefore clearly possible to be an S-frequentist without being a P-frequentist, despite what @fcop says in his answer. Okay. Fine. But still: Can a P-Bayesian be an S-frequentist? And can a P-frequentist be an S-Bayesian? For a convinced P-Bayesian it is probably atypical to be an S-frequentist, but in principle entirely possible. E.g. a P-Bayesian can decide that they do not have any prior information over $\theta$ and hence adopt an S-frequentist analysis. Why not. Every S-frequentist claim can certainly be interpreted under a P-Bayesian interpretation of probability. For a convinced P-frequentist to be S-Bayesian is probably problematic. But then it is very problematic to be a convinced P-frequentist...
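As a concrete picture of what the S-frequentist "long-run success frequency" means, here is a tiny R simulation (my own illustration, not part of the original argument) of the coverage of the ordinary t-interval for a fixed, non-random $\theta$:

    set.seed(9)
    theta <- 3                                   # fixed, unknown-to-the-analyst truth
    covered <- replicate(1e4, {
      x  <- rnorm(20, mean = theta)
      ci <- t.test(x)$conf.int
      ci[1] <= theta && theta <= ci[2]
    })
    mean(covered)                                # close to the nominal 0.95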
4,161
Who are frequentists?
Kolmogorov's work on Foundations of the Theory of Probability has a section called "Relation to Experimental Data" on p. 3. There he shows how one could deduce his axioms by observing experiments, which is quite a frequentist way of interpreting probabilities; he also has an interesting remark about impossible events (empty sets). So, I think that if you're comfortable with these arguments, then you must admit that you're a frequentist. This label is not exclusive. You can be bi-paradigmous (I made up the word), i.e. both a frequentist and a Bayesian. For instance, I become Bayesian when applying stochastic methods to phenomena which are not inherently stochastic. UPDATE: As I wrote earlier on CV, Kolmogorov's theory itself is not frequentist per se. It's as compatible with the Bayesian view as with the frequentist view. He put a cute footnote on that section to make very clear that he's abstaining from philosophy.
4,162
Who are frequentists?
I believe that it is relevant to mention Deborah Mayo, who writes the blog Error Statistics Philosophy. I won't claim to have a deep understanding of her philosophical position, but the framework of error statistics, as described in a paper with Aris Spanos, does include what is regarded as classical frequentist statistical methods. To quote the paper: Under the umbrella of error-statistical methods, one may include all standard methods using error probabilities based on the relative frequencies of errors in repeated sampling – often called sampling theory or frequentist statistics. And further down in the same paper you can read that: For the error statistician probability arises not to measure degrees of confirmation or belief (actual or rational) in hypotheses, but to quantify how frequently methods are capable of discriminating between alternative hypotheses and how reliably they facilitate the detection of error.
4,163
Who are frequentists?
Referring to this thread and the comments on it, I think that the frequentists are those that define the ''probability'' of an event as the long-run relative frequency of the occurrence of that event. So if $n$ is the number of experiments and $n_A$ the number of occurrences of event $A$, then the probability of the event $A$, denoted by $P(A)$, is defined as $$P(A):=\lim_{n\to +\infty} \frac{n_A}{n}.$$ It is not hard to see that this definition fulfills Kolmogorov's axioms (because taking limits is linear; see also Is there any *mathematical* basis for the Bayesian vs frequentist debate?). In order to give such a definition they must ''believe'' that this limit exists. So the frequentists are those who believe in the existence of this limit. (A minimal simulation of this limiting relative frequency is sketched at the end of this answer.)

EDIT on 31/8/2016: on the distinction between S- and P-frequentism

As @amoeba distinguishes in his answer between S-frequentists and P-frequentists, where P-frequentists are the type of frequentists that I define supra, and as he also argues that it is hard to be a P-frequentist, I added this EDIT section to argue that the opposite is true; I argue that all S-frequentists are P-frequentists.

In the S-frequentism section @amoeba says ''this procedure succeeds in encompassing true $\theta$ with a particular long-run success frequency (particular probability).'' In his answer he also states that P-frequentists are a rare species. But this ''long-run success frequency'', used to define S-frequentism, is what he defines as P-frequentism, as it is the interpretation of $P(\widehat{CI} \ni \theta)$. Therefore, according to his definitions, every S-frequentist is also a P-frequentist, and I conclude that P-frequentists are not as rare as argued by amoeba.

There is even more: @amoeba also argues that S-frequentists consider the unknown parameter $\theta$ as fixed or non-random, so one cannot talk about the ''probability of $\theta$ having a particular value''; he says that ''The only thing we can do, is to come up with a procedure of constructing some interval around our estimate such that this procedure succeeds in encompassing true $\theta$ with a particular long-run success frequency (particular probability).'' May I ask what might be the origin of the name ''frequentist'': (a) the ''non-random $\theta$''-idea or (b) the ''long-run frequency''-idea?

May I also ask @mpiktas, who writes in his comment to the answer of amoeba ''It is very hard to be a P-frequentist, because it is practically impossible to give mathematically sound definition of such probability'': if you need a definition of P-frequentism to define S-frequentism, how can one then be more S-frequentist than P-frequentist?
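As a small aside on the limiting relative frequency used in the definition above, here is a minimal R sketch (my own illustration, with an assumed $P(A)=0.3$ and i.i.d. trials, not part of the original argument) of the running proportion $n_A/n$ approaching $P(A)$:

# Simulate n i.i.d. trials of an event A with assumed P(A) = 0.3 and track the
# running relative frequency n_A / n as the number of trials grows.
set.seed(1)
p_true <- 0.3
x <- rbinom(1e5, size = 1, prob = p_true)   # 1 = "A occurred", 0 = "A did not occur"
rel_freq <- cumsum(x) / seq_along(x)        # n_A / n after each trial
rel_freq[c(10, 100, 1000, 10000, 100000)]   # settles down towards 0.3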
Who are frequentists?
Let me offer an answer that connects this question with a matter of current and very practical importance -- Precision Medicine -- while at the same time answering it literally as it was asked: Who are frequentists?

Frequentists are people who say things such as [1] (emphasis mine):

What does a 10% risk of an event within the next decade mean to the individual for whom it was generated? Contrary to what is thought, this risk level is not that person’s personal risk because probability is not meaningful in an individual context.

Thus, frequentists interpret 'probability' in such a way that it has no meaning in a singular context like that of an individual patient. My PubMed Commons comment on [1] examines the contortions its frequentist authors must undergo to recover a semblance of a probability-like notion applicable to the care of an individual patient. Observing how and why they do this may prove very instructive as to who is a frequentist. Also, the largely unilluminating subsequent exchange in the JAMA Letters section [2,3] is instructive as to the importance of explicitly recognizing limitations in frequentist notions of probability and attacking them directly as such. (I regret many CV users may find that [1] lies behind a paywall.)

The excellent and highly readable book [4] by L. Jonathan Cohen would repay the efforts of anyone interested in the OP's question. Of note, Cohen's book was, oddly, cited by [1] in connection with the claim "probability is not meaningful in an individual context," although Cohen clearly rebukes this view as follows [4, p. 49]:

Nor is it open to a frequency theorist to claim that all important probabilities are indeed general, not singular. It often seems very important to be able to calculate the probability of success for your own child’s appendectomy...

[1] Sniderman AD, D’Agostino Sr RB, and Pencina MJ. “The Role of Physicians in the Era of Predictive Analytics.” JAMA 314, no. 1 (July 7, 2015): 25–26. doi:10.1001/jama.2015.6177. PubMed
[2] Van Calster B, Steyerberg EW, and Harrell FH. “Risk Prediction for Individuals.” JAMA 314, no. 17 (November 3, 2015): 1875. doi:10.1001/jama.2015.12215. Full Text
[3] Sniderman AD, D’Agostino Sr RB, and Pencina MJ. “Risk Prediction for Individuals—Reply.” JAMA 314, no. 17 (November 3, 2015): 1875–76. doi:10.1001/jama.2015.12221. Full Text
[4] Cohen, L. Jonathan. An Introduction to the Philosophy of Induction and Probability. Oxford; New York: Clarendon Press; Oxford University Press, 1989. Link to scanned pages 46-53 & 81-83
Who are frequentists?
Really interesting question! I'd put myself in the frequentist camp when it comes to understanding and interpreting probability statements, although I am not quite so hard-line about the need for an actual sequence of iid experiments to ground this probability. I suspect most people who don't buy the thesis that "probability is a subjective measure of belief" would also think about probability this way.

Here's what I mean: take our usual "fair" coin, with assignment $P(H)=0.5$. When I hear this, I form an image of someone tossing this coin many times, with the fraction of heads approaching $0.5$. Now, if pressed, I would also say that the fraction of heads in any random sample from a finite sequence of such coin tosses will also approach $0.5$ as the sample size grows (independence assumption). As has been stated by others, the biggest assumption is that this limit exists and is correct (i.e., the limit is $0.5$), but I think just as important is the assumption that the same limit exists for randomly chosen sub-samples as well. Otherwise, our interpretation only has meaning with respect to the entire infinite sequence (e.g., we could have strong autocorrelation that gets averaged out). I think the above is pretty uncontroversial for frequentists.

A Bayesian would be more focused on the experiment at hand and less on the long-run behavior: they would state that their degree of belief that the next toss will be heads is $P(H) = 0.5$... full stop. For a simple case such as coin tossing, we can see that the frequentist and Bayesian approaches are functionally equivalent, albeit philosophically very different. As Dikran Marsupial has pointed out, the Bayesian may in fact be utilizing the fact that empirically we see coins come up heads about as often as we see them come up tails (long-run/large-sample frequency as a prior).

What about things that cannot possibly have long-run frequencies? For example, what is the probability North Korea will start a war with Japan in the next 10 years? For frequentists, we are really left in the lurch, since we cannot really describe the sampling distributions required to test such a hypothesis. A Bayesian would be able to tackle this problem by placing a probability distribution over the possibilities, most likely based on eliciting expert input. However, a key question comes up: where do these degrees of belief (or assumed values for the long-run frequency) come from? I'd argue from psychology and say that these beliefs (especially in areas far from experimental data) come from what are referred to as the availability heuristic and the representativeness heuristic. There is a slew of others that likely come into play. I argue this because in the absence of data to calibrate our beliefs (towards the observed long-run frequency!), we must rely on heuristics, however sophisticated we make them seem.

The above mental heuristic thinking applies equally to frequentists and Bayesians. What is interesting to me is that regardless of our philosophy, at the root, we place more belief in something that we think is more likely to be true, and we believe it to be more likely to be true because we believe there are more ways for it to be true, or we imagine that the pathways leading to it being true would happen more often (frequently :-) than those that would make it not true.

Since it's an election year, let's take a political example: what belief would we place in the statement "Ted Cruz will propose a ban on assault rifles in the next 4 years"? Now, we do have some data on this from his own statements, and we'd likely place our prior belief in the truth of this statement very near zero. But why? Why do his prior statements make us think this way? Because we think that highly ideological people tend to "stick to their guns" more than their pragmatist counterparts. Where does this come from? Likely from studies done by psychologists and our own experiences with highly principled people. In other words, we have some data and the belief that in most cases where someone like Cruz could change their mind, they will not (again, a long-run or large-sample assessment of sorts).

This is why I "caucus" with the frequentists. It's not my dislike of Bayesian philosophy (quite reasonable) or methods (they're great!), but that if I dig deep enough into why I hold beliefs that lack strong large-sample backing, I find that I am relying on some sort of mental model where outcomes can be tallied (if implicitly) or where I can invoke long-run probabilities in a particular sub-process (e.g., Republicans vote against gun control measures X% of the time) to weight my belief one way or another. Of course, this is not really true frequentism, and I doubt that there are many people who subscribe to the von Mises-esque interpretation of probability to the letter. However, I think it shows the underlying compatibility between Bayesian and frequentist probability: both are appealing to our inner heuristics regarding availability, or what I call the "Pachinko" principle about frequencies along a chain of causation.

So perhaps I should call myself an "availabilist", to indicate that I assign probabilities based on how often I can imagine an event occurring as the outcome of a chain of events (with some rigor/modelling of course). If I have a lot of data, great. If I don't, then I will try to decompose the hypothesis into a chain of events and use what data I have (anecdotal or "common sense", as need be) to assess how often I would imagine such an event to occur.

Sorry for the longish post, great question BTW!
Who are frequentists?
As @amoeba noticed, we have the frequentist definition of probability and frequentist statistics. All the sources that I have seen until now say that frequentist inference is based on the frequentist definition of probability, i.e. understanding it as the limiting proportion in an infinite number of random draws (as already noticed by @fcop and @Aksakal quoting Kolmogorov) $$ P(A) = \lim_{n\to\infty} \frac{n_A}{n} $$ So basically, there is a notion of some population that we can repeatedly sample from. The same idea is used in frequentist inference. I went through some classic papers, e.g. by Jerzy Neyman, to track the theoretical foundations of frequentist statistics. In 1937 Neyman wrote:

(ia) The statistician is concerned with a population, $\pi$, which for some reason or other cannot be studied exhaustively. It is only possible to draw a sample from this population which may be studied in detail and used to form an opinion as to the values of certain constants describing the properties of the population $\pi$. For example, it may be desired to calculate approximately the mean of a certain character possessed by the individuals forming the population $\pi$, etc.
(ib) Alternatively, the statistician may be concerned with certain experiments which, if repeated under apparently identical conditions, yield varying results. Such experiments are called random experiments [...]
In both cases described, the problem with which the statistician is faced is the problem of estimation. This problem consists in determining what arithmetical operations should be performed on the observational data in order to obtain a result, to be called an estimate, which presumably does not differ very much from the true value of the numerical character, either of the population $\pi$, as in (ia), or of the random experiments, as in (ib). [...] In (ia) we speak of a statistician drawing a sample from the population studied.

In another paper (Neyman, 1977), he notices that the evidence provided in the data needs to be verified by observing the repeated nature of the studied phenomenon:

Ordinarily, the 'verification', or 'validation' of a guessed model consists in deducing some of its frequentist consequences in situations not previously studied empirically, and then in performing appropriate experiments to see whether their results are consistent with predictions. Very generally, the first attempt at verification is negative: the observed frequencies of the various outcomes of the experiment disagree with the model. However, on some lucky occasions there is a reasonable agreement and one feels the satisfaction of having 'understood' the phenomenon, at least in some general way. Later on, invariably, new empirical findings appear, indicating the inadequacy of the original model and demanding its abandonment or modification. And this is the history of science!

And in yet another paper, Neyman and Pearson (1933) write about random samples drawn from a fixed population:

In common statistical practice, when the observed facts are described as "samples," and the hypotheses concern the "populations", for which the samples have been drawn, the characters of the samples, or as we shall term them criteria, which have been used for testing hypotheses, appear often to be fixed by happy intuition.

Frequentist statistics in this context formalizes the scientific reasoning where evidence is gathered, then new samples are drawn to verify the initial findings, and as we accumulate more evidence our state of knowledge crystallizes.
Again, as described by Neyman (1977), the process takes the following steps:

(i) Empirical establishment of apparently stable long-run relative frequencies (or 'frequencies' for short) of events judged interesting, as they develop in nature.
(ii) Guessing and then verifying the 'chance mechanism', the repeated operation of which produces the observed frequencies. This is a problem of 'frequentist probability theory'. Occasionally, this step is labeled 'model building'. Naturally, the guessed chance mechanism is hypothetical.
(iii) Using the hypothetical chance mechanism of the phenomenon studied to deduce rules of adjusting our actions (or 'decisions') to the observations so as to ensure the highest 'measure' of 'success'. [...] the deduction of the 'rules of adjusting our actions' is a problem of mathematics, specifically of mathematical statistics.

Frequentists plan their research having in mind the random nature of data and the idea of repeated draws from a fixed population; they design their methods based on it, and use it to verify their results (Neyman and Pearson, 1933):

Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behavior with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.

This is connected to the repeated sampling principle (Cox and Hinkley, 1974):

(ii) Strong repeated sampling principle. According to the strong repeated sampling principle, statistical procedures are to be assessed by their behaviour in hypothetical repetitions under the same conditions. This has two facets. Measures of uncertainty are to be interpreted as hypothetical frequencies in long run repetitions; criteria of optimality are to be formulated in terms of sensitive behaviour in hypothetical repetitions. The argument for this is that it ensures a physical meaning for the quantities that we calculate and that it ensures a close relation between the analysis we make and the underlying model which is regarded as representing the "true" state of affairs.
(iii) Weak repeated sampling principle. The weak version of the repeated sampling principle requires that we should not follow procedures which for some possible parameter values would give, in hypothetical repetitions, misleading conclusions most of the time.

By contrast, when using maximum likelihood we are concerned with the sample that we have, and in the Bayesian case we make inference based on the sample and our priors, and as new data appear we can perform Bayesian updating. In both cases the idea of repeated sampling is not crucial. Frequentists rely only on the data they have (as noticed by @WBT), but keeping in mind that it is something random and is to be thought of as part of a process of repeated sampling from the population (recall, for example, how confidence intervals are defined; a small coverage simulation illustrating this is sketched after the references below). In the frequentist case the idea of repeated sampling enables us to quantify the uncertainty (in statistics) and to interpret real-life events in terms of probability.

As a side note, notice that neither Neyman (Lehmann, 1988) nor Pearson (Mayo, 1992) was as pure a frequentist as we might imagine. For example, Neyman (1977) proposes using Empirical Bayes and Maximum Likelihood for point estimation. On the other hand (Mayo, 1992), the point of Pearson's (1955) response to Fisher (and elsewhere in his work) is that for scientific contexts Pearson rejects both the low long-run error probability rationale [...]
So it seems that it is hard to find pure frequentists even among the founding fathers.

Neyman, J., and Pearson, E.S. (1933). On the Problem of the Most Efficient Tests of Statistical Hypotheses. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 231 (694–706): 289–337.
Neyman, J. (1937). Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability. Phil. Trans. R. Soc. Lond. A, 236: 333–380.
Neyman, J. (1977). Frequentist probability and frequentist statistics. Synthese, 36(1), 97–131.
Mayo, D. G. (1992). Did Pearson reject the Neyman-Pearson philosophy of statistics? Synthese, 90(2), 233–262.
Cox, D. R. and Hinkley, D. V. (1974). Theoretical Statistics. Chapman and Hall.
Lehmann, E. (1988). Jerzy Neyman, 1894–1981. Technical Report No. 155, Department of Statistics, University of California.
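To make the repeated sampling principle quoted above concrete, here is a small coverage simulation (my own sketch, not taken from the cited papers, with made-up values for the true mean and sample size): a 95% confidence interval is a procedure that, across hypothetical repetitions of the experiment, covers the fixed true parameter about 95% of the time.

# Repeat the "experiment" (a sample of 30 normal observations) many times and
# record how often the standard 95% t-interval covers the fixed true mean.
set.seed(123)
mu <- 10
covers <- replicate(1e4, {
  x  <- rnorm(30, mean = mu, sd = 2)
  ci <- t.test(x)$conf.int
  ci[1] < mu && mu < ci[2]
})
mean(covers)   # close to 0.95, the long-run coverage frequency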
Who are frequentists?
"Frequentists vs. Bayesians" from XKCD (under CC-BY-NC 2.5), click to discuss: The general point of the frequentist philosophy illustrated here is a belief in drawing conclusions about the relative likelihood of events based solely ("purely") on the observed data, without "polluting" that estimation process with pre-conceived notions about how things should or should not be. In presenting a probability estimate, the frequentist does not take into account prior beliefs about the likelihood of an event when there are observations available to support computation of its empirical likelihood. The frequentist should take this background information into account when deciding on the threshold for action or conclusion. As Dikran Marsupial wrote in a concise comment below, "The valuable point the cartoon (perhaps unintentionally) makes is that science is indeed more complex and we can't just apply the "null ritual" without thinking about prior knowledge." As another example, when trying to determine/declare what topics are "trending" on Facebook, frequentists would likely welcome the more purely algorithmic counting approach Facebook is shifting towards, instead of the old model where employees would curate that list based in part on their own background perspectives about which topics they thought "should" be most important.
Who are frequentists?
(A remark, only tangentially relevant for the question and the site.)

Probability is about the objective status of individual things. Things cannot have intention, and they receive their statuses from the universe. With a thing, an event (giving it its status) always shall have happened: the event is already there, accomplished, even if it hasn't actually happened yet - the past future of a thing, also called "fate" or contingency. Again, with probability, the fact of the event - having occurred yet or not doesn't matter - is already there [as opposed to the meaning, which never is there]; and as such it has already become unnecessary and superfluous. The fact should be discarded, and that invalidation of it is what we call "the event is probable". Any fact about a thing bears in itself its primeval unconvincing side, or probability of the fact (even the actually occurred fact - we recognize it by a pinprick of disbelief). We are inevitably "tired of things" pre-psychically to an extent. It remains therefore only to quantify that partial negation of facticity, if a number is needed. One way to quantify is to count. Another is to weigh. A frequentist carries out or imagines a series of trials lying before him which he turns face over to see if the event actually happens; he counts. A Bayesian considers a series of psychological motives dragging behind him which he screens; he weighs them as things. Both men are busy with a charge/excuse game of mind. Fundamentally, there is not much difference between them.

Possibility is about potentialities of me in the world. Possibility is always mine (a rain's chance is my problem of opting to take an umbrella or to get wet) and concerns not an object (the one I'm considering as being possible or having the possibility) but the whole world for me. Possibility is always 50/50 and it is always convincing, because it implies - either calls for before or entails after - my decision how to behave. Things themselves have no intentions and thus no possibilities. We should not confuse our possibilities of these things for us with their own probabilities of "stochastic determinism". Probability can never be "subjective" in the human sense.

An observant reader may feel in this response a masked dig at the bright answer in this thread, where @amoeba says he thinks "there are almost no frequentists of the [probability definition] kind (P-frequentists)". It could be turned opposite: Bayesian probability definers do not exist as a different class. Because, as I've admitted, Bayesians consider chunks of reality in the same manner as frequentists do - as series of facts; only these facts are not experiments, but rather recollections of "truths" and "arguments". Such forms of knowledge are factual and can only be counted or weighed. The probability they erect is not synthesized as subjective, that is, anticipatory ("Bayesian" to be), unless human expectation (possibility) enters the scene to meddle. And @amoeba anxiously lets it in when he imagines that "the coin will wear off and the Sun will go supernova".
Who are frequentists?
Oh, I've been a frequentist for many's the year,
And I've spent all my time playing the data by ear,
But now I'm returning with Bayes in great store,
And I never will play the frequentist no more.

For it's no nay never, no nay never, no more,
Will I play the frequentist, no never, no more!

I went into a lab where I used to consult,
They gave me some data, said 'p that for us',
I said 'No way, Jose' with a bit of a smile,
P values and evidence just don't reconcile!

Chorus

I said it's your prior that we need to shed light,
And the researcher's eyes opened wide with delight,
He said, 'My prior views are as good as the rest,
And for sure a Bayes factor is what will work best!'

Chorus

I'll go back to my teachers, confess what I've done,
And ask them to pardon their prodigal son,
But when they've forgiven me, as often before,
I never will play the frequentist no more!

Chorus

And it's no, nay never, no nay never no more,
Will I play the frequentist, no never, no more!

Source: A. E. Raftery, in The Bayesian Songbook, edited by B. P. Carlin, at http://www.biostat.umn.edu/. Sung to the traditional folk tune of 'The Wild Rover'. Quoted in Open University M347 Mathematical Statistics, Unit 9.
When combining p-values, why not just averaging?
You can perfectly use the mean $p$-value.

Fisher’s method sets a threshold $s_\alpha$ on $-2 \sum_{i=1}^n \log p_i$, such that if the null hypothesis $H_0$: all $p$-values are $\sim U(0,1)$ holds, then $-2 \sum_i \log p_i$ exceeds $s_\alpha$ with probability $\alpha$. $H_0$ is rejected when this happens. Usually one takes $\alpha = 0.05$ and $s_\alpha$ is given by the $(1-\alpha)$ quantile of $\chi^2(2n)$. Equivalently, one can work on the product $\prod_i p_i$, which is lower than $e^{-s_\alpha/2}$ with probability $\alpha$. Here is, for $n=2$, a graph showing the rejection zone in red (here we use $s_\alpha = 9.49$); the rejection zone has area 0.05.

Now you can choose to work on ${1\over n} \sum_{i=1}^n p_i$ instead, or equivalently on $\sum_i p_i$. You just need to find a threshold $t_\alpha$ such that $\sum_i p_i$ is below $t_\alpha$ with probability $\alpha$; exact computation of $t_\alpha$ is tedious – for $n$ big enough you can rely on the central limit theorem; for $n = 2$, $t_\alpha = (2\alpha)^{1\over 2}$. The following graph shows the rejection zone (area = 0.05 again).

As you can imagine, many other shapes for the rejection zone are possible, and have been proposed. It is not a priori clear which is better – i.e. which has greater power. Let's assume that $p_1$, $p_2$ come from a bilateral (two-sided) $z$-test with non-centrality parameter 1:

> p1 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE )
> p2 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE )

Let's have a look at the scatterplot, with the points for which the null hypothesis is rejected shown in red. The power of Fisher’s product method is approximately

> sum(p1*p2<exp(-9.49/2))/1e4
[1] 0.2245

The power of the method based on the sum of $p$-values is approximately

> sum(p1+p2<sqrt(0.1))/1e4
[1] 0.1963

So Fisher’s method wins – at least in this case.
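As a quick sanity check (my own sketch, not part of the original answer), one can verify by simulation that both rejection regions above indeed have size close to $\alpha = 0.05$ when the null holds, i.e. when $p_1$ and $p_2$ are independent $U(0,1)$:

# Under H0 both p-values are independent U(0,1); both rules should reject about 5% of the time.
set.seed(42)
n_sim <- 1e5
p1 <- runif(n_sim)
p2 <- runif(n_sim)
s_alpha <- qchisq(0.95, df = 4)             # Fisher's threshold for n = 2 (about 9.49)
mean(-2 * (log(p1) + log(p2)) > s_alpha)    # approximately 0.05 : Fisher's product method
mean(p1 + p2 < sqrt(2 * 0.05))              # approximately 0.05 : sum (or mean) of p-values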
When combining p-values, why not just averaging?
What is wrong with summing up all individual $p$-values? As @whuber and @Glen_b argue in the comments, Fisher's method is essentially multiplying all individual $p$-values, and multiplying probabilities is a more natural thing to do than adding them. Still, one can add them up. In fact, precisely this was suggested by Edgington (1972) An additive method for combining probability values from independent experiments (under pay-wall), and is sometimes referred to as Edgington's method. The 1972 paper concludes by claiming that

The additive method is shown to be more powerful than the multiplicative method, having a greater probability than the multiplicative method of yielding significant results when there actually are treatment effects.

but given that the method remains relatively unknown, I suspect that this was at least an oversimplification. E.g. a recent overview, Cousins (2008) Annotated Bibliography of Some Papers on Combining Significances or p-values, does not mention Edgington's method at all, and it seems that this term has never been mentioned on CrossValidated either.

It is easy to come up with various ways of combining $p$-values (I have once come up with one myself and asked why it is never used: Stouffer's Z-score method: what if we sum $z^2$ instead of $z$?), and which method is better is largely an empirical question. Please see @whuber's answer there for an empirical comparison of the statistical power of two different methods in a specific situation; there is a clear winner. So the answer to the general question of why use any "convoluted" method at all is that one can gain power. Zaykin et al (2002) Truncated Product Method for Combining p-values run some simulations and include Edgington's method in the comparison, but I am not sure about the conclusions. One way to visualize all such methods is to draw rejection regions for $n=2$, as @Elvis did in his nice answer (+1). Here is another figure that explicitly includes Edgington's method, from what appears to be a poster, Winkler et al (2013) Non-Parametric Combination for Analyses of Multi-Modal Imaging.

Having said all that, I think there still remains a question of why Edgington's method would (often?) be suboptimal, as its obscurity suggests. Perhaps one reason for the obscurity is that it does not conform to our intuition very well: for $n=2$, if $p_1 = 0.4$ (or higher) then no matter what the value of $p_2$ is, the combined null will not be rejected at $\alpha=0.05$, even if e.g. $p_2 = 0.00000001$ (a small numeric illustration of this is sketched after the quotation below). More generally, summing $p$-values hardly distinguishes very small numbers like e.g. $p=0.001$ from $p=0.00000001$, but the difference in these probabilities is actually huge.

Update. Here is what Hedges and Olkin write about Edgington's method (after reviewing other methods for combining $p$-values) in their Statistical Methods for Meta-Analysis (1985), emphasis mine:

A quite different combined test procedure was proposed by Edgington (1972a,b). Edgington proposed combining $p$-values by taking the sum $$S = p_1 + \cdots + p_k,$$ and gave a tedious but straightforward method for obtaining significance levels for $S$. A large sample approximation to the significance levels of $S$ is given in Edgington (1972b). Although it is a monotone combination procedure and therefore is admissible, Edgington's method is generally thought to be a poor procedure since one large $p$-value can overwhelm many small values that compose the statistic. However, there have been almost no numerical investigations of this procedure.
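To make that $n = 2$ property concrete, here is a small numeric sketch of my own, reusing the threshold $t_\alpha = \sqrt{2\alpha}$ derived in the previous answer:

# Edgington/sum rule for n = 2: reject when p1 + p2 < sqrt(2 * alpha).
alpha   <- 0.05
t_alpha <- sqrt(2 * alpha)                        # about 0.316
p1 <- 0.4
p2 <- 1e-8
(p1 + p2) < t_alpha                               # FALSE: no rejection, however tiny p2 is
-2 * (log(p1) + log(p2)) > qchisq(0.95, df = 4)   # TRUE: Fisher's method rejects easily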
4,172
When combining p-values, why not just averaging?
So if you did three studies of similar sizes and got a p-value of 0.05 on all three occasions, your intuition is that the "true value" should be 0.05? My intuition is different. Multiple similar results would seem to make the significance higher (and therefore the p-values, which are probabilities, should be lower). P-values are not really probabilities. They are statements about the sample distribution of observed values under a particular hypothesis. I believe that it may have given support to the notion that one can misuse them as such. I regret making that assertion.

At any rate, under the null hypothesis of no difference, the chances of getting multiple extreme p-values would seem to be much more unlikely. Every time I see the statement that the p-value is uniformly distributed between 0 and 1 under the null hypothesis, I feel compelled to test it with simulation, and so far the statement seems to hold. I apparently do not think consciously on a logarithmic scale, although at least part of my cerebral neural net must.

If you want to quantify this intuition, the formula you offered (with slight revisions) appears in the Wikipedia page http://en.wikipedia.org/wiki/Fisher%27s_method , and the associated graphic lets you quantify visually and semi-quantitatively the impact of getting two small p-values on the overall significance. For example, reading from the color-coded graphic, two simultaneous p-values of 0.05 would give a synthetic p-value of around 0.02.

You could also investigate the impact on the t-statistics of doubling your sample size. The sample size enters into the sample t-statistic as 1/sqrt(n-1), so you could look at the impact of that factor as a result of going from 50 to 100. (In R:)

plot(1:100, 1/sqrt(1:100), ylim=c(0,1))   # how 1/sqrt(n) shrinks as n grows
abline(h=1/sqrt(c(50,100)))               # mark the values for n = 50 and n = 100

Those two approaches yield different quantitative results, since the ratio of the 1/sqrt(n) values for 50 and 100 is not the same as the ratio of 0.05 to 0.02. Both approaches support my intuition, but to different degrees. Maybe someone else can resolve this discrepancy.

Yet a third approach would be to consider the probability of getting two random draws of "True" when the binomial probability of each draw was 0.05 (an extremely unfair die). That joint event should have a probability of 0.05*0.05 = 0.0025, which could be considered to lie on the "other side" of the Fisher estimate.

I just ran a simulation of 50,000 simultaneous t.tests. If you plot the results, it looks very much like the maps of the cosmic background radiation field... i.e. mostly random.

t1 <- replicate(50000, t.test(rnorm(50))$p.value )   # 50,000 p-values under the null
t2 <- replicate(50000, t.test(rnorm(50))$p.value )   # a second, independent set
table(t1 < 0.05, t2 < 0.05)                          # counts of joint significance
plot(t1, t2, cex=0.1)

#         FALSE  TRUE
#  FALSE  45099  2411
#  TRUE    2380   110

110/(50000-110)
#[1] 0.002204851
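As a quick arithmetic check of the two numbers above (my addition, not part of the original answer): Fisher's statistic for two p-values of 0.05 is $-2(\log 0.05 + \log 0.05) \approx 11.98$, and referring it to a $\chi^2_4$ distribution gives a combined p-value of about 0.0175, consistent with the "around 0.02" read off the graphic; the joint probability of both tests being significant by chance is $0.05^2 = 0.0025$.

pchisq(-2 * (log(0.05) + log(0.05)), df = 4, lower.tail = FALSE)   # ~0.0175, Fisher's combined p
0.05 * 0.05                                                        # 0.0025, probability both are significant under the null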
4,173
How to select a clustering method? How to validate a cluster solution (to warrant the method choice)?
It is often said that no other analytical technique is as much of the "as you sow, so shall you mow" kind as cluster analysis is. I can think of a number of dimensions or aspects of "rightness" of this or that clustering method:

1. Cluster metaphor. "I preferred this method because it constitutes clusters in a way that matches my concept of a cluster in my particular project." Each clustering algorithm or subalgorithm/method implies its corresponding structure/build/shape of a cluster. In regard to hierarchical methods, I've observed this in one of the points here, and also here. That is, some methods give clusters that are prototypically "types", others give "circles [by interest]", still others "[political] platforms", "classes", "chains", etc. Select the method whose cluster metaphor suits you. For example, if I see my customer segments as types - more or less spherical shapes with compaction(s) in the middle - I'll choose Ward's linkage method or K-means, but never the single linkage method, clearly. If I need a focal representative point I could use the medoid method. If I need to screen points for being core or peripheral representatives I could use the DBSCAN approach.

2. Data/method assumptions. "I preferred this method because the nature or format of my data predisposes to it." This important and vast point is also mentioned in my link above. Different algorithms/methods may require different kinds of data or different proximity measures to be applied to the data, and vice versa: different data may require different methods. There are methods for quantitative and methods for qualitative data. A mixture of quantitative and qualitative features dramatically narrows the scope of choice among methods. Ward's or K-means are based - explicitly or implicitly - on the (squared) euclidean distance proximity measure only, and not on an arbitrary measure. Binary data may call for special similarity measures, which in turn will strongly question the use of some methods, for example Ward's or K-means, for them. Big data may need special algorithms or special implementations.

3. Internal validity. "I preferred this method because it gave me the most clear-cut, tight-and-isolated clusters." Choose the algorithm/method that shows the best results for your data from this point of view. The tighter and denser the clusters are inside, and the lower the density outside of them (or the wider apart the clusters are), the greater the internal validity. Select and use appropriate internal clustering criteria (of which there are plenty - Calinski-Harabasz, Silhouette, etc.; sometimes also called "stopping rules") to assess it. [Beware of overfitting: all clustering methods seek to maximize some version of internal validity$^1$ (it's what clustering is about), so high validity may be partly due to a random peculiarity of the given dataset; having a test dataset is always beneficial.] (A small R illustration of points 3 and 4 follows after this answer.)

4. External validity. "I preferred this method because it gave me clusters which differ by their background, or clusters which match the true ones I know." If a clustering partition presents clusters which are clearly different on some important background characteristics (i.e. ones not participating in the cluster analysis), then it is an asset for the method which produced the partition. Use any analysis which applies to check the difference; there also exist a number of useful external clustering criteria (Rand, F-measure, etc.). Another variant of the external validation case is when you somehow know the true clusters in your data (know the "ground truth"), such as when you generated the clusters yourself. Then how accurately your clustering method is able to uncover the real clusters is the measure of external validity.

5. Cross-validity. "I preferred this method because it gives me very similar clusters on equivalent samples of the data, or extrapolates well onto such samples." There are various approaches and their hybrids, some more feasible with some clustering methods, others with other methods. The two main approaches are the stability check and the generalizability check. Checking the stability of a clustering method, one randomly splits or resamples the data into partly intersecting or fully disjoint sets and does the clustering on each; then one matches and compares the solutions with respect to some emergent cluster characteristic (for example, a cluster's central tendency location) to see whether it is stable across the sets. Checking generalizability implies doing the clustering on a training set and then using its emergent cluster characteristic or rule to assign objects of a test set, plus also doing the clustering on the test set. The cluster memberships of the test-set objects from the assignment and from the clustering are then compared.

6. Interpretation. "I preferred this method because it gave me clusters which, explained, are most persuasive that there is meaning in the world." It's not statistical - it is your psychological validation. How meaningful are the results for you, the domain and, possibly, the audience/client? Choose the method giving the most interpretable, spicy results.

7. Gregariousness. Some researchers regularly, and all researchers occasionally, would say: "I preferred this method because it gave, with my data, similar results to a number of other methods among all those I probed." This is a heuristic but questionable strategy which assumes that there exist quite universal data or a quite universal method.

Points 1 and 2 are theoretical and precede obtaining the result; exclusive reliance on these points is the haughty, self-assured exploratory strategy. Points 3, 4 and 5 are empirical and follow the result; exclusive reliance on these points is the fidgety, try-all-out exploratory strategy. Point 6 is creative, which means that it denies any result in order to try to rejustify it. Point 7 is loyal mauvaise foi. Points 3 through 7 can also be judges in your selection of the "best" number of clusters.

$^1$ A concrete internal clustering criterion is itself not "orthogonal to" a clustering method (nor to the data kind). This raises the philosophical question of to what extent such a biased or prejudiced criterion can be of utility (see the answers just noticing it).
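As a concrete and purely illustrative example of points 3 and 4, here is a small R sketch - my addition, not part of the original answer - that scores k-means solutions by average silhouette width (internal validity) and, where a "ground truth" labelling exists, by the adjusted Rand index (external validity). It assumes the cluster and mclust packages are available.

library(cluster)   # silhouette()
library(mclust)    # adjustedRandIndex()

X <- scale(iris[, 1:4])          # standardize features first (see point 2)
d <- dist(X)

# Internal validity: average silhouette width for k = 2..6
for (k in 2:6) {
  km  <- kmeans(X, centers = k, nstart = 25)
  sil <- silhouette(km$cluster, d)
  cat(k, "clusters: average silhouette =", round(mean(sil[, "sil_width"]), 3), "\n")
}

# External validity: agreement of the k = 3 solution with the known species labels
km3 <- kmeans(X, centers = 3, nstart = 25)
adjustedRandIndex(km3$cluster, iris$Species)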
4,174
How to select a clustering method? How to validate a cluster solution (to warrant the method choice)?
These are mostly red-flag criteria: properties of the data that tell you that a certain approach will fail for sure.

- If you have no idea what your data means, stop analyzing it. You are just guessing animals in clouds.
- If attributes vary in scale and are nonlinear or skewed, this can ruin your analysis unless you have a very good idea of appropriate normalization. Stop and learn to understand your features; it is too early to cluster.
- If every attribute is equivalent (same scale) and linear, and you want to quantize your data set (and least-squared error has a meaning for your data), then k-means is worth a try. If your attributes are of different kinds and scales, the result is not well-defined. Counterexample: age and income. Income is very skewed, and "x years = y dollars" is nonsense.
- If you have a very clear idea of how to quantify similarity or distance (in a meaningful way; the ability to compute some number is not enough), then hierarchical clustering and DBSCAN are a good choice. If you don't have any idea how to quantify similarity, solve that problem first.

You see that the most common problem is that people attempt to dump their raw data into clustering, when they first need to understand and normalize it, and figure out similarity.

Examples:

- Pixels of an image in RGB space. Least-squares makes some sense and all attributes are comparable - k-means is a good choice.
- Geographic data: least-squares is not very appropriate; there will be outliers. But distance is very meaningful. Use DBSCAN if you have a lot of noise, or HAC (hierarchical agglomerative clustering) if you have very clean data.
- Species observed in different habitats. Least-squares is dubious, but e.g. Jaccard similarity is meaningful. You probably have only a few observations and no "false" habitats - use HAC.

(A short R sketch after this answer illustrates the scaling and DBSCAN points.)
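To make the "normalize first, then pick a method that matches a meaningful distance" advice concrete, here is a minimal R sketch (my addition; it assumes the dbscan package is installed, and the numbers are made up purely for illustration):

library(dbscan)

# 1) Mixed-scale attributes: scale (and de-skew) before k-means, otherwise income dominates
age    <- rnorm(200, mean = 40, sd = 10)
income <- rlnorm(200, meanlog = 10, sdlog = 1)        # heavily skewed
km_raw    <- kmeans(cbind(age, income), centers = 3)  # effectively clusters on income only
km_scaled <- kmeans(scale(cbind(age, log(income))), centers = 3)

# 2) Spatial data with noise: DBSCAN finds dense regions and labels leftover points as noise (cluster 0)
xy <- rbind(matrix(rnorm(200, 0, 0.3), ncol = 2),
            matrix(rnorm(200, 3, 0.3), ncol = 2),
            matrix(runif(100, -2, 5), ncol = 2))      # uniform background noise
db <- dbscan(xy, eps = 0.4, minPts = 5)
table(db$cluster)                                     # 0 = noise points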
4,175
How to select a clustering method? How to validate a cluster solution (to warrant the method choice)?
I don't think there is a good formal way to do this; I think that the good solutions are the ones that make sense, substantively. Of course, you can try splitting the data and clustering multiple times and so on, but then there is still the question of which solution is useful.
4,176
Why sigmoid function instead of anything else?
Quoting myself from this answer to a different question: In section 4.2 of Pattern Recognition and Machine Learning (Springer 2006), Bishop shows that the logit arises naturally as the form of the posterior probability distribution in a Bayesian treatment of two-class classification. He then goes on to show that the same holds for discretely distributed features, as well as a subset of the family of exponential distributions. For multi-class classification the logit generalizes to the normalized exponential or softmax function. This explains why this sigmoid is used in logistic regression. Regarding neural networks, this blog post explains how different nonlinearities including the logit / softmax and the probit used in neural networks can be given a statistical interpretation and thereby a motivation. The underlying idea is that a multi-layered neural network can be regarded as a hierarchy of generalized linear models; according to this, activation functions are link functions, which in turn correspond to different distributional assumptions.
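For concreteness, the two-class argument can be sketched as follows (my paraphrase of the standard derivation, not a quotation from Bishop). By Bayes' theorem,
$$ P(C_1 \mid x) = \frac{p(x \mid C_1)P(C_1)}{p(x \mid C_1)P(C_1) + p(x \mid C_2)P(C_2)} = \frac{1}{1 + e^{-a}} = \sigma(a), \qquad a = \ln \frac{p(x \mid C_1)P(C_1)}{p(x \mid C_2)P(C_2)}. $$
When the class-conditional densities $p(x \mid C_k)$ are Gaussians with a shared covariance matrix (or, more generally, suitable exponential-family members with shared scale), the log-odds $a$ is a linear function of $x$, which is exactly the logistic regression model $\sigma(w^T x + b)$.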
4,177
Why sigmoid function instead of anything else?
I have asked myself this question for months. The answers on CrossValidated and Quora all list nice properties of the logistic sigmoid function, but it all seems like we cleverly guessed this function. What I missed was the justification for choosing it. I finally found one in section 6.2.2.2 of the "Deep Learning" book by Goodfellow, Bengio and Courville (2016). In my own words:

In short, we want the logarithm of the model's output to be suitable for gradient-based optimization of the log-likelihood of the training data.

Motivation

We want a linear model, but we can't use $z = w^T x + b$ directly, as $z \in (-\infty, +\infty)$. For classification, it makes sense to assume the Bernoulli distribution and model its parameter $\theta$ in $P(Y=1) = \theta$. So, we need to map $z$ from $(-\infty, +\infty)$ to $[0, 1]$ to do classification.

Why the logistic sigmoid function?

Cutting off $z$ with $P(Y=1|z) = \max\{0, \min\{1, z\}\}$ yields a zero gradient for $z$ outside of $[0, 1]$. We need a strong gradient whenever the model's prediction is wrong, because we solve logistic regression with gradient descent; there is no closed-form solution. The logistic function has the nice property that, under Maximum Likelihood Estimation, the cost has an asymptotically constant gradient whenever the model's prediction is wrong. This is shown below.

For numerical benefits, Maximum Likelihood Estimation can be done by minimizing the negative log-likelihood of the training data. So, our cost function is:

$$ \begin{align} J(w, b) &= \frac{1}{m} \sum_{i=1}^m -\log P(Y = y_i | x_i; w, b) \\ &= \frac{1}{m} \sum_{i=1}^m - \big(y_i \log P(Y=1 | z) + (1-y_i)\log P(Y=0 | z)\big) \end{align}$$

Since $P(Y=0 | z) = 1-P(Y=1|z)$, we can focus on the $Y=1$ case. So, the question is how to model $P(Y=1 | z)$ given that we have $z = w^T x + b$. The obvious requirements for the function $f$ mapping $z$ to $P(Y=1 | z)$ are:

- $\forall z \in \mathbb{R}: f(z) \in [0, 1]$
- $f(0) = 0.5$
- $f$ should be rotationally symmetric w.r.t. $(0, 0.5)$, i.e. $f(-x) = 1-f(x)$, so that flipping the signs of the classes has no effect on the cost function.
- $f$ should be non-decreasing, continuous and differentiable.

These requirements are all fulfilled by (suitably rescaled) sigmoid functions. Both $f(z) = \frac{1}{1 + e^{-z}}$ and $f(z) = 0.5 + 0.5 \frac{z}{1+|z|}$ fulfill them. However, sigmoid functions differ with respect to their behavior during gradient-based optimization of the log-likelihood. We can see the difference by plugging the logistic function $f(z) = \frac{1}{1 + e^{-z}}$ into our cost function.

Saturation for $Y=1$

For $P(Y=1|z) = \frac{1}{1 + e^{-z}}$ and $Y=1$, the cost of a single sample with label $Y=1$ (i.e. $m=1$) is:

$$ \begin{align} J(z) &= -\log(P(Y=1|z)) \\ &= -\log\left(\frac{1}{1 + e^{-z}}\right) \\ &= -\log\left(\frac{e^z}{1+e^z}\right) \\ &= -z + \log(1 + e^z) \end{align} $$

We can see that there is a linear component $-z$. Now, we can look at two cases:

- When $z$ is large, the model's prediction was correct, since $Y=1$. In the cost function, the $\log(1 + e^z)$ term asymptotes to $z$ for large $z$. Thus, it roughly cancels out the $-z$, leading to a roughly zero cost for this sample and a weak gradient. That makes sense, as the model is already predicting the correct class.
- When $z$ is very negative (so $|z|$ is large), the model's prediction was not correct, since $Y=1$. In the cost function, the $\log(1 + e^z)$ term asymptotes to $0$. Thus, the overall cost for this sample is roughly $-z$, meaning the gradient w.r.t. $z$ is roughly $-1$. This makes it easy for the model to correct its wrong prediction based on the constant gradient it receives. Even for very negative $z$, there is no saturation going on, which would cause vanishing gradients.

Saturation for $Y=0$

Above, we focused on the $Y=1$ case. For $Y=0$, the cost function behaves analogously, providing strong gradients only when the model's prediction is wrong. This is the cost function $J(z)$ for $Y=1$: it is the horizontally flipped softplus function. For $Y=0$, it is the softplus function.

Alternatives

You mentioned alternatives to the logistic sigmoid function, for example $\frac{z}{1+|z|}$. Normalized to $[0,1]$, this would mean that we model $P(Y=1|z) = 0.5 + 0.5 \frac{z}{1+|z|}$. During MLE, the cost function for $Y=1$ would then be $J(z) = - \log \left(0.5 + 0.5 \frac{z}{1+|z|}\right)$, which looks like this: you can see that the gradient of the cost function gets weaker and weaker for $z \rightarrow - \infty$.
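To see the difference directly, here is a tiny R sketch (my addition) that plots the $Y=1$ cost for the logistic choice, $-z + \log(1+e^z)$, against the cost for the $\frac{z}{1+|z|}$ alternative; the former keeps a slope of about $-1$ as $z \rightarrow -\infty$, while the latter flattens out.

logistic_cost <- function(z) log1p(exp(z)) - z                  # = -log(sigmoid(z)), a flipped softplus
ratio_cost    <- function(z) -log(0.5 + 0.5 * z / (1 + abs(z))) # cost for the z/(1+|z|) sigmoid
curve(logistic_cost(x), from = -10, to = 10, ylab = "cost for Y = 1", lty = 1)
curve(ratio_cost(x), from = -10, to = 10, add = TRUE, lty = 2)
legend("topright", legend = c("logistic", "z/(1+|z|)"), lty = 1:2)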
4,178
Why sigmoid function instead of anything else?
One reason this function might seem more "natural" than others is that it happens to be the inverse of the canonical parameter of the Bernoulli distribution: \begin{align} f(y) &= p^y (1 - p)^{1 - y} \\ &= (1 - p) \exp \left \{ y \log \left ( \frac{p}{1 - p} \right ) \right \} . \end{align} (The function of $p$ within the exponent is called the canonical parameter.) Maybe a more compelling justification comes from information theory, where the sigmoid function can be derived as a maximum entropy model. Roughly speaking, the sigmoid function assumes minimal structure and reflects our general state of ignorance about the underlying model.
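Spelling out the inversion that the answer leaves implicit: setting $\theta = \log \frac{p}{1-p}$ and solving for $p$ gives
$$ p = \frac{e^{\theta}}{1 + e^{\theta}} = \frac{1}{1 + e^{-\theta}}, $$
which is exactly the logistic sigmoid.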
4,179
Why sigmoid function instead of anything else?
Since the original question mentioned the decaying gradient problem, I'd just like to add that, for intermediate layers (where you don't need to interpret activations as class probabilities or regression outputs), other nonlinearities are often preferred over sigmoidal functions. The most prominent are rectifier functions (as in ReLUs), which are linear over the positive domain and zero over the negative. One of their advantages is that they're less subject to the decaying gradient problem, because the derivative is constant over the positive domain. ReLUs have become popular to the point that sigmoids probably can't be called the de facto standard anymore. Glorot et al. (2011). Deep sparse rectifier neural networks
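A brief illustration in R (my addition) of why the gradients behave differently: the derivative of the logistic function is at most 0.25 and vanishes for large $|z|$, while the rectifier's derivative is a constant 1 over the positive domain.

sigmoid <- function(z) 1 / (1 + exp(-z))
sigmoid_grad <- function(z) sigmoid(z) * (1 - sigmoid(z))  # at most 0.25, vanishes for large |z|
relu_grad <- function(z) as.numeric(z > 0)                 # constant 1 over the positive domain
curve(sigmoid_grad(x), from = -8, to = 8, ylab = "derivative")
curve(relu_grad(x), from = -8, to = 8, add = TRUE, lty = 2)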
4,180
What is the difference between N and N-1 in calculating population variance?
Instead of going into the maths, I'll try to put it in plain words. If you have the whole population at your disposal, then its variance (population variance) is computed with the denominator N. Likewise, if you have only a sample and want to compute this sample's variance, you use the denominator N (the n of the sample, in this case). In both cases, note, you don't estimate anything: the mean that you measured is the true mean and the variance you computed from that mean is the true variance.

Now, you have only a sample and want to infer about the unknown mean and variance in the population. In other words, you want estimates. You take your sample mean as the estimate of the population mean (because your sample is representative), OK. To obtain an estimate of the population variance, you have to pretend that that mean is really the population mean and therefore it is no longer dependent on your sample from the moment you computed it. To "show" that you now take it as fixed, you reserve one (any) observation from your sample to "support" the mean's value: whatever your sample might have turned out to be, one reserved observation could always bring the mean to the value that you've got and which you believe is insensitive to sampling contingencies. One reserved observation is the "-1", and so you have N-1 in computing the variance estimate. The unbiased estimate is called the sample variance (not to be confused with the sample's variance), which is an argot; it is better to call it what it is: the sample-based unbiased estimate of the population variance, computed with the sample's mean.

[Pasting here from my comments below: Imagine you are repeatedly taking samples of size N=3. Of the 3 values in a sample, only 2 values express the random deviation of observations from the population mean, while the remaining one expresses (takes on itself) the shift of the sample's mean from the population mean. Thus the observational variability has 2 "free" degrees out of the 3, in each separate sample. When we estimate variability on a sample but want it to be an unbiased (unshifted) estimate of the population variability, we "believe" only those 2 free observations. We "pay" for the decision to measure variability off the sample mean as if it were the population mean, because we need to infer about the population variability. This "fee" (the N-1 denominator, the Bessel correction) makes the variability wider, incorporating the oscillation of sample means within the variance, but it makes such a variance an unbiased estimator.]

But imagine now that you somehow know the true population mean, yet want to estimate the variance from the sample. Then you will substitute that true mean into the formula for variance and apply the denominator N: no "-1" is needed here since you know the true mean, you didn't estimate it from this same sample.
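A quick simulation (my addition, not part of the original answer) makes the point about the N-1 denominator tangible: with samples of size N=3 from a distribution whose true variance is 1, dividing by N underestimates the variance by the factor (N-1)/N = 2/3 on average, dividing by N-1 is unbiased, and dividing by N while using the known true mean is also unbiased.

set.seed(1)
n <- 3; reps <- 1e5
x <- matrix(rnorm(n * reps), nrow = reps)                          # true mean 0, true variance 1
v_n    <- apply(x, 1, function(s) sum((s - mean(s))^2) / n)        # denominator N, sample mean
v_nm1  <- apply(x, 1, function(s) sum((s - mean(s))^2) / (n - 1))  # denominator N-1 (what var() does)
v_true <- apply(x, 1, function(s) sum((s - 0)^2) / n)              # denominator N, true mean known
c(mean(v_n), mean(v_nm1), mean(v_true))                            # roughly 2/3, 1, 1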
4,181
What is the difference between N and N-1 in calculating population variance?
$N$ is the population size and $n$ is the sample size. The question asks why the population variance is the mean squared deviation from the mean rather than $(N-1)/N = 1-(1/N)$ times it. For that matter, why stop there? Why not multiply the mean squared deviation by $1-2/N$, or $1-17/N$, or $\exp(-1/N)$, for instance? There actually is a good reason not to. Any of these figures I just mentioned would serve just fine as a way to quantify a "typical spread" within the population. However, without prior knowledge of the population size, it would be impossible to use a random sample to find an unbiased estimator of such a figure. We know that the sample variance, which multiplies the mean squared deviation from the sample mean by $n/(n-1)$, is an unbiased estimator of the usual population variance when sampling with replacement. (There is no problem with making this correction, because we know $n$!) The sample variance would therefore be a biased estimator of any multiple of the population variance where that multiple, such as $1-1/N$, is not exactly known beforehand. This problem of some unknown amount of bias would propagate to all statistical tests that use the sample variance, including t-tests and F-tests. In effect, dividing by anything other than $N$ in the population variance formula would require us to change all statistical tabulations of t-statistics and F-statistics (and many other tables as well), but the adjustment would depend on the population size. Nobody wants to have to make tables for every possible $N$! Especially when it's not necessary. As a practical matter, when $N$ is small enough that using $N-1$ instead of $N$ in formulas makes a difference, you usually do know the population size (or can guess it accurately) and you would likely resort to much more substantial small-population corrections when working with random samples (without replacement) from the population. In all other cases, who cares? The difference doesn't matter. For these reasons, guided by pedagogical considerations (namely, of focusing on details that matter and glossing over details that don't), some excellent introductory statistics texts don't even bother to teach the difference: they simply provide a single variance formula (divide by $N$ or $n$ as the case may be).
4,182
What is the difference between N and N-1 in calculating population variance?
There has, in the past, been an argument that you should use N for a non-inferential variance, but I wouldn't recommend that anymore. You should always use N-1. As sample size decreases, N-1 is a pretty good correction for the fact that the sample variance gets lower (you're just more likely to sample near the peak of the distribution---see figure). If the sample size is really big then it doesn't matter any meaningful amount. An alternative explanation is that the population is a theoretical construct that's impossible to achieve. Therefore, always use N-1 because, whatever you're doing, you're at best estimating the population variance. Also, you're going to be seeing N-1 for variance estimates from here on in. You'll likely not ever encounter this issue... except on a test, when your teacher might ask you to make a distinction between an inferential and non-inferential variance measure. In that case don't use whuber's answer or mine; refer to ttnphns's answer. Note, in the figure the variance should be close to 1. Look how much it varies with sample size when you use N to estimate the variance. (This is the "bias" referred to elsewhere.)
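The figure referred to above is not reproduced here, but a rough R sketch along the following lines (my own illustration, not the original figure's code) shows the same pattern: with the true variance equal to 1, dividing by N underestimates it, and the shortfall grows as the sample gets smaller, while the N-1 version stays close to 1:

    set.seed(2)
    sizes <- 2:20
    divide.by.n <- sapply(sizes, function(n)
      mean(replicate(2e4, { x <- rnorm(n); mean((x - mean(x))^2) })))
    divide.by.n.minus.1 <- sapply(sizes, function(n)
      mean(replicate(2e4, { x <- rnorm(n); var(x) })))
    round(cbind(n = sizes, divide.by.n, divide.by.n.minus.1), 3)
    # the divide-by-n column tracks (n - 1)/n; the n - 1 column stays near 1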
4,183
What is the difference between N and N-1 in calculating population variance?
Generally, when one has only a fraction of the population, i.e. a sample, you should divide by n-1. There is a good reason to do so: we know that the sample variance, which multiplies the mean squared deviation from the sample mean by n/(n−1), is an unbiased estimator of the population variance. You can find a proof that the sample variance is an unbiased estimator here: https://economictheoryblog.com/2012/06/28/latexlatexs2/ Further, if one were to apply the estimator of the population variance, that is, the version of the variance estimator that divides by n, to a sample instead of the population, the obtained estimate would be biased.
4,184
What is the difference between N and N-1 in calculating population variance?
The population variance is the sum of the squared deviations of all of the values in the population divided by the number of values in the population. When we are estimating the variance of a population from a sample, though, we encounter the problem that the deviations of the sample values from the mean of the sample are, on average, a little less than the deviations of those sample values from the (unknown) true population mean. That results in a variance calculated from the sample being a little less than the true population variance. Using an n-1 divisor instead of n corrects for that underestimation.
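The underestimation described here is easy to see directly: squared deviations measured from the sample mean are, on average, smaller than squared deviations measured from the true mean. A minimal R sketch with standard normal data (true mean 0, true variance 1):

    set.seed(3)
    n <- 5
    msd.sample.mean <- replicate(5e4, { x <- rnorm(n); mean((x - mean(x))^2) })
    msd.true.mean   <- replicate(5e4, { x <- rnorm(n); mean(x^2) })
    mean(msd.sample.mean)                # about (n - 1)/n = 0.8
    mean(msd.true.mean)                  # about 1
    mean(msd.sample.mean) * n / (n - 1)  # the n - 1 correction restores roughly 1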
4,185
Where does the misconception that Y must be normally distributed come from?
'Y must be normally distributed' must? In the cases that you mention it is sloppy language (abbreviating 'the error in Y must be normally distributed'), but they don't really (strongly) say that the response must be normally distributed, or at least it does not seem to me that their words were intended like that. The Penn State course material speaks about "a continuous variable $Y$", but also about "$Y_i$" as in $$E(Y_i) = \beta_0 + \beta_1 x_i$$ where we could regard $Y_i$, which is (as amoeba noted in the comments) 'conditional', as normally distributed, $$Y_i \sim N(\beta_0 + \beta_1x_i,\sigma^2)$$ The article uses $Y$ and $Y_i$ interchangeably. Throughout the entire article one speaks about the 'distribution of Y', for instance: when explaining some variant of GLM (binary logistic regression), Random component: The distribution of $Y$ is assumed to be $Binomial(n,\pi)$,... in some definition Random Component – refers to the probability distribution of the response variable ($Y$); e.g. normal distribution for $Y$ in the linear regression, or binomial distribution for $Y$ in the binary logistic regression. However, at some other point they also refer to $Y_i$ instead of $Y$: The dependent variable $Y_i$ does NOT need to be normally distributed, but it typically assumes a distribution from an exponential family (e.g. binomial, Poisson, multinomial, normal,...) The statisticssolutions webpage is an extremely brief, simplified, stylized description. I am not sure you should take this seriously. For instance, it speaks about ..requires all variables to be multivariate normal... so that is not just the response variable, and also the 'multivariate' descriptor is vague. I am not sure how that should be interpreted. The Wikipedia article has an additional context explained in brackets: Ordinary linear regression predicts the expected value of a given unknown quantity (the response variable, a random variable) as a linear combination of a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (i.e. a linear-response model). This is appropriate when the response variable has a normal distribution (intuitively, when a response variable can vary essentially indefinitely in either direction with no fixed "zero value", or more generally for any quantity that only varies by a relatively small amount, e.g. human heights). This 'no fixed zero value' seems to point to the case that a linear combination $y+\epsilon$ with $\epsilon \sim N(0,\sigma)$ has an infinite domain (from minus infinity to plus infinity), whereas many variables often have some finite cut-off value (such as counts not allowing negative values). That particular line was added on March 8, 2012, but note that the first line of the Wikipedia article still reads "a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution" and is not so much (or at least not everywhere) wrong. Conclusion: So, based on these three examples (which indeed could generate misconceptions, or at least could be misunderstood) I would not say that "this misconception has spread". Or at least it does not seem to me that the intention of those three examples is to argue that Y must be normally distributed (although I do remember this issue has arisen before here on Stack Exchange; the swap between normally distributed errors and normally distributed response variable is easy to make).
So, the assumption that 'Y must be normally distributed' seems to me not like a widespread belief/misconception (as in something that spreads like a red herring), but more like a common error (which is not spread but made independently each time). Additional comment: An example of the mistake on this website is in the following question: What if residuals are normally distributed, but y is not? I would consider this a beginner's question. It is not present in materials like the Penn State course material, the Wikipedia article, and, as recently noted in the comments, the book 'Extending the Linear Model with R'. The writers of those works do correctly understand the material. Indeed, they use phrases such as 'Y must be normally distributed', but based on the context and the formulas used you can see that they all mean 'Y, conditional on X, must be normally distributed' and not 'the marginal Y must be normally distributed'. They are not misconceiving the idea themselves, and at least the idea is not widespread among statisticians and people who write books and other course materials. But misreading their ambiguous words may indeed cause the misconception.
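To make the conditional-versus-marginal distinction concrete, here is a small R sketch (my own example, not taken from any of the sources discussed above): the errors, and hence Y given X, are exactly normal, while the marginal distribution of Y is strongly bimodal, yet the regression assumptions are perfectly satisfied:

    set.seed(4)
    x <- c(rnorm(500, -5), rnorm(500, 5))   # x itself is bimodal
    y <- 2 + 3 * x + rnorm(1000)            # y given x is exactly normal
    fit <- lm(y ~ x)
    hist(y)                    # marginal distribution of y: clearly bimodal
    hist(resid(fit))           # residuals (estimates of the errors): bell-shaped
    shapiro.test(resid(fit))   # no evidence against normality of the residuals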
4,186
Where does the misconception that Y must be normally distributed come from?
Is there a good explanation for how/why this misconception has spread? Is its origin known? We generally teach undergraduates a "simplified" version of statistics in many disciplines. I am in psychology, and when I try to tell undergraduates that p-values are "the probability of the data—or more extreme data—given that the null hypothesis is true," colleagues tell me that I am covering more detail than I need to cover. That I am making it more difficult than it has to be, etc. Since students in classes have such a wide range of comfort (or lack thereof) with statistics, instructors generally keep it simple: "We consider it to be a reliable finding if p < .05," for example, instead of giving them the actual definition of a p-value. I think this is where the explanation lies for why the misconception has spread. For instance, you can write the model as: $Y = \beta_0 + \beta_1X + \epsilon$ where $\epsilon \sim \text{N}(0, \sigma^2_\epsilon)$ This can be re-written as: $Y|X \sim \text{N}(\beta_0 + \beta_1X, \sigma^2_\epsilon)$ Which means that "Y, conditional on X, is normally distributed with a mean of the predicted values and some variance." This is difficult to explain, so as shorthand people might just say: "Y must be normally distributed." Or, when it was explained to them originally, people misunderstood the conditional part—since it is, honestly, confusing. So in an effort to not make things terribly complicated, instructors just simplify what they are saying so as not to overly confuse most students. And then people continue on in their statistical education or statistical practice with that misconception. I myself didn't fully understand the concept until I started doing Bayesian modeling in Stan, which requires you to write your assumptions in this way:

    model {
      vector[n_obs] yhat;
      for(i in 1:n_obs) {
        yhat[i] = beta[1] + beta[2] * x1[i] + beta[3] * x2[i];
      }
      y ~ normal(yhat, sigma);
    }

Also, in a lot of statistical packages with a GUI (looking at you, SPSS), it is easier to check if the marginal distribution is normally distributed (simple histogram) than it is to check if the residuals are normally distributed (run the regression, save the residuals, run a histogram on those residuals). Thus, I think the misconception is mainly due to instructors trying to shave off details to keep students from getting confused, genuine—and understandable—confusion among people learning it the correct way, and both of these reinforced by the ease of checking marginal normality in the most user-friendly statistical packages.
4,187
Where does the misconception that Y must be normally distributed come from?
Regression analysis is difficult for beginners because there are different results that are implied by different starting assumptions. Weaker starting assumptions can justify some of the results, but you can get stronger results when you add stronger assumptions. People who are unfamiliar with the full mathematical derivation of the results can often misunderstand the required assumptions for a result, either by posing their model too weakly to get a required result, or posing some unnecessary assumptions in the belief that these are required for a result. Although it is possible to add stronger assumptions to get additional results, regression analysis concerns itself with the conditional distribution of the response vector. If a model goes beyond this then it is entering the territory of multivariate analysis, and is not strictly (just) a regression model. The matter is further complicated by the fact that it is common to refer to distributional results in regression without always being careful to specify that they are conditional distributions (given the explanatory variables in the design matrix). In cases where models go beyond conditional distributions (by assuming a marginal distribution for the explanatory vectors) the user should be careful to specify this difference; unfortunately people are not always careful with this. Homoskedastic linear regression model: The earliest starting point that is usually used is to assume the model form and first two error-moments without any assumption of normality at all: $$\boldsymbol{Y} = \boldsymbol{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}\quad \quad \mathbb{E}(\boldsymbol{\varepsilon} | \boldsymbol{x}) = \boldsymbol{0} \quad \quad \mathbb{V}(\boldsymbol{\varepsilon} | \boldsymbol{x}) \propto \boldsymbol{I}.$$ This setup is sufficient to allow you to obtain the OLS estimator for the coefficients, the unbiased estimator for the error variance, the residuals, and the moments of all these random quantities (conditional on the explanatory variables in the design matrix). It does not allow you to get the full conditional distribution of these quantities, but it does allow for appeal to asymptotic distributions if $n$ is large and some additional assumptions are placed on the limiting behaviour of $\boldsymbol{x}$. To go further it is common to assume a specific distributional form for the error vector. Normal errors: Most treatments of the homoskedastic linear regression model assume that the error vector is normally distributed, which in combination with the moment assumptions gives: $$\boldsymbol{\varepsilon} | \boldsymbol{x} \sim \text{N}(\boldsymbol{0}, \sigma^2 \boldsymbol{I}).$$ This additional assumption is sufficient to ensure that the OLS estimator for the coefficients is the MLE for the model, and it also means that the coefficient estimator and residuals are normally distributed and the estimator for the error variance has a scaled chi-squared distribution (all conditional on the explanatory variables in the design matrix). It also ensures that the response vector is conditionally normally distributed. This gives distributional results conditional on the explanatory variables in the analysis, which allows the construction of confidence intervals and hypothesis tests. If the analyst wants to make findings about the marginal distribution of the response, they need to go further and assume a distribution for the explanatory variables in the model. 
Jointly-normal explanatory variables: Some treatments of the homoscedastic linear regression model go further than standard treatments, and do not condition on fixed explanatory variables. (Arguably this is a transition out of regression modelling and into multivariate analysis.) The most common model of this kind assumes that the explanatory vectors are IID joint-normal random vectors. Letting $\boldsymbol{X}_{(i)}$ be the $i$th explanatory vector (the $i$th row of the design matrix) we have: $$\boldsymbol{X}_{(1)}, ..., \boldsymbol{X}_{(n)} \sim \text{IID N}(\boldsymbol{\mu}_X, \boldsymbol{\Sigma}_X).$$ This additional assumption is sufficient to ensure that the response vector is marginally normally distributed. This is a strong assumption and it is usually not imposed in most problems. As stated, this takes the model outside the territory of regression modelling and into multivariate analysis.
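As a small illustration of the 'appeal to asymptotic distributions' available under the moment-only assumptions above: even with markedly non-normal errors, the sampling distribution of the OLS slope is close to normal once n is moderately large. A hedged R sketch (the exponential errors and the two sample sizes are arbitrary choices for illustration):

    set.seed(5)
    slope <- function(n) {
      x <- runif(n)
      e <- rexp(n) - 1            # skewed, mean-zero errors; nothing normal here
      y <- 1 + 2 * x + e
      coef(lm(y ~ x))[2]
    }
    b.small <- replicate(1e4, slope(10))
    b.large <- replicate(1e4, slope(200))
    qqnorm(b.small); qqline(b.small)   # noticeably non-normal tails
    qqnorm(b.large); qqline(b.large)   # close to a straight line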
4,188
Relationship between $R^2$ and correlation coefficient
It is true that $SS_{tot}$ will change ... but you forgot the fact that the regression sum of squares will change as well. So let's consider the simple regression model and denote the correlation coefficient as $r_{xy}^2=\dfrac{S_{xy}^2}{S_{xx}S_{yy}}$, where I used the sub-index $xy$ to emphasize the fact that $x$ is the independent variable and $y$ is the dependent variable. Obviously, $r_{xy}^2$ is unchanged if you swap $x$ with $y$. We can easily show that $SSR_{xy}=S_{yy}R_{xy}^2$, where $SSR_{xy}$ is the regression sum of squares and $S_{yy}$ is the total sum of squares when $x$ is the independent and $y$ the dependent variable. Therefore: $$R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}=\dfrac{S_{yy}-SSE_{xy}}{S_{yy}},$$ where $SSE_{xy}$ is the corresponding residual sum of squares when $x$ is the independent and $y$ the dependent variable. Note that in this case we have $SSR_{xy}=b^2_{xy}S_{xx}=\dfrac{S^2_{xy}}{S_{xx}}$ with $b_{xy}=\dfrac{S_{xy}}{S_{xx}}$, and hence $SSE_{xy}=S_{yy}-\dfrac{S^2_{xy}}{S_{xx}}$ (See e.g. Eq. (34)-(41) here.) Therefore: $$R_{xy}^2=\dfrac{S_{yy}-\left(S_{yy}-\dfrac{S^2_{xy}}{S_{xx}}\right)}{S_{yy}}=\dfrac{S^2_{xy}}{S_{xx}S_{yy}}.$$ Clearly this last expression is symmetric with respect to $x$ and $y$. In other words: $$R_{xy}^2=R_{yx}^2.$$ To summarize, when you swap $x$ and $y$ in the simple regression model, both the numerator and the denominator of $R_{xy}^2=\dfrac{SSR_{xy}}{S_{yy}}$ change in such a way that $R_{xy}^2=R_{yx}^2.$
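A quick numerical check of this symmetry in R (arbitrary simulated data):

    set.seed(6)
    x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
    summary(lm(y ~ x))$r.squared   # regress y on x
    summary(lm(x ~ y))$r.squared   # regress x on y: same value
    cor(x, y)^2                    # both equal the squared correlation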
4,189
Relationship between $R^2$ and correlation coefficient
One way of interpreting the coefficient of determination $R^{2}$ is to look at it as the squared Pearson correlation coefficient between the observed values $y_{i}$ and the fitted values $\hat{y}_{i}$. The complete proof of how to derive the coefficient of determination $R^2$ from the squared Pearson correlation coefficient between the observed values $y_i$ and the fitted values $\hat{y}_i$ can be found under the following link: http://economictheoryblog.wordpress.com/2014/11/05/proof/ In my eyes it should be pretty easy to understand; just follow the individual steps. I guess looking at it is essential to understand how the relationship between the two key figures actually works.
4,190
Relationship between $R^2$ and correlation coefficient
In the case of simple linear regression with only one predictor, $R^2 = r^2 = Corr(x,y)^2$. But in multiple linear regression with more than one predictor, the concept of correlation between the predictors and the response does not extend automatically. The formula becomes: $$R^2 = Corr(y_{estimated},y_{observed})^2$$ That is, the square of the correlation between the response and the fitted values from the linear model.
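A short R check of this identity with two predictors (simulated data chosen arbitrarily):

    set.seed(7)
    x1 <- rnorm(200); x2 <- rnorm(200)
    y <- 1 + 2 * x1 - x2 + rnorm(200)
    fit <- lm(y ~ x1 + x2)
    summary(fit)$r.squared    # multiple R^2
    cor(fitted(fit), y)^2     # squared correlation between fitted and observed: identical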
4,191
Relationship between $R^2$ and correlation coefficient
@Stat has provided a detailed answer. In my short answer I'll show briefly, in a somewhat different way, what is the similarity and difference between $r$ and $r^2$. $r$ is the standardized regression coefficient beta of $Y$ by $X$ or of $X$ by $Y$ and as such, it is a measure of the (mutual) effect size. This is most clearly seen when the variables are dichotomous. Then $r$, for example, $.30$ means that 30% of cases will change their value to the opposite in one variable when the other variable changes its value to the opposite. $r^2$, on the other hand, is the expression of the proportion of co-variability in the total variability: $r^2 = (\frac {cov}{\sigma_x \sigma_y})^2 = \frac {|cov|} {\sigma_x^2} \frac {|cov|} {\sigma_y^2}$. Note that this is a product of two proportions, or, more precisely, two ratios (a ratio can be >1). If we loosely take any proportion or ratio to be a quasi-probability or propensity, then $r^2$ expresses "joint probability (propensity)". Another, equally valid, expression for the joint product of two proportions (or ratios) would be their geometric mean, $\sqrt{prop*prop}$, which is simply $|r|$. (The two ratios are multiplicative, not additive, to stress the idea that they collaborate and cannot compensate for each other in their teamwork. They have to be multiplicative because the magnitude of $cov$ is dependent on both magnitudes $\sigma_x^2$ and $\sigma_y^2$ and, accordingly, $cov$ has to be divided twice at once - in order to convert itself to a proper "proportion of the shared variance". But $cov$, the "cross-variance", shares the same measurement units with both $\sigma_x^2$ and $\sigma_y^2$, the "self-variances", and not with $\sigma_x \sigma_y$, the "hybrid variance"; that is why $r^2$, not $r$, is more adequate as the "proportion of shared variance".) So, you see that the meaning of $r$ and $r^2$ as a measure of the quantity of the association is different (both meanings valid), but still these coefficients in no way contradict each other. And both are the same whether you predict $Y \sim X$ or $X \sim Y$.
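A small numerical illustration of this decomposition (any bivariate data will do; R's cov, var and cor all use the same n - 1 divisor, so the ratios match up):

    set.seed(8)
    x <- rnorm(50); y <- 0.5 * x + rnorm(50)
    cxy <- cov(x, y)
    (abs(cxy) / var(x)) * (abs(cxy) / var(y))        # product of the two ratios
    cor(x, y)^2                                      # equals r^2
    sqrt((abs(cxy) / var(x)) * (abs(cxy) / var(y)))  # geometric mean of the ratios
    abs(cor(x, y))                                   # equals |r|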
4,192
Relationship between $R^2$ and correlation coefficient
I think you might be mistaken. If $R^2=r^2$, I assume you have a bivariate model: one DV, one IV. I don't think $R^2$ will change if you swap these, nor if you replace the IV with the predictions of the DV that are based on the IV. Here's code for a demonstration in R:

    x=rnorm(1000); y=rnorm(1000)               # store random data
    summary(lm(y~x))                           # fit a linear regression model (a)
    summary(lm(x~y))                           # swap variables and fit the opposite model (b)
    z=lm(y~x)$fitted.values; summary(lm(y~z))  # substitute predictions for IV in model (a)

If you aren't working with a bivariate model, your choice of DV will affect $R^2$...unless your variables are all identically correlated, I suppose, but this isn't much of an exception. If all the variables have identical strengths of correlation and also share the same portions of the DV's variance (e.g. [or maybe "i.e."], if some of the variables are completely identical), you could just reduce this to a bivariate model without losing any information. Whether you do or don't, $R^2$ still wouldn't change. In all other cases I can think of with more than two variables, $R^2\ne r^2$ where $R^2$ is the coefficient of determination and $r$ is a bivariate correlation coefficient of any kind (not necessarily Pearson's; e.g., possibly also a Spearman's $\rho$).
4,193
Relationship between $R^2$ and correlation coefficient
If the prediction is not a projection onto the space spanned by the independent variables, the first definition is wrong. It can even be negative. The second definition is the same as the first if the prediction comes from a linear regression; otherwise they are not the same. That said, the second (squared-correlation) definition always has meaning and ranges between zero and one.
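To see the two definitions diverge, take predictions that are not a least-squares projection, for example from a deliberately offset rule; a small R sketch, assuming (as the question seems to) that the 'first definition' is $1 - SSE/SST$ and the 'second' is the squared correlation between predictions and observations:

    set.seed(9)
    x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
    pred <- 10 + 2 * x                        # not a least-squares fit: badly offset
    sse <- sum((y - pred)^2); sst <- sum((y - mean(y))^2)
    1 - sse / sst      # first definition: large and negative here
    cor(pred, y)^2     # second definition: still between 0 and 1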
4,194
Is there any gold standard for modeling irregularly spaced time series?
If the observations of a stochastic process are irregularly spaced, the most natural way to model the observations is as discrete time observations from a continuous time process. What is generally needed of a model specification is the joint distribution of the observations $X_{1}, \ldots, X_n$ observed at times $t_1 < t_2 < \ldots < t_n$, and this can, for instance, be broken down into conditional distributions of $X_{i}$ given $X_{i-1}, \ldots, X_1$. If the process is a Markov process, this conditional distribution depends on $X_{i-1}$ $-$ not on $X_{i-2}, \ldots, X_1$ $-$ and it depends on $t_i$ and $t_{i-1}$. If the process is time-homogeneous, the dependence on the time points is only through their difference $t_i - t_{i-1}$. We see from this that if we have equidistant observations (with $t_i - t_{i-1} = 1$, say) from a time-homogeneous Markov process we only need to specify a single conditional probability distribution, $P^1$, to specify a model. Otherwise we need to specify a whole collection $P^{t_{i}-t_{i-1}}$ of conditional probability distributions indexed by the time differences of the observations to specify a model. The latter is, in fact, most easily done by specifying a family $P^t$ of continuous time conditional probability distributions. A common way to obtain a continuous time model specification is through a stochastic differential equation (SDE) $$dX_t = a(X_t) dt + b(X_t) dB_t.$$ A good place to get started with doing statistics for SDE models is Simulation and Inference for Stochastic Differential Equations by Stefano Iacus. It might be that many methods and results are described for equidistant observations, but this is typically just convenient for the presentation and not essential for the application. One main obstacle is that the SDE-specification rarely allows for an explicit likelihood when you have discrete observations, but there are well developed estimation equation alternatives. If you want to get beyond Markov processes, the stochastic volatility models are, like (G)ARCH models, attempts to model a heterogeneous variance (volatility). One can also consider delay equations like $$dX_t = \int_0^t a(s)(X_t-X_s) ds + \sigma dB_t$$ that are continuous time analogs of AR$(p)$-processes. I think it is fair to say that the common practice when dealing with observations at irregular time points is to build a continuous time stochastic model.
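As a concrete instance of specifying the family $P^t$: for the Ornstein-Uhlenbeck process, $dX_t = -\theta(X_t - \mu)dt + \sigma dB_t$ (a continuous time analog of an AR(1)), the transition distribution over a gap $\Delta$ is Gaussian with mean $\mu + (x - \mu)e^{-\theta\Delta}$ and variance $\frac{\sigma^2}{2\theta}(1 - e^{-2\theta\Delta})$, so irregularly spaced data can be simulated and an exact likelihood written down. A hedged R sketch (the parameter values and sampling scheme are arbitrary choices for illustration):

    set.seed(10)
    theta <- 1; mu <- 0; sigma <- 0.5
    times <- cumsum(rexp(200, rate = 2))    # irregular observation times
    x <- numeric(length(times)); x[1] <- mu
    for (i in 2:length(times)) {
      d <- times[i] - times[i - 1]
      m <- mu + (x[i - 1] - mu) * exp(-theta * d)              # conditional mean
      v <- sigma^2 / (2 * theta) * (1 - exp(-2 * theta * d))   # conditional variance
      x[i] <- rnorm(1, m, sqrt(v))
    }
    negll <- function(p) {   # exact likelihood of the irregularly spaced data
      th <- exp(p[1]); m0 <- p[2]; s <- exp(p[3])
      d <- diff(times)
      m <- m0 + (x[-length(x)] - m0) * exp(-th * d)
      v <- s^2 / (2 * th) * (1 - exp(-2 * th * d))
      -sum(dnorm(x[-1], m, sqrt(v), log = TRUE))
    }
    optim(c(0, 0, log(1)), negll)$par   # roughly recovers log(theta), mu, log(sigma)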
4,195
Is there any gold standard for modeling irregularly spaced time series?
For irregularly spaced time series it's easy to construct a Kalman filter. There is a paper on how to put ARIMA into state space form here. And one paper that compares the Kalman filter to GARCH here$^{(1)}$ $(1)$ Choudhry, Taufiq and Wu, Hao (2008) Forecasting ability of GARCH vs Kalman filter method: evidence from daily UK time-varying beta. Journal of Forecasting, 27, (8), 670-689. (doi:10.1002/for.1096).
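In the same spirit, here is a minimal hand-rolled Kalman filter sketch in R for a continuous time local-level model observed at irregular times (the state is a Brownian motion with variance rate q, observed with noise variance r); this is my own toy illustration, not the method of either cited paper:

    kalman_irregular <- function(y, times, q = 1, r = 1, m0 = 0, P0 = 10) {
      n <- length(y); m <- numeric(n); P <- numeric(n)
      m_prev <- m0; P_prev <- P0; t_prev <- times[1]
      for (i in 1:n) {
        dt <- times[i] - t_prev
        P_pred <- P_prev + q * dt            # predict: uncertainty grows with the gap
        K <- P_pred / (P_pred + r)           # update with the new observation
        m[i] <- m_prev + K * (y[i] - m_prev)
        P[i] <- (1 - K) * P_pred
        m_prev <- m[i]; P_prev <- P[i]; t_prev <- times[i]
      }
      list(mean = m, var = P)
    }

    set.seed(11)
    times <- sort(runif(100, 0, 50))
    state <- cumsum(c(0, rnorm(99, sd = sqrt(diff(times)))))   # latent Brownian path
    y <- state + rnorm(100)
    filt <- kalman_irregular(y, times)
    plot(times, y); lines(times, filt$mean, col = 2)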
4,196
Is there any gold standard for modeling irregularly spaced time series?
When I was looking for a way to measure the amount of fluctuation in irregularly sampled data I came across these two papers on exponential smoothing for irregular data by Cipra [1, 2]. These build further on the smoothing techniques of Brown, Winters and Holt (see the Wikipedia entry for Exponential Smoothing), and on another method by Wright (see the paper for references). These methods do not assume much about the underlying process and also work for data that show seasonal fluctuations. I don't know if any of it counts as a 'gold standard'. For my own purpose, I decided to use two-way (single) exponential smoothing following Brown's method. I got the idea for two-way smoothing from reading the summary of a student paper (that I cannot find now).
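For flavour, here is one simple variant of single exponential smoothing for irregular spacing, in the spirit of (though not necessarily identical to) the Wright/Cipra formulas: the weight given to the previous smoothed level decays exponentially with the length of the time gap. A hedged R sketch; lambda is a decay rate per unit time chosen by the user:

    ses_irregular <- function(y, times, lambda = 0.5) {
      s <- numeric(length(y)); s[1] <- y[1]
      for (i in 2:length(y)) {
        w <- exp(-lambda * (times[i] - times[i - 1]))   # weight of the old level
        s[i] <- w * s[i - 1] + (1 - w) * y[i]
      }
      s
    }

    set.seed(12)
    times <- sort(runif(200, 0, 100))
    y <- sin(times / 10) + rnorm(200, sd = 0.3)
    plot(times, y); lines(times, ses_irregular(y, times), col = 2)

Running the same smoother over the reversed series and averaging the two passes gives a crude version of the 'two-way' smoothing mentioned above.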
4,197
Is there any gold standard for modeling irregularly spaced time series?
The analysis of irregularly sampled time series can be tricky, as there aren't many tools available. Sometimes the practice is to apply standard algorithms meant for regular sampling and hope for the best; this isn't necessarily a good approach. Other times people interpolate the data in the gaps. I have even seen cases where gaps were filled with random numbers drawn from the same distribution as the known data. One algorithm designed specifically for irregularly sampled series is the Lomb-Scargle periodogram, which gives a periodogram (think power spectrum) for unevenly sampled time series. Lomb-Scargle doesn't require any "gap conditioning".
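Assuming SciPy is available, scipy.signal.lombscargle evaluates the classical Lomb-Scargle periodogram directly on the uneven sampling times, so no resampling or gap filling is needed; the simulated data and the frequency grid below are only illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled, noisy sinusoid at 1.5 Hz (illustrative data)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 20.0, 300))
y = np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.standard_normal(t.size)

# Frequency grid (in Hz) and the angular frequencies lombscargle expects
freqs_hz = np.linspace(0.05, 5.0, 1000)
omega = 2 * np.pi * freqs_hz

# The periodogram is evaluated directly on the irregular times;
# centring the data first is the usual practice.
pgram = lombscargle(t, y - y.mean(), omega)
peak_hz = freqs_hz[np.argmax(pgram)]   # should come out near 1.5
```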
4,198
Is there any gold standard for modeling irregularly spaced time series?
If you want a "local" time-domain model (as opposed to estimating correlation functions or power spectra), for example in order to detect and characterize transient pulses, jumps, and the like, then the Bayesian Blocks algorithm may be useful. It provides an optimal piecewise-constant representation of a time series in any data mode and with arbitrary (uneven) sampling. See "Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations," Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James, Astrophysical Journal, Volume 764, 167, 26 pp. (2013). http://arxiv.org/abs/1207.5578
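Assuming Astropy is installed, its astropy.stats.bayesian_blocks function implements this algorithm; the sketch below uses the 'measures' fitness for point measurements with known errors, and the simulated data (a step at t = 5) are purely illustrative.

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(1)
# Irregularly sampled signal with a jump at t = 5 (illustrative data)
t = np.sort(rng.uniform(0.0, 10.0, 200))
signal = np.where(t < 5.0, 1.0, 3.0)
sigma = 0.5
y = signal + sigma * rng.standard_normal(t.size)

# 'measures' fitness handles point measurements with known errors; the
# returned edges are the boundaries of the optimal piecewise-constant fit.
edges = bayesian_blocks(t, y, sigma, fitness='measures')
```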
4,199
Is there any gold standard for modeling irregularly spaced time series?
In spatial data analysis, data are most of the time sampled irregularly in space, so one idea would be to see what is done there and implement variogram estimation, kriging, and so on for a one-dimensional "time" domain. Variograms can be interesting even for regularly spaced time series, as they have different properties from the autocorrelation function and are defined and meaningful even for non-stationary data. Here is one paper (in Spanish) and here is another one.
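As a sketch of what such a transfer might look like (the binning, the lag range, and the simulated data are arbitrary choices of mine, not taken from the papers), an empirical semivariogram on a one-dimensional "time" domain can be computed directly from the irregular timestamps:

```python
import numpy as np

def empirical_variogram(times, y, lag_bins):
    """Empirical semivariogram for irregularly spaced 1-D data:
    gamma(h) = 0.5 * mean of (y_i - y_j)^2 over pairs whose time
    separation |t_i - t_j| falls in each lag bin."""
    times = np.asarray(times, dtype=float)
    y = np.asarray(y, dtype=float)
    dt = np.abs(times[:, None] - times[None, :])
    dy2 = (y[:, None] - y[None, :]) ** 2
    iu = np.triu_indices(len(times), k=1)      # count each pair once
    dt, dy2 = dt[iu], dy2[iu]

    gamma = np.full(len(lag_bins) - 1, np.nan)
    for i in range(len(lag_bins) - 1):
        in_bin = (dt >= lag_bins[i]) & (dt < lag_bins[i + 1])
        if in_bin.any():
            gamma[i] = 0.5 * dy2[in_bin].mean()
    return gamma

# Illustrative use with irregular timestamps
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 50.0, 200))
y = np.sin(t / 5.0) + 0.3 * rng.standard_normal(200)
gamma_hat = empirical_variogram(t, y, np.linspace(0.0, 10.0, 21))
```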
4,200
Is there any gold standard for modeling irregularly spaced time series?
This is too long for a comment, but I believe it is an important point here. What is discussed here is a mathematical approach under specific assumptions on the process being measured, but we can have time series data that do not satisfy those assumptions. The modelling approach must answer the question «why is the time series irregular?» and respond to it correctly. Some potential answers:
- Observations missing at random (e.g. we observe a process in the social sciences and we simply cannot sample it often enough) → approaches based on approximation at regular timestamps, e.g. the Kalman filter, are fine and simplify the problem a lot.
- Series of externally observable events (e.g. loan payments) → we need to model the spacing explicitly, because shorter or longer intervals carry meaning.
- Series of observable events where the act of observation changes the object (e.g. modelling spaced repetition methods or… some surveys) → a can of worms.