3,601 | How exactly does a "random effects model" in econometrics relate to mixed models outside of econometrics? | In this answer, I would like to elaborate a little on Matthew's +1 answer regarding the GLS perspective on what the econometrics literature calls the random effects estimator.
GLS perspective
Consider the linear model
\begin{equation}
y_{it}=\alpha + X_{it}\beta+u_{it}\qquad i=1,\ldots,m,\quad t=1,\ldots,T
\end{equation}
If it held that $E(u_{it}\vert X_{it})=0$, we could simply estimate the model by pooled OLS, which amounts to ignoring the panel data structure and simply lumping all $n=mT$ observations together.
We model the $u_{it}$ using the error-component model
\begin{equation}
u_{it}=\eta_i+\epsilon_{it}
\end{equation}
In matrix notation, the model can be written as
\begin{equation}
y=\alpha \iota_{mT}+X\beta+D\eta+\epsilon
\end{equation}
where $y$ and $\epsilon$ are $n$-vectors with typical elements $y_{it}$ and $\epsilon_{it}$, and $D$ is an $n \times m$ (one column per unit) matrix of dummy variables. $D$ is such that if a row corresponds to an observation belonging to unit $i$, then $D$ has a one in column $i$ and zeros elsewhere, $i=1,\ldots,m$.
We furthermore assume
$$
E(\epsilon\epsilon^\prime)=\sigma_\epsilon^2I
$$
The individual-specific effects $\eta$ must be independent of the $\epsilon_{it}$. The random-effects estimator, unlike the fixed-effects one (again, econometrics terminology), additionally requires the stronger assumption that
\begin{equation}
E(\eta_i\vert X)=0
\end{equation}
Under this assumption, pooled OLS would be unbiased, but we can derive a GLS estimator. Assume that the $\eta_i$ are IID with mean zero and variance $\sigma^2_\eta$.
This assumption accounts for the term random effects. Assuming, moreover, that the two error components are independent, it is easy to see that
\begin{align*}
\operatorname{Var}(u_{it})&=\sigma^2_\eta+\sigma^2_\epsilon\\
\operatorname{Cov}(u_{it},u_{is})&=\sigma^2_\eta\\
\operatorname{Cov}(u_{it},u_{js})&=0\qquad\text{for all } i\neq j
\end{align*}
We then get the following $n\times n$ variance-covariance matrix $\Omega$:
$$
\Omega=
\begin{pmatrix}
\Sigma&O&\cdots&O\\
O&\Sigma&\cdots&O\\
\vdots&\vdots&&\vdots\\
O&O&\cdots&\Sigma
\end{pmatrix}
$$
Here,
$$
\Sigma=\sigma^2_\eta \iota\iota^\prime+\sigma^2_\epsilon I_T
$$
with $\iota$ a $T$-vector of ones. We may hence write
$$
\Omega=\sigma^2_\eta (I_m\otimes\iota\iota^\prime)+\sigma^2_\epsilon (I_m\otimes I_T)
$$
For the GLS estimator
$$
\hat\beta_{RE}=(X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y
$$
we require $\Omega^{-1}$. To this end, let $J_T=\iota\iota^\prime$, $\bar J_T=J_T/T$ and $E_T=I_T-\bar J_T$. Then, write
$$
\Omega=T\sigma^2_\eta (I_m\otimes\bar J_T)+\sigma^2_\epsilon (I_m\otimes E_T)+\sigma^2_\epsilon (I_m\otimes \bar J_T)
$$
or, collecting terms with the same matrices,
$$
\Omega=(T\sigma^2_\eta+\sigma^2_\epsilon) (I_m\otimes\bar J_T)+\sigma^2_\epsilon (I_m\otimes E_T)
$$
Idempotency of $P=I_m\otimes\bar J_T$ and $Q=I_m\otimes E_T$ then allows us to show that
$$\Omega^{-1}=\frac{1}{\sigma^2_1}P+\frac{1}{\sigma^2_\epsilon}Q= -\frac{\sigma^2_\eta}{\sigma^2_1\sigma^2_\epsilon}(I_m\otimes\iota\iota^\prime) + \frac{1}{\sigma^2_\epsilon}(I_m\otimes I_T),$$
where $\sigma^2_1=T\sigma^2_\eta+\sigma^2_\epsilon$.
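As a quick sanity check of this inverse (a minimal base-R sketch with made-up dimensions and variance values, nothing taken from the thread), one can build $\Omega$ from the Kronecker products above and compare the closed-form expression with a numerical inverse:
m <- 4; Tt <- 3                          # small made-up panel; Tt avoids masking T (TRUE) in R
sig2_eta <- 2; sig2_eps <- 1.5           # assumed variance components
sig2_1 <- Tt * sig2_eta + sig2_eps
J <- matrix(1, Tt, Tt)                   # iota iota'
Omega <- sig2_eta * kronecker(diag(m), J) + sig2_eps * diag(m * Tt)
Oinv_formula <- -sig2_eta / (sig2_1 * sig2_eps) * kronecker(diag(m), J) + (1 / sig2_eps) * diag(m * Tt)
max(abs(Oinv_formula - solve(Omega)))    # about 1e-16: the closed form matches the numerical inverse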
Gauss-Markov logic then explains why the random effects estimator may be useful, as it is a more efficient estimator than pooled OLS or fixed effects under the given assumptions (provided, which is a very big if in many panel data applications, that the $\eta_i$ are indeed uncorrelated with the regressors). In short, GLS is more efficient because the error covariance matrix is not spherical in this model: errors belonging to the same unit are correlated, so pooled OLS is no longer BLUE.
One can show that the GLS estimate can be obtained by running OLS on the partially demeaned data:
$$(y_{it}-\theta \bar y_{i\cdot}) = (X_{it} - \theta \bar X_{i\cdot})\beta + (u_{it} - \theta \bar u_{i\cdot}),$$
where $\theta = 1-\sigma_\epsilon/\sigma_1$. For $\theta=1$ one gets the fixed-effects ("within") estimator. For $\theta\to -\infty$ one gets the "between" estimator. The GLS estimator is a weighted average between the two. (For $\theta=0$ one gets the pooled OLS estimator.)
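To see this equivalence in action, here is a small self-contained simulation in base R (hypothetical parameter values and variable names, chosen only for illustration). It computes the GLS estimator directly from $\Omega$ and then runs OLS on the $\theta$-demeaned data, with the intercept column partially demeaned as well; the two coefficient vectors should coincide up to rounding.
set.seed(1)
m <- 6; Tt <- 4; n <- m * Tt
id <- rep(1:m, each = Tt)
sig_eta <- 1.5; sig_eps <- 1
x <- rnorm(n)
eta <- rnorm(m, sd = sig_eta)[id]        # unit effects, here uncorrelated with x
y <- 2 + 0.5 * x + eta + rnorm(n, sd = sig_eps)
Omega <- sig_eta^2 * kronecker(diag(m), matrix(1, Tt, Tt)) + sig_eps^2 * diag(n)
X <- cbind(1, x)                         # intercept plus regressor
Oinv <- solve(Omega)
beta_gls <- drop(solve(t(X) %*% Oinv %*% X, t(X) %*% Oinv %*% y))
theta <- 1 - sig_eps / sqrt(Tt * sig_eta^2 + sig_eps^2)
y_t <- y - theta * ave(y, id)            # partial (quasi-) demeaning
x_t <- x - theta * ave(x, id)
c_t <- rep(1 - theta, n)                 # the constant gets demeaned too
beta_qd <- drop(coef(lm(y_t ~ 0 + c_t + x_t)))
rbind(GLS = beta_gls, quasi_demeaned = beta_qd)   # the two rows should be identical up to rounding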
Feasible GLS
To make an FGLS approach practical, we require estimators of $\sigma^2_1$ and $\sigma^2_\epsilon$. Baltagi, Econometric Analysis of Panel Data, p. 16 (quoting from the 3rd edition), discusses the following options on how to proceed.
Assume first we observe $u_{it}$. Then,
$$\hat\sigma^2_1=T\frac{1}{m}\sum_{i=1}^m\bar{u}_{i\cdot}^2$$
and
$$\hat\sigma^2_\epsilon=\frac{1}{m(T-1)}\sum_{i=1}^m\sum_{t=1}^T\left(u_{it}-\bar{u}_{i\cdot}\right)^2$$
would be good estimators of their parameters, with $\bar{u}_{i\cdot}$ the time average over the observations of unit $i$.
The Wallace and Hussain (1969) approach consists of replacing $u$ with the residuals of a pooled OLS regression (which, after all, is still unbiased and consistent under the present assumptions).
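A minimal base-R sketch of the Wallace and Hussain idea on simulated data (made-up parameter values; this only illustrates the plug-in logic and ignores the exact finite-sample corrections that plm applies):
set.seed(2)
m <- 8; Tt <- 5; n <- m * Tt
id <- rep(1:m, each = Tt)
x <- rnorm(n)
y <- 1 - x + rnorm(m, sd = 2)[id] + rnorm(n)      # true sigma_eta = 2, sigma_eps = 1
u_hat <- resid(lm(y ~ x))                         # pooled OLS residuals
u_bar <- ave(u_hat, id)                           # unit averages, repeated within each unit
sig2_1_hat <- Tt * mean(tapply(u_hat, id, mean)^2)
sig2_eps_hat <- sum((u_hat - u_bar)^2) / (m * (Tt - 1))
theta_hat <- 1 - sqrt(sig2_eps_hat / sig2_1_hat)
c(sigma2_1 = sig2_1_hat, sigma2_eps = sig2_eps_hat, theta = theta_hat)
Roughly speaking, the approaches below differ mainly in which residuals get plugged into these formulas and in the degrees-of-freedom corrections.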
The Amemiya (1971) approach suggests using FE (or LSDV) residuals instead. As a computational matter, we impose the restriction $\sum_i\eta_i=0$ to circumvent the dummy variable trap; this allows us to obtain $\hat\alpha=\bar y_{\cdot\cdot}-\bar X_{\cdot\cdot}'\hat\beta_{FE}$ (with $\cdot\cdot$ denoting grand averages over $i$ and $t$) and hence the LSDV residuals $\hat u=y-\hat\alpha-X\hat\beta_{FE}$.
The default Swamy and Arora (1972) approach estimates
$$
\hat\sigma^2_\epsilon=[y'Q(I-X(X'QX)^{-1}X'Q)y]/[m(T-1)-K]
$$
and
$$
\hat\sigma^2_1=[y'P(I-Z(Z'PZ)^{-1}Z'P)y]/[m-K-1]
$$
Here, $Z=(\iota_{mT}\quad X)$.
The Nerlove (1971) approach estimates $\sigma_\eta^2$ from $\sum_{i=1}^m(\hat\eta_i-\bar{\hat\eta})^2/(m-1)$, where the $\hat\eta_i$ are the estimated dummy coefficients from a fixed effects (LSDV) regression, and estimates $\hat\sigma^2_\epsilon$ from the within residual sum of squares of this regression, with $mT$ in the denominator.
I am also very surprised that these make such a big difference, as shown by @Randel's calculations!
EDIT:
Regarding the differences, the estimates of the error components may be retrieved with the plm package, and they indeed return vastly different results, explaining the difference in the point estimates for $\beta$ (as per @Randel's answer, amemiya throws an error that I did not attempt to fix):
> ercomp(stackY~stackX, data = paneldata, method = "walhus")
var std.dev share
idiosyncratic 21.0726 4.5905 0.981
individual 0.4071 0.6380 0.019
theta: 0.06933
> ercomp(stackY~stackX, data = paneldata, method = "swar")
var std.dev share
idiosyncratic 0.6437 0.8023 0.229
individual 2.1732 1.4742 0.771
theta: 0.811
> ercomp(stackY~stackX, data = paneldata, method = "nerlove")
var std.dev share
idiosyncratic 0.5565 0.7460 0.002
individual 342.2514 18.5000 0.998
theta: 0.9857
I suspect that the estimators of the error components are also not consistent in my example in the sister thread where I aim to demonstrate differences between FE and RE using data where the individual effects and $X$ are correlated. (In fact, they cannot be, because they ultimately drive away the RE estimate from the FE estimate as per the fact that RE is a weighted average of FE and between estimation with weights determined by the error component estimates. So, if RE is not consistent, that must ultimately be due to these estimates.)
If you replace the "offending" feature of that example,
alpha = runif(n,seq(0,step*n,by=step),seq(step,step*n+step,by=step))
by simply, say,
alpha = runif(n)
so that the random effects are uncorrelated with $X$, you get RE point estimates for $\beta$ very close to the true value $\beta=-1$ for all variants of estimating the error components.
References
Amemiya, T., 1971, The estimation of the variances in a variance-components model, International Economic Review 12, 1–13.
Baltagi, B. H., Econometric Analysis of Panel Data, Wiley.
Nerlove, M., 1971, Further evidence on the estimation of dynamic economic relations from a time-series of cross-sections, Econometrica 39, 359–382.
Swamy, P.A.V.B. and S.S. Arora, 1972, The exact finite sample properties of the estimators of coefficients in the error components regression models, Econometrica 40, 261–275.
Wallace, T.D. and A. Hussain, 1969, The use of error components models in combining cross-section and time-series data, Econometrica 37, 55–72.
3,602 | How exactly does a "random effects model" in econometrics relate to mixed models outside of econometrics? | I am not really familiar enough with R to comment on your code, but the simple random intercept mixed model should be identical to the RE MLE estimator, and very close to the RE GLS estimator, except when total $N = \sum_i T_i$ is small and the data are unbalanced. Hopefully, this will be useful in diagnosing the problem. Of course, this is all assuming that the RE estimator is appropriate.
Here is some Stata showing the equivalence (requires esttab and eststo from SSC):
set more off
estimates clear
webuse nlswork, clear
eststo, title(mixed): mixed ln_w grade age c.age#c.age ttl_exp tenure c.tenure#c.tenure || id: // Mixed estimator
eststo, title(MLE): xtreg ln_w grade age c.age#c.age ttl_exp tenure c.tenure#c.tenure, i(id) mle // MLE RE estimator
eststo, title(GLS): xtreg ln_w grade age c.age#c.age ttl_exp tenure c.tenure#c.tenure, i(id) re // GLS RE estimator
esttab *, b(a5) se(a5) mtitle
Here's the output of the last line:
. esttab *, b(a5) se(a5) mtitle
------------------------------------------------------------
(1) (2) (3)
mixed MLE GLS
------------------------------------------------------------
main
grade 0.070790*** 0.070790*** 0.070760***
(0.0017957) (0.0017957) (0.0018336)
age 0.031844*** 0.031844*** 0.031906***
(0.0027201) (0.0027202) (0.0027146)
c.age#c.age -0.00065130*** -0.00065130*** -0.00065295***
(0.000044965) (0.000044971) (0.000044880)
ttl_exp 0.035228*** 0.035228*** 0.035334***
(0.0011382) (0.0011392) (0.0011446)
tenure 0.037134*** 0.037134*** 0.037019***
(0.0015715) (0.0015723) (0.0015681)
c.tenure#c~e -0.0018382*** -0.0018382*** -0.0018387***
(0.00010128) (0.00010128) (0.00010108)
_cons 0.14721*** 0.14721*** 0.14691**
(0.044725) (0.044725) (0.044928)
------------------------------------------------------------
lns1_1_1
_cons -1.31847***
(0.013546)
------------------------------------------------------------
lnsig_e
_cons -1.23024***
(0.0046256)
------------------------------------------------------------
sigma_u
_cons 0.26754***
(0.0036240)
------------------------------------------------------------
sigma_e
_cons 0.29222***
(0.0013517)
------------------------------------------------------------
N 28099 28099 28099
------------------------------------------------------------
Standard errors in parentheses
* p<0.05, ** p<0.01, *** p<0.001
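For completeness, a rough R analogue of this comparison (a sketch that assumes the plm and lme4 packages and uses the Grunfeld data shipped with plm, not the questioner's data) would be:
library(plm)
library(lme4)
data("Grunfeld", package = "plm")
re_gls <- plm(inv ~ value + capital, data = Grunfeld, index = c("firm", "year"), model = "random")  # GLS RE estimator
re_mle <- lmer(inv ~ value + capital + (1 | firm), data = Grunfeld, REML = FALSE)                   # random-intercept MLE
cbind(GLS = coef(re_gls), MLE = fixef(re_mle))   # the two columns should agree closely on this balanced panel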
In your data, the assumptions for using the RE estimator are not satisfied since the group effect is clearly correlated with x, so you get very different estimates. The GLS RE estimator is actually a generalized method of moments (GMM) estimator that is a matrix-weighted average of the between and within estimators. The within estimator is going to be OK here, but the between is going to be profoundly screwed, showing large positive effects of X. So GLS will be mostly the between estimator. The MLE RE is an MLE that maximizes the likelihood of the random-effects model. They are no longer expected to produce the same answer. Here the mixed estimator is giving something very close to the FE "Within" estimator:
. esttab *, b(a5) se(a5) mtitle
----------------------------------------------------------------------------
(1) (2) (3) (4)
mixed GLS MLE Within
----------------------------------------------------------------------------
main
x -1.02502*** 0.77031** 3.37983*** -1.04507***
(0.092425) (0.26346) (0.20635) (0.093136)
_cons 30.2166*** 18.3459*** 0.49507 30.3492***
(5.12978) (2.31566) (.) (0.62124)
----------------------------------------------------------------------------
lns1_1_1
_cons 2.87024***
(0.20498)
----------------------------------------------------------------------------
lnsig_e
_cons -0.22598**
(0.077195)
----------------------------------------------------------------------------
sigma_u
_cons 2.40363
(1.28929)
----------------------------------------------------------------------------
sigma_e
_cons 4.23472***
(0.37819)
----------------------------------------------------------------------------
N 96 96 96 96
----------------------------------------------------------------------------
Standard errors in parentheses
* p<0.05, ** p<0.01, *** p<0.001
Here is the Stata code for the above table:
clear
set more off
estimates clear
input int(obs id t) double(y x)
1 1 1 2.669271 0.5866982
2 1 2 1.475540 1.3500454
3 1 3 4.430008 0.6830919
4 1 4 2.162789 0.5845966
5 1 5 2.678108 1.0038879
6 1 6 3.456636 0.5863289
7 1 7 1.769204 2.3375403
8 1 8 3.413790 0.9640034
9 2 1 4.017493 1.5084121
10 2 2 4.218733 2.8982499
11 2 3 4.509530 3.2141335
12 2 4 6.106228 2.0317799
13 2 5 5.161379 2.1231733
14 2 6 2.724643 4.3369017
15 2 7 4.500306 1.9141065
16 2 8 4.119322 2.8667938
17 3 1 9.987779 2.3961969
18 3 2 7.768579 3.5509275
19 3 3 9.379788 3.3284869
20 3 4 10.035937 2.2997389
21 3 5 11.752360 2.8143474
22 3 6 9.500264 2.1825704
23 3 7 8.921687 5.0126462
24 3 8 8.269932 3.4046339
25 4 1 12.101253 3.2928033
26 4 2 11.482337 3.1645218
27 4 3 10.648010 4.8073987
28 4 4 9.687320 5.3394193
29 4 5 12.796925 3.1197431
30 4 6 9.971434 4.6512983
31 4 7 10.239717 4.7709378
32 4 8 12.245207 2.7952426
33 5 1 18.473320 5.8421967
34 5 2 19.097212 4.9425391
35 5 3 19.460495 4.9166172
36 5 4 18.642305 4.9856035
37 5 5 17.723912 5.0594425
38 5 6 16.783248 4.8615618
39 5 7 16.100984 6.2069167
40 5 8 18.851351 3.8856152
41 6 1 19.683171 7.5568816
42 6 2 21.104231 6.7441900
43 6 3 22.115529 6.4486514
44 6 4 22.061362 5.3727434
45 6 5 22.457905 5.8665798
46 6 6 21.424413 6.0578997
47 6 7 23.475946 4.4024323
48 6 8 24.884950 4.1596914
49 7 1 25.809011 7.6756255
50 7 2 25.432828 7.7910756
51 7 3 26.790387 7.3858301
52 7 4 24.640850 8.2090606
53 7 5 26.050086 7.3779219
54 7 6 25.297148 6.8098617
55 7 7 26.551229 7.6694272
56 7 8 26.669760 6.4425772
57 8 1 26.409669 8.3040894
58 8 2 26.570003 8.4686087
59 8 3 29.018818 7.2476785
60 8 4 30.342613 4.5207729
61 8 5 26.819959 8.7935557
62 8 6 27.147711 8.3141224
63 8 7 26.168568 9.0148308
64 8 8 27.653552 8.2081808
65 9 1 34.120485 7.8415520
66 9 2 31.286463 9.7234259
67 9 3 35.763403 6.9202442
68 9 4 31.974599 9.0078286
69 9 5 32.273719 9.4954288
70 9 6 29.666208 10.2525763
71 9 7 30.949857 9.4751679
72 9 8 33.485967 8.1824810
73 10 1 36.183128 10.7891587
74 10 2 37.706116 9.7119548
75 10 3 38.582725 8.6388290
76 10 4 35.876781 10.8259279
77 10 5 37.111179 9.9805046
78 10 6 40.313149 7.7487456
79 10 7 38.606329 10.2891107
80 10 8 37.041938 10.3568765
81 11 1 42.617586 12.1619185
82 11 2 41.787495 11.1420338
83 11 3 43.944968 11.1898730
84 11 4 43.446467 10.8099599
85 11 5 43.420819 11.2696770
86 11 6 42.367318 11.6183869
87 11 7 43.543785 11.1336555
88 11 8 43.750271 12.0311065
89 12 1 46.122429 12.3528733
90 12 2 47.604306 11.4522787
91 12 3 45.568748 13.6906476
92 12 4 48.331177 12.3561907
93 12 5 47.143246 11.7339915
94 12 6 44.461190 13.3898768
95 12 7 46.879044 11.4054972
96 12 8 46.314055 12.3143487
end
eststo, title(mixed): mixed y x || id:, mle // Mixed estimator
eststo, title(GLS): xtreg y x, i(id) re // GLS RE estimator
eststo, title(MLE): xtreg y x, i(id) mle // MLE RE estimator
eststo, title(Within): xtreg y x, i(id) fe // FE Within estimator
eststo, title(Between): xtreg y x, i(id) be // Between estimator
esttab *, b(a5) se(a5) mtitle
3,603 | How exactly does a "random effects model" in econometrics relate to mixed models outside of econometrics? | Let me confuse things even more:
ECONOMETRICS - FIXED EFFECTS APPROACH
The "fixed effects" approach in econometrics for panel data, is a way to estimate the slope coefficients (the betas), by "by-passing" the existence of the individual effects variable $\alpha_i$, and so by not making any assumption as to whether it is "fixed" or "random". This is what the "First Difference" estimator (using first differences of the data) and the "Within" estimator (using deviations from time-averages) do: they manage to estimate only the betas.
For a more traditional approach that does explicitly treat the individual effects (the "intercepts") as constants, we use the Least Squares Dummy Variable (LSDV) Estimator, which also provides estimates for the $\alpha_i$'s.
Note: in the linear model, the Within and LSDV estimators coincide algebraically as regards the produced estimates for the betas (and the First Difference estimator coincides with them when $T=2$) - but only in the linear model.
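The Within/LSDV part of this note is easy to verify numerically; here is a small base-R sketch on simulated data (hypothetical numbers, purely for illustration):
set.seed(3)
m <- 10; Tt <- 6; id <- rep(1:m, each = Tt)
x <- rnorm(m * Tt)
y <- rnorm(m, sd = 3)[id] + 2 * x + rnorm(m * Tt)                        # individual effects plus noise
within <- unname(coef(lm(I(y - ave(y, id)) ~ 0 + I(x - ave(x, id)))))    # Within: OLS on demeaned data
lsdv <- unname(coef(lm(y ~ x + factor(id)))["x"])                        # LSDV: OLS with unit dummies
c(within = within, LSDV = lsdv)                                          # identical slope estimates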
Discussion (partly excerpted from class notes)
"The main advantage of the fixed effects approach is that we do not need to make any assumptions about the nature of the individual
effects. We should apply it whenever we suspect that the latter are
correlated with one or more of the regressors since in this case
ignoring the presence of such correlation and naively applying OLS on
the pooled model produces inconsistent estimators. Despite its appeal
on grounds of the minimal assumptions that we need to make concerning
the individual effects, the fixed effects approach has certain
limitations. First, coefficients of time invariant regressors cannot
be estimated since these variables are differenced out along with the
unobservable individual effects. Second, the individual effects (in
case we use the LSDV estimator) cannot be consistently estimated
(except if we let the time dimension go to infinity)."
ECONOMETRICS - RANDOM EFFECTS APPROACH
In the "traditional" econometric Random Effects approach we assume that the individual "intercepts" $\alpha_i$ are "permanent random components" while the "usual" error terms are "transitory" error components.
In an interesting extension, the additional randomness arises from the existence of a random time effect, common to all cross sections but time varying, alongside a fixed(constant) individual effect and the error term. This "time effect" for example may represent an aggregate shock at economy-wide level that affects equally all households. Such aggregate disturbances are indeed observed and so it appears to be a realistic modelling choice.
Here the "Random Effects" Estimator is a Generalized Least Squares (GLS) estimator, for increased efficiency.
Now, one more estimator, the "Between" Estimator, performs OLS on the time-averaged observations. As a matter of algebra, it has been shown that the GLS estimator can be obtained as a weighted average of the Within and the Between estimators, where the weights are not arbitrary but relate to the VCV matrices of the two.
...and there are also the variants of "Uncorrelated Random Effects" and "Correlated Random Effects" models.
I hope the above helps make the contrast with the "mixed effects" models.
3,604 | Explain the xkcd jelly bean comic: What makes it funny? | Humor is a very personal thing - some people will find it amusing, but it may not be funny to everyone - and attempts to explain what makes something funny often fail to convey the funny, even if they explain the underlying point. Indeed, not all xkcd's are even intended to be actually funny. Many do, however, make important points in a way that's thought provoking, and at least sometimes they're amusing while doing that. (I personally find it funny, but I find it hard to clearly explain what, exactly, makes it funny to me. I think partly it's the recognition of the way that a doubtful, or even dubious, result turns into a media circus (on which see also this PhD comic), and perhaps partly the recognition of the way some research may actually be done - if usually not consciously.)
However, one can appreciate the point whether or not it tickles your funnybone.
The point is about doing multiple hypothesis tests at some moderate significance level like 5%, and then publicizing the one that came out significant. Of course, if you do 20 such tests when there's really nothing of any importance going on, the expected number of those tests to give a significant result is 1. Doing a rough in-head approximation for $n$ tests at significance level $\frac{1}{n}$, there's roughly a 37% chance of no significant result, roughly 37% chance of one and roughly 26% chance of more than one (I just checked the exact answers; they're close enough to that).
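For reference, the exact binomial numbers behind that in-head approximation (a tiny R sketch; the 20 tests and the 5% level are just the comic's setup):
dbinom(0, size = 20, prob = 0.05)       # about 0.358: no significant result
dbinom(1, size = 20, prob = 0.05)       # about 0.377: exactly one significant result
1 - pbinom(1, size = 20, prob = 0.05)   # about 0.264: more than one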
In the comic, Randall depicted 20 tests, so this is no doubt his point (that you expect to get one significant even when there's nothing going on). The fictional newspaper article even emphasizes the problem with the subhead "Only 5% chance of coincidence!". (If the one test that ended up in the papers was the only one done, that might be the case.)
Of course, there's also the subtler issue that an individual researcher may behave much more reasonably, but the problem of rampant publicizing of false positives still occurs. Let's say that these researchers only do 5 tests, each at the 1% level, so their overall chance of discovering a bogus result like that is only about five percent.
So far so good. But now imagine there are 20 such research groups, each testing whichever random subset of colors they think they have reason to try. Or 100 research groups... what chance of a headline like the one in the comic now?
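A back-of-the-envelope answer to that question, under the hypothetical 5-tests-at-the-1%-level setup of the previous paragraph:
p_group <- 1 - 0.99^5      # about 0.049: chance a single careful group finds some bogus "effect"
1 - (1 - p_group)^20       # about 0.63: at least one headline-worthy false positive among 20 groups
1 - (1 - p_group)^100      # about 0.99: near-certain with 100 groups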
So the comic may also be referencing publication bias more generally. If only significant results are trumpeted, we won't hear about the dozens of groups that found nothing for green jellybeans, only the one that did.
Indeed, that's one of the major points being made in this article, which has been in the news in the last few months (e.g. here, even though it's a 2005 article).
A response to that article emphasizes the need for replication. Note that if there were to be several replications of the study that was published, the "Green jellybeans linked to acne" result would be very unlikely to stand.
(And indeed, the hover text for the comic makes a clever reference to the same point.)
3,605 | Explain the xkcd jelly bean comic: What makes it funny? | The effect of hypothesis testing on the decision to publish was described more than fifty years ago in the 1959 JASA paper Publication Decisions and Their Possible Effects on Inferences Drawn from Tests of Significance - or Vice Versa (sorry for the paywall).
Overview of the Paper
The paper points out evidence that published results of scientific papers are not a representative sample of results from all studies. The author reviewed papers published in four major psychology journals. 97% of the reviewed papers reported statistically significant outcomes for their major scientific hypotheses.
The author advances a possible explanation for this observation: research which yields nonsignificant results is not published. Such research, being unknown to other investigators, may be repeated independently until eventually, by chance, a significant result occurs (a Type 1 error) and is published. This opens the door to the possibility that the published scientific literature may include an over-representation of incorrect results resulting from Type 1 errors in statistical significance tests - exactly the scenario that the original XKCD comic was poking fun at.
This general observation has been subsequently verified and re-discovered many times in the intervening years. I believe that the 1959 JASA paper was the first to advance the hypothesis. The author of that paper was my PhD supervisor. We updated his 1959 paper 35 years later and reached the same conclusions: Publication Decisions Revisited: The Effect of the Outcome of Statistical Tests on the Decision to Publish and Vice Versa. American Statistician, Vol 49, No 1, Feb 1995.
3,606 | Explain the xkcd jelly bean comic: What makes it funny? | What people overlook is that the actual p-value for the green jelly bean case is not .05 but around .64. Only the pretend (nominal) p-value is .05. There's a difference between actual and pretend p-values. The probability of finding 1 in 20 that reach the nominal level even if all the nulls are true is NOT .05, but .64. On the other hand, if you appraise evidence by looking at comparative likelihoods (the most popular view aside from the error statistical one, within which p-values reside), you WILL say there's evidence for H: green jelly beans are genuinely correlated with acne. That's because P(x; no effect) < P(x; H). The left side is < .05, whereas the right side is fairly high: if green jelly beans did cause acne, then finding the observed association would be probable.
Likelihoods alone fail to pick up on error probabilities because they condition on the actual data attained. The appraisal is no different than if there had just been this one test of the green jelly beans and acne. So although this cartoon is often seen as making fun of p-values, the very thing that's funny about it demonstrates why we need to consider the overall error probability (as non-pretend p-values do) and not merely likelihoods.
Bayesian inference is also conditioned on the outcome, ignoring error probabilities. The only way for a Bayesian to avoid finding evidence for H would be to have a low prior on H. But we would adjust the p-value no matter what the subject matter, and without relying on priors, because of the hunting procedure used to find the hypothesis to test. Even if the H that was hunted was believable, it's still a lousy test. Errorstatistics.com
3,607 | Difference between Random Forest and Extremely Randomized Trees | The Extra-(Randomized)-Trees (ET) article contains a bias-variance analysis.
In Fig. 6 (on page 16), you can see a comparison with multiple methods including RF on six test problems (three classification and three regression).
Both methods perform about the same, with ET being a bit worse when there is a high number of noisy features (in high-dimensional data sets).
That said, provided the (perhaps manual) feature selection is near optimal, the performance is about the same; however, ETs can be computationally faster.
From the article itself:
The analysis of the algorithm and the determination of the optimal value of K on several test problem variants have shown that the value is in principle dependent on problem specifics, in particular the proportion of irrelevant attributes. [...] The bias/variance analysis has shown that Extra-Trees work by decreasing variance while at the same time increasing bias. [...] When the randomization is increased above the optimal level, variance decreases slightly while bias increases often significantly.
No silver bullet as always.
Pierre Geurts, Damien Ernst, Louis Wehenkel, 2006, Extremely randomized trees, Machine Learning 63, 3–42.
In Fig. 6 (on page 16), you can see a comparison with multiple methods including RF
on six tests (tree classification and t | Difference between Random Forest and Extremely Randomized Trees
The Extra-(Randomized)-Trees (ET) article contains a bias-variance analysis.
In Fig. 6 (on page 16), you can see a comparison with multiple methods including RF
on six tests (tree classification and three regression).
Both methods are about the same, with the ET being a bit worse when there is a high number of noisy features (in high dimensional data-sets).
That said, provided the (perhaps manual) feature selection is near optimal, the performance is about the same, however, ET's can be computationally faster.
From the article itself:
The analysis of the algorithm and the determination of
the optimal value of K on several test problem variants have shown that the value is in
principle dependent on problem specifics, in particular the proportion of irrelevant attributes. [...]
The bias/variance
analysis has shown that Extra-Trees work by decreasing variance while at the same time
increasing bias. [...] When the randomization
is increased above the optimal level, variance decreases slightly while bias
increases often significantly.
No silver bullet as always.
Pierre Geurts, Damien Ernst, Louis Wehenkel. "Extremely randomized trees" | Difference between Random Forest and Extremely Randomized Trees
The Extra-(Randomized)-Trees (ET) article contains a bias-variance analysis.
In Fig. 6 (on page 16), you can see a comparison with multiple methods including RF
on six tests (tree classification and t |
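A minimal scikit-learn sketch of this kind of comparison (the synthetic dataset and settings are illustrative assumptions, not those used in the paper):

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data with many uninformative (noisy) features.
X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=10, n_redundant=0, random_state=0)

for Model in (RandomForestClassifier, ExtraTreesClassifier):
    clf = Model(n_estimators=200, random_state=0, n_jobs=-1)
    start = time.time()
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{Model.__name__}: accuracy={scores.mean():.3f}, "
          f"time={time.time() - start:.1f}s")
```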
3,608 | Difference between Random Forest and Extremely Randomized Trees | ExtraTreesClassifier is like a brother of RandomForest but with 2 important differences.
We are building multiple decision trees, so the first difference is in which observations each tree sees. In a random forest, each tree is trained on a bootstrap sample: observations are drawn with replacement, so an observation can appear more than once in a tree's sample. In an ExtraTreesClassifier there is no bootstrapping by default: each tree is grown on the whole learning sample, so there is no repetition of observations as in a random forest.
A split converts a non-homogeneous parent node into two child nodes that are as homogeneous as possible. A RandomForest searches for the best split, i.e. the cut-point that produces the two most homogeneous child nodes. An ExtraTreesClassifier instead draws a random cut-point for each candidate feature and keeps the best of these random splits, so the split itself is not optimized the way it is in a random forest.
Let's look at some ensemble methods ordered from high to low variance, ending in ExtraTreesClassifier.
1. Decision Tree (High Variance)
A single decision tree usually overfits the data it is learning from because it learns from only one pathway of decisions. Predictions from a single decision tree are usually not accurate on new data.
2. Random Forest (Medium Variance)
Random forest models reduce the risk of overfitting by introducing randomness by:
building multiple trees (n_estimators)
drawing observations with replacement (i.e., a bootstrapped sample)
splitting nodes on the best split among a random subset of the features selected at every node (a split converts a non-homogeneous parent node into two child nodes that are as homogeneous as possible).
3. Extra Trees (Low Variance)
Extra Trees is like a Random Forest, in that it builds multiple trees and splits nodes using random subsets of features, but with two key differences: it does not bootstrap observations (by default each tree uses the whole learning sample rather than a bootstrap replica), and nodes are split on random splits, not best splits. So in summary, ExtraTrees:
builds multiple trees with bootstrap = False by default, which means each tree is trained on the whole sample rather than on a bootstrap replica
nodes are split based on random splits among a random subset of the features selected at every node
In Extra Trees, randomness doesn't come from bootstrapping the data, but rather from the random splits of all observations. ExtraTrees is short for Extremely Randomized Trees.
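A small sketch of those two differences in scikit-learn terms (the printed values are the library defaults; the single-tree classes are shown only to make the split rule visible):

```python
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

# Difference 1: sampling of observations.
print(RandomForestClassifier().bootstrap)  # True  -> bootstrap samples
print(ExtraTreesClassifier().bootstrap)    # False -> whole learning sample

# Difference 2: choice of split. A random forest's base tree searches for
# the best cut-point; an extra-tree draws the cut-point at random.
print(DecisionTreeClassifier().splitter)   # 'best'
print(ExtraTreeClassifier().splitter)      # 'random'
```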
3,609 | Difference between Random Forest and Extremely Randomized Trees | Thank you very much for the answers! As I still had questions, I performed some numerical simulations to get more insight into the behavior of these two methods.
Extra trees seem to keep a higher performance in presence of noisy features.
The picture in the linked article shows the performance (evaluated with cross-validation) as random columns irrelevant to the target are added to the dataset, the target being just a linear combination of the first three columns.
When all the variables are relevant, both methods seem to achieve the same performance.
Extra trees seem three times faster than the random forest (at least in the scikit-learn implementation).
Sources
Link to the full article : random forest vs extra trees.
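A sketch of a simulation in this spirit (sizes, noise scale, and the scoring choice are assumptions, not necessarily those behind the original figure):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n = 1000
X_rel = rng.normal(size=(n, 3))           # three relevant columns
y = X_rel @ np.array([1.0, 2.0, 3.0])     # target: linear combination of them

for n_noise in (0, 10, 50, 200):          # irrelevant columns added
    X = np.hstack([X_rel, rng.normal(size=(n, n_noise))])
    for Model in (RandomForestRegressor, ExtraTreesRegressor):
        est = Model(n_estimators=100, random_state=0, n_jobs=-1)
        score = cross_val_score(est, X, y, cv=5, scoring="r2").mean()
        print(f"{n_noise:3d} noise columns, {Model.__name__}: R^2 = {score:.3f}")
```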
3,610 | Difference between Random Forest and Extremely Randomized Trees | The answer is that it depends. I suggest you try both random forest and extra trees on your problem. Try a large forest (1000-3000 trees/estimators, n_estimators in sklearn) and tune the number of features considered at each split (max_features in sklearn) as well as the minimum samples per split (min_samples_split in sklearn) and the maximum tree depth (max_depth in sklearn). That said, you should keep in mind that over-tuning can be a form of overfitting.
Here are two problems I worked on personally where extra trees proved useful with very noisy data:
Decision forests for machine learning classification of large, noisy seafloor feature sets
An efficient distributed protein disorder prediction with pasted samples
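A hedged sketch of that tuning advice using scikit-learn's grid search (the grid values are illustrative; a smaller forest is used here just to keep the example quick):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

param_grid = {
    "n_estimators": [300],               # push toward 1000-3000 for real use
    "max_features": ["sqrt", 0.3, 0.6],
    "min_samples_split": [2, 5, 10],
    "max_depth": [None, 10, 20],
}

for Model in (RandomForestClassifier, ExtraTreesClassifier):
    search = GridSearchCV(Model(random_state=0, n_jobs=-1), param_grid, cv=5)
    search.fit(X, y)
    print(Model.__name__, search.best_params_, round(search.best_score_, 3))
```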
3,611 | Most famous statisticians | Reverend Thomas Bayes for discovering Bayes' theorem
3,612 | Most famous statisticians | Ronald Fisher for his fundamental contributions to the way we analyze data, whether it be the analysis of variance framework, maximum likelihood, permutation tests, or any number of other ground-breaking discoveries.
3,613 | Most famous statisticians | John Tukey for Fast Fourier Transforms, exploratory data analysis (EDA), box plots, projection pursuit, jackknife (along with Quenouille). Coined the words "software" and "bit".
3,614 | Most famous statisticians | Karl Pearson for his work on mathematical statistics. Pearson correlation, Chi-square test, and principal components analysis are just a few of the incredibly important ideas that stem from his works.
3,615 | Most famous statisticians | Carl Gauss for least squares estimation.
3,616 | Most famous statisticians | William Sealy Gosset for Student's t-distribution and the statistically-driven improvement of beer.
3,617 | Most famous statisticians | Bradley Efron for the Bootstrap - one of the most useful techniques in computational statistics.
3,618 | Most famous statisticians | Andrey Nikolayevich Kolmogorov, for putting probability theory on a rigorous mathematical footing. While he was a mathematician, not a statistician, undoubtedly his work is important in many branches of statistics.
3,619 | Most famous statisticians | Pierre-Simon Laplace for work on fundamentals of (Bayesian) probability.
3,620 | Most famous statisticians | Francis Galton for discovering statistical correlation and promoting regression.
3,621 | Most famous statisticians | George Box for his work on time series, designed experiments and elucidating the iterative nature of scientific discovery (proposing and testing models).
3,622 | Most famous statisticians | Andrey Markov for stochastic processes and Markov chains.
3,623 | Most famous statisticians | Jerzy Neyman and Egon Pearson for work on experimental design, hypothesis testing, confidence intervals, and the Neyman-Pearson lemma.
3,624 | Most famous statisticians | How has Sir David Roxbee Cox not been mentioned yet?
Some feats: Cox proportional hazards models, experimental design, he did a lot of work on stochastic processes and binary data. He also advised many students who went on to do great work (Hinkley, McCullagh, Little, Atkinson, etc.)
And the man was knighted!
3,625 | Most famous statisticians | Leo Breiman for CART, bagging, and random forests.
3,626 | Most famous statisticians | Harold Jeffreys for the revival of the Bayesian interpretation of probability.
3,627 | Most famous statisticians | Edwin Thompson Jaynes for work on objective Bayesian methods, particularly MaxEnt and transformation groups.
3,628 | Most famous statisticians | C.R. Rao for the Rao-Blackwell theorem and the Cramer-Rao bound.
3,629 | Most famous statisticians | Florence Nightingale for being "a true pioneer in the graphical representation of statistics" and developing the polar area diagram. Yes, that Florence Nightingale!
3,630 | Most famous statisticians | Blaise Pascal and Pierre de Fermat for creating the theory of probability and inventing the idea of expected value (1654) in order to solve a problem grounded in statistical observations (from gambling).
3,631 | Most famous statisticians | Roderick Little and Donald Rubin for their contributions to missing data analysis.
3,632 | Most famous statisticians | W. Edwards Deming for promoting statistical process control
3,633 | Most famous statisticians | George Dantzig for the Simplex Method, and for being the student who mistook two open statistics problems that Neyman had written on the board for homework problems, and in his "ignorance" solving them. I'd vote for him just for the story.
3,634 | Most famous statisticians | Samuel S. Wilks was a leader in the development of mathematical statistics. He developed the theorem on the distribution of the likelihood ratio, a fundamental result that is used in a wide variety of situations.
He also helped found the Princeton statistics department, where he was Fred Mosteller's advisor, among others, and has a prestigious ASA award named after him.
3,635 | Most famous statisticians | Abraham Wald (1902-1950) for introducing the concept of Wald-tests and for his fundamental work on statistical decision theory.
3,636 | Most famous statisticians | John Nelder, for providing us the now omnipresent generalized linear model framework. By unifying various standard statistical models and their estimation method (iteratively reweighted least squares for maximum likelihood), he gave us tools that are now used in almost all applied and theoretical work involving exponential-family models. Not to mention his contributions to optimization, such as the superb Nelder-Mead algorithm.
3,637 | Most famous statisticians | Lucien Le Cam for his contribution to mathematical statistics. (maybe Local asymptotic normality and contiguity made him famous)
3,638 | Most famous statisticians | Leland Wilkinson for his contribution to statistical graphics.
3,639 | Most famous statisticians | David Donoho for the development of multiscale ideas in statistics, and for many theoretically justified yet practically very efficient ideas in very high-dimensional statistics and computational harmonic analysis (CHA).
3,640 | Most famous statisticians | Adolphe Quetelet for his work on the "average man", and for pioneering the use of statistics in the social sciences. Before him, statistics were largely confined to the physical sciences (astronomy, in particular).
3,641 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Failing to reject a null hypothesis is evidence that the null hypothesis is true, but it might not be particularly good evidence, and it certainly doesn't prove the null hypothesis.
Let's take a short detour. Consider for a moment the old cliché:
Absence of evidence is not evidence of absence.
Notwithstanding its popularity, this statement is nonsense. If you look for something and fail to find it, that is absolutely evidence that it isn't there. How good that evidence is depends on how thorough your search was. A cursory search provides weak evidence; an exhaustive search provides strong evidence.
Now, back to hypothesis testing. When you run a hypothesis test, you are looking for evidence that the null hypothesis is not true. If you don't find it, then that is certainly evidence that the null hypothesis is true, but how strong is that evidence? To know that, you have to know how likely it is that evidence that would have made you reject the null hypothesis could have eluded your search. That is, what is the probability of a false negative on your test? This is related to the power of the test: if the power is $1-\beta$, then the false negative rate is $\beta$.
Now, the power of the test, and therefore the false negative rate, usually depends on the size of the effect you are looking for. Large effects are easier to detect than small ones. Therefore, there is no single $\beta$ for an experiment, and therefore no definitive answer to the question of how strong the evidence for the null hypothesis is. Put another way, there is always some effect size small enough that it's not ruled out by the experiment.
From here, there are two ways to proceed. Sometimes you know you don't care about an effect size smaller than some threshold. In that case, you probably should reframe your experiment such that the null hypothesis is that the effect is above that threshold, and then test the alternative hypothesis that the effect is below the threshold. Alternatively, you could use your results to set bounds on the believable size of the effect. Your conclusion would be that the size of the effect lies in some interval, with some probability. That approach is just a small step away from a Bayesian treatment, which you might want to learn more about, if you frequently find yourself in this sort of situation.
There's a nice answer to a related question that touches on evidence of absence testing, which you might find useful.
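As a minimal illustration of how the false negative rate depends on effect size (a two-sided one-sample z-test approximation; the sample size and effect sizes below are assumptions):

```python
from scipy.stats import norm

def power_two_sided_z(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test,
    with effect_size = (true mean - null mean) / sigma."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = effect_size * n ** 0.5
    return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

n = 100
for d in (0.5, 0.2, 0.05, 0.01):
    power = power_two_sided_z(d, n)
    print(f"effect size {d:5.2f}: power = {power:.3f}, "
          f"false negative rate = {1 - power:.3f}")
```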
3,642 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | NHST relies on p-values, which tell us: Given the null hypothesis is true, what is the probability that we observe our data (or more extreme data)?
We assume that the null hypothesis is true; it is baked into NHST that the null hypothesis is 100% correct. Small p-values tell us that, if the null hypothesis is true, our data (or more extreme data) are not likely.
But what does a large p-value tell us? It tells us that, given the null hypothesis, our data (or more extreme data) are likely.
Generally speaking, $P(A \mid B) \neq P(B \mid A)$.
Imagine you want to take a large p-value as evidence for the null hypothesis. You would rely on this logic:
If the null is true, then a high p-value is likely. (Update: not quite true. Under the null, the p-value is uniformly distributed, so a high p-value is no more likely than any other value.)
A high p-value is found.
Therefore, the null is true.
This takes on the more general form:
If B is true, then A is likely.
A occurs.
Therefore, B is true.
This is fallacious, though, as can be seen by an example:
If it rained outside, then the ground being wet is likely.
The ground is wet.
Therefore, it rained outside.
The ground could very well be wet because it rained. Or it could be due to a sprinkler, someone cleaning their gutters, a water main broke, etc. More extreme examples can be found in the link above.
It is a very difficult concept to grasp. If we want evidence for the null, Bayesian inference is required. To me, the most accessible explanation of this logic is by Rouder et al. (2016) in the paper Is There a Free Lunch in Inference?, published in Topics in Cognitive Science, 8, pp. 520-547.
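A quick simulation of the point made in the update above (a sketch using one-sample t-tests on pure noise, so the null is exactly true):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 10,000 experiments in which the null hypothesis (mean = 0) is true.
pvals = np.array([stats.ttest_1samp(rng.normal(size=30), 0.0).pvalue
                  for _ in range(10_000)])

# Under the null, the p-value is (approximately) uniform on [0, 1]:
# a "high" p-value is no more probable than any other value.
print(np.mean(pvals > 0.9))  # ~0.10
print(np.mean(pvals < 0.1))  # ~0.10
```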
3,643 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | To grasp what is wrong with the assumption, see the following example:
Imagine an enclosure in a zoo where you can't see its inhabitants. You want to test the hypothesis that it is inhabited by monkeys by putting a banana into the cage and check if it is gone the next day. This is repeated N times for enhanced statistical significance.
Now you can formulate a null hypothesis: Given that there are monkeys in the enclosure, it is very probable that they will find and eat the banana, so if the bananas are untouched each day, it is very improbable that there are any monkeys inside.
But now you see that the bananas are gone (nearly) each day. Does that tell you that monkeys are inside?
Of course not, because there are other animals that like bananas as well, or maybe some attentive zookeeper removes the banana every evening.
So what is the mistake that is made in this logic? The point is that you do not know anything about the probability of bananas being gone if there are no monkeys inside. To corroborate the null hypothesis, the probability of vanishing bananas must be small if the null hypothesis is wrong, but this does not need to be the case. In fact, the event may be equally probable (or even more probable) if the null hypothesis is wrong.
Without knowing about this probability, you can say exactly nothing about the validity of the null hypothesis. If zookeepers remove all bananas each evening, the experiment is completely worthless, even though it seems at first glance that you have corroborated the null hypothesis.
3,644 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | In his famous paper Why Most Published Research Findings Are False, Ioannidis used Bayesian reasoning and the base-rate fallacy to argue that most findings are false positives. In short, the post-study probability that a particular research hypothesis is true depends - among other things - on the pre-study probability of said hypothesis (i.e. the base rate).
As a response, Moonesinghe et al. (2007) used the same framework to show that replication greatly increases the post-study probability of a hypothesis being true. This makes sense: If multiple studies can replicate a certain finding, we are more sure that the conjectured hypothesis is true.
I used the formulas in Moonesinghe et al. (2007) to create a graph that shows the post-study probability in the case of a failure to replicate a finding. Assume that a certain research hypothesis has a pre-study probability of being true of 50%. Further, I'm assuming that all studies have no bias (unrealistic!), have a power of 80%, and use an $\alpha$ of 0.05.
The graph shows that if at least 5 out of 10 studies fail to reach significance, our post-study probability that the hypothesis is true is almost 0. The same relationships exist for more studies. This finding also makes intuitive sense: A repeated failure to find an effect strengthens our belief that the effect is most likely false. This reasoning is in line with the accepted answer by @RPL.
As a second scenario, let's assume that the studies have only a power of 50% (all else equal).
Now our post-study probability decreases more slowly, because every study had only low power to find the effect, if it really existed.
3,645 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | The best explanation I've seen for this is from someone whose training is in mathematics.
Null-Hypothesis Significance Testing is basically a proof by contradiction: assume $H_0$, is there evidence for $H_1$? If there is evidence for $H_1$, reject $H_0$ and accept $H_1$. But if there isn't evidence for $H_1$, it's circular to say that $H_0$ is true because you assumed that $H_0$ was true to begin with.
3,646 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | If you do not like this consequence of hypothesis testing but are not prepared to make the full leap to Bayesian methods, how about a confidence interval?
Suppose you flip a coin $42078$ times and see $20913$ heads, leading to you saying that a 95% confidence interval for the probability of heads is $[0.492,0.502]$.
You have not said you have seen evidence that it is in fact $\frac12$, but the evidence suggests some confidence about how close it might be to $\frac12$.
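The quoted interval can be checked with a normal-approximation (Wald) interval; a minimal sketch:

```python
from math import sqrt

n, heads = 42078, 20913
p_hat = heads / n
half_width = 1.96 * sqrt(p_hat * (1 - p_hat) / n)

print(round(p_hat - half_width, 3), round(p_hat + half_width, 3))
# -> 0.492 0.502
```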
3,647 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | It would perhaps be better to say that non-rejection of a null hypothesis is not in itself evidence for the null hypothesis. Once we consider the full likelihood of the data, which more explicitly considers the amount of the data, then the collected data may provide support for the parameters falling within the null hypothesis.
However, we should also carefully think about our hypotheses. In particular, failing to reject a point null hypothesis is not very good evidence that the point null hypothesis is true. Realistically, it accumulates evidence that the true value of the parameter is not that far away from the point in question. Point null hypotheses are to some extent rather artificial constructs and most often you do not truly believe they will be exactly true.
It becomes much more reasonable to talk about the non-rejection supporting the null hypothesis if you can meaningfully reverse the null and alternative hypotheses and if, when doing so, you would reject your new null hypothesis. When you try to do that with a standard point null hypothesis, you immediately see that you will never manage to reject its complement, because then your inverted null hypothesis contains values arbitrarily close to the point under consideration.
On the other hand, if you, say, test the null hypothesis $H_0: |\mu| \leq \delta$ against the alternative $H_A: |\mu| > \delta$ for the mean of a normal distribution, then for any true value of $\mu$ there is a sample size - unless unrealistically the true value of $\mu$ is $-\delta$ or $+\delta$ - for which we have almost 100% probability that a level $1-\alpha$ confidence interval will fall either completely within $[-\delta, +\delta]$ or outside of this interval. For any finite sample size you can of course get confidence intervals that lie across the boundary, in which case that is not all that strong evidence for the null hypothesis.
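A sketch of that interval-based check (the data, $\delta$, and $\alpha$ are assumptions; it mirrors the logic described above rather than any particular package routine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.02, scale=1.0, size=5000)  # true mean close to 0
delta, alpha = 0.1, 0.05

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))
z = stats.norm.ppf(1 - alpha / 2)
lo, hi = mean - z * se, mean + z * se

# "Support for the null" H0: |mu| <= delta, in the sense that the whole
# confidence interval falls inside [-delta, +delta].
print((round(lo, 3), round(hi, 3)), -delta <= lo and hi <= delta)
```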
3,648 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | It rather depends on how you are using language. Under Pearson and Neyman decision theory, it is not evidence for the null, but you are to behave as if the null is true.
The difficulty comes from modus tollens. Bayesian methods are a form of inductive reasoning and, as such, are a form of incomplete reasoning. Null hypothesis methods are a probabilistic form of modus tollens and, as such, are part of deductive reasoning and therefore a complete form of reasoning.
Modus tollens has the form "if A is true then B is true, and B is not true; therefore A is not true." In this form, it would be: if the null is true then the data will appear in a particular manner; they do not appear in that manner; therefore (to some degree of confidence) the null is not true (or at least is "falsified").
The problem is that you want "If A then B and B." From this, you wish to infer A, but that is not valid. "If A then B," does not exclude "if not A then B" from also being a valid statement. Consider the statement "if it is a bear, then it can swim. It is a fish (not a bear)." The statements say nothing about the ability of non-bears to swim.
Probability and statistics are a branch of rhetoric and not a branch of mathematics. It is a heavy user of math but is not part of math. It exists for a variety of reasons, persuasion, decision making or inference. It extends rhetoric into a disciplined discussion of evidence. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | It rather depends on how you are using language. Under Pearson and Neyman decision theory, it is not evidence for the null, but you are to behave as if the null is true.
The difficulty comes from mod | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
It rather depends on how you are using language. Under Pearson and Neyman decision theory, it is not evidence for the null, but you are to behave as if the null is true.
The difficulty comes from modus tollens. Bayesian methods are a form of inductive reasoning and, as such, are a form of incomplete reasoning. Null hypothesis methods are a probabilistic form of modus tollens and, as such, are part of deductive reasoning and therefore are a complete form of reasoning.
Modus tollens has the form "if A is true then B is true, and B is not true; therefore A is not true." In this form, it would be: if the null is true, then the data will appear in a particular manner; they do not appear in that manner; therefore (to some degree of confidence) the null is not true (or at least is "falsified").
The problem is that you want "If A then B and B." From this, you wish to infer A, but that is not valid. "If A then B," does not exclude "if not A then B" from also being a valid statement. Consider the statement "if it is a bear, then it can swim. It is a fish (not a bear)." The statements say nothing about the ability of non-bears to swim.
Probability and statistics are a branch of rhetoric and not a branch of mathematics. It is a heavy user of math but is not part of math. It exists for a variety of reasons, persuasion, decision making or inference. It extends rhetoric into a disciplined discussion of evidence. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
It rather depends on how you are using language. Under Pearson and Neyman decision theory, it is not evidence for the null, but you are to behave as if the null is true.
The difficulty comes from mod |
3,649 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | I'll try to illustrate this with an example.
Let us think that we are sampling from a population, with the intention of testing the null hypothesis $H_0:\mu=\mu_0$ about its mean $\mu$. We get a sample with mean $\bar{x}$. If we get a non-significant p-value, we would also get non-significant p-values if we had tested any other null hypothesis $H_0:\mu=\mu_i$ such that $\mu_i$ is between $\mu_0$ and $\bar{x}$. Now, for what value of $\mu$ do we have evidence?
Also, when we get significant p-values, we do not obtain evidence for a particular $H_1:\mu=M$; instead it is evidence against $H_0:\mu=\mu_0$ (which can be thought of as evidence for $\mu\ne\mu_0$, $\mu<\mu_0$ or $\mu>\mu_0$, depending on the situation). By its nature, hypothesis testing does not provide evidence for something; it only provides evidence against something, if it provides evidence at all. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | I'll try to illustrate this with an example.
Let us think that we are sampling from a population, with an intention of test for its mean $\mu$. We get a sample with mean $\bar{x}$. If we get a non-sig | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
I'll try to illustrate this with an example.
Let us think that we are sampling from a population, with the intention of testing the null hypothesis $H_0:\mu=\mu_0$ about its mean $\mu$. We get a sample with mean $\bar{x}$. If we get a non-significant p-value, we would also get non-significant p-values if we had tested any other null hypothesis $H_0:\mu=\mu_i$ such that $\mu_i$ is between $\mu_0$ and $\bar{x}$. Now, for what value of $\mu$ do we have evidence?
Also, when we get significant p-values, we do not obtain evidence for a particular $H_1:\mu=M$; instead it is evidence against $H_0:\mu=\mu_0$ (which can be thought of as evidence for $\mu\ne\mu_0$, $\mu<\mu_0$ or $\mu>\mu_0$, depending on the situation). By its nature, hypothesis testing does not provide evidence for something; it only provides evidence against something, if it provides evidence at all. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
I'll try to illustrate this with an example.
Let us think that we are sampling from a population, with an intention of test for its mean $\mu$. We get a sample with mean $\bar{x}$. If we get a non-sig |
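A small hedged R sketch of the point made in the answer above (my own, with assumed data): the set of null values that a one-sample t-test fails to reject is essentially the confidence interval, so a single non-significant result cannot single out one value of mu.
set.seed(2)
x <- rnorm(30, mean = 0.1)                 # assumed example data
mu.grid <- seq(-1, 1, by = 0.01)
p.vals <- sapply(mu.grid, function(m) t.test(x, mu = m)$p.value)
range(mu.grid[p.vals > 0.05])              # all these nulls are "not rejected" ...
t.test(x)$conf.int                         # ... and they match the 95% confidence interval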
3,650 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Consider the small dataset (illustrated below) with mean $\bar x \approx 0$, say that you conducted a two-tailed $t$-test with $H_0: \bar x = \mu$, where $\mu = -0.5$. The test appears to be insignificant with $p > 0.05$. Does that signify that your $H_0$ is true? What if you tested against $\mu = 0.5$? Since the $t$ distribution is symmetric, the test would return a similar $p$-value. So you have approximately the same amount of evidence that $\mu = -0.5$ and that $\mu = 0.5$.
The above example shows that small $p$-values lead us away from believing in $H_0$ and that high $p$-values suggest that our data is somehow more consistent with $H_0$, as compared to $H_1$. If you conducted many such tests, then you could find the $\mu$ that is most likely given your data, and in effect you would be doing maximum likelihood estimation. The idea of MLE is that you seek the value of $\mu$ that maximizes the probability of observing your data given $\mu$, which leads to the likelihood function
$$ L(\mu | X) = f(X | \mu) $$
MLE is a valid way of finding the point estimate $\hat\mu$, but it tells you nothing about the probability of $\mu$ given your data. What you did was pick a single value for $\mu$ and ask about the probability of observing your data given it. As already noticed by others, $f(\mu|X) \ne f(X|\mu)$. To find $f(\mu|X)$ we would need to account for the fact that we tested against different candidate values of $\mu$. This leads to Bayes' theorem
$$ f(\mu|X) = \frac{ f(X|\mu) \, f(\mu) }{ \int \, f(X|\mu) \, f(\mu) \, d\mu } $$
which, first, considers how likely different $\mu$'s are a priori (the prior can be uniform, which leads to results consistent with MLE) and, second, normalizes for the fact that you considered different candidate values of $\mu$. Moreover, if you ask about $\mu$ in probabilistic terms, you need to treat it as a random variable, which is another reason for adopting a Bayesian approach.
Concluding, a hypothesis test does not tell you whether $H_1$ is more likely than $H_0$, since the procedure required you to assume that $H_0$ is true and to pick a specific value for it. To give an analogy, imagine that your test is an oracle. If you ask her, "the ground is wet, is it possible that it was raining?", she'll answer: "yes, it is possible; in 83% of cases when it was raining, the ground became wet". If you ask her again, "is it possible that someone just spilled water on the ground?", she'll answer "sure, that is also possible; in 100% of cases when someone spilled water on the ground, it became wet", etc. If you ask her for some numbers, she will give them to you, but the numbers will not be comparable. The problem is that the hypothesis test/oracle operates in a framework where she can give conclusive answers only to questions asking whether the data are consistent with some hypothesis, not the other way around, since you are not considering other hypotheses. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Consider the small dataset (illustrated below) with mean $\bar x \approx 0$, say that you conducted a two-tailed $t$-test with $H_0: \bar x = \mu$, where $\mu = -0.5$. The test appears to be insignifi
Consider the small dataset (illustrated below) with mean $\bar x \approx 0$, and say that you conducted a two-tailed $t$-test of the null hypothesis $H_0: \mu = -0.5$ about the population mean. The test appears to be insignificant with $p > 0.05$. Does that signify that your $H_0$ is true? What if you tested against $\mu = 0.5$ instead? Since $\bar x \approx 0$ lies midway between the two values and the $t$ distribution is symmetric, the test would return a similar $p$-value. So you have approximately the same amount of evidence that $\mu = -0.5$ and that $\mu = 0.5$.
The above example shows that small $p$-values lead us away from believing in $H_0$ and that high $p$-values suggest that our data is somehow more consistent with $H_0$, as compared to $H_1$. If you conducted many such tests, then you could find the $\mu$ that is most likely given your data, and in effect you would be doing maximum likelihood estimation. The idea of MLE is that you seek the value of $\mu$ that maximizes the probability of observing your data given $\mu$, which leads to the likelihood function
$$ L(\mu | X) = f(X | \mu) $$
MLE is a valid way of finding the point estimate $\hat\mu$, but it tells you nothing about the probability of $\mu$ given your data. What you did was pick a single value for $\mu$ and ask about the probability of observing your data given it. As already noticed by others, $f(\mu|X) \ne f(X|\mu)$. To find $f(\mu|X)$ we would need to account for the fact that we tested against different candidate values of $\mu$. This leads to Bayes' theorem
$$ f(\mu|X) = \frac{ f(X|\mu) \, f(\mu) }{ \int \, f(X|\mu) \, f(\mu) \, d\mu } $$
which, first, considers how likely different $\mu$'s are a priori (the prior can be uniform, which leads to results consistent with MLE) and, second, normalizes for the fact that you considered different candidate values of $\mu$. Moreover, if you ask about $\mu$ in probabilistic terms, you need to treat it as a random variable, which is another reason for adopting a Bayesian approach.
Concluding, a hypothesis test does not tell you whether $H_1$ is more likely than $H_0$, since the procedure required you to assume that $H_0$ is true and to pick a specific value for it. To give an analogy, imagine that your test is an oracle. If you ask her, "the ground is wet, is it possible that it was raining?", she'll answer: "yes, it is possible; in 83% of cases when it was raining, the ground became wet". If you ask her again, "is it possible that someone just spilled water on the ground?", she'll answer "sure, that is also possible; in 100% of cases when someone spilled water on the ground, it became wet", etc. If you ask her for some numbers, she will give them to you, but the numbers will not be comparable. The problem is that the hypothesis test/oracle operates in a framework where she can give conclusive answers only to questions asking whether the data are consistent with some hypothesis, not the other way around, since you are not considering other hypotheses. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Consider the small dataset (illustrated below) with mean $\bar x \approx 0$, say that you conducted a two-tailed $t$-test with $H_0: \bar x = \mu$, where $\mu = -0.5$. The test appears to be insignifi |
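The following is a hedged R sketch of the two ideas in the answer above (my construction; the data, the known unit variance, and the flat prior are assumptions): a sample whose mean is forced to be exactly 0 is equally (in)consistent with mu = -0.5 and mu = +0.5, and a grid version of Bayes' theorem turns the likelihood into an actual distribution over mu.
set.seed(3)
x <- rnorm(10); x <- x - mean(x)           # force the sample mean to be exactly 0
t.test(x, mu = -0.5)$p.value               # these two p-values are identical,
t.test(x, mu =  0.5)$p.value               #   so neither null value is singled out
mu.grid <- seq(-2, 2, length.out = 401)    # grid of candidate values for mu
loglik <- sapply(mu.grid, function(m) sum(dnorm(x, mean = m, log = TRUE)))  # unit variance assumed
post <- exp(loglik - max(loglik))          # flat prior: posterior proportional to likelihood
post <- post / sum(post * 0.01)            # normalise over the grid (spacing 0.01)
mu.grid[which.max(post)]                   # posterior mode: 0, the sample mean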
3,651 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Let's follow a simple example.
My null hypothesis is that my data follows a normal distribution. The alternative hypothesis is that the distribution for my data is not normal.
I draw two random samples from a uniform distribution on [0,1]. I can't do much with just two samples, thus I wouldn't be able to reject my null hypothesis.
Does that mean I can conclude my data follows a normal distribution? No, it's a uniform distribution!
The problem is that I made the normality assumption part of my null hypothesis. Thus, I can't conclude my assumption is correct merely because I can't reject it. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Let's follow a simple example.
My null hypothesis is that my data follows a normal distribution. The alternative hypothesis is that the distribution for my data is not normal.
I draw two random sample | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Let's follow a simple example.
My null hypothesis is that my data follows a normal distribution. The alternative hypothesis is that the distribution for my data is not normal.
I draw two random samples from a uniform distribution on [0,1]. I can't do much with just two samples, thus I wouldn't be able to reject my null hypothesis.
Does that mean I can conclude my data follows a normal distribution? No, it's a uniform distribution!
The problem is that I made the normality assumption part of my null hypothesis. Thus, I can't conclude my assumption is correct merely because I can't reject it. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Let's follow a simple example.
My null hypothesis is that my data follows a normal distribution. The alternative hypothesis is that the distribution for my data is not normal.
I draw two random sample |
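A hedged R illustration of the answer above (mine; shapiro.test requires at least 3 observations, so 3 are used instead of 2): with a tiny sample a normality test has essentially no power, whereas the same uniform distribution is rejected easily once there are many observations.
set.seed(4)
shapiro.test(runif(3))$p.value       # tiny sample from Uniform(0, 1): large p-value, no power
shapiro.test(runif(5000))$p.value    # same distribution with n = 5000: normality clearly rejected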
3,652 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Rejecting $H_0$ requires your study to have enough statistical power. If you're able to reject $H_0$, you can say that you have gathered sufficient data to draw a conclusion.
On the other hand, not rejecting $H_0$ doesn't require any data at all, since it's assumed to be true by default. So, if your study doesn't reject $H_0$, it's impossible to tell which is more probable: $H_0$ is true, or your study simply wasn't large enough. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Rejecting $H_0$ requires your study to have enough statistical power. If you're able to reject $H_0$, you can say that you have gathered sufficient data to draw a conclusion.
On the other hand, not re | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Rejecting $H_0$ requires your study to have enough statistical power. If you're able to reject $H_0$, you can say that you have gathered sufficient data to draw a conclusion.
On the other hand, not rejecting $H_0$ doesn't require any data at all, since it's assumed to be true by default. So, if your study doesn't reject $H_0$, it's impossible to tell which is more probable: $H_0$ is true, or your study simply wasn't large enough. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Rejecting $H_0$ requires your study to have enough statistical power. If you're able to reject $H_0$, you can say that you have gathered sufficient data to draw a conclusion.
On the other hand, not re |
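A quick hedged illustration of the power point above, using R's built-in power.t.test with an assumed effect of 0.3 standard deviations (the reported numbers are approximate):
power.t.test(n = 10,  delta = 0.3, sd = 1, sig.level = 0.05)$power  # roughly 0.10
power.t.test(n = 300, delta = 0.3, sd = 1, sig.level = 0.05)$power  # roughly 0.96
# With n = 10 per group, a non-rejection is expected even though H0 is false here.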
3,653 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Both null and alternative hypothesis are models, and as such different from reality and never true. A rejection of the null hypothesis says that the data are not compatible with the null hypothesis, because if the null hypothesis were true, such data would not normally be observed. A non-rejection of the null hypothesis says that the data are compatible with the null hypothesis, meaning that if data are generated from the null hypothesis, they could very well look like the data we have.
But all data is compatible with many models. For example, if our null hypothesis is ${\cal N}(0,1)$ and we observe data with mean 0.01 and sample variance 1.13, data are surely also compatible with a ${\cal N}(0.01,1.13)$ distribution (and all kinds of other distributions "in between"), even though they will (if the sample size is not excessively large) not reject ${\cal N}(0,1)$. Furthermore the data will be compatible with lots of non-normal models with similar expected values and variances, and with all kinds of distributions with dependence or non-identity structures that fit the data well as they are (including a crazy dependence structure that says that if we observe the first observation as it is, all else will happen as it happened with probability 1). In reality nothing is really identical and nothing is really independent, and for sure nothing is really normally distributed, so really the best we can say is that data are compatible with the $H_0$, which means that nobody claiming anything substantially different can use them as argument.
This issue by the way is not specific to tests and null hypotheses, and the only way Bayesian analysis can get around it is by going 100% subjective. Surely if a Bayesian says that the probability that model X is true is 87.5%, that's nonsense. The probability that any model is really true is zero. They are models and not reality after all. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | Both null and alternative hypothesis are models, and as such different from reality and never true. A rejection of the null hypothesis says that the data are not compatible with the null hypothesis, b | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Both the null and the alternative hypotheses are models, and as such are different from reality and never true. A rejection of the null hypothesis says that the data are not compatible with the null hypothesis, because if the null hypothesis were true, such data would not normally be observed. A non-rejection of the null hypothesis says that the data are compatible with the null hypothesis, meaning that if data are generated from the null hypothesis, they could very well look like the data we have.
But all data is compatible with many models. For example, if our null hypothesis is ${\cal N}(0,1)$ and we observe data with mean 0.01 and sample variance 1.13, data are surely also compatible with a ${\cal N}(0.01,1.13)$ distribution (and all kinds of other distributions "in between"), even though they will (if the sample size is not excessively large) not reject ${\cal N}(0,1)$. Furthermore the data will be compatible with lots of non-normal models with similar expected values and variances, and with all kinds of distributions with dependence or non-identity structures that fit the data well as they are (including a crazy dependence structure that says that if we observe the first observation as it is, all else will happen as it happened with probability 1). In reality nothing is really identical and nothing is really independent, and for sure nothing is really normally distributed, so really the best we can say is that data are compatible with the $H_0$, which means that nobody claiming anything substantially different can use them as argument.
This issue by the way is not specific to tests and null hypotheses, and the only way Bayesian analysis can get around it is by going 100% subjective. Surely if a Bayesian says that the probability that model X is true is 87.5%, that's nonsense. The probability that any model is really true is zero. They are models and not reality after all. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
Both null and alternative hypothesis are models, and as such different from reality and never true. A rejection of the null hypothesis says that the data are not compatible with the null hypothesis, b |
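A hedged R sketch of the compatibility point above, reusing the answer's N(0.01, 1.13) example (the sample size and seed are my assumptions): data drawn from that second model are typically not rejected against N(0, 1) either, so both models remain compatible with them.
set.seed(6)
x <- rnorm(100, mean = 0.01, sd = sqrt(1.13))
t.test(x, mu = 0)$p.value                      # mean 0 typically not rejected
ks.test(x, "pnorm", 0, 1)$p.value              # N(0, 1) typically not rejected
ks.test(x, "pnorm", 0.01, sqrt(1.13))$p.value  # neither is N(0.01, 1.13)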
3,654 | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | No, it is not evidence unless you have evidence that it is evidence. I'm not trying to be cute, rather literal. You only have probability of seeing such data given your assumption the null is true. That is ALL you get from the p-value (if that, since the p-value is based on assumptions themselves).
Can you present a study that shows that for studies that "fail" to support the null hypothesis, a majority of the null hypotheses turn out to be true? If you can find THAT study, then your failure to disprove the null hypotheses at least reflects a VERY generalized likelihood that the null is true. I'm betting you don't have that study. Since you don't have evidence relating to null hypotheses being true based on p-values, you just have to walk away empty-handed.
You started by assuming your null was true to get that p-value, so the p-value can tell you nothing about the null, only about the data. Think about that. It's a one-directional inference - period. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null? | No, it is not evidence unless you have evidence that it is evidence. I'm not trying to be cute, rather literal. You only have probability of seeing such data given your assumption the null is true. Th | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
No, it is not evidence unless you have evidence that it is evidence. I'm not trying to be cute, rather literal. You only have probability of seeing such data given your assumption the null is true. That is ALL you get from the p-value (if that, since the p-value is based on assumptions themselves).
Can you present a study that shows that for studies that "fail" to support the null hypothesis, a majority of the null hypotheses turn out to be true? If you can find THAT study, then your failure to disprove the null hypotheses at least reflects a VERY generalized likelihood that the null is true. I'm betting you don't have that study. Since you don't have evidence relating to null hypotheses being true based on p-values, you just have to walk away empty-handed.
You started by assuming your null was true to get that p-value, so the p-value can tell you nothing about the null, only about the data. Think about that. It's a one-directional inference - period. | If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
No, it is not evidence unless you have evidence that it is evidence. I'm not trying to be cute, rather literal. You only have probability of seeing such data given your assumption the null is true. Th |
3,655 | Are bayesians slaves of the likelihood function? | I do not see much appeal in this example, esp. as a potential criticism of Bayesians and likelihood-wallahs.... The constant $c$ is known, being equal to
$$
1\big/ \int_\mathcal{X} g(x) \text{d}x
$$
If $c$ is the only "unknown" in the picture, given a sample $x_1,\ldots,x_n$, then there is no statistical issue about the problem and I do not agree that there exist estimators of $c$. Nor priors on $c$ (other than the Dirac mass on the above value). This is not in the least a statistical problem but rather a numerical issue.
That the sample $x_1,\ldots,x_n$ can be used through a (frequentist) density estimate to provide a numerical approximation of $c$ is a mere curiosity. Not a criticism of alternative statistical approaches: I could also use a Bayesian density estimate... | Are bayesians slaves of the likelihood function? | I do not see much appeal in this example, esp. as a potential criticism of Bayesians and likelihood-wallahs.... The constant $c$ is known, being equal to
$$
1\big/ \int_\mathcal{X} g(x) \text{d}x
$$
I | Are bayesians slaves of the likelihood function?
I do not see much appeal in this example, esp. as a potential criticism of Bayesians and likelihood-wallahs.... The constant $c$ is known, being equal to
$$
1\big/ \int_\mathcal{X} g(x) \text{d}x
$$
If $c$ is the only "unknown" in the picture, given a sample $x_1,\ldots,x_n$, then there is no statistical issue about the problem and I do not agree that there exist estimators of $c$. Nor priors on $c$ (other than the Dirac mass on the above value). This is not in the least a statistical problem but rather a numerical issue.
That the sample $x_1,\ldots,x_n$ can be used through a (frequentist) density estimate to provide a numerical approximation of $c$ is a mere curiosity. Not a criticism of alternative statistical approaches: I could also use a Bayesian density estimate... | Are bayesians slaves of the likelihood function?
I do not see much appeal in this example, esp. as a potential criticism of Bayesians and likelihood-wallahs.... The constant $c$ is known, being equal to
$$
1\big/ \int_\mathcal{X} g(x) \text{d}x
$$
I |
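To make the "numerical issue" point concrete, here is a tiny hedged sketch in R with an assumed g (not part of the answer): once g is known, c is a single numerical integral and no data enter at all.
g <- function(x) exp(-x^2 / 2)        # assumed example of a known g
1 / integrate(g, -Inf, Inf)$value     # c, obtained by numerical integration alone
1 / sqrt(2 * pi)                      # exact value, for comparison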
3,656 | Are bayesians slaves of the likelihood function? | This has been discussed in my paper (published only on the internet) "On an Example of Larry Wasserman" [1] and in a blog exchange between me, Wasserman, Robins, and some other commenters on Wasserman's blog: [2]
The short answer is that Wasserman (and Robins) generate paradoxes by suggesting that priors in high dimensional spaces "must" have characteristics that imply either that the parameter of interest is known a priori with near certainty or that a clearly relevant problem (selection bias) is known with near certainty not to be present. In fact, sensible priors would not have these characteristics. I'm in the process of writing a summary blog post to draw this together. There is an excellent 2007 paper, showing sensible Bayesian approaches to the examples Wasserman and Ritov consider, by Hameling and Toussaint: "Bayesian estimators for Robins-Ritov's problem" [3] | Are bayesians slaves of the likelihood function? | This has been discussed in my paper (published only on the internet) "On an Example of Larry Wasserman" [1] and in a blog exchange between me, Wasserman, Robins, and some other commenters on Wasserman
This has been discussed in my paper (published only on the internet) "On an Example of Larry Wasserman" [1] and in a blog exchange between me, Wasserman, Robins, and some other commenters on Wasserman's blog: [2]
The short answer is that Wasserman (and Robins) generate paradoxes by suggesting that priors in high dimensional spaces "must" have characteristics that imply either that the parameter of interest is known a priori with near certainty or that a clearly relevant problem (selection bias) is known with near certainty not to be present. In fact, sensible priors would not have these characteristics. I'm in the process of writing a summary blog post to draw this together. There is an excellent 2007 paper, showing sensible Bayesian approaches to the examples Wasserman and Ritov consider, by Hameling and Toussaint: "Bayesian estimators for Robins-Ritov's problem" [3] | Are bayesians slaves of the likelihood function?
This has been discussed in my paper (published only on the internet) "On an Example of Larry Wasserman" [1] and in a blog exchange between me, Wasserman, Robins, and some other commenters on Wasserman |
3,657 | Are bayesians slaves of the likelihood function? | I agree that the example is weird.
I meant it to be more of a puzzle really.
(The example is actually due to Ed George.)
It does raise the question of what it means for something to be "known". Christian says that $c$ is known. But, at least from the purely subjective probability point of view, you don't know it just because it can in principle be known. (Suppose you can't do the numerical integral.) A subjective Bayesian regards everything as a random variable with a distribution, including $c$.
At any rate, the paper A. Kong, P. McCullagh, X.-L. Meng, D. Nicolae, and Z. Tan (2003), A theory of statistical models for Monte Carlo integration, J. Royal Statistic. Soc. B, vol. 65, no. 3, 585-604 (with discussion), treats essentially the same problem.
The example that Chris Sims alludes to in his answer is of a very different nature. | Are bayesians slaves of the likelihood function? | I agree that the example is weird.
I meant it to be more of a puzzle really.
(The example is actually due to Ed George.)
It does raise the question of what it means for something to be
"known". Christ | Are bayesians slaves of the likelihood function?
I agree that the example is weird.
I meant it to be more of a puzzle really.
(The example is actually due to Ed George.)
It does raise the question of what it means for something to be "known". Christian says that $c$ is known. But, at least from the purely subjective probability point of view, you don't know it just because it can in principle be known. (Suppose you can't do the numerical integral.) A subjective Bayesian regards everything as a random variable with a distribution, including $c$.
At any rate, the paper A. Kong, P. McCullagh, X.-L. Meng, D. Nicolae, and Z. Tan (2003), A theory of statistical models for Monte Carlo integration, J. Royal Statistic. Soc. B, vol. 65, no. 3, 585-604 (with discussion), treats essentially the same problem.
The example that Chris Sims alludes to in his answer is of a very different nature. | Are bayesians slaves of the likelihood function?
I agree that the example is weird.
I meant it to be more of a puzzle really.
(The example is actually due to Ed George.)
It does raise the question of what it means for something to be
"known". Christ |
3,658 | Are bayesians slaves of the likelihood function? | The proposed statistical model may be described as follows: You have a known nonnegative integrable function $g:\mathbb{R}\to\mathbb{R}$, and a nonnegative random variable $C$. The random variables $X_1,\dots,X_n$ are supposed to be conditionally independent and identically distributed, given that $C=c$, with conditional density $f_{X_i\mid C}(x_i\mid c)=c\,g(x_i)$, for $c>0$.
Unfortunately, in general, this is not a valid description of a statistical model. The problem is that, by definition, $f_{X_i\mid C}(\,\cdot\mid c)$ must be a probability density for almost every possible value of $c$, which is, in general, clearly false. In fact, it is true just for the single value $c=\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$. Therefore, the model is correctly specified only in the trivial case when the distribution of $C$ is concentrated at this particular value. Of course, we are not interested in this case. What we want is the distribution of $C$ to be dominated by Lebesgue measure, having a nice pdf $\pi$.
Hence, defining $x=(x_1,\dots,x_n)$, the expression
$$
L_x(c) = \prod_{i=1}^n \left(c\,g(x_i)\right) \, ,
$$
taken as a function of $c$, for fixed $x$, does not correspond to a genuine likelihood function.
Everything after that inherits from this problem. In particular, the posterior computed with Bayes's Theorem is bogus. It's easy to see that: suppose that you have a proper prior
$$
\pi(c) = \frac{1}{c^2} \,I_{[1,\infty)}(c) \, .
$$
Note that $\int_0^\infty \pi(c)\,dc=1$. According to the computation presented in the example, the posterior should be
$$
\pi(c\mid x) \propto \frac{1}{c^{2-n}}\, I_{[1,\infty)}(c) \, .
$$
But if that is right, this posterior would be always improper, because
$$
\int_0^\infty \frac{1}{c^{2-n}}\,I_{[1,\infty)}(c)\,dc
$$
diverges for every sample size $n\geq 1$.
This is impossible: we know that if we start with a proper prior, our posterior can't be improper for every possible sample (it may be improper inside a set of null prior predictive probability). | Are bayesians slaves of the likelihood function? | The proposed statistical model may be described as follows: You have a known nonnegative integrable function $g:\mathbb{R}\to\mathbb{R}$, and a nonnegative random variable $C$. The random variables $X | Are bayesians slaves of the likelihood function?
The proposed statistical model may be described as follows: You have a known nonnegative integrable function $g:\mathbb{R}\to\mathbb{R}$, and a nonnegative random variable $C$. The random variables $X_1,\dots,X_n$ are supposed to be conditionally independent and identically distributed, given that $C=c$, with conditional density $f_{X_i\mid C}(x_i\mid c)=c\,g(x_i)$, for $c>0$.
Unfortunately, in general, this is not a valid description of a statistical model. The problem is that, by definition, $f_{X_i\mid C}(\,\cdot\mid c)$ must be a probability density for almost every possible value of $c$, which is, in general, clearly false. In fact, it is true just for the single value $c=\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$. Therefore, the model is correctly specified only in the trivial case when the distribution of $C$ is concentrated at this particular value. Of course, we are not interested in this case. What we want is the distribution of $C$ to be dominated by Lebesgue measure, having a nice pdf $\pi$.
Hence, defining $x=(x_1,\dots,x_n)$, the expression
$$
L_x(c) = \prod_{i=1}^n \left(c\,g(x_i)\right) \, ,
$$
taken as a function of $c$, for fixed $x$, does not correspond to a genuine likelihood function.
Everything after that inherits from this problem. In particular, the posterior computed with Bayes's Theorem is bogus. It's easy to see that: suppose that you have a proper prior
$$
\pi(c) = \frac{1}{c^2} \,I_{[1,\infty)}(c) \, .
$$
Note that $\int_0^\infty \pi(c)\,dc=1$. According to the computation presented in the example, the posterior should be
$$
\pi(c\mid x) \propto \frac{1}{c^{2-n}}\, I_{[1,\infty)}(c) \, .
$$
But if that is right, this posterior would be always improper, because
$$
\int_0^\infty \frac{1}{c^{2-n}}\,I_{[1,\infty)}(c)\,dc
$$
diverges for every sample size $n\geq 1$.
This is impossible: we know that if we start with a proper prior, our posterior can't be improper for every possible sample (it may be improper inside a set of null prior predictive probability). | Are bayesians slaves of the likelihood function?
The proposed statistical model may be described as follows: You have a known nonnegative integrable function $g:\mathbb{R}\to\mathbb{R}$, and a nonnegative random variable $C$. The random variables $X |
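A hedged numerical companion to the argument above (mine): for the simplest case n = 1, the would-be posterior c^(n-2) = 1/c has an integral over [1, M] that grows like log(M), so it cannot be normalised.
for (M in c(1e2, 1e4, 1e6)) {
  cat("integral of c^(1-2) over [1,", M, "]:",
      integrate(function(c) c^(1 - 2), 1, M)$value, "\n")  # ~ log(M), unbounded
}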
3,659 | Are bayesians slaves of the likelihood function? | There is an irony that the standard way to do Bayesian computation is to use frequentist analysis of MCMC samples. In this example we might consider $c$ to be closely related to the marginal likelihood, which we would like to calculate, but we are going to be Bayesian purists in the sense that we try to do the computation itself in a Bayesian way as well.
It is not common, but it is possible to do this integral in a Bayesian framework. This involves putting a prior on the function $g()$ (in practice a Gaussian process) evaluating the function at some points, conditioning upon these points and computing an integral over the posterior over $g()$. In this situation the likelihood involves evaluating $g()$ at a number of points, but $g()$ is otherwise unknown, therefore the likelihood is quite different to the likelihood given above. The method is demonstrated in this paper http://mlg.eng.cam.ac.uk/zoubin/papers/RasGha03.pdf
I don't think anything went wrong with Bayesian methodology. The likelihood as written treats $g()$ as known everywhere. If this were the case then there would be no statistical aspect to the problem. If $g()$ is assumed to be unknown except at a finite number of points Bayesian methodology works fine. | Are bayesians slaves of the likelihood function? | There is an irony that the standard way to do Bayesian computation is to use frequentist analysis of MCMC samples. In this example we might consider $c$ to be closely related to the marginal likeliho | Are bayesians slaves of the likelihood function?
There is an irony that the standard way to do Bayesian computation is to use frequentist analysis of MCMC samples. In this example we might consider $c$ to be closely related to the marginal likelihood, which we would like to calculate, but we are going to be Bayesian purists in the sense that we try to do the computation itself in a Bayesian way as well.
It is not common, but it is possible to do this integral in a Bayesian framework. This involves putting a prior on the function $g()$ (in practice a Gaussian process) evaluating the function at some points, conditioning upon these points and computing an integral over the posterior over $g()$. In this situation the likelihood involves evaluating $g()$ at a number of points, but $g()$ is otherwise unknown, therefore the likelihood is quite different to the likelihood given above. The method is demonstrated in this paper http://mlg.eng.cam.ac.uk/zoubin/papers/RasGha03.pdf
I don't think anything went wrong with Bayesian methodology. The likelihood as written treats $g()$ as known everywhere. If this were the case then there would be no statistical aspect to the problem. If $g()$ is assumed to be unknown except at a finite number of points Bayesian methodology works fine. | Are bayesians slaves of the likelihood function?
There is an irony that the standard way to do Bayesian computation is to use frequentist analysis of MCMC samples. In this example we might consider $c$ to be closely related to the marginal likeliho |
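What follows is not the method of the linked paper, only a heavily simplified hedged sketch of the idea in R: place a GP prior with an RBF kernel on g (here an assumed Gaussian bump), evaluate g at a few points, and read off the posterior mean of its integral in closed form.
g <- function(x) exp(-x^2 / 2)                    # assumed example integrand
a <- -5; b <- 5; ell <- 1                         # integration limits and kernel length-scale
xs <- seq(a, b, length.out = 21)                  # points where g is evaluated
K <- outer(xs, xs, function(u, v) exp(-(u - v)^2 / (2 * ell^2)))           # RBF Gram matrix
z <- ell * sqrt(2 * pi) * (pnorm((b - xs) / ell) - pnorm((a - xs) / ell))  # kernel integrated over [a, b]
est <- drop(z %*% solve(K + 1e-8 * diag(21), g(xs)))   # posterior mean of integral(g)
c(gp_estimate = est, exact = sqrt(2 * pi))             # should be close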
3,660 | Are bayesians slaves of the likelihood function? | The example is a little weird and contrived. The reason the likelihood goes awry is that g is a known function. The only unknown parameter is c, which is not part of the likelihood. Also, since g is known, the data give you no information about f. When do you see such a thing in practice? So the posterior is just proportional to the prior, and all the information about c is in the prior.
Okay, but think about it. Frequentists use maximum likelihood, and so frequentists sometimes rely on the likelihood function also. Well, you may say that the frequentist can estimate parameters in other ways. But this cooked-up problem has only one parameter c, and there is no information in the data about c. Since g is known, there is no statistical problem related to unknown parameters that can be gleaned from the data, period. | Are bayesians slaves of the likelihood function? | The example is a little weird and contrived. The reason the likelihood goes awry is because g is a known function. The only unknown parameter is c which is not part of the likelihood. Also since g
The example is a little weird and contrived. The reason the likelihood goes awry is that g is a known function. The only unknown parameter is c, which is not part of the likelihood. Also, since g is known, the data give you no information about f. When do you see such a thing in practice? So the posterior is just proportional to the prior, and all the information about c is in the prior.
Okay, but think about it. Frequentists use maximum likelihood, and so frequentists sometimes rely on the likelihood function also. Well, you may say that the frequentist can estimate parameters in other ways. But this cooked-up problem has only one parameter c, and there is no information in the data about c. Since g is known, there is no statistical problem related to unknown parameters that can be gleaned from the data, period. | Are bayesians slaves of the likelihood function?
The example is a little weird and contrived. The reason the likelihood goes awry is because g is a known function. The only unknown parameter is c which is not part of the likelihood. Also since g |
3,661 | Are bayesians slaves of the likelihood function? | We could extend the definition of possible knowns (analogous to the extension of data to allow for missing data for datum that was observed but lost) to include NULL (no data generated).
Suppose that you have a proper prior
$$
\pi(c) = \frac{1}{c^2} \,I_{[1,\infty)}(c) \, .
$$
Now define the data model for $x$:
If $c=\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$, then $f_{X_i\mid C}(x_i\mid c)=c\,g(x_i)$ for any $i$;
otherwise $f_{X_i\mid C}(x_i\mid c)=0$.
So the posterior would be 0 or 1 (proper) but the likelihood from the above data model is not available (because you cannot determine the condition required in the data model.)
So you do ABC.
Draw a "c" from the prior.
Now approximate $\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$ by some numerical integration and keep "c" if the absolute difference between that approximation and "c" is less than epsilon.
The kept "c"s will be an approximation of the true posterior.
(The accuracy of the approximation will depend on epsilon and the sufficiency of conditioning on that approximation.) | Are bayesians slaves of the likelihood function? | We could extend the definition of possible knowns (analogous to the extension of data to allow for missing data for datum that was observed but lost) to include NULL (no data generated).
Suppose tha | Are bayesians slaves of the likelihood function?
We could extend the definition of possible knowns (analogous to the extension of data to allow for missing data for datum that was observed but lost) to include NULL (no data generated).
Suppose that you have a proper prior
$$
\pi(c) = \frac{1}{c^2} \,I_{[1,\infty)}(c) \, .
$$
Now define the data model for $x$:
If $c=\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$, then $f_{X_i\mid C}(x_i\mid c)=c\,g(x_i)$ for any $i$;
otherwise $f_{X_i\mid C}(x_i\mid c)=0$.
So the posterior would be 0 or 1 (proper) but the likelihood from the above data model is not available (because you cannot determine the condition required in the data model.)
So you do ABC.
Draw a "c" from the prior.
Now approximate $\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$ by some numerical integration and keep "c" if the absolute difference between that approximation and "c" is less than epsilon.
The kept "c"s will be an approximation of the true posterior.
(The accuracy of the approximation will depend on epsilon and the sufficiency of conditioning on that approximation.) | Are bayesians slaves of the likelihood function?
We could extend the definition of possible knowns (analogous to the extension of data to allow for missing data for datum that was observed but lost) to include NULL (no data generated).
Suppose tha |
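A hedged R sketch of the ABC recipe above; g is my own assumed example, chosen so that 1/integral(g) lies inside the prior's support [1, Inf), and integrate() stands in for "some numerical integration".
set.seed(7)
g <- function(x) exp(-x^2 / 2) / 5             # assumed g with 1/integral(g) > 1
c.hat <- 1 / integrate(g, -Inf, Inf)$value     # numerical approximation, about 1.99
draws <- 1 / runif(1e5)                        # draws from the prior pi(c) = 1/c^2 on [1, Inf)
eps   <- 0.05
kept  <- draws[abs(draws - c.hat) < eps]       # ABC acceptance step
c(n.kept = length(kept), posterior.mean = mean(kept), c.hat = c.hat)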
3,662 | Are bayesians slaves of the likelihood function? | Wait, what? You have $$\pi(c|x) = \left( \Pi_i g(x_i) \right) \cdot c^n \pi(c) \,,$$ so it does depend on the values of $\{x_i\}$. Just because you hide the dependency in a "$\propto$" doesn't mean you can ignore it? | Are bayesians slaves of the likelihood function? | Wait, what? You have $$\pi(c|x) = \left( \Pi_i g(x_i) \right) \cdot c^n \pi(c) \,,$$ so it does depend on the values of $\{x_i\}$. Just because you hide the dependency in a "$\propto$" doesn't mean | Are bayesians slaves of the likelihood function?
Wait, what? You have $$\pi(c|x) = \left( \Pi_i g(x_i) \right) \cdot c^n \pi(c) \,,$$ so it does depend on the values of $\{x_i\}$. Just because you hide the dependency in a "$\propto$" doesn't mean you can ignore it? | Are bayesians slaves of the likelihood function?
Wait, what? You have $$\pi(c|x) = \left( \Pi_i g(x_i) \right) \cdot c^n \pi(c) \,,$$ so it does depend on the values of $\{x_i\}$. Just because you hide the dependency in a "$\propto$" doesn't mean |
3,663 | Why does collecting data until finding a significant result increase Type I error rate? | The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog:
I'll flip you to see who pays for dinner.
OK, I call heads.
Rats, you won. Best two out of three?
To understand this better, consider a simplified--but realistic--model of this sequential procedure. Suppose you will start with a "trial run" of a certain number of observations, but are willing to continue experimenting longer in order to get a p-value less than $0.05$. The null hypothesis is that each observation $X_i$ comes (independently) from a standard Normal distribution. The alternative is that the $X_i$ come independently from a unit-variance normal distribution with a nonzero mean. The test statistic will be the mean of all $n$ observations, $\bar X$, divided by their standard error, $1/\sqrt{n}$. For a two-sided test, the critical values are the $0.025$ and $0.975$ percentage points of the standard Normal distribution, $ Z_\alpha=\pm 1.96$ approximately.
This is a good test--for a single experiment with a fixed sample size $n$. It has exactly a $5\%$ chance of rejecting the null hypothesis, no matter what $n$ might be.
Let's algebraically convert this to an equivalent test based on the sum of all $n$ values, $$S_n=X_1+X_2+\cdots+X_n = n\bar X.$$
Thus, the data are "significant" when
$$\left| Z_\alpha\right| \le \left| \frac{\bar X}{1/\sqrt{n}} \right| = \left| \frac{S_n}{n/\sqrt{n}} \right| = \left| S_n \right| / \sqrt{n};$$
that is,
$$\left| Z_\alpha\right| \sqrt{n} \le \left| S_n \right| .\tag{1}$$
If we're smart, we'll cut our losses and give up once $n$ grows very large and the data still haven't entered the critical region.
This describes a random walk $S_n$. The formula $(1)$ amounts to erecting a curved parabolic "fence," or barrier, around the plot of the random walk $(n, S_n)$: the result is "significant" if any point of the random walk hits the fence.
It is a property of random walks that if we wait long enough, it's very likely that at some point the result will look significant.
Here are 20 independent simulations out to a limit of $n=5000$ samples. They all begin testing at $n=30$ samples, at which point we check whether each point lies outside the barriers that have been drawn according to formula $(1)$. From the point at which the statistical test is first "significant," the simulated data are colored red.
You can see what's going on: the random walk whips up and down more and more as $n$ increases. The barriers are spreading apart at about the same rate--but not fast enough always to avoid the random walk.
In 20% of these simulations, a "significant" difference was found--usually quite early on--even though in every one of them the null hypothesis is absolutely correct! Running more simulations of this type indicates that the true test size is close to $25\%$ rather than the intended value of $\alpha=5\%$: that is, your willingness to keep looking for "significance" up to a sample size of $5000$ gives you a $25\%$ chance of rejecting the null even when the null is true.
Notice that in all four "significant" cases, as testing continued, the data stopped looking significant at some points. In real life, an experimenter who stops early is losing the chance to observe such "reversions." This selectiveness through optional stopping biases the results.
In honest-to-goodness sequential tests, the barriers are lines. They spread faster than the curved barriers shown here.
library(data.table)
library(ggplot2)
alpha <- 0.05 # Test size
n.sim <- 20 # Number of simulated experiments
n.buffer <- 5e3 # Maximum experiment length
i.min <- 30 # Initial number of observations
#
# Generate data.
#
set.seed(17)
X <- data.table(
n = rep(0:n.buffer, n.sim),
Iteration = rep(1:n.sim, each=n.buffer+1),
X = rnorm((1+n.buffer)*n.sim)
)
#
# Perform the testing.
#
Z.alpha <- -qnorm(alpha/2)
X[, Z := Z.alpha * sqrt(n)]
X[, S := c(0, cumsum(X))[-(n.buffer+1)], by=Iteration]
X[, Trigger := abs(S) >= Z & n >= i.min]
X[, Significant := cumsum(Trigger) > 0, by=Iteration]
#
# Plot the results.
#
ggplot(X, aes(n, S, group=Iteration)) +
geom_path(aes(n,Z)) + geom_path(aes(n,-Z)) +
geom_point(aes(color=!Significant), size=1/2) +
facet_wrap(~ Iteration) | Why does collecting data until finding a significant result increase Type I error rate? | The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog:
I'll flip you to see who pays for dinner.
OK, I call heads.
Rats, you won. Be | Why does collecting data until finding a significant result increase Type I error rate?
The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog:
I'll flip you to see who pays for dinner.
OK, I call heads.
Rats, you won. Best two out of three?
To understand this better, consider a simplified--but realistic--model of this sequential procedure. Suppose you will start with a "trial run" of a certain number of observations, but are willing to continue experimenting longer in order to get a p-value less than $0.05$. The null hypothesis is that each observation $X_i$ comes (independently) from a standard Normal distribution. The alternative is that the $X_i$ come independently from a unit-variance normal distribution with a nonzero mean. The test statistic will be the mean of all $n$ observations, $\bar X$, divided by their standard error, $1/\sqrt{n}$. For a two-sided test, the critical values are the $0.025$ and $0.975$ percentage points of the standard Normal distribution, $ Z_\alpha=\pm 1.96$ approximately.
This is a good test--for a single experiment with a fixed sample size $n$. It has exactly a $5\%$ chance of rejecting the null hypothesis, no matter what $n$ might be.
Let's algebraically convert this to an equivalent test based on the sum of all $n$ values, $$S_n=X_1+X_2+\cdots+X_n = n\bar X.$$
Thus, the data are "significant" when
$$\left| Z_\alpha\right| \le \left| \frac{\bar X}{1/\sqrt{n}} \right| = \left| \frac{S_n}{n/\sqrt{n}} \right| = \left| S_n \right| / \sqrt{n};$$
that is,
$$\left| Z_\alpha\right| \sqrt{n} \le \left| S_n \right| .\tag{1}$$
If we're smart, we'll cut our losses and give up once $n$ grows very large and the data still haven't entered the critical region.
This describes a random walk $S_n$. The formula $(1)$ amounts to erecting a curved parabolic "fence," or barrier, around the plot of the random walk $(n, S_n)$: the result is "significant" if any point of the random walk hits the fence.
It is a property of random walks that if we wait long enough, it's very likely that at some point the result will look significant.
Here are 20 independent simulations out to a limit of $n=5000$ samples. They all begin testing at $n=30$ samples, at which point we check whether each point lies outside the barriers that have been drawn according to formula $(1)$. From the point at which the statistical test is first "significant," the simulated data are colored red.
You can see what's going on: the random walk whips up and down more and more as $n$ increases. The barriers are spreading apart at about the same rate--but not fast enough always to avoid the random walk.
In 20% of these simulations, a "significant" difference was found--usually quite early on--even though in every one of them the null hypothesis is absolutely correct! Running more simulations of this type indicates that the true test size is close to $25\%$ rather than the intended value of $\alpha=5\%$: that is, your willingness to keep looking for "significance" up to a sample size of $5000$ gives you a $25\%$ chance of rejecting the null even when the null is true.
Notice that in all four "significant" cases, as testing continued, the data stopped looking significant at some points. In real life, an experimenter who stops early is losing the chance to observe such "reversions." This selectiveness through optional stopping biases the results.
In honest-to-goodness sequential tests, the barriers are lines. They spread faster than the curved barriers shown here.
library(data.table)
library(ggplot2)
alpha <- 0.05 # Test size
n.sim <- 20 # Number of simulated experiments
n.buffer <- 5e3 # Maximum experiment length
i.min <- 30 # Initial number of observations
#
# Generate data.
#
set.seed(17)
X <- data.table(
n = rep(0:n.buffer, n.sim),
Iteration = rep(1:n.sim, each=n.buffer+1),
X = rnorm((1+n.buffer)*n.sim)
)
#
# Perform the testing.
#
Z.alpha <- -qnorm(alpha/2)
X[, Z := Z.alpha * sqrt(n)]
X[, S := c(0, cumsum(X))[-(n.buffer+1)], by=Iteration]
X[, Trigger := abs(S) >= Z & n >= i.min]
X[, Significant := cumsum(Trigger) > 0, by=Iteration]
#
# Plot the results.
#
ggplot(X, aes(n, S, group=Iteration)) +
geom_path(aes(n,Z)) + geom_path(aes(n,-Z)) +
geom_point(aes(color=!Significant), size=1/2) +
facet_wrap(~ Iteration) | Why does collecting data until finding a significant result increase Type I error rate?
The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog:
I'll flip you to see who pays for dinner.
OK, I call heads.
Rats, you won. Be |
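A compact stand-alone check (mine, much smaller than the simulation above) of the quoted rejection rate of roughly 25% when one keeps testing after every new observation from n = 30 up to n = 5000:
set.seed(17)
n <- 1:5000
hits <- replicate(1000, {
  s <- cumsum(rnorm(5000))
  any(abs(s[30:5000]) > qnorm(0.975) * sqrt(n[30:5000]))
})
mean(hits)   # roughly 0.25, far above the nominal 0.05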
3,664 | Why does collecting data until finding a significant result increase Type I error rate? | People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothesis, a p value is uniformly distributed between 0 and 1 and can bounce around quite a bit in that range.
I've simulated some data in R (my R skills are quite basic). In this simulation, I collect 5 data points - each with a randomly selected group membership (0 or 1) and each with a randomly selected outcome measure ~N(0,1). Starting on participant 6, I conduct a t-test at every iteration.
df <- data.frame(group = c(0, 1, 0, 1, 0),      # 5 initial points; both groups
                 outcome = rnorm(5), p = NA)    #   represented so t.test() can run
for (i in 6:150) {
  df[i, 1] = round(runif(1))                    # randomly selected group (0 or 1)
  df[i, 2] = rnorm(1)                           # outcome ~ N(0, 1)
  df[i, 3] = t.test(df[, 2] ~ df[, 1])$p.value  # t-test at every iteration
}
plot(6:150, df[6:150, 3], type = "l", xlab = "sample size", ylab = "p value")
abline(h = 0.05, lty = 2)                       # no seed is set, so each run differs
The p values are in this figure. Notice that I find significant results when the sample size is around 70-75. If I stop there, I'll end up believing that my findings are significant because I'll have missed the fact that my p values jumped back up with a larger sample (this actually happened to me once with real data). Since I know both populations have a mean of 0, this must be a false positive.
This is the problem with adding data until p < .05. If you conduct enough tests, p will eventually cross the .05 threshold and you can find a significant effect in any data set. | Why does collecting data until finding a significant result increase Type I error rate? | People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothes
People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothesis, a p value is uniformly distributed between 0 and 1 and can bounce around quite a bit in that range.
I've simulated some data in R (my R skills are quite basic). In this simulation, I collect 5 data points - each with a randomly selected group membership (0 or 1) and each with a randomly selected outcome measure ~N(0,1). Starting on participant 6, I conduct a t-test at every iteration.
df <- data.frame(group = c(0, 1, 0, 1, 0),      # 5 initial points; both groups
                 outcome = rnorm(5), p = NA)    #   represented so t.test() can run
for (i in 6:150) {
  df[i, 1] = round(runif(1))                    # randomly selected group (0 or 1)
  df[i, 2] = rnorm(1)                           # outcome ~ N(0, 1)
  df[i, 3] = t.test(df[, 2] ~ df[, 1])$p.value  # t-test at every iteration
}
plot(6:150, df[6:150, 3], type = "l", xlab = "sample size", ylab = "p value")
abline(h = 0.05, lty = 2)                       # no seed is set, so each run differs
The p values are in this figure. Notice that I find significant results when the sample size is around 70-75. If I stop there, I'll end up believing that my findings are significant because I'll have missed the fact that my p values jumped back up with a larger sample (this actually happened to me once with real data). Since I know both populations have a mean of 0, this must be a false positive.
This is the problem with adding data until p < .05. If you conduct enough tests, p will eventually cross the .05 threshold and you can find a significant effect in any data set. | Why does collecting data until finding a significant result increase Type I error rate?
People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothes |
3,665 | Why does collecting data until finding a significant result increase Type I error rate? | This answer only concerns the probability of ultimately getting a "significant" result and the distribution of the time to this event under @whuber's model.
As in the model of @whuber, let $S(t)=X_1 + X_2 + \dots + X_t$ denote the value of the test statistic after $t$ observations have been collected and assume that the observations $X_1,X_2,\dots$ are iid standard normal. Then
$$
S(t+h)|S(t)=s_0 \sim N(s_0, h), \tag{1}
$$
such that
$S(t)$ behaves like a continuous-time standard Brownian motion, if we for the moment ignore the fact that we have a discrete-time process (left plot below).
Let $T$ denote the first passage time of $S(t)$ across the time-dependent barriers $\pm z_{\alpha/2}\sqrt{t}$ (the number of observations needed before the test turns significant).
Consider the transformed process $Y(\tau)$ obtained by scaling $S(t)$ by its standard deviation at time $t$ and by letting the new time scale $\tau=\ln t$ such that
$$
Y(\tau)=\frac{S(t(\tau))}{\sqrt{t(\tau)}}=e^{-\tau/2}S(e^\tau). \tag{2}
$$
It follows from (1) and (2) that $Y(\tau+\delta)$ is normally distributed with
\begin{align}
E(Y(\tau+\delta)|Y(\tau)=y_0)
&=E(e^{-(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2})
\\&=y_0e^{-\delta/2} \tag{3}
\end{align}
and
\begin{align}
\operatorname{Var}(Y(\tau+\delta)|Y(\tau)=y_0)
&=\operatorname{Var}(e^{-(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2})
\\&=1-e^{-\delta}, \tag{4}
\end{align}
that is, $Y(\tau)$ is a zero-mean Ornstein-Uhlenbeck (O-U) process with a stationary variance of 1 and return time 2 (right plot below). An almost identical transformation is given in Karlin & Taylor (1981), eq. 5.23.
For the transformed model, the barriers become time-independent constants equal to $\pm z_{\alpha/2}$. It is then known (Nobile et al., 1985; Ricciardi & Sato, 1988) that the first passage time $\mathcal{T}$ of the O-U process $Y(\tau)$ across these barriers is approximately exponentially distributed with some parameter $\lambda$ (depending on the barriers at $\pm z_{\alpha/2}$), estimated as $\hat\lambda=0.125$ for $\alpha=0.05$ below. There is also an extra point mass of size $\alpha$ at $\tau=0$. "Rejection" of $H_0$ eventually happens with probability 1. Hence, $T=e^\mathcal{T}$ (the number of observations that need to be collected before getting a "significant" result) is approximately Pareto distributed with density $f_T(t)=f_\mathcal{T}(\ln t)\frac{d\tau}{dt}=\lambda/t^{\lambda+1}$. The expected value is
$$
ET\approx 1+(1-\alpha)\int_0^\infty e^\tau \lambda e^{-\lambda \tau}d\tau.\tag{5}
$$
Thus, $T$ has a finite expectation only if $\lambda>1$ (for sufficiently large levels of significance $\alpha$).
The above ignores the fact that $T$ for the real model is discrete and that the real process is discrete- rather than continuous-time. Hence, the above model overestimates the probability that the barrier has been crossed (and underestimates $ET$) because the continuous-time sample path may cross the barrier only temporarily in-between two adjacent discrete time points $t$ and $t+1$. But such events should have negligible probability for large $t$.
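As a rough numerical check (an added sketch, not part of the original derivation; the cut-off t0 = 10 and the simulation sizes are arbitrary, and the estimate ignores the discreteness issue just mentioned), the Pareto tail index can be estimated from simulated stopping times by a censored maximum-likelihood fit:
set.seed(1)
nmax <- 1e3; nsim <- 1e4; alpha <- .05
n <- 1:nmax
stop_time <- replicate(nsim, {
  s <- cumsum(rnorm(nmax))
  which(abs(s) > qnorm(1 - alpha/2) * sqrt(n))[1]      # NA if no "significant" result within nmax
})
t0 <- 10                                               # ignore the early point mass
t  <- ifelse(is.na(stop_time), nmax, stop_time)        # runs that never crossed are censored at nmax
alive  <- t > t0
events <- alive & !is.na(stop_time)
lambda_hat <- sum(events) / sum(log(t[alive] / t0))    # censored MLE for the Pareto/exponential tail
lambda_hat                                             # should be roughly in the ballpark of the 0.125 used above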
The following figure shows a Kaplan-Meier estimate of $P(T>t)$ on log-log scale together with the survival curve for the exponential continuous-time approximation (red line).
R code:
# Fig 1
par(mfrow=c(1,2),mar=c(4,4,.5,.5))
set.seed(16)
n <- 20
npoints <- n*100 + 1
t <- seq(1,n,len=npoints)
subset <- 1:n*100-99
deltat <- c(1,diff(t))
z <- qnorm(.975)
s <- cumsum(rnorm(npoints,sd=sqrt(deltat)))
plot(t,s,type="l",ylim=c(-1,1)*z*sqrt(n),ylab="S(t)",col="grey")
points(t[subset],s[subset],pch="+")
curve(sqrt(t)*z,xname="t",add=TRUE)
curve(-sqrt(t)*z,xname="t",add=TRUE)
tau <- log(t)
y <- s/sqrt(t)
plot(tau,y,type="l",ylim=c(-2.5,2.5),col="grey",xlab=expression(tau),ylab=expression(Y(tau)))
points(tau[subset],y[subset],pch="+")
abline(h=c(-z,z))
# Fig 2
nmax <- 1e+3
nsim <- 1e+5
alpha <- .05
t <- numeric(nsim)
n <- 1:nmax
for (i in 1:nsim) {
s <- cumsum(rnorm(nmax))
t[i] <- which(abs(s) > qnorm(1-alpha/2)*sqrt(n))[1]
}
delta <- ifelse(is.na(t),0,1)
t[delta==0] <- nmax + 1
library(survival)
par(mfrow=c(1,1),mar=c(4,4,.5,.5))
plot(survfit(Surv(t,delta)~1),log="xy",xlab="t",ylab="P(T>t)",conf.int=FALSE)
curve((1-alpha)*exp(-.125*(log(x))),add=TRUE,col="red",from=1,to=nmax)
3,666 | Why does collecting data until finding a significant result increase Type I error rate? | It needs to be said that the above discussion is for a frequentist world view for which multiplicity comes from the chances you give data to be more extreme, not from the chances you give an effect to exist. The root cause of the problem is that p-values and type I errors use backwards-time backwards-information flow conditioning, which makes it important "how you got here" and what could have happened instead. On the other hand, the Bayesian paradigm encodes skepticism about an effect on the parameter itself, not on the data. That means each posterior probability is interpreted the same way whether or not you computed another posterior probability of an effect 5 minutes ago. More details and a simple simulation may be found at http://www.fharrell.com/2017/10/continuous-learning-from-data-no.html
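To make the contrast concrete, here is a small added sketch (not the simulation from the linked post; the prior, the effect size, and the error SD are arbitrary assumptions) of recomputing a posterior probability after every new observation with a conjugate normal prior:
set.seed(1)
prior_mean <- 0; prior_sd <- 1; sigma <- 1
x <- rnorm(200, mean = 0.2)                      # data with a small assumed true effect
post_prob <- sapply(seq_along(x), function(n) {
  post_var  <- 1 / (1 / prior_sd^2 + n / sigma^2)
  post_mean <- post_var * (prior_mean / prior_sd^2 + sum(x[1:n]) / sigma^2)
  1 - pnorm(0, post_mean, sqrt(post_var))        # P(mu > 0 | first n observations)
})
plot(post_prob, type = "l", xlab = "n", ylab = "P(mu > 0 | data)")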
3,667 | Why does collecting data until finding a significant result increase Type I error rate? | We consider a researcher collecting a sample of size $n$, $x_1$, to test some hypothesis $\theta=\theta_0$. He rejects if a suitable test statistic $t$ exceeds its level-$\alpha$ critical value $c$. If it does not, he collects another sample of size $n$, $x_2$, and rejects if the test rejects for the combined sample $(x_1,x_2)$. If he still obtains no rejection, he proceeds in this fashion, up to $K$ times in total.
This problem seems to already have been addressed by P. Armitage, C. K. McPherson and B. C. Rowe (1969), Journal of the Royal Statistical Society. Series A (132), 2, 235-244: "Repeated Significance Tests on Accumulating Data".
The Bayesian point of view on this issue, also discussed here, is, by the way, discussed in Berger and Wolpert (1988), "The Likelihood Principle", Section 4.2.
Here is a partial replication of Armitage et al's results (code below), which shows how significance levels inflate when $K>1$, as well as possible correction factors to restore level-$\alpha$ critical values. Note the grid search takes a while to run---the implementation may be rather inefficient.
Size of the standard rejection rule as a function of the number of attempts $K$
Size as a function of increasing critical values for different $K$
Adjusted critical values to restore 5% tests as a function of $K$
reps <- 50000
K <- c(1:5, seq(10,50,5), seq(60,100,10)) # the number of attempts a researcher gives herself
alpha <- 0.05
cv <- qnorm(1-alpha/2)
grid.scale.cv <- cv*seq(1,1.5,by=.01) # scaled critical values over which we check rejection rates
max.g <- length(grid.scale.cv)
results <- matrix(NA, nrow = length(K), ncol=max.g)
for (kk in 1:length(K)){
g <- 1
dev <- 0
K.act <- K[kk]
while (dev > -0.01 & g <= max.g){
rej <- rep(NA,reps)
for (i in 1:reps){
k <- 1
accept <- 1
x <- rnorm(K.act)
while(k <= K.act & accept==1){
# each of our test statistics for "samples" of size n are N(0,1) under H0, so just scaling their sum by sqrt(k) gives another N(0,1) test statistic
rej[i] <- abs(1/sqrt(k)*sum(x[1:k])) > grid.scale.cv[g]
accept <- accept - rej[i]
k <- k+1
}
}
rej.rate <- mean(rej)
dev <- rej.rate-alpha
results[kk,g] <- rej.rate
g <- g+1
}
}
plot(K,results[,1], type="l")
matplot(grid.scale.cv,t(results), type="l")
abline(h=0.05)
cv.a <- data.frame(K,adjusted.cv=grid.scale.cv[apply(abs(results-alpha),1,which.min)])
plot(K,cv.a$adjusted.cv, type="l")
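For instance (an added illustration, not part of the original replication), the adjusted critical value for a researcher who allows herself, say, K = 5 attempts can be read off the cv.a table computed above, together with the nominal per-test alpha it corresponds to:
cv.a[cv.a$K == 5, ]                              # adjusted two-sided critical value for K = 5
2 * (1 - pnorm(cv.a$adjusted.cv[cv.a$K == 5]))   # nominal per-test alpha keeping the overall size near 5%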
3,668 | Clustering with K-Means and EM: how are they related? | K means
Hard-assigns each data point to one particular cluster at convergence.
It makes use of the squared L2 norm when optimizing: it minimizes, over the assignments and the centroid coordinates, the sum of squared L2 distances between each point and its centroid.
EM
Soft-assigns each point to clusters (so it gives a probability of any point belonging to any centroid).
It doesn't depend on the L2 norm, but is based on the Expectation, i.e., the probability of the point belonging to a particular cluster. (It is K-means' reliance on the L2 norm that biases it towards spherical clusters.) A small R illustration of the contrast follows below.
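An added sketch of the hard vs. soft assignment contrast (it assumes the mclust package is installed and uses arbitrary simulated data):
library(mclust)
set.seed(42)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 3), ncol = 2))
km  <- kmeans(x, centers = 2)   # hard assignment: exactly one cluster label per point
head(km$cluster)
gmm <- Mclust(x, G = 2)         # EM for a Gaussian mixture: soft assignment
head(round(gmm$z, 3))           # posterior membership probabilities per point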
3,669 | Clustering with K-Means and EM: how are they related? | There is no "k-means algorithm". There is MacQueen's algorithm for k-means, the Lloyd/Forgy algorithm for k-means, the Hartigan-Wong method, ...
There also isn't "the" EM-algorithm. It is a general scheme of repeatedly computing expected likelihoods and then maximizing the model. The most popular variant of EM is also known as "Gaussian Mixture Modeling" (GMM), where the model is a mixture of multivariate Gaussian distributions.
One can consider Lloyd's algorithm to consist of two steps:
the E-step, where each object is assigned to the nearest centroid, i.e. to its most likely cluster.
the M-step, where the model (=centroids) are recomputed (= least squares optimization).
... iterating these two steps, as done by Lloyd, makes this effectively an instance of the general EM scheme (a bare-bones R sketch of the two steps is given below the list). It differs from GMM in that:
it uses hard partitioning, i.e. each object is assigned to exactly one cluster
the model consists of centroids only; no covariances or variances are taken into account
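The added sketch mentioned above, with the E and M steps written out explicitly (illustrative only; it ignores edge cases such as empty clusters):
lloyd <- function(x, k, iters = 20) {
  centroids <- x[sample(nrow(x), k), , drop = FALSE]
  for (i in seq_len(iters)) {
    # E-step: hard-assign each point to its nearest centroid
    d <- as.matrix(dist(rbind(centroids, x)))[-(1:k), 1:k]
    z <- max.col(-d)
    # M-step: recompute each centroid as the mean of its assigned points (least squares)
    centroids <- do.call(rbind, lapply(1:k, function(j) colMeans(x[z == j, , drop = FALSE])))
  }
  list(cluster = z, centers = centroids)
}
cl <- lloyd(as.matrix(iris[, 1:4]), k = 3)   # example call on a built-in data set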
3,670 | Clustering with K-Means and EM: how are they related? | Here is an example, if I were doing this in mplus, which might be helpful and complement more comprehensive answers:
Say I have 3 continuous variables and want to identify clusters based on these. I would specify a mixture model (more specifically in this case, a latent profile model), assuming conditional independence (the observed variables are independent, given cluster membership) as:
Model:
%Overall%
v1* v2* v3*; ! Freely estimated variances
[v1 v2 v3]; ! Freely estimated means
I would run this model multiple times, each time specifying a different number of clusters, and choose the solution I like the most (to do this is a vast topic on its own).
To then run k-means, I would specify the following model:
Model:
%Overall%
v1@0 v2@0 v3@0; ! Variances constrained as zero
[v1 v2 v3]; ! Freely estimated means
So class membership is only based on distance to the means of the observed variables. As stated in other responses, the variances have nothing to do with it.
The nice thing about doing this in mplus is that these are nested models, and so you can directly test if the constraints result in worse fit or not, in addition to being able to compare discordance in classification between the two methods. Both of these models, by the way, can be estimated using an EM algorithm, so the difference is really more about the model.
If you think in 3-D space, the 3 means make a point...and the variances define the three axes of an ellipsoid running through that point. If all three variances are the same, you would get a sphere.
3,671 | What are alternatives of Gradient Descent? | This is more a problem to do with the function being minimized than the method used. If finding the true global minimum is important, then use a method such as simulated annealing. This will be able to find the global minimum, but may take a very long time to do so.
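As an added illustration (the test function and settings are arbitrary), base R's optim() already provides a simulated-annealing variant that can escape shallow local minima where a gradient-based method gets stuck:
f <- function(x) 20 + sum(x^2 - 10 * cos(2 * pi * x))   # Rastrigin function in two dimensions (many local minima)
set.seed(1)
optim(par = c(3, -3), fn = f, method = "SANN",
      control = list(maxit = 20000))$par                # stochastic search; may land near the global minimum at (0, 0)
optim(par = c(3, -3), fn = f, method = "BFGS")$par      # gradient-based; typically stops in a nearby local minimum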
In the case of neural nets, local minima are not necessarily that much of a problem. Some of the local minima are due to the fact that you can get a functionally identical model by permuting the hidden layer units, or negating the inputs and output weights of the network etc. Also if a local minimum is only slightly non-optimal, then the difference in performance will be minimal and so it won't really matter. Lastly, and this is an important point, the key problem in fitting a neural network is over-fitting, so aggressively searching for the global minimum of the cost function is likely to result in overfitting and a model that performs poorly.
Adding a regularisation term, e.g. weight decay, can help to smooth out the cost function, which can reduce the problem of local minima a little, and is something I would recommend anyway as a means of avoiding overfitting.
The best method, however, of avoiding local minima in neural networks is to use a Gaussian Process model (or a Radial Basis Function neural network), both of which have fewer problems with local minima.
3,672 | What are alternatives of Gradient Descent? | Gradient descent is an optimization algorithm.
There are many optimization algorithms that operate on a fixed number of real values that are correlated (non-separable). We can divide them roughly in 2 categories: gradient-based optimizers and derivative-free optimizers. Usually you want to use the gradient to optimize neural networks in a supervised setting because that is significantly faster than derivative-free optimization. There are numerous gradient-based optimization algorithms that have been used to optimize neural networks:
Stochastic Gradient Descent (SGD), minibatch SGD, ...: You don't have to evaluate the gradient for the whole training set but only for one sample or a minibatch of samples, this is usually much faster than batch gradient descent. Minibatches have been used to smooth the gradient and parallelize the forward and backpropagation. The advantage over many other algorithms is that each iteration is in O(n) (n is the number of weights in your NN). SGD usually does not get stuck in local minima (!) because it is stochastic.
Nonlinear Conjugate Gradient: seems to be very successful in regression, O(n), requires the batch gradient (hence, might not be the best choice for huge datasets)
L-BFGS: seems to be very successful in classification, uses Hessian approximation, requires the batch gradient
Levenberg-Marquardt Algorithm (LMA): This is actually the best optimization algorithm that I know. It has the disadvantage that its complexity is roughly O(n^3). Don't use it for large networks!
And there have been many other algorithms proposed for optimization of neural networks, you could google for Hessian-free optimization or v-SGD (there are many types of SGD with adaptive learning rates, see e.g. here).
Optimization for NNs is not a solved problem! In my experience the biggest challenge is not to find a good local minimum. However, the challenges are to get out of very flat regions, deal with ill-conditioned error functions etc. That is the reason why LMA and other algorithms that use approximations of the Hessian usually work so well in practice and people try to develop stochastic versions that use second order information with low complexity. However, often a very well tuned parameter set for minibatch SGD is better than any complex optimization algorithm.
Usually you don't want to find a global optimum, because that usually amounts to overfitting the training data. (A toy minibatch SGD example is sketched below.)
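An added minibatch SGD sketch on a plain linear least-squares problem (illustrative only, not NN code; the learning rate and batch size are arbitrary):
set.seed(1)
n <- 1000; p <- 5
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))
beta_true <- rnorm(p)
y <- drop(X %*% beta_true + rnorm(n, sd = 0.1))
beta <- rep(0, p); lr <- 0.05; batch <- 32
for (epoch in 1:50) {
  idx <- sample(n)                                              # reshuffle each epoch
  for (b in split(idx, ceiling(seq_along(idx) / batch))) {
    Xb <- X[b, , drop = FALSE]
    g  <- crossprod(Xb, drop(Xb %*% beta) - y[b]) / length(b)   # minibatch gradient of 0.5 * mean squared error
    beta <- beta - lr * drop(g)
  }
}
max(abs(beta - beta_true))                                      # should be close to zero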
3,673 | What are alternatives of Gradient Descent? | An interesting alternative to gradient descent is the population-based training algorithms such as the evolutionary algorithms (EA) and particle swarm optimisation (PSO). The basic idea behind population-based approaches is that a population of candidate solutions (NN weight vectors) is created, and the candidate solutions iteratively explore the search space, exchanging information, and eventually converge on a minimum. Because many starting points (candidate solutions) are used, the chances of converging on the global minimum are significantly increased. PSO and EA have been shown to perform very competitively, often (albeit not always) outperforming gradient descent on complex NN training problems.
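An added, deliberately stripped-down sketch of the population idea (plain mutation and truncation selection only; it omits the crossover and information exchange of real EA/PSO implementations):
evolve <- function(fn, dim, pop = 30, gens = 200, sigma = 0.5) {
  P <- matrix(rnorm(pop * dim, sd = 3), pop, dim)                    # random initial population
  for (g in 1:gens) {
    children <- P + matrix(rnorm(pop * dim, sd = sigma), pop, dim)   # mutation
    all_ind  <- rbind(P, children)
    fit      <- apply(all_ind, 1, fn)
    P        <- all_ind[order(fit)[1:pop], , drop = FALSE]           # selection: keep the best
  }
  P[1, ]
}
set.seed(1)
evolve(function(x) 20 + sum(x^2 - 10 * cos(2 * pi * x)), dim = 2)    # should end up near the global minimum at (0, 0)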
3,674 | What are alternatives of Gradient Descent? | I know this thread is quite old and others have done a great job to explain concepts like local minima, overfitting etc. However, as OP was looking for an alternative solution, I will try to contribute one and hope it will inspire more interesting ideas.
The idea is to replace every weight w with w + t, where t is a random number following a Gaussian distribution. The final output of the network is then the average output over all possible values of t. This can be done analytically. You can then optimize the problem either with gradient descent or LMA or other optimization methods. Once the optimization is done, you have two options. One option is to reduce the sigma in the Gaussian distribution and do the optimization again and again until sigma reaches 0; then you will have a better local minimum (but potentially it could cause overfitting). Another option is to keep using the one with the random number in its weights; it usually has better generalization properties.
The first approach is an optimization trick (I call it convolutional tunneling, as it uses convolution over the parameters to change the target function): it smooths out the surface of the cost function landscape and gets rid of some of the local minima, thus making it easier to find the global minimum (or a better local minimum).
The second approach is related to noise injection (on weights). Notice that this is done analytically, meaning that the final result is one single network, instead of multiple networks.
The following are example outputs for the two-spirals problem. The network architecture is the same for all three of them: there is only one hidden layer of 30 nodes, and the output layer is linear. The optimization algorithm used is LMA. The left image is for the vanilla setting; the middle uses the first approach (namely repeatedly reducing sigma towards 0); the right uses sigma = 2.
You can see that the vanilla solution is the worst, the convolutional tunneling does a better job, and the noise injection (with convolutional tunneling) is the best (in terms of generalization property).
Both convolutional tunneling and the analytical way of noise injection are my original ideas. Maybe they are alternatives someone might be interested in. The details can be found in my paper Combining Infinity Number Of Neural Networks Into One. Warning: I am not a professional academic writer and the paper is not peer reviewed. If you have questions about the approaches I mentioned, please leave a comment.
3,675 | What are alternatives of Gradient Descent? | When it comes to Global Optimisation tasks (i.e. attempting to find a global minimum of an objective function) you might want to take a look at:
Pattern Search (also known as direct search, derivative-free search, or black-box search), which uses a pattern (set of vectors ${\{v_i\}}$) to determine the points to search at next iteration.
Genetic Algorithm that uses the concept of mutation, crossover and selection to define the population of points to be evaluated at next iteration of the optimisation.
Particle Swarm Optimisation that defines a set of particles that "walk" through the space searching for the minimum.
Surrogate Optimisation that uses a surrogate model to approximate the objective function. This method can be used when the objective function is expensive to evaluate.
Multi-objective Optimisation (also known as Pareto optimisation) which can be used for the problem that cannot be expressed in a form that has a single objective function (but rather a vector of objectives).
Simulated Annealing, which uses the concept of annealing (or temperature) to trade-off exploration and exploitation. It proposes new points for evaluation at each iteration, but as the number of iteration increases, the "temperature" drops and the algorithm becomes less and less likely to explore the space thus "converging" towards its current best candidate.
As mentioned above, Simulated Annealing, Particle Swarm Optimisation and Genetic Algorithms are good global optimisation algorithms that navigate well through huge search spaces and, unlike Gradient Descent, do not need any information about the gradient, and can be successfully used with black-box objective functions and problems that require running simulations.
3,676 | What are alternatives of Gradient Descent? | Extreme Learning Machines. Essentially, they are a neural network where the weights connecting the inputs to the hidden nodes are assigned randomly and never updated. The weights between the hidden nodes and the outputs are learned in a single step by solving a linear equation (matrix inverse).
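An added minimal sketch of this idea for 1-D regression (the hidden-layer size and the small ridge term are arbitrary choices):
set.seed(1)
n <- 200; H <- 50
x <- seq(-3, 3, length.out = n)
y <- sin(2 * x) + rnorm(n, sd = 0.1)
W <- matrix(rnorm(2 * H), nrow = 2)                     # random input-to-hidden weights (bias + slope), never updated
Z <- tanh(cbind(1, x) %*% W)                            # hidden-layer activations, n x H
beta <- solve(t(Z) %*% Z + 1e-6 * diag(H), t(Z) %*% y)  # output weights from one (ridge-stabilised) linear solve
mean((y - Z %*% beta)^2)                                # training error of the fitted network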
3,677 | What are alternatives of Gradient Descent? | (1) Bipropagation is a semi-gradient descent algorithm much faster than backpropagation. It solves the XOR problem each time and is 20 times faster than the fastest attempt of backpropagation.
(2) Border Pairs method (BPM) is a totally non-gradient-descent algorithm with many advantages over backpropagation:
it finds near-optimal NN size
it uses only useful patterns
it can also remove noise along the way
and ...
More:
https://www.researchgate.net/publication/322617800_New_Deep_Learning_Algorithms_beyond_Backpropagation_IBM_Developers_UnConference_2018_Zurich
https://www.researchgate.net/publication/263656317_Advances_in_Machine_Learning_Research
3,678 | What should I do when my neural network doesn't generalize well? | First of all, let's clarify what "my neural network doesn't generalize well" means and how it differs from saying "my neural network doesn't perform well".
When training a Neural Network, you are constantly evaluating it on a set of labelled data called the training set. If your model isn't working properly and doesn't appear to learn from the training set, you don't have a generalization issue yet, instead please refer to this post. However, if your model is achieving a satisfactory performance on the training set, but cannot perform well on previously unseen data (e.g. validation/test sets), then you do have a generalization problem.
Why is your model not generalizing properly?
The most important part is understanding why your network doesn't generalize well. High-capacity Machine Learning models have the ability to memorize the training set, which can lead to overfitting.
Overfitting is the state where an estimator has begun to learn the training set so well that it has started to model the noise in the training samples (besides all useful relationships).
For example, in the image below we can see how the blue line on the right has clearly overfit.
But why is this bad?
When attempting to evaluate our model on new, previously unseen data (i.e. validation/test set), the model's performance will be much worse than what we expect.
How to prevent overfitting?
In the beginning of the post I implied that the complexity of your model is what is actually causing the overfitting, as it is allowing the model to extract unnecessary relationships from the training set, that map its inherent noise. The easiest way to reduce overfitting is to essentially limit the capacity of your model. These techniques are called regularization techniques.
Parameter norm penalties. These add an extra term to the weight update function of each model, that is dependent on the norm of the parameters. This term's purpose is to counter the actual update (i.e. limit how much each weight can be updated). This makes the models more robust to outliers and noise. Examples of such regularizations are L1 and L2 regularization, which are used in the Lasso, Ridge and Elastic Net regressors.
Since each (fully connected) layer in a neural network functions much like a simple linear regression, these are used in Neural Networks. The most common use is to regularize each layer individually.
keras implementation.
Early stopping. This technique attempts to stop an estimator's training phase prematurely, at the point where it has learned to extract all meaningful relationships from the data, before beginning to model its noise. This is done by monitoring the validation loss (or a validation metric of your choosing) and terminating the training phase when this metric stops improving. This way we give the estimator enough time to learn the useful information but not enough to learn from the noise.
keras implementation.
Neural Network specific regularizations. Some examples are:
Dropout. Dropout is an interesting technique that works surprisingly well. Dropout is applied between two successive layers in a network. At each iteration a specified percentage of the connections (selected randomly), connecting the two layers, are dropped. This causes the subsequent layer to rely on all of its connections to the previous layer. (A short sketch in the R interface to Keras is given after this list.)
keras implementation
Transfer learning. This is especially used in Deep Learning. This is done by initializing the weights of your network to the ones of another network with the same architecture pre-trained on a large, generic dataset.
Other things that may limit overfitting in Deep Neural Networks are: Batch Normalization, which can act as a regularizer and in some cases (e.g. inception modules) works as well as dropout; relatively small sized batches in SGD, which can also prevent overfitting; adding small random noise to weights in hidden layers.
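A minimal added sketch of how these pieces look in the R interface to Keras (layer sizes, rates, and the penalty strength are arbitrary illustrative choices, not recommendations from the answer):
library(keras)
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(20),
              kernel_regularizer = regularizer_l2(l = 0.01)) %>%  # L2 parameter norm penalty
  layer_batch_normalization() %>%                                 # batch normalization
  layer_dropout(rate = 0.5) %>%                                   # dropout between successive layers
  layer_dense(units = 1)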
Another way of preventing overfitting, besides limiting the model's capacity, is by improving the quality of your data. The most obvious choice would be outlier/noise removal, however in practice their usefulness is limited. A more common way (especially in image-related tasks) is data augmentation. Here we attempt to randomly transform the training examples so that while they appear to the model to be different, they convey the same semantic information (e.g. left-right flipping on images).
Data augmentation overview
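For instance, a small image-augmentation setup in the R interface to Keras (an added sketch; the transformation ranges are arbitrary and should be tuned per task, as discussed in the practical suggestions below):
# with the keras package loaded as in the sketch above
datagen <- image_data_generator(rotation_range = 15,
                                width_shift_range = 0.1,
                                height_shift_range = 0.1,
                                horizontal_flip = TRUE)  # mirror left-right; avoid for digits, as noted below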
Practical suggestions:
By far the most effective regularization technique is dropout, meaning that it should be the first one you use. However, you don't need to (and probably shouldn't) place dropout everywhere! The layers most prone to overfitting are the Fully Connected (FC) layers, because they contain the most parameters. Dropout should be applied to these layers (impacting their connections to the next layer).
Batch normalization, besides having a regularization effect aids your model in several other ways (e.g. speeds up convergence, allows for the use of higher learning rates). It too should be used in FC layers.
As mentioned previously, it also may be beneficial to stop your model earlier in the training phase than scheduled. The problem with early stopping is that there is no guarantee that, at any given point, the model won't start improving again. A more practical approach than early stopping is storing the weights of the model that achieve the best performance on the validation set. Be cautious, however, as this is not an unbiased estimate of the performance of your model (just better than the training set). You can also overfit on the validation set. More on that later. (A short callback-based sketch is given at the end of this answer.)
keras implementation
In some applications (e.g. image related tasks), it is highly recommended to follow an already established architecture (e.g. VGG, ResNet, Inception), that you can find ImageNet weights for. The generic nature of this dataset, allows the features to be in turn generic enough to be used for any image related task. Besides being robust to overfitting this will greatly reduce the training time.
Another use of the same concept is the following: if your task doesn't have much data, but you can find another similar task that does, you can use transfer learning to reduce overfitting. First train your network for the task that has the larger dataset and then attempt to fine-tune the model to the one you initially wanted. The initial training will, in most cases, make your model more robust to overfitting.
Data augmentation. While it always helps to have a larger dataset, data augmentation techniques do have their shortcomings. More specifically, you have to be careful not to augment too strongly, as this might ruin the semantic content of the data. For example in image augmentation, if you translate/shift/scale or adjust the brightness/contrast of the image too much, you'll lose much of the information it contains. Furthermore, augmentation schemes need to be implemented for each task in an ad-hoc fashion (e.g. in handwritten digit recognition the digits are usually aligned and shouldn't be rotated too much; also they shouldn't be flipped in any direction, as they aren't horizontally/vertically symmetric. Same goes for medical images).
In short, be careful not to produce unrealistic images through data augmentation. Moreover, an increased dataset size will require a longer training time. Personally, I start considering using data augmentation when I see that my model is reaching near $0$ loss on the training set.
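An added sketch of the early-stopping and best-weights suggestions above, using Keras callbacks in R (the data here are random placeholders and the architecture is arbitrary):
library(keras)
x <- matrix(rnorm(1000 * 20), ncol = 20)   # placeholder data, for illustration only
y <- rnorm(1000)
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(20)) %>%
  layer_dense(units = 1)
model %>% compile(optimizer = "adam", loss = "mse")
model %>% fit(
  x, y, epochs = 100, validation_split = 0.2,
  callbacks = list(
    callback_early_stopping(monitor = "val_loss", patience = 5),            # stop when val_loss stalls
    callback_model_checkpoint("best_weights.h5", monitor = "val_loss",
                              save_best_only = TRUE)                        # keep the best validation weights
  )
)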
What should I do when my neural network doesn't generalize well?
First of all, let's mention what does "my neural network doesn't generalize well" mean and what's the difference with saying "my neural network doesn't perform well".
When training a Neural Network, you are constantly evaluating it on a set of labelled data called the training set. If your model isn't working properly and doesn't appear to learn from the training set, you don't have a generalization issue yet, instead please refer to this post. However, if your model is achieving a satisfactory performance on the training set, but cannot perform well on previously unseen data (e.g. validation/test sets), then you do have a generalization problem.
Why is your model not generalizing properly?
The most important part is understanding why your network doesn't generalize well. High-capacity Machine Learning models have the ability to memorize the training set, which can lead to overfitting.
Overfitting is the state where an estimator has begun to learn the training set so well that it has started to model the noise in the training samples (besides all useful relationships).
For example, in the image below we can see how the blue on the right line has clearly overfit.
But why is this bad?
When attempting to evaluate our model on new, previously unseen data (i.e. validation/test set), the model's performance will be much worse than what we expect.
How to prevent overfitting?
In the beginning of the post I implied that the complexity of your model is what is actually causing the overfitting, as it is allowing the model to extract unnecessary relationships from the training set, that map its inherent noise. The easiest way to reduce overfitting is to essentially limit the capacity of your model. These techniques are called regularization techniques.
Parameter norm penalties. These add an extra term to the weight update function of each model, that is dependent on the norm of the parameters. This term's purpose is to counter the actual update (i.e. limit how much each weight can be updated). This makes the models more robust to outliers and noise. Examples of such regularizations are L1 and L2 regularizations, which can be found on the Lasso, Ridge and Elastic Net regressors.
Since each (fully connected) layer in a neural network functions much like a simple linear regression, these are used in Neural Networks. The most common use is to regularize each layer individually.
keras implementation.
Early stopping. This technique attempts to stop an estimator's training phase prematurely, at the point where it has learned to extract all meaningful relationships from the data, before beginning to model its noise. This is done by monitoring the validation loss (or a validation metric of your choosing) and terminating the training phase when this metric stops improving. This way we give the estimator enough time to learn the useful information but not enough to learn from the noise.
keras implementation.
Neural Network specific regularizations. Some examples are:
Dropout. Dropout is an interesting technique that works surprisingly well. Dropout is applied between two successive layers in a network. At each iteration a specified percentage of the connections (selected randomly), connecting the two layers, are dropped. This causes the subsequent layer rely on all of its connections to the previous layer.
keras implementation
Transfer learning. This is especially used in Deep Learning. This is done by initializing the weights of your network to the ones of another network with the same architecture pre-trained on a large, generic dataset.
Other things that may limit overfitting in Deep Neural Networks are: Batch Normalization, which can act as a regulizer and in some cases (e.g. inception modules) works as well as dropout; relatively small sized batches in SGD, which can also prevent overfitting; adding small random noise to weights in hidden layers.
Another way of preventing overfitting, besides limiting the model's capacity, is by improving the quality of your data. The most obvious choice would be outlier/noise removal, however in practice their usefulness is limited. A more common way (especially in image-related tasks) is data augmentation. Here we attempt randomly transform the training examples so that while they appear to the model to be different, they convey the same semantic information (e.g. left-right flipping on images).
Data augmentation overview
Practical suggestions:
By far the most effective regularization technique is dropout, meaning that it should be the first you should use. However, you don't need to (and probably shouldn't) place dropout everywhere! The most prone layers to overfitting are the Fully Connected (FC) layers, because they contain the most parameters. Dropout should be applied to these layers (impacting their connections to the next layer).
Batch normalization, besides having a regularization effect aids your model in several other ways (e.g. speeds up convergence, allows for the use of higher learning rates). It too should be used in FC layers.
As mentioned previously it also may be beneficial to stop your model earlier in the training phase than scheduled. The problem with early stopping is that there is no guarantee that, at any given point, the model won't start improving again. A more practical approach than early stopping is storing the weights of the model that achieve the best performance on the validation set. Be cautious, however, as this is not an unbiased estimate of the performance of your model (just better than the training set). You can also overfit on the validation set. More on that later.
keras implementation
In some applications (e.g. image related tasks), it is highly recommended to follow an already established architecture (e.g. VGG, ResNet, Inception), that you can find ImageNet weights for. The generic nature of this dataset, allows the features to be in turn generic enough to be used for any image related task. Besides being robust to overfitting this will greatly reduce the training time.
Another use of the similar concept is the following: if your task doesn't have much data, but you can find another similar task that does, you can use transfer learning to reduce overfitting. First train your network for the task that has the larger dataset and then attempt to fine-tune the model to the one you initially wanted. The initial training will, in most cases, make your model more robust to overfitting.
Data augmentation. While it always helps to have a larger dataset, data augmentation techniques do have their shortcomings. More specifically, you have to be careful not to augment too strongly, as this might ruin the semantic content of the data. For example in image augmentation if you translate/shift/scale or adjust the brighness/contrast the image too much you'll lose much of the information it contains. Furthermore, augmentation schemes need to be implemented for each task in an ad-hoc fashion (e.g. in handwritten digit recognition the digits are usually aligned and shouldn't be rotated too much; also they shouldn't be flipped in any direction, as they aren't horizontally/vertically symetric. Same goes for medical images).
In short be careful not to produce non realistic images through data augmentation. Moreover, an increased dataset size will require a longer training time. Personally, I start considering using data augmentation when I see that my model is reaching near $0$ loss on the training set. | What should I do when my neural network doesn't generalize well?
First of all, let's mention what does "my neural network doesn't generalize well" mean and what's the difference with saying "my neural network doesn't perform well".
When training a Neural Network, y |
3,679 | What should I do when my neural network doesn't generalize well? | There is plenty of empirical evidence that deep enough neural networks can memorize random labels on huge datasets (Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, "Understanding deep learning requires rethinking generalization"). Thus in principle by getting a big enough NN we can always reduce the training error to extremely small values, limited in practice by numerical accuracy, no matter how meaningless the task.
Things are quite different for the generalization error. We cannot be sure that for each learning problem, there exists a learnable NN model which can produce a generalization error as low as desired. For this reason the first step is to
1. Set your expectations correctly
Find a reputable reference which tells you that there exists an architecture which can reach the generalization error you're looking for, on your data set or on the most similar one for which you can find references. For example, look here
What are the current state-of-the-art convolutional neural networks?
to find current (at the time of the answers) SOTA (State Of The Art) performance for CNNs on various tasks. It's a good idea to try to reproduce such results on these reference data sets, before you train on your own data set, as a test that all your infrastructure is properly in place.
2. Make sure your training procedure is flawless
All the checks described in the answers to question
What should I do when my neural network doesn't learn?
to make sure that your training procedure is OK are a prerequisite for successful reduction of the generalisation error (if your NN is not learning, it cannot learn to generalise). These checks include, among other things:
unit tests
dataset checks (have a look at a few random input/label samples for both the training set and test set and check that the labels are correct; check width and size of input images; shuffle samples in training/test set and see if it affects results; etc.)
randomisation tests
standardize your preprocessing and package versions
keep a logbook of numerical experiments
3. Try to get superconvergence
"Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates" by Leslie N. Smith and Nicholay Topin shows that in some cases the combination of large learning rates with the cyclical learning rate method of Leslie N. Smith acts as a regulariser, accelerating convergence by an order of magnitude and reducing the need for extensive regularisation. Thus this is a good thing to try before turning the regularisation up to the max (next point); a minimal sketch of the triangular schedule follows.
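Here is a minimal sketch of the triangular cyclical learning-rate schedule used in Smith's method; the base/maximum rates and the step size are placeholder values that you would normally pick with a learning-rate range test.
import numpy as np
def triangular_clr(iteration, step_size=2000, base_lr=1e-4, max_lr=1e-2):
    # Triangular cyclical learning rate: one value per training iteration.
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
# Inspect a few points of the schedule (it would be fed to the optimiser per batch).
lrs = [triangular_clr(i) for i in range(0, 8001, 1000)]
print(lrs)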
4. Setting your regularisation to the MAXXX
Regularisation often increases training time (bad), increases the training error and reduces the generalisation error (good), but too much regularisation can actually increase both errors (underfitting). For this reason, and because of the increase in training time, it's often better to introduce the various regularisation techniques one at a time, after you successfully managed to overfit the training set. Note that regularisation by itself doesn't necessarily imply your generalisation error will get smaller: the model must have a large enough capacity to achieve good generalisation properties. This often means that you need a sufficiently deep network, before you can see the benefits of regularisation.
The oldest regularisation methods are probably early stopping and weight decay. Some of the others:
reduce batch size: smaller batch sizes are usually associated with smaller generalisation error, so this is something to try. However, note that some dispute the usefulness of minibatches: in my experience, they help (as long as you don't have to use crazy small sizes such as $m=16$), but Elad Hoffer, Itay Hubara and Daniel Soudry ("Train longer, generalize better: closing the generalization gap in large batch training of neural networks") disagree. Note that if you use batch norm (see below), too small minibatches will be quite harmful.
use SGD rather than adaptive optimisers: this has already been covered by @shimao, so I only mention it for the sake of completeness
use dropout: if you use LSTMs, use standard dropout only for the input and output units of an LSTM layer. For the recurrent units (the gates) use recurrent dropout, as first shown by Yarin Gal in his Ph.D. thesis. However, if you use CNNs, dropout is used less frequently now. Instead, you tend to…
...use batch normalisation: the most recent CNN architectures eschew dropout in favour of batch normalisation. This could be just a fad, or it could be due to the fact that apparently dropout and batch normalisation don't play nice together (Xiang Li, Shuo Chen, Xiaolin Hu, Jian Yang, Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift). Since batch norm is more effective than dropout when you have huge data sets, this could be a reason why dropout has fallen out of favour for CNN architectures. If you use batch normalisation, verify that the distribution of weights and biases for each layer looks approximately standard normal. For RNNs, implementing batch norm is complicated: weight normalisation (Tim Salimans, Diederik P. Kingma, Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks) is a viable alternative.
use data augmentation: it also has a regularising effect. (A minimal sketch combining a few of the regularisers above follows this list.)
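The following Keras sketch combines several of the ingredients above: batch norm in the convolutional trunk, dropout only before the dense output, a small amount of weight decay, and plain SGD with momentum and a moderate batch size. The architecture and every numeric value are illustrative assumptions, not a recipe.
from tensorflow import keras
from tensorflow.keras import layers, regularizers
wd = 1e-4  # weight decay strength; illustrative value
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, padding="same", use_bias=False,
                  kernel_regularizer=regularizers.l2(wd)),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2D(64, 3, padding="same", use_bias=False,
                  kernel_regularizer=regularizers.l2(wd)),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                     # dropout only in the dense head
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=64, validation_split=0.1, epochs=30)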
5. Hyperparameter/architecture search
If nothing else helps, you will have to test multiple different hyperparameter settings (Bayesian Optimization may help here) or multiple different architectural changes (e.g. maybe in your GAN architecture and for the data set you're working on, batch norm only works in the generator, but when added to the discriminator too it makes things worse). Be sure to keep track of the results of these long and boring experiments in a well-ordered logbook.
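As a low-tech starting point before full Bayesian optimisation, here is a minimal random-search sketch; the search ranges and the build_and_train placeholder are assumptions you would replace with your own model and data.
import numpy as np
rng = np.random.default_rng(0)
def sample_config():
    # Log-uniform ranges are illustrative, not recommendations.
    return {
        "learning_rate": 10 ** rng.uniform(-5, -1),
        "weight_decay": 10 ** rng.uniform(-6, -2),
        "dropout": rng.uniform(0.0, 0.6),
    }
def build_and_train(config):
    # Placeholder: build the model, train it, return the validation error.
    return rng.random()  # stand-in result so the sketch runs end to end
results = []
for trial in range(20):
    config = sample_config()
    results.append((build_and_train(config), config))  # log every trial
best_error, best_config = min(results, key=lambda t: t[0])
print(best_error, best_config)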
PS: for a GAN it doesn't make much sense to talk about a generalization error; the above example was meant only as an indication that there's still a lot of alchemy in Deep Learning, and things that you would expect to work fine sometimes don't, or vice versa: something which worked OK many times suddenly craps out on you for a new data set.
3,680 | What should I do when my neural network doesn't generalize well? | Here is a list of commonly used regularization techniques which I've seen in the literature:
Using batch normalization, which is a surprisingly effective regularizer to the point where I rarely see dropout used anymore, because it is simply not necessary.
A small amount of weight decay.
Some more recent regularization techniques include Shake-shake ("Shake-Shake regularization" by Xavier Gastaldi) and Cutout ("Improved Regularization of Convolutional Neural Networks with Cutout" by Terrance DeVries and Graham W. Taylor). In particular, the ease with which Cutout can be implemented makes it very attractive (a minimal sketch follows this list). I believe these work better than dropout, but I'm not sure.
If possible, prefer fully convolutional architectures to architectures with fully connected layers. Compare VGG-16, which has 100 million parameters in a single fully connected layer, to Resnet-152, which has 10 times the number of layers and still fewer parameters.
Prefer SGD to other optimizers such as Rmsprop and Adam. It has been shown to generalize better. ("Improving Generalization Performance by Switching from Adam to SGD" by Nitish Shirish Keskar and Richard Socher)
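Since the list above highlights how easy Cutout is to implement, here is a minimal NumPy sketch of the idea (zero out one random square patch per training image); the patch size and batch shape are illustrative assumptions.
import numpy as np
def cutout(images, patch_size=8, rng=None):
    # Zero out one random patch_size x patch_size square per image (NHWC batch).
    rng = rng or np.random.default_rng()
    out = images.copy()
    n, h, w = images.shape[:3]
    for i in range(n):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
        x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
        out[i, y0:y1, x0:x1, ...] = 0.0
    return out
batch = np.random.rand(4, 32, 32, 3).astype("float32")
augmented = cutout(batch, patch_size=8)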
3,681 | What should I do when my neural network doesn't generalize well? | I feel that Djib2011 gives great points about automated methods, but they don't really tackle the underlying issue of how we know whether the method employed to reduce overfitting did its job. So as an important footnote to DeltaIV's answer, I wanted to include this based on recent research in the last 2 years. Overfitting for neural networks isn't just about the model over-memorizing; it's also about the model's inability to learn new things or deal with anomalies.
Detecting Overfitting in a Black Box Model: Interpretability of a model is directly tied to how well you can judge a model's ability to generalize. Thus many interpretability plots are methods of detecting overfitting and can tell you how well any of the methods suggested above are working. Interpretability plots detect it directly, especially if you compare the validation and test result plots. Chapters 5 and 6 of this unpublished book talk about recent advances in the detection of overfitting: Interpretable Modeling
Based on this book, I would like to mention three other methods of detecting and removing overfitting that might be obvious to some, but I personally find that people forget them too often. So I would like to emphasize them, if no one minds:
Feature Selection Detection: The fewer parameters and features your model has, the better. So if you only include the important ones of the 100 million (maybe you have 75 million instead), you will have a more generalizable model. The problem is that many neural networks are not perfect at feature selection, especially when #2 is present. Bootstrap or boosting fundamentally cannot fix both (only a version called the wild bootstrap can). In simpler terms, if you give your neural network junk data, then it's going to give you junk out. (The L2 normalization mentioned above is very good at helping with this.)
Detection and Dealing with Anomalies: The fewer "outliers", the more generalizable the model. By "outliers" we don't mean just outliers in the data. Outliers in the data (like the kind you see with a box plot) are too narrow a definition for neural networks. You also need to consider outliers in the error of a model, which is referred to as influence, as well as other anomalies. So detecting anomalies before you run your network is important. A neural net can be robust against one type of anomaly but not against all other types. Counter-example methods, criticism methods, adversarial example methods, and influence plots are great at helping you discover outliers and then figure out how to factor them in (i.e., change the parameters or even remove some of the data).
Stratified Sampling, Oversampling, and Undersampling based on statistical or ethical considerations: I wish I were an expert in under- and oversampling, but I am not; I do know about stratified sampling. Clustering important factors such as race, sex, and gender and then doing stratified sampling by cluster is vital to avoid overfitting when one considers big data. When doing image detection, stratified sampling in combination with clustering is legally required in some fields to avoid racial discrimination. The book linked above briefly talks about methods to do this. (A minimal stratified-split sketch follows.)
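As a small illustration of the stratified-sampling point above, here is a minimal scikit-learn sketch; the data and the group labels are made up.
import numpy as np
from sklearn.model_selection import train_test_split
# Hypothetical data: 1000 samples, 20 features, and a group label to stratify on.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)
group = np.random.choice(["A", "B", "C"], size=1000, p=[0.6, 0.3, 0.1])
# Stratifying on the group keeps the group proportions equal in train and test.
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.2, stratify=group, random_state=0)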
P.S. Should I include more links?
3,682 | What should I do when my neural network doesn't generalize well? | Reduce the number of parameters in the model.
The existing answers focus on different regularization strategies that can improve fit, given a model architecture that remains fixed (same configuration, number of layers, number of neurons in each layer).
However, the simplest and easiest step to reducing overfitting in a neural network is to reduce the number of parameters in the model. This can mean some combination of
fewer layers in the model; and
fewer parameters in each layer.
This can reduce overfitting because a model with a larger parameter count has a greater flexibility to fit the data. Intuitively, this is analogous to the simpler case of adding degrees of freedom to a linear model. A linear model with at least as many degrees of freedom as the number of observations can achieve a perfect fit to the training data, because it can interpolate between each training data point. However, this is unlikely to generalize well because, by perfectly interpolating the training data, the model has also fit to the noise in the target variable.
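To make the capacity point concrete, here is a minimal Keras sketch comparing the parameter counts of a wider and a narrower network on the same hypothetical input size; the layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers
def mlp(hidden_units):
    return keras.Sequential(
        [keras.Input(shape=(100,))]
        + [layers.Dense(u, activation="relu") for u in hidden_units]
        + [layers.Dense(1)]
    )
big = mlp([1024, 1024, 1024])   # high capacity, easier to overfit
small = mlp([64, 64])           # lower capacity
print(big.count_params(), small.count_params())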
From an overfitting perspective, the goal of adjusting the number of parameters in the model is to achieve the correct trade-off between achieving a good fit to the data and a fit that will generalize to new data.
3,683 | What does having "constant variance" in a linear regression model mean? | It means that when you plot the individual errors against the predicted values, the variance of the errors should be constant. See the red arrows in the picture below: the lengths of the red lines (a proxy for the variance) are the same.
3,684 | What does having "constant variance" in a linear regression model mean? | This is a place where I've found looking at some formulas helps, even for people with some math anxiety (I'm not suggesting that you do, necessarily). The simple linear regression model is this:
$$
Y=\beta_0+\beta_1X+\varepsilon \\
\text{where } \varepsilon\sim\mathcal N(0, \sigma^2_\varepsilon)
$$
What's important to note here is that this model explicitly states once you've estimated the meaningful information in the data (that's the "$\beta_0+\beta_1X$") there is nothing left over but white noise. Moreover, the errors are distributed as a Normal with a variance of $\sigma^2_\varepsilon$.
It's important to realize that $\sigma^2_\varepsilon$ is not a variable (although in junior high school level algebra, we would call it that). It doesn't vary. $X$ varies. $Y$ varies. The error term, $\varepsilon$, varies randomly; that is, it is a random variable. However, the parameters ($\beta_0,~\beta_1,~\sigma^2_\varepsilon)$ are placeholders for values we don't know--they don't vary. Instead, they are unknown constants. The upshot of this fact for this discussion is that no matter what $X$ is (i.e., what value is plugged in there), $\sigma^2_\varepsilon$ remains the same. In other words, the variance of the errors / residuals is constant. For the sake of contrast (and perhaps greater clarity), consider this model:
$$
Y=\beta_0+\beta_1X+\varepsilon \\
\text{where } \varepsilon\sim\mathcal N(0, f(X)) \\
~ \\
\text{where } f(X)=\exp(\gamma_0+\gamma_1 X) \\
\text{and }\gamma_1\ne 0
$$
In this case, we plug in a value for $X$ (starting on the third line), pass it through the function $f(X)$ and get the error variance that obtains at that exact value of $X$. Then we move through the rest of the equation as usual.
The above discussion should help with understanding the nature of the assumption; the question also asks about how to assess it. There are basically two approaches: formal hypothesis tests and examining plots. Tests for heteroscedasticity can be used if you have experimental-ish data (i.e., that only occur at fixed values of $X$) or an ANOVA. I discuss some such tests here: Why Levene test of equality of variances rather than F-ratio. However, I tend to think looking at plots is best. @Penquin_Knight has done a good job of showing what constant variance looks like by plotting the residuals of a model where homoscedasticity obtains against the fitted values. Heteroscedasticity can also possibly be detected in a plot of the raw data, or in a scale-location (also called spread-level) plot. R conveniently plots the latter for you with a call to plot.lm(model, which=3); it is the square root of the absolute values of the residuals against the fitted values, with a lowess curve helpfully overlaid. You want the lowess fit to be flat, not sloped.
Consider the plots below, which compare how homoscedastic vs. heteroscedastic data might look in these three different types of figures. Note the funnel shape for the upper two heteroscedastic plots, and the upward sloping lowess line in the last one.
For completeness, here is the code that I used to generate these data:
set.seed(5)
N = 500
b0 = 3
b1 = 0.4
s2 = 5
g1 = 1.5
g2 = 0.015
x = runif(N, min=0, max=100)
y_homo = b0 + b1*x + rnorm(N, mean=0, sd=sqrt(s2)) # constant error variance
y_hetero = b0 + b1*x + rnorm(N, mean=0, sd=sqrt(exp(g1 + g2*x))) # error variance grows with x
mod.homo = lm(y_homo~x)
mod.hetero = lm(y_hetero~x)
3,685 | Does it ever make sense to treat categorical data as continuous? | I will assume that a "categorical" variable actually stands for an ordinal variable; otherwise it doesn't make much sense to treat it as a continuous one, unless it's a binary variable (coded 0/1), as pointed out by @Rob. Then, I would say that the problem is not so much the way we treat the variable (although many models for categorical data analysis have been developed so far; see e.g. The analysis of ordered categorical data: An overview and a survey of recent developments from Liu and Agresti) as the underlying measurement scale we assume. My response will focus on this second point, although I will first briefly discuss the assignment of numerical scores to variable categories or levels.
By using a simple numerical recoding of an ordinal variable, you are assuming that the variable has interval properties (in the sense of the classification given by Stevens, 1946). From a measurement theory perspective (in psychology), this may often be too strong an assumption, but for a basic study (i.e. where a single item is used to express one's opinion about a daily activity with clear wording) any monotone scores should give comparable results. Cochran (1954) already pointed out that
any set of scores gives a valid test, provided that they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used. (p. 436)
(Many thanks to @whuber for reminding me about this in one of his comments, which led me to re-read Agresti's book, from which this citation comes.)
Actually, several tests implicitly treat such variables as interval scales: for example, the $M^2$ statistic for testing a linear trend (as an alternative to simple independence) is based on a correlational approach ($M^2=(n-1)r^2$, Agresti, 2002, p. 87).
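For concreteness, here is a minimal sketch of that linear trend statistic on a hypothetical 3x3 contingency table with equally spaced scores; the counts and scores are made up purely for illustration.
import numpy as np
from scipy.stats import chi2
# Hypothetical 3x3 table of counts; rows and columns are ordered categories.
table = np.array([[20, 10,  5],
                  [10, 15, 10],
                  [ 5, 10, 20]])
row_scores = np.array([1, 2, 3])   # equally spaced scores, as discussed above
col_scores = np.array([1, 2, 3])
# Expand the table into one (row score, column score) pair per observation.
x = np.repeat(np.repeat(row_scores, table.shape[1]), table.ravel())
y = np.repeat(np.tile(col_scores, table.shape[0]), table.ravel())
n = table.sum()
r = np.corrcoef(x, y)[0, 1]
M2 = (n - 1) * r ** 2              # M^2 = (n - 1) r^2
p_value = chi2.sf(M2, df=1)        # compare to a chi-square with 1 df
print(M2, p_value)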
Well, you can also decide to recode your variable on an irregular range, or aggregate some of its levels, but in this case strong imbalance between recoded categories may distort statistical tests, e.g. the aforementioned trend test.
A nice alternative for assigning distance between categories was already proposed by @Jeromy, namely optimal scaling.
Now, let's discuss the second point I made, that of the underlying measurement model. I always hesitate about adding the "psychometrics" tag when I see this kind of question, because the construction and analysis of measurement scales come under Psychometric Theory (see Nunnally and Bernstein, 1994, for a neat overview). I will not dwell on all the models that fall under Item Response Theory, and I kindly refer the interested reader to I. Partchev's tutorial, A visual guide to item response theory, for a gentle introduction to IRT, and to references (5-8) listed at the end for possible IRT taxonomies. Very briefly, the idea is that rather than assigning arbitrary distances between variable categories, you assume a latent scale and estimate the categories' locations on that continuum, together with individuals' ability or liability. A simple example is worth much mathematical notation, so let's consider the following item (coming from the EORTC QLQ-C30 health-related quality of life questionnaire):
Did you worry?
which is coded on a four-point scale, ranging from "Not at all" to "Very much". Raw scores are computed by assigning a score of 1 to 4. Scores on items belonging to the same scale can then be added together to yield a so-called scale score, which denotes one's rank on the underlying construct (here, a mental health component). Such summated scale scores are very practical because of scoring easiness (for the practitioner or nurse), but they are nothing more than a discrete (ordered) scale.
We can also consider that the probability of endorsing a given response category obeys some kind of logistic model, as described in I. Partchev's tutorial referred to above. Basically, the idea is that of a kind of threshold model (which leads to an equivalent formulation in terms of the proportional or cumulative odds models), and we model the odds of being in one response category rather than the preceding one, or the odds of scoring above a certain category, conditional on subjects' location on the latent trait. In addition, we may impose that response categories are equally spaced on the latent scale (this is the Rating Scale model), which is what we do by assigning regularly spaced numerical scores, or not (this is the Partial Credit model).
Clearly, we are not adding very much to Classical Test Theory, where ordinal variables are treated as numerical ones. However, we introduce a probabilistic model, where we assume a continuous scale (with interval properties) and where specific errors of measurement can be accounted for, and we can plug these factor scores into any regression model.
References
S S Stevens. On the theory of scales of measurement. Science, 103: 677-680, 1946.
W G Cochran. Some methods of strengthening the common $\chi^2$ tests. Biometrics, 10: 417-451, 1954.
J Nunnally and I Bernstein. Psychometric Theory. McGraw-Hill, 1994
Alan Agresti. Categorical Data Analysis. Wiley, 1990.
C R Rao and S Sinharay, editors. Handbook of Statistics, Vol. 26: Psychometrics. Elsevier Science B.V., The Netherlands, 2007.
A Boomsma, M A J van Duijn, and T A B Snijders. Essays on Item Response Theory. Springer, 2001.
D Thissen and L Steinberg. A taxonomy of item response models. Psychometrika, 51(4): 567β577, 1986.
P Mair and R Hatzinger. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. Journal of Statistical Software, 20(9), 2007.
3,686 | Does it ever make sense to treat categorical data as continuous? | If there are only two categories, then transforming them to (0,1) makes sense. In fact, this is commonly done where the resulting dummy variable is used in regression models.
If there are more than two categories, then I think it only makes sense if the data are ordinal, and then only in very specific circumstances. For example, if I am doing regression and fit a nonparametric nonlinear function to the ordinal-cum-numeric variable, I think that is ok. But if I use linear regression, then I am making very strong assumptions about the relative difference between consecutive values of the ordinal variable, and I'm usually reluctant to do that.
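To illustrate the difference in assumptions, here is a minimal statsmodels sketch that fits the same simulated ordinal predictor once as a numeric variable (one slope, equal spacing assumed) and once as dummy variables (one coefficient per level); the data are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
rng = np.random.default_rng(0)
df = pd.DataFrame({"level": rng.integers(1, 5, size=300)})   # ordinal codes 1..4
df["y"] = 2.0 * df["level"] ** 2 + rng.normal(scale=3.0, size=300)
fit_numeric = smf.ols("y ~ level", data=df).fit()     # assumes equal spacing
fit_dummy = smf.ols("y ~ C(level)", data=df).fit()    # no spacing assumption
print(fit_numeric.params)
print(fit_dummy.params)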
3,687 | Does it ever make sense to treat categorical data as continuous? | It is common practice to treat ordered categorical variables with many categories as continuous. Examples of this:
Number of items correct on a 100 item test
A summated psychological scale (e.g., that is the mean of 10 items each on a five point scale)
And by "treating as continuous" I mean including the variable in a model that assumes a continuous random variable (e.g., as a dependent variable in a linear regression). I suppose the issue is how many scale points are required for this to be a reasonable simplifying assumption.
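As a tiny illustration of the second example above, here is a sketch that builds a summated scale from simulated five-point items and then treats it as a continuous outcome in a linear regression; all numbers are made up.
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(200, 10))   # 200 respondents, 10 five-point items
scale_score = items.mean(axis=1)             # summated (mean) scale score
x = rng.normal(size=200)                     # some hypothetical predictor
fit = sm.OLS(scale_score, sm.add_constant(x)).fit()
print(fit.params)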
A few other thoughts:
Polychoric correlations attempt to model the relationship between two ordinal variables in terms of assumed latent continuous variables.
Optimal scaling allows you to develop models where the scaling of a categorical variable is developed in a data driven way whilst respecting whatever scale constraints you impose (e.g., ordinality). For a good introduction see De Leeuw and Mair (2009)
References
De Leeuw, J., & Mair, P. (2009). Gifi methods for optimal scaling in R: The package homals. Journal of Statistical Software, forthcoming, 1-30. PDF
Number of items correct on a 100 item test
A summated psychological scale (e.g., tha | Does it ever make sense to treat categorical data as continuous?
It is common practice to treat ordered categorical variables with many categories as continuous. Examples of this:
Number of items correct on a 100 item test
A summated psychological scale (e.g., that is the mean of 10 items each on a five point scale)
And by "treating as continuous" I mean including the variable in a model that assumes a continuous random variable (e.g., as a dependent variable in a linear regression). I suppose the issue is how many scale points are required for this to be a reasonable simplifying assumption.
A few other thoughts:
Polychoric correlations attempt to model the relationship between two ordinal variables in terms of assumed latent continuous variables.
Optimal scaling allows you to develop models where the scaling of a categorical variable is developed in a data driven way whilst respecting whatever scale constraints you impose (e.g., ordinality). For a good introduction see De Leeuw and Mair (2009)
References
De Leeuw, J., & Mair, P. (2009). Gifi methods for optimal scaling in R: The package homals. Journal of Statistical Software, forthcoming, 1-30. PDF | Does it ever make sense to treat categorical data as continuous?
It is common practice to treat ordered categorical variables with many categories as continuous. Examples of this:
Number of items correct on a 100 item test
A summated psychological scale (e.g., tha |
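A small R sketch of the polychoric idea mentioned above, assuming the MASS and psych packages (with its polychoric() function) are available; the cut points, correlation, and sample size are arbitrary:
# Simulate two latent continuous variables, then observe only coarse
# ordinal versions of them, as the polychoric model assumes.
set.seed(1)
library(MASS)    # for mvrnorm
library(psych)   # for polychoric (assumed installed)
latent <- mvrnorm(500, mu = c(0, 0), Sigma = matrix(c(1, 0.6, 0.6, 1), 2))
x_ord <- cut(latent[, 1], breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)
y_ord <- cut(latent[, 2], breaks = c(-Inf, -1, 0, 1, Inf), labels = FALSE)
cor(x_ord, y_ord)                        # Pearson on the ordinal codes
polychoric(cbind(x_ord, y_ord))$rho      # estimate of the latent correlation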
3,688 | Does it ever make sense to treat categorical data as continuous? | A very simple example often overlooked that should lie within the experience of many readers concerns the marks or grades given to academic work. Often marks for individual assignments are in essence judgement-based ordinal measurements, even when as a matter of convention they are given as (say) percent marks or marks on a scale with maximum 5 (possibly with decimal points too). That is, a teacher may read through an essay or dissertation or thesis or paper and decide that it deserves 42%, or 4, or whatever. Even when marks are based on a detailed assessment scheme the scale is at root some distance from an interval or ratio measurement scale.
But then many institutions take the view that if you have enough of these marks or grades it is perfectly reasonable to average them (grade-point average, etc.) and even to analyse them in more detail. So at some point the ordinal measurements morph into a summary scale that is treated as if it were continuous.
Connoisseurs of irony will note that statistical courses in many Departments or Schools often teach that this is at best dubious and at worst wrong, all the while it is implemented as a University-wide procedure.
3,689 | Does it ever make sense to treat categorical data as continuous? | In an analysis of ranking by frequency, as with a Pareto chart and associated values (e.g. how many categories make up the top 80% of product faults).
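A short R sketch of that kind of Pareto-style ranking; the fault categories and counts are made up purely for illustration:
# Made-up fault counts per category.
faults <- c(A = 120, B = 80, C = 35, D = 20, E = 10, F = 5)
sorted <- sort(faults, decreasing = TRUE)
cum_share <- cumsum(sorted) / sum(sorted)
cum_share
# Number of categories needed to account for 80% of product faults:
which(cum_share >= 0.8)[1]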
3,690 | Does it ever make sense to treat categorical data as continuous? | I'm going to make the argument that treating a truly categorical, non-ordinal variable as continuous can sometimes make sense.
If you are building decision trees based on large datasets, it may be costly in terms of processing power and memory to convert categorical variables into dummy variables. Furthermore, some models (e.g. randomForest in R) cannot handle categorical variables with many levels.
In these cases, a tree-based model should be able to identify extremely important categories, EVEN IF they are coded as a continuous variable. A contrived example:
set.seed(42)
library(caret)
n <- 10000
a <- sample(1:100, n, replace=TRUE)
b <- sample(1:100, n, replace=TRUE)
e <- runif(n)
y <- 2*a + 1000*(b==7) + 500*(b==42) + 1000*e
dat1 <- data.frame(y, a, b)
dat2 <- data.frame(y, a, b=factor(b))
y is a continuous variable, a is a continuous variable, and b is a categorical variable. However, in dat1 b is treated as continuous.
Fitting a decision tree to these 2 datasets, we find that dat1 is slightly worse than dat2:
model1 <- train(y~., dat1, method='rpart')
model2 <- train(y~., dat2, method='rpart')
> min(model1$results$RMSE)
[1] 302.0428
> min(model2$results$RMSE)
[1] 294.1411
If you look at the 2 models, you will find that they are very similar, but model1 misses the importance of b==42:
> model1$finalModel
n= 10000
node), split, n, deviance, yval
* denotes terminal node
1) root 10000 988408000 614.0377
2) a< 42.5 4206 407731400 553.5374 *
3) a>=42.5 5794 554105700 657.9563
6) b>=7.5 5376 468539000 649.2613 *
7) b< 7.5 418 79932820 769.7852
14) b< 6.5 365 29980450 644.6897 *
15) b>=6.5 53 4904253 1631.2920 *
> model2$finalModel
n= 10000
node), split, n, deviance, yval
* denotes terminal node
1) root 10000 988408000 614.0377
2) b7< 0.5 9906 889387900 604.7904
4) a< 42.5 4165 364209500 543.8927 *
5) a>=42.5 5741 498526600 648.9707
10) b42< 0.5 5679 478456300 643.7210 *
11) b42>=0.5 62 5578230 1129.8230 *
3) b7>=0.5 94 8903490 1588.5500 *
However, model1 runs in about 1/10 of the time of model2:
> model1$times$everything
user system elapsed
4.881 0.169 5.058
> model2$times$everything
user system elapsed
45.060 3.016 48.066
You can of course tweak the parameters of the problem to find situations in which dat2 far outperforms dat1, or dat1 slightly outperforms dat2.
I am not advocating generally treating categorical variables as continuous, but I have found situations where doing so has greatly reduced the time it takes to fit my models, without decreasing their predictive accuracy.
3,691 | Does it ever make sense to treat categorical data as continuous? | A very nice summary of this topic can be found here.
"When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under sub-optimal conditions."
Mijke Rhemtulla, Patricia É. Brosseau-Liard, and Victoria Savalei
They investigate about 60 pages' worth of methods for doing so and provide insights as to when it's useful to do so, which approach to take, and what the strengths and weaknesses of each approach are for your specific situation. They don't cover all of them (as I'm learning, there seems to be a limitless number), but the ones they do cover, they cover well.
3,692 | Does it ever make sense to treat categorical data as continuous? | There is another case when it makes sense: when the data are sampled from continuous data (for example through an analogue-to-digital converter). For older instruments the ADCs would often be 10-bit, giving what is nominally 1024-category ordinal data, which can for most purposes be treated as real (though there will be some artifacts for values near the low end of the scale). Today ADCs are more commonly 16- or 24-bit. By the time you're talking about 65536 or 16777216 "categories", you really have no trouble treating the data as continuous.
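A quick R sketch of the quantisation point, using an arbitrary simulated signal (the function and names are illustrative only):
# Quantise a continuous signal to a given bit depth and compare with the original.
set.seed(1)
x <- rnorm(10000)
quantise <- function(x, bits) {
  levels <- 2^bits                            # 1024 for 10-bit, 65536 for 16-bit, ...
  r <- range(x)
  round((x - r[1]) / diff(r) * (levels - 1))  # integer codes 0 .. levels - 1
}
cor(x, quantise(x, 10))   # already essentially 1
cor(x, quantise(x, 16))   # indistinguishable from continuous for most purposes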
3,693 | Training a decision tree against unbalanced data | This is an interesting and very frequent problem in classification - not just in decision trees but in virtually all classification algorithms.
As you found empirically, a training set consisting of different numbers of representatives from either class may result in a classifier that is biased towards the majority class. When applied to a test set that is similarly imbalanced, this classifier yields an optimistic accuracy estimate. In an extreme case, the classifier might assign every single test case to the majority class, thereby achieving an accuracy equal to the proportion of test cases belonging to the majority class. This is a well-known phenomenon in binary classification (and it extends naturally to multi-class settings).
This is an important issue, because an imbalanced dataset may lead to inflated performance estimates. This in turn may lead to false conclusions about the significance with which the algorithm has performed better than chance.
The machine-learning literature on this topic has essentially developed three solution strategies.
You can restore balance on the training set by undersampling the large class or by oversampling the small class, to prevent bias from arising in the first place.
Alternatively, you can modify the costs of misclassification, as noted in a previous response, again to prevent bias.
An additional safeguard is to replace the accuracy by the so-called balanced accuracy. It is defined as the arithmetic mean of the class-specific accuracies, $\phi := \frac{1}{2}\left(\pi^+ + \pi^-\right),$ where $\pi^+$ and $\pi^-$ represent the accuracy obtained on positive and negative examples, respectively. If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to chance.
I would recommend considering at least two of the above approaches in conjunction. For example, you could oversample your minority class to prevent your classifier from acquiring a bias in favour of the majority class. Following this, when evaluating the performance of your classifier, you could replace the accuracy by the balanced accuracy. The two approaches are complementary. When applied together, they should help you both prevent your original problem and avoid false conclusions following from it.
I would be happy to post some additional references to the literature if you would like to follow up on this.
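As a concrete sketch of the third safeguard and of simple undersampling, here is a minimal R example; the confusion-matrix counts, the data-frame name dat, and the column name class are assumptions made up for illustration:
# Balanced accuracy from an illustrative 2x2 confusion matrix
# (rows = truth, columns = prediction).
cm <- matrix(c(40, 8,
                5, 2), nrow = 2, byrow = TRUE,
             dimnames = list(truth = c("neg", "pos"),
                             pred  = c("neg", "pos")))
acc_neg <- cm["neg", "neg"] / sum(cm["neg", ])
acc_pos <- cm["pos", "pos"] / sum(cm["pos", ])
balanced_accuracy <- (acc_pos + acc_neg) / 2
# Simple undersampling of the majority class in a data frame `dat`
# with a binary column `class`:
undersample <- function(dat, class_col = "class") {
  n_min <- min(table(dat[[class_col]]))
  do.call(rbind, lapply(split(dat, dat[[class_col]]),
                        function(d) d[sample(nrow(d), n_min), ]))
}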
3,694 | Training a decision tree against unbalanced data | The following four ideas may help you tackle this problem.
Select an appropriate performance measure and then fine-tune the hyperparameters of your model --e.g. regularization-- to attain satisfactory results on the cross-validation dataset and, once satisfied, test your model on the testing dataset. For these purposes, set apart 15% of your data to be used for cross-validation and 15% to be used for final testing. An established measure in Machine Learning, advocated by Andrew Ng, is the F1 statistic, defined as $F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$. Try to maximize this figure on the cross-validation dataset and make sure that the performance is stable on the testing dataset as well.
Use the 'prior' parameter in the decision tree to inform the algorithm of the prior frequency of the classes in the dataset, i.e. if there are 1,000 positives in a 1,000,000-row dataset, set prior = c(0.001, 0.999) (in R).
Use the 'weights' argument in the classification function you use to penalize the algorithm severely for misclassifications of the rare positive cases.
Use the misclassification-cost options available in some classification algorithms -- e.g. the loss matrix passed via the parms argument of rpart in R -- to define the relative costs of misclassifying positive and negative cases. You naturally should set a high cost for misclassifying the rare class.
I am not in favor of oversampling, since it introduces dependent observations in the dataset and this violates assumptions of independence made both in Statistics and Machine Learning.
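A sketch of ideas 2 and 4 with rpart in R; the data are simulated and the specific prior and loss values are only illustrative, not a recommendation:
library(rpart)
# Simulated imbalanced two-class data (roughly 1% positives).
set.seed(1)
n <- 10000
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- factor(ifelse(runif(n) < plogis(-5 + 2 * x1 + x2), "yes", "no"),
             levels = c("no", "yes"))
dat <- data.frame(y, x1, x2)
# parms$prior sets the assumed class frequencies; parms$loss is the
# misclassification-cost matrix (rows = true class, columns = predicted class),
# here penalising a missed "yes" 20 times more than a false alarm.
# In practice one would typically use either the prior or the loss, not both.
fit <- rpart(y ~ x1 + x2, data = dat, method = "class",
             parms = list(prior = c(no = 0.5, yes = 0.5),
                          loss  = matrix(c(0, 1,
                                           20, 0), nrow = 2, byrow = TRUE)))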
3,695 | Training a decision tree against unbalanced data | Adding to @Kay's answer, 1st solution strategy:
Synthetic Minority Oversampling Technique (SMOTE) usually does better than under- or over-sampling in my experience, as it creates something of a compromise between the two. It creates synthetic samples of the minority class using the data points plotted in the multivariate predictor space, and it more or less takes midpoints between adjacent points in that space to create new synthetic points, and hence balances both class sizes. (Not sure about the midpoints; details of the algorithm here.)
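A minimal, hand-rolled R sketch of the interpolation idea (not the full SMOTE algorithm, which samples a random point along the segment to one of the k nearest minority neighbours); all names are illustrative:
# Generate synthetic minority points by interpolating between a minority
# observation and one of its nearest minority-class neighbours.
smote_sketch <- function(X_min, dup_size = 2, k = 5) {
  d <- as.matrix(dist(X_min))
  diag(d) <- Inf
  out <- vector("list", dup_size * nrow(X_min))
  idx <- 1
  for (r in seq_len(dup_size)) {
    for (i in seq_len(nrow(X_min))) {
      nn <- order(d[i, ])[sample(k, 1)]   # pick one of the k nearest neighbours
      lambda <- runif(1)                  # random point on the connecting segment
      out[[idx]] <- X_min[i, ] + lambda * (X_min[nn, ] - X_min[i, ])
      idx <- idx + 1
    }
  }
  do.call(rbind, out)
}
# Example with made-up minority-class predictors:
X_min <- matrix(rnorm(40), ncol = 2)
synthetic <- smote_sketch(X_min, dup_size = 2, k = 3)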
3,696 | Training a decision tree against unbalanced data | I gave an answer in a recent topic:
What we do is pick a sample with different proportions. In the aforementioned example, that would be 1000 cases of "YES" and, for instance, 9000 "NO" cases. This approach gives more stable models. However, it has to be tested on a real sample (the one with 1,000,000 rows).
Not only does it give a more stable approach, but the models are generally better as far as measures such as lift are concerned.
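A short R sketch of drawing such a rebalanced training sample; the data frame, column name, and class proportions are made up for illustration:
# Simulate a large imbalanced dataset.
set.seed(1)
big <- data.frame(target = sample(c("YES", "NO"), 1e6, replace = TRUE,
                                  prob = c(0.01, 0.99)),
                  x = rnorm(1e6))
yes_rows <- which(big$target == "YES")
no_rows  <- which(big$target == "NO")
# Draw 1000 YES cases and 9000 NO cases for training.
train <- big[c(sample(yes_rows, 1000), sample(no_rows, 9000)), ]
table(train$target)
# Any model fitted on `train` should still be evaluated on data with the
# original class proportions (the full 1,000,000 rows).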
3,697 | Training a decision tree against unbalanced data | My follow-up on the 3 approaches @Kay mentioned above is that to deal with unbalanced data, no matter whether you use undersampling/oversampling or a weighted cost function, you are shifting your fit in the original feature space relative to the original data. So "undersampling/oversampling" and "weighted cost" are essentially the same in terms of results.
(I do not know how to ping @Kay.) I think what @Kay means by "balanced accuracy" is only a way of evaluating a model; it has nothing to do with the model itself. However, in order to count $\pi^+$ and $\pi^-$, you will have to decide a threshold value for the classification. I hope more detail can be provided on how the confusion matrix {40, 8, 5, 2} was obtained.
In real life, most of the cases I meet involve unbalanced data, so I choose the cutoff myself instead of using the default 0.5 that suits balanced data. I find it more realistic to use the F1 score mentioned by the other author to determine the threshold and to use it when evaluating the model.
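As a sketch of that threshold choice, here is a toy R example that picks the cutoff maximising the F1 score over a grid of thresholds; the logistic model and all names are illustrative, not from the original post:
# Simulate an imbalanced binary outcome and get predicted probabilities.
set.seed(1)
n <- 5000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-3 + 2 * x))
p_hat <- fitted(glm(y ~ x, family = binomial))
# F1 score at a given probability threshold.
f1_at <- function(threshold) {
  pred <- as.integer(p_hat >= threshold)
  tp <- sum(pred == 1 & y == 1)
  fp <- sum(pred == 1 & y == 0)
  fn <- sum(pred == 0 & y == 1)
  if (tp == 0) return(0)
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  2 * precision * recall / (precision + recall)
}
thresholds <- seq(0.05, 0.95, by = 0.01)
best <- thresholds[which.max(sapply(thresholds, f1_at))]
best   # often differs noticeably from the default 0.5 for imbalanced data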
3,698 | What is the variance of the weighted mixture of two gaussians? | The variance is the second moment minus the square of the first moment, so it suffices to compute moments of mixtures.
In general, given distributions with PDFs $f_i$ and constant (non-random) weights $p_i$, the PDF of the mixture is
$$f(x) = \sum_i{p_i f_i(x)},$$
from which it follows immediately for any moment $k$ that
$$\mu^{(k)} = \mathbb{E}_{f}[x^k] = \sum_i{p_i \mathbb{E}_{f_i}[x^k]} = \sum_i{p_i \mu_i^{(k)}}.$$
I have written $\mu^{(k)}$ for the $k^{th}$ moment of $f$ and $\mu_i^{(k)}$ for the $k^{th}$ moment of $f_i$.
Using these formulae, the variance can be written
$$\text{Var}(f) = \mu^{(2)} - \left(\mu^{(1)}\right)^2 = \sum_i{p_i \mu_i^{(2)}} - \left(\sum_i{p_i \mu_i^{(1)}}\right)^2.$$
Equivalently, if the variances of the $f_i$ are given as $\sigma^2_i$, then $\mu^{(2)}_i = \sigma^2_i + \left(\mu^{(1)}_i\right)^2$, enabling the variance of the mixture $f$ to be written in terms of the variances and means of its components as
$$\eqalign{
\text{Var}(f) &= \sum_i{p_i \left(\sigma^2_i + \left(\mu^{(1)}_i\right)^2\right)} - \left(\sum_i{p_i \mu_i^{(1)}}\right)^2 \\
&= \sum_i{p_i \sigma^2_i} + \sum_i{p_i\left(\mu_i^{(1)}\right)^2} - \left(\sum_{i}{p_i \mu_i^{(1)}}\right)^2.
}$$
In words, this is the (weighted) average variance plus the average squared mean minus the square of the average mean. Because squaring is a convex function, Jensen's Inequality asserts that the average squared mean can be no less than the square of the average mean. This allows us to understand the formula as stating the variance of the mixture is the mixture of the variances plus a non-negative term accounting for the (weighted) dispersion of the means.
In your case the variance is
$$p_A \sigma_A^2 + p_B \sigma_B^2 + \left[p_A\mu_A^2 + p_B\mu_B^2 - (p_A \mu_A + p_B \mu_B)^2\right].$$
We can interpret this as a weighted mixture of the two variances, $p_A\sigma_A^2 + p_B\sigma_B^2$, plus a (necessarily non-negative) correction term to account for the shifts of the individual means relative to the overall mixture mean.
The utility of this variance in interpreting data, such as given in the question, is doubtful, because the mixture distribution will not be Normal (and may depart substantially from it, to the extent of exhibiting bimodality).
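A quick Monte Carlo check of the two-component formula in R; the parameter values below are arbitrary:
set.seed(1)
pA <- 0.3; pB <- 0.7
muA <- 0;  muB <- 5
sdA <- 1;  sdB <- 2
# Theoretical mixture variance from the formula above.
theory <- pA * sdA^2 + pB * sdB^2 +
  (pA * muA^2 + pB * muB^2 - (pA * muA + pB * muB)^2)
# Simulation: draw the component first, then the Normal.
n <- 1e6
comp <- rbinom(n, 1, pB)                        # 1 = component B
x <- rnorm(n, mean = ifelse(comp == 1, muB, muA),
              sd   = ifelse(comp == 1, sdB, sdA))
c(theory = theory, simulated = var(x))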
3,699 | What is the variance of the weighted mixture of two gaussians? | The solution of whuber is perfect, but it seems that something is missing to connect this result with the LTV (law of total variance). The previous result
$$\sigma^2=p_A \sigma_A^2+p_B \sigma_B^2+p_A \mu_A^2+p_B \mu_B^2-\mu^2$$
can be rewritten taking into account that $2p_A\mu_A\mu +2p_B\mu_B\mu=2\mu(p_A\mu_A+p_B\mu_B)=2\mu^2$, so
$$\sigma^2=p_A \sigma_A^2+p_B \sigma_B^2+p_A \mu_A^2+p_B \mu_B^2+\mu^2 -2p_A\mu_A\mu -2p_B\mu_B\mu$$
and finally
$$\sigma^2=p_A \sigma_A^2+p_B \sigma_B^2+p_A (\mu_A - \mu)^2+p_B(\mu_B-\mu)^2$$
which is the typical LTV expression that we are used to seeing.
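For reference, this is exactly the law of total variance with the mixture component label $Z\in\{A,B\}$ as the conditioning variable (the underbraces, added here for clarity, simply identify the terms):
$$\operatorname{Var}(Y)=\underbrace{\operatorname{E}\left[\operatorname{Var}(Y\mid Z)\right]}_{p_A\sigma_A^2+p_B\sigma_B^2}+\underbrace{\operatorname{Var}\left(\operatorname{E}[Y\mid Z]\right)}_{p_A(\mu_A-\mu)^2+p_B(\mu_B-\mu)^2}.$$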
3,700 | What is the variance of the weighted mixture of two gaussians? | The solution of whuber is perfect. I just want to add that the term in the square brackets has another nice and simple expression, so
$$\sigma^2=p_A\sigma_A^2+p_B\sigma_B^2+p_Ap_B(\mu_A-\mu_B)^2.$$
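As a quick check (added here for completeness), expanding the square and using $p_A+p_B=1$ shows that the bracketed term from the earlier answer indeed collapses to this form:
$$p_A\mu_A^2+p_B\mu_B^2-(p_A\mu_A+p_B\mu_B)^2=p_A(1-p_A)\mu_A^2+p_B(1-p_B)\mu_B^2-2p_Ap_B\mu_A\mu_B=p_Ap_B(\mu_A-\mu_B)^2.$$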