Dataset columns: idx (int64, 1 to 56k), question (string, lengths 15 to 155), answer (string, lengths 2 to 29.2k), question_cut (string, lengths 15 to 100), answer_cut (string, lengths 2 to 200), conversation (string, lengths 47 to 29.3k), conversation_cut (string, lengths 47 to 301).
4,801
How to assess the similarity of two histograms?
As David's answer points out, the chi-squared test is necessary for binned data, as the KS test assumes continuous distributions. Regarding why the KS test is inappropriate (naught101's comment), there has been some discussion of the issue in the applied statistics literature that is worth raising here.

An amusing exchange began with the claim (García-Berthou and Alcaraz, 2004) that one third of Nature papers contain statistical errors. However, a subsequent paper (Jeng, 2006, "Error in statistical tests of error in statistical tests" -- perhaps my all-time favorite paper title) showed that García-Berthou and Alcaraz (2004) used KS tests on discrete data, leading to their reporting inaccurate p-values in their meta-study. The Jeng (2006) paper provides a nice discussion of the issue, even showing that one can modify the KS test to work for discrete data.

In this specific case, the distinction boils down to the difference between a uniform distribution of the trailing digit on [0,9], $$ P(x) = \frac{1}{9},\ (0 \leq x \leq 9) $$ (in the incorrect KS test) and a comb distribution of delta functions, $$ P(x) = \frac{1}{10}\sum_{j=0}^9 \delta(x-j) $$ (in the correct, modified form). As a result of the original error, García-Berthou and Alcaraz (2004) incorrectly rejected the null, while the chi-squared and modified KS tests do not.

In any case, the chi-squared test is the standard choice in this scenario, even if KS can be modified to work here.
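As a concrete illustration of the chi-squared route (this sketch is not part of the original answer; the bin counts are made up), one standard way to compare two histograms over the same bins is a test of homogeneity via scipy.stats.chi2_contingency:

import numpy as np
from scipy.stats import chi2_contingency

# Two histograms over the same bins (raw counts, not densities).
counts_a = np.array([12, 45, 80, 50, 13])
counts_b = np.array([10, 39, 90, 48, 13])

# Treat the two histograms as rows of a 2 x nbins contingency table and
# ask whether the bin proportions differ between the two samples.
stat, p, dof, expected = chi2_contingency(np.vstack([counts_a, counts_b]))
print(stat, p, dof)

A large p-value here means the binned samples are consistent with having been drawn from the same underlying distribution.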
4,802
Dynamic Time Warping Clustering
Do not use k-means for time series. DTW is not minimized by the mean; k-means may not converge, and even if it converges it will not yield a very good result. The mean is a least-squares estimator on the coordinates: it minimizes variance, not arbitrary distances, and k-means is designed around exactly that property.

Assume you have two time series: two sine waves of the same frequency and a rather long sampling period, but offset by $\pi$. Since DTW does time warping, it can align them so they match perfectly, except for the beginning and end, so DTW will assign a rather small distance to these two series. However, if you compute the mean of the two series, it will be a flat 0 - they cancel out. The mean does not do dynamic time warping, and loses all the value that DTW got. On such data, k-means may fail to converge, and the results will be meaningless. K-means really should only be used with variance (= squared Euclidean), or some cases that are equivalent (like cosine on L2-normalized data, where the squared Euclidean distance equals $2 - 2\cdot$ cosine similarity).

Instead, compute a distance matrix using DTW, then run hierarchical clustering such as single-link. In contrast to k-means, the series may even have different lengths.
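A minimal sketch of that recipe (mine, not the original answer's): a from-scratch DTW and SciPy's single-link hierarchical clustering on the resulting distance matrix. The toy series, the noise level and the cluster count are made up for illustration.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    # Plain O(len(a) * len(b)) dynamic-programming DTW with squared-difference cost.
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

# Toy data: series of different lengths, which k-means could not even represent.
rng = np.random.default_rng(0)
series = []
for k in range(6):
    n = int(rng.integers(80, 120))
    t = np.linspace(0, 4 * np.pi, n)
    base = np.sin(t) if k < 3 else np.sign(np.sin(t))   # three sines, three square waves
    series.append(base + 0.1 * rng.normal(size=n))

# Pairwise DTW distance matrix, then single-link hierarchical clustering.
n_series = len(series)
D = np.zeros((n_series, n_series))
for i in range(n_series):
    for j in range(i + 1, n_series):
        D[i, j] = D[j, i] = dtw(series[i], series[j])

Z = linkage(squareform(D), method="single")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # the sines and the square waves should typically end up in separate clusters

Because the clustering only consumes the precomputed distance matrix, any DTW variant (windowed, multivariate, normalized) can be swapped in without touching the clustering step.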
4,803
Dynamic Time Warping Clustering
Yes, you can use the DTW approach for classification and clustering of time series. I've compiled the following resources, which are focused on this very topic (I've recently answered a similar question, but not on this site, so I'm copying the contents here for everybody's convenience):
- UCR Time Series Classification/Clustering: main page, software page and corresponding paper
- Time Series Classification and Clustering with Python: a blog post
- Capital Bikeshare: Time Series Clustering: another blog post
- Time Series Classification and Clustering: ipython notebook
- Dynamic Time Warping using rpy and Python: another blog post
- Mining Time-series with Trillions of Points: Dynamic Time Warping at Scale: another blog post
- Time Series Analysis and Mining in R (to add R to the mix): yet another blog post
And, finally, two tools implementing/supporting DTW, to top it off: an R package and a Python module.
4,804
Dynamic Time Warping Clustering
A recent method, DTW Barycenter Averaging (DBA), has been proposed by Petitjean et al. to average time series. In another paper they showed, both empirically and theoretically, how it can be used to cluster time series with k-means. An implementation is provided on GitHub by the authors (link to code).
[1] F. Petitjean, G. Forestier, G. I. Webb, A. E. Nicholson, Y. Chen and E. Keogh, "Dynamic Time Warping Averaging of Time Series Allows Faster and More Accurate Classification," 2014 IEEE International Conference on Data Mining, Shenzhen, 2014.
[2] F. Petitjean, P. Gançarski, "Summarizing a set of time series by averaging: From Steiner sequence to compact multiple alignment," Theoretical Computer Science, Volume 414, Issue 1, 2012.
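For intuition only, here is a minimal sketch of the core DBA update as I understand it from the Petitjean papers; the authors' released code is the reference implementation, and this version omits their refinements. One iteration aligns every series to the current average with DTW and then replaces each point of the average by the mean of the points aligned to it. All names and toy series below are mine.

import numpy as np

def dtw_path(a, b):
    # DTW dynamic program that also returns the optimal warping path.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack from the end of both series
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dba_iteration(average, series_list):
    # One DBA update: pool the points of every series that DTW aligns to each
    # position of the current average, then average each pool.
    pools = [[] for _ in range(len(average))]
    for s in series_list:
        for i, j in dtw_path(average, s):
            pools[i].append(s[j])
    return np.array([np.mean(p) if p else average[i] for i, p in enumerate(pools)])

# Usage: start from any member of the set and iterate a few times.
series_list = [np.sin(np.linspace(0, 2 * np.pi, 100) + phase) for phase in (0.0, 0.2, 0.4)]
avg = series_list[0].copy()
for _ in range(5):
    avg = dba_iteration(avg, series_list)

A centroid computed this way respects the warping, which is what makes a k-means-style loop meaningful under DTW, unlike the plain coordinate-wise mean criticized in the earlier answer.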
4,805
Dynamic Time Warping Clustering
Dynamic Time Warping compares the realized data points, which may or may not work. A more rigorous approach is to compare the distributions of the time series by way of a metric called the telescope distance. The cool thing about this metric is that the empirical calculation is done by fitting a series of binary classifiers such as SVMs. For a brief explanation, see this. For clustering time series, it has been shown to outperform DTW; see Table 1 in the original paper [1].
[1] Ryabko, D., & Mary, J. (2013). A binary-classification-based metric between time-series distributions and its use in statistical and learning problems. The Journal of Machine Learning Research, 14(1), 2837-2856.
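To make the "distance via binary classifiers" idea tangible, here is a loose, hand-wavy illustration of classifier separability as a proxy for distributional distance. This is my own sketch and not the telescope distance as actually defined by Ryabko and Mary; the function names, window length and toy data are all assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def windows(x, k):
    # All length-k sliding windows of a 1-d series.
    return np.array([x[i:i + k] for i in range(len(x) - k + 1)])

def classifier_separability(x, y, k=10, seed=0):
    # Rough proxy for "how different are the two series' k-dimensional marginals":
    # label windows by which series they came from, train an SVM, and rescale its
    # held-out accuracy to [0, 1].
    Xw, Yw = windows(x, k), windows(y, k)
    data = np.vstack([Xw, Yw])
    labels = np.r_[np.zeros(len(Xw)), np.ones(len(Yw))]
    tr_x, te_x, tr_y, te_y = train_test_split(data, labels, test_size=0.3, random_state=seed)
    acc = SVC().fit(tr_x, tr_y).score(te_x, te_y)
    return max(0.0, 2 * acc - 1)   # ~0: indistinguishable, ~1: easily separable

rng = np.random.default_rng(0)
a = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)
b = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)   # same process as a
c = rng.normal(size=600)                                           # very different process
print(classifier_separability(a, b))   # should be near 0
print(classifier_separability(a, c))   # should be near 1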
4,806
Dynamic Time Warping Clustering
Yes. A naive and potentially slow approach might be:
1. Create all candidate center combinations. With $k$ the cluster count and $n$ the number of series, there are $\frac{n!}{k!(n-k)!}$ of them; each combination is a set of potential centers.
2. For each series, calculate the DTW distance to every center in the candidate set and assign the series to the closest one.
3. For each candidate set, calculate the total within-cluster distance, and choose the set with the minimum.
I used this for a small project. Here is my repository about Time Series Clustering and my other answer about this.
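A rough sketch of that enumeration (my own illustration, essentially exhaustive k-medoids; the function name is made up, and `dist` stands in for whatever DTW implementation you use):

import numpy as np
from itertools import combinations

def brute_force_k_medoids(series, k, dist):
    # Try every size-k subset of the series as candidate centers, assign each
    # series to its closest center under `dist` (e.g. a DTW function), and keep
    # the cheapest assignment. Cost grows as C(n, k) * n distance evaluations,
    # so this is only viable for very small n.
    best_cost, best_centers, best_labels = np.inf, None, None
    for centers in combinations(range(len(series)), k):
        labels, cost = [], 0.0
        for s in series:
            d = [dist(s, series[c]) for c in centers]
            labels.append(int(np.argmin(d)))
            cost += min(d)
        if cost < best_cost:
            best_cost, best_centers, best_labels = cost, centers, labels
    return best_centers, best_labels, best_cost

# Example call, assuming `dtw` is a pairwise DTW distance function:
# centers, labels, cost = brute_force_k_medoids(series, k=2, dist=dtw)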
4,807
Is minimizing squared error equivalent to minimizing absolute error? Why squared error is more popular than the latter?
Minimizing squared errors (MSE) is definitely not the same as minimizing the absolute deviations (MAD) of the errors. MSE provides the mean response of $y$ conditioned on $x$, while MAD provides the median response of $y$ conditioned on $x$.

Historically, Laplace originally considered the maximum observed error as a measure of the correctness of a model. He soon moved to considering MAD instead. Unable to solve either formulation exactly, he then turned to the differentiable MSE. He and Gauss (seemingly concurrently) derived the normal equations, a closed-form solution for this problem. Nowadays, solving the MAD problem is relatively easy by means of linear programming, which, however, has no closed-form solution.

From an optimization perspective, both correspond to convex functions. However, MSE is differentiable, which allows for gradient-based methods that are much more efficient than their non-differentiable counterpart; the absolute loss is not differentiable at $0$.

A further theoretical reason is that, in a Bayesian setting with uniform priors on the model parameters and normally distributed errors, the MSE solution is also the maximum a posteriori estimate; this link to the normal distribution has been taken as a proof of correctness of the method. Theorists like the normal distribution because they believe it is an empirical fact, while experimentalists like it because they believe it is a theoretical result.

A final reason why MSE may have had the wide acceptance it has is that it is based on the Euclidean distance (in fact, it is the solution of a projection problem in a Euclidean space), which is extremely intuitive given our geometrical reality.
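To make the linear-programming remark concrete, here is a minimal sketch (mine; the function name and synthetic data are made up) of least-absolute-deviations regression written as an LP and solved with scipy.optimize.linprog, next to the closed-form least-squares fit:

import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    # Least absolute deviations as a linear program:
    # minimize sum_i t_i  subject to  -t_i <= y_i - x_i @ beta <= t_i.
    n, p = X.shape
    c = np.r_[np.zeros(p), np.ones(n)]              # only the slacks t are costed
    A_ub = np.block([[ X, -np.eye(n)],
                     [-X, -np.eye(n)]])
    b_ub = np.r_[y, -y]
    bounds = [(None, None)] * p + [(0, None)] * n   # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 + 3 * x + rng.normal(0, 1, 200)
y[:5] += 50                                          # a few gross outliers
X = np.c_[np.ones_like(x), x]
beta_lad = lad_fit(X, y)                             # stays close to (2, 3): median-like fit
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]      # closed form, pulled by the outliers
print(beta_lad, beta_ols)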
4,808
Is minimizing squared error equivalent to minimizing absolute error? Why squared error is more popular than the latter?
As an alternative explanation, consider the following intuition: when minimizing an error, we must decide how to penalize these errors. The most straightforward approach would be a linearly proportional penalty function. With such a function, each deviation from the mean is given a proportional corresponding penalty: twice as far from the mean results in twice the penalty.

The more common approach is to consider a squared proportional relationship between deviations from the mean and the corresponding penalty. This makes sure that the further you are away from the mean, the proportionally more you will be penalized. Using this penalty function, outliers (far away from the mean) are deemed proportionally more informative than observations near the mean. To see this, you can simply plot the two penalty functions against the size of the deviation.

Especially when considering the estimation of regressions (e.g. OLS), different penalty functions will yield different results. Using the linearly proportional penalty function, the regression will assign less weight to outliers than when using the squared proportional penalty function. The median absolute deviation (MAD) is therefore known to be a more robust estimator. In general, a robust estimator fits most of the data points well but 'ignores' outliers, whereas a least squares fit is pulled more towards the outliers.

Even though OLS is pretty much the standard, different penalty functions are most certainly in use as well. As an example, you can take a look at Matlab's robustfit function, which allows you to choose a different penalty (also called 'weight') function for your regression. The penalty functions include andrews, bisquare, cauchy, fair, huber, logistic, ols, talwar and welsch; their corresponding expressions can be found on the website as well. I hope that helps you in getting a bit more intuition for penalty functions :)

Update: If you have Matlab, I can recommend playing with Matlab's robustdemo, which was built specifically for the comparison of ordinary least squares to robust regression. The demo allows you to drag individual points and immediately see the impact on both ordinary least squares and robust regression (which is perfect for teaching purposes!).
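For readers without Matlab, a rough Python analogue of the robustfit comparison (my sketch, not the answer's; statsmodels' RLM with a Huber weight function stands in for robustfit, and the toy data are made up):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 1 + 2 * x + rng.normal(0, 1, 100)
y[-3:] += 40                          # drag a few points far off the line

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                   # squared penalty
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()     # robust (Huber) penalty
print(ols.params, huber.params)       # the robust fit should stay near (1, 2)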
4,809
Is minimizing squared error equivalent to minimizing absolute error? Why squared error is more popular than the latter?
In theory you could use any kind of loss function. The absolute and the squared loss functions just happen to be the most popular and the most intuitive ones. According to this Wikipedia entry:

"A common example involves estimating 'location.' Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances."

As also explained in the Wikipedia entry, the choice of loss function depends on how you value deviations from your target. If all deviations are equally bad for you, no matter their sign, then you could use the absolute loss function. If deviations become worse the farther away you are from the optimum, and you don't care about whether the deviation is positive or negative, then the squared loss function is your easiest choice. But if none of the above definitions of loss fits your problem at hand, because e.g. small deviations are worse for you than big deviations, then you can choose a different loss function and try to solve the minimization problem. However, the statistical properties of your solution might be hard to assess.
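A tiny numerical illustration of that freedom (mine, not from the answer; the data are made up and the reported values are approximate): plug different loss functions into a generic optimizer and watch the optimal "location" move from the mean toward the median.

import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([1.0, 2.0, 2.5, 3.0, 50.0])   # one large outlier

def best_location(loss):
    # One-parameter "model": the constant c minimizing sum(loss(x - c)).
    return minimize_scalar(lambda c: np.sum(loss(data - c))).x

print(best_location(np.square))                    # ~11.7, the mean
print(best_location(np.abs))                       # ~2.5, the median
print(best_location(lambda e: np.abs(e) ** 1.5))   # something in between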
4,810
Is minimizing squared error equivalent to minimizing absolute error? Why squared error is more popular than the latter?
As another answer has explained, minimizing squared error is not the same as minimizing absolute error. The reason minimizing squared error is preferred is that it guards better against large errors. Say your employer's payroll department accidentally pays each of a total of ten employees \$50 less than required. That's an absolute error of \$500. It's also an absolute error of \$500 if the department pays just one employee \$500 less. But in terms of squared error, it's 25,000 versus 250,000. It's not always better to use squared error. If you have a data set with an extreme outlier due to a data acquisition error, minimizing squared error will pull the fit towards the extreme outlier much more than minimizing absolute error would. That being said, it's usually better to use squared error.
4,811
Is minimizing squared error equivalent to minimizing absolute error? Why squared error is more popular than the latter?
Short answers: nope, they are not equivalent; and the mean has more interesting statistical properties than the median.
4,812
Why zero correlation does not necessarily imply independence
Correlation measures linear association between two given variables, and it has no obligation to detect any other form of association. So those two variables might be associated in several other non-linear ways, and correlation could not distinguish them from the independent case. As a very didactic, artificial and unrealistic example, one can consider $X$ such that $P(X=x)=1/3$ for $x=-1, 0, 1$ and $Y=X^2$. Notice that they are not only associated, but one is a function of the other. Nonetheless, their correlation is 0, for their association is orthogonal to the association that correlation can detect.
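A quick numerical check of that construction (a sketch I added, not part of the original answer):

import numpy as np

x = np.array([-1, 0, 1])              # each value with probability 1/3
y = x ** 2                            # a deterministic function of x
print(np.corrcoef(x, y)[0, 1])        # 0.0 (up to floating point): zero correlation despite full dependence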
4,813
Why zero correlation does not necessarily imply independence
There is a generalized lack of rigor in the use of the word "correlation" for the simple reason that it can have widely differing assumptions and meanings. The simplest, loosest and most common usage is that there is some vague association, relationship or lack of independence between a static pair of random variables.

Here, the default metric referred to is usually the Pearson correlation, which is a standardized measure of pairwise, linear association between two continuously distributed variables. One of the commonest misuses of the Pearson correlation is to report it as a percentage. It is definitely not a percentage. The Pearson correlation, r, ranges between -1.0 and +1.0, where 0 means no linear association. Another not so widely recognized issue with using the Pearson correlation as the default is that it is actually quite a stringent, non-robust measure of linearity requiring interval-scaled variates as input (see Paul Embrechts' excellent paper on Correlation and Dependency in Risk Management: Properties and Pitfalls here: https://people.math.ethz.ch/~embrecht/ftp/pitfalls.pdf). Embrechts notes that there are many fallacious assumptions about dependence that begin with assumptions about the underlying structure and geometric shape of these relationships: "These fallacies arise from a naive assumption that dependence properties of the elliptical world also hold in the non-elliptical world." Embrechts points to copulas as a much wider class of dependence metrics used in finance and risk management, of which the Pearson correlation is just one type.

Columbia's Statistics department spent the academic year 2013-2014 focused on developing deeper understandings of dependence structures: e.g., linear, nonlinear, monotonic, rank, parametric, nonparametric, potentially highly complex and possessing wide differences in scaling. The year ended with a 3-day workshop and conference that brought together most of the top contributors in this field (http://datascience.columbia.edu/workshop-and-conference-nonparametric-measures-dependence-apr-28-may-2). These contributors included the Reshef brothers, now famous for a 2011 Science paper, Detecting Novel Associations in Large Data Sets (http://www.uvm.edu/~cdanfort/csc-reading-group/reshef-correlation-science-2011.pdf), that has been widely criticized (see AndrewGelman.com for a good overview, published simultaneously with the Columbia event: http://andrewgelman.com/2014/03/14/maximal-information-coefficient). The Reshefs addressed all of these criticisms in their presentation (available on the Columbia conference website), as well as presenting a vastly more efficient MIC algorithm. Many other leading statisticians presented at this event, including Gabor Szekely, now at the NSF in DC, who developed the distance and partial distance correlations, and Deep Mukhopadhyay of Temple U, who presented his Unified Statistical Algorithm, a framework for unified algorithms of data science, based on work done with Eugene Franzen (http://www.fox.temple.edu/mcm_people/subhadeep-mukhopadhyay/). And many others. For me, one of the more interesting themes was the wide leverage and use of Reproducing Kernel Hilbert Spaces (RKHS) and the chi-square. If there was a modal approach to dependence structures at this conference, it was the RKHS.

The typical intro statistics textbook is perfunctory in its treatment of dependence, usually relying on presentations of the same set of visualizations of circular or parabolic relationships. More sophisticated texts will delve into Anscombe's quartet, a visualization of four different datasets possessing similar, simple statistical properties but hugely differing relationships: https://en.wikipedia.org/wiki/Anscombe%27s_quartet

One of the great things about this workshop was the multitude of dependence structures and relationships visualized and presented, going far beyond the standard, perfunctory treatment. For instance, the Reshefs had dozens of thumbnail graphics that represented just a sampling of possible nonlinearities, and Deep Mukhopadhyay had stunning visuals of highly complex relationships that looked more like a satellite view of the Himalayas. Stats and data science textbook authors need to take note. Coming out of the Columbia conference with the development and visualization of these highly complex, pairwise dependence structures, I was left questioning the ability of multivariate statistical models to capture these nonlinearities and complexities.
4,814
Why zero correlation does not necessarily imply independence
It depends on your exact definition of "correlation", but it isn't too hard to construct degenerate cases. "Independent" could mean something like "no predictive power, at all, ever" just as easily as it could mean "no linear correlation". Linear correlation, for example, would not reveal the dependence of $y = \sin(2000x)$ on $x$ if the domain of $x$ were $[0,1)$.
4,815
Why zero correlation does not necessarily imply independence
An intuitive example would be a circle. I have two variables $X$ and $Y$, and they satisfy the equation $$X^2+Y^2=1$$ Now, $X$ and $Y$ are definitely not independent of each other, because given $X$ we can calculate $Y$ up to sign, and vice versa. But their Pearson correlation coefficient is $0$. This is because correlation only captures the linear relationship between two variables.
4,816
Why zero correlation does not necessarily imply independence
Basically, dependence of Y on X means that the distribution of the values of Y depends in some way on the value of X. That dependence can be on the mean value of Y (the usual case presented in most of the answers) or on any other characteristic of Y. For example, let X be 0 or 1. If X = 0, let Y be 0; if X = 1, let Y be -1, 0 or 1 (with equal probability). X and Y are uncorrelated. In mean, Y doesn't depend on X, because whatever the value of X, the mean of Y is 0. But clearly the distribution of the values of Y depends on the value of X. In this case, for example, the variance of Y is 0 when X = 0 and > 0 when X = 1, thus there is, at least, a dependence on variance, i.e. there is a dependence. So, linear correlation only shows one type of dependence on the mean (linear dependence), which in turn is only a special case of dependence.
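A small simulation of exactly this construction (my addition, not part of the original answer; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)                                   # X is 0 or 1
y = np.where(x == 0, 0, rng.choice([-1, 0, 1], size=x.size))      # Y | X=1 uniform on {-1, 0, 1}

print(np.corrcoef(x, y)[0, 1])            # ~0: the mean of Y is 0 under either value of X
print(y[x == 0].var(), y[x == 1].var())   # 0 vs ~2/3: the spread of Y clearly depends on X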
4,817
Why zero correlation does not necessarily imply independence
Adding to @Marcelo Ventura's and @Mike Hunter's great answers, and to the great discussion around this on Quora. An important (implicit) point made here and in the Quora thread is that although correlation is a linear measure, that does not mean it can only quantify relationships between exactly linearly dependent variables. Arguably, an equally important factor is whether there is a monotone relationship between the variables. As stated on Minitab:

"In a monotonic relationship, the variables tend to move in the same relative direction, but not necessarily at a constant rate. In a linear relationship, the variables move in the same direction at a constant rate."

This means that if we have non-monotonically related variables, we can observe a zero correlation even though they are not independent. To illustrate this, take $f(x) = x^2$ and use Python to evaluate the function.

If we look at $x$ in $[0, 50)$, we find that $f(x)$ has a monotone relationship with $x$; as a result, the correlations are close to 1:

import numpy as np
import seaborn as sns
from scipy.stats import pearsonr, spearmanr

x = np.arange(0, 50, 1)
f = lambda x: x ** 2
y = f(x)
sns.scatterplot(x=x, y=y)

# Get correlations using scipy
print(pearsonr(x, y)[0])   # Pearson correlation: 0.967
print(spearmanr(x, y)[0])  # Spearman correlation: 0.999...

Now, if we look at $x$ in $[-25, 25)$, we find that $f(x)$ no longer has a monotone relationship with $x$; the correlations are thus close to zero, as expected:

x = np.arange(-25, 25, 1)
y = f(x)
sns.scatterplot(x=x, y=y)

print(pearsonr(x, y)[0])   # Pearson correlation: -0.077
print(spearmanr(x, y)[0])  # Spearman correlation: -0.059
4,818
Why zero correlation does not necessarily imply independence
Zero correlation does not imply independence for multiple reasons. One possibility is that the two variables are both dependent on a third variable that drives their association.
4,819
Excel as a statistics workbench
Use the right tool for the right job and exploit the strengths of the tools you are familiar with. In Excel's case there are some salient issues:

- Please don't use a spreadsheet to manage data, even if your data will fit into one. You're just asking for trouble, terrible trouble. There is virtually no protection against typographical errors, wholesale mixing up of data, truncating data values, etc., etc.
- Many of the statistical functions indeed are broken. The t distribution is one of them.
- The default graphics are awful.
- It is missing some fundamental statistical graphics, especially boxplots and histograms.
- The random number generator is a joke (but despite that is still effective for educational purposes).
- Avoid the high-level functions and most of the add-ins; they're c**p. But this is just a general principle of safe computing: if you're not sure what a function is doing, don't use it. Stick to the low-level ones (which include arithmetic functions, ranking, exp, ln, trig functions, and--within limits--the normal distribution functions). Never use an add-in that produces a graphic: it's going to be terrible. (NB: it's dead easy to create your own probability plots from scratch. They'll be correct and highly customizable.)

In its favor, though, are the following:

- Its basic numerical calculations are as accurate as double precision floats can be. They include some useful ones, such as log gamma.
- It's quite easy to wrap a control around input boxes in a spreadsheet, making it possible to create dynamic simulations easily.
- If you need to share a calculation with non-statistical people, most will have some comfort with a spreadsheet and none at all with statistical software, no matter how cheap it may be.
- It's easy to write effective numerical macros, including porting old Fortran code, which is quite close to VBA. Moreover, the execution of VBA is reasonably fast. (For example, I have code that accurately computes non-central t distributions from scratch and three different implementations of Fast Fourier Transforms.)
- It supports some effective simulation and Monte-Carlo add-ons like Crystal Ball and @Risk. (They use their own RNGs, by the way--I checked.)
- The immediacy of interacting directly with (a small set of) data is unparalleled: it's better than any stats package, Mathematica, etc. When used as a giant calculator with loads of storage, a spreadsheet really comes into its own.
- Good EDA, using robust and resistant methods, is not easy, but after you have done it once, you can set it up again quickly. With Excel you can effectively reproduce all the calculations (although only some of the plots) in Tukey's EDA book, including median polish of n-way tables (although it's a bit cumbersome).

In direct answer to the original question, there is a bias in that paper: it focuses on the material that Excel is weakest at and that a competent statistician is least likely to use. That's not a criticism of the paper, though, because warnings like this need to be broadcast.
4,820
Excel as a statistics workbench
An interesting paper about using Excel in a Bioinformatics setting is: Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics, BMC Bioinformatics, 2004 (link). This short paper describes the problem of automatic type conversions in Excel (in particular date and floating point conversions). For example, the gene name Sept2 is converted to 2-Sept. You can actually find this error in online databases. Using Excel to manage medium to large amounts of data is dangerous. Mistakes can easily creep in without the user noticing.
Excel as a statistics workbench
An interesting paper about using Excel in a Bioinformatics setting is: Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics, BMC Bioinformat
Excel as a statistics workbench An interesting paper about using Excel in a Bioinformatics setting is: Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics, BMC Bioinformatics, 2004 (link). This short paper describes the problem of automatic type conversions in Excel (in particular date and floating point conversions). For example, the gene name Sept2 is converted to 2-Sept. You can actually find this error in online databases. Using Excel to manage medium to large amounts of data is dangerous. Mistakes can easily creep in without the user noticing.
Excel as a statistics workbench An interesting paper about using Excel in a Bioinformatics setting is: Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics, BMC Bioinformat
4,821
Excel as a statistics workbench
Well, the question of whether the paper is correct or biased should be easy to settle: you could just replicate some of their analyses and see whether you get the same answers. McCullough has been taking different versions of MS Excel apart for some years now, and apparently MS haven't seen fit to fix errors he pointed out years ago in previous versions. I don't see a problem with playing around with data in Excel. But to be honest, I would not do my "serious" analyses in Excel. My main problem would not be inaccuracies (which I guess will only very rarely be a problem) but the impossibility of tracking and replicating my analyses a year later when a reviewer or my boss asks why I didn't do X - you can save your work and your blind alleys in commented R code, but not in any meaningful way in Excel.
Excel as a statistics workbench
Well, the question whether the paper is correct or biased should be easy: you could just replicate some of their analyses and see whether you get the same answers. McCullough has been taking different
Excel as a statistics workbench Well, the question whether the paper is correct or biased should be easy: you could just replicate some of their analyses and see whether you get the same answers. McCullough has been taking different versions of MS Excel apart for some years now, and apparently MS haven't seen fit to fix errors he pointed out years ago in previous versions. I don't see a problem with playing around with data in Excel. But to be honest, I would not do my "serious" analyses in Excel. My main problem would not be inaccuracies (which I guess will only very rarely be a problem) but the impossibility of tracking and replicating my analyses a year later when a reviewer or my boss asks why I didn't do X - you can save your work and your blind alleys in commented R code, but not in a meaningful way in Excel.
Excel as a statistics workbench Well, the question whether the paper is correct or biased should be easy: you could just replicate some of their analyses and see whether you get the same answers. McCullough has been taking different
4,822
Excel as a statistics workbench
Another good reference source for why you might not want to use Excel is: Spreadsheet Addiction. If you find yourself in a situation where you really need to use Excel (some academic departments insist), then I would suggest using the RExcel plugin. This lets you interface through Excel but uses R as the computational engine. You don't need to know R to use it; you can use drop-down menus and dialogs, but you can do a lot more if you do. Since R is doing the computations, they are a lot more trustworthy than Excel's, and you get much better graphics, including boxplots and other plots missing from Excel. It even works with the automatic cell updating in Excel (though that can make things really slow if you have a lot of complex analyses to recompute every time). It does not fix all the problems from the Spreadsheet Addiction page, but it is a huge improvement over using straight Excel.
Excel as a statistics workbench
Another good reference source for why you might not want to use excel is: Spreadsheet addiction If you find yourself in a situation where you really need to use excel (some accademic departments insis
Excel as a statistics workbench Another good reference source for why you might not want to use excel is: Spreadsheet addiction If you find yourself in a situation where you really need to use excel (some accademic departments insist), then I would suggest using the Rexcel plugin. This lets you interface using Excel, but uses the R program as the computational engine. You don't need to know R to use it, you can use drop down menus and dialogs, but you can do a lot more if you do. Since R is doing the computations they are a lot more trustworthy than Excel and you have much better graphs and boxplots and other graphs missing from excel. It even works with the automatic cell updating in excel (though that can make things really slow if you have a lot of complex analyses to recompute every time). It does not fix all the problems from the spreadsheet addiction page, but it is a huge improvement over using straight excel.
Excel as a statistics workbench Another good reference source for why you might not want to use excel is: Spreadsheet addiction If you find yourself in a situation where you really need to use excel (some accademic departments insis
4,823
Excel as a statistics workbench
The papers and other participants point out technical weaknesses. Whuber does a good job of outlining at least some of Excel's strengths. I personally do extensive statistical work in Excel (hypothesis testing, linear and multiple regression) and love it. I use Excel 2003, with a capacity of 256 columns and 65,000 rows, which can handle just about 100% of the data sets I use. I understand Excel 2007 has extended that capacity by a huge amount (rows in the millions). As Whuber mentions, Excel also serves as a starting platform for a multitude of pretty outstanding add-in software packages that are all pretty powerful and easy to use. I am thinking of Crystal Ball and @Risk for Monte Carlo simulation; XLStat for all-around powerful stats and data analysis; What's Best for optimization. And the list goes on. It's as if Excel were an iPod or iPad with a zillion pretty incredible apps. Granted, the Excel apps are not cheap, but for what they are capable of doing they are typically pretty great bargains. As far as model documentation is concerned, it is so easy to insert a text box where you can literally write a book about your methodology, your sources, etc. You can also insert comments in any cell. So, if anything, Excel is really good at facilitating embedded documentation.
Excel as a statistics workbench
The papers and other participants point out to technical weaknesses. Whuber does a good job of outlining at least some of its strengths. I personally do extensive statistical work in Excel (hypothes
Excel as a statistics workbench The papers and other participants point out to technical weaknesses. Whuber does a good job of outlining at least some of its strengths. I personally do extensive statistical work in Excel (hypothesis testing, linear and multiple regressions) and love it. I use Excel 2003 with a capacity of 256 columns and 65,000 rows which can handle just about 100% of the data sets I use. I understand Excel 2007 has extended that capacity by a huge amount (rows in the millions). As Whuber mentions, Excel also serves as a starting platform for a multitude of pretty outstanding add-in software that are all pretty powerful and easy to use. I am thinking of Crystal Ball and @Risk for Monte Carlo Simulation; XLStat for all around powerful stats and data analysis; What's Best for optimization. And, the list goes on. It's like Excel is the equivalent of an IPod or IPad with a zillion of pretty incredible Apps. Granted the Excel Apps are not cheap. But, for what they are capable of doing they are typically pretty great bargains. As far as model documentation is concerned, it is so easy to insert a text box where you can literally write a book about your methodology, your sources, etc... You can also insert comments in any cell. So, if anything Excel is really good for facilitating embedded documentation.
Excel as a statistics workbench The papers and other participants point out to technical weaknesses. Whuber does a good job of outlining at least some of its strengths. I personally do extensive statistical work in Excel (hypothes
4,824
Excel as a statistics workbench
Incidentally, a question about the use of Google spreadsheets raised contrasting (hence interesting) opinions on the topic: Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others? I have in mind an older paper which didn't seem so pessimistic, but it is only marginally cited in the paper you mentioned: Keeling and Pavur, A comparative study of the reliability of nine statistical software packages (CSDA 2007 51: 3811). But now, I found yours on my hard drive. There was also a special issue in 2008, see Special section on Microsoft Excel 2007, and more recently in the Journal of Statistical Software: On the Numerical Accuracy of Spreadsheets. I think it is a long-standing debate, and you will find varying papers/opinions about Excel's reliability for statistical computing. There are different levels of discussion (what kind of analysis do you plan to do, do you rely on the internal solver, are there non-linear terms entering a given model, etc.), and sources of numerical inaccuracy might arise from computing errors proper or from design-choice issues; this is well summarized in M. Altman, J. Gill & M.P. McDonald, Numerical Issues in Statistical Computing for the Social Scientist, Wiley, 2004. Now, for exploratory data analysis, there are various alternatives that provide enhanced visualization capabilities, multivariate and dynamic graphics, e.g. GGobi -- but see related threads on this wiki. Clearly, though, the first point you made addresses another issue (IMO), namely that of using a spreadsheet to deal with large data sets: it is simply not possible to import a large csv file into Excel (I'm thinking of genomic data, but it applies to other kinds of high-dimensional data). Excel has not been built for that purpose.
Excel as a statistics workbench
Incidently, a question around the use of Google spreadsheets raised contrasting (hence, interesting) opinions about that, Do some of you use Google Docs spreadsheet to conduct and share your statistic
Excel as a statistics workbench Incidently, a question around the use of Google spreadsheets raised contrasting (hence, interesting) opinions about that, Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others? I have in mind an older paper which didn't seem so pessimist, but it is only marginally cited in the paper you mentioned: Keeling and Pavur, A comparative study of the reliability of nine statistical software packages (CSDA 2007 51: 3811). But now, I found yours on my hard drive. There was also a special issue in 2008, see Special section on Microsoft Excel 2007, and more recently in the Journal of Statistical Software: On the Numerical Accuracy of Spreadsheets. I think it is a long-standing debate, and you will find varying papers/opinions about Excel reliability for statistical computing. I think there are different levels of discussion (what kind of analysis do you plan to do, do you rely on the internal solver, are there non-linear terms that enter a given model, etc.), and sources of numerical inaccuracy might arise as the result of proper computing errors or design choices issues; this is well summarized in M. Altman, J. Gill & M.P. McDonald, Numerical Issues in Statistical Computing for the Social Scientist, Wiley, 2004. Now, for exploratory data analysis, there are various alternatives that provide enhanced visualization capabilities, multivariate and dynamic graphics, e.g. GGobi -- but see related threads on this wiki. But, clearly the first point you made addresses another issue (IMO), namely that of using a spreadsheet to deal with large data set: it is simply not possible to import a large csv file into Excel (I'm thinking of genomic data, but it applies to other kind of high-dimensional data). It has not been built for that purpose.
Excel as a statistics workbench Incidently, a question around the use of Google spreadsheets raised contrasting (hence, interesting) opinions about that, Do some of you use Google Docs spreadsheet to conduct and share your statistic
4,825
Excel as a statistics workbench
Excel is no good for statistics, but it can be wonderful for exploratory data analysis. Take a look at this video for some particularly interesting techniques. Excel's ability to conditionally color your data and add in-cell bar charts can give great insight into the structure of your raw data.
Excel as a statistics workbench
Excel is no good for statistics, but it can be wonderful for exploratory data analysis. Take a look at this video for some particularly interesting techniques. Excel's ability to conditionally color
Excel as a statistics workbench Excel is no good for statistics, but it can be wonderful for exploratory data analysis. Take a look at this video for some particularly interesting techniques. Excel's ability to conditionally color your data and add in-cell bar charts can give great insight into the structure of your raw data.
Excel as a statistics workbench Excel is no good for statistics, but it can be wonderful for exploratory data analysis. Take a look at this video for some particularly interesting techniques. Excel's ability to conditionally color
4,826
Excel as a statistics workbench
Excel can be great both for exploratory data analysis and linear regression analysis with the right plugins. There are a number of commercial products, although most of them leave something to be desired in terms of the quality of the output they produce (they don't take full advantage of Excel's charting options or the ability to link with other Office applications) and in general they are not as good as they could be for data visualization and presentation. They also tend to not support a disciplined modeling approach in which (among other things) you keep a well-documented audit trail for your work. Here is a FREE plugin, "RegressIt", that addresses many of these issues: http://regressit.com. It provides very good support for exploratory analysis (including the ability to generate parallel time series plots and scatterplot matrices with up to 50 variables), it makes it easy to apply data transformations such as lagging, logging, and differencing (which are often not applied appropriately by naive users of regression), it provides very detailed table and chart output that supports best practices of data analysis, and it maintains an audit-trail worksheet that facilitates side-by-side model comparisons as well as keeping a record of what models were fitted in what order. It makes a good complement to whatever else you may be using, if you are dealing with multivariate data and at least some of your work is being carried out in an Excel environment.
Excel as a statistics workbench
Excel can be great both for exploratory data analysis and linear regression analysis with the right plugins. There are a number of commercial products, although most of them leave something to be des
Excel as a statistics workbench Excel can be great both for exploratory data analysis and linear regression analysis with the right plugins. There are a number of commercial products, although most of them leave something to be desired in terms of the quality of the output they produce (they don't take full advantage of Excel's charting options or the ability to link with other Office applications) and in general they are not as good as they could be for data visualization and presentation. They also tend to not support a disciplined modeling approach in which (among other things) you keep a well-documented audit trail for your work. Here is a FREE plugin, "RegressIt", that addresses many of these issues: http://regressit.com. It provides very good support for exploratory analysis (including the ability to generate parallel time series plots and scatterplot matrices with up to 50 variables), it makes it easy to apply data transformations such as lagging, logging, and differencing (which are often not applied appropriately by naive users of regression), it provides very detailed table and chart output that supports best practices of data analysis, and it maintains an audit-trail worksheet that facilitates side-by-side model comparisons as well as keeping a record of what models were fitted in what order. It makes a good complement to whatever else you may be using, if you are dealing with multivariate data and at least some of your work is being carried out in an Excel environment.
Excel as a statistics workbench Excel can be great both for exploratory data analysis and linear regression analysis with the right plugins. There are a number of commercial products, although most of them leave something to be des
4,827
Best method for short time-series
It is very common for extremely simple forecasting methods like "forecast the historical average" to outperform more complex methods. This is even more likely for short time series. Yes, in principle you can fit an ARIMA or even more complex model to 20 or fewer observations, but you will be rather likely to overfit and get very bad forecasts. So: start with simple benchmarks, e.g., the historical mean; the historical median, for added robustness; or the random walk (forecast the last observation forward). Assess these on out-of-sample data. Compare any more complex model to these benchmarks. You may be surprised to see how hard it is to outperform these simple methods. In addition, compare the robustness of different methods to these simple ones, e.g., by assessing not only average accuracy out-of-sample but also the error variance, using your favorite error measure. Yes, as Rob Hyndman writes in the post that Aleksandr links to, out-of-sample testing is a problem in itself for short series - but there really is no good alternative. (Don't use in-sample fit, which is no guide to forecasting accuracy.) The AIC won't help you with the median and the random walk. However, you could use time-series cross-validation, which the AIC approximates anyway.
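As an illustration of the benchmark-first advice above, the following R sketch scores the three simple benchmarks on the held-out tail of a made-up 20-point series; the simulated data and the 15/5 split are assumptions of the example, not a recommendation.

set.seed(42)
y <- cumsum(rnorm(20))                  # hypothetical short series
train <- y[1:15]; test <- y[16:20]      # simple out-of-sample split
h <- length(test)

fc_mean   <- rep(mean(train), h)        # historical mean
fc_median <- rep(median(train), h)      # historical median (more robust)
fc_rw     <- rep(tail(train, 1), h)     # random walk: carry the last value forward

mae <- function(fc) mean(abs(test - fc))
c(mean = mae(fc_mean), median = mae(fc_median), rw = mae(fc_rw))

Any ARIMA or other candidate model should be asked to beat these numbers on the same held-out data before it is taken seriously.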
Best method for short time-series
It is very common for extremely simple forecasting methods like "forecast the historical average" to outperform more complex methods. This is even more likely for short time series. Yes, in principle
Best method for short time-series It is very common for extremely simple forecasting methods like "forecast the historical average" to outperform more complex methods. This is even more likely for short time series. Yes, in principle you can fit an ARIMA or even more complex model to 20 or fewer observations, but you will be rather likely to overfit and get very bad forecasts. So: start with a simple benchmark, e.g., the historical mean the historical median for added robustness the random walk (forecast the last observation out) Assess these on out-of-sample data. Compare any more complex model to these benchmarks. You may be surprised at seeing how hard it is to outperform these simple methods. In addition, compare the robustness of different methods to these simple ones, e.g., by not only assessing average accuracy out-of-sample, but also the error variance, using your favorite error measure. Yes, as Rob Hyndman writes in his post that Aleksandr links to, out-of-sample testing is a problem in itself for short series - but there really is no good alternative. (Don't use in-sample fit, which is no guide to forecasting accuracy.) The AIC won't help you with the median and the random walk. However, you could use time-series cross-validation, which AIC approximates, anyway.
Best method for short time-series It is very common for extremely simple forecasting methods like "forecast the historical average" to outperform more complex methods. This is even more likely for short time series. Yes, in principle
4,828
Best method for short time-series
I am again using a question as an opportunity to learn more about time series - one of the (many) topics of interest to me. After some brief research, it seems to me that there exist several approaches to the problem of modeling short time series. The first approach is to use standard/linear time series models (AR, MA, ARMA, etc.), but to pay attention to certain parameters, as described in this post [1] by Rob Hyndman, who needs no introduction in the time series and forecasting world. The second approach, referred to by most of the related literature that I have seen, suggests using non-linear time series models, in particular the threshold models [2], which include the threshold autoregressive model (TAR), the self-exciting TAR (SETAR), the threshold autoregressive moving average model (TARMA), and the TARMAX model, which extends the TAR model to exogenous time series. Excellent overviews of non-linear time series models, including threshold models, can be found in this paper [3] and this paper [4]. Finally, another related research paper [5] (IMHO) describes an interesting approach based on the Volterra-Wiener representation of non-linear systems - see this [6] and this [7]. This approach is argued to be superior to other techniques in the context of short and noisy time series. References Hyndman, R. (March 4, 2014). Fitting models to short time series. [Blog post]. Retrieved from http://robjhyndman.com/hyndsight/short-time-series Pennsylvania State University. (2015). Threshold models. [Online course materials]. STAT 510, Applied Time Series Analysis. Retrieved from https://online.stat.psu.edu/stat510/lesson/13/13.2 Zivot, E. (2006). Non-linear time series models. [Class notes]. ECON 584, Time Series Econometrics. University of Washington. Retrieved from http://faculty.washington.edu/ezivot/econ584/notes/nonlinear.pdf Chen, C. W. S., So, M. K. P., & Liu, F.-C. (2011). A review of threshold time series models in finance. Statistics and Its Interface, 4, 167–181. Retrieved from http://intlpress.com/site/pub/files/_fulltext/journals/sii/2011/0004/0002/SII-2011-0004-0002-a012.pdf Barahona, M., & Poon, C.-S. (1996). Detection of nonlinear dynamics of short, noisy time series. Nature, 381, 215-217. Retrieved from http://www.bg.ic.ac.uk/research/m.barahona/nonlin_detec_nature.PDF Franz, M. O. (2011). Volterra and Wiener series. Scholarpedia, 6(10):11307. Retrieved from http://www.scholarpedia.org/article/Volterra_and_Wiener_series Franz, M. O., & Scholkopf, B. (n.d.). A unifying view of Wiener and Volterra theory and polynomial kernel regression. Retrieved from http://www.is.tuebingen.mpg.de/fileadmin/user_upload/files/publications/nc05_%5B0%5D.pdf
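For readers who want to try the threshold models mentioned in the second approach, the sketch below assumes the tsDyn package's setar() interface in R; the lynx series and the lag/threshold settings are stand-ins chosen only to illustrate the call, and with genuinely short series such a fit would be mainly exploratory.

# install.packages("tsDyn")            # assumed package providing setar()
library(tsDyn)
y <- log10(lynx)                       # stand-in series shipped with base R
fit <- setar(y, m = 2, thDelay = 1)    # SETAR with 2 lags, threshold variable = lag 2
summary(fit)                           # regimes, estimated threshold, coefficients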
Best method for short time-series
I am again using a question as an opportunity to learn more about time series - one of the (many) topics of my interest. After a brief research, it seems to me that there exist several approaches to t
Best method for short time-series I am again using a question as an opportunity to learn more about time series - one of the (many) topics of my interest. After a brief research, it seems to me that there exist several approaches to the problem of modeling short time series. The first approach is to use standard/linear time series models (AR, MA, ARMA, etc.), but to pay attention to certain parameters, as described in this post [1] by Rob Hyndman, who does not need an introduction in time series and forecasting world. The second approach, referred to by most of the related literature that I have seen, suggest using non-linear time series models, in particular, the threshold models [2], which include threshold autoregressive model (TAR), self-exiting TAR (SETAR), threshold autoregressive moving average model (TARMA), and TARMAX model, which extends TAR model to exogenous time series. Excellent overviews of the non-linear time series models, including threshold models, can be found in this paper [3] and this paper [4]. Finally, another IMHO related research paper [5] describes an interesting approach, which is based on Volterra-Weiner representation of non-linear systems - see this [6] and this [7]. This approach is argued to be superior to other techniques in the context of short and noisy time series. References Hyndman, R. (March 4, 2014). Fitting models to short time series. [Blog post]. Retrieved from http://robjhyndman.com/hyndsight/short-time-series Pennsylvania State University. (2015). Threshold models. [Online course materials]. STAT 510, Applied Time Series Analysis. Retrieved from https://online.stat.psu.edu/stat510/lesson/13/13.2 Zivot, E. (2006). Non-linear time series models. [Class notes]. ECON 584, Time Series Econometrics. Washington University. Retrieved from http://faculty.washington.edu/ezivot/econ584/notes/nonlinear.pdf Chen, C. W. S., So, M. K. P., & Liu, F.-C. (2011). A review of threshold time series models in finance. Statistics and Its Interface, 4, 167–181. Retrieved from http://intlpress.com/site/pub/files/_fulltext/journals/sii/2011/0004/0002/SII-2011-0004-0002-a012.pdf Barahona, M., & Poon, C.-S. (1996). Detection of nonlinear dynamics of short, noisy time series. Nature, 381, 215-217. Retrieved from http://www.bg.ic.ac.uk/research/m.barahona/nonlin_detec_nature.PDF Franz, M. O. (2011). Volterra and Wiener series. Scholarpedia, 6(10):11307. Retrieved from http://www.scholarpedia.org/article/Volterra_and_Wiener_series Franz, M. O., & Scholkopf, B. (n.d.). A unifying view of Wiener and Volterra theory and polynomial kernel regression. Retrieved from http://www.is.tuebingen.mpg.de/fileadmin/user_upload/files/publications/nc05_%5B0%5D.pdf
Best method for short time-series I am again using a question as an opportunity to learn more about time series - one of the (many) topics of my interest. After a brief research, it seems to me that there exist several approaches to t
4,829
Best method for short time-series
No, there is no best univariate extrapolation method for a short time series with $T \leq 20$. Extrapolation methods need lots and lots of data. The following qualitative methods work well in practice for very short series or no data: composite forecasts, surveys, the Delphi method, scenario building, forecasting by analogy, and executive opinion. One of the best methods that I know of, and one that works very well, is the use of structured analogies (5th in the list above), where you look for similar/analogous products in the category that you are trying to forecast and use them for short-term forecasting. See this article for examples, and this SAS paper on how to do it using, of course, SAS. One limitation is that forecasting by analogy will work only if you have good analogies; otherwise you could rely on judgmental forecasting. Here is another video from the Forecastpro software on how to use a tool like Forecastpro to do forecasting by analogy. Choosing an analogy is more art than science, and you need domain expertise to select analogous products/situations. Two excellent resources for short or new-product forecasting: Principles of Forecasting by Armstrong and New Product Forecasting by Kahn. The following is for illustrative purposes. I just finished reading The Signal and the Noise by Nate Silver, in which there is a good example on the US and Japanese (an analogue to the US market) housing-market bubbles and their prediction. In the chart below, stop at 10 data points, use one of the extrapolation methods (exponential smoothing/ETS/ARIMA...), and see where it takes you and where the actuals ended up. Admittedly the example I presented is much more complex than simple trend extrapolation; it is just to highlight the risks of trend extrapolation using limited data points. In addition, if your product has a seasonal pattern, you have to use some form of analogous product situation to forecast. I read an article, I think in the Journal of Business Research, showing that if you have 13 weeks of product sales in pharmaceuticals, you can predict with greater accuracy using analogous products.
Best method for short time-series
No, There is no best univariate extrapolation method for a short time series with $T \leq 20$ series. Extrapolation methods need lots and lots of data. Following qualitative methods work well in pract
Best method for short time-series No, There is no best univariate extrapolation method for a short time series with $T \leq 20$ series. Extrapolation methods need lots and lots of data. Following qualitative methods work well in practice for very short or no data: Composite forecasts Surveys Delphi method Scenario building Forecast by analogy Executive opinion One of the best methods that I know that works very well is the use of structured analogies (5th in the list above) where you look for similar/analogous products in the category that you are trying to forecast and use them to forecast short term forecasting. See this article for examples, and SAS paper on "how to" do this using of course SAS. One limitation is that forecast by analogies will work only of you have good analogies otherwise you could rely on judgemental forecasting. Here is another video from Forecastpro software on how to use a tool like Forecastpro to do forecasting by analogy. Choosing an analogy is more art than science and you need domain expertise to select analogous products/situations. Two excellent resources for short or new product forecasting: Principle of Forecasting by Armstrong New Product forecasting by Kahn The following is for illustrative purpose.I just finished reading Signal and Noise by Nate Silver, in that there is a good example on US and Japanese(analogue to US market) housing market bubble and prediction. In the chart below if you stop at 10 data points and use one of the extrapolation methods (exponential smooting/ets/arima...) and see where it takes you and where the actual ended. Again the example I presented is much more complex than simple trend extrapolation. This is just to highlight the risks of trend extrapolation using limited data points. In addition if your product has seasonal pattern, you have to use some form of analogous products situation to forecast. I read an article I think in Journal of Business research that if you have 13 week of product sales in pharmaceuticals, you could predict data with greater accuracy using analogous products.
Best method for short time-series No, There is no best univariate extrapolation method for a short time series with $T \leq 20$ series. Extrapolation methods need lots and lots of data. Following qualitative methods work well in pract
4,830
Best method for short time-series
The assumption that the number of observations is critical came from an offhand comment by G.E.P. Box regarding the minimum sample size needed to identify a model. A more nuanced answer, as far as I am concerned, is that the quality of model identification is not based solely on the sample size but on the signal-to-noise ratio in the data. If you have a strong signal-to-noise ratio, you need fewer observations; if you have a low one, you need more observations to identify a model. If your data set is monthly and you have 20 values, it is not possible to empirically identify a seasonal model. HOWEVER, if you think the data might be seasonal, then you could start the modelling process by specifying an AR(12) and then use model diagnostics (tests of significance) to either reduce or augment that structurally deficient model.
Best method for short time-series
The assumption that the number of observations is critical came from an off-handed comment by G.E.P. Box regarding the minimum sample size to identify a model. A more nuanced answer as far as I am co
Best method for short time-series The assumption that the number of observations is critical came from an off-handed comment by G.E.P. Box regarding the minimum sample size to identify a model. A more nuanced answer as far as I am concerned is that the problem/quality of model identification is not solely based upon the sample size but the ratio of signal to noise that is in the data. If you have a strong signal to noise ratio you need less observations. If you have low s/n then you need more samples to identify. If your data set is monthly and you have 20 values it is not possible to empirically identify a seasonal model HOWEVER if you think the data might be seasonal then you could start the modelling process by specifying an ar(12) and then do model diagnostics (tests of significance) to either reduce or to augment your structurally deficient model
Best method for short time-series The assumption that the number of observations is critical came from an off-handed comment by G.E.P. Box regarding the minimum sample size to identify a model. A more nuanced answer as far as I am co
4,831
Best method for short time-series
With very limited data, I would be more inclined to fit the data using Bayesian techniques. Stationarity can be a bit tricky when dealing with Bayesian time series models. One choice is to enforce constraints on parameters. Or, you could not. This is fine if you just want to look at the distribution of the parameters. However, if you want to generate the posterior predictive, then you might have a lot of forecasts that explode. The Stan documentation provides a few examples where they put constraints on the parameters of time series models to ensure stationarity. This is possible for the relatively simple models they use, but it can be pretty much impossible in more complicated time series models. If you really wanted to enforce stationarity, you could use a Metropolis-Hastings algorithm and throw out any coefficients that are improper. However, this requires a lot of eigenvalues to be calculated, which will slow things down.
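A minimal sketch of the constrained-parameter idea, in the spirit of the Stan manual's time-series examples: an AR(1) whose coefficient is declared with bounds so that every posterior draw lies in the stationary region. The rstan call, the simulated 20-point series, and the sampler settings are assumptions for illustration only.

library(rstan)

ar1_code <- "
data { int<lower=2> N; vector[N] y; }
parameters {
  real alpha;
  real<lower=-1, upper=1> phi;     // declared bounds enforce stationarity
  real<lower=0> sigma;
}
model {
  for (n in 2:N)
    y[n] ~ normal(alpha + phi * y[n-1], sigma);
}
"

y <- as.numeric(arima.sim(list(ar = 0.5), n = 20))   # hypothetical short series
fit <- stan(model_code = ar1_code,
            data = list(N = length(y), y = y),
            chains = 2, iter = 2000)
print(fit, pars = c("alpha", "phi", "sigma"))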
Best method for short time-series
With very limited data, I would be more inclined to fit the data using Bayesian techniques. Stationarity can be a bit tricky when dealing with Bayesian time series models. One choice is to enforce con
Best method for short time-series With very limited data, I would be more inclined to fit the data using Bayesian techniques. Stationarity can be a bit tricky when dealing with Bayesian time series models. One choice is to enforce constraints on parameters. Or, you could not. This is fine if you just want to look at the distribution of the parameters. However, if you want to generate the posterior predictive, then you might have a lot of forecasts that explode. The Stan documentation provides a few examples where they put constraints on the parameters of time series models to ensure stationarity. This is possible for the relatively simple models they use, but it can be pretty much impossible in more complicated time series models. If you really wanted to enforce stationarity, you could use a Metropolis-Hastings algorithm and throw out any coefficients that are improper. However, this requires a lot of eigenvalues to be calculated, which will slow things down.
Best method for short time-series With very limited data, I would be more inclined to fit the data using Bayesian techniques. Stationarity can be a bit tricky when dealing with Bayesian time series models. One choice is to enforce con
4,832
Best method for short time-series
The problem, as you wisely pointed out, is the "overfitting" caused by fixed list-based procedures. A smart way is to try to keep the equation simple when you have a negligible amount of data. I have found after many moons that if you simply use an AR(1) model and leave the rate of adaptation (the AR coefficient) to the data, things can work out reasonably well. For example, if the estimated AR coefficient is close to zero, it means that the overall mean would be appropriate. If the coefficient is near +1.0, it means that the last value (adjusted for a constant) is more appropriate. If the coefficient is close to -1.0, then the negative of the last value (adjusted for a constant) would be the best forecast. If the coefficient is anywhere else, it means that a weighted average of the recent past is appropriate. This is precisely what AUTOBOX starts with; it then discards anomalies and fine-tunes the estimated parameter when a small number of observations is encountered. This is an example of the "art of forecasting" when a purely data-driven approach might be inapplicable. Following is an automatic model developed for the 12 data points without concern for anomalies, with Actual/Fit and Forecast here and the residual plot here.
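Here is a bare-bones R version of the "let the AR(1) coefficient decide" idea with a made-up 12-point series; it is only a sketch of the interpretation described above, not AUTOBOX's actual procedure.

set.seed(1)
y <- rnorm(12, mean = 50, sd = 5)       # hypothetical 12 observations
fit <- arima(y, order = c(1, 0, 0))     # AR(1) with a constant
coef(fit)                               # 'ar1' near 0  -> overall mean is the forecast
                                        # 'ar1' near +1 -> last value (plus constant) dominates
predict(fit, n.ahead = 3)$pred          # forecasts implied by the fitted weights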
Best method for short time-series
The problem as you wisely pointed out is the "overfitting" caused by fixed list-based procedures. A smart way is to try and keep the equation simple when you have a negligible amount of data. I have f
Best method for short time-series The problem as you wisely pointed out is the "overfitting" caused by fixed list-based procedures. A smart way is to try and keep the equation simple when you have a negligible amount of data. I have found after many moons that if you simply use an AR(1) model and leave the rate of adaption ( the ar coefficient) to the data things can work out reasonably well. For example if the estimated ar coefficient is close to zero this means that the overall mean would be appropriate . if the coefficient is near +1.0 then this means that the last value (adjusted for a constant is more appropriate . If the coefficient is close to -1.0 then the negative of the last value (adjusted for a constant) would be the best forecast. If the coefficient is otherwise it means that a weighted average of the recent past is appropriate. This is precisely what AUTOBOX starts with and then discards anomalies as it fine tunes the estimated parameter when a "small # of observations" is encountered. This is an example of the "art of forecasting" when a pure data driven approach might be inapplicable. Following is an automatic model developed for the 12 data points without concern for anomalies. with Actual/Fit and Forecast here and residual plot here
Best method for short time-series The problem as you wisely pointed out is the "overfitting" caused by fixed list-based procedures. A smart way is to try and keep the equation simple when you have a negligible amount of data. I have f
4,833
What is difference-in-differences?
What is a difference in differences estimator Difference in differences (DiD) is a tool to estimate treatment effects comparing the pre- and post-treatment differences in the outcome of a treatment and a control group. In general, we are interested in estimating the effect of a treatment $D_i$ (e.g. union status, medication, etc.) on an outcome $Y_i$ (e.g. wages, health, etc.) as in $$Y_{it} = \alpha_i + \lambda_t + \rho D_{it} + X'_{it}\beta + \epsilon_{it}$$ where $\alpha_i$ are individual fixed effects (characteristics of individuals that do not change over time), $\lambda_t$ are time fixed effects, $X_{it}$ are time-varying covariates like individuals' age, and $\epsilon_{it}$ is an error term. Individuals and time are indexed by $i$ and $t$, respectively. If there is a correlation between the fixed effects and $D_{it}$ then estimating this regression via OLS will be biased given that the fixed effects are not controlled for. This is the typical omitted variable bias. To see the effect of a treatment we would like to know the difference between a person in a world in which she received the treatment and one in which she does not. Of course, only one of these is ever observable in practice. Therefore we look for people with the same pre-treatment trends in the outcome. Suppose we have two periods $t = 1, 2$ and two groups $s = A,B$. Then, under the assumption that the trends in the treatment and control groups would have continued the same way as before in the absence of treatment, we can estimate the treatment effect as $$\rho = (E[Y_{ist}|s=A,t=2] - E[Y_{ist}|s=A,t=1]) - (E[Y_{ist}|s=B,t=2] - E[Y_{ist}|s=B,t=1])$$ Graphically this would look something like this: You can simply calculate these means by hand, i.e. obtain the mean outcome of group $A$ in both periods and take their difference. Then obtain the mean outcome of group $B$ in both periods and take their difference. Then take the difference in the differences and that's the treatment effect. However, it is more convenient to do this in a regression framework because this allows you to control for covariates to obtain standard errors for the treatment effect to see if it is significant To do this, you can follow either of two equivalent strategies. Generate a control group dummy $\text{treat}_i$ which is equal to 1 if a person is in group $A$ and 0 otherwise, generate a time dummy $\text{time}_t$ which is equal to 1 if $t=2$ and 0 otherwise, and then regress $$Y_{it} = \beta_1 + \beta_2 (\text{treat}_i) + \beta_3 (\text{time}_t) + \rho (\text{treat}_i \cdot \text{time}_t) + \epsilon_{it}$$ Or you simply generate a dummy $T_{it}$ which equals one if a person is in the treatment group AND the time period is the post-treatment period and is zero otherwise. Then you would regress $$Y_{it} = \beta_1 \gamma_s + \beta_2 \lambda_t + \rho T_{it} + \epsilon_{it}$$ where $\gamma_s$ is again a dummy for the control group and $\lambda_t$ are time dummies. The two regressions give you the same results for two periods and two groups. The second equation is more general though as it easily extends to multiple groups and time periods. In either case, this is how you can estimate the difference in differences parameter in a way such that you can include control variables (I left those out from the above equations to not clutter them up but you can simply include them) and obtain standard errors for inference. Why is the difference in differences estimator useful? 
As stated before, DiD is a method to estimate treatment effects with non-experimental data. That's the most useful feature. DiD is also a version of fixed effects estimation. Whereas the fixed effects model assumes $E(Y_{0it}|i,t) = \alpha_i + \lambda_t$, DiD makes a similar assumption but at the group level, $E(Y_{0it}|s,t) = \gamma_s + \lambda_t$. So the expected value of the outcome here is the sum of a group and a time effect. So what's the difference? For DiD you don't necessarily need panel data as long as your repeated cross sections are drawn from the same aggregate unit $s$. This makes DiD applicable to a wider array of data than the standard fixed effects models that require panel data. Can we trust difference in differences? The most important assumption in DiD is the parallel trends assumption (see the figure above). Never trust a study that does not graphically show these trends! Papers in the 1990s might have gotten away with this but nowadays our understanding of DiD is much better. If there is no convincing graph that shows the parallel trends in the pre-treatment outcomes for the treatment and control groups, be cautious. If the parallel trends assumption holds and we can credibly rule out any other time-variant changes that may confound the treatment, then DiD is a trustworthy method. Another word of caution should be applied when it comes to the treatment of standard errors. With many years of data you need to adjust the standard errors for autocorrelation. In the past, this has been neglected but since Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" we know that this is an issue. In the paper they provide several remedies for dealing with autocorrelation. The easiest is to cluster on the individual panel identifier which allows for arbitrary correlation of the residuals among individual time series. This corrects for both autocorrelation and heteroscedasticity. For further references see these lecture notes by Waldinger and Pischke.
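To make the regression formulation concrete, here is a hedged R sketch on simulated two-period panel data; the interaction coefficient is the DiD estimate, and the clustered standard errors follow the Bertrand et al. advice. The sandwich/lmtest packages and the built-in effect of 1.5 are assumptions of this example.

library(sandwich)   # clustered variance estimator
library(lmtest)     # coeftest

set.seed(1)
d <- expand.grid(id = 1:100, time = 0:1)
d$treat <- as.integer(d$id <= 50)                  # first 50 ids form the treated group
d$y <- 2 + 1 * d$treat + 0.5 * d$time +
       1.5 * d$treat * d$time + rnorm(nrow(d))     # true DiD effect = 1.5

m <- lm(y ~ treat * time, data = d)                # treat:time coefficient is the DiD estimate
coeftest(m, vcov = vcovCL(m, cluster = ~ id))      # cluster on the panel identifier

With more groups and periods, the same lm() call extends naturally by replacing treat and time with group and period dummies, as in the second equation above.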
What is difference-in-differences?
What is a difference in differences estimator Difference in differences (DiD) is a tool to estimate treatment effects comparing the pre- and post-treatment differences in the outcome of a treatment an
What is difference-in-differences? What is a difference in differences estimator Difference in differences (DiD) is a tool to estimate treatment effects comparing the pre- and post-treatment differences in the outcome of a treatment and a control group. In general, we are interested in estimating the effect of a treatment $D_i$ (e.g. union status, medication, etc.) on an outcome $Y_i$ (e.g. wages, health, etc.) as in $$Y_{it} = \alpha_i + \lambda_t + \rho D_{it} + X'_{it}\beta + \epsilon_{it}$$ where $\alpha_i$ are individual fixed effects (characteristics of individuals that do not change over time), $\lambda_t$ are time fixed effects, $X_{it}$ are time-varying covariates like individuals' age, and $\epsilon_{it}$ is an error term. Individuals and time are indexed by $i$ and $t$, respectively. If there is a correlation between the fixed effects and $D_{it}$ then estimating this regression via OLS will be biased given that the fixed effects are not controlled for. This is the typical omitted variable bias. To see the effect of a treatment we would like to know the difference between a person in a world in which she received the treatment and one in which she does not. Of course, only one of these is ever observable in practice. Therefore we look for people with the same pre-treatment trends in the outcome. Suppose we have two periods $t = 1, 2$ and two groups $s = A,B$. Then, under the assumption that the trends in the treatment and control groups would have continued the same way as before in the absence of treatment, we can estimate the treatment effect as $$\rho = (E[Y_{ist}|s=A,t=2] - E[Y_{ist}|s=A,t=1]) - (E[Y_{ist}|s=B,t=2] - E[Y_{ist}|s=B,t=1])$$ Graphically this would look something like this: You can simply calculate these means by hand, i.e. obtain the mean outcome of group $A$ in both periods and take their difference. Then obtain the mean outcome of group $B$ in both periods and take their difference. Then take the difference in the differences and that's the treatment effect. However, it is more convenient to do this in a regression framework because this allows you to control for covariates to obtain standard errors for the treatment effect to see if it is significant To do this, you can follow either of two equivalent strategies. Generate a control group dummy $\text{treat}_i$ which is equal to 1 if a person is in group $A$ and 0 otherwise, generate a time dummy $\text{time}_t$ which is equal to 1 if $t=2$ and 0 otherwise, and then regress $$Y_{it} = \beta_1 + \beta_2 (\text{treat}_i) + \beta_3 (\text{time}_t) + \rho (\text{treat}_i \cdot \text{time}_t) + \epsilon_{it}$$ Or you simply generate a dummy $T_{it}$ which equals one if a person is in the treatment group AND the time period is the post-treatment period and is zero otherwise. Then you would regress $$Y_{it} = \beta_1 \gamma_s + \beta_2 \lambda_t + \rho T_{it} + \epsilon_{it}$$ where $\gamma_s$ is again a dummy for the control group and $\lambda_t$ are time dummies. The two regressions give you the same results for two periods and two groups. The second equation is more general though as it easily extends to multiple groups and time periods. In either case, this is how you can estimate the difference in differences parameter in a way such that you can include control variables (I left those out from the above equations to not clutter them up but you can simply include them) and obtain standard errors for inference. Why is the difference in differences estimator useful? 
As stated before, DiD is a method to estimate treatment effects with non-experimental data. That's the most useful feature. DiD is also a version of fixed effects estimation. Whereas the fixed effects model assumes $E(Y_{0it}|i,t) = \alpha_i + \lambda_t$, DiD makes a similar assumption but at the group level, $E(Y_{0it}|s,t) = \gamma_s + \lambda_t$. So the expected value of the outcome here is the sum of a group and a time effect. So what's the difference? For DiD you don't necessarily need panel data as long as your repeated cross sections are drawn from the same aggregate unit $s$. This makes DiD applicable to a wider array of data than the standard fixed effects models that require panel data. Can we trust difference in differences? The most important assumption in DiD is the parallel trends assumption (see the figure above). Never trust a study that does not graphically show these trends! Papers in the 1990s might have gotten away with this but nowadays our understanding of DiD is much better. If there is no convincing graph that shows the parallel trends in the pre-treatment outcomes for the treatment and control groups, be cautious. If the parallel trends assumption holds and we can credibly rule out any other time-variant changes that may confound the treatment, then DiD is a trustworthy method. Another word of caution should be applied when it comes to the treatment of standard errors. With many years of data you need to adjust the standard errors for autocorrelation. In the past, this has been neglected but since Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" we know that this is an issue. In the paper they provide several remedies for dealing with autocorrelation. The easiest is to cluster on the individual panel identifier which allows for arbitrary correlation of the residuals among individual time series. This corrects for both autocorrelation and heteroscedasticity. For further references see these lecture notes by Waldinger and Pischke.
What is difference-in-differences? What is a difference in differences estimator Difference in differences (DiD) is a tool to estimate treatment effects comparing the pre- and post-treatment differences in the outcome of a treatment an
4,834
What is difference-in-differences?
Wikipedia has a decent entry on this subject, but why not just use linear regression allowing for interactions between your independent variables of interest? This seems more interpretable to me. Then you might read up on analysis of simple slopes (in the Cohen et al book free on Google Books) if your variables of interest are quantitative.
What is difference-in-differences?
Wikipedia has a decent entry on this subject, but why not just use linear regression allowing for interactions between your independent variables of interest? This seems more interpretable to me. Then
What is difference-in-differences? Wikipedia has a decent entry on this subject, but why not just use linear regression allowing for interactions between your independent variables of interest? This seems more interpretable to me. Then you might read up on analysis of simple slopes (in the Cohen et al book free on Google Books) if your variables of interest are quantitative.
What is difference-in-differences? Wikipedia has a decent entry on this subject, but why not just use linear regression allowing for interactions between your independent variables of interest? This seems more interpretable to me. Then
4,835
What is difference-in-differences?
It is a technique widely used in econometrics to examine the influence of any exogenous event in a time series. You pick two separate groups of data relating to before and after the event studied. A good reference to learn more is the book Introduction to Econometrics by Wooldridge.
What is difference-in-differences?
It is a technique widely used in econometrics to examine the influence of any exogenous event in a time series. You pick two separate groups of data relating to before and after the event studied. A
What is difference-in-differences? It is a technique widely used in econometrics to examine the influence of any exogenous event in a time series. You pick two separate groups of data relating to before and after the event studied. A good reference to learn more is the book Introduction to Econometrics by Wooldridge.
What is difference-in-differences? It is a technique widely used in econometrics to examine the influence of any exogenous event in a time series. You pick two separate groups of data relating to before and after the event studied. A
4,836
What is difference-in-differences?
Careful: Two additional points are worth noting. First, 80 of the original 92 DD papers have a potential problem with grouped error terms as the unit of observation is more detailed than the level of variation (a point discussed by Donald and Lang [2001]). Only 36 of these papers address this problem, either by clustering standard errors or by aggregating the data. Second, several techniques are used (more or less informally) for dealing with the possible endogeneity of the intervention variable. For example, three papers include a lagged dependent variable in equation (1), seven include a time trend specific to the treated states, fifteen plot some graphs to examine the dynamics of the treatment effect, three examine whether there is an "effect" before the law, two test whether the effect is persistent, and eleven formally attempt to do triple-differences (DDD) by finding another control group. In Bertrand, Duflo, and Mullainathan [2002] we show that most of these techniques do not alleviate the serial correlation issues. (Bertrand, Duflo, and Mullainathan 2004, 253)
What is difference-in-differences?
Careful: Two additional points are worth noting. First, 80 of the original 92 DD papers have a potential problem with grouped error terms as the unit of observation is more detailed than the level of
What is difference-in-differences? Careful: Two additional points are worth noting. First, 80 of the original 92 DD papers have a potential problem with grouped error terms as the unit of observation is more detailed than the level of variation (a point discussed by Donald and Lang [2001]). Only 36 of these papers address this problem, either by clustering standard errors or by aggregating the data. Second, several techniques are used (more or less informally) for dealing with the possible endogeneity of the intervention variable. For example, three papers include a lagged dependent variable in equation (1), seven include a time trend specific to the treated states, fifteen plot some graphs to examine the dynamics of the treatment effect, three examine whether there is an "effect" before the law, two test whether the effect is persistent, and eleven formally attempt to do triple-differences (DDD) by finding another control group. In Bertrand, Duflo, and Mullainathan [2002] we show that most of these techniques do not alleviate the serial correlation issues. (Bertrand, Duflo, and Mullainathan 2004, 253)
What is difference-in-differences? Careful: Two additional points are worth noting. First, 80 of the original 92 DD papers have a potential problem with grouped error terms as the unit of observation is more detailed than the level of
4,837
Bayesian equivalent of two sample t-test?
This is a good question that seems to pop up a lot: link 1, link 2. The paper Bayesian Estimation Supersedes the T-Test that Cam.Davidson.Pilon pointed out is an excellent resource on this subject. It is also very recent, published in 2012, which I think is in part due to the current interest in the area. I will try to summarize a mathematical explanation of a Bayesian alternative to the two sample t-test. This summary is similar to the BEST paper, which assesses the difference in two samples by comparing the difference in their posterior distributions (explained below in R). set.seed(7) #create samples sample.1 <- rnorm(8, 100, 3) sample.2 <- rnorm(10, 103, 7) #we need a pooled data set for estimating parameters in the prior. pooled <- c(sample.1, sample.2) par(mfrow=c(1, 2)) hist(sample.1) hist(sample.2) In order to compare the sample means we need to estimate what they are. The Bayesian method to do so uses Bayes' theorem: P(A|B) = P(B|A) * P(A)/P(B) (the syntax P(A|B) is read as the probability of A given B). Thanks to modern numerical methods we can ignore the probability of B, P(B), and use the proportional statement: P(A|B) $\propto$ P(B|A)*P(A). In Bayesian vernacular, the posterior is proportional to the likelihood times the prior. Applying Bayes' theorem to our problem, where we want to know the means of the samples given some data, we get $P(mean.1 | sample.1)$ $\propto$ $P(sample.1 | mean.1) * P(mean.1)$. The first term on the right is the likelihood, $P(sample.1 | mean.1)$, which is the probability of observing the sample data given mean.1. The second term is the prior, $P(mean.1)$, which is simply the probability of mean.1. Figuring out appropriate priors is still a bit of an art and is one of the biggest criticisms of Bayesian methods. Let's put it in code. Code makes everything better. likelihood <- function(parameters){ mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4] prod(dnorm(sample.1, mu1, sig1)) * prod(dnorm(sample.2, mu2, sig2)) } prior <- function(parameters){ mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4] dnorm(mu1, mean(pooled), 1000*sd(pooled)) * dnorm(mu2, mean(pooled), 1000*sd(pooled)) * dexp(sig1, rate=0.1) * dexp(sig2, 0.1) } I made some assumptions in the prior that need to be justified. To keep the priors from prejudicing the estimated means, I wanted to make them broad and uniform-ish over plausible values, with the aim of letting the data produce the features of the posterior. I used the recommended settings from BEST and distributed the mu's normally with mean = mean(pooled) and a broad standard deviation = 1000*sd(pooled). The standard deviations I set to a broad exponential distribution, because I wanted a broad unbounded distribution. Now we can make the posterior: posterior <- function(parameters) {likelihood(parameters) * prior(parameters)} We will sample the posterior distribution using Markov chain Monte Carlo (MCMC) with the Metropolis-Hastings modification. It's easiest to understand with code.
#starting values
mu1 = 100; sig1 = 10; mu2 = 100; sig2 = 10
parameters <- c(mu1, sig1, mu2, sig2)

#this is the MCMC w/ Metropolis method
n.iter <- 10000
results <- matrix(0, nrow=n.iter, ncol=4)
results[1, ] <- parameters
for (iteration in 2:n.iter){
  candidate <- parameters + rnorm(4, sd=0.5)
  ratio <- posterior(candidate)/posterior(parameters)
  if (runif(1) < ratio) parameters <- candidate #Metropolis modification
  results[iteration, ] <- parameters
}

The results matrix is a set of samples from the posterior distribution for each parameter, which we can use to answer our original question: is sample.1 different than sample.2? But first, to avoid effects from the starting values, we will "burn-in" the first 500 values of the chain.

#burn-in
results <- results[500:n.iter,]

Now, is sample.1 different than sample.2?

mu1 <- results[,1]
mu2 <- results[,3]

hist(mu1 - mu2)

mean(mu1 - mu2 < 0)
[1] 0.9953689

From this analysis I would conclude there is a 99.5% chance that the mean for sample.1 is less than the mean for sample.2. An advantage of the Bayesian approach, as pointed out in the BEST paper, is that it can directly address specific questions of interest, e.g., what is the probability that sample.2 is 5 units bigger than sample.1?

mean(mu2 - mu1 > 5)
[1] 0.9321124

We would conclude that there is a 93% chance that the mean of sample.2 is 5 units greater than the mean of sample.1. An observant reader would find that interesting because we know the true populations have means of 100 and 103 respectively. This is most likely due to the small sample size and the choice of a normal distribution for the likelihood. I will end this answer with a warning: this code is for teaching purposes. For a real analysis use RJAGS and, depending on your sample size, fit a t-distribution for the likelihood. If there is interest I will post a t-test using RJAGS. EDIT: As requested, here is a JAGS model.

library(rjags)  # the calls below (jags.model, coda.samples) come from the rjags package

model.str <- 'model {
  for (i in 1:Ntotal) {
    y[i] ~ dt(mu[x[i]], tau[x[i]], nu)
  }
  for (j in 1:2) {
    mu[j] ~ dnorm(mu_pooled, tau_pooled)
    tau[j] <- 1 / pow(sigma[j], 2)
    sigma[j] ~ dunif(sigma_low, sigma_high)
  }
  nu <- nu_minus_one + 1
  nu_minus_one ~ dexp(1 / 29)
}'

# Indicator variable
x <- c(rep(1, length(sample.1)), rep(2, length(sample.2)))

cpd.model <- jags.model(textConnection(model.str),
                        data=list(y=pooled,
                                  x=x,
                                  mu_pooled=mean(pooled),
                                  tau_pooled=1/(1000 * sd(pooled))^2,
                                  sigma_low=sd(pooled) / 1000,
                                  sigma_high=sd(pooled) * 1000,
                                  Ntotal=length(pooled)))
update(cpd.model, 1000)
chain <- coda.samples(model = cpd.model, n.iter = 100000, variable.names = c('mu', 'sigma'))
rchain <- as.matrix(chain)
hist(rchain[, 'mu[1]'] - rchain[, 'mu[2]'])
mean(rchain[, 'mu[1]'] - rchain[, 'mu[2]'] < 0)
mean(rchain[, 'mu[2]'] - rchain[, 'mu[1]'] > 5)
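For concreteness, here are a few lines showing how the posterior draws above can be summarized further; this is a sketch that assumes the post-burn-in results matrix from the Metropolis code above, with columns (mu1, sig1, mu2, sig2), and the "practical equivalence" threshold of 1 is just an illustrative choice, not something from the original answer.

# summarize the posterior of the difference in means from the draws above
mu.diff <- results[, 3] - results[, 1]      # draws of mu2 - mu1
quantile(mu.diff, c(0.025, 0.5, 0.975))     # posterior median and 95% credible interval
mean(abs(mu.diff) < 1)                      # P(|difference| < 1): a crude practical-equivalence check
plot(results[, 1], type = "l",              # trace plot to eyeball mixing of the chain
     xlab = "iteration", ylab = "mu1")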
Bayesian equivalent of two sample t-test?
This is a good question, that seems to pop up a lot: link 1, link 2. The paper Bayesian Estimation Superseeds the T-Test that Cam.Davidson.Pilon pointed out is an excellent resource on this subject. I
Bayesian equivalent of two sample t-test? This is a good question, that seems to pop up a lot: link 1, link 2. The paper Bayesian Estimation Superseeds the T-Test that Cam.Davidson.Pilon pointed out is an excellent resource on this subject. It is also very recent, published in 2012, which I think in part is due to the current interest in the area. I will try to summarize a mathematical explanation of a Bayesian alternative to the two sample t-test. This summary is similar to the BEST paper which assess the difference in two samples by comparing the difference in their posterior distributions (explained below in R). set.seed(7) #create samples sample.1 <- rnorm(8, 100, 3) sample.2 <- rnorm(10, 103, 7) #we need a pooled data set for estimating parameters in the prior. pooled <- c(sample.1, sample.2) par(mfrow=c(1, 2)) hist(sample.1) hist(sample.2) In order to compare the sample means we need to estimate what they are. The Bayesian method to do so uses Bayes' theorem: P(A|B) = P(B|A) * P(A)/P(B) (the syntax of P(A|B) is read as the probability of A given B) Thanks to modern numerical methods we can ignore the probability of B, P(B), and use the proportional statment: P(A|B) $\propto$ P(B|A)*P(A) In Bayesian vernacular the posterior is proportional to the likelihood times the prior Applying Bayes' theory to our problem where we want to know the means of samples given some data we get $P(mean.1 | sample.1)$ $\propto$ $P(sample.1 | mean.1) * P(mean.1)$. The first term on the right is the likelihood, $P(sample.1 | mean.1)$, which is the probability of observing the sample data given mean.1. The second term is the prior, $P(mean.1)$, which is simply the probability of mean.1. Figuring out appropriate priors is still a bit of an art and is one of the biggest critisims of Bayesian methods. Let's put it in code. Code makes everything better. likelihood <- function(parameters){ mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4] prod(dnorm(sample.1, mu1, sig1)) * prod(dnorm(sample.2, mu2, sig2)) } prior <- function(parameters){ mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4] dnorm(mu1, mean(pooled), 1000*sd(pooled)) * dnorm(mu2, mean(pooled), 1000*sd(pooled)) * dexp(sig1, rate=0.1) * dexp(sig2, 0.1) } I made some assumptions in the prior that need to be justified. To keep the priors from prejudicing the estimated mean I wanted to make them broad and uniform-ish over plausible values with the aim of letting the data produce the features of the posterior. I used recommended setting from BEST and distributed the mu's normally with mean = mean(pooled) and a broad standard deviation = 1000*sd(pooled). The standard deviations I set to a broad exponential distribution, because I wanted a broad unbounded distribution. Now we can make the posterior posterior <- function(parameters) {likelihood(parameters) * prior(parameters)} We will sample the posterior distribution using a markov chain monte carlo (MCMC) with Metropolis Hastings modification. Its easiest to understand with code. 
#starting values mu1 = 100; sig1 = 10; mu2 = 100; sig2 = 10 parameters <- c(mu1, sig1, mu2, sig2) #this is the MCMC /w Metropolis method n.iter <- 10000 results <- matrix(0, nrow=n.iter, ncol=4) results[1, ] <- parameters for (iteration in 2:n.iter){ candidate <- parameters + rnorm(4, sd=0.5) ratio <- posterior(candidate)/posterior(parameters) if (runif(1) < ratio) parameters <- candidate #Metropolis modification results[iteration, ] <- parameters } The results matrix is a list of samples from the posterior distribution for each parameter which we can use to answer our original question: Is sample.1 different than sample.2? But first to avoid affects from the starting values we will "burn-in" the first 500 values of the chain. #burn-in results <- results[500:n.iter,] Now, is sample.1 different than sample.2? mu1 <- results[,1] mu2 <- results[,3] hist(mu1 - mu2) mean(mu1 - mu2 < 0) [1] 0.9953689 From this analysis I would conclude there is a 99.5% chance that the mean for sample.1 is less than the mean for sample.2. An advantage of the Bayesian approach, as pointed out in the BEST paper, is that it can make strong theories. E.G. what is the probability that sample.2 is 5 units bigger than sample.1. mean(mu2 - mu1 > 5) [1] 0.9321124 We would conclude that there is a 93% chance that the mean of sample.2 is 5 unit greater than sample.1. An observant reader would find that interesting because we know the true populations have means of 100 and 103 respectively. This is most likely due to the small sample size, and choice of using a normal distribution for the likelihood. I will end this answer with a warning: This code is for teaching purposes. For a real analysis use RJAGS and depending on your sample size fit a t-distribution for the likelihood. If there is interest I will post a t-test using RJAGS. EDIT: As requested here is a JAGS model. model.str <- 'model { for (i in 1:Ntotal) { y[i] ~ dt(mu[x[i]], tau[x[i]], nu) } for (j in 1:2) { mu[j] ~ dnorm(mu_pooled, tau_pooled) tau[j] <- 1 / pow(sigma[j], 2) sigma[j] ~ dunif(sigma_low, sigma_high) } nu <- nu_minus_one + 1 nu_minus_one ~ dexp(1 / 29) }' # Indicator variable x <- c(rep(1, length(sample.1)), rep(2, length(sample.2))) cpd.model <- jags.model(textConnection(model.str), data=list(y=pooled, x=x, mu_pooled=mean(pooled), tau_pooled=1/(1000 * sd(pooled))^2, sigma_low=sd(pooled) / 1000, sigma_high=sd(pooled) * 1000, Ntotal=length(pooled))) update(cpd.model, 1000) chain <- coda.samples(model = cpd.model, n.iter = 100000, variable.names = c('mu', 'sigma')) rchain <- as.matrix(chain) hist(rchain[, 'mu[1]'] - rchain[, 'mu[2]']) mean(rchain[, 'mu[1]'] - rchain[, 'mu[2]'] < 0) mean(rchain[, 'mu[2]'] - rchain[, 'mu[1]'] > 5)
Bayesian equivalent of two sample t-test? This is a good question, that seems to pop up a lot: link 1, link 2. The paper Bayesian Estimation Superseeds the T-Test that Cam.Davidson.Pilon pointed out is an excellent resource on this subject. I
4,838
Bayesian equivalent of two sample t-test?
The excellent answer by user1068430 implemented in Python

import numpy as np
import matplotlib.pyplot as plt


def dnorm(x, mu, sig):
    # normal density
    return 1 / (sig * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sig**2))


def dexp(x, l):
    # exponential density with rate l
    return l * np.exp(-l * x)


def like(parameters):
    [mu1, sig1, mu2, sig2] = parameters
    return dnorm(sample1, mu1, sig1).prod() * dnorm(sample2, mu2, sig2).prod()


def prior(parameters):
    [mu1, sig1, mu2, sig2] = parameters
    return (dnorm(mu1, pooled.mean(), 1000 * pooled.std())
            * dnorm(mu2, pooled.mean(), 1000 * pooled.std())
            * dexp(sig1, 0.1) * dexp(sig2, 0.1))


def posterior(parameters):
    [mu1, sig1, mu2, sig2] = parameters
    return like([mu1, sig1, mu2, sig2]) * prior([mu1, sig1, mu2, sig2])


# create samples
sample1 = np.random.normal(100, 3, 8)
sample2 = np.random.normal(100, 7, 10)
pooled = np.append(sample1, sample2)

plt.figure(0)
plt.hist(sample1)
plt.hist(sample2)       # plt.hold() has been removed from matplotlib; overplotting is the default
plt.show(block=False)

# starting values
mu1 = 100
sig1 = 10
mu2 = 100
sig2 = 10

parameters = np.array([mu1, sig1, mu2, sig2])

# Metropolis MCMC
niter = 10000
results = np.zeros([niter, 4])
results[0, :] = parameters                  # arrays are 0-indexed in Python
for iteration in np.arange(1, niter):
    candidate = parameters + np.random.normal(0, 0.5, 4)
    ratio = posterior(candidate) / posterior(parameters)
    if np.random.uniform() < ratio:
        parameters = candidate
    results[iteration, :] = parameters

# burn-in
results = results[500:, :]

# columns are (mu1, sig1, mu2, sig2), so the means are columns 0 and 2
mu1 = results[:, 0]
mu2 = results[:, 2]

d = (mu1 - mu2)
p_value = np.mean(d > 0)    # posterior probability that mu1 > mu2

plt.figure(1)
plt.hist(d, density=True)   # 'normed' was replaced by 'density' in matplotlib
plt.show()
Bayesian equivalent of two sample t-test?
The excellent answer by user1068430 implemented in Python import numpy as np from pylab import plt def dnorm(x, mu, sig): return 1/(sig * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sig**2))
Bayesian equivalent of two sample t-test? The excellent answer by user1068430 implemented in Python import numpy as np from pylab import plt def dnorm(x, mu, sig): return 1/(sig * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sig**2)) def dexp(x, l): return l * np.exp(- l*x) def like(parameters): [mu1, sig1, mu2, sig2] = parameters return dnorm(sample1, mu1, sig1).prod()*dnorm(sample2, mu2, sig2).prod() def prior(parameters): [mu1, sig1, mu2, sig2] = parameters return dnorm(mu1, pooled.mean(), 1000*pooled.std()) * dnorm(mu2, pooled.mean(), 1000*pooled.std()) * dexp(sig1, 0.1) * dexp(sig2, 0.1) def posterior(parameters): [mu1, sig1, mu2, sig2] = parameters return like([mu1, sig1, mu2, sig2])*prior([mu1, sig1, mu2, sig2]) #create samples sample1 = np.random.normal(100, 3, 8) sample2 = np.random.normal(100, 7, 10) pooled= np.append(sample1, sample2) plt.figure(0) plt.hist(sample1) plt.hold(True) plt.hist(sample2) plt.show(block=False) mu1 = 100 sig1 = 10 mu2 = 100 sig2 = 10 parameters = np.array([mu1, sig1, mu2, sig2]) niter = 10000 results = np.zeros([niter, 4]) results[1,:] = parameters for iteration in np.arange(2,niter): candidate = parameters + np.random.normal(0,0.5,4) ratio = posterior(candidate)/posterior(parameters) if np.random.uniform() < ratio: parameters = candidate results[iteration,:] = parameters #burn-in results = results[499:niter-1,:] mu1 = results[:,1] mu2 = results[:,3] d = (mu1 - mu2) p_value = np.mean(d > 0) plt.figure(1) plt.hist(d,normed = 1) plt.show() ```
Bayesian equivalent of two sample t-test? The excellent answer by user1068430 implemented in Python import numpy as np from pylab import plt def dnorm(x, mu, sig): return 1/(sig * np.sqrt(2 * np.pi)) * np.exp(-(x - mu)**2 / (2 * sig**2))
4,839
Bayesian equivalent of two sample t-test?
With a Bayesian analysis you have more things to specify (that is actually a good thing, since it gives much more flexibility and the ability to model what you believe the truth to be). Are you assuming normals for the likelihoods? Will the 2 groups have the same variance? One straightforward approach is to model the 2 means (and 1 or 2 variances/dispersions), then look at the posterior on the difference of the 2 means and/or the credible interval on the difference of the 2 means.
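For concreteness, here is a minimal sketch in R of the approach just described, under the simplifying assumptions of normal likelihoods with known variances and a vague conjugate normal prior on each mean; the data, variances, and prior settings below are purely illustrative and not from the answer above.

set.seed(1)
y1 <- rnorm(8, 100, 3)    # illustrative group 1 data
y2 <- rnorm(10, 103, 7)   # illustrative group 2 data

posterior_mean_draws <- function(y, sigma, mu0 = 0, tau0 = 1e3, n.draws = 1e5) {
  # conjugate normal-normal update for the group mean (sigma assumed known)
  n <- length(y)
  post.var  <- 1 / (1/tau0^2 + n/sigma^2)
  post.mean <- post.var * (mu0/tau0^2 + sum(y)/sigma^2)
  rnorm(n.draws, post.mean, sqrt(post.var))
}

mu1.draws  <- posterior_mean_draws(y1, sigma = 3)
mu2.draws  <- posterior_mean_draws(y2, sigma = 7)
diff.draws <- mu2.draws - mu1.draws

quantile(diff.draws, c(0.025, 0.975))  # 95% credible interval for the difference in means
mean(diff.draws > 0)                   # posterior probability that mu2 > mu1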
Bayesian equivalent of two sample t-test?
With a Bayesian analysis you have more things to specify (that is actually a good thing, since it gives much more flexibility and ability to model what you believe the truth to be). Are you assuming
Bayesian equivalent of two sample t-test? With a Bayesian analysis you have more things to specify (that is actually a good thing, since it gives much more flexibility and ability to model what you believe the truth to be). Are you assuming normals for the likelihoods? Will the 2 groups have the same variance? One straight forward approach is to model the 2 means (and 1 or 2 variances/dispersions) then look at the posterior on the difference of the 2 means and/or the Credible Interval on the difference of the 2 means.
Bayesian equivalent of two sample t-test? With a Bayesian analysis you have more things to specify (that is actually a good thing, since it gives much more flexibility and ability to model what you believe the truth to be). Are you assuming
4,840
Bayesian equivalent of two sample t-test?
a mathematical explanation of what are some Bayesian methods I can use to test the difference between the mean of two samples. There are several approaches to "testing" this. I'll mention a couple: If you want an explicit decision you could look at decision theory. A pretty simple thing that's sometimes done is to find an interval for the difference in the means and consider whether it includes 0 or not. That would involve starting with a model for the observations, priors on the parameters and computation of the posterior distribution of the difference in means conditional on the data. You'd need to say what your model is (e.g. normal, constant variance), and then (at least) some prior for the difference in means and a prior for the variance. You might have priors on the parameters of those priors in turn. Or you might not assume constant variance. Or you might assume something other than normality.
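As a minimal illustration of the interval-based decision described above, assuming you already have posterior draws of the difference in means from whatever model and priors you settled on (the draws below are simulated placeholders, not output from any real model):

delta.draws <- rnorm(1e5, 2, 1.2)               # placeholder posterior draws of mu2 - mu1

ci <- quantile(delta.draws, c(0.025, 0.975))    # 95% credible interval for the difference
includes.zero <- ci[1] < 0 && ci[2] > 0         # the "does the interval contain 0?" decision
prob.positive <- mean(delta.draws > 0)          # posterior probability of a positive difference
c(ci, includes.zero = includes.zero, prob.positive = prob.positive)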
Bayesian equivalent of two sample t-test?
a mathematical explanation of what are some Bayesian methods I can use to test the difference between the mean of two samples. There are several approaches to "testing" this. I'll mention a couple:
Bayesian equivalent of two sample t-test? a mathematical explanation of what are some Bayesian methods I can use to test the difference between the mean of two samples. There are several approaches to "testing" this. I'll mention a couple: If you want an explicit decision you could look at decision theory. A pretty simple thing that's sometimes done is to find an interval for the difference in the means and consider whether it includes 0 or not. That would involve starting with a model for the observations, priors on the parameters and computation of the posterior distribution of the difference in means conditional on the data. You'd need to say what your model is (e.g. normal, constant variance), and then (at least) some prior for the difference in means and a prior for the variance. You might have priors on the parameters of those priors in turn. Or you might not assume constant variance. Or you might assume something other than normality.
Bayesian equivalent of two sample t-test? a mathematical explanation of what are some Bayesian methods I can use to test the difference between the mean of two samples. There are several approaches to "testing" this. I'll mention a couple:
4,841
Bayesian equivalent of two sample t-test?
First you should consider whether you would like to do this at all. A good start would be to read this even more recent (2022) paper by Costello and Watts to learn about the various theoretical pitfalls associated with hypothesis testing in a Bayesian framework. One of the most serious problems they see with the usual t-test null hypothesis of $\mathcal{H}_0: \Delta\mu = 0$ is that this is a "point-form null hypothesis". As $\Delta\mu$ is a real number, and its distribution is continuous, the probability $\mathrm{Pr}(\Delta\mu = 0)$ will unfortunately be exactly 0. Now, "if the null hypothesis is always false, what’s the big deal about rejecting it?" -- they ask in the paper, quoting Cohen (2016). After showing that the currently used "Bayesian t-test" is equivalent to its "classical"/frequentist counterpart (thus offering no obvious advantages), Costello and Watts also propose a different testing procedure which is supposed to be more informative. See the paper for details. Disclaimer: I haven't done such an analysis yet (just reading myself into the subject). I would be rather curious to know if there's someone who has some practical experience with what Costello and Watts propose.
Bayesian equivalent of two sample t-test?
First you should consider whether you would like to do this at all. A good start would be to read this even more recent (2022) paper by Costello and Watts to learn about the various theoretical pitfal
Bayesian equivalent of two sample t-test? First you should consider whether you would like to do this at all. A good start would be to read this even more recent (2022) paper by Costello and Watts to learn about the various theoretical pitfalls associated with hypothesis testing in a Bayesian framework. One of the most serious problems they see with the usual t-test null hypothesis of $\mathcal{H}_0: \Delta\mu = 0$ is that this is a "point-form null hypothesis". As $\Delta\mu$ is a real number, and its distribution is continuous, the probability $\mathrm{Pr}(\Delta\mu = 0)$ will unfortunately be exactly 0. Now, "if the null hypothesis is always false, what’s the big deal about rejecting it?" -- they ask in the paper, quoting Cohen (2016). After showing that the currently used "Bayesian t-test" is equivalent to its "classical"/frequentist counterpart (thus offering no obvious advantages), Costello and Watts also propose a different testing procedure which is supposed to be more informative. See the paper for details. Disclaimer: I haven't done such an analysis yet (just reading myself into the subject). I would be rather curious to know if there's someone who has some practical experience with what Costello and Watts propose.
Bayesian equivalent of two sample t-test? First you should consider whether you would like to do this at all. A good start would be to read this even more recent (2022) paper by Costello and Watts to learn about the various theoretical pitfal
4,842
Why is Entropy maximised when the probability distribution is uniform?
Heuristically, the probability density function on $\{x_1, x_2, \ldots, x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2, \ldots, x_n\}$, in other words the Uniform distribution.

Now, for a more formal proof consider the following: A probability density function on $\{x_1, x_2, \ldots, x_n\}$ is a set of nonnegative real numbers $p_1,...,p_n$ that add up to 1. Entropy is a continuous function of the $n$-tuples $(p_1,...,p_n)$, and these points lie in a compact subset of $\mathbb{R}^n$, so there is an $n$-tuple where entropy is maximized. We want to show this occurs at $(1/n,...,1/n)$ and nowhere else. Suppose the $p_j$ are not all equal, say $p_1 < p_2$. (Clearly $n\neq 1$.) We will find a new probability density with higher entropy. It then follows, since entropy is maximized at some $n$-tuple, that entropy is uniquely maximized at the $n$-tuple with $p_i = 1/n$ for all $i$.

Since $p_1 < p_2$, for small positive $\varepsilon$ we have $p_1 + \varepsilon < p_2 -\varepsilon$. The entropy of $\{p_1 + \varepsilon, p_2 -\varepsilon,p_3,...,p_n\}$ minus the entropy of $\{p_1,p_2,p_3,...,p_n\}$ equals $$-p_1\log\left(\frac{p_1+\varepsilon}{p_1}\right)-\varepsilon\log(p_1+\varepsilon)-p_2\log\left(\frac{p_2-\varepsilon}{p_2}\right)+\varepsilon\log(p_2-\varepsilon)$$ To complete the proof, we want to show this is positive for small enough $\varepsilon$. Rewrite the above equation as $$-p_1\log\left(1+\frac{\varepsilon}{p_1}\right)-\varepsilon\left(\log p_1+\log\left(1+\frac{\varepsilon}{p_1}\right)\right)-p_2\log\left(1-\frac{\varepsilon}{p_2}\right)+\varepsilon\left(\log p_2+\log\left(1-\frac{\varepsilon}{p_2}\right)\right)$$ Recalling that $\log(1 + x) = x + O(x^2)$ for small $x$, the above equation is $$-\varepsilon-\varepsilon\log p_1 + \varepsilon + \varepsilon \log p_2 + O(\varepsilon^2) = \varepsilon\log(p_2/p_1) + O(\varepsilon^2)$$ which is positive when $\varepsilon$ is small enough since $p_1 < p_2$.

A less rigorous proof is the following: Consider first the following Lemma: Let $p(x)$ and $q(x)$ be continuous probability density functions on an interval $I$ in the real numbers, with $p\geq 0$ and $q > 0$ on $I$. We have $$-\int_I p\log p dx\leq -\int_I p\log q dx$$ if both integrals exist. Moreover, there is equality if and only if $p(x) = q(x)$ for all $x$. Now, let $p$ be any probability density function on $\{x_1,...,x_n\}$, with $p_i = p(x_i)$. Letting $q_i = 1/n$ for all $i$, $$-\sum_{i=1}^n p_i\log q_i = \sum_{i=1}^n p_i \log n=\log n$$ which is the entropy of $q$. Therefore our Lemma says $h(p)\leq h(q)$, with equality if and only if $p$ is uniform.

Also, Wikipedia has a brief discussion on this as well: wiki
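As a quick numerical sanity check of the argument above, here is a short R sketch (not part of the proof) showing that moving a little mass away from the uniform distribution strictly lowers the entropy, consistent with the uniform being the unique maximizer; the choices n = 5 and eps = 0.05 are arbitrary.

# entropy (natural log) of a discrete distribution
H <- function(p) -sum(p * log(p))

n <- 5
p.unif <- rep(1/n, n)          # uniform distribution on n outcomes
eps <- 0.05
p.pert <- p.unif               # move eps of mass from cell 1 to cell 2
p.pert[1] <- p.pert[1] - eps
p.pert[2] <- p.pert[2] + eps

H(p.unif)   # log(5), about 1.609: the maximum
H(p.pert)   # strictly smaller, as the perturbation argument predicts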
Why is Entropy maximised when the probability distribution is uniform?
Heuristically, the probability density function on $\{x_1, x_2,..,.x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2,..,.x_n\}$, in o
Why is Entropy maximised when the probability distribution is uniform? Heuristically, the probability density function on $\{x_1, x_2,..,.x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2,..,.x_n\}$, in other words the Uniform distribution. Now, for a more formal proof consider the following: A probability density function on $\{x_1, x_2,..,.x_n\}$ is a set of nonnegative real numbers $p_1,...,p_n$ that add up to 1. Entropy is a continuous function of the $n$-tuples $(p_1,...,p_n)$, and these points lie in a compact subset of $\mathbb{R}^n$, so there is an $n$-tuple where entropy is maximized. We want to show this occurs at $(1/n,...,1/n)$ and nowhere else. Suppose the $p_j$ are not all equal, say $p_1 < p_2$. (Clearly $n\neq 1$.) We will find a new probability density with higher entropy. It then follows, since entropy is maximized at some $n$-tuple, that entropy is uniquely maximized at the $n$-tuple with $p_i = 1/n$ for all $i$. Since $p_1 < p_2$, for small positive $\varepsilon$ we have $p_1 + \varepsilon < p_2 -\varepsilon$. The entropy of $\{p_1 + \varepsilon, p_2 -\varepsilon,p_3,...,p_n\}$ minus the entropy of $\{p_1,p_2,p_3,...,p_n\}$ equals $$-p_1\log\left(\frac{p_1+\varepsilon}{p_1}\right)-\varepsilon\log(p_1+\varepsilon)-p_2\log\left(\frac{p_2-\varepsilon}{p_2}\right)+\varepsilon\log(p_2-\varepsilon)$$ To complete the proof, we want to show this is positive for small enough $\varepsilon$. Rewrite the above equation as $$-p_1\log\left(1+\frac{\varepsilon}{p_1}\right)-\varepsilon\left(\log p_1+\log\left(1+\frac{\varepsilon}{p_1}\right)\right)-p_2\log\left(1-\frac{\varepsilon}{p_2}\right)+\varepsilon\left(\log p_2+\log\left(1-\frac{\varepsilon}{p_2}\right)\right)$$ Recalling that $\log(1 + x) = x + O(x^2)$ for small $x$, the above equation is $$-\varepsilon-\varepsilon\log p_1 + \varepsilon + \varepsilon \log p_2 + O(\varepsilon^2) = \varepsilon\log(p_2/p_1) + O(\varepsilon^2)$$ which is positive when $\varepsilon$ is small enough since $p_1 < p_2$. A less rigorous proof is the following: Consider first the following Lemma: Let $p(x)$ and $q(x)$ be continuous probability density functions on an interval $I$ in the real numbers, with $p\geq 0$ and $q > 0$ on $I$. We have $$-\int_I p\log p dx\leq -\int_I p\log q dx$$ if both integrals exist. Moreover, there is equality if and only if $p(x) = q(x)$ for all $x$. Now, let $p$ be any probability density function on $\{x_1,...,x_n\}$, with $p_i = p(x_i)$. Letting $q_i = 1/n$ for all $i$, $$-\sum_{i=1}^n p_i\log q_i = \sum_{i=1}^n p_i \log n=\log n$$ which is the entropy of $q$. Therefore our Lemma says $h(p)\leq h(q)$, with equality if and only if $p$ is uniform. Also, wikipedia has a brief discussion on this as well: wiki
Why is Entropy maximised when the probability distribution is uniform? Heuristically, the probability density function on $\{x_1, x_2,..,.x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2,..,.x_n\}$, in o
4,843
Why is Entropy maximised when the probability distribution is uniform?
Entropy in physics and information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between them. The purpose of the entropy metric is to measure the amount of information. See my answer with graphs here to show how entropy changes from a uniform distribution to a humped one. The reason why entropy is maximized for a uniform distribution is because it was designed that way! Yes, we're constructing a measure for the lack of information, so we want to assign its highest value to the least informative distribution. Example. I asked you "Dude, where's my car?" Your answer is "it's somewhere in the USA between the Atlantic and Pacific Oceans." This is an example of the uniform distribution. My car could be anywhere in the USA. I didn't get much information from this answer. However, if you told me "I saw your car one hour ago on Route 66 heading from Washington, DC" - this is not a uniform distribution anymore. The car is more likely to be within 60 miles of DC than anywhere near Los Angeles. There's clearly more information here. Hence, our measure must have high entropy for the first answer and a lower one for the second. The uniform must be the least informative distribution; it's basically the "I've no idea" answer.
Why is Entropy maximised when the probability distribution is uniform?
Entropy in physics and information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between. The purpose of entropy metric is to measure the amount o
Why is Entropy maximised when the probability distribution is uniform? Entropy in physics and information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between. The purpose of entropy metric is to measure the amount of information. See my answer with graphs here to show how entropy changes from uniform distribution to a humped one. The reason why entropy is maximized for a uniform distribution is because it was designed so! Yes, we're constructing a measure for the lack of information so we want to assign its highest value to the least informative distribution. Example. I asked you "Dude, where's my car?" Your answer is "it's somewhere in USA between Atlantic and Pacific Oceans." This is an example of the uniform distribution. My car could be anywhere in USA. I didn't get much information from this answer. However, if you told me "I saw your car one hour ago on Route 66 heading from Washington, DC" - this is not a uniform distribution anymore. The car's more likely to be in 60 miles distance from DC, than anywhere near Los Angeles. There's clearly more information here. Hence, our measure must have high entropy for the first answer and lower one for the second. The uniform must be least informative distribution, it's basically "I've no idea" answer.
Why is Entropy maximised when the probability distribution is uniform? Entropy in physics and information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between. The purpose of entropy metric is to measure the amount o
4,844
Why is Entropy maximised when the probability distribution is uniform?
The mathematical argument is based on Jensen's inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \ldots, y_n$ are points in $[a,b]$, then: $n \cdot f\left(\frac{y_1 + \ldots + y_n}{n}\right) \geq f(y_1) + \ldots + f(y_n)$ Apply this to the concave function $f(x) = -x \log(x)$ with $y_i = p(x_i)$ and you have the proof. Note that the $p(x_i)$ define a discrete probability distribution, so their sum is 1. What you get is $\log(n) \geq \sum_{i=1}^n - p(x_i) \log(p(x_i))$, with equality for the uniform distribution.
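In detail, with $f(x) = -x\log(x)$ and $y_i = p(x_i)$, Jensen's inequality gives $$\sum_{i=1}^n -p(x_i)\log(p(x_i)) = \sum_{i=1}^n f(p(x_i)) \leq n \cdot f\left(\frac{1}{n}\sum_{i=1}^n p(x_i)\right) = n \cdot f\left(\frac{1}{n}\right) = -n\cdot\frac{1}{n}\log\frac{1}{n} = \log(n),$$ with equality exactly when all the $p(x_i)$ are equal (by strict concavity of $f$), i.e. for the uniform distribution.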
Why is Entropy maximised when the probability distribution is uniform?
The mathematical argument is based on Jensen inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \ldots y_n$ are points in $[a,b]$, then: $n \cdot f(\frac{
Why is Entropy maximised when the probability distribution is uniform? The mathematical argument is based on Jensen inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \ldots y_n$ are points in $[a,b]$, then: $n \cdot f(\frac{y_1 + \ldots y_n}{n}) \geq f(y_1) + \ldots + f(y_n)$ Apply this for the concave function $f(x) = -x \log(x)$ and Jensen inequality for $y_i = p(x_i)$ and you have the proof. Note that $p(x_i)$ define a discrete probability distribution, so their sum is 1. What you get is $log(n) \geq \sum_{i=1}^n - p(x_i) log(p(x_i))$, with equality for the uniform distribution.
Why is Entropy maximised when the probability distribution is uniform? The mathematical argument is based on Jensen inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \ldots y_n$ are points in $[a,b]$, then: $n \cdot f(\frac{
4,845
Why is Entropy maximised when the probability distribution is uniform?
On a side note, is there any connection between the entropy that occurs in information theory and the entropy calculations in chemistry (thermodynamics)? Yes, there is! You can see the work of Jaynes and many others following his work (such as here and here, for instance). But the main idea is that statistical mechanics (and other fields in science, also) can be viewed as the inference we do about the world. As further reading I'd recommend Ariel Caticha's book on this topic.
Why is Entropy maximised when the probability distribution is uniform?
On a side note, is there any connnection between the entropy that occurs information theory and the entropy calculations in chemistry (thermodynamics) ? Yes, there is! You can see the work of Jaynes
Why is Entropy maximised when the probability distribution is uniform? On a side note, is there any connnection between the entropy that occurs information theory and the entropy calculations in chemistry (thermodynamics) ? Yes, there is! You can see the work of Jaynes and many others following his work (such as here and here, for instance). But the main idea is that statistical mechanics (and other fields in science, also) can be viewed as the inference we do about the world. As a further reading I'd recommend Ariel Caticha's book on this topic.
Why is Entropy maximised when the probability distribution is uniform? On a side note, is there any connnection between the entropy that occurs information theory and the entropy calculations in chemistry (thermodynamics) ? Yes, there is! You can see the work of Jaynes
4,846
Why is Entropy maximised when the probability distribution is uniform?
Main idea: take the partial derivative with respect to each $p_i$, set them all to zero, and solve the resulting system of equations. Take a finite number of $p_i$ where $i=1,\ldots,n$ as an example. Denote $q = 1-\sum_{i=1}^{n-1} p_i$ (that is, $q = p_n$). \begin{align} H &= -\sum_{i=1}^{n-1} p_i \log p_i - q\log q\\ H\cdot\ln 2 &= -\sum_{i=1}^{n-1} p_i \ln p_i - q\ln q \end{align} \begin{align} \frac{\partial (H\ln 2)}{\partial p_i} &= \ln \frac{q}{p_i} = 0 \end{align} Then $q = p_i$ for every $i$, i.e., $p_1=p_2=\cdots=p_n$.
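Spelling out the derivative step (using $\partial q/\partial p_i = -1$ for $i \le n-1$): $$\frac{\partial (H\ln 2)}{\partial p_i} = -\left(\ln p_i + 1\right) - \frac{\partial q}{\partial p_i}\left(\ln q + 1\right) = -\ln p_i - 1 + \ln q + 1 = \ln\frac{q}{p_i}.$$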
Why is Entropy maximised when the probability distribution is uniform?
Main idea: take partial derivative of each $p_i$, set them all to zero, solve the system of linear equations. Take a finite number of $p_i$ where $i=1,...,n$ for an example. Denote $q = 1-\sum_{i=0}^{
Why is Entropy maximised when the probability distribution is uniform? Main idea: take partial derivative of each $p_i$, set them all to zero, solve the system of linear equations. Take a finite number of $p_i$ where $i=1,...,n$ for an example. Denote $q = 1-\sum_{i=0}^{n-1} p_i$. \begin{align} H &= -\sum_{i=0}^{n-1} p_i \log p_i - (1-q)\log q\\ H*\ln 2 &= -\sum_{i=0}^{n-1} p_i \ln p_i - (1-q)\ln q \end{align} \begin{align} \frac{\partial H}{\partial p_i} &= \ln \frac{q}{p_i} = 0 \end{align} Then $q = p_i$ for every $i$, i.e., $p_1=p_2=...=p_n$.
Why is Entropy maximised when the probability distribution is uniform? Main idea: take partial derivative of each $p_i$, set them all to zero, solve the system of linear equations. Take a finite number of $p_i$ where $i=1,...,n$ for an example. Denote $q = 1-\sum_{i=0}^{
4,847
Why is Entropy maximised when the probability distribution is uniform?
There are already several good answers. Another argument uses the fact that $H$ is a symmetric, strictly concave function. More precisely, consider the unit simplex $\Delta_n=\{(p_1,\dots,p_n): p_i\ge 0,\sum_i p_i=1\}$. Then $H$ may be considered a function $H: \Delta_n\to \mathbb{R}$, and it is easy to show that it is strictly concave. To explain symmetry, we first introduce some notation. Given a permutation $\sigma: \{1,\dots, n\}\to\{1,\dots,n\}$, and a point $p\in \Delta_n$, define $\sigma p=(p_{\sigma(1)},\dots, p_{\sigma(n)})$. It is clear that $H(\sigma p)=H(p)$ for any $\sigma$ and $p$, and this is what it means to say that $H$ is a symmetric function. Now, we can show that the uniform distribution maximizes $H$. Since $H$ is continuous on the compact set $\Delta_n$ and strictly concave, it has a unique maximizer, call it $p_{max}$. On the other hand $H(\sigma p_{max})=H(p_{max})$ for any $\sigma$, so $\sigma p_{max}$ is also a maximizer. Since $p_{max}$ is the only maximizer, we conclude $p_{max}=\sigma p_{max}$ for each $\sigma$, and the only point in $\Delta_n$ with this property is $p_{max}=(1/n,\dots, 1/n)$.
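The strict concavity claim can be checked directly: writing $H(p) = -\sum_i p_i \log p_i$ (natural log; the base only rescales $H$), on the interior of $\Delta_n$ we have $$\frac{\partial H}{\partial p_i} = -\log p_i - 1, \qquad \frac{\partial^2 H}{\partial p_i\,\partial p_j} = -\frac{\delta_{ij}}{p_i},$$ so the Hessian is diagonal with strictly negative entries, hence negative definite, which gives the strict concavity used above.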
Why is Entropy maximised when the probability distribution is uniform?
There are already several good answers. Another argument uses the fact that H is a symmetric, strictly concave function. More precisely, consider the unit simplex $\Delta_n=\{(p_1,\dots,p_n): p_i\ge 0
Why is Entropy maximised when the probability distribution is uniform? There are already several good answers. Another argument uses the fact that H is a symmetric, strictly concave function. More precisely, consider the unit simplex $\Delta_n=\{(p_1,\dots,p_n): p_i\ge 0,\sum_i p_i=1\}$.Then $H$ may be considered a function $H: \Delta_n\to \mathbb{R}$, and it is easy to show that it is strictly convex. To explain symmetry, we first introduce some notation. Given a permutation $\sigma: \{1,\dots, n\}\to\{1,\dots,n\}$, and a point $p\in \Delta_n$, define $\sigma p=(p_{\sigma(1)},\dots, p_{\sigma(n)})$. It is clear that $H(\sigma p)=H(p)$ for any $\sigma$ and p, and this is what it means to say that H is a symmetric function. Now, we can show that the uniform distribution maximizes $H$. Since $H$ is strictly convex, it has a unique maximizer, call it $p_{max}$. On the other hand $H(\sigma p_{max})=H(p_{max})$ for any $\sigma$, so $\sigma p_{max}$ is also a maximizer. Since $p_{max}$ is the only maximizer, we conclude $p_{max}=\sigma p_{max}$ for each $\sigma$, and the only point in $\Delta_n$ with this property is $p_{max}=(1/n,\dots, 1/n)$.
Why is Entropy maximised when the probability distribution is uniform? There are already several good answers. Another argument uses the fact that H is a symmetric, strictly concave function. More precisely, consider the unit simplex $\Delta_n=\{(p_1,\dots,p_n): p_i\ge 0
4,848
Why is Entropy maximised when the probability distribution is uniform?
Calculus of Variations

To handle varying functions, we will make use of the Calculus of Variations. The variation $\delta f(x)$ refers to a rate of change of $f(x)$ with respect to "time". That is, $\delta$ works like a partial derivative with respect to "time". For example, $$ \begin{align} \delta(\log(f(x))f(x)) &=\left(\frac1{f(x)}f(x)+\log(f(x))\right)\delta f(x)\\ &=(1+\log(f(x)))\,\delta f(x) \end{align} $$ simply says that the rate of change of $\log(f(x))f(x)$ is $(1+\log(f(x)))$ times the rate of change of $f(x)$.

Maximize Entropy

Consider the family of continuous probability distributions $f$ on $[a,b]$; that is, positive $f$ where $$ \int_a^bf(x)\,\mathrm{d}x=1\tag1 $$ Define the entropy of $f$ to be $$ -\int_a^b\log(f(x))f(x)\,\mathrm{d}x\tag2 $$ If we wish to maximize $(2)$ for all distributions satisfying $(1)$, we need to find all $f$ so that $(2)$ is stationary; that is, $\delta$ of the integral in $(2)$ vanishes: $$ -\int_a^b(1+\log(f(x)))\,\delta f(x)\,\mathrm{d}x=0\tag3 $$ for all variations, $\delta f$, where $(1)$ is stationary; that is, $\delta$ of the integral in $(1)$ vanishes: $$ \int_a^b\color{#C00}{1}\,\delta f(x)\,\mathrm{d}x=0\tag4 $$ $(3)$, $(4)$, and orthogonality (equations $(3)$ and $(4)$ say that $\color{#C00}{1}$ and $1+\log(f(x))$ are orthogonal to all the same variations; see the link for details) require that there be a $c_0$ so that $$ 1+\log(f(x))=c_0\cdot\color{#C00}{1}\tag5 $$ That is, the desired distribution is constant; that is, $$ \bbox[5px,border:2px solid #C0A000]{f(x)=\frac1{b-a}}\tag6 $$

Vanishing Densities

Note that in $(3)$, $-(1+\log(f(x)))\to\infty$ as $f(x)\to0$. This doesn't cause a problem in $(2)$ since $-\log(f(x))f(x)$ is bounded by $\frac1e$. There are a few simple ways to overcome the problem in $(3)$. We can use $\delta f(x)=f(x)\,\delta\log(f(x))$ and use variations of $\log(f(x))$. Then $(5)$ becomes $$ (1+\log(f(x)))f(x)=c_0f(x) $$ and both sides vanish when $f(x)=0$. We can restrict $f(x)\gt0$ and only consider $f(x)=0$ as a limiting case in the closure. Note that for $f(x)\approx0$, $-(1+\log(f(x)))$ is huge. That means that to maximize $(2)$, any place where $f(x)\approx0$ we want $\delta f(x)$ to be positive, so that $-\int_a^b(1+\log(f(x)))\delta f(x)\,\mathrm{d}x$ increases. That is, no maximizing function will have $f(x)=0$.
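The same conclusion can be reached by handling the constraint $(1)$ with a Lagrange multiplier: maximize $$-\int_a^b \log(f(x))f(x)\,\mathrm{d}x+\lambda\left(\int_a^b f(x)\,\mathrm{d}x-1\right).$$ Setting the variation to zero gives $-(1+\log(f(x)))+\lambda=0$, so $f(x)=e^{\lambda-1}$ is constant, and the constraint $(1)$ then forces $f(x)=\frac1{b-a}$, in agreement with $(6)$.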
Why is Entropy maximised when the probability distribution is uniform?
Calculus of Variations To handle varying functions, we will make use of the Calculus of Variations. The variation $\delta f(x)$ refers to a rate of change of $f(x)$ with respect to "time". That is, $\
Why is Entropy maximised when the probability distribution is uniform? Calculus of Variations To handle varying functions, we will make use of the Calculus of Variations. The variation $\delta f(x)$ refers to a rate of change of $f(x)$ with respect to "time". That is, $\delta$ works like a partial derivative with respect to "time". For example, $$ \begin{align} \delta(\log(f(x))f(x)) &=\left(\frac1{f(x)}f(x)+\log(f(x))\right)\delta f(x)\\ &=(1+\log(f(x)))\,\delta f(x) \end{align} $$ simply says that the rate of change of $\log(f(x))f(x)$ is $(1+\log(f(x)))$ times the rate of change of $f(x)$. Maximize Entropy Consider the family of continuous probability distributions $f$ on $[a,b]$; that is, positive $f$ where $$ \int_a^bf(x)\,\mathrm{d}x=1\tag1 $$ Define the entropy of $f$ to be $$ -\int_a^b\log(f(x))f(x)\,\mathrm{d}x\tag2 $$ If we wish to maximize $(2)$ for all distributions satisfying $(1)$, we need to find all $f$ so that $(2)$ is stationary; that is, $\delta$ of the integral in $(2)$ vanishes: $$ -\int_a^b(1+\log(f(x))\,\delta f(x)\,\mathrm{d}x=0\tag3 $$ for all variations, $\delta f$, where $(1)$ is stationary; that is, $\delta$ of the integral in $(1)$ vanishes: $$ \int_a^b\color{#C00}{1}\,\delta f(x)\,\mathrm{d}x=0\tag4 $$ $(3)$, $(4)$, and orthogonality (equations $(3)$ and $(4)$ say that $\color{#C00}{1}$ and $1+\log(f(x))$ are orthogonal to all the same variations; see the link for details) require that there be a $c_0$ so that $$ 1+\log(f(x))=c_0\cdot\color{#C00}{1}\tag5 $$ That is, the desired distribution is constant; that is, $$ \bbox[5px,border:2px solid #C0A000]{f(x)=\frac1{b-a}}\tag6 $$ Vanishing Densities Note that in $(3)$, $-(1+\log(f(x)))\to\infty$ as $f(x)\to0$. This doesn't cause a problem in $(2)$ since $-\log(f(x))f(x)$ is bounded by $\frac1e$. There are a few simple ways to overcome the problem in $(3)$. We can use $\delta f(x)=f(x)\,\delta\log(f(x))$ and use variations of $\log(x)$. Then $(5)$ becomes $$ (1+\log(f(x)))f(x)=c_0f(x) $$ and both sides vanish when $f(x)=0$. We can restrict $f(x)\gt0$ and only consider $f(x)=0$ as a limiting case in the closure. Note that for $f(x)\approx0$, $-(1+\log(f(x)))$ is huge. That means that to maximize $(2)$, any place where $f(x)\approx0$ we want $\delta f(x)$ to be positive, so that $-\int_a^b(1+\log(f(x)))\delta f(x)\,\mathrm{d}x$ increases. That is, no maximizing function will have $f(x)=0$.
Why is Entropy maximised when the probability distribution is uniform? Calculus of Variations To handle varying functions, we will make use of the Calculus of Variations. The variation $\delta f(x)$ refers to a rate of change of $f(x)$ with respect to "time". That is, $\
4,849
Why is Entropy maximised when the probability distribution is uniform?
An intuitive explanation: If we put more probability mass into one event of a random variable, we will have to take away some from other events. The one will have less information content and more weight, the others more information content and less weight. Therefore the entropy being the expected information content will go down since the event with lower information content will be weighted more. As an extreme case imagine one event getting probability of almost one, therefore the other events will have a combined probability of almost zero and the entropy will be very low.
Why is Entropy maximised when the probability distribution is uniform?
An intuitive explanation: If we put more probability mass into one event of a random variable, we will have to take away some from other events. The one will have less information content and more wei
Why is Entropy maximised when the probability distribution is uniform? An intuitive explanation: If we put more probability mass into one event of a random variable, we will have to take away some from other events. The one will have less information content and more weight, the others more information content and less weight. Therefore the entropy being the expected information content will go down since the event with lower information content will be weighted more. As an extreme case imagine one event getting probability of almost one, therefore the other events will have a combined probability of almost zero and the entropy will be very low.
Why is Entropy maximised when the probability distribution is uniform? An intuitive explanation: If we put more probability mass into one event of a random variable, we will have to take away some from other events. The one will have less information content and more wei
4,850
Obtaining predicted values (Y=1 or 0) from a logistic regression model fit
Once you have the predicted probabilities, it is up to you what threshold you would like to use. You may choose the threshold to optimize sensitivity, specificity or whatever measure is most important in the context of the application (some additional info would be helpful here for a more specific answer). You may want to look at ROC curves and other measures related to optimal classification.

Edit: To clarify this answer somewhat I'm going to give an example. The real answer is that the optimal cutoff depends on what properties of the classifier are important in the context of the application. Let $Y_{i}$ be the true value for observation $i$, and $\hat{Y}_{i}$ be the predicted class. Some common measures of performance are

(1) Sensitivity: $P(\hat{Y}_i=1 | Y_i=1)$ - the proportion of '1's that are correctly identified as so.

(2) Specificity: $P(\hat{Y}_i=0 | Y_i=0)$ - the proportion of '0's that are correctly identified as so.

(3) (Correct) Classification Rate: $P(Y_i = \hat{Y}_i)$ - the proportion of predictions that were correct.

(1) is also called the True Positive Rate, (2) is also called the True Negative Rate.

For example, if your classifier were aiming to evaluate a diagnostic test for a serious disease that has a relatively safe cure, the sensitivity is far more important than the specificity. In another case, if the disease were relatively minor and the treatment were risky, specificity would be more important to control. For general classification problems, it is considered "good" to jointly optimize the sensitivity and specificity - for example, you may use the classifier that minimizes their Euclidean distance from the point $(1,1)$: $$ \delta = \sqrt{ [P(\hat{Y}_i=1 | Y_i=1)-1]^2 + [P(\hat{Y}_i=0 | Y_i=0)-1]^2 }$$ $\delta$ could be weighted or modified in another way to reflect a more reasonable measure of distance from $(1,1)$ in the context of the application - Euclidean distance from $(1,1)$ was chosen here arbitrarily for illustrative purposes. In any case, any of these four measures could be the most appropriate, depending on the application. Below is a simulated example using prediction from a logistic regression model to classify. The cutoff is varied to see what cutoff gives the "best" classifier under each of these measures. In this example the data comes from a logistic regression model with three predictors (see R code below plot). As you can see from this example, the "optimal" cutoff depends on which of these measures is most important - this is entirely application dependent.

Edit 2: $P(Y_i = 1 | \hat{Y}_i = 1)$ and $P(Y_i = 0 | \hat{Y}_i = 0)$, the Positive Predictive Value and Negative Predictive Value (note these are NOT the same as sensitivity and specificity) may also be useful measures of performance.
# data y simulated from a logistic regression model
# with three predictors, n=10000
x = matrix(rnorm(30000),10000,3)
lp = 0 + x[,1] - 1.42*x[,2] + .67*x[,3] + 1.1*x[,1]*x[,2] - 1.5*x[,1]*x[,3] + 2.2*x[,2]*x[,3] + x[,1]*x[,2]*x[,3]
p = 1/(1+exp(-lp))
y = runif(10000)<p

# fit a logistic regression model
mod = glm(y~x[,1]*x[,2]*x[,3],family="binomial")

# using a cutoff of cut, calculate sensitivity, specificity, and classification rate
perf = function(cut, mod, y) {
   yhat = (mod$fit>cut)
   w = which(y==1)
   sensitivity = mean( yhat[w] == 1 )
   specificity = mean( yhat[-w] == 0 )
   c.rate = mean( y==yhat )
   d = cbind(sensitivity,specificity)-c(1,1)
   d = sqrt( d[1]^2 + d[2]^2 )
   out = t(as.matrix(c(sensitivity, specificity, c.rate,d)))
   colnames(out) = c("sensitivity", "specificity", "c.rate", "distance")
   return(out)
}

s = seq(.01,.99,length=1000)
OUT = matrix(0,1000,4)
for(i in 1:1000) OUT[i,]=perf(s[i],mod,y)

plot(s,OUT[,1],xlab="Cutoff",ylab="Value",cex.lab=1.5,cex.axis=1.5,ylim=c(0,1),type="l",lwd=2,axes=FALSE,col=2)
axis(1,seq(0,1,length=5),seq(0,1,length=5),cex.lab=1.5)
axis(2,seq(0,1,length=5),seq(0,1,length=5),cex.lab=1.5)
lines(s,OUT[,2],col="darkgreen",lwd=2)
lines(s,OUT[,3],col=4,lwd=2)
lines(s,OUT[,4],col="darkred",lwd=2)
box()
legend(0,.25,col=c(2,"darkgreen",4,"darkred"),lwd=c(2,2,2,2),c("Sensitivity","Specificity","Classification Rate","Distance"))
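As a small follow-up sketch (not part of the original answer), the "best" cutoff under each criterion can be read directly off the s and OUT objects computed above:

s[which.max(OUT[, 3])]        # cutoff maximizing the classification rate
s[which.min(OUT[, 4])]        # cutoff minimizing the distance from (sensitivity, specificity) = (1, 1)
OUT[which.min(OUT[, 4]), ]    # performance measures at that distance-minimizing cutoff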
Obtaining predicted values (Y=1 or 0) from a logistic regression model fit
Once you have the predicted probabilities, it is up to you what threshold you would like to use. You may choose the threshold to optimize sensitivity, specificity or whatever measure it most important
Obtaining predicted values (Y=1 or 0) from a logistic regression model fit Once you have the predicted probabilities, it is up to you what threshold you would like to use. You may choose the threshold to optimize sensitivity, specificity or whatever measure it most important in the context of the application (some additional info would be helpful here for a more specific answer). You may want to look at ROC curves and other measures related to optimal classification. Edit: To clarify this answer somewhat I'm going to give an example. The real answer is that the optimal cutoff depends on what properties of the classifier are important in the context of the application. Let $Y_{i}$ be the true value for observation $i$, and $\hat{Y}_{i}$ be the predicted class. Some common measures of performance are (1) Sensitivity: $P(\hat{Y}_i=1 | Y_i=1)$ - the proportion of '1's that are correctly identified as so. (2) Specificity: $P(\hat{Y}_i=0 | Y_i=0)$ - the proportion of '0's that are correctly identified as so (3) (Correct) Classification Rate: $P(Y_i = \hat{Y}_i)$ - the proportion of predictions that were correct. (1) is also called True Positive Rate, (2) is also called True Negative Rate. For example, if your classifier were aiming to evaluate a diagnostic test for a serious disease that has a relatively safe cure, the sensitivity is far more important that the specificity. In another case, if the disease were relatively minor and the treatment were risky, specificity would be more important to control. For general classification problems, it is considered "good" to jointly optimize the sensitivity and specification - for example, you may use the classifier that minimizes their Euclidean distance from the point $(1,1)$: $$ \delta = \sqrt{ [P(Y_i=1 | \hat{Y}_i=1)-1]^2 + [P(Y_i=0 | \hat{Y}_i=0)-1]^2 }$$ $\delta$ could be weighted or modified in another way to reflect a more reasonable measure of distance from $(1,1)$ in the context of the application - euclidean distance from (1,1) was chosen here arbitrarily for illustrative purposes. In any case, all of these four measures could be most appropriate, depending on the application. Below is a simulated example using prediction from a logistic regression model to classify. The cutoff is varied to see what cutoff gives the "best" classifier under each of these three measures. In this example the data comes from a logistic regression model with three predictors (see R code below plot). As you can see from this example, the "optimal" cutoff depends on which of these measures is most important - this is entirely application dependent. Edit 2: $P(Y_i = 1 | \hat{Y}_i = 1)$ and $P(Y_i = 0 | \hat{Y}_i = 0)$, the Positive Predictive Value and Negative Predictive Value (note these are NOT the same as sensitivity and specificity) may also be useful measures of performance. 
# data y simulated from a logistic regression model # with with three predictors, n=10000 x = matrix(rnorm(30000),10000,3) lp = 0 + x[,1] - 1.42*x[2] + .67*x[,3] + 1.1*x[,1]*x[,2] - 1.5*x[,1]*x[,3] +2.2*x[,2]*x[,3] + x[,1]*x[,2]*x[,3] p = 1/(1+exp(-lp)) y = runif(10000)<p # fit a logistic regression model mod = glm(y~x[,1]*x[,2]*x[,3],family="binomial") # using a cutoff of cut, calculate sensitivity, specificity, and classification rate perf = function(cut, mod, y) { yhat = (mod$fit>cut) w = which(y==1) sensitivity = mean( yhat[w] == 1 ) specificity = mean( yhat[-w] == 0 ) c.rate = mean( y==yhat ) d = cbind(sensitivity,specificity)-c(1,1) d = sqrt( d[1]^2 + d[2]^2 ) out = t(as.matrix(c(sensitivity, specificity, c.rate,d))) colnames(out) = c("sensitivity", "specificity", "c.rate", "distance") return(out) } s = seq(.01,.99,length=1000) OUT = matrix(0,1000,4) for(i in 1:1000) OUT[i,]=perf(s[i],mod,y) plot(s,OUT[,1],xlab="Cutoff",ylab="Value",cex.lab=1.5,cex.axis=1.5,ylim=c(0,1),type="l",lwd=2,axes=FALSE,col=2) axis(1,seq(0,1,length=5),seq(0,1,length=5),cex.lab=1.5) axis(2,seq(0,1,length=5),seq(0,1,length=5),cex.lab=1.5) lines(s,OUT[,2],col="darkgreen",lwd=2) lines(s,OUT[,3],col=4,lwd=2) lines(s,OUT[,4],col="darkred",lwd=2) box() legend(0,.25,col=c(2,"darkgreen",4,"darkred"),lwd=c(2,2,2,2),c("Sensitivity","Specificity","Classification Rate","Distance"))
Obtaining predicted values (Y=1 or 0) from a logistic regression model fit Once you have the predicted probabilities, it is up to you what threshold you would like to use. You may choose the threshold to optimize sensitivity, specificity or whatever measure it most important
4,851
Understanding "almost all local minimum have very similar function value to the global optimum"
A recent paper The Loss Surfaces of Multilayer Networks offers some possible explanations for this. From their abstract (bold is mine): "We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks where for the latter poor quality local minima have non-zero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting." A lot of the influential people in deep learning (Yann LeCunn and Yoshua Bengio to name a few) and some researchers coming more from the mathematical angle (Rong Ge and other Sanjeev Arora collaborators) have been discussing and exploring these ideas. In the above referenced paper, see Figure 3, which shows a banding/concentration phenomenon of the local minima values as the nets have more hidden units. The banding/concentration represents some empirical evidence that for deeper or larger models, a local minima is "good enough", since their loss values are roughly similar. And most importantly, they have a loss which is closer to the global minimum as the model gets more complex (in this case wider, but in practice, deeper). Furthermore, they use a spin-glass model, which they even state is just a model and not necessarily indicative of the true picture, to show that reaching the global minimizer from a local minima may take exponentially long: "In order to find a further low lying minimum we must pass through a saddle point. Therefore we must go up at least to the level where there is an equal amount of saddle points to have a decent chance of finding a path that might possibly take us to another local minimum. This process takes an exponentially long time so in practice finding the global minimum is not feasible." The Rong Ge research is centered around breaking through saddle points. Yoshua Bengio and his collaborators have posed a pretty bold Saddle Point Hypothesis: Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. source here: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. To some extent, the above two approaches aren't exactly the same (the Saddle Point Hypothesis might question what is really a local minima and what is merely a poorly conditioned saddle point with a very long plateau region?). The idea behind the Saddle Point Hypothesis is that it is possible to design optimization methods to break through saddle points, for example Saddle-Free Newton from the Bengio article, to potentially speed up convergence and maybe even reach the global optimum. The first Multilayer Loss Surface article is not really concerned with reaching the global optimum and actually believes it to have some poor overfitting properties. Curiously, both articles use ideas from statistical physics and spin-glass models. 
But they are sort of related in that both articles believe that in order to reach the global minimizer, one must overcome the optimization challenge of saddle points. The first article just believes that local minima are good enough. It is fair to wonder if momentum methods and other new optimization algorithms, which can estimate some 2nd order curvature properties can escape saddle points. A famous animation by Alec Radford here. To answer your question: "where does this belief come from" I personally think it comes from the fact that it's possible to use different random seeds to learn different weights, but the corresponding nets have similar quantitative performance. For example, if you set two different random seeds for Glorot weight initialization, you will probably learn different weights, but if you train using similar optimization methods, the nets will have similar performance. One common folklore belief is that the optimization landscape is similar to that of an egg carton, another good blog post on this here: No more local minima? with the egg-carton analogy. Edit: I just wanted to be clear that the egg carton analogy is not true, otherwise there would be no need for momentum or other more advanced optimization techniques. But it is known that SGD does not perform as well as SGD+Momentum or more modern optimization algorithms, perhaps due to the existence of saddle points.
Understanding "almost all local minimum have very similar function value to the global optimum"
A recent paper The Loss Surfaces of Multilayer Networks offers some possible explanations for this. From their abstract (bold is mine): "We conjecture that both simulated annealing and SGD converge t
Understanding "almost all local minimum have very similar function value to the global optimum" A recent paper The Loss Surfaces of Multilayer Networks offers some possible explanations for this. From their abstract (bold is mine): "We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks where for the latter poor quality local minima have non-zero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting." A lot of the influential people in deep learning (Yann LeCunn and Yoshua Bengio to name a few) and some researchers coming more from the mathematical angle (Rong Ge and other Sanjeev Arora collaborators) have been discussing and exploring these ideas. In the above referenced paper, see Figure 3, which shows a banding/concentration phenomenon of the local minima values as the nets have more hidden units. The banding/concentration represents some empirical evidence that for deeper or larger models, a local minima is "good enough", since their loss values are roughly similar. And most importantly, they have a loss which is closer to the global minimum as the model gets more complex (in this case wider, but in practice, deeper). Furthermore, they use a spin-glass model, which they even state is just a model and not necessarily indicative of the true picture, to show that reaching the global minimizer from a local minima may take exponentially long: "In order to find a further low lying minimum we must pass through a saddle point. Therefore we must go up at least to the level where there is an equal amount of saddle points to have a decent chance of finding a path that might possibly take us to another local minimum. This process takes an exponentially long time so in practice finding the global minimum is not feasible." The Rong Ge research is centered around breaking through saddle points. Yoshua Bengio and his collaborators have posed a pretty bold Saddle Point Hypothesis: Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. source here: Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. To some extent, the above two approaches aren't exactly the same (the Saddle Point Hypothesis might question what is really a local minima and what is merely a poorly conditioned saddle point with a very long plateau region?). The idea behind the Saddle Point Hypothesis is that it is possible to design optimization methods to break through saddle points, for example Saddle-Free Newton from the Bengio article, to potentially speed up convergence and maybe even reach the global optimum. The first Multilayer Loss Surface article is not really concerned with reaching the global optimum and actually believes it to have some poor overfitting properties. 
Curiously, both articles use ideas from statistical physics and spin-glass models. But they are sort of related in that both articles believe that in order to reach the global minimizer, one must overcome the optimization challenge of saddle points. The first article just believes that local minima are good enough. It is fair to wonder if momentum methods and other new optimization algorithms, which can estimate some 2nd order curvature properties can escape saddle points. A famous animation by Alec Radford here. To answer your question: "where does this belief come from" I personally think it comes from the fact that it's possible to use different random seeds to learn different weights, but the corresponding nets have similar quantitative performance. For example, if you set two different random seeds for Glorot weight initialization, you will probably learn different weights, but if you train using similar optimization methods, the nets will have similar performance. One common folklore belief is that the optimization landscape is similar to that of an egg carton, another good blog post on this here: No more local minima? with the egg-carton analogy. Edit: I just wanted to be clear that the egg carton analogy is not true, otherwise there would be no need for momentum or other more advanced optimization techniques. But it is known that SGD does not perform as well as SGD+Momentum or more modern optimization algorithms, perhaps due to the existence of saddle points.
Understanding "almost all local minimum have very similar function value to the global optimum" A recent paper The Loss Surfaces of Multilayer Networks offers some possible explanations for this. From their abstract (bold is mine): "We conjecture that both simulated annealing and SGD converge t
4,852
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights (prcomp( ... )$rotation) also change the sign, so the interpretation stays exactly the same:

set.seed( 999 )
a <- data.frame(1:10,rnorm(10))
pca1 <- prcomp( a )
pca2 <- princomp( a )

pca1$rotation shows

                 PC1       PC2
X1.10      0.9900908 0.1404287
rnorm.10. -0.1404287 0.9900908

and pca2$loadings shows

Loadings:
          Comp.1 Comp.2
X1.10     -0.99  -0.14
rnorm.10.  0.14  -0.99

               Comp.1 Comp.2
SS loadings       1.0    1.0
Proportion Var    0.5    0.5
Cumulative Var    0.5    1.0

So, why does the interpretation stay the same? You do the PCA regression of y on component 1. In the first version (prcomp), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of variable 1 (1:10 in a) is positive, that shows that the larger the variable 1, the larger the y. Now use the second version (princomp). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y on PC1 is now negative. But so is the loading of variable 1; that means the larger variable 1, the smaller component 1, the larger y -- the interpretation is the same. Possibly, the easiest way to see that is to use a biplot:

library( pca3d )
pca2d( pca1, biplot= TRUE, shape= 19, col= "black" )

The same biplot for the second variant is produced by

pca2d( pca2$scores, biplot= pca2$loadings[,], shape= 19, col= "black" )

As you can see, the two biplots are rotated by 180°. However, the relation between the weights / loadings (the red arrows) and the data points (the black dots) is exactly the same; thus, the interpretation of the components is unchanged.
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs,
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights (prcomp( ... )$rotation) also change the sign, so the interpretation stays exactly the same: set.seed( 999 ) a <- data.frame(1:10,rnorm(10)) pca1 <- prcomp( a ) pca2 <- princomp( a ) pca1$rotation shows PC1 PC2 X1.10 0.9900908 0.1404287 rnorm.10. -0.1404287 0.9900908 and pca2$loadings show Loadings: Comp.1 Comp.2 X1.10 -0.99 -0.14 rnorm.10. 0.14 -0.99 Comp.1 Comp.2 SS loadings 1.0 1.0 Proportion Var 0.5 0.5 Cumulative Var 0.5 1.0 So, why does the interpretation stays the same? You do the PCA regression of y on component 1. In the first version (prcomp), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of the variable 1 (1:10 in a) is positive, that shows that the larger the variable 1, the larger the y. Now use the second version (princomp). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y< over PC1 is now negative. But so is the loading of the variable 1; that means, the larger variable 1, the smaller the component 1, the larger y -- the interpretation is the same. Possibly, the easiest way to see that is to use a biplot. library( pca3d ) pca2d( pca1, biplot= TRUE, shape= 19, col= "black" ) shows The same biplot for the second variant shows pca2d( pca2$scores, biplot= pca2$loadings[,], shape= 19, col= "black" ) As you see, the images are rotated by 180°. However, the relation between the weights / loadings (the red arrows) and the data points (the black dots) is exactly the same; thus, the interpretation of the components is unchanged.
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs,
4,853
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
This question gets asked a lot on this forum, so I would like to supplement @January's excellent answer with a bit more general considerations. In both principal component analysis (PCA) and factor analysis (FA), we use the original variables $x_1, x_2, ... x_d$ to estimate several latent components (or latent variables) $z_1, z_2, ... z_k$. These latent components are given by PCA or FA component scores. Each original variable is a linear combination of these components with some weights: for example the first original variable $x_1$ might be well approximated by twice $z_1$ plus three times $z_2$, so that $x_1 \approx 2z_1 + 3z_2$. If the scores are standardized, then these weights ($2$ and $3$) are known as loadings. So, informally, one can say that $$\mathrm{Original\: variables} \approx \mathrm{Scores} \cdot \mathrm{Loadings}.$$ From here we can see that if we take one latent component, e.g. $z_1$, and flip the sign of its scores and of its loadings, then this will have no influence on the outcome (or interpretation), because $$-1\cdot -1 = 1.$$ The conclusion is that for each PCA or FA component, the sign of its scores and of its loadings is arbitrary and meaningless. It can be flipped, but only if the sign of both scores and loadings is reversed at the same time.
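A small numerical sketch of this invariance (using scikit-learn's PCA purely for convenience; note that components_ holds unit-norm directions, so the "loadings" terminology is used loosely here, and the data are made up):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)        # n x k matrix of component scores
directions = pca.components_     # k x d matrix, one component per row

X_hat = scores @ directions + pca.mean_    # rank-k approximation of X

# Flip the sign of the first component in BOTH the scores and the directions.
scores_f = scores.copy();        scores_f[:, 0] *= -1
directions_f = directions.copy(); directions_f[0, :] *= -1
X_hat_f = scores_f @ directions_f + pca.mean_

print(np.allclose(X_hat, X_hat_f))   # True, because (-1) * (-1) = 1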
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
This question gets asked a lot on this forum, so I would like to supplement @January's excellent answer with a bit more general considerations. In both principal component analysis (PCA) and factor an
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? This question gets asked a lot on this forum, so I would like to supplement @January's excellent answer with a bit more general considerations. In both principal component analysis (PCA) and factor analysis (FA), we use the original variables $x_1, x_2, ... x_d$ to estimate several latent components (or latent variables) $z_1, z_2, ... z_k$. These latent components are given by PCA or FA component scores. Each original variable is a linear combination of these components with some weights: for example the first original variable $x_1$ might be well approximated by twice $z_1$ plus three times $z_2$, so that $x_1 \approx 2z_1 + 3z_2$. If the scores are standardized, then these weights ($2$ and $3$) are known as loadings. So, informally, one can say that $$\mathrm{Original\: variables} \approx \mathrm{Scores} \cdot \mathrm{Loadings}.$$ From here we can see that if we take one latent component, e.g. $z_1$, and flip the sign of its scores and of its loadings, then this will have no influence on the outcome (or interpretation), because $$-1\cdot -1 = 1.$$ The conclusion is that for each PCA or FA component, the sign of its scores and of its loadings is arbitrary and meaningless. It can be flipped, but only if the sign of both scores and loadings is reversed at the same time.
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? This question gets asked a lot on this forum, so I would like to supplement @January's excellent answer with a bit more general considerations. In both principal component analysis (PCA) and factor an
4,854
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
This was well answered above. Just to provide some further mathematical context: the directions along which the principal components act correspond to the eigenvectors of the system (the covariance matrix of the data). If you are getting a positive or negative PC, it just means that you are projecting on an eigenvector that is pointing in one direction or $180^\circ$ away in the other direction. Regardless, the interpretation remains the same! It should also be added that the variance captured along each principal component is simply the corresponding eigenvalue (the eigenvectors themselves are conventionally scaled to unit length).
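A short numerical check of both statements (a hedged sketch with made-up data, numpy only):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.0, 0.3]])
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
v = eigvecs[:, -1]                       # leading eigenvector (a unit vector)

# v and -v satisfy the same eigenvalue equation, so the sign is arbitrary.
print(np.allclose(cov @ v, eigvals[-1] * v))
print(np.allclose(cov @ -v, eigvals[-1] * -v))

# The variance of the data projected on either direction equals the eigenvalue.
print(np.isclose(np.var(Xc @ v, ddof=1), eigvals[-1]))
print(np.isclose(np.var(Xc @ -v, ddof=1), eigvals[-1]))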
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
This was well answered above. Just to provide some further mathematical relevance, the directions that the principal components act correspond to the eigenvectors of the system. If you are getting a p
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? This was well answered above. Just to provide some further mathematical relevance, the directions that the principal components act correspond to the eigenvectors of the system. If you are getting a positive or negative PC it just means that you are projecting on an eigenvector that is pointing in one direction or $180^\circ$ away in the other direction. Regardless, the interpretation remains the same! It should also be added that the lengths of your principal components are simply the eigenvalues.
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? This was well answered above. Just to provide some further mathematical relevance, the directions that the principal components act correspond to the eigenvectors of the system. If you are getting a p
4,855
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
It is easy to see that the sign of scores does not matter when using PCA for classification or clustering. But it can seem to matter for regression. Consider a case where you have just one principal component or one common factor underlying several variables. Then lm(y ~ PC1) will give you a coefficient of the opposite sign compared to lm(y ~ -PC1), even though the fitted values are identical: if y and PC1 have a positive linear relationship, y and -PC1 have a negative linear relationship, so how you read the coefficient depends on the sign convention. Maybe for regression, you should consider other alternatives discussed here, for example lasso regression.
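A quick sketch of what actually changes (made-up one-component data, with scikit-learn's LinearRegression standing in for lm):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
pc1 = rng.normal(size=100)
y = 3.0 * pc1 + rng.normal(scale=0.5, size=100)

m_pos = LinearRegression().fit(pc1.reshape(-1, 1), y)
m_neg = LinearRegression().fit((-pc1).reshape(-1, 1), y)

print(m_pos.coef_, m_neg.coef_)    # roughly +3 and -3: the slope flips sign
print(np.allclose(m_pos.predict(pc1.reshape(-1, 1)),
                  m_neg.predict((-pc1).reshape(-1, 1))))   # True: identical fitted values

So the sign convention changes how you interpret the coefficient, not the quality of the fit.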
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
It is easy to see that the sign of scores does not matter when using PCA for classification or clustering. But it seems to matter for regression. Consider a case where you have just one principal comp
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? It is easy to see that the sign of scores does not matter when using PCA for classification or clustering. But it seems to matter for regression. Consider a case where you have just one principal component or one common factor underlying several variables. Then lm(y ~ PC1) will give you different predictions of y compared to lm(y ~ -PC1). If y and PC1 have a positive linear relationship, y and -PC1 have a negative linear relationship. Maybe for regression, you should consider other alternatives discussed here, for example lasso regression.
Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign? It is easy to see that the sign of scores does not matter when using PCA for classification or clustering. But it seems to matter for regression. Consider a case where you have just one principal comp
4,856
What is pre training a neural network?
The usual way of training a network: You want to train a neural network to perform a task (e.g. classification) on a data set (e.g. a set of images). You start training by initializing the weights randomly. As soon as you start training, the weights are changed in order to perform the task with fewer mistakes (i.e. optimization). Once you're satisfied with the training results, you save the weights of your network somewhere. You are now interested in training a network to perform a new task (e.g. object detection) on a different data set (e.g. images too, but not the same as the ones you used before). Instead of repeating what you did for the first network and starting training from randomly initialized weights, you can use the weights you saved from the previous network as the initial weight values for your new experiment. Initializing the weights this way is referred to as using a pre-trained network. The first network is your pre-trained network. The second one is the network you are fine-tuning. The idea behind pre-training is that random initialization is...well...random; the values of the weights have nothing to do with the task you're trying to solve. Why should a set of values be any better than another set? But how else would you initialize the weights? If you knew how to initialize them properly for the task, you might as well set them to the optimal values (slightly exaggerated). No need to train anything. You have the optimal solution to your problem. Pre-training gives the network a head start. As if it has seen the data before. What to watch out for when pre-training: The task used in pre-training the network can be the same as the task in the fine-tuning stage. The datasets used for pre-training vs. fine-tuning can also be the same, but can also be different. It's really interesting to see how pre-training on a different task and different dataset can still be transferred to a new dataset and new task that are slightly different. Using a pre-trained network generally makes sense if both tasks or both datasets have something in common. The bigger the gap, the less effective pre-training will be. It makes little sense to pre-train a network for image classification by training it on financial data first. In this case there's too much disconnect between the pre-training and fine-tuning stages.
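A minimal PyTorch sketch of the weight-transfer step described above (the architecture, layer sizes, file name, and the two hypothetical tasks are all made up, and the actual training loops are omitted):

import torch
import torch.nn as nn

def make_net(n_out):
    # Shared "backbone" (first four modules) plus a task-specific output layer.
    return nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_out),
    )

# 1) Pre-train on task A (training loop omitted), then save the weights.
net_a = make_net(n_out=10)
# ... train net_a on dataset A here ...
torch.save(net_a.state_dict(), "pretrained.pt")

# 2) Build the network for task B and initialize it from the saved weights.
net_b = make_net(n_out=3)                      # task B has a different output size
state = torch.load("pretrained.pt")
state = {k: v for k, v in state.items()
         if not k.startswith("4.")}            # drop the old task-A output layer
net_b.load_state_dict(state, strict=False)     # backbone pre-trained, head random

# 3) Fine-tune net_b on dataset B, typically with a smaller learning rate.
optimizer = torch.optim.SGD(net_b.parameters(), lr=1e-3, momentum=0.9)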
What is pre training a neural network?
The usual way of training a network: You want to train a neural network to perform a task (e.g. classification) on a data set (e.g. a set of images). You start training by initializing the weights ran
What is pre training a neural network? The usual way of training a network: You want to train a neural network to perform a task (e.g. classification) on a data set (e.g. a set of images). You start training by initializing the weights randomly. As soon as you start training, the weights are changed in order to perform the task with less mistakes (i.e. optimization). Once you're satisfied with the training results you save the weights of your network somewhere. You are now interested in training a network to perform a new task (e.g. object detection) on a different data set (e.g. images too but not the same as the ones you used before). Instead of repeating what you did for the first network and start from training with randomly initialized weights, you can use the weights you saved from the previous network as the initial weight values for your new experiment. Initializing the weights this way is referred to as using a pre-trained network. The first network is your pre-trained network. The second one is the network you are fine-tuning. The idea behind pre-training is that random initialization is...well...random, the values of the weights have nothing to do with the task you're trying to solve. Why should a set of values be any better than another set? But how else would you initialize the weights? If you knew how to initialize them properly for the task, you might as well set them to the optimal values (slightly exaggerated). No need to train anything. You have the optimal solution to your problem. Pre-training gives the network a head start. As if it has seen the data before. What to watch out for when pre-training: The first task used in pre-training the network can be the same as the fine-tuning stage. The datasets used for pre-training vs. fine-tuning can also be the same, but can also be different. It's really interesting to see how pre-training on a different task and different dataset can still be transferred to a new dataset and new task that are slightly different. Using a pre-trained network generally makes sense if both tasks or both datasets have something in common. The bigger the gap, the less effective pre-training will be. It makes little sense to pre-train a network for image classification by training it on financial data first. In this case there's too much disconnect between the pre-training and fine-tuning stages.
What is pre training a neural network? The usual way of training a network: You want to train a neural network to perform a task (e.g. classification) on a data set (e.g. a set of images). You start training by initializing the weights ran
4,857
What is pre training a neural network?
Pretraining / fine-tuning works as follows: You have machine learning model $m$. Pre-training: You have a dataset $A$ on which you train $m$. You have a dataset $B$. Before you start training the model, you initialize some of the parameters of $m$ with the model which is trained on $A$. Fine-tuning: You train $m$ on $B$. This is one form of transfer learning. So you can transfer some of the knowledge obtained from dataset $A$ to dataset $B$. See my Machine Learning Glossary for this and more terms explained in very few words.
What is pre training a neural network?
Pretraining / fine-tuning works as follows: You have machine learning model $m$. Pre-training: You have a dataset $A$ on which you train $m$. You have a dataset $B$. Before you start training the mod
What is pre training a neural network? Pretraining / fine-tuning works as follows: You have machine learning model $m$. Pre-training: You have a dataset $A$ on which you train $m$. You have a dataset $B$. Before you start training the model, you initialize some of the parameters of $m$ with the model which is trained on $A$. Fine-tuning: You train $m$ on $B$. This is one form of transfer learning. So you can transfer some of the knowledge obtained from dataset $A$ to dataset $B$. See my Machine Learning Glossary for this and more terms explained in very few words.
What is pre training a neural network? Pretraining / fine-tuning works as follows: You have machine learning model $m$. Pre-training: You have a dataset $A$ on which you train $m$. You have a dataset $B$. Before you start training the mod
4,858
What is pre training a neural network?
The two answers above explain this well. Just want to add one subtle thing regarding the pre-training for Deep Belief Nets (DBN). The pre-training for a DBN is unsupervised learning (i.e. without labeled data) and the training afterwards is supervised learning (i.e. with labeled data).
What is pre training a neural network?
The two answers above explains well. Just want to add one subtle thing regarding the pre-training for Deep Belief Nets (DBN). The pre-training for DBN is unsupervised learning (i.e. w/o labeled data)
What is pre training a neural network? The two answers above explains well. Just want to add one subtle thing regarding the pre-training for Deep Belief Nets (DBN). The pre-training for DBN is unsupervised learning (i.e. w/o labeled data) and the training afterwards is supervised learning (i.e. w/. labeled data).
What is pre training a neural network? The two answers above explains well. Just want to add one subtle thing regarding the pre-training for Deep Belief Nets (DBN). The pre-training for DBN is unsupervised learning (i.e. w/o labeled data)
4,859
How to choose between ROC AUC and F1 score?
Calculation formulas:
Precision: TP/(TP+FP)
Recall: TP/(TP+FN)
F1-score: 2/(1/P + 1/R)
ROC/AUC: TPR = TP/(TP+FN), FPR = FP/(FP+TN)
ROC/AUC is built from one family of criteria (TPR and FPR), while the PR (Precision-Recall) curve, F1-score, precision, and recall form another family. Real data tend to have an imbalance between positive and negative samples. This imbalance has a large effect on the PR-based measures but much less on ROC/AUC. So in the real world, where positive and negative samples are very uneven, the PR curve is used more: under heavy imbalance the ROC/AUC curve may not reflect the real performance of the classifier, but the PR curve can. If you just run experiments for research papers, you can use ROC and the results will look better; on the other hand, the PR curve is more commonly used for real problems, and it has better interpretability there.
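A sketch of this effect on synthetic data (the 1% positive rate, the model, and the 0.5 threshold are arbitrary choices for illustration):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

# Roughly 1% positives, as in fraud-style problems.
X, y = make_classification(n_samples=20000, weights=[0.99, 0.01],
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]

print("ROC AUC:", roc_auc_score(y_te, prob))            # usually looks comfortable
print("PR  AUC:", average_precision_score(y_te, prob))  # typically much lower
print("F1     :", f1_score(y_te, (prob >= 0.5).astype(int)))  # threshold-dependent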
How to choose between ROC AUC and F1 score?
Calculation formula: Precision TP/(TP+FP) Recall: TP/(TP+FN) F1-score: 2/(1/P+1/R) ROC/AUC: TPR=TP/(TP+FN), FPR=FP/(FP+TN) ROC / AUC is the same criteria and the PR (Precision-Recall) curve (F1-sco
How to choose between ROC AUC and F1 score? Calculation formula: Precision TP/(TP+FP) Recall: TP/(TP+FN) F1-score: 2/(1/P+1/R) ROC/AUC: TPR=TP/(TP+FN), FPR=FP/(FP+TN) ROC / AUC is the same criteria and the PR (Precision-Recall) curve (F1-score, Precision, Recall) is also the same criteria. Real data will tend to have an imbalance between positive and negative samples. This imbalance has large effect on PR but not ROC/AUC. So in the real world, the PR curve is used more since positive and negative samples are very uneven. The ROC/AUC curve does not reflect the performance of the classifier, but the PR curve can. If you just do the experiment in research papers, you can use the ROC, the experimental results will be more beautiful. On another hand, PR curve use in the real problem, and it has better interpretability.
How to choose between ROC AUC and F1 score? Calculation formula: Precision TP/(TP+FP) Recall: TP/(TP+FN) F1-score: 2/(1/P+1/R) ROC/AUC: TPR=TP/(TP+FN), FPR=FP/(FP+TN) ROC / AUC is the same criteria and the PR (Precision-Recall) curve (F1-sco
4,860
How to choose between ROC AUC and F1 score?
None of the measures listed here are proper accuracy scoring rules, i.e., rules that are optimized by a correct model. Consider the Brier score and log-likelihood-based measures such as pseudo $R^2$. The $c$-index (AUROC; concordance probability) is not proper but is good for describing a single model. It is not sensitive enough to use for choosing models or comparing even as few as two models.
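For reference, a minimal sketch of the two proper scoring rules mentioned above, using scikit-learn (the toy labels and probabilities are made up):

import numpy as np
from sklearn.metrics import brier_score_loss, log_loss

y_true = np.array([0, 0, 1, 1, 1])
p_hat = np.array([0.1, 0.4, 0.35, 0.8, 0.9])   # predicted probabilities of class 1

print("Brier score:", brier_score_loss(y_true, p_hat))  # mean squared error of the probabilities
print("Log loss:", log_loss(y_true, p_hat))              # negative mean log-likelihood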
How to choose between ROC AUC and F1 score?
None of the measures listed here are proper accuracy scoring rules, i.e., rules that are optimized by a correct model. Consider the Brier score and log-likelihood-based measures such as pseudo $R^2$.
How to choose between ROC AUC and F1 score? None of the measures listed here are proper accuracy scoring rules, i.e., rules that are optimized by a correct model. Consider the Brier score and log-likelihood-based measures such as pseudo $R^2$. The $c$-index (AUROC; concordance probability) is not proper but is good for describing a single model. It is not sensitive enough to use for choosing models or comparing even as few as two models.
How to choose between ROC AUC and F1 score? None of the measures listed here are proper accuracy scoring rules, i.e., rules that are optimized by a correct model. Consider the Brier score and log-likelihood-based measures such as pseudo $R^2$.
4,861
How to choose between ROC AUC and F1 score?
The above answers are both good. But what I want to point out is that AUC (area under the ROC curve) is problematic especially when the data are imbalanced (so-called highly skewed: $Skew=\frac{negative\;examples}{positive\;examples}$ is large). This kind of situation is very common in action detection, fraud detection, bankruptcy prediction, etc. That is, the positive examples you care about have a relatively low rate of occurrence. With imbalanced data, the AUC can still give you a specious value around 0.8. However, it is high because of the huge number of negative examples, rather than because of a large TP (true positive) count. Consider the example below:
TP=155, FN=182
FP=84049, TN=34088
So when you use AUC to measure the performance of a classifier, the problem is that an increase in AUC doesn't really reflect a better classifier. It's just a side effect of having too many negative examples. You can simply try this on your imbalanced dataset and you will see the issue. The paper Facing Imbalanced Data: Recommendations for the Use of Performance Metrics found that "while ROC was unaffected by skew, the precision-recall curves suggest that ROC may mask poor performance in some cases." Searching for a good performance metric is still an open question. A general F-score may help $$ F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}$$ where $\beta$ controls the relative importance of recall compared to precision. Then, my suggestions for imbalanced data are similar to this post. You can also try the decile table, which can be constructed by searching for "Two-by-Two Classification and Decile Tables". Meanwhile, I am also studying this problem and will report a better measure.
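The $F_\beta$ score above is available directly in scikit-learn if you want to experiment with the precision/recall trade-off (the toy labels below are made up):

import numpy as np
from sklearn.metrics import f1_score, fbeta_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0])

print(f1_score(y_true, y_pred))               # beta = 1: precision and recall weighted equally
print(fbeta_score(y_true, y_pred, beta=2.0))  # beta > 1: recall weighted more heavily
print(fbeta_score(y_true, y_pred, beta=0.5))  # beta < 1: precision weighted more heavily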
How to choose between ROC AUC and F1 score?
Above answers are both good. But what I want to point out is AUC (Area under ROC) is problematic especially the data is imbalanced (so called highly skewed: $Skew=\frac{negative\;examples}{positive\;
How to choose between ROC AUC and F1 score? Above answers are both good. But what I want to point out is AUC (Area under ROC) is problematic especially the data is imbalanced (so called highly skewed: $Skew=\frac{negative\;examples}{positive\;examples}$ is large). This kind of situations is very common in action detection, fraud detection, bankruptcy prediction ect. That is, the positive examples you care have relatively low rates of occurrence. With imbalanced data, the AUC still gives you specious value around 0.8. However, it is high due to large FP, rather than the large TP (True positive). Such as the example below, TP=155, FN=182 FP=84049, TN=34088 So when you use AUC to measure the performance of classifier, the problem is the increasing of AUC doesn't really reflect a better classifier. It's just the side-effect of too many negative examples. You can simply try in you imbalanced dataset, you will see this issue. The paper Facing Imbalanced Data Recommendations for the Use of Performance Metrics found "while ROC was unaffected by skew, the precision-recall curves suggest that ROC may mask poor performance in some cases." Searching for a good performance metrics is still a open question. A general F1-score may help $$ F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}$$ where the $\beta$ is the relative importance of precision comparing to recall. Then, my suggestions for imbalanced data are similar to this post. You can also try the decile table, which can be construct by searching "Two-by-Two Classification and Decile Tables". Meanwhile, I am also studying on this problem and will give better measure.
How to choose between ROC AUC and F1 score? Above answers are both good. But what I want to point out is AUC (Area under ROC) is problematic especially the data is imbalanced (so called highly skewed: $Skew=\frac{negative\;examples}{positive\;
4,862
How to choose between ROC AUC and F1 score?
To put it in very simple words: when you have a data imbalance, i.e., the difference between the number of examples you have for the positive and negative classes is large, you should always use the F1-score. Otherwise you can use ROC/AUC curves.
How to choose between ROC AUC and F1 score?
To put in very simple words when you have a data imbalance i.e., the difference between the number of examples you have for positive and negative classes is large, you should always use F1-score. Othe
How to choose between ROC AUC and F1 score? To put in very simple words when you have a data imbalance i.e., the difference between the number of examples you have for positive and negative classes is large, you should always use F1-score. Otherwise you can use ROC/AUC curves.
How to choose between ROC AUC and F1 score? To put in very simple words when you have a data imbalance i.e., the difference between the number of examples you have for positive and negative classes is large, you should always use F1-score. Othe
4,863
How to choose between ROC AUC and F1 score?
Despite the less interpretable graph that AUC integrates, the number itself tells you the probability that a randomly chosen positive would be ranked higher than a randomly chosen negative. This is a nice summary of the degree to which positive examples are scored higher than negative examples. If the negatives are ranked higher than all the positives, your AUC is 0. If your negatives are ranked lower than all the positives, the AUC is 1. If the negatives are in the middle or scattered randomly, AUC is around 0.5. Every time your model performance degrades to the point that a positive and negative instance trade ranks when sorted by model score, AUC decreases by a constant number equal to 1/(number of positives x number of negatives). If you have one negative and 99 positive examples, and that one negative example is ranked higher than all the positive examples, ROC AUC is 0 but you can still achieve a high F1. With a threshold at or lower than your lowest model score (0.5 will work if your model scores everything higher than 0.5), precision and recall are 99% and 100% respectively, leaving your F1 ~99.5%. In this example, your model performed far worse than a random number generator since it assigned its highest confidence to the only negative example in the dataset. At the same time, it may well be very successful if you care about precision and recall--the problem was so easy even a random number generator could do it! As a rule of thumb, I've found AUC is useful for comparing models as you're experimenting since it will tell you if you have a bad model despite an easy problem. Precision, recall, F1, and anything that relies on thresholds are useful once you're trying to figure out whether and to what extent it would meet production requirements.
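The 1-negative / 99-positive scenario is easy to reproduce numerically (the scores below are made up to match the description):

import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# 99 positives scored between 0.5 and 0.99; the single negative gets the top score.
y_true = np.array([1] * 99 + [0])
scores = np.concatenate([np.linspace(0.50, 0.99, 99), [1.0]])

print(roc_auc_score(y_true, scores))                    # 0.0: the one negative outranks every positive
print(f1_score(y_true, (scores >= 0.5).astype(int)))    # ~0.995 at a 0.5 threshold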
How to choose between ROC AUC and F1 score?
Despite the less interpretable graph that AUC integrates, the number itself tells you the probability that a randomly chosen positive would be ranked higher than a randomly chosen negative. This is a
How to choose between ROC AUC and F1 score? Despite the less interpretable graph that AUC integrates, the number itself tells you the probability that a randomly chosen positive would be ranked higher than a randomly chosen negative. This is a nice summary of the degree to which positive examples are scored higher than negative examples. If the negatives are ranked higher than all the positives, your AUC is 0. If your negatives are ranked lower than all the positives, the AUC is 1. If the negatives are in the middle or scattered randomly, AUC is around 0.5. Every time your model performance degrades to the point that a positive and negative instance trade ranks when sorted by model score, AUC decreases by a constant number equal to 1/(number of positives x number of negatives). If you have one negative and 99 positive examples, and that one negative example is ranked higher than all the positive examples, ROC AUC is 0 but you can still achieve a high F1. With a threshold at or lower than your lowest model score (0.5 will work if your model scores everything higher than 0.5), precision and recall are 99% and 100% respectively, leaving your F1 ~99.5%. In this example, your model performed far worse than a random number generator since it assigned its highest confidence to the only negative example in the dataset. At the same time, it may well be very successful if you care about precision and recall--the problem was so easy even a random number generator could do it! As a rule of thumb, I've found AUC is useful for comparing models as you're experimenting since it will tell you if you have a bad model despite an easy problem. Precision, recall, F1, and anything that relies on thresholds are useful once you're trying to figure out whether and to what extent it would meet production requirements.
How to choose between ROC AUC and F1 score? Despite the less interpretable graph that AUC integrates, the number itself tells you the probability that a randomly chosen positive would be ranked higher than a randomly chosen negative. This is a
4,864
How to choose between ROC AUC and F1 score?
If the objective of classification is scoring by probability, it is better to use AUC which averages over all possible thresholds. However, if the objective of classification just needs to classify between two possible classes and doesn't require how likely each class is predicted by the model, it is more appropriate to rely on F-score using a particular threshold.
How to choose between ROC AUC and F1 score?
If the objective of classification is scoring by probability, it is better to use AUC which averages over all possible thresholds. However, if the objective of classification just needs to classify be
How to choose between ROC AUC and F1 score? If the objective of classification is scoring by probability, it is better to use AUC which averages over all possible thresholds. However, if the objective of classification just needs to classify between two possible classes and doesn't require how likely each class is predicted by the model, it is more appropriate to rely on F-score using a particular threshold.
How to choose between ROC AUC and F1 score? If the objective of classification is scoring by probability, it is better to use AUC which averages over all possible thresholds. However, if the objective of classification just needs to classify be
4,865
How to choose between ROC AUC and F1 score?
For some multi-class classification problems, analyzing and visualizing ROC/AUC is not straightforward. You may look into this question: How to plot ROC curves in multiclass classification?. In such situations, the F1 score could be a better metric. The F1 score is also a common choice for information retrieval problems and popular in industry settings. Here is a well-explained example: Building ML models is hard. Deploying them in real business environments is harder.
How to choose between ROC AUC and F1 score?
For some multi class classification problems, analyzing and visualizing ROC/AUC is not straightforward. You may look into this question, How to plot ROC curves in multiclass classification?. Under suc
How to choose between ROC AUC and F1 score? For some multi class classification problems, analyzing and visualizing ROC/AUC is not straightforward. You may look into this question, How to plot ROC curves in multiclass classification?. Under such situation, using F1 score could be a better metric. And F1 score is a common choice for information retrieval problem and popular in industry settings. Here is an well explained example, Building ML models is hard. Deploying them in real business environments is harder.
How to choose between ROC AUC and F1 score? For some multi class classification problems, analyzing and visualizing ROC/AUC is not straightforward. You may look into this question, How to plot ROC curves in multiclass classification?. Under suc
4,866
How to choose between ROC AUC and F1 score?
Let's start with some formulas to see how each measure is calculated (see Wikipedia for a complete list):
Precision: $\frac{TP}{TP+FP}$
Recall: $\frac{TP}{TP+FN}$
F1-score: $\frac{2}{\frac{1}{Precision}+\frac{1}{Recall}}=2\times\frac{Precision \times Recall}{Precision + Recall}$
The ROC curve (and hence the AUC) is built using the following measures:
TPR = $\frac{TP}{TP+FN}$ = Recall
FPR = $\frac{FP}{FP+TN}$
The PR curve is built using the following measures:
Precision
Recall
Notice that the AUC uses the TPR and FPR criteria. In contrast, the PR (Precision-Recall) curve and the F1-score use the Precision and Recall criteria. Note that Recall = TPR, so that quantity enters both sets of measures identically; we therefore only need to compare Precision and FPR to see the difference between them. In general both measures will nicely assess the performance of a classifier. Their difference is pronounced when classes are imbalanced, i.e. when the number of samples in the positive class (say class Rare) is very small compared to the negative class (say class Freq). When a classifier is (wrongly) predicting many samples from the Freq class as the Rare class, then the Precision is going to be small, but the FPR can still remain small because TN is huge (see the equations above). As a result, the PR curve shows this lack of performance much more drastically than the ROC curve and the AUC do. Real-world data tend to be imbalanced, and often the Rare class is of interest. In such cases, using the PR curve is recommended. The F1-score produces a single number, which is more convenient to work with. So when many classifiers are being compared (or during hyper-parameter optimisation), the F1-score is used instead of drawing a PR curve.
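If you want to look at the two curves themselves rather than single numbers, scikit-learn exposes both; here is a sketch (the imbalance level, the model, and the example threshold are arbitrary):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, precision_recall_curve

X, y = make_classification(n_samples=10000, weights=[0.95, 0.05], random_state=0)
prob = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

fpr, tpr, roc_thr = roc_curve(y, prob)               # ROC: built from FPR and TPR
prec, rec, pr_thr = precision_recall_curve(y, prob)  # PR: built from Precision and Recall

# Compare the two views at roughly the same threshold (~0.5). With few positives,
# FPR stays small even when false positives are numerous relative to the true
# positives, while precision makes the problem visible.
i = np.argmin(np.abs(roc_thr - 0.5))
j = np.argmin(np.abs(pr_thr - 0.5))
print("ROC view: FPR =", round(fpr[i], 3), " TPR =", round(tpr[i], 3))
print("PR  view: precision =", round(prec[j], 3), " recall =", round(rec[j], 3))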
How to choose between ROC AUC and F1 score?
Lets start with some formula to see how each measure is calculated (see Wikipedia for a complete list): Precision: $\frac{TP}{TP+FP}$ Recall: $\frac{TP}{TP+FN}$ F1-score: $\frac{2}{\frac{1}{Precision
How to choose between ROC AUC and F1 score? Lets start with some formula to see how each measure is calculated (see Wikipedia for a complete list): Precision: $\frac{TP}{TP+FP}$ Recall: $\frac{TP}{TP+FN}$ F1-score: $\frac{2}{\frac{1}{Precision}+\frac{1}{Recall}}=2\times\frac{Precision \times Recall}{Precision + Recall}$ AUC curve is built using the following measures: TPR = $\frac{TP}{TP+FN}$=Recall FPR = $\frac{FP}{FP+TN}$ PR curve is built using the following measures: Precision Recall Notice that the AUC is using TPR and FPR criteria. In contrast, the the PR (Precision-Recall) curve and F1-score are using Precision, Recall criteria. Note that the Recall = TPR used in both measures and is identical. So we only focus on the Precision and FPR to see the difference between them. In general both measures will nicely assess the performance of a classifier. Their difference is pronounced when classes are imbalanced, i.e. when number of samples in the positive class (say class Rare) is very small compared to the negative class (say class Freq). When a classifier is (wrongly) predicting many samples from Freq class as Rare class, then the Precision is going to be small, but FPR could still be large (see the equation above). As a result, the PR curve will more drastically show this lack of performance compared to AUC. The real-world data tend to be imbalance and often the Rare class is of interest. In such a cases, the using PR curve is recommended. The F1-score produces a single number which is more convenient to work with. So when many classifiers are being compared (or during hyper parameter optimisation), then F1-score is used instead of drawing a PR curve.
How to choose between ROC AUC and F1 score? Lets start with some formula to see how each measure is calculated (see Wikipedia for a complete list): Precision: $\frac{TP}{TP+FP}$ Recall: $\frac{TP}{TP+FN}$ F1-score: $\frac{2}{\frac{1}{Precision
4,867
How to determine the optimal threshold for a classifier and generate ROC curve?
Use the SVM classifier to classify a set of annotated examples; "one point" in ROC space can then be identified from that one set of predictions. Suppose the number of examples is 200; first count the number of examples in each of the four cases.
\begin{array} {|r|r|r|} \hline & \text{labeled true} & \text{labeled false} \\ \hline \text{predicted true} &71& 28\\ \hline \text{predicted false} &57&44 \\ \hline \end{array}
Then compute the TPR (True Positive Rate) and FPR (False Positive Rate): $TPR = 71/(71+57)=0.5547$ and $FPR=28/(28+44) = 0.3889$. In ROC space, the x-axis is FPR and the y-axis is TPR, so the point $(0.3889, 0.5547)$ is obtained. To draw an ROC curve:
Adjust some threshold value that controls the number of examples labelled true or false. For example, if the concentration of a certain protein above α% signifies a disease, different values of α yield different final TPR and FPR values. The threshold values can be determined in a way similar to a grid search: label training examples with different threshold values, train classifiers with the different sets of labelled examples, run the classifiers on the test data, compute FPR values, and select threshold values that cover FPR values from low (close to 0) to high (close to 1), i.e., close to 0, 0.05, 0.1, ..., 0.95, 1.
Generate many sets of annotated examples.
Run the classifier on the sets of examples.
Compute a (FPR, TPR) point for each of them.
Draw the final ROC curve.
Some details can be checked at http://en.wikipedia.org/wiki/Receiver_operating_characteristic. Besides, these two links are useful for deciding on an optimal threshold. A simple method is to take the threshold with the maximal sum of the true positive and true negative rates. Other, finer criteria may involve different thresholds and additional variables such as financial costs, etc.
http://www.medicalbiostatistics.com/roccurve.pdf
http://www.kovcomp.co.uk/support/XL-Tut/life-ROC-curves-receiver-operating-characteristic.html
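The single-point computation from the table, and the threshold sweep, look like this in Python (the continuous scores in the second part are made-up stand-ins for SVM decision values):

import numpy as np
from sklearn.metrics import roc_curve

# One (FPR, TPR) point from the 2x2 table above.
TP, FP, FN, TN = 71, 28, 57, 44
print(FP / (FP + TN), TP / (TP + FN))      # (0.3889..., 0.5547...)

# The full curve comes from sweeping a threshold over continuous scores.
y_true = np.array([0, 0, 0, 1, 1, 1, 1])
scores = np.array([-1.2, -0.3, 0.4, -0.1, 0.5, 0.9, 1.7])
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(np.column_stack([thresholds, fpr, tpr]))   # one (FPR, TPR) point per threshold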
How to determine the optimal threshold for a classifier and generate ROC curve?
Use the SVM classifier to classify a set of annotated examples, and "one point" on the ROC space based on one prediction of the examples can be identified. Suppose the number of examples is 200, first
How to determine the optimal threshold for a classifier and generate ROC curve? Use the SVM classifier to classify a set of annotated examples, and "one point" on the ROC space based on one prediction of the examples can be identified. Suppose the number of examples is 200, first count the number of examples of the four cases. \begin{array} {|r|r|r|} \hline & \text{labeled true} & \text{labeled false} \\ \hline \text{predicted true} &71& 28\\ \hline \text{predicted false} &57&44 \\ \hline \end{array} Then compute TPR (True Positive Rate) and FPR (False Positive Rate). $TPR = 71/ (71+57)=0.5547$, and $FPR=28/(28+44) = 0.3889$ On the ROC space, the x-axis is FPR, and the y-axis is TPR. So point $(0.3889, 0.5547)$ is obtained. To draw an ROC curve, just Adjust some threshold value that control the number of examples labelled true or false For example, if concentration of certain protein above α% signifies a disease, different values of α yield different final TPR and FPR values. The threshold values can be simply determined in a way similar to grid search; label training examples with different threshold values, train classifiers with different sets of labelled examples, run the classifier on the test data, compute FPR values, and select the threshold values that cover low (close to 0) and high (close to 1) FPR values, i.e., close to 0, 0.05, 0.1, ..., 0.95, 1 Generate many sets of annotated examples Run the classifier on the sets of examples Compute a (FPR, TPR) point for each of them Draw the final ROC curve Some details can be checked in http://en.wikipedia.org/wiki/Receiver_operating_characteristic. Besides, these two links are useful about how to determine an optimal threshold. A simple method is to take the one with maximal sum of true positive and false negative rates. Other finer criteria may include other variables involving different thresholds like financial costs, etc. http://www.medicalbiostatistics.com/roccurve.pdf http://www.kovcomp.co.uk/support/XL-Tut/life-ROC-curves-receiver-operating-characteristic.html
How to determine the optimal threshold for a classifier and generate ROC curve? Use the SVM classifier to classify a set of annotated examples, and "one point" on the ROC space based on one prediction of the examples can be identified. Suppose the number of examples is 200, first
4,868
How to determine the optimal threshold for a classifier and generate ROC curve?
The choice of a threshold depends on the relative importance of TPR and FPR in your classification problem. For example, if your classifier will decide which criminal suspects will receive a death sentence, false positives are very bad (innocents will be killed!). Thus you would choose a threshold that yields a low FPR while keeping a reasonable TPR (so you actually catch some true criminals). If there is no external concern about a low TPR or a high FPR, one option is to weight them equally by choosing the threshold that maximizes $TPR-FPR$.
How to determine the optimal threshold for a classifier and generate ROC curve?
The choice of a threshold depends on the importance of TPR and FPR classification problem. For example, if your classifier will decide which criminal suspects will receive a death sentence, false posi
How to determine the optimal threshold for a classifier and generate ROC curve? The choice of a threshold depends on the importance of TPR and FPR classification problem. For example, if your classifier will decide which criminal suspects will receive a death sentence, false positives are very bad (innocents will be killed!). Thus you would choose a threshold that yields a low FPR while keeping a reasonable TPR (so you actually catch some true criminals). If there is no external concern about low TPR or high FPR, one option is to weight them equally by choosing the threshold that maximizes $TPR-FPR$.
How to determine the optimal threshold for a classifier and generate ROC curve? The choice of a threshold depends on the importance of TPR and FPR classification problem. For example, if your classifier will decide which criminal suspects will receive a death sentence, false posi
4,869
How to determine the optimal threshold for a classifier and generate ROC curve?
Choose the point closest to the top left corner of your ROC space. Now the threshold used to generate this point should be the optimal one.
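In code, "closest to the top left corner" is just an argmin over the distances from the ROC points to (0, 1); a small sketch with made-up toy scores:

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 1])
scores = np.array([0.0, 0.09, 0.05, 0.75, 1.0])

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = thresholds[np.argmin(np.hypot(fpr, 1 - tpr))]   # distance to the ideal corner (0, 1)
print(best)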
How to determine the optimal threshold for a classifier and generate ROC curve?
Choose the point closest to the top left corner of your ROC space. Now the threshold used to generate this point should be the optimal one.
How to determine the optimal threshold for a classifier and generate ROC curve? Choose the point closest to the top left corner of your ROC space. Now the threshold used to generate this point should be the optimal one.
How to determine the optimal threshold for a classifier and generate ROC curve? Choose the point closest to the top left corner of your ROC space. Now the threshold used to generate this point should be the optimal one.
4,870
How to determine the optimal threshold for a classifier and generate ROC curve?
####################################
# The optimal cut-off is where TPR is high and FPR is low,
# i.e. where Youden's J statistic (tpr - fpr) is maximal.
####################################
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

def plot_roc_curve(fpr, tpr):
    plt.plot(fpr, tpr, color='orange', label='ROC')
    plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver Operating Characteristic (ROC) Curve')
    plt.legend()
    plt.show()

y_true = np.array([0, 0, 1, 1, 1])
y_scores = np.array([0.0, 0.09, 0.05, 0.75, 1])

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print(tpr)
print(fpr)
print(thresholds)
print(roc_auc_score(y_true, y_scores))

optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", optimal_threshold)
plot_roc_curve(fpr, tpr)

Output: Threshold value is: 0.75
How to determine the optimal threshold for a classifier and generate ROC curve?
#################################### The optimal cut off would be where tpr is high and fpr is low tpr - (1-fpr) is zero or near to zero is the optimal cut off point ##################################
How to determine the optimal threshold for a classifier and generate ROC curve? #################################### The optimal cut off would be where tpr is high and fpr is low tpr - (1-fpr) is zero or near to zero is the optimal cut off point #################################### def plot_roc_curve(fpr, tpr): plt.plot(fpr, tpr, color='orange', label='ROC') plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver Operating Characteristic (ROC) Curve') plt.legend() plt.show() y_true = np.array([0,0, 1, 1,1]) y_scores = np.array([0.0,0.09, .05, .75,1]) fpr, tpr, thresholds = roc_curve(y_true, y_scores) print(tpr) print(fpr) print(thresholds) print(roc_auc_score(y_true, y_scores)) optimal_idx = np.argmax(tpr - fpr) optimal_threshold = thresholds[optimal_idx] print("Threshold value is:", optimal_threshold) plot_roc_curve(fpr, tpr) Threshold value is: 0.75
How to determine the optimal threshold for a classifier and generate ROC curve? #################################### The optimal cut off would be where tpr is high and fpr is low tpr - (1-fpr) is zero or near to zero is the optimal cut off point ##################################
4,871
How to determine the optimal threshold for a classifier and generate ROC curve?
A really easy way to pick a threshold is to take the median of the predicted values for the positive cases in a test set. This becomes your threshold. The threshold comes relatively close to the one you would get from the ROC curve at the point where the true positive rate (TPR) and 1 - false positive rate (FPR) overlap. This TPR / (1 - FPR) crossing point balances the true positive rate against the false positive rate, i.e. sensitivity against specificity.
How to determine the optimal threshold for a classifier and generate ROC curve?
A really easy way to pick a threshold is to take the median predicted values of the positive cases for a test set. This becomes your threshold. The threshold comes relatively close to the same thresho
How to determine the optimal threshold for a classifier and generate ROC curve? A really easy way to pick a threshold is to take the median predicted values of the positive cases for a test set. This becomes your threshold. The threshold comes relatively close to the same threshold you would get by using the roc curve where true positive rate(tpr) and 1 - false positive rate(fpr) overlap. This tpr (cross) 1-fpr cross maximizes true positive while minimizing false negatives.
How to determine the optimal threshold for a classifier and generate ROC curve? A really easy way to pick a threshold is to take the median predicted values of the positive cases for a test set. This becomes your threshold. The threshold comes relatively close to the same thresho
4,872
How to determine the optimal threshold for a classifier and generate ROC curve?
Following Will's comment. This article (www0.cs.ucl.ac.uk/staff/W.Langdon/roc) has some good points under the heading "Choosing the Operating Point". picking the point closest to the top left corner of a ROC curve equates to choosing the operating point such that TPR = TNR, i.e. false positives are equally bad as false negatives. – Will Nov 13 at 15:57. Using the isocost line from the link www0.cs.ucl.ac.uk/staff/W.Langdon/roc, and these concepts:
alpha = cost_false_positive = cost of a false positive (false alarm)
beta = cost_false_negative = cost of missing a positive (false negative)
p = proportion of positive cases
Then the average expected cost of classification at point (x, y) in ROC space is C = (1-p) alpha x + p beta (1-y). To find the best threshold you have to minimize C, so: best_threshold = argmin( (1-p) alpha x + p beta (1-y) ). This seems to work. I am open to suggestions or remarks. Here is the code. It needs binary_thresholds, fp_rate, and recall as inputs, where fp_rate and recall have shape (num_thresholds, 1) or (num_thresholds, num_classes).

import numpy as np

def find_best_binary_auc_threshold(binary_thresholds, fp_rate, recall,
                                   proportion_positive_case: float = 0.5,
                                   cost_false_positive: float = 0.5,
                                   cost_false_negative: float = 0.5,
                                   argmin_axis: int = 0):
    # Expected misclassification cost for every candidate threshold.
    isocost_lines = (cost_false_positive * (1 - proportion_positive_case) * fp_rate
                     + cost_false_negative * proportion_positive_case * (1 - recall))
    best_indexes = np.argmin(isocost_lines, axis=argmin_axis)
    best_thresholds = binary_thresholds[best_indexes.tolist()]
    return best_thresholds, best_indexes
4,873
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
I would like to oppose the other two answers based on a paper (in German) by Kubinger, Rasch and Moder (2009). They argue, based on "extensive" simulations from distributions either meeting or not meeting the assumptions imposed by a t-test (normality and homogeneity of variance), that the Welch test performs equally well when the assumptions are met (i.e., basically the same probability of committing alpha and beta errors) but outperforms the t-test if the assumptions are not met, especially in terms of power. Therefore, they recommend always using the Welch test if the sample size exceeds 30.

As a meta-comment: for people interested in statistics (like me and probably most others here), an argument based on data (as mine) should count at least as much as arguments based solely on theoretical grounds (as the others here).

Update: After thinking about this topic again, I found two further recommendations, of which the newer one supports my point. Look at the original papers (which are both, at least for me, freely available) for the argumentation that leads to these recommendations.

The first recommendation comes from Graeme D. Ruxton in 2006: "If you want to compare the central tendency of 2 populations based on samples of unrelated data, then the unequal variance t-test should always be used in preference to the Student's t-test or Mann–Whitney U test." In: Ruxton, G.D., 2006. The unequal variance t-test is an underused alternative to Student's t-test and the Mann–Whitney U test. Behav. Ecol. 17, 688–690.

The second (older) recommendation is from Coombs et al. (1996, p. 148): "In summary, the independent samples t test is generally acceptable in terms of controlling Type I error rates provided there are sufficiently large equal-sized samples, even when the equal population variance assumption is violated. For unequal-sized samples, however, an alternative that does not assume equal population variances is preferable. Use the James second-order test when distributions are either short-tailed symmetric or normal. Promising alternatives include the Wilcox H and Yuen trimmed means tests, which provide broader control of Type I error rates than either the Welch test or the James test and have greater power when data are long-tailed." (emphasis added) In: Coombs WT, Algina J, Oltman D. 1996. Univariate and multivariate omnibus hypothesis tests selected to control type I error rates when population variances are not necessarily equal. Rev Educ Res 66:137–79.
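For readers who want to run both tests side by side, a minimal sketch using scipy; the simulated data are purely illustrative and not taken from the cited papers:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=40)
b = rng.normal(loc=0.5, scale=2.0, size=40)

# Classical Student's t-test (pooled variance, assumes equal variances)
t_student, p_student = stats.ttest_ind(a, b, equal_var=True)
# Welch's t-test (no equal-variance assumption)
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

print(p_student, p_welch)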
4,874
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
Of course, one could ditch both tests and start using a Bayesian t-test (Savage-Dickey ratio test), which can account for equal and unequal variances and, best of all, allows for a quantification of evidence in favor of the null hypothesis (which means no more of the old "failure to reject" talk). This test is very simple (and fast) to implement, and there is a paper that clearly explains to readers unfamiliar with Bayesian statistics how to use it, along with an R script. You basically can just insert your data and send the commands to the R console: Wetzels, R., Raaijmakers, J. G. W., Jakab, E., & Wagenmakers, E.-J. (2009). How to Quantify Support For and Against the Null Hypothesis: A Flexible WinBUGS Implementation of a Default Bayesian t-test. There is also a tutorial for all this, with example data: http://www.ruudwetzels.com/index.php?src=SDtest I know this is not a direct response to what was asked, but I thought readers might enjoy having this nice alternative.
4,875
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
Because exact results are preferable to approximations and avoid odd edge cases where the approximation may lead to a different result than the exact method. The Welch method isn't a quicker way to do any old t-test; it's a tractable approximation to an otherwise very hard problem: how to construct a t-test under unequal variances. The equal-variance case is well understood, simple, and exact, and therefore should always be used when possible.
4,876
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
Two reasons I can think of:

1. Regular Student's T is pretty robust to heteroscedasticity if the sample sizes are equal.
2. If you believe strongly a priori that the data are homoscedastic, then you lose nothing and might gain a small amount of power by using Student's T instead of Welch's T.

One reason that I would not give is that Student's T is exact and Welch's T isn't. IMHO the exactness of Student's T is academic, because it's only exact for normally distributed data, and no real data are exactly normally distributed. I can't think of a single quantity that people actually measure and analyze statistically where the distribution could plausibly have support on all real numbers. For example, there are only so many atoms in the universe, and some quantities can't be negative. Therefore, when you use any kind of T-test on real data, you're making an approximation anyhow.
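A small simulation sketch of point 1, checking the type I error of the pooled-variance t-test with equal sample sizes but unequal variances; the parameter values are illustrative assumptions, not taken from the answer:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sim, alpha, n = 20000, 0.05, 30
rejections = 0

for _ in range(n_sim):
    a = rng.normal(0.0, 1.0, size=n)   # same mean, sd = 1
    b = rng.normal(0.0, 3.0, size=n)   # same mean, sd = 3
    rejections += stats.ttest_ind(a, b, equal_var=True).pvalue < alpha

# With equal n, the empirical rate should stay close to the nominal 0.05,
# illustrating the robustness claim above
print("Empirical type I error:", rejections / n_sim)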
4,877
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
The fact that something more complex reduces to something less complex when some assumption is met is not enough to throw the simpler method away.
4,878
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
I would take the opposite view here. Why bother with the Welch test when the standard unpaired Student's t-test gives you nearly identical results? I studied this issue a while back and explored a range of scenarios in an attempt to break down the t-test and favor the Welch test. To do so I used sample sizes up to 5 times greater for one group vs. the other, and I explored variances up to 25 times greater for one group vs. the other. And it really did not make any material difference: the unpaired t-test still generated a range of p-values that were nearly identical to those of the Welch test. You can see my work at the following link, focusing especially on slides 5 and 6. http://www.slideshare.net/gaetanlion/unpaired-t-test-family
4,879
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
With the assumption of equal variances, one can derive the non-asymptotic distribution of the t statistic. But when the assumption is violated, the two variance terms cannot be cancelled, and we cannot reduce the distribution of the statistic to a fixed one, so an exact test can't be done. The Welch t-test is an approximation that is robust and gives an approximate degrees of freedom, but it is not "exact", which means its type I error is not exactly what you want theoretically. From my perspective, even when a homogeneity test does not reject "equal variance", there is still a risk in using a t-test that assumes the same variance, because the true variance difference may be small or non-significant, but not zero. We need a test that does not rely on "same variance", rather than using a homogeneity test to justify "same variance". https://arxiv.org/abs/2210.16473. Here is my new paper; I hope it can help with your question. It derives "exact" or non-asymptotic t statistics when the variances of the two groups are different, and it reaches the maximal degrees of freedom that an "exact test" can allow. In small sample cases, it significantly outperforms Welch's t-test in the sense of type I error ($\mu_1=\mu_2=0$, $\sigma_1=1$, $\sigma_2=2$, $n_1=5$, $n_2=50$). Its idea is: a paired t-test can always give an exact test, even when variances are unequal, but it loses some information when $n_1<n_2$, because it can only use $n_1$ data points from the $n_2$ samples. This paper uses an orthogonal matrix to project the longer vector (length $n_2$) into a length-$n_1$ vector, and then we can use a paired t-test with enough/compressed information. Go here for instructions for use of a package developed by me: https://github.com/hobbitish1028/Te_test
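A small simulation sketch of the type I error comparison in the stated setting ($\mu_1=\mu_2=0$, $\sigma_1=1$, $\sigma_2=2$, $n_1=5$, $n_2=50$); it covers only the classical pooled and Welch tests, not the projection-based test from the paper, which is available in the linked package:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim, alpha = 20000, 0.05
reject_pooled = reject_welch = 0

for _ in range(n_sim):
    x = rng.normal(0.0, 1.0, size=5)    # mu1 = 0, sigma1 = 1, n1 = 5
    y = rng.normal(0.0, 2.0, size=50)   # mu2 = 0, sigma2 = 2, n2 = 50
    reject_pooled += stats.ttest_ind(x, y, equal_var=True).pvalue < alpha
    reject_welch += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha

print("Empirical type I error, pooled t:", reject_pooled / n_sim)
print("Empirical type I error, Welch t :", reject_welch / n_sim)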
4,880
When conducting a t-test why would one prefer to assume (or test for) equal variances rather than always use a Welch approximation of the df?
It's true that the frequentist properties of the Welch-corrected test are better than those of the ordinary Student's T, at least for errors. I agree that that alone is a pretty good argument for the Welch test. However, I'm usually reluctant to recommend the Welch correction because its use is often deceptive. Which is, admittedly, not a critique of the test itself.

The reason I don't recommend the Welch correction is that it doesn't just change the degrees of freedom and the subsequent theoretical distribution from which the p-value is drawn. It makes the test non-parametric. To perform a Welch-corrected t-test one still pools variance as if equal variance can be assumed, but then changes the final testing procedure, implying either that equal variance cannot be assumed or that you only care about the sample variances. This makes it a non-parametric test because the pooled variance is considered non-representative of the population and you concede that you're just testing your observed values. In and of itself there's nothing particularly wrong with that.

However, I find it deceptive because a) typically it's not reported with enough specificity; and b) the people who use it tend to think about it interchangeably with a t-test. The only way I ever know that it has been done in published papers is when I see an odd DF for the t-distribution. That was also the only way Ruxton (referenced in Henrik's answer) could tell in review. Unfortunately, the non-parametric nature of the Welch-corrected test occurs whether the degrees of freedom have changed or not (i.e. even if the sample variances are equal). But this reporting issue is symptomatic of the fact that most people who use the Welch correction don't recognize that this change to the test has occurred. I just interviewed a few colleagues and they admitted they had never even thought of it.

Therefore, because of this, I believe that if you're going to recommend a non-parametric test, don't use one that often appears parametric, or at least be very clear about what you're doing. The official name of the test should be the Non-Parametric Welch Corrected T-test. If people reported it that way I'd be much happier with Henrik's recommendation.
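Since an "odd DF" is the telltale sign mentioned above, here is a small sketch of the Welch–Satterthwaite degrees-of-freedom formula, which is generally non-integer; the data are illustrative assumptions:

import numpy as np

def welch_satterthwaite_df(a, b):
    # Welch-Satterthwaite approximation to the degrees of freedom
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=12)
b = rng.normal(0.0, 3.0, size=20)
print(welch_satterthwaite_df(a, b))  # typically a non-integer value below n1 + n2 - 2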
4,881
Are all models useless? Is any exact model possible -- or useful?
The cited article seems to be based on fears that statisticians "will not be an intrinsic part of the scientific team, and the scientists will naturally have their doubts about the methods used" and that "collaborators will view us as technicians they can steer to get their scientific results published." My comments on the questions posed by @rvl come from the perspective of a non-statistician biological scientist who has been forced to grapple with increasingly complicated statistical issues as I moved from bench research to translational/clinical research over the past few years. Question 5 is clearly answered by the multiple answers now on this page; I'll go in reverse order from there.

4) It doesn't really matter whether an "exact model" exists, because even if it does I probably won't be able to afford to do the study. Consider this issue in the context of the discussion: do we really need to include "all relevant predictors"? Even if we can identify "all relevant predictors" there will still be the problem of collecting enough data to provide the degrees of freedom to incorporate them all reliably into the model. That's hard enough in controlled experimental studies, let alone retrospective or population studies. Maybe in some types of "Big Data" that's less of a problem, but it is for me and my colleagues. There will always be the need to "be smart about it," as @Aksakal put it in an answer on that page. In fairness to Prof. van der Laan, he doesn't use the word "exact" in the cited article, at least in the version presently available online from the link. He talks about "realistic" models. That's an important distinction. Then again, Prof. van der Laan complains that "Statistics is now an art, not a science," which is more than a bit unfair on his part. Consider the way he proposes to work with collaborators: "... we need to take the data, our identity as a statistician, and our scientific collaborators seriously. We need to learn as much as possible about how the data were generated. Once we have posed a realistic statistical model, we need to extract from our collaborators what estimand best represents the answer to their scientific question of interest. This is a lot of work. It is difficult. It requires a reasonable understanding of statistical theory. It is a worthy academic enterprise!" The application of these scientific principles to real-world problems would seem to require a good deal of "art," as with work in any scientific enterprise. I've known some very successful scientists, many more who did OK, and some failures. In my experience the difference seems to be in the "art" of pursuing scientific goals. The result might be science, but the process is something more.

3) Again, part of the issue is terminological; there's a big difference between an "exact" model and the "realistic" models that Prof. van der Laan seeks. His claim is that many standard statistical models are sufficiently unrealistic to produce "unreliable" results. In particular: "Estimators of an estimand defined in an honest statistical model cannot be sensibly estimated based on parametric models." Those are matters for testing, not opinion. His own work clearly recognizes that exact models aren't always possible. Consider this manuscript on targeted maximum likelihood estimators (TMLE) in the context of missing outcome variables.
It's based on an assumption of outcomes missing at random, which may never be testable in practice: "...we assume there are no unobserved confounders of the relationship between missingness ... and the outcome." This is another example of the difficulty in including "all relevant predictors." A strength of TMLE, however, is that it does seem to help evaluate the "positivity assumption" of adequate support in the data for estimating the target parameter in this context. The goal is to come as close as possible to a realistic model of the data. 2) TMLE has been discussed on Cross Validated previously. I'm not aware of widespread use on real data. Google Scholar showed today 258 citations of what seems to be the initial report, but at first glance none seemed to be on large real-world data sets. The Journal of Statistical Software article on the associated R package only shows 27 Google Scholar citations today. That should not, however, be taken as evidence about the value of TMLE. Its focus on obtaining reliable unbiased estimates of the actual "estimand" of interest, often a problem with plug-in estimates derived from standard statistical models, does seem potentially valuable. 1) The statement: "a statistical model that makes no assumptions is always true" seems to be intended as a straw man, a tautology. The data are the data. I assume that there are laws of the universe that remain consistent from day to day. The TMLE method presumably contains assumptions about convexity in the search space, and as noted above its application in a particular context might require additional assumptions. Even Prof. van der Laan would agree that some assumptions are necessary. My sense is that he would like to minimize the number of assumptions and to avoid those that are unrealistic. Whether that truly requires giving up on parametric models, as he seems to claim, is the crucial question.
4,882
Are all models useless? Is any exact model possible -- or useful?
Maybe I missed the point, but I think you have to step back a little bit. I think his point is the abuse of easily accessible tools with no further knowledge. This is also true for a simple t-test: just feed the algorithm your data, get p < 0.05, and think that your thesis is true. Completely wrong. You, of course, have to know more about your data.

Stepping even further back: there is no such thing as an exact model (physicist here), but some agree very well with our measurements. The only exact thing is math, which has nothing to do with reality or models of it. Everything else (and every model of reality) is "wrong" (as quoted so often). But what do "wrong" and "useful" mean? Judge for yourself: ALL of our current high tech (computers, rockets, radioactivity, etc.) is based on these wrong models, maybe even computed with "wrong" simulations of "wrong" models. So focus more on the "useful" instead of the "wrong" ;)

More explicitly, on your questions:

1. Don't know, sorry!
2. Yes. One example: in particle physics, you want to detect certain particles (say electrons, protons, etc.). Every particle leaves a characteristic trace in the detector (and therefore in the data), but the trace varies even for the same particle (by its nature). Today, most people use machine learning to achieve this goal (this is a huge simplification, but it is pretty much like this), and there is an increase in efficiency of 20%-50% compared to doing it by hand with classical statistics.
3. Nobody really claimed this! Don't draw the wrong conclusion! (a: all models are inexact, and b: some are useful. Don't confuse the two.)
4. There is no such thing as an exact model (except in math, but not really in statistics, as having points exactly on a straight line and "fitting" a line through them may be exact... but that's an uninteresting special case which never happens).
5. Don't know :) But IMHO I see this more as "just because every child can use it, not everyone should" and don't overuse it blindly.
4,883
Are all models useless? Is any exact model possible -- or useful?
In econ, much is said about understanding the 'data generating process.' I'm not sure what exactly is meant by an 'exact' model, but in econ it might be the same as a 'correctly specified' model. Certainly, you want to know as much about the process that generated the data as you can before attempting a model, right? I think the difficulty comes from a) we may not have a clue about the real DGP, and b) even if we knew the real DGP it might be intractable to model and estimate (for many reasons). So you make assumptions to simplify matters and reduce estimation requirements. Can you ever know if your assumptions are exactly right? You can gain evidence in favor of them, but IMO it's tough to be really sure in some cases. I have to filter all of this in terms of both established theory and practicality. If you make an assumption consistent with a theory and that assumption buys you better estimation performance (efficiency, accuracy, consistency, whatever), then I see no reason to avoid it, even if it makes the model 'inexact'.

Frankly, I think the article is meant to stimulate those who work with data to think harder about the entire modeling process. It's clear that van der Laan makes assumptions in his work. In this example, in fact, van der Laan seems to throw away any concern for an exact model and instead uses a mish-mash of procedures to maximize performance. This makes me more confident that he raised Box's quote with the intent of preventing people from using it as an escape from the difficult work of understanding the problem. Let's face it, the world is rife with misuse and abuse of statistical models. People blindly apply whatever they know how to do, and worse, others often interpret the results in the most desirable way. This article is a good reminder to be careful, but I don't think we should take it to the extreme.

The implications of the above for your questions:

1. I agree with others on this post who have defined a model as a set of assumptions. With that definition, a model with no assumptions isn't really a model. Even exploratory data analysis (i.e. model free) requires assumptions. For example, most people assume the data are measured correctly.
2. I don't know about TMLE, per se, but in economics there are many articles that use the same underlying philosophy of inferring about a causal effect on an unobserved counterfactual sample. In those cases, however, receiving a treatment is not independent of the other variables in the model (unlike TMLE), and so economists make extensive use of modeling. There are a few case studies for structural models, such as this one where the authors convinced a company to implement their model and found good results.
3. I think all models are inexact, but again, this term is a bit fuzzy. IMO, this is at the core of Box's quote. I'll restate my understanding of Box this way: 'no model can capture the exact essence of reality, but some models do capture a variable of interest, so in that sense you might have a use for them.'
4. I addressed this above. In short, I don't think so.
5. I'm not sure. I like it right here.
4,884
Are all models useless? Is any exact model possible -- or useful?
The article in question appears to me to be an honest but political article, a sincere polemic. As such, it contains a lot of passionate passages that are scientific nonsense, but that may nevertheless be effective in stirring up useful conversations and deliberations on important matters. There are many good answers here, so let me just quote a few lines from the article to show that Prof. van der Laan is certainly not using any kind of "exact model" in his work (and, by the way, who says that the "exact model" is a concept equivalent to the actual data-generating mechanism?).

Quotes (emphasis mine):

"Once we have posed a realistic statistical model, we need to extract from our collaborators what estimand best represents the answer to their scientific question of interest." Comment: "realistic" is as removed from "exact" as Mars is from Earth. They both orbit the Sun though, so for some purposes it doesn't matter which planet one chooses. For other purposes, it does matter. Also, "best" is a relative concept; "exact" is not.

"Estimators of an estimand defined in an honest statistical model cannot be sensibly estimated based on parametric models..." Comment: Honesty is the best policy indeed, but it is certainly not guaranteed to be "exact". Also, "sensible estimation" appears to be a very diluted outcome if one uses the "exact model".

"In response to having to solve these hard estimation problems the best we can, we developed a general statistical approach..." Comment: OK, we are "doing the best we can", as almost everybody thinks of themselves. But "best we can" is not "exact".
4,885
Are all models useless? Is any exact model possible -- or useful?
This post was brought to my attention just a few days ago. Thank you for your interest.

Question 1: What useful statistical inferences can be made using a model that makes no assumptions at all?

Before I answer this question we should agree on a definition of the word model: a common definition of a statistical model is the set of possible probability distributions or densities of the observed data. In addition to formulating a statistical model, one might make additional assumptions that do not restrict the distribution of the data, such as missing at random, coarsening at random, or randomization assumptions in so-called censored or missing data models. These latter types of assumptions are typically non-testable, i.e. they do not put restrictions on the distribution of the data and can thus not be tested based on data. For example, one commonly represents the observed data as a missing or censored data structure on a full-data random variable, and defines the target quantity of interest as some feature of the full-data distribution. To establish identification of this target quantity of the full-data distribution, one needs to make certain assumptions such as the ones I mention above. These assumptions allow us to define an estimand (i.e. a feature of the distribution of the observed data) that equals the desired target quantity, even though these assumptions do not put any restrictions on the distribution of the data. These non-testable assumptions do not affect the statistical estimation or statistical properties of estimators of the target estimand, but they do affect the interpretation of the target estimand and the degree to which one feels comfortable extending the purely statistical interpretation to a causal or full-data distribution interpretation.

I am going to focus on the notion of a statistical model. If we make no assumptions at all, then the statistical model would be all possible probability distributions. I agree that in this case we cannot do anything. This could happen, but it still might be a useful realization, making us careful not to over-interpret results that will be derived from statistical models that make assumptions. For example, if one observes a single microarray of gene expressions, then one might have to acknowledge that there is no basis for statistical inference without making very strong, unrealistic assumptions.

In many studies we know or feel highly confident (based on understanding of the experiment) that the data set is the result of independent and identical experiments, in which case we view our data as n independent and identically distributed random variables with a common probability distribution. In other cases, one might condition on the units and treat the data observed on these units as the result of independent experiments, one for each unit. This does not only apply to experiments that involve random sampling of units from a population. For example, a study that enrolls patients who satisfy some eligibility criterion and then tracks the patients longitudinally over time could be thought of as independent experiments, maybe identical, or maybe only independent. Our statistical model may then assume nothing else. Still, this is a real statistical model that allows us to formulate estimators with statistical inference based on asymptotic linearity of estimators and central limit theorems (which work under only assuming independence, or even weaker forms of independence assumptions).
We might be able to make more assumptions, such as the treatment variable given the set of observed pre-treatment covariates only depends on a certain subset of the covariates (something that one might learn from talking to the people who made the treatment decision). In our research we carry out a lot of work on sample size one problems such as observing a single time series over many time points (assuming some form of stationarity), a single community of individuals connected through a network (assuming that the data at next time point on a subject is conditionally independent of data collected on other subjects at that time point, given the data we have observed on the friends of that subject), or a sequential adaptive trial in which the next experiment (e.g. randomly sampling a next group of subjects) is set in response to what is observed in the previous experiments. Again, such types of studies satisfy conditional independence assumptions that allow for estimators with asymptotic statistical inference. For me, a realistic statistical model is a model that is known to contain the true probability distribution of the data, or at least it can be sensibly defended as a truthful statistical model. One should be ready to defend a statistical model. Note that, using your language, an "exact model" would be a model that contains the true probability distribution of the data, but has nothing to do with making a lot of assumptions. The only hope to succeed in formulating such a model is to work hard on understanding the data generating experiment, learning about independence and conditional independence assumptions. There are cases where one cannot be sure (e.g. models for a single time series that avoid making parametric assumptions but still need a form of stationarity), even when posing a highly nonparametric model, and, the assumed model might be going as far as possible while still being able to obtain statistical inference (based on state of the art advances in probability theory). Even then (heavily advancing on current statistical methods), it is fair and necessary to criticize and be fully aware of the assumptions, while still moving forward with valid statistical estimators for such a model. This still represents important advances relative to working with parametric models that are known to be false from the start and cannot be defended at all. It might motivate us to more carefully design experiments for which this same model will be known to be valid, where we now know that we actually have valid powerful methods that handle such highly challenging statistical models. The selection of a statistical model should be distinguished from the construction of an estimator that might try out many working models and machine learning algorithms as a way to approximate the true distribution of the data. The fit of a data distribution is not a model, but just the realization of an estimator of the data distribution. Another important benefit by having defined the statistical estimation problem realistically is that one can set up simulation studies to evaluate the behavior of estimators (and data set competitions), refine them, learn the weak spots, and propose a bootstrap respecting the true experiment to further improve on finite sample inference. In the end, the asymptotic results are a must, but all that matters is finite sample inference, so one should always aim to work on finite sample improvements without affecting asymptotic optimality. 
Even such finite sample improvements are often guided by theory. Question 2: Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? There is a growing literature on Targeted Learning. TMLE started with a 2006 article, and we published two books on the topic (van der Laan and Rose, 2011 and 2018) including contributions from a variety of authors working in the area. I just found out that the 2018 book (Targeted Learning in Data Science) is number 1 in the Springer Series in Statistics over the last three years, while the 2011 book is in the top 3% overall going back to the beginning of this series. Similarly, we see a great demand for workshops on the topic, which we are giving regularly. We recently gave a workshop at the Bill and Melinda Gates Foundation on Targeted Learning and it included an initial presentation which showcased case studies in journals such as the New England Journal of Medicine, among others. There will be a link posted since it was recorded; feel free to contact me about it. Of course, these papers can all be Googled, but this may still be helpful. Overall, I now regularly encounter articles by authors I do not know (e.g. not former students, postdocs, etc.). This is a good thing, and it is a joy to see new Ph.D. students at other places contributing new insights. Sometimes it is painful to see that some of these contributions simply misunderstand the material and confuse the literature. Still, many such authors are making a concerted effort, so they will get there eventually. Question 3: Are all inexact models indeed useless? If we define "inexact models" as statistical models that can be defended but for which we have no guarantee that they contain the true data distribution, then in my answer to Question 1 I clarify that such models are still useful. Work in such models advances the literature: the assumptions are transparent and open for anybody to criticize and evaluate; and once one realizes the kind of assumptions one needs to worry about, these are realistic enough that one can expect future applications in which they can be applied. A model that not a single person on earth believes is not helpful at all. For example, we teach our students that in GEE, using a parametric regression model for a multivariate outcome, for each choice of covariance matrix of the residuals the estimator of the coefficients is consistent and asymptotically linear, and if we estimate the covariance matrix consistently then the estimator is efficient. These statements are predicated on the regression model being correct, but since it typically is not, they actually teach the wrong thing. In the real world, 1) the choice of covariance matrix defines the projection of the true regression curve on the parametric working model, and thereby affects the target estimand (so confidence intervals for two different covariance matrices will be non-overlapping for large enough sample size); 2) the variability of the estimator of the covariance matrix heavily contributes to the variability of the estimator of the coefficients, so that more nonparametric (and thus consistent) estimation of the covariance matrix typically heavily increases the actual variance of the estimator. Much of what one is taught in statistics is based on such unrealistic assumptions and is actually wrong when applied to the real world. 
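To make the GEE point above concrete, here is a rough simulation sketch (using statsmodels; the data-generating process and variable names are invented for illustration and are not from the answer). The mean model is deliberately misspecified, and the same clustered data are fit under two working covariance structures; once the mean model is wrong, the two choices in general target different projections of the true regression curve, not a shared "true" coefficient.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Hypothetical longitudinal data: 200 subjects, 4 repeated measurements each.
    n_subj, n_rep = 200, 4
    ids = np.repeat(np.arange(n_subj), n_rep)
    x = rng.normal(size=n_subj * n_rep)
    subj = np.repeat(rng.normal(scale=1.0, size=n_subj), n_rep)
    # The true mean is nonlinear in x, so the linear working model below is wrong.
    y = 1.0 + np.sin(2.0 * x) + subj + rng.normal(size=n_subj * n_rep)
    df = pd.DataFrame({"y": y, "x": x, "id": ids})

    # Same misspecified linear mean model, two different working covariances.
    for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
        fit = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                                  cov_struct=cov,
                                  family=sm.families.Gaussian()).fit()
        print(type(cov).__name__, round(fit.params["x"], 4), round(fit.bse["x"], 4))

With this particular made-up design the two point estimates will usually be close, but nothing guarantees it; the point is that the estimand itself, not just the standard error, depends on the working covariance once the mean model is misspecified.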
This is just one of the million examples in which what we teach based on these models is not even representative of what happens in the real world when applying these methods. Question 4: Is it possible to know that you have the exact model other than in trivial cases? Not at all, as I explain in my response to Question 1. Either way, it is clearly my philosophy that one should make a sincere effort to define the real statistical estimation problem as accurately as possible, making assumptions that are reasonable and one can also defend. To me, this honest formulation is better than posing models in which the whole world knows the assumptions are plain wrong -- the confidence intervals have asymptotic coverage zero and the p-values result in testing procedures that have asymptotic type I error equal to 1. In addition, when people use these wrong models they typically play with them and try out many (I cannot blame them when that is only tool one has available), resulting in additional bias beyond the issue of using a statistical method with asymptotic coverage zero (for the assumed question of interest) and type I error 1. You wrote: "If this is too opinion-based and hence off-topic, where can it be discussed? Because Dr van der Laan's article definitely does need some discussion." This gets to the essence of statistical learning. Yes, this is incredibly important and it changes the way one approaches statistics. We often refer to the following steps as the roadmap of statistical learning (the answer to a statistical query): 1) Define data; 2) Define probability distribution of data and our knowledge about the data generating experiment; 3) Define target estimand (possibly augmenting its statistical interpretation with a causal/enhanced interpretation under specified non-testable assumptions); 4) Define estimator that is asymptotically valid under the statistical model assumptions; 5) Obtain inference based on sampling distribution of estimator; 6) Interpret the results (e.g. augmenting with sensitivity analysis to allow for interpretation of target estimand going in between purely statistical and purely causal). By taking these steps seriously, one often ends up with new statistical estimation problems. That itself can be an important contribution. In addition, many times new identification results (i.e, causal inference), new estimators and new theory needs to be developed (e.g. statistical estimators developed within the TMLE template), but this only happens because one has defined the precise challenge so that expertise and brainpower can be brought in by the general scientific community to solve it. If we replace the real problem by a toy problem, we avoid the real challenges.
Are all models useless? Is any exact model possible -- or useful?
This post was brought to my attention just a few days ago. Thank you for your interest. Question 1: What useful statistical inferences can be made using a model that makes no assumptions at all? Befor
Are all models useless? Is any exact model possible -- or useful? This post was brought to my attention just a few days ago. Thank you for your interest. Question 1: What useful statistical inferences can be made using a model that makes no assumptions at all? Before I answer this question we should agree on a definition of the word model: A common definition of a statistical model is the set of possible probability distributions or densities of the observed data. In addition to formulating a statistical model, one might make additional assumptions that do not restrict the distribution of the data such as missing at random, coarsening at random, or randomization assumptions in so called censored or missing data models. These latter type of assumptions are typically non-testable, i.e. they do not put restrictions on the distribution of the data and can thus not be tested based on data. For example, one commonly represents the observed data as a missing or censored data structure on a full-data random variable, and defines the target quantity of interest as some feature of the full-data distribution. To establish identification of this target quantity of the full-data distribution, one needs to make certain assumptions such as the ones I mention above. These assumptions allow us to define an estimand (i.e. feature of the distribution of the observed data) that equals the desired target quantity, even though these assumptions do not put any restrictions on the distribution of the data. These non-testable assumptions do not affect the statistical estimation or statistical properties of estimators of the target estimand, but they do affect the interpretation of the target estimand and the degree one feels comfortable extending the purely statistical interpretation to a causal or full-data distribution interpretation. I am going to focus on the notion of a statistical model. If we make no assumptions at all, then the statistical model would be all possible probability distributions. I agree that in this case we cannot do anything. This could happen, but still might be a useful realization, making us careful to over-interpret results that will be derived from statistical models that make assumptions. For example, if one observes a single microarray of gene expressions, then one might have to acknowledge that there is no basis for statistical inference without making very strong, unrealistic assumptions. In many studies we know or feel highly confident (based on understanding of the experiment) that the data set is the result of independent and identical experiments, in which case we view our data as n independent and identically distributed random variables with a common probability distribution. In other cases, one might condition on the units and treat the data observed on these units as the result of independent experiments, one for each unit. This does not only apply to experiments that involve random sampling of units from a population. For example, a study that enrolls patients that satisfy some eligibility criterion and then tracks patients longitudinally over time could be thought of as independent experiments, maybe identical, or maybe only independent. Our statistical model may then assume nothing else. Still, this is a real statistical model that allows us to formulate estimators with statistical inference based on asymptotic linearity of estimators and central limit theorems (that work under only assuming independence, or even weaker forms of independence assumptions). 
We might be able to make more assumptions, such as the treatment variable given the set of observed pre-treatment covariates only depends on a certain subset of the covariates (something that one might learn from talking to the people who made the treatment decision). In our research we carry out a lot of work on sample size one problems such as observing a single time series over many time points (assuming some form of stationarity), a single community of individuals connected through a network (assuming that the data at next time point on a subject is conditionally independent of data collected on other subjects at that time point, given the data we have observed on the friends of that subject), or a sequential adaptive trial in which the next experiment (e.g. randomly sampling a next group of subjects) is set in response to what is observed in the previous experiments. Again, such types of studies satisfy conditional independence assumptions that allow for estimators with asymptotic statistical inference. For me, a realistic statistical model is a model that is known to contain the true probability distribution of the data, or at least it can be sensibly defended as a truthful statistical model. One should be ready to defend a statistical model. Note that, using your language, an "exact model" would be a model that contains the true probability distribution of the data, but has nothing to do with making a lot of assumptions. The only hope to succeed in formulating such a model is to work hard on understanding the data generating experiment, learning about independence and conditional independence assumptions. There are cases where one cannot be sure (e.g. models for a single time series that avoid making parametric assumptions but still need a form of stationarity), even when posing a highly nonparametric model, and, the assumed model might be going as far as possible while still being able to obtain statistical inference (based on state of the art advances in probability theory). Even then (heavily advancing on current statistical methods), it is fair and necessary to criticize and be fully aware of the assumptions, while still moving forward with valid statistical estimators for such a model. This still represents important advances relative to working with parametric models that are known to be false from the start and cannot be defended at all. It might motivate us to more carefully design experiments for which this same model will be known to be valid, where we now know that we actually have valid powerful methods that handle such highly challenging statistical models. The selection of a statistical model should be distinguished from the construction of an estimator that might try out many working models and machine learning algorithms as a way to approximate the true distribution of the data. The fit of a data distribution is not a model, but just the realization of an estimator of the data distribution. Another important benefit by having defined the statistical estimation problem realistically is that one can set up simulation studies to evaluate the behavior of estimators (and data set competitions), refine them, learn the weak spots, and propose a bootstrap respecting the true experiment to further improve on finite sample inference. In the end, the asymptotic results are a must, but all that matters is finite sample inference, so one should always aim to work on finite sample improvements without affecting asymptotic optimality. 
Even such finite sample improvements are often guided by theory. Question 2: Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? There is a growing literature on Targeted Learning. TMLE started with a 2006 article, and we published two books on the topic (van der Laan, Rose, 2011 and 2018) including contributions from a variety of authors working in the area. I just found out that the 2018 book (Targeted Learning in Data Science) is the top 1 in the Springer Series of Statistics over last three years, while the 2011 book is in top 3% overall going back to beginning of this series. Similarly, we see a great demand for workshops on the topic which we are giving regularly. We recently gave a workshop at the Bill and Melinda Gates Foundation on Targeted Learning and it included an initial presentation which showcased case studies in journals such as the New England Journal of Medicine, among others. There will be a link posted since it was recorded, feel free to contact me about it. Of course, these papers can all be Googled but this may still be helpful. Overall, I now regularly encounter articles by authors I do not know (e.g. not former students, postdocs, etc.). This is a good thing, and it is a joy to see new Ph.D. students at other places contributing new insights. Sometimes it is painful to see how some of such contributions are simply not understanding the material and are confusing the literature. Still, many of such authors are making a concerted effort, so they will get there eventually. Question 3: Are all inexact models indeed useless? If we define "inexact models" as statistical models that can be defended but for which we have no guarantee that the true data distribution is in it, then in my answer to Question 1 I clarify that such models are still useful. Work in such models advance the literature: the assumptions are transparent and for anybody to criticize and evaluate; and once one realizes the kind of assumptions ond needs to worry about, these are realistic enough so that one can expect future applications in which they can be applied. A model where not a single person on earth believes them is not helpful at all. For example, we teach our students that in GEE using a parametric regression model for a multivariate outcome that, for each choice of covariance matrix of the residuals, the estimator of the coefficients is consistent and asymptotically linear; and if we estimate the covariance matrix consistently then the estimator is efficient. These statements are predicated on the regression model being correct, but since they are not, they actually teach the wrong thing. In the real world, 1) the choice of covariance matrix defines the projection of the true regression curve on the parametric working model, and thereby affects the target estimand (so confidence intervals for two different covariance matrices will be non-overlapping for large enough sample size); 2) the variability of the estimator of the covariance matrix heavily contributes to the variability of the estimator of the coefficients, so that more nonparametric (and thus consistently) estimation of the covariance matrix typically heavily increases the actual variance of the estimator. The majority that one is taught in statistics is based on such unrealistic assumptions and is actually wrong when applied to the real world. 
This is just one of the million examples in which what we teach based on these models is not even representative of what happens in the real world when applying these methods. Question 4: Is it possible to know that you have the exact model other than in trivial cases? Not at all, as I explain in my response to Question 1. Either way, it is clearly my philosophy that one should make a sincere effort to define the real statistical estimation problem as accurately as possible, making assumptions that are reasonable and one can also defend. To me, this honest formulation is better than posing models in which the whole world knows the assumptions are plain wrong -- the confidence intervals have asymptotic coverage zero and the p-values result in testing procedures that have asymptotic type I error equal to 1. In addition, when people use these wrong models they typically play with them and try out many (I cannot blame them when that is only tool one has available), resulting in additional bias beyond the issue of using a statistical method with asymptotic coverage zero (for the assumed question of interest) and type I error 1. You wrote: "If this is too opinion-based and hence off-topic, where can it be discussed? Because Dr van der Laan's article definitely does need some discussion." This gets to the essence of statistical learning. Yes, this is incredibly important and it changes the way one approaches statistics. We often refer to the following steps as the roadmap of statistical learning (the answer to a statistical query): 1) Define data; 2) Define probability distribution of data and our knowledge about the data generating experiment; 3) Define target estimand (possibly augmenting its statistical interpretation with a causal/enhanced interpretation under specified non-testable assumptions); 4) Define estimator that is asymptotically valid under the statistical model assumptions; 5) Obtain inference based on sampling distribution of estimator; 6) Interpret the results (e.g. augmenting with sensitivity analysis to allow for interpretation of target estimand going in between purely statistical and purely causal). By taking these steps seriously, one often ends up with new statistical estimation problems. That itself can be an important contribution. In addition, many times new identification results (i.e, causal inference), new estimators and new theory needs to be developed (e.g. statistical estimators developed within the TMLE template), but this only happens because one has defined the precise challenge so that expertise and brainpower can be brought in by the general scientific community to solve it. If we replace the real problem by a toy problem, we avoid the real challenges.
Are all models useless? Is any exact model possible -- or useful? This post was brought to my attention just a few days ago. Thank you for your interest. Question 1: What useful statistical inferences can be made using a model that makes no assumptions at all? Befor
4,886
Are all models useless? Is any exact model possible -- or useful?
To address point 3, the answer, obviously, is no. Just about every human enterprise is based on a simplified model at some point: cooking, building, and interpersonal relationships all involve humans acting on some kind of data + assumptions. No one has ever constructed a model that they did not intend to make use of. To assert otherwise is idle pedantry. It is much more interesting, enlightening, and useful to ask when inexact models are not useful, why they fail in their usefulness, and what happens when we rely on models that turn out not to be useful. Any researcher, whether in academia or industry, has to ask that question shrewdly and often. I don't think the question can be answered in general, but the principles of error propagation will inform the answer. Inexact models break down when the behavior they predict fails to reflect behavior in the real world. Understanding how errors propagate through a system can help one understand how much precision is necessary in modeling the system. For example, a rigid sphere is not usually a bad model for a baseball. But when you are designing a catcher's mitt, this model will fail you and lead you to design the wrong thing. Your simplifying assumptions about baseball physics propagate through your baseball-mitt system, and lead you to draw the wrong conclusions.
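One standard way to make "understanding how errors propagate" quantitative is the first-order (delta-method) approximation: for a derived quantity $f(x_1, \dots, x_k)$ computed from inputs measured with independent errors $\sigma_{x_i}$,
$$ \sigma_f^2 \approx \sum_{i=1}^{k} \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2 . $$
A simplification is tolerable when the partial derivatives it distorts are small, and dangerous when the downstream design is sensitive to them (as with the mitt). This is a textbook formula offered as a sketch of the principle, not something claimed in the answer above.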
Are all models useless? Is any exact model possible -- or useful?
To address point 3, the answer, obviously, is no. Just about every human enterprise is based on a simplified model at some point: cooking, building, interpersonal relationships all involve humans acti
Are all models useless? Is any exact model possible -- or useful? To address point 3, the answer, obviously, is no. Just about every human enterprise is based on a simplified model at some point: cooking, building, interpersonal relationships all involve humans acting on some kind of data + assumptions. No one has ever constructed a model that they did not intend to make use of. To assert otherwise is idle pedantry. It is much more interesting and enlightening, and useful to ask when inexact models are not useful, why they fail in their usefulness, and what happens when we rely on models that turn out not to be useful. Any researcher, whether in academia or industry, has to ask that question shrewdly and often. I don't think the question can be answered in general, but the principles of error propagation will inform the answer. Inexact models break down when the behavior they predict fails to reflect behavior in the real world. Understanding how errors propagate through a system can help one understand how much precision is necessary in modeling the system. For example, a rigid sphere is not usually a bad model for a baseball. But when you are designing catcher's mitt, this model will fail you and lead you to design the wrong thing. Your simplifying assumptions about baseball physics propagate through your baseball-mitt system, and lead you to draw the wrong conclusions.
Are all models useless? Is any exact model possible -- or useful? To address point 3, the answer, obviously, is no. Just about every human enterprise is based on a simplified model at some point: cooking, building, interpersonal relationships all involve humans acti
4,887
Are all models useless? Is any exact model possible -- or useful?
1) What useful statistical inferences can be made using a model that makes no assumptions at all? A model is by definition a generalization of what you are observing that can be captured by certain causal factors that in turn can explain and estimate the event you are observing. Given that all such generalization algorithms have some sort of underlying assumptions, I am not sure what is left of a model if you have no assumptions whatsoever. I think you are left with the original data and no model. 2) Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? I don't know. Maximum likelihood is used all the time. Logit models are based on it, as are many other models. They don't differ a whole lot from standard OLS, where you focus on reducing the sum of the squared residuals. I am not sure what targeted maximum likelihood is, or how it differs from traditional maximum likelihood. 3) Are all inexact models indeed useless? Absolutely not. Inexact models can be very useful. First, they contribute to better understanding or explaining a phenomenon. That should count for something. Second, they may provide decent estimation and forecasting with relevant Confidence Intervals to capture the uncertainty surrounding an estimate. That can provide a lot of info on what you are studying. The issue of "inexact" also raises the issue of the tension between parsimony and overfit. You can have a simple model with 5 variables that is "inexact" but does a pretty good job of capturing and explaining the overall trend of the dependent variable. You can have a more complex model with 10 variables that is "more exact" than the first one (higher Adjusted R Square, lower Standard Error, etc.). Yet, this second, more complex model may really crash when you test it using a Hold Out sample. And, in such a case, maybe the "inexact" model actually performs a lot better in the Hold Out sample. This happens literally all the time in econometrics, and I suspect in many other social sciences. Beware of "exact" models. They can often be synonymous with overfit models and mis-specified models (models with non-stationary variables that have underlying trends (unit roots) with no economic meaning imparted to the model). 4) Is it possible to know that you have the exact model other than in trivial cases? It is not possible to know that you have the exact model. But it is possible to know you have a pretty good model. The information criteria measures (AIC, BIC, SIC) can give you much information, allowing you to compare and benchmark the relative performance of various models. The LINK test can also help in that regard. 5) If this is too opinion-based and hence off-topic, where can it be discussed? Because Dr van der Laan's article definitely does need some discussion. I would think this is as appropriate a forum to discuss this issue as anywhere else. This is a pretty interesting issue for most of us.
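As a rough sketch of the parsimony-versus-overfit comparison described above (the simulated data, the choice of 5 versus 10 regressors, and the 2/3 training split are all invented for illustration; this is not an analysis from the answer), one can compare a smaller and a larger OLS model on information criteria and on a hold-out sample:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    # Hypothetical data: only the first 5 of 10 candidate regressors matter.
    n = 300
    X = rng.normal(size=(n, 10))
    beta = np.array([1.0, -0.5, 0.8, 0.3, -0.7])
    y = X[:, :5] @ beta + rng.normal(scale=2.0, size=n)

    train, test = slice(0, 200), slice(200, None)

    def fit_and_score(k):
        """Fit OLS on the first k regressors; return AIC, BIC, and hold-out RMSE."""
        Xtr = sm.add_constant(X[train, :k])
        Xte = sm.add_constant(X[test, :k])
        res = sm.OLS(y[train], Xtr).fit()
        rmse = float(np.sqrt(np.mean((y[test] - res.predict(Xte)) ** 2)))
        return res.aic, res.bic, rmse

    for k in (5, 10):  # the "inexact" model vs the "more exact" one
        aic, bic, rmse = fit_and_score(k)
        print(f"{k} regressors: AIC={aic:.1f} BIC={bic:.1f} hold-out RMSE={rmse:.2f}")

Whether the larger model actually does worse out of sample depends on the noise level and sample size, which is precisely why in-sample "exactness" is not the criterion to trust.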
Are all models useless? Is any exact model possible -- or useful?
1) What useful statistical inferences can be made using a model that makes no assumptions at all? A model is by definition a generalization of what you are observing that can be captured by certain ca
Are all models useless? Is any exact model possible -- or useful? 1) What useful statistical inferences can be made using a model that makes no assumptions at all? A model is by definition a generalization of what you are observing that can be captured by certain causal factors that in turn can explain and estimate the event you are observing. Given that all those generalization algorithms have some sort of underlying assumptions. I am not sure what is left of a model if you have no assumptions whatsoever. I think you are left with the original data and no model. 2) Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? I don't know. Maximum likelihood is used all the time. Logit models are based on those as well as many other models. They don't differ a whole lot to standard OLS where you focus on the reductions of the sum of the square of the residuals. I am not sure what targeted maximum likelihood is. And, how it differs from traditional maximum likelihood. 3) Are all inexact models indeed useless? Absolutely not. Inexact models can be very useful. First, they contribute to better understanding or explaining a phenomenon. That should count for something. Second, they may provide descent estimation and forecasting with relevant Confidence Interval to capture the uncertainty surrounding an estimate. That can provide a lot of info on what you are studying. The issue of "inexact" also raises the issue of the tension between parsimony and overfit. You can have a simple model with 5 variables that is "inexact" but does a pretty good job of capturing and explaining the overall trend of the dependent variable. You can have a more complex model with 10 variables that is "more exact" than the first one (higher Adjusted R Square, lower Standard Error, etc.). Yet, this second more complex model may really crash when you test it using a Hold Out sample. And, in such case maybe the "inexact" model actually performs a lot better in the Hold Out sample. This happens literally all the time in econometrics and I suspect in many other social sciences. Beware of "exact" models. They can often be synonimous with overfit models and mis-specified models (models with non stationary variables that have underlying trends (unit root) with no economic meaning imparted to the model). 4) Is it possible to know that you have the exact model other than in trivial cases? It is not possible to know that you have the exact model. But, it is possible to know you have a pretty good model. The information criteria measures (AIC, BIC, SIC) can give you much information allowing to compare and benchmark the relative performance of various models. Also, the LINK test can also help in that regard. 5) If this is too opinion-based and hence off-topic, where can it be discussed? Because Dr van der Laan's article definitely does need some discussion. I would think this is as appropriate a forum to discuss this issue as anywhere else. This is a pretty interesting issue for most of us.
Are all models useless? Is any exact model possible -- or useful? 1) What useful statistical inferences can be made using a model that makes no assumptions at all? A model is by definition a generalization of what you are observing that can be captured by certain ca
4,888
Are all models useless? Is any exact model possible -- or useful?
(I don't see the phrase "exact model" in the article (though quoted above)) 1) What useful statistical inferences can be made using a model that makes no assumptions at all? You have to start somewhere. If that's all you have (nothing), it can be a starting point. 2) Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? To answer the second question, Targeted Maximum Likelihood turns up in 93/1143281 (~.008% ) of papers in arxiv.org. So, no is probably a good estimate (without assumptions) to that one. 3) Are all inexact models indeed useless? No. Sometimes you only care about one aspect of a model. That aspect can be very good and the rest very inexact. 4) Is it possible to know that you have the exact model other than in trivial cases? The best model is the model that best answers your question. That may mean leaving something out. What you want to avoid, as best you can, is assumption violation. 5) Happy hour. And drinks are cheaper to boot! I find use of the word "exact" a bit unsettling. It's not very statistician-like talk. Inexactitude? Variation? Thank G-d! That's why we are all here. I think the phrase "All models are wrong..." is okay, but only in the right company. Statisticians understand what it means, but few others do.
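For what it's worth, the quoted fraction does work out to roughly the stated figure (a trivial check):

    print(f"{93 / 1143281:.4%}")  # about 0.0081%, i.e. roughly .008%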
Are all models useless? Is any exact model possible -- or useful?
(I don't see the phrase "exact model" in the article (though quoted above)) 1) What useful statistical inferences can be made using a model that makes no assumptions at all? You have to start somewher
Are all models useless? Is any exact model possible -- or useful? (I don't see the phrase "exact model" in the article (though quoted above)) 1) What useful statistical inferences can be made using a model that makes no assumptions at all? You have to start somewhere. If that's all you have (nothing), it can be a starting point. 2) Does there exist a case study, with important, real data in the use of targeted maximum likelihood? Are these methods widely used and accepted? To answer the second question, Targeted Maximum Likelihood turns up in 93/1143281 (~.008% ) of papers in arxiv.org. So, no is probably a good estimate (without assumptions) to that one. 3) Are all inexact models indeed useless? No. Sometimes you only care about one aspect of a model. That aspect can be very good and the rest very inexact. 4) Is it possible to know that you have the exact model other than in trivial cases? The best model is the model that best answers your question. That may mean leaving something out. What you want to avoid, as best you can, is assumption violation. 5) Happy hour. And drinks are cheaper to boot! I find use of the word "exact" a bit unsettling. It's not very statistician-like talk. Inexactitude? Variation? Thank G-d! That's why we are all here. I think the phrase "All models are wrong..." is okay, but only in the right company. Statisticians understand what it means, but few others do.
Are all models useless? Is any exact model possible -- or useful? (I don't see the phrase "exact model" in the article (though quoted above)) 1) What useful statistical inferences can be made using a model that makes no assumptions at all? You have to start somewher
4,889
Are all models useless? Is any exact model possible -- or useful?
I'm going to approach this from the alternate direction of philosophy, in light of the really useful principles of Uncertainty Management discussed in George F. Klir's books on fuzzy sets. I can't give van der Laan exactness, but I can provide a somewhat exhaustive case for why his goal is logically impossible; that is going to call for a lengthy discussion that references other fields, so bear with me. Klir and his co-authors divide uncertainty into several subtypes, such as nonspecificity (i.e. when you have an unknown set of alternatives, dealt with through means like the Hartley Function); imprecision in definitions (i.e. the "fuzziness" modeled and quantified in fuzzy sets); strife or discord in evidence (addressed in Dempster-Shafer Evidence Theory); plus probability theory, possibility theory and measurement uncertainty, where the goal is to have an adequate scope to capture the relevant evidence, while minimizing errors. I look at the whole toolbox of statistical techniques as alternate means of partitioning uncertainty in different ways, much like a cookie cutter; confidence intervals and p-values quarantine uncertainty in one way, while measures like Shannon's Entropy whittle it down from another angle. What they can't do, however, is eliminate it entirely. To achieve an "exact model" of the kind van der Laan seems to describe, we'd need to reduce all of these types of uncertainty down to zero, so that there's no more left to partition. A truly "exact" model would always have probability and possibility values of 1, nonspecificity scores of 0 and no uncertainty whatsoever in the definitions of terms, ranges of values or measurement scales. There would be no discord in alternate sources of evidence. The predictions made by such a model would always be 100 percent accurate; predictive models essentially partition their uncertainty off into the future, but there would be none left to put off. The uncertainty perspective has some important implications: • This tall order is not only physically implausible, but actually logically impossible. Obviously, we cannot achieve perfectly continuous measurement scales with infinitesimal degrees, by gathering finite observations using fallible, physical scientific equipment; there will always be some uncertainty in terms of measurement scale. Likewise, there will always be some fuzziness surrounding the very definitions we employ in our experiments. The future is also inherently uncertain, so the supposedly perfect predictions of our "exact" models will have to be treated as imperfect until proven otherwise - which would take an eternity. • To make matters worse, no measurement technique is 100 percent free of error at some point in the process, nor can it be made comprehensive enough to embrace all of the possibly conflicting information in the universe. Furthermore, the elimination of possible confounding variables and complete conditional independence cannot be proven thoroughly without examining all other physical processes that affect the one we're examining, as well as those that affect these secondary processes and so on. • Exactness is possible only in pure logic and its subset, mathematics, precisely because abstractions are divorced from real-world concerns like these sources of uncertainty. For example, by pure deductive logic, we can prove that 2 + 2 = 4 and any other answer is 100 percent incorrect. We can also make perfectly accurate predictions that it will always equal 4. 
This kind of precision is only possible in statistics when we're dealing with abstractions. Statistics is incredibly useful when applied to the real world, but the very thing that makes it useful injects at least some degree of inescapable uncertainty, thereby rendering it inexact. It is an unavoidable dilemma. • Furthermore, Peter Chu raises additional limitations in the comments section of the article rvl linked to. He puts it better than I can: "This solution surface of NP-hard problems is typically rife with many local optima and in most case it is computationally unfeasible to solve the problem i.e. finding the global optimal solution in general. Hence, each modeler is using some (heuristic) modeling techniques, at best, to find adequate local optimal solutions in the vast solution space of this complex objective function." • All of this means that science itself cannot be perfectly accurate, although van der Laan seems to speak of it in this way in his article; the scientific method as an abstract process is precisely definable, but the impossibility of universal and perfect exact measurement means it cannot produce exact models devoid of uncertainty. Science is a great tool, but it has limits. • It gets worse from there: Even if were possible to exactly measure all of the forces acting on every constituent quark and gluon in the universe, some uncertainties would still remain. First, any predictions made by such a complete model would still be uncertain due to the existence of multiple solutions for quintic equations and higher polynomials. Secondly, we cannot be completely certain that the extreme skepticism in embodied in the classic question "maybe this is all a dream or a hallucination" is not a reflection of reality - in which case all of our models are indeed wrong in the worst possible way. This is basically equivalent to a more extreme ontological interpretation of the original epistemological formulations of philosophies like phenomenalism, idealism and solipsism. • In his 1909 classic Orthodoxy G.K. Chesterton noted that the extreme versions of these philosophies can indeed be judged, but by whether or not they drive their believers into mental institutions; ontological solipsism, for example, is actually a marker of schizophrenia, as are some of its cousins. The best that we can achieve in this world is to eliminate reasonable doubt; unreasonable doubt of this unsettling kind cannot be rigorously done away with, even in a hypothetical world of exact models, exhaustive and error-free measurements. If van der Laan aims at ridding us of unreasonable doubt then he is playing with fire. By grasping at perfection, the finite good we can do will slip through our fingers; we are finite creatures existing in an infinite world, which means the kind of complete and utterly certain knowledge van der Laan argues for is permanently beyond our grasp. The only way we can reach that kind of certainty is by retreating from that world into the narrower confines of the perfectly abstract one we call "pure mathematics." This does not mean, however, that a retreat into pure mathematics is the solution to eliminating uncertainty. This was essentially the approach taken by the successors of Ludwig Wittgenstein (1889-1951), who drained his philosophy of logical positivism of whatever common sense it had by rejecting metaphysics altogether and retreating entirely into pure math and scientism, as well as extreme skepticism, overspecialization and overemphasis on exactness over usefulness. 
In the process, they destroyed the discipline of philosophy by dissolving it into a morass of nitpicking over definitions and navel-gazing, thereby making it irrelevant to the rest of academia. This essentially killed the whole discipline, which had still been at the forefront of academic debate until the early 20th Century, to the point where it still garnered media attention and some of its leaders were household names. They grasped at a perfect, polished explanation of the world and it slipped through their fingers - just as it did through the mental patients GKC spoke of. It will also slip out of the grasp of van der Laan, who has already disproved his own point, as discussed below. The pursuit of models that are too exact is not just impossible; it can be dangerous, if taken to the point of self-defeating obsession. The pursuit of that kind of purity rarely ends well; it's often as self-defeating as those germophobes who scrub their hands so furiously that they end up with wounds that get infected. It's reminiscent of Icarus trying to steal fire from the Sun: as finite beings, we can have only a finite understanding of things. As Chesterton also says in Orthodoxy, "It is the logician who seeks to get the heavens into his head. And it is his head that splits." In the light of the above, let me tackle some of the specific questions listed by rvl: 1) A model with no assumptions whatsoever is either a) not aware of its own assumptions or b) must be cleanly divorced from considerations that introduce uncertainty, such as measurement errors, accounting for every single possible confounding variable, perfectly continuous measurement scales and the like. 2) I'm still a newbie when it comes to maximum likelihood estimation (MLE), so I can't comment on the mechanics of target likelihood, except to point out the obvious: likelihood is just that, a likelihood, not a certainty. To derive an exact model requires complete elimination of uncertainty, which probabilistic logic can rarely do, if ever. 3) Of course not. Since all models retain some uncertainty and are thus inexact (except in cases of pure mathematics, divorced from real-world physical measurements), the human race would not have been able to make any technological progress to date - or indeed, any other progress at all. If inexact models were always useless, we'd be having this conversation in a cave, instead of on this incredible feat of technology called the Internet, all of which was made possible through inexact modeling. Ironically, van der Laan's own model is a primary example of inexactness. His own article sketches out a model of sorts of how the field of statistics ought to be managed, with an aim towards exact models; there are no numbers attached to this "model" yet, no measurement of just how inexact or useless most models are now in his view, no quantification of how far we are away from his vision, but I suppose one could devise tests for those things. As it stands, however, his model is inexact. If it is not useful, it means his point is wrong; if it is useful, it defeats his main point that inexact models aren't useful. Either way, he disproves his own argument. 4) Probably not, because we cannot have complete information to test our model with, for the same reasons that we can't derive an exact model in the first place. An exact model would by definition require perfect predictability, but even if the first 100 tests turn out 100 percent accurate, the 101st might not. 
Then there's the whole issue of infinitesimal measurement scales. After that, we get into all of the other sources of uncertainty, which will contaminate any Ivory Tower evaluation of our Ivory Tower model. 5) To address the issue, I had to put it in the wider context of much larger philosophical issues that are often controversial, so I don't think it's possible to discuss this without getting into opinions (note how that in and of itself is another source of uncertainty), but you're right, this article deserves a reply. A lot of what he says on other topics is on the right track, such as the need to make statistics relevant to Big Data, but there is some impractical extremism mixed in there that should be corrected.
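For readers unfamiliar with the two measures named at the top of this answer, the standard textbook definitions are the Hartley measure of nonspecificity for a finite set $A$ of alternatives and Shannon's entropy for a probability distribution $p$:
$$ H(A) = \log_2 |A|, \qquad H(p) = -\sum_i p_i \log_2 p_i . $$
Both are zero only in the degenerate case of a single alternative held with certainty, which is one way of restating why an "exact" model would have to drive every such uncertainty measure to zero at once.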
Are all models useless? Is any exact model possible -- or useful?
I'm going to approach this from the alternate direction of philosophy, in light of the really useful principles of Uncertainty Management discussed in George F. Klir's books on fuzzy sets. I can't giv
Are all models useless? Is any exact model possible -- or useful? I'm going to approach this from the alternate direction of philosophy, in light of the really useful principles of Uncertainty Management discussed in George F. Klir's books on fuzzy sets. I can't give van der Laan exactness, but I can provide a somewhat exhaustive case for why his goal is logically impossible; that is going to call for a lengthy discussion that references other fields, so bear with me. Klir and his co-authors divide uncertainty into several subtypes, such as nonspecificity (i.e. when you have an unknown set of alternatives, dealt with through means like the Hartley Function); imprecision in definitions (i.e. the "fuzziness" modeled and quantified in fuzzy sets); strife or discord in evidence (addressed in Dempster-Shafer Evidence Theory); plus probability theory, possibility theory and measurement uncertainty, where the goal is to have an adequate scope to capture the relevant evidence, while minimizing errors. I look at the whole toolbox of statistical techniques as alternate means of partitioning uncertainty in different ways, much like a cookie cutter; confidence intervals and p-values quarantine uncertainty in one way, while measures like Shannon's Entropy whittle it down from another angle. What they can't do, however, is eliminate it entirely. To achieve an "exact model" of the kind van der Laan seems to describe, we'd need to reduce all of these types of uncertainty down to zero, so that there's no more left to partition. A truly "exact" model would always have probability and possibility values of 1, nonspecificity scores of 0 and no uncertainty whatsoever in the definitions of terms, ranges of values or measurement scales. There would be no discord in alternate sources of evidence. The predictions made by such a model would always be 100 percent accurate; predictive models essentially partition their uncertainty off into the future, but there would be none left to put off. The uncertainty perspective has some important implications: • This tall order is not only physically implausible, but actually logically impossible. Obviously, we cannot achieve perfectly continuous measurement scales with infinitesimal degrees, by gathering finite observations using fallible, physical scientific equipment; there will always be some uncertainty in terms of measurement scale. Likewise, there will always be some fuzziness surrounding the very definitions we employ in our experiments. The future is also inherently uncertain, so the supposedly perfect predictions of our "exact" models will have to be treated as imperfect until proven otherwise - which would take an eternity. • To make matters worse, no measurement technique is 100 percent free of error at some point in the process, nor can it be made comprehensive enough to embrace all of the possibly conflicting information in the universe. Furthermore, the elimination of possible confounding variables and complete conditional independence cannot be proven thoroughly without examining all other physical processes that affect the one we're examining, as well as those that affect these secondary processes and so on. • Exactness is possible only in pure logic and its subset, mathematics, precisely because abstractions are divorced from real-world concerns like these sources of uncertainty. For example, by pure deductive logic, we can prove that 2 + 2 = 4 and any other answer is 100 percent incorrect. 
We can also make perfectly accurate predictions that it will always equal 4. This kind of precision is only possible in statistics when we're dealing with abstractions. Statistics is incredibly useful when applied to the real world, but the very thing that makes it useful injects at least some degree of inescapable uncertainty, thereby rendering it inexact. It is an unavoidable dilemma. • Furthermore, Peter Chu raises additional limitations in the comments section of the article rvl linked to. He puts it better than I can: "This solution surface of NP-hard problems is typically rife with many local optima and in most case it is computationally unfeasible to solve the problem i.e. finding the global optimal solution in general. Hence, each modeler is using some (heuristic) modeling techniques, at best, to find adequate local optimal solutions in the vast solution space of this complex objective function." • All of this means that science itself cannot be perfectly accurate, although van der Laan seems to speak of it in this way in his article; the scientific method as an abstract process is precisely definable, but the impossibility of universal and perfect exact measurement means it cannot produce exact models devoid of uncertainty. Science is a great tool, but it has limits. • It gets worse from there: Even if were possible to exactly measure all of the forces acting on every constituent quark and gluon in the universe, some uncertainties would still remain. First, any predictions made by such a complete model would still be uncertain due to the existence of multiple solutions for quintic equations and higher polynomials. Secondly, we cannot be completely certain that the extreme skepticism in embodied in the classic question "maybe this is all a dream or a hallucination" is not a reflection of reality - in which case all of our models are indeed wrong in the worst possible way. This is basically equivalent to a more extreme ontological interpretation of the original epistemological formulations of philosophies like phenomenalism, idealism and solipsism. • In his 1909 classic Orthodoxy G.K. Chesterton noted that the extreme versions of these philosophies can indeed be judged, but by whether or not they drive their believers into mental institutions; ontological solipsism, for example, is actually a marker of schizophrenia, as are some of its cousins. The best that we can achieve in this world is to eliminate reasonable doubt; unreasonable doubt of this unsettling kind cannot be rigorously done away with, even in a hypothetical world of exact models, exhaustive and error-free measurements. If van der Laan aims at ridding us of unreasonable doubt then he is playing with fire. By grasping at perfection, the finite good we can do will slip through our fingers; we are finite creatures existing in an infinite world, which means the kind of complete and utterly certain knowledge van der Laan argues for is permanently beyond our grasp. The only way we can reach that kind of certainty is by retreating from that world into the narrower confines of the perfectly abstract one we call "pure mathematics." This does not mean, however, that a retreat into pure mathematics is the solution to eliminating uncertainty. 
This was essentially the approach taken by the successors of Ludwig Wittgenstein (1889-1951), who drained his philosophy of logical positivism of whatever common sense it had by rejecting metaphysics altogether and retreating entirely into pure math and scientism, as well as extreme skepticism, overspecialization and overemphasis on exactness over usefulness. In the process, they destroyed the discipline of philosophy by dissolving it into a morass of nitpicking over definitions and navel-gazing, thereby making it irrelevant to the rest of academia. This essentially killed the whole discipline, which had still been at the forefront of academic debate until the early 20th Century, to the point where it still garnered media attention and some of its leaders were household names. They grasped at a perfect, polished explanation of the world and it slipped through their fingers - just as it did through the mental patients GKC spoke of. It will also slip out of the grasp of van der Laan, who has already disproved his own point, as discussed below. The pursuit of models that are too exact is not just impossible; it can be dangerous, if taken to the point of self-defeating obsession. The pursuit of that kind of purity rarely ends well; it's often as self-defeating as those germophobes who scrub their hands so furiously that they end up with wounds that get infected. It's reminiscent of Icarus trying to steal fire from the Sun: as finite beings, we can have only a finite understanding of things. As Chesterton also says in Orthodoxy, "It is the logician who seeks to get the heavens into his head. And it is his head that splits." In the light of the above, let me tackle some of the specific questions listed by rvl: 1) A model with no assumptions whatsoever is either a) not aware of its own assumptions or b) must be cleanly divorced from considerations that introduce uncertainty, such as measurement errors, accounting for every single possible confounding variable, perfectly continuous measurement scales and the like. 2) I'm still a newbie when it comes to maximum likelihood estimation (MLE), so I can't comment on the mechanics of target likelihood, except to point out the obvious: likelihood is just that, a likelihood, not a certainty. To derive an exact model requires complete elimination of uncertainty, which probabilistic logic can rarely do, if ever. 3) Of course not. Since all models retain some uncertainty and are thus inexact (except in cases of pure mathematics, divorced from real-world physical measurements), the human race would not have been able to make any technological progress to date - or indeed, any other progress at all. If inexact models were always useless, we'd be having this conversation in a cave, instead of on this incredible feat of technology called the Internet, all of which was made possible through inexact modeling. Ironically, van der Laan's own model is a primary example of inexactness. His own article sketches out a model of sorts of how the field of statistics ought to be managed, with an aim towards exact models; there are no numbers attached to this "model" yet, no measurement of just how inexact or useless most models are now in his view, no quantification of how far we are away from his vision, but I suppose one could devise tests for those things. As it stands, however, his model is inexact. If it is not useful, it means his point is wrong; if it is useful, it defeats his main point that inexact models aren't useful. Either way, he disproves his own argument. 
4) Probably not, because we cannot have complete information to test our model with, for the same reasons that we can't derive an exact model in the first place. An exact model would by definition require perfect predictability, but even if the first 100 tests turn out 100 percent accurate, the 101st might not. Then there's the whole issue of infinitesimal measurement scales. After that, we get into all of the other sources of uncertainty, which will contaminate any Ivory Tower evaluation of our Ivory Tower model. 5) To address the issue, I had to put it in the wider context of much larger philosophical issues that are often controversial, so I don't think it's possible to discuss this without getting into opinions (note how that in and of itself is another source of uncertainty), but you're right, this article deserves a reply. A lot of what he says on other topics is on the right track, such as the need to make statistics relevant to Big Data, but there is some impractical extremism mixed in there that should be corrected.
Are all models useless? Is any exact model possible -- or useful? I'm going to approach this from the alternate direction of philosophy, in light of the really useful principles of Uncertainty Management discussed in George F. Klir's books on fuzzy sets. I can't giv
4,890
Why is a Bayesian not allowed to look at the residuals?
Of course Bayesians can look at the residuals! And of course there are bad models in Bayesian analysis. Maybe a few Bayesians in the 70's supported views like that (and I doubt that), but you will hardly find any Bayesian supporting this view these days. I didn't read the text, but Bayesians use things like Bayes factors to compare models. Actually, a Bayesian can even compute the probability of a model being true and pick the model which is more likely to be true. Or a Bayesian can average across models to achieve a better model. Or they can use posterior predictive checks. There are a lot of options to check a model and each one may favor one approach or another, but to say that there are no bad models in Bayesian analysis is nonsense. So, at most, it would be more appropriate to say that in some extreme versions of Bayesianism (extreme versions that almost no one uses in applied settings, by the way) you're not allowed to check your model. But then you could say that in some extreme versions of frequentism you're not allowed to use observational data as well. But why waste time discussing these silly things, when we can discuss if and when, in an applied setting, we should use Bayesian or frequentist methods or whatever? That's what's important, in my humble opinion. Update: The OP asked for a reference of someone advocating the extreme version of Bayes. Since I never read any extreme version of Bayes, I can't provide this reference. But I'd guess that Savage may be such a reference. I never read anything written by him, so I may be wrong. ps.: Think about the problem of the "well-calibrated Bayesian" (Dawid (1982), JASA, 77, 379). A coherent subjectivist Bayesian forecaster can't be uncalibrated, and so wouldn't review his model/forecasts despite any overwhelming evidence that he's uncalibrated. But I don't think anyone in practice can claim to be that coherent. Thus, model review is important. ps2.: I like this paper by Efron as well. The full reference is: Efron, Bradley (2005). "Bayesians, frequentists, and scientists." Journal of the American Statistical Association 100(469).
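To make the posterior predictive check option mentioned above concrete, here is a minimal sketch assuming a toy conjugate Beta-Binomial model and an invented test statistic (the longest run of identical outcomes); the data and names are purely illustrative, not taken from any text discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: 100 Bernoulli trials.
y_obs = rng.binomial(1, 0.63, size=100)
n, k = y_obs.size, y_obs.sum()

# Conjugate Beta(1, 1) prior -> Beta(1 + k, 1 + n - k) posterior.
post_a, post_b = 1 + k, 1 + n - k

def longest_run(x):
    # Length of the longest run of identical consecutive outcomes.
    best = cur = 1
    for i in range(1, len(x)):
        cur = cur + 1 if x[i] == x[i - 1] else 1
        best = max(best, cur)
    return best

# Posterior predictive check: simulate replicated datasets from the posterior
# and compare the test statistic with its observed value.
t_obs = longest_run(y_obs)
t_rep = []
for _ in range(2000):
    theta = rng.beta(post_a, post_b)
    y_rep = rng.binomial(1, theta, size=n)
    t_rep.append(longest_run(y_rep))

ppp = np.mean(np.array(t_rep) >= t_obs)  # posterior predictive p-value
print(f"observed longest run = {t_obs}, posterior predictive p = {ppp:.3f}")
```

A posterior predictive p-value near 0 or 1 flags that the model fails to reproduce that feature of the data - exactly the kind of model review the answer argues a Bayesian can and should perform.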
Why is a Bayesian not allowed to look at the residuals?
Of course Bayesians can look at the residuals! And of course there are bad models in Bayesian analysis. Maybe a few Bayesians in the 70's supported views like that (and I doubt that), but you will har
Why is a Bayesian not allowed to look at the residuals? Of course Bayesians can look at the residuals! And of course there are bad models in Bayesian analysis. Maybe a few Bayesians in the 70's supported views like that (and I doubt that), but you will hardly find any Bayesian supporting this view these days. I didn't read the text, but Bayesians use things like Bayes factors to compare models. Actually, a Bayesian can even compute the probability of a model being true and pick the model which is more likely to be true. Or a Bayesian can average across models to achieve a better model. Or they can use posterior predictive checks. There are a lot of options to check a model and each one may favor one approach or another, but to say that there are no bad models in Bayesian analysis is nonsense. So, at most, it would be more appropriate to say that in some extreme versions of Bayesianism (extreme versions that almost no one uses in applied settings, by the way) you're not allowed to check your model. But then you could say that in some extreme versions of frequentism you're not allowed to use observational data as well. But why waste time discussing these silly things, when we can discuss if and when, in an applied setting, we should use Bayesian or frequentist methods or whatever? That's what's important, in my humble opinion. Update: The OP asked for a reference of someone advocating the extreme version of Bayes. Since I never read any extreme version of Bayes, I can't provide this reference. But I'd guess that Savage may be such a reference. I never read anything written by him, so I may be wrong. ps.: Think about the problem of the "well-calibrated Bayesian" (Dawid (1982), JASA, 77, 379). A coherent subjectivist Bayesian forecaster can't be uncalibrated, and so wouldn't review his model/forecasts despite any overwhelming evidence that he's uncalibrated. But I don't think anyone in practice can claim to be that coherent. Thus, model review is important. ps2.: I like this paper by Efron as well. The full reference is: Efron, Bradley (2005). "Bayesians, frequentists, and scientists." Journal of the American Statistical Association 100(469).
Why is a Bayesian not allowed to look at the residuals? Of course Bayesians can look at the residuals! And of course there are bad models in Bayesian analysis. Maybe a few Bayesians in the 70's supported views like that (and I doubt that), but you will har
4,891
Why is a Bayesian not allowed to look at the residuals?
They can look but not touch. After all, the residuals are the part of the data that don't carry any information about model parameters, and their prior expresses all uncertainty about those—they can't change their prior based on what they see in the data. For example, suppose you're fitting a Gaussian model, but notice far too much kurtosis in the residuals. Perhaps your prior hypothesis should have been a t-distribution with non-zero probability over low degrees of freedom, but it wasn't—it was effectively a t-distribution with zero probability everywhere except on infinite degrees of freedom. Nothing in the likelihood can result in non-zero probabilities over regions of the posterior density where the prior density is zero. So the notion of continually updating priors based on likelihoods from data doesn't work when the original prior is mis-specified. Of course if you Google "Bayesian model checking", you'll see this is a parody of actual Bayesian practice; still, it does represent something of a difficulty for Logic of Science-type arguments for the superiority of Bayesianism on philosophical grounds. Andrew Gelman's blog is interesting on this topic.
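A minimal numerical sketch of the "zero prior mass" point above, using a discretized Bernoulli parameter; the grid, prior, and data are invented for illustration.

```python
import numpy as np

# Discretized parameter space for a Bernoulli success probability theta.
theta = np.arange(1, 100) / 100.0  # 0.01, 0.02, ..., 0.99

# A mis-specified prior: zero density for theta > 0.5.
prior = np.where(theta <= 0.5, 1.0, 0.0)
prior /= prior.sum()

# Data strongly favouring theta around 0.9: 90 successes in 100 trials.
k, n = 90, 100
log_lik = k * np.log(theta) + (n - k) * np.log(1 - theta)
lik = np.exp(log_lik - log_lik.max())

# The posterior is proportional to prior * likelihood; it stays exactly zero
# wherever the prior is zero, no matter how strong the likelihood is there.
post = prior * lik
post /= post.sum()

print("posterior mass on theta > 0.5:", post[theta > 0.5].sum())  # 0.0
print("posterior mode:", theta[np.argmax(post)])  # 0.5, the edge of the prior's support
```

No matter how strongly the data favour $\theta \approx 0.9$, the posterior cannot escape the support the prior allowed, which is why the prior itself has to be reconsidered - i.e., the model checked - rather than merely updated.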
Why is a Bayesian not allowed to look at the residuals?
They can look but not touch. After all, the residuals are the part of the data that don't carry any information about model parameters, and their prior expresses all uncertainty about those—they can't
Why is a Bayesian not allowed to look at the residuals? They can look but not touch. After all, the residuals are the part of the data that don't carry any information about model parameters, and their prior expresses all uncertainty about those—they can't change their prior based on what they see in the data. For example, suppose you're fitting a Gaussian model, but notice far too much kurtosis in the residuals. Perhaps your prior hypothesis should have been a t-distribution with non-zero probability over low degrees of freedom, but it wasn't—it was effectively a t-distribution with zero probability everywhere except on infinite degrees of freedom. Nothing in the likelihood can result in non-zero probabilities over regions of the posterior density where the prior density is zero. So the notion of continually updating priors based on likelihoods from data doesn't work when the original prior is mis-specified. Of course if you Google "Bayesian model checking", you'll see this is a parody of actual Bayesian practice; still, it does represent something of a difficulty for Logic of Science-type arguments for the superiority of Bayesianism on philosophical grounds. Andrew Gelman's blog is interesting on this topic.
Why is a Bayesian not allowed to look at the residuals? They can look but not touch. After all, the residuals are the part of the data that don't carry any information about model parameters, and their prior expresses all uncertainty about those—they can't
4,892
How to do logistic regression subset selection?
Stepwise and "all subsets" methods are generally bad. See Stopping Stepwise: Why Stepwise Methods are Bad and what you Should Use by David Cassell and myself (we used SAS, but the lesson applies) or Frank Harrell's Regression Modeling Strategies. If you need an automatic method, I recommend LASSO or LAR. A LASSO package for logistic regression is available here; another interesting article is on the iterated LASSO for logistic regression.
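As a hedged illustration of the LASSO recommendation (not the specific package linked above), here is a Python sketch using scikit-learn's L1-penalized logistic regression with the penalty strength chosen by cross-validation; the dataset is synthetic and purely for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Synthetic binary-classification data: 20 candidate predictors, only a few informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

# L1 (LASSO-type) penalty; the penalty strength is chosen by 5-fold cross-validation.
# Coefficients the penalty drives exactly to zero are effectively dropped from the model.
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=5,
                             scoring="neg_log_loss", random_state=0).fit(X, y)

selected = np.flatnonzero(model.coef_.ravel() != 0)
print("selected predictors:", selected)
```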
How to do logistic regression subset selection?
Stepwise and "all subsets" methods are generally bad. See Stopping Stepwise: Why Stepwise Methods are Bad and what you Should Use by David Cassell and myself (we used SAS, but the lesson applies) or
How to do logistic regression subset selection? Stepwise and "all subsets" methods are generally bad. See Stopping Stepwise: Why Stepwise Methods are Bad and what you Should Use by David Cassell and myself (we used SAS, but the lesson applies) or Frank Harrell's Regression Modeling Strategies. If you need an automatic method, I recommend LASSO or LAR. A LASSO package for logistic regression is available here; another interesting article is on the iterated LASSO for logistic regression.
How to do logistic regression subset selection? Stepwise and "all subsets" methods are generally bad. See Stopping Stepwise: Why Stepwise Methods are Bad and what you Should Use by David Cassell and myself (we used SAS, but the lesson applies) or
4,893
How to do logistic regression subset selection?
First of all, $R^2$ is not an appropriate goodness-of-fit measure for logistic regression; take an information criterion such as $AIC$ or $BIC$ as a good alternative. Logistic regression is estimated by the maximum likelihood method, so leaps is not used directly here. An extension of leaps to glm() functions is the bestglm package (as usual, the recommendation is to consult its vignettes). You may also be interested in the article by David W. Hosmer, Borko Jovanovic and Stanley Lemeshow, Best Subsets Logistic Regression // Biometrics Vol. 45, No. 4 (Dec., 1989), pp. 1265-1270 (usually accessible through university networks).
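A small sketch of the underlying idea - exhaustive subset search over a logistic model scored by an information criterion - written in Python with statsmodels rather than the bestglm package itself; the data-generating process and variable names are made up for illustration.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
# Only x1 and x2 actually matter in this toy data-generating process.
p = 1 / (1 + np.exp(-(0.8 * X["x1"] - 1.2 * X["x2"])))
y = rng.binomial(1, p)

# Exhaustive search over predictor subsets, each fit by maximum likelihood
# and scored by AIC (BIC works the same way via fit.bic).
results = []
for k in range(1, 5):
    for subset in itertools.combinations(X.columns, k):
        design = sm.add_constant(X[list(subset)])
        fit = sm.Logit(y, design).fit(disp=0)
        results.append((fit.aic, subset))

best_aic, best_subset = min(results)
print("best subset by AIC:", best_subset, "AIC =", round(best_aic, 1))
```

With more than a dozen or so predictors the exhaustive search becomes expensive, which is one reason penalized approaches such as the LASSO are often preferred.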
How to do logistic regression subset selection?
First of all $R^2$ is not an appropriate goodness-of-fit measure for logistic regression, take an information criterion $AIC$ or $BIC$, for example, as a good alternative. Logistic regression is estim
How to do logistic regression subset selection? First of all, $R^2$ is not an appropriate goodness-of-fit measure for logistic regression; take an information criterion such as $AIC$ or $BIC$ as a good alternative. Logistic regression is estimated by the maximum likelihood method, so leaps is not used directly here. An extension of leaps to glm() functions is the bestglm package (as usual, the recommendation is to consult its vignettes). You may also be interested in the article by David W. Hosmer, Borko Jovanovic and Stanley Lemeshow, Best Subsets Logistic Regression // Biometrics Vol. 45, No. 4 (Dec., 1989), pp. 1265-1270 (usually accessible through university networks).
How to do logistic regression subset selection? First of all $R^2$ is not an appropriate goodness-of-fit measure for logistic regression, take an information criterion $AIC$ or $BIC$, for example, as a good alternative. Logistic regression is estim
4,894
How to do logistic regression subset selection?
One idea would be to use a random forest and then use the variable importance measures it outputs to choose your best 8 variables. Another idea would be to use the "boruta" package to repeat this process a few hundred times to find the 8 variables that are consistently most important to the model.
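A rough Python sketch of the first idea; this is not the boruta package, it simply averages scikit-learn's random-forest importances over repeated fits with different seeds and keeps the top 8, on synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

# Repeat the fit over several random seeds and average the importances -
# a crude stand-in for the repeated runs that the Boruta approach formalizes.
importances = np.zeros(X.shape[1])
n_repeats = 25
for seed in range(n_repeats):
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    importances += rf.feature_importances_
importances /= n_repeats

top8 = np.argsort(importances)[::-1][:8]
print("8 consistently most important variables:", top8)
```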
How to do logistic regression subset selection?
One idea would be to use a random forest and then use the variable importance measures it outputs to choose your best 8 variables. Another idea would be to use the "boruta" package to repeat this pro
How to do logistic regression subset selection? One idea would be to use a random forest and then use the variable importance measures it outputs to choose your best 8 variables. Another idea would be to use the "boruta" package to repeat this process a few hundred times to find the 8 variables that are consistently most important to the model.
How to do logistic regression subset selection? One idea would be to use a random forest and then use the variable importance measures it outputs to choose your best 8 variables. Another idea would be to use the "boruta" package to repeat this pro
4,895
How to do logistic regression subset selection?
stats::step function or the more general MASS::stepAIC function supports lm, glm (including logistic regression) and aov family models.
How to do logistic regression subset selection?
stats::step function or the more general MASS::stepAIC function supports lm, glm (including logistic regression) and aov family models.
How to do logistic regression subset selection? stats::step function or the more general MASS::stepAIC function supports lm, glm (including logistic regression) and aov family models.
How to do logistic regression subset selection? stats::step function or the more general MASS::stepAIC function supports lm, glm (including logistic regression) and aov family models.
4,896
Why is softmax output not a good uncertainty measure for Deep Learning models?
This question can be answered more precisely than the current answers. Quantifying and correcting the deviation between the predicted probabilities (the output of the softmax layer of a neural network) and the true probabilities (which represent a notion of confidence) is known as calibration, and it is commonly assessed with reliability curves. The issue with many deep neural networks is that, although they tend to perform well for prediction, the predicted probabilities produced by the output of a softmax layer cannot reliably be used as the true probabilities (as a confidence for each label). In practice, they tend to be too high - neural networks are 'too confident' in their predictions. Chuan Guo et al., working with Kilian Weinberger, developed an effective solution for calibrating the predicted probabilities of neural networks in this paper: On Calibration of Modern Neural Networks[1] This paper also explains how predicted probabilities can be interpreted as confidence measures when the predicted probabilities are correctly calibrated. [1] On Calibration of Modern Neural Networks, https://arxiv.org/abs/1706.04599
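As a hedged sketch in the spirit of the calibration method from that paper (temperature scaling), here is a small numpy/scipy example that fits a single temperature on made-up validation logits; the numbers are synthetic stand-ins for a real network's outputs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, T=1.0):
    # Temperature-scaled softmax; subtracting the row max avoids overflow.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Made-up validation logits and labels standing in for a trained network's output.
rng = np.random.default_rng(0)
n_classes = 5
labels = rng.integers(0, n_classes, size=1000)
logits = rng.normal(size=(1000, n_classes))
logits[np.arange(1000), labels] += 2.0   # correct class tends to score higher
logits *= 3.0                            # overconfident: logits too spread out

def nll(T):
    # Negative log-likelihood of the labels under the temperature-scaled softmax.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

# A single temperature T > 1 softens the softmax without changing the argmax,
# so accuracy is untouched while the confidence values become better calibrated.
T_opt = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
print("fitted temperature:", round(float(T_opt), 2))
print("mean max-probability before:", round(float(softmax(logits).max(axis=1).mean()), 3))
print("mean max-probability after :", round(float(softmax(logits, T_opt).max(axis=1).mean()), 3))
```

Because dividing the logits by a single $T > 1$ does not change which class has the largest score, accuracy is unchanged while the reported confidences shrink toward better-calibrated values.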
Why is softmax output not a good uncertainty measure for Deep Learning models?
This question can be answered more precisely than the current answers. Fixing the deviation between the predicted probabilities (the output of the softmax layer of a neural network) and their true pro
Why is softmax output not a good uncertainty measure for Deep Learning models? This question can be answered more precisely than the current answers. Quantifying and correcting the deviation between the predicted probabilities (the output of the softmax layer of a neural network) and the true probabilities (which represent a notion of confidence) is known as calibration, and it is commonly assessed with reliability curves. The issue with many deep neural networks is that, although they tend to perform well for prediction, the predicted probabilities produced by the output of a softmax layer cannot reliably be used as the true probabilities (as a confidence for each label). In practice, they tend to be too high - neural networks are 'too confident' in their predictions. Chuan Guo et al., working with Kilian Weinberger, developed an effective solution for calibrating the predicted probabilities of neural networks in this paper: On Calibration of Modern Neural Networks[1] This paper also explains how predicted probabilities can be interpreted as confidence measures when the predicted probabilities are correctly calibrated. [1] On Calibration of Modern Neural Networks, https://arxiv.org/abs/1706.04599
Why is softmax output not a good uncertainty measure for Deep Learning models? This question can be answered more precisely than the current answers. Fixing the deviation between the predicted probabilities (the output of the softmax layer of a neural network) and their true pro
4,897
Why is softmax output not a good uncertainty measure for Deep Learning models?
The relationship between softmax confidence and uncertainty is more complicated than a lot of work makes it sound. Firstly, there are two separate issues that often get conflated. Calibration - Does 90% softmax confidence mean it is correct 90% of the time? This is evaluated over the training distribution. We are interested in the absolute confidence values. Uncertainty - Does softmax confidence reduce when the network doesn't know something? This is evaluated by comparing softmax confidence on the training distribution to some other data (often called out-of-distribution, OOD). If over the training distribution softmax confidence is in the range 92-100%, on OOD data it should be <92%. We are interested in the relative confidence values. Calibration. Deep neural networks typically output very high softmax confidence for any input (say >95%), and are known to be poorly calibrated. As far as I know this is fairly uncontroversial. The classic reference: 'On Calibration of Modern Neural Networks' by Guo et al. Uncertainty. This issue is less clear-cut. There are well-known ways to make softmax confidence fail, such as magnifying an input, or creating adversarial examples. Softmax confidence also conflates two different sources of uncertainty (aleatoric & epistemic). These counterexamples have drawn a lot of attention, leading to claims (made with varying strength) that softmax confidence $\neq$ uncertainty. What's sometimes forgotten in light of these failure modes is that naively interpreting softmax confidence as uncertainty actually performs pretty well on many uncertainty tasks. Moreover, a lot of methods that claim to 'capture uncertainty' generally don't beat softmax confidence by all that much. The paper, 'Understanding Softmax Confidence and Uncertainty' by Pearce et al., investigates why softmax confidence performs reasonably in these uncertainty benchmarks, describing two properties of unmodified neural networks that, in certain situations, seem to help softmax confidence $\approx$ uncertainty.
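For the calibration side, a common summary is the expected calibration error; below is a minimal numpy sketch with synthetic confidences and outcomes standing in for a real model (the roughly 95%-confident / 80%-accurate numbers are invented to mimic the overconfidence described above).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average gap between mean confidence and accuracy per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of points in the bin
    return ece

# Synthetic example: a model reporting ~95% confidence but only ~80% accuracy.
rng = np.random.default_rng(0)
confidences = np.clip(rng.normal(0.95, 0.03, size=5000), 0, 1)
correct = rng.binomial(1, 0.80, size=5000).astype(float)

print("ECE:", round(expected_calibration_error(confidences, correct), 3))
```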
Why is softmax output not a good uncertainty measure for Deep Learning models?
The relationship between softmax confidence and uncertainty is more complicated than a lot of work makes it sound. Firstly, there are two separate issues that often get conflated. Callibration - Does
Why is softmax output not a good uncertainty measure for Deep Learning models? The relationship between softmax confidence and uncertainty is more complicated than a lot of work makes it sound. Firstly, there are two separate issues that often get conflated. Calibration - Does 90% softmax confidence mean it is correct 90% of the time? This is evaluated over the training distribution. We are interested in the absolute confidence values. Uncertainty - Does softmax confidence reduce when the network doesn't know something? This is evaluated by comparing softmax confidence on the training distribution to some other data (often called out-of-distribution, OOD). If over the training distribution softmax confidence is in the range 92-100%, on OOD data it should be <92%. We are interested in the relative confidence values. Calibration. Deep neural networks typically output very high softmax confidence for any input (say >95%), and are known to be poorly calibrated. As far as I know this is fairly uncontroversial. The classic reference: 'On Calibration of Modern Neural Networks' by Guo et al. Uncertainty. This issue is less clear-cut. There are well-known ways to make softmax confidence fail, such as magnifying an input, or creating adversarial examples. Softmax confidence also conflates two different sources of uncertainty (aleatoric & epistemic). These counterexamples have drawn a lot of attention, leading to claims (made with varying strength) that softmax confidence $\neq$ uncertainty. What's sometimes forgotten in light of these failure modes is that naively interpreting softmax confidence as uncertainty actually performs pretty well on many uncertainty tasks. Moreover, a lot of methods that claim to 'capture uncertainty' generally don't beat softmax confidence by all that much. The paper, 'Understanding Softmax Confidence and Uncertainty' by Pearce et al., investigates why softmax confidence performs reasonably in these uncertainty benchmarks, describing two properties of unmodified neural networks that, in certain situations, seem to help softmax confidence $\approx$ uncertainty.
Why is softmax output not a good uncertainty measure for Deep Learning models? The relationship between softmax confidence and uncertainty is more complicated than a lot of work makes it sound. Firstly, there are two separate issues that often get conflated. Callibration - Does
4,898
Why is softmax output not a good uncertainty measure for Deep Learning models?
Softmax distributes the 'probability' 0-1 between the available classes. It does not express uncertainty; it is not a probability density function. If you want to express uncertainty you should be looking into Bayesian neural networks. Have a look at this paper: Uncertainty in Deep Learning. Some rather recent probabilistic programming frameworks: TensorFlow Probability, Edward, Pyro (PyTorch). There is also an interesting keynote talk by Zoubin Ghahramani (University of Cambridge). Have a look at this paper too: Mixture Density Networks - I guess you can implement it and add it as a final layer to a convnet. If you do implement it, don't forget that sharing is caring ;-) Good luck
Why is softmax output not a good uncertainty measure for Deep Learning models?
Softmax distributes the 'probability' 0-1 between the available classes. It does not express incertitude, it is not a PDF function. If you want to express the incertitude you should be looking into b
Why is softmax output not a good uncertainty measure for Deep Learning models? Softmax distributes the 'probability' 0-1 between the available classes. It does not express uncertainty; it is not a probability density function. If you want to express uncertainty you should be looking into Bayesian neural networks. Have a look at this paper: Uncertainty in Deep Learning. Some rather recent probabilistic programming frameworks: TensorFlow Probability, Edward, Pyro (PyTorch). There is also an interesting keynote talk by Zoubin Ghahramani (University of Cambridge). Have a look at this paper too: Mixture Density Networks - I guess you can implement it and add it as a final layer to a convnet. If you do implement it, don't forget that sharing is caring ;-) Good luck
Why is softmax output not a good uncertainty measure for Deep Learning models? Softmax distributes the 'probability' 0-1 between the available classes. It does not express incertitude, it is not a PDF function. If you want to express the incertitude you should be looking into b
4,899
Why is softmax output not a good uncertainty measure for Deep Learning models?
In the paper Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, Yarin Gal and Zoubin Ghahramani argue the following: In classification, predictive probabilities obtained at the end of the pipeline (the softmax output) are often erroneously interpreted as model confidence. A model can be uncertain in its predictions even with a high softmax output (fig. 1). Passing a point estimate of a function (solid line 1a) through a softmax (solid line 1b) results in extrapolations with unjustified high confidence for points far from the training data. $x^*$ for example would be classified as class 1 with probability 1. Here's figure 1. So, if we interpret the outputs of the softmax as model uncertainty or confidence, the model is highly confident for point $x^*$, even though no training data was observed in that region, but this can be misleading, because the true function, in that region, could be completely different from the learned one (the solid black line).
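The method that paper proposes, Monte Carlo dropout, keeps dropout active at prediction time and uses the spread of the resulting softmax outputs as an uncertainty signal. Below is a hedged PyTorch sketch with an untrained toy network and an arbitrary far-away input point, intended only to show the mechanics.

```python
import torch
import torch.nn as nn

# Tiny classifier with dropout; the weights are random, purely for illustration.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

x_star = torch.tensor([[5.0, -3.0]])  # a point far from any (hypothetical) training data

# Single deterministic pass (dropout off): one softmax vector, often overconfident.
model.eval()
with torch.no_grad():
    p_single = torch.softmax(model(x_star), dim=1)

# MC dropout: keep dropout active at test time and average many stochastic passes.
# The spread across passes gives an approximate measure of model uncertainty.
model.train()  # enables dropout layers
with torch.no_grad():
    samples = torch.stack([torch.softmax(model(x_star), dim=1) for _ in range(200)])

print("single-pass softmax:", p_single.numpy().round(3))
print("MC-dropout mean    :", samples.mean(dim=0).numpy().round(3))
print("MC-dropout std     :", samples.std(dim=0).numpy().round(3))
```

The single pass returns one confident-looking softmax vector, while the MC-dropout standard deviation can be large at points like $x^*$ - the uncertainty signal that a single softmax output hides.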
Why is softmax output not a good uncertainty measure for Deep Learning models?
In the paper Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, Yarin Gal and Zoubin Ghahramani argue the following In classification, predictive probabilities obta
Why is softmax output not a good uncertainty measure for Deep Learning models? In the paper Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, Yarin Gal and Zoubin Ghahramani argue the following: In classification, predictive probabilities obtained at the end of the pipeline (the softmax output) are often erroneously interpreted as model confidence. A model can be uncertain in its predictions even with a high softmax output (fig. 1). Passing a point estimate of a function (solid line 1a) through a softmax (solid line 1b) results in extrapolations with unjustified high confidence for points far from the training data. $x^*$ for example would be classified as class 1 with probability 1. Here's figure 1. So, if we interpret the outputs of the softmax as model uncertainty or confidence, the model is highly confident for point $x^*$, even though no training data was observed in that region, but this can be misleading, because the true function, in that region, could be completely different from the learned one (the solid black line).
Why is softmax output not a good uncertainty measure for Deep Learning models? In the paper Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, Yarin Gal and Zoubin Ghahramani argue the following In classification, predictive probabilities obta
4,900
Why is softmax output not a good uncertainty measure for Deep Learning models?
What is called softmax in ML has the same form as the multinomial logistic equation. The latter can be used to calculate the probabilities. In practice it is widely used in the estimation of default probabilities in a competing-risks framework for mortgages, e.g. see Eq. 4 in this paper. Hence, I would say that your intuition is not completely off the mark. However, in the above mortgage modeling example the dependent variable is the probability metric of loan defaults. You have a pool of mortgages and observe the number of defaults. A single mortgage can either be current or in default; the probability of its default is not observable. We only observe the discrete events. However, we do model the probabilities. How is this different from machine learning? It depends. I could decide to apply it to mortgage defaults, then it wouldn't be much different at all. On the other hand, in different applications this may not work. If you're not modeling the probability explicitly as in my example, then your model output may not represent the probability appropriately.
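For reference, the correspondence relied on here, with $z_k = x^\top \beta_k$ denoting the linear score for class $k$:
$$
P(y = k \mid x) \;=\; \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}} \;=\; \frac{e^{x^\top \beta_k}}{\sum_{j=1}^{K} e^{x^\top \beta_j}}, \qquad k = 1, \dots, K,
$$
which is exactly the softmax. The usual multinomial logit model additionally fixes one class's coefficients (say $\beta_K = 0$) for identifiability, whereas a neural network's softmax layer leaves that parameterization redundant.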
Why is softmax output not a good uncertainty measure for Deep Learning models?
What is called softmax in ML has the same equation as multinomial logistic equation. The latter can be used to calculate the probabilities. In practice it is widely used in estimation of default proba
Why is softmax output not a good uncertainty measure for Deep Learning models? What is called softmax in ML has the same form as the multinomial logistic equation. The latter can be used to calculate the probabilities. In practice it is widely used in the estimation of default probabilities in a competing-risks framework for mortgages, e.g. see Eq. 4 in this paper. Hence, I would say that your intuition is not completely off the mark. However, in the above mortgage modeling example the dependent variable is the probability metric of loan defaults. You have a pool of mortgages and observe the number of defaults. A single mortgage can either be current or in default; the probability of its default is not observable. We only observe the discrete events. However, we do model the probabilities. How is this different from machine learning? It depends. I could decide to apply it to mortgage defaults, then it wouldn't be much different at all. On the other hand, in different applications this may not work. If you're not modeling the probability explicitly as in my example, then your model output may not represent the probability appropriately.
Why is softmax output not a good uncertainty measure for Deep Learning models? What is called softmax in ML has the same equation as multinomial logistic equation. The latter can be used to calculate the probabilities. In practice it is widely used in estimation of default proba