idx | question | answer
---|---|---
5,301 | What is the difference between errors and residuals? | An error is the difference between the observed value and the true value (very often unobserved, generated by the data-generating process, DGP).
A residual is the difference between the observed value and the value predicted by the model.
5,302 | What is the difference between errors and residuals? | The error term is a theoretical concept that can never be observed, whereas the residual is a real-world value that is calculated each time a regression is run.
5,303 | What is the difference between errors and residuals? | The errors of a data set are the differences between the observed values and the true (unobserved) values. Residuals are calculated after fitting the regression model and are the differences between the observed values and the estimated values.
5,304 | What is the difference between errors and residuals? | The error term is an unknown value that could never be known unless the DGP is known. Therefore, theoretically, one can generate a variable $x$ from, say, a normal random variable and the error from another normal random variable, then construct the variable $y$ as follows:
$$
y_t=\beta x_t+e_t
$$
Here, $e_t$ stands for the error term, the difference between the true value $y_t$ and the expected value $\beta x_t$.
$\beta$ is unknown; once it is estimated, we get
$$
y_t=\hat{\beta} x_t+\hat{e}_t
$$
Then $\hat{e}_t$ is no longer the error; it is the residual, the difference between the true value $y_t$ and the estimated value $\hat{\beta} x_t:=\hat{y}_t$.
This parallels another question: what is the difference between the mean squared error (MSE) and the mean squared residual (MSR)? In practice there is rarely anything explicitly called the MSR,
$$
MSR=\frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^{2},
$$
because many practitioners treat the two as the same thing. The MSE is a theoretical quantity that is always translated to the MSR in practice, since the distinction between theory (errors) and practice (residuals) is rarely drawn:
$$
MSE=E(e_{t}^{2})
$$
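A short simulation makes the distinction concrete. The sketch below is only illustrative (it assumes NumPy; the sample size, coefficient and noise scale are arbitrary choices): because the DGP is simulated, the true errors are available and can be compared with the residuals of a fitted no-intercept OLS model.

```python
import numpy as np

# Illustrative only: simulate a known DGP y_t = beta * x_t + e_t, fit beta by OLS,
# and compare the (normally unobservable) errors with the residuals.
rng = np.random.default_rng(0)
n, beta = 200, 2.0

x = rng.normal(size=n)
e = rng.normal(scale=0.5, size=n)         # true errors, known only because we simulate the DGP
y = beta * x + e

beta_hat = np.sum(x * y) / np.sum(x * x)  # OLS estimate for the no-intercept model
resid = y - beta_hat * x                  # residuals: observed minus fitted values

print(np.mean(e ** 2))                    # sample analogue of the theoretical MSE, E[e_t^2]
print(np.mean(resid ** 2))                # mean squared residual (what software usually reports)
```

The two printed quantities are close in large samples but are conceptually different, which is the point of the answer above.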
5,305 | Eliciting priors from experts | John Cook gives some interesting recommendations. Basically, get percentiles/quantiles (not means or obscure scale parameters!) from the experts, and fit them with the appropriate distribution.
http://www.johndcook.com/blog/2010/01/31/parameters-from-percentiles/
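As a concrete illustration of the quantile-matching idea, the sketch below (assuming SciPy is available; the elicited percentiles and the choice of a gamma family are hypothetical) solves for the distribution whose 10th and 90th percentiles match the expert's statements.

```python
import numpy as np
from scipy import optimize, stats

q10, q90 = 2.0, 15.0   # hypothetical elicited 10th and 90th percentiles

def quantile_gap(log_params):
    # work on the log scale so the shape and scale stay positive
    shape, scale = np.exp(log_params)
    return [stats.gamma.ppf(0.10, shape, scale=scale) - q10,
            stats.gamma.ppf(0.90, shape, scale=scale) - q90]

sol = optimize.root(quantile_gap, x0=np.log([2.0, 3.0]))
shape, scale = np.exp(sol.x)
print(shape, scale)
print(stats.gamma.ppf([0.10, 0.90], shape, scale=scale))  # should reproduce q10, q90
```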
5,306 | Eliciting priors from experts | I am currently researching the trial roulette method for my master's thesis as an elicitation technique. This is a graphical method that allows an expert to represent her subjective probability distribution for an uncertain quantity.
Experts are given counters (or what one can think of as casino chips) representing equal probability masses whose total sums to 1 - for example, 20 chips of probability 0.05 each. They are then instructed to arrange them on a pre-printed grid, with bins representing result intervals. Each column represents their belief about the probability of the result falling in the corresponding bin.
Example: A student is asked to predict the mark in a future exam. The figure below shows a completed grid for the elicitation of a subjective probability distribution. The horizontal axis of the grid shows the possible bins (or mark intervals) that the student was asked to consider. The numbers in the top row record the number of chips per bin. The completed grid (using a total of 20 chips) shows that the student believes there is a 30% chance that the mark will be between 60 and 64.9.
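In code, a completed grid maps directly to an elicited discrete distribution. The sketch below assumes NumPy; the bin labels and most chip counts are invented for illustration, chosen so that the 60-64.9 bin holds 6 of the 20 chips, matching the 30% in the example.

```python
import numpy as np

bins = ["50-54.9", "55-59.9", "60-64.9", "65-69.9", "70-74.9"]  # illustrative mark intervals
chips = np.array([2, 5, 6, 5, 2])                               # chips placed per bin (20 in total)

probs = chips / chips.sum()           # equal-mass chips, so probabilities are just proportions
assert np.isclose(probs.sum(), 1.0)   # coherence: using all chips forces the total to one

for interval, p in zip(bins, probs):
    print(f"P(mark in {interval}) = {p:.2f}")
```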
Some reasons in favour of using this technique are:
Many questions about the shape of the expert's subjective probability distribution can be answered without the need to pose a long series of questions to the expert - the statistician can simply read off the density above or below any given point, or between any two points.
During the elicitation process, the experts can move the chips around if they are unsatisfied with their initial placement - thus they can be confident in the final result they submit.
It forces the expert to be coherent in the set of probabilities that are provided. If all the chips are used, the probabilities must sum to one.
Graphical methods seem to provide more accurate results, especially for participants with modest levels of statistical sophistication.
5,307 | Eliciting priors from experts | Eliciting priors is a tricky business.
Statistical Methods for Eliciting Probability Distributions and Eliciting Probability Distributions are quite good practical guides for prior elicitation. The process in both papers is outlined as follows:
background and preparation;
identifying and recruiting the expert(s);
motivating and training the expert(s);
structuring and decomposition (typically deciding precisely what variables should be elicited, and how to elicit joint distributions in the multivariate case);
the elicitation itself.
Of course, they also review how the elicitation results in information that may be fitted to or otherwise used to define distributions (for instance, in the Bayesian context, Beta distributions), but quite importantly, they also address common pitfalls in modeling expert knowledge (anchoring, overly narrow and thin-tailed distributions, etc.).
5,308 | Eliciting priors from experts | I'd recommend letting the subject-matter expert specify the mean or mode of the prior, but I'd feel free to adjust the scale they give. Most people are not very good at quantifying variance.
And I would definitely not let the expert determine the distribution family, in particular the tail thickness. For example, suppose you need a symmetric distribution for a prior. No one can specify their subjective belief so finely as to distinguish a normal distribution from, say, a Student-t distribution with 5 degrees of freedom. But in some contexts the t(5) prior is much more robust than the normal prior. In short, I think the choice of tail thickness is a technical statistical matter, not a matter of quantifying expert opinion.
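To see how much tail thickness can matter even when the centre and spread look identical to an expert, the sketch below (SciPy; purely illustrative) rescales a Student-t(5) prior to match a standard normal on the interquartile range and compares tail probabilities.

```python
from scipy import stats

# Match the 25%/75% quantiles of a t(5) prior to those of a standard normal prior.
scale_t = stats.norm.ppf(0.75) / stats.t.ppf(0.75, df=5)

for k in (2, 4, 6):
    p_norm = 2 * stats.norm.sf(k)             # P(|X| > k) under the normal prior
    p_t5 = 2 * stats.t.sf(k / scale_t, df=5)  # same event under the rescaled t(5) prior
    print(f"P(|X| > {k}): normal = {p_norm:.1e}, t(5) = {p_t5:.1e}")
```

The interquartile ranges agree, yet the t(5) prior puts orders of magnitude more mass far from the centre, which is what gives it its robustness.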
5,309 | Eliciting priors from experts | This interesting question is the subject of some research in ACERA. The lead researcher is Andrew Speirs-Bridge, and his work is eminently google-able :)
5,310 | Eliciting priors from experts | This is the most up-to-date reference, with a clear interpretation for the expert, alleviating the task of understanding how complex model parameters affect the observed data.
http://www.auai.org/uai2020/proceedings/470_main_paper.pdf
5,311 | Difference between GradientDescentOptimizer and AdamOptimizer (TensorFlow)? | The tf.train.AdamOptimizer uses Kingma and Ba's Adam algorithm to control the learning rate. Adam offers several advantages over the simple tf.train.GradientDescentOptimizer. Foremost is that it uses moving averages of the gradients (momentum); Bengio discusses the reasons why this is beneficial in Section 3.1.1 of this paper. Simply put, this enables Adam to use a larger effective step size, and the algorithm will converge to this step size without fine-tuning.
The main downside of the algorithm is that Adam requires more computation to be performed for each parameter in each training step (to maintain the moving averages and variance, and to calculate the scaled gradient), and more state to be retained for each parameter (approximately tripling the size of the model in order to store the average and variance for each parameter). A simple tf.train.GradientDescentOptimizer could equally be used in your MLP, but it would require more hyperparameter tuning before it would converge as quickly.
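A minimal sketch of how the two optimizers are swapped in practice, assuming the TensorFlow 1.x tf.train API that the answer refers to (in TF 2.x the equivalents live under tf.keras.optimizers); the toy graph and learning rates are illustrative.

```python
import tensorflow as tf

# Toy scalar regression graph; only the optimizer line differs between the two set-ups.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(y - w * x))

# Plain gradient descent: minimal state, but the learning rate needs careful tuning.
train_sgd = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)

# Adam: keeps per-parameter moving averages, so a default step size usually works well.
train_adam = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
```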
5,312 | Features for time series classification | Simple statistical features
Means in each of the $d$ dimensions
Standard deviations of the $d$ dimensions
Skewness, Kurtosis and Higher order moments of the $d$ dimensions
Maximum and Minimum values
Time series analysis related features
The $d \times (d-1)$ cross-correlations between each pair of dimensions and the $d$ auto-correlations
Orders of the autoregressive (AR), integrated (I) and moving average (MA) part of an estimated ARIMA model
Parameters of the AR part
Parameters of the MA part
Frequency domain related features
See Morchen03 for a study of energy preserving features on DFT and DWT
frequencies of the $k$ peaks in amplitude in the DFTs for the detrended $d$ dimensions
$k$-quantiles of these DFTs
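A sketch of how a few of the listed features could be computed for one dimension of a series, assuming NumPy/SciPy; the random-walk signal is a stand-in for real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))   # toy series standing in for one dimension

features = {
    "mean": np.mean(x),
    "std": np.std(x),
    "skewness": stats.skew(x),
    "kurtosis": stats.kurtosis(x),
    "min": np.min(x),
    "max": np.max(x),
    "autocorr_lag1": np.corrcoef(x[:-1], x[1:])[0, 1],   # lag-1 auto-correlation
}

# frequency of the largest DFT peak of the linearly detrended series (cycles per sample)
t = np.arange(len(x))
detrended = x - np.polyval(np.polyfit(t, x, 1), t)
spectrum = np.abs(np.fft.rfft(detrended))
features["dft_peak_freq"] = np.fft.rfftfreq(len(x))[np.argmax(spectrum[1:]) + 1]
print(features)
```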
5,313 | Features for time series classification | As the other answers suggested, there is a huge number of time series characteristics that can be used as potential features. There are simple features such as the mean, time-series-related features such as the coefficients of an AR model, or highly sophisticated features such as the test statistic of the augmented Dickey-Fuller hypothesis test.
Comprehensive overview of possible time series features
The Python package tsfresh automates the extraction of those features. Its documentation describes the different calculated features. You can find the page with the calculated features here.
Disclaimer: I am one of the authors of tsfresh.
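A sketch of typical tsfresh usage; the column names and the tiny long-format frame are invented for illustration, and the call signature is the documented one for extract_features (worth checking against the version you install).

```python
import pandas as pd
from tsfresh import extract_features

# Long format: one row per observation, an id column per series, a time column for ordering.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "time":  [0, 1, 2, 0, 1, 2],
    "value": [0.1, 0.5, 0.2, 1.1, 0.9, 1.4],
})

# One row of extracted features per series id, ready to feed to a classifier.
X = extract_features(df, column_id="id", column_sort="time")
print(X.shape)
```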
5,314 | Features for time series classification | Emile, I think the features listed in your answer are pretty good starting points, though as always, I think some domain expertise (or at least a good long think) about your problem is equally important.
You may want to consider including features calculated from the derivatives (or integrals) of your signal. For example, I would wager that rapid acceleration/deceleration is a reasonably good predictor of accident-prone driving. That information is obviously still present in the position signal, but it's not nearly as explicit.
You may also want to consider replacing the Fourier coefficients with a wavelet or wavelet packet representation. The major advantage of wavelets is that they allow you to localize a feature in both frequency and time, while the traditional Fourier coefficients are localized in frequency only. This might be particularly useful if your data contains components that switch on/off irregularly or has square-wave-like pulses that can be problematic for Fourier methods.
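A sketch of both suggestions, assuming NumPy and the PyWavelets package; the toy position signal and the db4 wavelet are arbitrary illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
position = np.cumsum(np.cumsum(rng.normal(size=1024)))  # toy position signal

# Derivative-based features via finite differences.
acceleration = np.diff(position, n=2)
deriv_features = [np.max(np.abs(acceleration)), np.std(acceleration)]

# Energy of each level of a discrete wavelet decomposition (localized in time and frequency).
coeffs = pywt.wavedec(position, "db4", level=4)
wavelet_energies = [float(np.sum(c ** 2)) for c in coeffs]
print(deriv_features, wavelet_energies)
```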
5,315 | Features for time series classification | I suggest that, instead of using classic approaches for extracting hand-engineered features, you utilise autoencoders. Autoencoders play an important role in feature extraction for deep learning architectures.
The autoencoder tries to learn a function $f(X_T)≈X_T$. In other words, it is trying to learn an approximation to the identity function, so as to output $\hat X_T$ that is similar to $X_T$.
The identity function seems a particularly trivial function to be trying to learn; but by placing constraints on the network, such as by limiting the number of hidden units, we can discover interesting structure in the data.
In this way, your desired $\phi(X_T)=v_1,\dots,v_D \in \mathbb{R}$ will be equivalent to the output values of the middlemost layer of a deep autoencoder, if you limit the number of hidden units in that layer to $D$.
Additionally, you can use many flavors of autoencoder for finding the best solution to your problem.
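A minimal sketch of the idea in tf.keras (layer sizes, series length and the choice of a dense architecture are all illustrative assumptions): the D-unit bottleneck plays the role of $\phi(X_T)$.

```python
import numpy as np
import tensorflow as tf

T, D = 128, 8                                   # series length and feature dimension
inputs = tf.keras.Input(shape=(T,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
code = tf.keras.layers.Dense(D, activation="relu", name="features")(h)  # bottleneck
h = tf.keras.layers.Dense(64, activation="relu")(code)
outputs = tf.keras.layers.Dense(T)(h)           # reconstruction of the input series

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.default_rng(3).normal(size=(256, T)).astype("float32")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)   # learn f(X) ~ X

encoder = tf.keras.Model(inputs, code)          # phi(X_T): features for a downstream classifier
print(encoder.predict(X, verbose=0).shape)      # (256, D)
```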
5,316 | Features for time series classification | The linked paper will be somewhat enlightening, since it is interested in more or less the same issue in another context.
Paper abstract (in the Internet Archive)
Paper PDF
5,317 | Features for time series classification | Depending on the length of your time series, the usual approach is to epoch the data into segments, e.g. 10 secs.
However, often prior to breaking the time-series into segments it is necessary to perform some preprocessing such as filtering and artifact rejection.
You can then compute a variety of features, such as those based on frequency (i.e. take an FFT for each epoch), time (e.g. the mean, variance, etc. of the time series in that epoch) or morphology (i.e. the shape of the signal/time series in each epoch).
Usually the features used to classify segments (epochs) of a time series/signal are domain-specific, but wavelet/Fourier analysis are simply tools that allow you to examine your signal in the frequency/time-frequency domains rather than being features in themselves.
In a classification problem each epoch will have a class label, e.g. 'happy' or 'sad'; you would then train a classifier to distinguish between 'happy' and 'sad' epochs using the 6 features calculated for each epoch.
In the event that each time series represents a single case for classification, you need to calculate each feature across all samples of the time series. The FFT is only relevant here if the signal is linear time invariant (LTI), i.e. if the signal can be considered stationary over the whole time series; if the signal is not stationary over the period of interest, a wavelet analysis may be more appropriate. This approach means that each time series produces one feature vector and constitutes one case for classification.
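A sketch of the epoching step, assuming NumPy; the sampling rate, epoch length and the three per-epoch features are illustrative choices.

```python
import numpy as np

fs = 100                                  # samples per second (assumed)
epoch_len = 10 * fs                       # 10-second epochs
rng = np.random.default_rng(4)
signal = rng.normal(size=60 * fs)         # one minute of toy data

n_epochs = len(signal) // epoch_len
epochs = signal[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)

# Per-epoch features: mean, variance and dominant FFT frequency in Hz.
spectra = np.abs(np.fft.rfft(epochs, axis=1))
freqs = np.fft.rfftfreq(epoch_len, d=1 / fs)
features = np.column_stack([
    epochs.mean(axis=1),
    epochs.var(axis=1),
    freqs[np.argmax(spectra[:, 1:], axis=1) + 1],
])
print(features.shape)                     # (n_epochs, 3)
```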
5,318 | Features for time series classification | The TSFEL package provides this very comprehensive list of possible time series features. The source code shows how every feature is calculated in detail.
You can find a comprehensive list below:
* abs_energy(signal) Computes the absolute energy of the signal.
* auc(signal, fs) Computes the area under the curve of the signal computed with trapezoid rule.
* autocorr(signal) Computes autocorrelation of the signal.
* calc_centroid(signal, fs) Computes the centroid along the time axis.
* calc_max(signal) Computes the maximum value of the signal.
* calc_mean(signal) Computes mean value of the signal.
* calc_median(signal) Computes median of the signal.
* calc_min(signal) Computes the minimum value of the signal.
* calc_std(signal) Computes standard deviation (std) of the signal.
* calc_var(signal) Computes variance of the signal.
* distance(signal) Computes signal traveled distance.
* ecdf(signal[, d]) Computes the values of ECDF (empirical cumulative distribution function) along the time axis.
* ecdf_percentile(signal[, percentile]) Computes the percentile value of the ECDF.
* ecdf_percentile_count(signal[, percentile]) Computes the cumulative sum of samples that are less than the percentile.
* ecdf_slope(signal[, p_init, p_end]) Computes the slope of the ECDF between two percentiles.
* entropy(signal[, prob]) Computes the entropy of the signal using the Shannon Entropy.
* fft_mean_coeff(signal, fs[, nfreq]) Computes the mean value of each spectrogram frequency.
* fundamental_frequency(signal, fs) Computes fundamental frequency of the signal.
* hist(signal[, nbins, r]) Computes histogram of the signal.
* human_range_energy(signal, fs) Computes the human range energy ratio.
* interq_range(signal) Computes interquartile range of the signal.
* kurtosis(signal) Computes kurtosis of the signal.
* lpcc(signal[, n_coeff]) Computes the linear prediction cepstral coefficients.
* max_frequency(signal, fs) Computes maximum frequency of the signal.
* max_power_spectrum(signal, fs) Computes maximum power spectrum density of the signal.
* mean_abs_deviation(signal) Computes mean absolute deviation of the signal.
* mean_abs_diff(signal) Computes mean absolute differences of the signal.
* mean_diff(signal) Computes mean of differences of the signal.
* median_abs_deviation(signal) Computes median absolute deviation of the signal.
* median_abs_diff(signal) Computes median absolute differences of the signal.
* median_diff(signal) Computes median of differences of the signal.
* median_frequency(signal, fs) Computes median frequency of the signal.
* mfcc(signal, fs[, pre_emphasis, nfft, …]) Computes the MEL cepstral coefficients.
* negative_turning(signal) Computes number of negative turning points of the signal.
* neighbourhood_peaks(signal[, n]) Computes the number of peaks from a defined neighbourhood of the signal.
* pk_pk_distance(signal) Computes the peak to peak distance.
* positive_turning(signal) Computes number of positive turning points of the signal.
* power_bandwidth(signal, fs) Computes power spectrum density bandwidth of the signal.
* rms(signal) Computes root mean square of the signal.
* skewness(signal) Computes skewness of the signal.
* slope(signal) Computes the slope of the signal.
* spectral_centroid(signal, fs) Barycenter of the spectrum.
* spectral_decrease(signal, fs) Represents the amount of decreasing of the spectra amplitude.
* spectral_distance(signal, fs) Computes the signal spectral distance.
* spectral_entropy(signal, fs) Computes the spectral entropy of the signal based on Fourier transform.
* spectral_kurtosis(signal, fs) Measures the flatness of a distribution around its mean value.
* spectral_positive_turning(signal, fs) Computes number of positive turning points of the fft magnitude signal.
* spectral_roll_off(signal, fs) Computes the spectral roll-off of the signal.
* spectral_roll_on(signal, fs) Computes the spectral roll-on of the signal.
* spectral_skewness(signal, fs) Measures the asymmetry of a distribution around its mean value.
* spectral_slope(signal, fs) Computes the spectral slope.
* spectral_spread(signal, fs) Measures the spread of the spectrum around its mean value.
* spectral_variation(signal, fs) Computes the amount of variation of the spectrum along time.
* sum_abs_diff(signal) Computes sum of absolute differences of the signal.
* total_energy(signal, fs) Computes the total energy of the signal.
* wavelet_abs_mean(signal[, function, widths]) Computes CWT absolute mean value of each wavelet scale.
* wavelet_energy(signal[, function, widths]) Computes CWT energy of each wavelet scale.
* wavelet_entropy(signal[, function, widths]) Computes CWT entropy of the signal.
* wavelet_std(signal[, function, widths]) Computes CWT std value of each wavelet scale.
* wavelet_var(signal[, function, widths]) Computes CWT variance value of each wavelet scale.
* zero_cross(signal) Computes Zero-crossing rate of the signal.
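Two of the listed features are easy to compute by hand; the sketch below uses NumPy and assumes the conventional definitions (sum of squares for absolute energy, sign changes for zero crossings), which is how such features are usually defined.

```python
import numpy as np

rng = np.random.default_rng(6)
signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)

abs_energy = np.sum(signal ** 2)                          # sum of squared sample values
zero_cross = int(np.sum(np.diff(np.sign(signal)) != 0))   # number of sign changes
print(abs_energy, zero_cross)
```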
5,319 | Clarification on interpreting confidence intervals? | I think the fundamental problem is that frequentist statistics can only assign a probability to something that can have a long run frequency. Whether the true value of a parameter lies in a particular interval or not doesn't have a long run frequency, because we can only perform the experiment once, so you can't assign a frequentist probability to it. The problem arises from the definition of a probability. If you change the definition of a probability to a Bayesian one, then the problem instantly disappears as you are no longer tied to discussion of long run frequencies.
See my (rather tongue-in-cheek) answer to a related question here:
"A Frequentist is someone that believes probabilities represent long run frequencies with which events occur; if needs be, he will invent a fictitious population from which your particular situation could be considered a random sample so that he can meaningfully talk about long run frequencies. If you ask him a question about a particular situation, he will not give a direct answer, but instead make a statement about this (possibly imaginary) population."
In the case of a confidence interval, the question we normally would like to ask (unless we have a problem in quality control for example) is "given this sample of data, return the smallest interval that contains the true value of the parameter with probability X". However a frequentist can't do this as the experiment is only performed once and so there are no long run frequencies that can be used to assign a probability. So instead the frequentist has to invent a population of experiments (that you didn't perform) from which the experiment you did perform can be considered a random sample. The frequentist then gives you an indirect answer about that fictitious population of experiments, rather than a direct answer to the question you really wanted to ask about a particular experiment.
Essentially it is a problem of language: the frequentist definition of a probability simply doesn't allow discussion of the probability of the true value of a parameter lying in a particular interval. That doesn't mean frequentist statistics are bad, or not useful, but it is important to know the limitations.
Regarding the major update
I am not sure we can say that "Before we calculate a 95% confidence interval, there is a 95% probability that the interval we calculate will cover the true parameter" within a frequentist framework. There is an implicit inference here that the long run frequency with which the true value of the parameter lies in confidence intervals constructed by some particular method is also the probability that the true value of the parameter will lie in the confidence interval for the particular sample of data we are going to use. This is a perfectly reasonable inference, but it is a Bayesian inference, not a frequentist one, as the probability that the true value of the parameter lies in the confidence interval that we construct for a particular sample of data has no long run frequency, as we only have one sample of data. This is exactly the danger of frequentist statistics: common-sense reasoning about probability is generally Bayesian, in that it is about the degree of plausibility of a proposition.
We can however "make some sort of non-frequentist argument that we're 95% sure the true parameter will lie in [a,b]", that is exactly what a Bayesian credible interval is, and for many problems the Bayesian credible interval exactly coincides with the frequentist confidence interval.
"I don't want to make this a debate about the philosophy of probability", sadly this is unavoidable, the reason you can't assign a frequentist probability to whether the true value of the statistic lies in the confidence interval is a direct consequence of the frequentist philosophy of probability. Frequentists can only assign probabilities to things that can have long run frequencies, as that is how frequentists define probability in their philosophy. That doesn't make frequentist philosophy wrong, but it is important to understand the bounds imposed by the definition of a probability.
"Before I've entered the password and seen the interval (but after the computer has already calculated it), what's the probability that the interval will contain the true parameter? It's 95%, and this part is not up for debate:" This is incorrect, or at least in making such a statement, you have departed from the framework of frequentist statistics and have made a Bayesian inference involving a degree of plausibility in the truth of a statement, rather than a long run frequency. However, as I have said earlier, it is a perfectly reasonable and natural inference.
Nothing has changed before or after entering the password, because neither event can be assigned a frequentist probability. Frequentist statistics can be rather counter-intuitive, as we often want to ask questions about degrees of plausibility of statements regarding particular events, but this lies outside the remit of frequentist statistics, and this is the origin of most misinterpretations of frequentist procedures.
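The long-run frequency reading of a confidence interval can be checked directly by simulation. The sketch below (NumPy/SciPy; the true mean, noise scale and sample size are arbitrary) repeats the experiment many times and counts how often the t-interval covers the fixed true parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_mu, n, n_experiments = 10.0, 30, 10_000

covered = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=true_mu, scale=2.0, size=n)
    half_width = stats.t.ppf(0.975, df=n - 1) * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half_width <= true_mu <= sample.mean() + half_width)

print(covered / n_experiments)  # close to 0.95; any single realised interval either covers or it doesn't
```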
5,320 | Clarification on interpreting confidence intervals? | Major update, major new answer. Let me try to clearly address this point, because it's where the problem lies:
"If you argue that "after seeing the interval, the notion of probability no longer makes sense", then fine, let's work in an interpretation of probability in which it does make sense."
The rules of probability don't change but your model for the universe does. Are you willing to quantify your prior beliefs about a parameter using a probability distribution? Is updating that probability distribution after seeing the data a reasonable thing to do? If you think so then you can make statements like $P(\theta\in [L(X), U(X)]| X=x)$. My prior distribution can represent my uncertainty about the true state of nature, not just randomness as it is commonly understood - that is, if I assign a prior distribution to the number of red balls in an urn that doesn't mean I think the number of red balls is random. It's fixed, but I'm uncertain about it.
Several people, including me, have said this, but if you aren't willing to call $\theta$ a random variable then the statement $P(\theta\in [L(X), U(X)]| X=x)$ isn't meaningful. If I'm a frequentist, I'm treating $\theta$ as a fixed quantity AND I can't ascribe a probability distribution to it. Why? Because it's fixed, and my interpretation of probability is in terms of long-run frequencies. The number of red balls in the urn doesn't ever change. $\theta$ is what $\theta$ is. If I pull out a few balls then I have a random sample. I can ask what would happen if I took a bunch of random samples - that is to say, I can talk about $P(\theta\in [L(X), U(X)])$ because the interval depends on the sample, which is (wait for it!) random.
But you don't want that. You want $P(\theta\in [L(X), U(X)]| X=x)$ - what's the probability that this interval I constructed with my observed (and now fixed) sample contains the parameter. However, once you've conditioned on $X=x$ then to me, a frequentist, there is nothing random left and the statement $P(\theta\in [L(X), U(X)]| X=x)$ doesn't make sense in any meaningful way.
The only principled way (IMO) to make a statement about $P(\theta\in [L(X), U(X)]| X=x)$ is to quantify our uncertainty about a parameter with a (prior) probability distribution and update that distribution with new information via Bayes Theorem. Every other approach I have seen is a lackluster approximation to Bayes. You certainly can't do it from a frequentist perspective.
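For concreteness, here is a minimal sketch of that kind of Bayesian statement under an assumed Beta-Binomial model (the prior, the data, and every name below are illustrative choices of mine, not anything specified in this answer):

```python
# Hypothetical Beta-Binomial illustration: theta ~ Beta(1, 1) a priori,
# and k successes are observed in n Bernoulli(theta) trials.
from scipy import stats

n, k = 50, 18
posterior = stats.beta(1 + k, 1 + (n - k))   # conjugate update via Bayes' theorem

# A 95% credible interval: P(theta in [lo, hi] | data) = 0.95 by construction.
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)

# The posterior also assigns a probability to theta lying in any fixed interval.
p_in = posterior.cdf(0.5) - posterior.cdf(0.3)
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
print(f"P(0.3 <= theta <= 0.5 | data) = {p_in:.3f}")
```

Once a prior is on the table, an interval produced by any procedure can be assessed this way, which is exactly the sense in which $P(\theta\in [L(X), U(X)]| X=x)$ becomes meaningful.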
That isn't to say that you can't evaluate traditional frequentist procedures from a Bayesian perspective (often confidence intervals are just credible intervals under uniform priors, for example) or that evaluating Bayesian estimators/credible intervals from a frequentist perspective isn't valuable (I think it can be). It isn't to say that classical/frequentist statistics is useless, because it isn't. It is what it is, and we shouldn't try to make it more.
Do you think it's reasonable to give a parameter a prior distribution to represent your beliefs about the universe? It sounds like it from your comments that you do; in my experience most people would agree (that's the little half-joke I made in my comment to @G. Jay Kerns's answer). If so, the Bayesian paradigm provides a logical, coherent way to make statements about $P(\theta\in [L(X), U(X)]| X=x)$. The frequentist approach simply doesn't. | Clarification on interpreting confidence intervals? | Major update, major new answer. Let me try to clearly address this point, because it's where the problem lies:
"If you argue that "after seeing the interval, the notion of probability no longer makes | Clarification on interpreting confidence intervals?
Major update, major new answer. Let me try to clearly address this point, because it's where the problem lies:
"If you argue that "after seeing the interval, the notion of probability no longer makes sense", then fine, let's work in an interpretation of probability in which it does make sense."
The rules of probability don't change but your model for the universe does. Are you willing to quantify your prior beliefs about a parameter using a probability distribution? Is updating that probability distribution after seeing the data a reasonable thing to do? If you think so then you can make statements like $P(\theta\in [L(X), U(X)]| X=x)$. My prior distribution can represent my uncertainty about the true state of nature, not just randomness as it is commonly understood - that is, if I assign a prior distribution to the number of red balls in an urn that doesn't mean I think the number of red balls is random. It's fixed, but I'm uncertain about it.
Several people, including me, have said this, but if you aren't willing to call $\theta$ a random variable then the statement $P(\theta\in [L(X), U(X)]| X=x)$ isn't meaningful. If I'm a frequentist, I'm treating $\theta$ as a fixed quantity AND I can't ascribe a probability distribution to it. Why? Because it's fixed, and my interpretation of probability is in terms of long-run frequencies. The number of red balls in the urn doesn't ever change. $\theta$ is what $\theta$ is. If I pull out a few balls then I have a random sample. I can ask what would happen if I took a bunch of random samples - that is to say, I can talk about $P(\theta\in [L(X), U(X)])$ because the interval depends on the sample, which is (wait for it!) random.
But you don't want that. You want $P(\theta\in [L(X), U(X)]| X=x)$ - what's the probability that this interval I constructed with my observed (and now fixed) sample contains the parameter. However, once you've conditioned on $X=x$ then to me, a frequentist, there is nothing random left and the statement $P(\theta\in [L(X), U(X)]| X=x)$ doesn't make sense in any meaningful way.
The only principled way (IMO) to make a statement about $P(\theta\in [L(X), U(X)]| X=x)$ is to quantify our uncertainty about a parameter with a (prior) probability distribution and update that distribution with new information via Bayes Theorem. Every other approach I have seen is a lackluster approximation to Bayes. You certainly can't do it from a frequentist perspective.
That isn't to say that you can't evaluate traditional frequentist procedures from a Bayesian perspective (often confidence intervals are just credible intervals under uniform priors, for example) or that evaluating Bayesian estimators/credible intervals from a frequentist perspective isn't valuable (I think it can be). It isn't to say that classical/frequentist statistics is useless, because it isn't. It is what it is, and we shouldn't try to make it more.
Do you think it's reasonable to give a parameter a prior distribution to represent your beliefs about the universe? It sounds like it from your comments that you do; in my experience most people would agree (that's the little half-joke I made in my comment to @G. Jay Kerns's answer). If so, the Bayesian paradigm provides a logical, coherent way to make statements about $P(\theta\in [L(X), U(X)]| X=x)$. The frequentist approach simply doesn't. | Clarification on interpreting confidence intervals?
Major update, major new answer. Let me try to clearly address this point, because it's where the problem lies:
"If you argue that "after seeing the interval, the notion of probability no longer makes |
5,321 | Clarification on interpreting confidence intervals? | OK, now you're talking! I've voted to delete my previous answer because it doesn't make sense with this major-updated question.
In this new, updated question, with a computer that calculates 95% confidence intervals, under the orthodox frequentist interpretation, here are the answers to your questions:
No.
No.
Once the interval is observed, it is not random any more, and does not change. (Maybe the interval was $[1,3]$.) But $\theta$ doesn't change, either, and has never changed. (Maybe it is $\theta = 7$.) The probability changes from 95% to 0% because 95% of the intervals the computer calculates cover 7, but 100% of the intervals $[1,3]$ do NOT cover 7.
(By the way, in the real world, the experimenter never knows that $\theta = 7$, which means the experimenter can never know whether the true probability that $[1,3]$ covers $\theta$ is zero or one. (S)he can only say that it must be one or the other.) In addition, the experimenter can say that 95% of the computer's intervals cover $\theta$, but we knew that already.
The spirit of your question keeps hinting back to the observer's knowledge, and how that relates to where $\theta$ lies. That (presumably) is why you were talking about the password, about the computer calculating the interval without your seeing it yet, etc. I've seen in your comments to answers that it seems unsatisfactory/unseemly to be obliged to commit to 0 or 1, after all, why couldn't we believe it is 87%, or $15/16$, or even 99%??? But that is exactly the power - and simultaneously the Achilles' heel - of the frequentist framework: the subjective knowledge/belief of the observer is irrelevant. All that matters is a long-run relative frequency. Nothing more, nothing less.
As a final BTW: if you change your interpretation of probability (which you have intentionally elected not to do for this question), then the new answers are:
Yes.
Yes.
The probability changes because probability = subjective knowledge, or degree of belief, and the knowledge of the observer changed. We represent knowledge with prior/posterior distributions, and as new information becomes available, the former morphs into the latter (via Bayes' Rule).
(But for full disclosure, the setup you describe doesn't match the subjective interpretation very well. For instance, we usually have a 95% prior credible interval before even turning on the computer, then we fire it up and employ the computer to give us a 95% posterior credible interval which is usually considerably skinnier than the prior one.) | Clarification on interpreting confidence intervals? | OK, now you're talking! I've voted to delete my previous answer because it doesn't make sense with this major-updated question.
In this new, updated question, with a computer that calculates 95% conf | Clarification on interpreting confidence intervals?
OK, now you're talking! I've voted to delete my previous answer because it doesn't make sense with this major-updated question.
In this new, updated question, with a computer that calculates 95% confidence intervals, under the orthodox frequentist interpretation, here are the answers to your questions:
No.
No.
Once the interval is observed, it is not random any more, and does not change. (Maybe the interval was $[1,3]$.) But $\theta$ doesn't change, either, and has never changed. (Maybe it is $\theta = 7$.) The probability changes from 95% to 0% because 95% of the intervals the computer calculates cover 7, but 100% of the intervals $[1,3]$ do NOT cover 7.
(By the way, in the real world, the experimenter never knows that $\theta = 7$, which means the experimenter can never know whether the true probability that $[1,3]$ covers $\theta$ is zero or one. (S)he can only say that it must be one or the other.) In addition, the experimenter can say that 95% of the computer's intervals cover $\theta$, but we knew that already.
The spirit of your question keeps hinting back to the observer's knowledge, and how that relates to where $\theta$ lies. That (presumably) is why you were talking about the password, about the computer calculating the interval without your seeing it yet, etc. I've seen in your comments to answers that it seems unsatisfactory/unseemly to be obliged to commit to 0 or 1, after all, why couldn't we believe it is 87%, or $15/16$, or even 99%??? But that is exactly the power - and simultaneously the Achilles' heel - of the frequentist framework: the subjective knowledge/belief of the observer is irrelevant. All that matters is a long-run relative frequency. Nothing more, nothing less.
As a final BTW: if you change your interpretation of probability (which you have intentionally elected not to do for this question), then the new answers are:
Yes.
Yes.
The probability changes because probability = subjective knowledge, or degree of belief, and the knowledge of the observer changed. We represent knowledge with prior/posterior distributions, and as new information becomes available, the former morphs into the latter (via Bayes' Rule).
(But for full disclosure, the setup you describe doesn't match the subjective interpretation very well. For instance, we usually have a 95% prior credible interval before even turning on the computer, then we fire it up and employ the computer to give us a 95% posterior credible interval which is usually considerably skinnier than the prior one.) | Clarification on interpreting confidence intervals?
OK, now you're talking! I've voted to delete my previous answer because it doesn't make sense with this major-updated question.
In this new, updated question, with a computer that calculates 95% conf |
5,322 | Clarification on interpreting confidence intervals? | I'll throw in my two cents (maybe redigesting some of the former answers). To a frequentist, the confidence interval itself is in essence a two-dimensional random variable: if you would redo the experiment a gazillion times, the confidence interval you would estimate (i.e.: calculate from your newly found data each time) would differ each time. As such, the two boundaries of the interval are random variables.
A 95% CI, then, means nothing more than the assurance (given all your assumptions leading to this CI are correct) that this set of random variables will contain the true value (a very frequentist expression) in 95% of the cases.
You can easily calculate the confidence interval for the mean of 100 draws from a standard normal distribution. Then, if you draw 100 values from that standard normal distribution 10000 times, and each time calculate the confidence interval for the mean, you will indeed see that 0 is in there about 9500 times.
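A quick simulation along these lines (a sketch: treating the variance as known and using a z-interval is my own simplification, and all names are illustrative):

```python
# Draw 10000 samples of size 100 from N(0, 1) and count how many 95% intervals
# for the mean contain the true value 0.
import numpy as np

rng = np.random.default_rng(0)
n_rep, n, z = 10_000, 100, 1.96

covered = 0
for _ in range(n_rep):
    x = rng.normal(0.0, 1.0, size=n)
    half_width = z / np.sqrt(n)          # sigma = 1 is treated as known, for simplicity
    covered += (x.mean() - half_width <= 0.0 <= x.mean() + half_width)

print(f"{covered} of {n_rep} intervals contain the true mean")   # roughly 9500
```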
The fact that you have created a confidence interval just once (from your actual data) does indeed reduce the probability of the true value being in that interval to either 0 or 1, but it doesn't change the probability of the confidence interval as a random variable to contain the true value.
So, bottom line: the probability of any (i.e. on average) 95% confidence interval containing the true value (95%) doesn't change, and neither does the probability of a particular interval (CI or whatever) for containing the true value (0 or 1). The probability of the interval the computer knows but you don't is actually 0 or 1 (because it is a particular interval), but since you don't know it (and, in a frequentist fashion, are unable to recalculate this same interval infinitely many times again from the same data), all you have to go for is the probability of any interval. | Clarification on interpreting confidence intervals? | I'll throw in my two cents (maybe redigesting some of the former answers). To a frequentist, the confidence interval itself is in essence a two-dimensional random variable: if you would redo the exper | Clarification on interpreting confidence intervals?
I'll throw in my two cents (maybe redigesting some of the former answers). To a frequentist, the confidence interval itself is in essence a two-dimensional random variable: if you would redo the experiment a gazillion times, the confidence interval you would estimate (i.e.: calculate from your newly found data each time) would differ each time. As such, the two boundaries of the interval are random variables.
A 95% CI, then, means nothing more than the assurance (given all your assumptions leading to this CI are correct) that this set of random variables will contain the true value (a very frequentist expression) in 95% of the cases.
You can easily calculate the confidence interval for the mean of 100 draws from a standard normal distribution. Then, if you draw 100 values from that standard normal distribution 10000 times, and each time calculate the confidence interval for the mean, you will indeed see that 0 is in there about 9500 times.
The fact that you have created a confidence interval just once (from your actual data) does indeed reduce the probability of the true value being in that interval to either 0 or 1, but it doesn't change the probability of the confidence interval as a random variable to contain the true value.
So, bottom line: the probability of any (i.e. on average) 95% confidence interval containing the true value (95%) doesn't change, and neither does the probability of a particular interval (CI or whatever) for containing the true value (0 or 1). The probability of the interval the computer knows but you don't is actually 0 or 1 (because it is a particular interval), but since you don't know it (and, in a frequentist fashion, are unable to recalculate this same interval infinitely many times again from the same data), all you have to go for is the probability of any interval. | Clarification on interpreting confidence intervals?
I'll throw in my two cents (maybe redigesting some of the former answers). To a frequentist, the confidence interval itself is in essence a two-dimensional random variable: if you would redo the exper |
5,323 | Clarification on interpreting confidence intervals? | I don't think a frequentist can say there is any probability of the true (population) value of a statistic lying in the confidence interval for a particular sample. It either is, or it isn't, but there is no long run frequency for a particular event, just the population of events that you would get by repeated performance of a statistical procedure. This is why we have to stick with statements such as "95% of confidence intervals so constructed will contain the true value of the statistic", but not "there is a p% probability that the true value lies in the confidence interval computed for this particular sample". This is true for any value of p; it simply isn't possible within the frequentist definition of what a probability actually is. A Bayesian can make such a statement using a credible interval though. | Clarification on interpreting confidence intervals? | I don't think a frequentist can say there is any probability of the true (population) value of a statistic lying in the confidence interval for a particular sample. It either is, or it isn't, but the
I don't think a frequentist can say there is any probability of the true (population) value of a statistic lying in the confidence interval for a particular sample. It either is, or it isn't, but there is no long run frequency for a particular event, just the population of events that you would get by repeated performance of a statistical procedure. This is why we have to stick with statements such as "95% of confidence intervals so constructed will contain the true value of the statistic", but not "there is a p% probability that the true value lies in the confidence interval computed for this particular sample". This is true for any value of p; it simply isn't possible within the frequentist definition of what a probability actually is. A Bayesian can make such a statement using a credible interval though. | Clarification on interpreting confidence intervals?
I don't think a frequentist can say there is any probability of the true (population) value of a statistic lying in the confidence interval for a particular sample. It either is, or it isn't, but the |
5,324 | Clarification on interpreting confidence intervals? | The reason that the confidence interval doesn't specify "the probability that the true parameter lies in the interval" is because once the interval is specified, the parameter either lies in it or it doesn't. However, for a 95% confidence interval for example, you have a 95% chance of creating a confidence interval that does contain the value. This is a pretty difficult concept to grasp, so I may not be articulating it well. See http://frank.itlab.us/datamodel/node39.html for further clarification. | Clarification on interpreting confidence intervals? | The reason that the confidence interval doesn't specify "the probability that the true parameter lies in the interval" is because once the interval is specified, the parameter either lies in it or it
The reason that the confidence interval doesn't specify "the probability that the true parameter lies in the interval" is because once the interval is specified, the parameter either lies in it or it doesn't. However, for a 95% confidence interval for example, you have a 95% chance of creating a confidence interval that does contain the value. This is a pretty difficult concept to grasp, so I may not be articulating it well. See http://frank.itlab.us/datamodel/node39.html for further clarification. | Clarification on interpreting confidence intervals?
The reason that the confidence interval doesn't specify "the probability that the true parameter lies in the interval" is because once the interval is specified, the parameter either lies in it or it
5,325 | Clarification on interpreting confidence intervals? | The way you pose the problem is a little muddled. Take this statement: Let $E$ be the event that the true parameter falls in the interval $[a,b]$. This statement is meaningless from a frequentist perspective; the parameter is the parameter and it doesn't fall anywhere, it just is. P(E) is meaningless, P(E|C) is meaningless and this is why your example falls apart. The problem isn't conditioning on a set of measure zero either; the problem is that you're trying to make probability statements about something that isn't a random variable.
A frequentist would say something like: Let $\tilde E$ be the event that the interval $(L(X), U(X))$ contains the true parameter. This is something a frequentist can assign a probability to.
Edit: @G. Jay Kerns makes the argument better than me, and types faster, so probably just move along :) | Clarification on interpreting confidence intervals? | The way you pose the problem is a little muddled. Take this statement: Let $E$ be the event that the true parameter falls in the interval $[a,b]$. This statement is meaningless from a frequentist pers | Clarification on interpreting confidence intervals?
The way you pose the problem is a little muddled. Take this statement: Let $E$ be the event that the true parameter falls in the interval $[a,b]$. This statement is meaningless from a frequentist perspective; the parameter is the parameter and it doesn't fall anywhere, it just is. P(E) is meaningless, P(E|C) is meaningless and this is why your example falls apart. The problem isn't conditioning on a set of measure zero either; the problem is that you're trying to make probability statements about something that isn't a random variable.
A frequentist would say something like: Let $\tilde E$ be the event that the interval $(L(X), U(X))$ contains the true parameter. This is something a frequentist can assign a probability to.
Edit: @G. Jay Kerns makes the argument better than me, and types faster, so probably just move along :) | Clarification on interpreting confidence intervals?
The way you pose the problem is a little muddled. Take this statement: Let $E$ be the event that the true parameter falls in the interval $[a,b]$. This statement is meaningless from a frequentist pers |
5,326 | Clarification on interpreting confidence intervals? | In frequentist statistics, the event $E$ is fixed -- the parameter either lies in $[a, b]$ or it doesn't. Thus, $E$ is independent of $C$ and $C'$ and so both $P(E|C) = P(E)$ and $P(E|C') = P(E)$.
(In your argument, you seem to think that $P(E|C) = 1$ and $P(E|C') = 0$, which is incorrect.) | Clarification on interpreting confidence intervals? | In frequentist statistics, the event $E$ is fixed -- the parameter either lies in $[a, b]$ or it doesn't. Thus, $E$ is independent of $C$ and $C'$ and so both $P(E|C) = P(E)$ and $P(E|C') = P(E)$.
(In | Clarification on interpreting confidence intervals?
In frequentist statistics, the event $E$ is fixed -- the parameter either lies in $[a, b]$ or it doesn't. Thus, $E$ is independent of $C$ and $C'$ and so both $P(E|C) = P(E)$ and $P(E|C') = P(E)$.
(In your argument, you seem to think that $P(E|C) = 1$ and $P(E|C') = 0$, which is incorrect.) | Clarification on interpreting confidence intervals?
In frequentist statistics, the event $E$ is fixed -- the parameter either lies in $[a, b]$ or it doesn't. Thus, $E$ is independent of $C$ and $C'$ and so both $P(E|C) = P(E)$ and $P(E|C') = P(E)$.
(In |
5,327 | Clarification on interpreting confidence intervals? | There are so many long explanations here that I don't have time to read them. But I think the answer to the basic question can be short and sweet. It is the difference between a probability that is unconditional on the data and one that is conditional on the data. The probability of 1-alpha before collecting the data is the probability that the well-defined procedure will include the parameter. After you have collected the data and know the specific interval that you have generated, the interval is fixed, and so, since the parameter is a constant, this conditional probability is either 0 or 1. But since we don't know the actual value of the parameter even after collecting the data, we don't know which value it is.
Extension of the post by Michael Chernick copied from comments:
There is a pathological exception to this which can be called perfect estimation. Suppose we have a first order autoregressive process given by X(n)=pX(n-1) + en. It is stationary so we know p is not 1 or -1 and is < 1 in absolute value.
Now the en are independent, identically distributed with a mixed distribution: there is a positive probability q that en=0, and with probability 1-q it has an absolutely continuous distribution (say that the density is nonzero in an interval bounded away from 0). Then collect data from the time series sequentially and for each successive pair of values estimate p by X(i)/X(i-1). Now when ei = 0 the ratio will equal p exactly.
Because q is greater than 0, eventually the ratio will repeat a value, and that repeated value has to be the exact value of the parameter p: if the ratio is not p, then the corresponding ei is not 0, and a value of ei/X(i-1) drawn from a continuous distribution repeats with probability 0, so such a ratio will not repeat.
So the sequential stopping rule is to sample until the ratio repeats exactly, then use the repeated value as the estimate of p. Since it is exactly p, any interval you construct that is centered at this estimate has probability 1 of including the true parameter. Although this is a pathological example that is not practical, there do exist stationary stochastic processes with the properties that we require for the error distribution. | Clarification on interpreting confidence intervals? | There are so many long explanations here that I don't have time to read them. But I think the answer to the basic question can be short and sweet. It is the difference between a probability that is
There are so many long explanations here that I don't have time to read them. But I think the answer to the basic question can be short and sweet. It is the difference between a probability that is unconditional on the data and one that is conditional on the data. The probability of 1-alpha before collecting the data is the probability that the well-defined procedure will include the parameter. After you have collected the data and know the specific interval that you have generated, the interval is fixed, and so, since the parameter is a constant, this conditional probability is either 0 or 1. But since we don't know the actual value of the parameter even after collecting the data, we don't know which value it is.
Extension of the post by Michael Chernick copied from comments:
There is a pathological exception to this which can be called perfect estimation. Suppose we have a first order autoregressive process given by X(n)=pX(n-1) + en. It is stationary so we know p is not 1 or -1 and is < 1 in absolute value.
Now the en are independent, identically distributed with a mixed distribution: there is a positive probability q that en=0, and with probability 1-q it has an absolutely continuous distribution (say that the density is nonzero in an interval bounded away from 0). Then collect data from the time series sequentially and for each successive pair of values estimate p by X(i)/X(i-1). Now when ei = 0 the ratio will equal p exactly.
Because q is greater than 0, eventually the ratio will repeat a value, and that repeated value has to be the exact value of the parameter p: if the ratio is not p, then the corresponding ei is not 0, and a value of ei/X(i-1) drawn from a continuous distribution repeats with probability 0, so such a ratio will not repeat.
So the sequential stopping rule is to sample until the ratio repeats exactly, then use the repeated value as the estimate of p. Since it is exactly p, any interval you construct that is centered at this estimate has probability 1 of including the true parameter. Although this is a pathological example that is not practical, there do exist stationary stochastic processes with the properties that we require for the error distribution. | Clarification on interpreting confidence intervals?
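A toy sketch of this stopping rule is below. To keep the "a repeated ratio must be p" logic exact in finite precision, it uses rational arithmetic and random rationals in place of a truly continuous error; the constants (p, q, the error range) are illustrative assumptions of mine, not values from the answer.

```python
# AR(1) 'perfect estimation' sketch: X(n) = p*X(n-1) + e(n), where e = 0 with
# probability q and is otherwise drawn from a (discretized) Uniform(1, 2).
from fractions import Fraction
import random

random.seed(1)
p_true, q = Fraction(3, 5), 0.3

def draw_error():
    if random.random() < q:
        return Fraction(0)
    return Fraction(random.randrange(10**6, 2 * 10**6), 10**6)

seen = set()
x_prev = Fraction(1)                      # arbitrary nonzero starting value
while True:
    x = p_true * x_prev + draw_error()
    ratio = x / x_prev                    # equals p exactly whenever e == 0
    if ratio in seen:                     # a repeated ratio is reported as p
        print("estimate of p:", ratio)    # prints 3/5
        break
    seen.add(ratio)
    x_prev = x
```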
There are so many long explanations here that I don't have time to read them. But I think the answer to the basic question can be short and sweet. It is the difference between a probability that is |
5,328 | Clarification on interpreting confidence intervals? | If I say the probability the Knicks scored between xbar - 2sd(x) and xbar + 2sd(x) is about .95 in some given game in the past, that is a reasonable statement given some particular distributional assumption about the distribution of basketball scores. If I gather data about the scores given some sample of games and calculate that interval, the probability that they scored in that interval on some given day in the past is clearly zero or one, and you can google the game result to find out. The only notion of it maintaining non-zero or one probability to the frequentist comes from repeated sampling, and the realization of interval estimation of a particular sample is the magic point where either it happened or it didn't given the interval estimate of that sample. It isn't the point where you type in the password, it is the point where you decided to take a single sample that you lose the continuity of possible probabilities.
This is what Dikran argues above, and I have voted up his answer. The point when repeated samples are out of the consideration is the point in the frequentist paradigm where the non-discrete probability becomes unobtainable, not when you type in the password as in your example above, or when you google the result in my example of the Knicks game, but the point when your number of samples =1. | Clarification on interpreting confidence intervals? | If I say the probability the Knicks scored between xbar - 2sd(x) and xbar + 2sd(x) is about .95 in some given game in the past, that is a reasonable statement given some particular distributional ass | Clarification on interpreting confidence intervals?
If I say the probability the Knicks scored between xbar - 2sd(x) and xbar + 2sd(x) is about .95 in some given game in the past, that is a reasonable statement given some particular distributional assumption about the distribution of basketball scores. If I gather data about the scores given some sample of games and calculate that interval, the probability that they scored in that interval on some given day in the past is clearly zero or one, and you can google the game result to find out. The only notion of it maintaining non-zero or one probability to the frequentist comes from repeated sampling, and the realization of interval estimation of a particular sample is the magic point where either it happened or it didn't given the interval estimate of that sample. It isn't the point where you type in the password, it is the point where you decided to take a single sample that you lose the continuity of possible probabilities.
This is what Dikran argues above, and I have voted up his answer. The point when repeated samples are out of the consideration is the point in the frequentist paradigm where the non-discrete probability becomes unobtainable, not when you type in the password as in your example above, or when you google the result in my example of the Knicks game, but the point when your number of samples =1. | Clarification on interpreting confidence intervals?
If I say the probability the Knicks scored between xbar - 2sd(x) and xbar + 2sd(x) is about .95 in some given game in the past, that is a reasonable statement given some particular distributional ass |
5,329 | Clarification on interpreting confidence intervals? | Two observations about the many questions and responses that may help still.
Part of the confusion comes from glossing over some deeper math of probability theory, which, by the way, was not on a firm mathematical footing until about the 1940s. It gets into what constitutes sample spaces, probability spaces, etc.
First, you had stated that after a coin flip we know that there is 0% probability it came up tails if it came up heads. At that point it doesn't make sense to talk about probability; what happened happened, and we know it. Probability is about the unknown in the future, not the known in the present.
As a small corollary to that about what zero probability really means, consider this: we assume a fair coin has a probability of 0.5 of coming up heads, and 0.5 of coming up tails. This means it has a 100% chance of coming up either heads or tails, since those outcomes are MECE (mutually exclusive and completely exhaustive). It has a zero percent chance, however, of coming up heads and tails: Our notion of 'heads' and 'tails' is that they are mutually exclusive. Thus, this has zero percent chance because it is impossible in the way we think of (or define) 'tossing a coin'. And it is impossible before and after the toss.
As a further corollary to this, anything that is not, by definition, impossible is possible. In the real world, I hate when lawyers ask "isn't it possible you signed this document and forgot about it?" because the answer is always 'yes' by the nature of the question. For that matter, the answer is also 'yes' to the question "isn't it possible you were transported through dematerialization to planet Remulak 4 and forced to do something then transported back with no memory of it?". The likelihood may be very low - but what is not impossible is possible. In our regular concept of probability, when we talk about flipping a coin, it may come up heads; it may come up tails; and it may even stand on-end or (somehow, such as if we were snuck into a spacecraft while drugged and taken into orbit) float in the air forever. But, before or after the toss, it has zero probability of coming up heads and tails at the same time: they are mutually exclusive outcomes in the sample space of the experiment (look up 'probability sample spaces' and 'sigma-algebras').
Second, on all this Bayesian/Frequentist philosophy on confidence intervals, it is true it relates to frequencies if one is acting as a frequentist. So, when we say the confidence interval for a sampled and estimated mean is 95%, we are not saying that we are 95% certain the 'real' value lies between the bounds. We are saying that, if we could repeat this experiment over-and-over, 95% of the time we would find that the mean was, indeed, between the bounds. When we do it with one run, however, we are taking a mental shortcut and saying 'we are 95% certain we are right'.
Finally, don't forget what the standard setup is on a hypothesis test based on an experiment. If we want to know if a plant growth hormone makes plants grow faster, maybe we first determine the average size of a tomato after 6 months of growth. Then we repeat, but with the hormone, and get the average size. Our null hypothesis is 'the hormone didn't work', and we test that. But, if the tested plants are, on average, larger, with 99% confidence, that means 'there will always be random variation due to the plants and how accurately we weigh, but the amount of randomness that would explain this would occur less than one time in a hundred'. | Clarification on interpreting confidence intervals? | Two observations about the many questions and responses that may help still.
Part of the confusion comes from glossing over some deeper math of probability theory, which, by the way, was not on a fir | Clarification on interpreting confidence intervals?
Two observations about the many questions and responses that may help still.
Part of the confusion comes from glossing over some deeper math of probability theory, which, by the way, was not on a firm mathematical footing until about the 1940s. It gets into what constitutes sample spaces, probability spaces, etc.
First, you had stated that after a coin flip we know that there is 0% probability it came up tails if it came up heads. At that point it doesn't make sense to talk about probability; what happened happened, and we know it. Probability is about the unknown in the future, not the known in the present.
As a small corollary to that about what zero probability really means, consider this: we assume a fair coin has a probability of 0.5 of coming up heads, and 0.5 of coming up tails. This means it has a 100% chance of coming up either heads or tails, since those outcomes are MECE (mutually exclusive and completely exhaustive). It has a zero percent chance, however, of coming up heads and tails: Our notion of 'heads' and 'tails' is that they are mutually exclusive. Thus, this has zero percent chance because it is impossible in the way we think of (or define) 'tossing a coin'. And it is impossible before and after the toss.
As a further corollary to this, anything that is not, by definition, impossible is possible. In the real world, I hate when lawyers ask "isn't it possible you signed this document and forgot about it?" because the answer is always 'yes' by the nature of the question. For that matter, the answer is also 'yes' to the question "isn't it possible you were transported through dematerialization to planet Remulak 4 and forced to do something then transported back with no memory of it?". The likelihood may be very low - but what is not impossible is possible. In our regular concept of probability, when we talk about flipping a coin, it may come up heads; it may come up tails; and it may even stand on-end or (somehow, such as if we were snuck into a spacecraft while drugged and taken into orbit) float in the air forever. But, before or after the toss, it has zero probability of coming up heads and tails at the same time: they are mutually exclusive outcomes in the sample space of the experiment (look up 'probability sample spaces' and 'sigma-algebras').
Second, on all this Bayesian/Frequentist philosophy on confidence intervals, it is true it relates to frequencies if one is acting as a frequentist. So, when we say the confidence interval for a sampled and estimated mean is 95%, we are not saying that we are 95% certain the 'real' value lies between the bounds. We are saying that, if we could repeat this experiment over-and-over, 95% of the time we would find that the mean was, indeed, between the bounds. When we do it with one run, however, we are taking a mental shortcut and saying 'we are 95% certain we are right'.
Finally, don't forget what the standard setup is on a hypothesis test based on an experiment. If we want to know if a plant growth hormone makes plants grow faster, maybe we first determine the average size of a tomato after 6 months of growth. Then we repeat, but with the hormone, and get the average size. Our null hypothesis is 'the hormone didn't work', and we test that. But, if the tested plants are, on average, larger, with 99% confidence, that means 'there will always be random variation due to the plants and how accurately we weigh, but the amount of randomness that would explain this would occur less than one time in a hundred'. | Clarification on interpreting confidence intervals?
Two observations about the many questions and responses that may help still.
Part of the confusion comes from glossing over some deeper math of probability theory, which, by the way, was not on a fir |
5,330 | Clarification on interpreting confidence intervals? | The issue can be characterized as a confusion of prior and posterior probability
or maybe as the dissatisfaction of not knowing the joint distribution of certain random variables.
Conditioning
As an introductory example,
we consider a model for the experiment of drawing, without replacement,
two balls from an urn with $n$ balls numbered from $1$ to $n$.
The typical way to model this experiment is with two random variables $X$ and $Y$,
where $X$ is the number of the first ball and $Y$ is the number of the second ball,
and with the joint distribution $P(X=x \land Y=y) = 1/(n(n-1))$
for all $x,y \in N := \{1,\dots,n\}$ with $x \neq y$.
This way, all possible outcomes have the same, positive probability,
and the impossible outcomes (e.g., drawing the same ball twice) have zero probability.
It follows $P(X=x)=1/n$ and $P(Y=x)=1/n$ for all $x \in N$.
Let the experiment be conducted and the second ball revealed to us,
while the first ball is kept secret.
Denote $t$ the number of the second ball.
Then, still, $P(X=x)=1/n$ for all $x \in N$.
However, for each $x \in N$, our degree of belief that the event $X=x$ happened,
should now be $P(X=x \vert Y=t) = P(X=x \land Y=t) / P(Y=t)$,
which in case of $x \neq t$ is $1/(n-1)$,
and in case of $x = t$, it is $0$.
This is the probability of $X=x$ conditioned on the information that $Y=t$ happened,
also called the posterior probability of $X=x$,
meaning, the updated probability of $X=x$ after we obtained the evidence that $Y=t$ happened.
It is still $P(X=x)=P(Y=x)=1/n$ for all $x \in N$,
those are the prior probabilities.
Not conditioning on evidence means ignoring evidence.
However, we can only condition on what is expressible in the probabilistic model.
In our example with the two balls from the urn,
we cannot condition on the weather or on how we feel today.
In case that we have reason to believe that such is evidence relevant to the experiment,
we must change our model first in order to allow us to express this evidence as formal events.
Let $C$ be the indicator random variable that says if the first ball
has a lower number than the second ball, that is, $C = 1 \Longleftrightarrow X < Y$.
Then $P(C=1) = 1/2$.
Let again $t$ be the number of the second ball,
which is revealed to us, but the number of the first ball is secret.
Then it is easy to see that $P(C=1 \vert Y=t) = (t-1) / (n-1)$.
In particular $P(C=1 \vert Y=1) = 0$,
which in our model means that $C=1$ has certainly not happened.
Moreover, $P(C=1 \vert Y=n) = 1$,
which in our model means that $C=1$ has certainly happened.
It is still $P(C=1) = 1/2$.
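These conditional probabilities can be verified by brute force; a small sketch (the particular n and t are arbitrary choices of mine, and the code simply enumerates the model stated above):

```python
# Enumerate the equally likely ordered pairs (x, y) of the urn model and
# condition on Y = t by simple counting.
from itertools import permutations
from fractions import Fraction

n, t = 5, 3
outcomes = list(permutations(range(1, n + 1), 2))
given_t = [xy for xy in outcomes if xy[1] == t]

for x in range(1, n + 1):
    p = Fraction(sum(1 for xy in given_t if xy[0] == x), len(given_t))
    print(f"P(X={x} | Y={t}) = {p}")      # 1/(n-1) for x != t, and 0 for x = t

p_c = Fraction(sum(1 for xy in given_t if xy[0] < xy[1]), len(given_t))
print(f"P(C=1 | Y={t}) = {p_c}")          # equals (t-1)/(n-1)
```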
Confidence Interval
Let $X = (X_1, \dots, X_n)$ be a vector of $n$ i.i.d random variables.
Let $(l,u)$ be a confidence interval estimator (CIE) with confidence level $\gamma$
for a real parameter of the distribution of the random variables in $X$,
that is, $l$ and $u$ are real-valued functions with domain $\mathbb{R}^n$,
such that if $\theta \in \mathbb{R}$ is the true value of the parameter,
then $P(l(X) \leq \theta \leq u(X)) \geq \gamma$.
Let $C$ be the indicator random variable that says if $(l,u)$ determined the correct parameter,
that is, $C = 1 \Longleftrightarrow l(X) \leq \theta \leq u(X)$.
Then $P(C=1) \geq \gamma$.
Let us collect data so that we have values $x = (x_1,\dots,x_n) \in \mathbb{R}^n$,
where $x_i$ is the realization of $X_i$ for all $i$.
Then our degree of belief that the event $C=1$ happened should be $\delta := P(C=1 \vert X = x)$.
In general, we cannot compute this conditional probability, but we know that it is either $0$ or $1$,
since $(C = 1 \land X = x) \Longleftrightarrow ((l(x) \leq \theta \leq u(x)) \land X = x)$.
If $l(x) \leq \theta \leq u(x)$ is false, then the latter statement is false, and thus $\delta=0$.
If $l(x) \leq \theta \leq u(x)$ is true, then the latter statement is equivalent to $X=x$, and thus $\delta=1$.
If we only know the values $l(x)$ and $u(x)$ and not the data $x$,
we can still argue in a similar way that $\delta \in \{0,1\}$.
It is still $P(C=1) \geq \gamma$.
If, for our degree of belief that $C=1$ happened,
we like this prior probability more,
then we must ignore $x$, and this also means ignoring the confidence interval $[l(x),u(x)]$.
Saying that $[l(x),u(x)]$ contained $\theta$ with probability at least $\gamma$,
would mean acknowledging this evidence and at the same time ignoring it.
Learning More, Knowing Less
What makes this situation so difficult to grasp may be the fact
that we cannot compute the conditional probability $\delta$.
But this is not particular to the CIE situation,
rather it may occur whenever we have insufficient information about the joint distribution of random variables.
As an example, let $X$ and $Y$ be discrete random variables and let their marginal distributions be given,
that is, for each $x \in \mathbb{R}$, we know $P(X=x)$ and $P(Y=x)$.
This does not give us their joint distribution, that is,
we do not know $P(X=x \land Y=y)$ for any $x,y \in \mathbb{R}$.
Assume that a result of this experiment should be reported as the value of the random vector $(X,Y)$,
that is, results should be reported as pairs of real numbers.
Let the underlying experiment be conducted, and assume that we learn that $Y=7$ happened,
while the value for $X$ is still unknown to us.
This does not change $P(X=x)$ for any $x$.
However, it would be problematic to say that the result of the experiment was of the form $(x,7)$,
where $x \in \mathbb{R}$,
and that the probability for each particular real number $x$ for being the first component of this pair was $P(X=x)$.
It is problematic since in this way, we would acknowledge the evidence $Y=7$
and at the same time ignore it.
We acknowledge the evidence $Y=7$ by reporting the second component of the pair as being $7$.
We ignore it by using the prior probability $P(X=x)$, where in fact
our degree of belief for $X=x$ should now be
$P(X=x \vert Y=7) = P(X=x \land Y=7) / P(Y=7)$, which unfortunately we cannot compute.
It may be unsatisfactory that in a sense,
knowing more about $Y$ forces us to say less about $X$.
But to the best of my knowledge this is how things are. | Clarification on interpreting confidence intervals? | The issue can be characterized as a confusion of prior and posterior probability
or maybe as the dissatisfaction of not knowing the joint distribution of certain random variables.
Conditioning
As an i | Clarification on interpreting confidence intervals?
The issue can be characterized as a confusion of prior and posterior probability
or maybe as the dissatisfaction of not knowing the joint distribution of certain random variables.
Conditioning
As an introductory example,
we consider a model for the experiment of drawing, without replacement,
two balls from an urn with $n$ balls numbered from $1$ to $n$.
The typical way to model this experiment is with two random variables $X$ and $Y$,
where $X$ is the number of the first ball and $Y$ is the number of the second ball,
and with the joint distribution $P(X=x \land Y=y) = 1/(n(n-1))$
for all $x,y \in N := \{1,\dots,n\}$ with $x \neq y$.
This way, all possible outcomes have the same, positive probability,
and the impossible outcomes (e.g., drawing the same ball twice) have zero probability.
It follows $P(X=x)=1/n$ and $P(Y=x)=1/n$ for all $x \in N$.
Let the experiment be conducted and the second ball revealed to us,
while the first ball is kept secret.
Denote $t$ the number of the second ball.
Then, still, $P(X=x)=1/n$ for all $x \in N$.
However, for each $x \in N$, our degree of belief that the event $X=x$ happened,
should now be $P(X=x \vert Y=t) = P(X=x \land Y=t) / P(Y=t)$,
which in case of $x \neq t$ is $1/(n-1)$,
and in case of $x = t$, it is $0$.
This is the probability of $X=x$ conditioned on the information that $Y=t$ happened,
also called the posterior probability of $X=x$,
meaning, the updated probability of $X=x$ after we obtained the evidence that $Y=t$ happened.
It is still $P(X=x)=P(Y=x)=1/n$ for all $x \in N$,
those are the prior probabilities.
Not conditioning on evidence means ignoring evidence.
However, we can only condition on what is expressible in the probabilistic model.
In our example with the two balls from the urn,
we cannot condition on the weather or on how we feel today.
In case that we have reason to believe that such is evidence relevant to the experiment,
we must change our model first in order to allow us to express this evidence as formal events.
Let $C$ be the indicator random variable that says if the first ball
has a lower number than the second ball, that is, $C = 1 \Longleftrightarrow X < Y$.
Then $P(C=1) = 1/2$.
Let again $t$ be the number of the second ball,
which is revealed to us, but the number of the first ball is secret.
Then it is easy to see that $P(C=1 \vert Y=t) = (t-1) / (n-1)$.
In particular $P(C=1 \vert Y=1) = 0$,
which in our model means that $C=1$ has certainly not happened.
Moreover, $P(C=1 \vert Y=n) = 1$,
which in our model means that $C=1$ has certainly happened.
It is still $P(C=1) = 1/2$.
Confidence Interval
Let $X = (X_1, \dots, X_n)$ be a vector of $n$ i.i.d random variables.
Let $(l,u)$ be a confidence interval estimator (CIE) with confidence level $\gamma$
for a real parameter of the distribution of the random variables in $X$,
that is, $l$ and $u$ are real-valued functions with domain $\mathbb{R}^n$,
such that if $\theta \in \mathbb{R}$ is the true value of the parameter,
then $P(l(X) \leq \theta \leq u(X)) \geq \gamma$.
Let $C$ be the indicator random variable that says if $(l,u)$ determined the correct parameter,
that is, $C = 1 \Longleftrightarrow l(X) \leq \theta \leq u(X)$.
Then $P(C=1) \geq \gamma$.
Let us collect data so that we have values $x = (x_1,\dots,x_n) \in \mathbb{R}^n$,
where $x_i$ is the realization of $X_i$ for all $i$.
Then our degree of belief that the event $C=1$ happened should be $\delta := P(C=1 \vert X = x)$.
In general, we cannot compute this conditional probability, but we know that it is either $0$ or $1$,
since $(C = 1 \land X = x) \Longleftrightarrow ((l(x) \leq \theta \leq u(x)) \land X = x)$.
If $l(x) \leq \theta \leq u(x)$ is false, then the latter statement is false, and thus $\delta=0$.
If $l(x) \leq \theta \leq u(x)$ is true, then the latter statement is equivalent to $X=x$, and thus $\delta=1$.
If we only know the values $l(x)$ and $u(x)$ and not the data $x$,
we can still argue in a similar way that $\delta \in \{0,1\}$.
It is still $P(C=1) \geq \gamma$.
If, for our degree of belief that $C=1$ happened,
we like this prior probability more,
then we must ignore $x$, and this also means ignoring the confidence interval $[l(x),u(x)]$.
Saying that $[l(x),u(x)]$ contained $\theta$ with probability at least $\gamma$,
would mean acknowledging this evidence and at the same time ignoring it.
Learning More, Knowing Less
What makes this situation so difficult to grasp may be the fact
that we cannot compute the conditional probability $\delta$.
But this is not particular to the CIE situation,
rather it may occur whenever we have insufficient information about the joint distribution of random variables.
As an example, let $X$ and $Y$ be discrete random variables and let their marginal distributions be given,
that is, for each $x \in \mathbb{R}$, we know $P(X=x)$ and $P(Y=x)$.
This does not give us their joint distribution, that is,
we do not know $P(X=x \land Y=y)$ for any $x,y \in \mathbb{R}$.
Assume that a result of this experiment should be reported as the value of the random vector $(X,Y)$,
that is, results should be reported as pairs of real numbers.
Let the underlying experiment be conducted, and assume that we learn that $Y=7$ happened,
while the value for $X$ is still unknown to us.
This does not change $P(X=x)$ for any $x$.
However, it would be problematic to say that the result of the experiment was of the form $(x,7)$,
where $x \in \mathbb{R}$,
and that the probability for each particular real number $x$ for being the first component of this pair was $P(X=x)$.
It is problematic since in this way, we would acknowledge the evidence $Y=7$
and at the same time ignore it.
We acknowledge the evidence $Y=7$ by reporting the second component of the pair as being $7$.
We ignore it by using the prior probability $P(X=x)$, where in fact
our degree of belief for $X=x$ should now be
$P(X=x \vert Y=7) = P(X=x \land Y=7) / P(Y=7)$, which unfortunately we cannot compute.
It may be unsatisfactory that in a sense,
knowing more about $Y$ forces us to say less about $X$.
But to the best of my knowledge this is how things are. | Clarification on interpreting confidence intervals?
The issue can be characterized as a confusion of prior and posterior probability
or maybe as the dissatisfaction of not knowing the joint distribution of certain random variables.
Conditioning
As an i |
5,331 | Clarification on interpreting confidence intervals? | Modeling
Correct procedure is:
(1) model the situation as a probability space $\mathcal{S} = (\Omega,\Sigma,P)$;
(2) define an event $E \in \Sigma$ of interest;
(3) determine its probability $P(E)$.
The event $E$ may be specified via random variables,
that is, functions on $\mathcal{S}$ (measurable functions, that is, but let's not worry about this here).
The space $\mathcal{S}$ may be given implicitly by one or more random variables
and their joint distribution.
Step (1) may allow some leeway.
The appropriateness of the modeling can sometimes be tested
by comparing the probability of certain events with what we would expect intuitively.
In particular, looking at certain marginal or conditional probabilities may help
to get an idea how appropriate the modeling is.
Sometimes, modeling or a part of it has already been done and we can build on this.
In statistics (at a certain point),
we typically are already given real-valued random variables
$X_1, \dots, X_n \sim \mathrm{Dist}(\theta)$ i.i.d
with fixed but unknown ${\theta \in \mathbb{R}}$.
Confidence Interval Estimator
A confidence interval estimator (CIE) at the $\gamma$ confidence level
is a pair of functions $L$ and $R$ with domain $\mathbb{R}^n$
such that $P(L(X) \leq \theta \leq R(X)) \geq \gamma$, writing $X = (X_1, \dots, X_n)$.
I prefer the wording "confidence interval estimator" to underline
that it is the functions and their functional properties that count;
$L(X)$ and $R(X)$ are both functions on the implicitly given sample space,
that is, they are random variables.
Given an observation $x \in \mathbb{R}^n$,
speaking of the "probability" of $L(x) \leq \theta \leq R(x)$ makes no sense
since this is not an event: it does not contain any random variables.
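As a toy instance of this definition (assuming, purely for illustration, that the $X_i$ are $\mathcal{N}(\theta,1)$ so that the usual z-interval supplies the pair $(L,R)$; none of this is prescribed by the text above):

```python
# L and R are ordinary functions of the data vector; L(X) and R(X) are random
# variables, while L(x) and R(x) for an observed x are just two numbers.
import numpy as np

Z = 1.959964                               # approx. the 0.975 normal quantile, gamma = 0.95

def L(x: np.ndarray) -> float:
    return x.mean() - Z / np.sqrt(len(x))

def R(x: np.ndarray) -> float:
    return x.mean() + Z / np.sqrt(len(x))

theta = 2.0                                # unknown in practice; fixed here only to generate data
x = np.random.default_rng(7).normal(theta, 1.0, size=25)
print(L(x), R(x))
```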
Preferences
Suppose one may choose between a lottery ticket that has been drawn from a set of tickets
where a $\gamma_1$ fraction consists of winning tickets,
and one that has been drawn from a set where a $\gamma_2$ fraction consists of winning tickets,
and suppose $\gamma_1 < \gamma_2$.
Both tickets have already been drawn, but neither of them has been revealed.
Of course, all else being equal, we would prefer the second ticket,
since it had a higher probability of being a winning ticket than the first one
when they were drawn.
A preference regarding different observations (the two tickets in this examples)
based on the probabilistic properties of the random processes that generated the observations is fine.
Note that we do not say that any of the tickets has a higher probability of being a winning ticket.
If we ever say so, then with "probability" in a colloquial sense, which could mean anything,
so it is best avoided here.
With CIEs of different confidence levels, all else is usually not equal,
since higher confidence level will make the intervals delivered by the CIE tend to be wider.
So we cannot even give a preference in this case;
we cannot say that we generally prefer intervals computed with a CIE that has higher confidence level.
But if all else was equal, we would prefer intervals produced by a CIE that has highest available confidence level.
For example, if we were to choose between an interval that is the output of a CIE at the $0.95$ confidence level
and an interval of the same length that has been drawn uniformly at random from
the set of all intervals of this length, we would certainly prefer the former.
Example with a Simple Prior
Let us consider an example where the probabilistic modeling has been extended
in order to make the parameter we are interested in a random variable.
Suppose $\theta$ is a discrete random variable with $P(\theta=0) = P(\theta=1) = 1/2$
and that for each $\vartheta \in \mathbb{R}$,
conditioned on the knowledge of $\theta = \vartheta$, we have $X_1, \dots, X_n \sim \mathcal{N}(\vartheta, 1)$ i.i.d.
Let $L,R$ constitute a (classical) CIE for the mean of the normal distribution at the $\gamma$ confidence level,
that is, for each $\vartheta \in \mathbb{R}$, we have $P(L(X) \leq \vartheta \leq R(X) \vert \theta = \vartheta) \geq \gamma$,
which implies ${P(L(X) \leq \theta \leq R(X)) \geq \gamma}$.
Suppose we observe a concrete value $x \in \mathbb{R}^n$ of the $(X_1, \dots, X_n)$.
Now, what is the probability of $\theta$ being located inside of the interval specified by $L(x)$ and $R(x)$,
that is, what is $P(L(x) \leq \theta \leq R(x) \vert X = x)$?
Denote $f_\mu$ the joint PDF of $n$ independent, normally distributed random variables
with mean $\mu$ and standard deviation $\sigma=1$.
A calculation using Bayes' rule and the law of total probability shows:
$$P(L(x) \leq \theta \leq R(x) \vert X = x) =
\begin{cases}
\frac{f_0(x)}{f_0(x) + f_1(x)} & \text{if $L(x) \leq 0 \leq R(x) < 1$} \\
\frac{f_1(x)}{f_0(x) + f_1(x)} & \text{if $0 < L(x) \leq 1 \leq R(x)$} \\
1 & \text{if $L(x) \leq 0$ and $1 \leq R(x)$} \\
0 & \text{else}
\end{cases}$$
Remarkably, this probability has nothing to do with the confidence level $\gamma$ at all!
So even if the question for the probability of $\theta$ being contained in the output of the CIE makes sense,
that is, if $L(X) \leq \theta \leq R(X)$ is an event in our probabilistic model,
its probability in general is not $\gamma$, but can be something completely different.
In fact, once we have agreed on a prior (such as the simple discrete distribution of $\theta$ here)
and we have an observation $x$, it may be more informative to condition on $x$ than looking at the output of a CIE.
Precisely, for $\{\mu_0,\mu_1\} = \{0,1\}$ we have:
$$P(\theta = \mu_0 \vert X=x) = \frac{f_{\mu_0}(x)}{f_{\mu_0}(x) + f_{\mu_1}(x)}$$ | Clarification on interpreting confidence intervals? | Modeling
Correct procedure is:
(1) model the situation as a probability space $\mathcal{S} = (\Omega,\Sigma,P)$;
(2) define an event $E \in \Sigma$ of interest;
(3) determine its probability $P(E)$.
T | Clarification on interpreting confidence intervals?
Modeling
Correct procedure is:
(1) model the situation as a probability space $\mathcal{S} = (\Omega,\Sigma,P)$;
(2) define an event $E \in \Sigma$ of interest;
(3) determine its probability $P(E)$.
The event $E$ may be specified via random variables,
that is, functions on $\mathcal{S}$ (measurable functions, that is, but let's not worry about this here).
The space $\mathcal{S}$ may be given implicitly by one or more random variables
and their joint distribution.
Step (1) may allow some leeway.
The appropriateness of the modeling can sometimes be tested
by comparing the probability of certain events with what we would expect intuitively.
In particular, looking at certain marginal or conditional probabilities may help
to get an idea how appropriate the modeling is.
Sometimes, modeling or a part of it has already been done and we can build on this.
In statistics (at a certain point),
we typically are already given real-valued random variables
$X_1, \dots, X_n \sim \mathrm{Dist}(\theta)$ i.i.d
with fixed but unknown ${\theta \in \mathbb{R}}$.
Confidence Interval Estimator
A confidence interval estimator (CIE) at the $\gamma$ confidence level
is a pair of functions $L$ and $R$ with domain $\mathbb{R}^n$
such that $P(L(X) \leq \theta \leq R(X)) \geq \gamma$, writing $X = (X_1, \dots, X_n)$.
I prefer the wording "confidence interval estimator" to underline
that it is the functions and their functional properties that count;
$L(X)$ and $R(X)$ are both functions on the implicitly given sample space,
that is, they are random variables.
Given an observation $x \in \mathbb{R}^n$,
speaking of the "probability" of $L(x) \leq \theta \leq R(x)$ makes no sense
since this is not an event: it does not contain any random variables.
Preferences
Suppose one may choose between a lottery ticket that has been drawn from a set of tickets
where a $\gamma_1$ fraction consists of winning tickets,
and one that has been drawn from a set where a $\gamma_2$ fraction consists of winning tickets,
and suppose $\gamma_1 < \gamma_2$.
Both tickets have already been drawn, but none of them revealed.
Of course, all else being equal, we would prefer the second ticket,
since it had a higher probability of being a winning ticket than the first one
when they were drawn.
A preference regarding different observations (the two tickets in this example)
based on the probabilistic properties of the random processes that generated the observations is fine.
Note that we do not say that any of the tickets has a higher probability of being a winning ticket.
If we ever say so, then with "probability" in a colloquial sense, which could mean anything,
so it is best avoided here.
With CIEs of different confidence levels, all else is usually not equal,
since higher confidence level will make the intervals delivered by the CIE tend to be wider.
So we cannot even give a preference in this case;
we cannot say that we generally prefer intervals computed with a CIE that has higher confidence level.
But if all else was equal, we would prefer intervals produced by a CIE that has highest available confidence level.
For example, if we were to choose between an interval that is the output of a CIE at the $0.95$ confidence level
and an interval of the same length that has been drawn uniformly at random from
the set of all intervals of this length, we would certainly prefer the former.
Example with a Simple Prior
Let us consider an example where the probabilistic modeling has been extended
in order to make the parameter we are interested in a random variable.
Suppose $\theta$ is a discrete random variable with $P(\theta=0) = P(\theta=1) = 1/2$
and that for each $\vartheta \in \mathbb{R}$,
conditioned on the knowledge of $\theta = \vartheta$, we have $X_1, \dots, X_n \sim \mathcal{N}(\vartheta, 1)$ i.i.d.
Let $L,R$ constitute a (classical) CIE for the mean of the normal distribution at the $\gamma$ confidence level,
that is, for each $\vartheta \in \mathbb{R}$, we have $P(L(X) \leq \vartheta \leq R(X) \vert \theta = \vartheta) \geq \gamma$,
which implies ${P(L(X) \leq \theta \leq R(X)) \geq \gamma}$.
Suppose we observe a concrete value $x \in \mathbb{R}^n$ of the $(X_1, \dots, X_n)$.
Now, what is the probability of $\theta$ being located inside of the interval specified by $L(x)$ and $R(x)$,
that is, what is $P(L(x) \leq \theta \leq R(x) \vert X = x)$?
Denote $f_\mu$ the joint PDF of $n$ independent, normally distributed random variables
with mean $\mu$ and standard deviation $\sigma=1$.
A calculation using Bayes' rule and the law of total probability shows:
$$P(L(x) \leq \theta \leq R(x) \vert X = x) =
\begin{cases}
\frac{f_0(x)}{f_0(x) + f_1(x)} & \text{if $L(x) \leq 0 \leq R(x) < 1$} \\
\frac{f_1(x)}{f_0(x) + f_1(x)} & \text{if $0 < L(x) \leq 1 \leq R(x)$} \\
1 & \text{if $L(x) \leq 0$ and $1 \leq R(x)$} \\
0 & \text{else}
\end{cases}$$
Remarkably, this probability has nothing to do with the confidence level $\gamma$ at all!
So even if the question for the probability of $\theta$ being contained in the output of the CIE makes sense,
that is, if $L(X) \leq \theta \leq R(X)$ is an event in our probabilistic model,
its probability in general is not $\gamma$, but can be something completely different.
In fact, once we have agreed on a prior (such as the simple discrete distribution of $\theta$ here)
and we have an observation $x$, it may be more informative to condition on $x$ than looking at the output of a CIE.
Precisely, for $\{\mu_0,\mu_1\} = \{0,1\}$ we have:
$$P(\theta = \mu_0 \vert X=x) = \frac{f_{\mu_0}(x)}{f_{\mu_0}(x) + f_{\mu_1}(x)}$$ | Clarification on interpreting confidence intervals?
Modeling
Correct procedure is:
(1) model the situation as a probability space $\mathcal{S} = (\Omega,\Sigma,P)$;
(2) define an event $E \in \Sigma$ of interest;
(3) determine its probability $P(E)$.
T |
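As a quick numerical illustration of the posterior formula in the answer above, here is a minimal R sketch; the sample x, the sample size n and the normal means 0 and 1 are made-up assumptions matching the two-point prior example, and f0, f1 are the joint normal densities.
set.seed(1)
n <- 10
x <- rnorm(n, mean = 1, sd = 1)   # pretend data; the "true" mean happens to be 1

# joint density of n iid N(mu, 1) observations, evaluated at x
f <- function(x, mu) prod(dnorm(x, mean = mu, sd = 1))

f0 <- f(x, 0)
f1 <- f(x, 1)

# posterior probabilities under the 50/50 two-point prior on theta in {0, 1}
p_theta0 <- f0 / (f0 + f1)
p_theta1 <- f1 / (f0 + f1)
c(p_theta0, p_theta1)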
5,332 | Clarification on interpreting confidence intervals? | If we could say "the probability that the true parameter lies in this confidence interval", then we wouldn't take into account the size of the sample. No matter how large the sample is, as long as the mean is the same, the confidence interval would be equally wide. But when we say "if I repeat this 100 times, then I would expect that in 95 of the cases the true parameter will lie within the interval", we are taking into account the sample size and how sure our estimate is. The larger the sample size is, the less variance the mean estimate will have. So it won't vary that much, and when we repeat the procedure 100 times, we don't need a large interval to make sure that in 95 of the cases the true parameter is in the interval. | Clarification on interpreting confidence intervals? | If we could say "the probability that the true parameter lies in this confidence interval" then we wouldn't take into account the size of the sample. No matter how large the sample is, as long as the | Clarification on interpreting confidence intervals?
If we could say "the probability that the true parameter lies in this confidence interval" then we wouldn't take into account the size of the sample. No matter how large the sample is, as long as the the mean is the same, then the confidence interval would be equally wide. But when we say "if i repeat this 100 times, then I would expect that in 95 of the cases the true parameter will lie within the interval", we are taking into account the size of the sample size, and how sure our estimate is. The larger the sample size is, the less variance will the mean estimate have. So it wont vary that much, and when we are repeating the procedure 100 times, we doesn't need a large interval to make sure that in 95 of the cases the true parameter is in the interval. | Clarification on interpreting confidence intervals?
If we could say "the probability that the true parameter lies in this confidence interval" then we wouldn't take into account the size of the sample. No matter how large the sample is, as long as the |
5,333 | Importance of local response normalization in CNN | It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role has been supplanted by other regularization techniques (such as dropout and batch normalization), better initializations and training methods. This is what is written in the lecture notes for the Stanford course CS231n on ConvNets:
Normalization Layer
Many types of normalization layers have been proposed for use in
ConvNet architectures, sometimes with the intentions of implementing
inhibition schemes observed in the biological brain. However, these
layers have recently fallen out of favor because in practice their
contribution has been shown to be minimal, if any. For various types
of normalizations, see the discussion in Alex Krizhevsky's
cuda-convnet library API. | Importance of local response normalization in CNN | It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role have been outplayed by other regularization techniques (such as dropout and batch normalizati | Importance of local response normalization in CNN
It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role has been supplanted by other regularization techniques (such as dropout and batch normalization), better initializations and training methods. This is what is written in the lecture notes for the Stanford course CS231n on ConvNets:
Normalization Layer
Many types of normalization layers have been proposed for use in
ConvNet architectures, sometimes with the intentions of implementing
inhibition schemes observed in the biological brain. However, these
layers have recently fallen out of favor because in practice their
contribution has been shown to be minimal, if any. For various types
of normalizations, see the discussion in Alex Krizhevsky's
cuda-convnet library API. | Importance of local response normalization in CNN
It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role have been outplayed by other regularization techniques (such as dropout and batch normalizati |
5,334 | Importance of local response normalization in CNN | Indeed, there seems no good explanation in a single place. The best is to read the articles from where it comes:
The original AlexNet article explains a bit in Section 3.3:
Krizhevsky, Sutskever, and Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS 2012. pdf
The exact way of doing this was proposed in (but not much extra info here):
Kevin Jarrett, Koray Kavukcuoglu, Marc’Aurelio Ranzato and Yann LeCun, What is the best Multi-Stage Architecture for Object Recognition?, ICCV 2009. pdf
It was inspired by computational neuroscience:
S. Lyu and E. Simoncelli. Nonlinear image representation using divisive normalization. CVPR 2008. pdf. This paper goes deeper into the math, and is in accordance with the answer of seanv507.
[24] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Computational Biology, 2008. | Importance of local response normalization in CNN | Indeed, there seems no good explanation in a single place. The best is to read the articles from where it comes:
The original AlexNet article explains a bit in Section 3.3:
Krizhevsky, Sutskever, and | Importance of local response normalization in CNN
Indeed, there seems no good explanation in a single place. The best is to read the articles from where it comes:
The original AlexNet article explains a bit in Section 3.3:
Krizhevsky, Sutskever, and Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS 2012. pdf
The exact way of doing this was proposed in (but not much extra info here):
Kevin Jarrett, Koray Kavukcuoglu, Marc’Aurelio Ranzato and Yann LeCun, What is the best Multi-Stage Architecture for Object Recognition?, ICCV 2009. pdf
It was inspired by computational neuroscience:
S. Lyu and E. Simoncelli. Nonlinear image representation using divisive normalization. CVPR 2008. pdf. This paper goes deeper into the math, and is in accordance with the answer of seanv507.
[24] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Computational Biology, 2008. | Importance of local response normalization in CNN
Indeed, there seems no good explanation in a single place. The best is to read the articles from where it comes:
The original AlexNet article explains a bit in Section 3.3:
Krizhevsky, Sutskever, and |
5,335 | Importance of local response normalization in CNN | Here is my suggested answer, though I don't claim to be knowledgeable.
When performing gradient descent on a linear model, the error surface is quadratic, with the curvature determined by $XX^T$, where $X$ is your input. Now the ideal error surface for gradient descent has the same curvature in all directions (otherwise the step size is too small in some directions and too big in others). Normalising your inputs by rescaling them to mean zero, variance 1 helps and is fast: now the directions along each dimension all have the same curvature, which in turn bounds the curvature in other directions.
The optimal solution would be to sphere/whiten the inputs to each neuron; however, this is computationally too expensive. LCN can be justified as an approximate whitening based on the assumption of a high degree of correlation between neighbouring pixels (or channels).
So I would claim the benefit is that the error surface is more benign for SGD... a single learning rate works well across the input dimensions (of each neuron) | Importance of local response normalization in CNN | Here is my suggested answer, though I don't claim to be knowledgeable.
When performing gradient descent on a linear model, the error surface is quadratic, with the curvature determined by $XX_T$, | Importance of local response normalization in CNN
Here is my suggested answer, though I don't claim to be knowledgeable.
When performing gradient descent on a linear model, the error surface is quadratic, with the curvature determined by $XX^T$, where $X$ is your input. Now the ideal error surface for gradient descent has the same curvature in all directions (otherwise the step size is too small in some directions and too big in others). Normalising your inputs by rescaling them to mean zero, variance 1 helps and is fast: now the directions along each dimension all have the same curvature, which in turn bounds the curvature in other directions.
The optimal solution would be to sphere/whiten the inputs to each neuron; however, this is computationally too expensive. LCN can be justified as an approximate whitening based on the assumption of a high degree of correlation between neighbouring pixels (or channels).
So I would claim the benefit is that the error surface is more benign for SGD... a single learning rate works well across the input dimensions (of each neuron) | Importance of local response normalization in CNN
Here is my suggested answer, though I don't claim to be knowledgeable.
When performing gradient descent on a linear model, the error surface is quadratic, with the curvature determined by $XX_T$, |
5,336 | Importance of local response normalization in CNN | With this answer I would like to summarize contributions of other authors and provide a single place explanation of the LRN (or contrastive normalization) technique for those, who just want to get aware of what it is and how it works.
Motivation: 'This sort of response normalization (LRN) implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities among neuron outputs computed using different kernels.' AlexNet 3.3
In other words, LRN allows us to diminish responses that are uniformly large in the neighborhood and to make large activations more pronounced within a neighborhood, i.e. to create higher contrast in the activation map. prateekvjoshi.com states that it is particularly useful with unbounded activation functions such as ReLU.
Original Formula: For every particular position (x, y) and kernel i that corresponds to a single 'pixel' output we apply a 'filter', that incorporates information about outputs of other n kernels applied to the same position. This regularization is applied before activation function. This regularization, indeed, relies on the order of kernels which is, to my best knowledge, just an unfortunate coincidence.
In practice (see Caffe) 2 approaches can be used:
WITHIN_CHANNEL. Normalize over local neighborhood of a single channel (corresponding to a single convolutional filter). In other words, divide response of a single channel of a single pixel according to output values of the same neuron for pixels nearby.
ACROSS_CHANNELS. For a single pixel normalize values of every channel according to values of all channels for the same pixel
Actual usage: LRN was used more often during the days of early convnets like LeNet-5. The current implementation of GoogLeNet (Inception) in Caffe often uses LRN in connection with pooling techniques, but it seems to be done just for the sake of having it. Neither the original Inception/GoogLeNet (here) nor any of the following versions mention LRN in any way. Also, the TensorFlow implementation of Inception networks (provided and updated by the team of original authors) does not use LRN despite it being available.
Conclusion: Applying LRN along with a pooling layer would not hurt the performance of the network as long as the hyper-parameter values are reasonable. Despite that, I am not aware of any recent justification for applying LRN/contrast normalization in a neural network. | Importance of local response normalization in CNN | With this answer I would like to summarize contributions of other authors and provide a single place explanation of the LRN (or contrastive normalizati | Importance of local response normalization in CNN
With this answer I would like to summarize contributions of other authors and provide a single place explanation of the LRN (or contrastive normalization) technique for those, who just want to get aware of what it is and how it works.
Motivation: 'This sort of response normalization (LRN) implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities among neuron outputs computed using different kernels.' AlexNet 3.3
In other words, LRN allows us to diminish responses that are uniformly large in the neighborhood and to make large activations more pronounced within a neighborhood, i.e. to create higher contrast in the activation map. prateekvjoshi.com states that it is particularly useful with unbounded activation functions such as ReLU.
Original Formula: For every particular position (x, y) and kernel i that corresponds to a single 'pixel' output we apply a 'filter', that incorporates information about outputs of other n kernels applied to the same position. This regularization is applied before activation function. This regularization, indeed, relies on the order of kernels which is, to my best knowledge, just an unfortunate coincidence.
In practice (see Caffe) 2 approaches can be used:
WITHIN_CHANNEL. Normalize over local neighborhood of a single channel (corresponding to a single convolutional filter). In other words, divide response of a single channel of a single pixel according to output values of the same neuron for pixels nearby.
ACROSS_CHANNELS. For a single pixel normalize values of every channel according to values of all channels for the same pixel
Actual usage: LRN was used more often during the days of early convnets like LeNet-5. The current implementation of GoogLeNet (Inception) in Caffe often uses LRN in connection with pooling techniques, but it seems to be done just for the sake of having it. Neither the original Inception/GoogLeNet (here) nor any of the following versions mention LRN in any way. Also, the TensorFlow implementation of Inception networks (provided and updated by the team of original authors) does not use LRN despite it being available.
Conclusion: Applying LRN along with a pooling layer would not hurt the performance of the network as long as the hyper-parameter values are reasonable. Despite that, I am not aware of any recent justification for applying LRN/contrast normalization in a neural network. | Importance of local response normalization in CNN
With this answer I would like to summarize contributions of other authors and provide a single place explanation of the LRN (or contrastive normalization) technique for those, who just want to get awa |
5,337 | Importance of local response normalization in CNN | Local Response Normalization(LRN) type of layer turns out to be useful when using neurons with unbounded activations (e.g. rectified linear neurons), because it permits the detection of high-frequency features with a big neuron response, while damping responses that are uniformly large in a local neighborhood. It is a type of regularizer that encourages "competition" for big activities among nearby groups of neurons.
src-https://code.google.com/p/cuda-convnet/wiki/LayerParams#Local_response_normalization_layer_(same_map) | Importance of local response normalization in CNN | Local Response Normalization(LRN) type of layer turns out to be useful when using neurons with unbounded activations (e.g. rectified linear neurons), because it permits the detection of high-frequency | Importance of local response normalization in CNN
Local Response Normalization(LRN) type of layer turns out to be useful when using neurons with unbounded activations (e.g. rectified linear neurons), because it permits the detection of high-frequency features with a big neuron response, while damping responses that are uniformly large in a local neighborhood. It is a type of regularizer that encourages "competition" for big activities among nearby groups of neurons.
src-https://code.google.com/p/cuda-convnet/wiki/LayerParams#Local_response_normalization_layer_(same_map) | Importance of local response normalization in CNN
Local Response Normalization(LRN) type of layer turns out to be useful when using neurons with unbounded activations (e.g. rectified linear neurons), because it permits the detection of high-frequency |
5,338 | Importance of local response normalization in CNN | Local response normalization (LRN) is done pixel-wise for each channel $i$:
$$\hat{x}_i = \frac{x_i}{ \left(k + \alpha \sum_j x_j^2 \right)^\beta }$$
where the sum runs over a local neighborhood of channels $j$ around channel $i$, and $k, \alpha, \beta \in \mathbb{R}$ are constants. Note that you get L2 normalization if you set $k = 0$, $\alpha=1$, $\beta=\frac{1}{2}$.
However, there is a much newer technique called "batch normalization" (see paper) which works pretty similar and suggests not to use LRN anymore. Batch normalization also works pixel-wise:
$$y = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} \gamma + \beta$$
where $\mu$ is the mean, $\sigma^2$ is the variance, $\varepsilon > 0$ is a small constant, $\gamma, \beta \in \mathbb{R}$ are learnable parameters which allow the net to remove the normalization.
So the answer is: Local Response Normalization is not important any more, because we have something which works better and replaced LRN: Batch Normalization.
See also
Lasagne documentation | Importance of local response normalization in CNN | Local response normalization (LRN) is done pixel-wise for each channel $i$:
$$x_i = \frac{x_i}{ (k + ( \alpha \sum_j x_j^2 ))^\beta }$$
where $k, \alpha, \beta \in \mathbb{R}$ are constants. Note that | Importance of local response normalization in CNN
Local response normalization (LRN) is done pixel-wise for each channel $i$:
$$\hat{x}_i = \frac{x_i}{ \left(k + \alpha \sum_j x_j^2 \right)^\beta }$$
where the sum runs over a local neighborhood of channels $j$ around channel $i$, and $k, \alpha, \beta \in \mathbb{R}$ are constants. Note that you get L2 normalization if you set $k = 0$, $\alpha=1$, $\beta=\frac{1}{2}$.
However, there is a much newer technique called "batch normalization" (see paper) which works pretty similar and suggests not to use LRN anymore. Batch normalization also works pixel-wise:
$$y = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} \gamma + \beta$$
where $\mu$ is the mean, $\sigma^2$ is the variance, $\varepsilon > 0$ is a small constant, $\gamma, \beta \in \mathbb{R}$ are learnable parameters which allow the net to remove the normalization.
So the answer is: Local Response Normalization is not important any more, because we have something which works better and replaced LRN: Batch Normalization.
See also
Lasagne documentation | Importance of local response normalization in CNN
Local response normalization (LRN) is done pixel-wise for each channel $i$:
$$x_i = \frac{x_i}{ (k + ( \alpha \sum_j x_j^2 ))^\beta }$$
where $k, \alpha, \beta \in \mathbb{R}$ are constants. Note that |
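To make the LRN formula in the answer above concrete, here is a minimal R sketch of channel-wise normalization at a single pixel position; the activation values, the neighborhood radius, and the hyper-parameter values are made-up assumptions, not taken from any particular network.
lrn <- function(a, k = 2, alpha = 1e-4, beta = 0.75, radius = 2) {
  N <- length(a)
  out <- numeric(N)
  for (i in 1:N) {
    lo <- max(1, i - radius)          # local neighborhood of channels around i
    hi <- min(N, i + radius)
    out[i] <- a[i] / (k + alpha * sum(a[lo:hi]^2))^beta
  }
  out
}

a <- c(0.5, 3, 0.2, 0.1, 2)   # activations of 5 channels at one pixel
lrn(a)

# with k = 0, alpha = 1, beta = 1/2 and the sum over all channels,
# this reduces to dividing by the L2 norm:
lrn(a, k = 0, alpha = 1, beta = 0.5, radius = length(a))
a / sqrt(sum(a^2))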
5,339 | Importance of local response normalization in CNN | AlexNet also uses a competitive normalization step immediately after the ReLU step of layers C1 and C3, called local response normalization (LRN): the most strongly activated neurons inhibit other neurons located at the same position in neighboring feature maps (such competitive activation has been observed in biological neurons). This encourages different feature maps to specialize, pushing them apart and forcing them to explore a wider range of features, ultimately improving generalization. Equation 14-2 shows how to apply LRN.
Equation 14-2. Local response normalization (LRN):
$$b_i = a_i \left(k + \alpha \sum_{j=j_{\text{low}}}^{j_{\text{high}}} a_j^{\,2}\right)^{-\beta} \qquad \text{with} \qquad j_{\text{high}} = \min\!\left(i + \tfrac{r}{2},\ f_n - 1\right), \quad j_{\text{low}} = \max\!\left(0,\ i - \tfrac{r}{2}\right)$$
In this equation:
$b_i$ is the normalized output of the neuron located in feature map $i$, at some row u and column v (note that in this equation we consider only neurons located at this row and column, so u and v are not shown).
$a_i$ is the activation of that neuron after the ReLU step, but before normalization.
k, α, β, and r are hyperparameters. k is called the bias, and r is called the depth radius.
$f_n$ is the number of feature maps.
For example, if r = 2 and a neuron has a strong activation, it will inhibit the activation of the neurons located in the feature maps immediately above and below its own.
In AlexNet, the hyperparameters are set as follows: r = 5, α = 0.0001, β = 0.75, and k = 2. This step can be implemented using the tf.nn.local_response_normalization() function (which you can wrap in a Lambda layer if you want to use it in a Keras model).
A variant of AlexNet called ZF Net was developed by Matthew Zeiler and Rob Fergus and won the 2013 ILSVRC challenge. It is essentially AlexNet with a few tweaked hyperparameters (number of feature maps, kernel size, stride, etc.).
The answer is from this book
Hands-On Machine Learning with Scikit-Learn, Keras, and Tensorflow: Concepts, Tools, and Techniques to Build Intelligent Systems | Importance of local response normalization in CNN | AlexNet also uses a competitive normalization step immediately after the ReLU step of layers C1 and C3, called local response normalization (LRN): the most strongly activated neurons inhibit other neu | Importance of local response normalization in CNN
AlexNet also uses a competitive normalization step immediately after the ReLU step of layers C1 and C3, called local response normalization (LRN): the most strongly activated neurons inhibit other neurons located at the same position in neighboring feature maps (such competitive activation has been observed in biological neurons). This encourages different feature maps to specialize, pushing them apart and forcing them to explore a wider range of features, ultimately improving generalization. Equation 14-2 shows how to apply LRN.
Equation 14-2. Local response normalization (LRN):
$$b_i = a_i \left(k + \alpha \sum_{j=j_{\text{low}}}^{j_{\text{high}}} a_j^{\,2}\right)^{-\beta} \qquad \text{with} \qquad j_{\text{high}} = \min\!\left(i + \tfrac{r}{2},\ f_n - 1\right), \quad j_{\text{low}} = \max\!\left(0,\ i - \tfrac{r}{2}\right)$$
In this equation:
$b_i$ is the normalized output of the neuron located in feature map $i$, at some row u and column v (note that in this equation we consider only neurons located at this row and column, so u and v are not shown).
$a_i$ is the activation of that neuron after the ReLU step, but before normalization.
k, α, β, and r are hyperparameters. k is called the bias, and r is called the depth radius.
$f_n$ is the number of feature maps.
For example, if r = 2 and a neuron has a strong activation, it will inhibit the activation of the neurons located in the feature maps immediately above and below its own.
In AlexNet, the hyperparameters are set as follows: r = 5, α = 0.0001, β = 0.75, and k = 2. This step can be implemented using the tf.nn.local_response_normalization() function (which you can wrap in a Lambda layer if you want to use it in a Keras model).
A variant of AlexNet called ZF Net was developed by Matthew Zeiler and Rob Fergus and won the 2013 ILSVRC challenge. It is essentially AlexNet with a few tweaked hyperparameters (number of feature maps, kernel size, stride, etc.).
The answer is from this book
Hands-On Machine Learning with Scikit-Learn, Keras, and Tensorflow: Concepts, Tools, and Techniques to Build Intelligent Systems | Importance of local response normalization in CNN
AlexNet also uses a competitive normalization step immediately after the ReLU step of layers C1 and C3, called local response normalization (LRN): the most strongly activated neurons inhibit other neu |
5,340 | Confidence interval around binomial estimate of 0 or 1 | Do not use the normal approximation
Much has been written about this problem. A general advice is to never use the normal approximation (i.e., the asymptotic/Wald confidence interval), as it has terrible coverage properties. R code for illustrating this:
library(binom)
p = seq(0,1,.001)
coverage = binom.coverage(p, 25, method="asymptotic")$coverage
plot(p, coverage, type="l")
binom.confint(0,25)
abline(h=.95, col="red")
For small success probabilities, you might ask for a 95% confidence interval, but actually get, say, a 10% confidence interval!
Recommendations
So what should we use? I believe the current recommendations are the ones listed in the paper Interval Estimation for a Binomial Proportion by Brown, Cai and DasGupta in Statistical Science 2001, vol. 16, no. 2, pages 101–133. The authors examined several methods for calculating confidence intervals, and came to the following conclusion.
[W]e recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n.
The Wilson interval is also sometimes called the score interval, since it’s based on inverting a score test.
Calculating the intervals
To calculate these confidence intervals, you can use this online calculator or the binom.confint() function in the binom package in R. For example, for 0 successes in 25 trials, the R code would be:
> binom.confint(0, 25, method=c("wilson", "bayes", "agresti-coull"),
type="central")
method x n mean lower upper
1 agresti-coull 0 25 0.000 -0.024 0.158
2 bayes 0 25 0.019 0.000 0.073
3 wilson 0 25 0.000 0.000 0.133
Here bayes is the Jeffreys interval. (The argument type="central" is needed to get the equal-tailed interval.)
Note that you should decide on which of the three methods you want to use before calculating the interval. Looking at all three and selecting the shortest will naturally give you too small coverage probability.
A quick, approximate answer
As a final note, if you observe exactly zero successes in your n trials and just want a very quick approximate confidence interval, you can use the rule of three. Simply divide the number 3 by n. In the above example n is 25, so the upper bound is 3/25 = 0.12 (the lower bound is of course 0). | Confidence interval around binomial estimate of 0 or 1 | Do not use the normal approximation
Much has been written about this problem. A general advice is to never use the normal approximation (i.e., the asymptotic/Wald confidence interval), as it has terri | Confidence interval around binomial estimate of 0 or 1
Do not use the normal approximation
Much has been written about this problem. A general advice is to never use the normal approximation (i.e., the asymptotic/Wald confidence interval), as it has terrible coverage properties. R code for illustrating this:
library(binom)
p = seq(0,1,.001)
coverage = binom.coverage(p, 25, method="asymptotic")$coverage
plot(p, coverage, type="l")
binom.confint(0,25)
abline(h=.95, col="red")
For small success probabilities, you might ask for a 95% confidence interval, but actually get, say, a 10% confidence interval!
Recommendations
So what should we use? I believe the current recommendations are the ones listed in the paper Interval Estimation for a Binomial Proportion by Brown, Cai and DasGupta in Statistical Science 2001, vol. 16, no. 2, pages 101–133. The authors examined several methods for calculating confidence intervals, and came to the following conclusion.
[W]e recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n.
The Wilson interval is also sometimes called the score interval, since it’s based on inverting a score test.
Calculating the intervals
To calculate these confidence intervals, you can use this online calculator or the binom.confint() function in the binom package in R. For example, for 0 successes in 25 trials, the R code would be:
> binom.confint(0, 25, method=c("wilson", "bayes", "agresti-coull"),
type="central")
method x n mean lower upper
1 agresti-coull 0 25 0.000 -0.024 0.158
2 bayes 0 25 0.019 0.000 0.073
3 wilson 0 25 0.000 0.000 0.133
Here bayes is the Jeffreys interval. (The argument type="central" is needed to get the equal-tailed interval.)
Note that you should decide on which of the three methods you want to use before calculating the interval. Looking at all three and selecting the shortest will naturally give you too small coverage probability.
A quick, approximate answer
As a final note, if you observe exactly zero successes in your n trials and just want a very quick approximate confidence interval, you can use the rule of three. Simply divide the number 3 by n. In the above example n is 25, so the upper bound is 3/25 = 0.12 (the lower bound is of course 0). | Confidence interval around binomial estimate of 0 or 1
Do not use the normal approximation
Much has been written about this problem. A general advice is to never use the normal approximation (i.e., the asymptotic/Wald confidence interval), as it has terri |
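As a quick check of the rule of three from the answer above: for 0 successes in n trials the one-sided 95% upper bound solves (1 - p)^n = 0.05, i.e. p = 1 - 0.05^(1/n), which is close to 3/n. A minimal R sketch with n = 25 as in the example:
n <- 25
1 - 0.05^(1/n)   # upper bound from (1 - p)^n = 0.05
3 / n            # rule-of-three approximation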
5,341 | Confidence interval around binomial estimate of 0 or 1 | Agresti (2007, pp.9-10) shows that when a proportion falls near 0 or 1, the confidence interval $p\pm z_{\alpha/2}\sqrt{p(1-p)/n}$ performs poorly. Instead, use a "duality with significance tests... [that] consists of all values of $\pi_0$ for the null hypothesis parameter that are judged plausible," where $\pi_0$ is the unknown parameter. Do this by solving for $\pi_0$ in the equation
$$\frac{|p-\pi_0|}{\sqrt{\pi_0(1-\pi_0)/n}}=z_0,$$ where $z_0$ is the appropriate critical value (e.g. $z_0 = 1.96$ for 95% confidence).
Do this by squaring both sides, yielding
$$(1+z_0^2/n)\pi_0^2+(-2p-z_0^2/n)\pi_0+p^2=0$$
Solve using the quadratic formula; the two roots for $\pi_0$ are the endpoints of the resulting (Wilson score) confidence interval. | Confidence interval around binomial estimate of 0 or 1 | Agresti (2007, pp.9-10) shows that when a proportion falls near 0 or 1, the confidence interval $p\pm z_{\alpha/2}\sqrt{p(1-p)/n}$ performs poorly. Instead, use a "duality with significance tests... [ | Confidence interval around binomial estimate of 0 or 1
Agresti (2007, pp.9-10) shows that when a proportion falls near 0 or 1, the confidence interval $p\pm z_{\alpha/2}\sqrt{p(1-p)/n}$ performs poorly. Instead, use a "duality with significance tests... [that] consists of all values of $\pi_0$ for the null hypothesis parameter that are judged plausible," where $\pi_0$ is the unknown parameter. Do this by solving for $\pi_0$ in the equation
$$\frac{|p-\pi_0|}{\sqrt{\pi_0(1-\pi_0)/n}}=z_0,$$ where $z_0$ is the appropriate critical value (e.g. $z_0 = 1.96$ for 95% confidence).
Do this by squaring both sides, yielding
$$(1+z_0^2/n)\pi_0^2+(-2p-z_0^2/n)\pi_0+p^2=0$$
Solve using the quadratic formula; the two roots for $\pi_0$ are the endpoints of the resulting (Wilson score) confidence interval. | Confidence interval around binomial estimate of 0 or 1
Agresti (2007, pp.9-10) shows that when a proportion falls near 0 or 1, the confidence interval $p\pm z_{\alpha/2}\sqrt{p(1-p)/n}$ performs poorly. Instead, use a "duality with significance tests... [ |
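A minimal R sketch of the approach described above: solve the quadratic for $\pi_0$ and compare with the Wilson interval from the binom package. Here p = 0, n = 25 and a 95% level are assumed to match the earlier example.
n <- 25
p <- 0 / n
z <- qnorm(0.975)

# coefficients of (1 + z^2/n) * pi0^2 + (-2p - z^2/n) * pi0 + p^2 = 0
A <- 1 + z^2 / n
B <- -2 * p - z^2 / n
C <- p^2
(-B + c(-1, 1) * sqrt(B^2 - 4 * A * C)) / (2 * A)   # interval endpoints

# should agree with:
# binom::binom.confint(0, 25, method = "wilson")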
5,342 | How do I test a nonlinear association? | ...the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature?
One way of doing this would be to fit $y$ as a semi-parametrically estimated function of $x$ using, for example, a generalized additive model and testing whether or not that functional estimate is constant, which would indicate no relationship between $y$ and $x$. This approach frees you from having to do polynomial regression and making sometimes arbitrary decisions about the order of the polynomial, etc.
Specifically, if you have observations, $(Y_i, X_i)$, you could fit the model:
$$ Y_i = \alpha + f(X_i) + \varepsilon_i $$
and test the hypothesis $H_{0} : f(x) = 0, \ \forall x$. In R, you can do this using the gam() function. If y is your outcome and x is your predictor, you could type:
library(mgcv)
g <- gam(y ~ s(x))
Typing summary(g) will give you the result of the hypothesis test above. As far as characterizing the nature of the relationship, this would be best done with a plot. One way to do this in R (assuming the code above has already been entered)
plot(g,scheme=2)
If your response variable is discrete (e.g. binary), you can accommodate that within this framework by fitting a logistic GAM (in R, you'd add family=binomial to your call to gam). Also, if you have multiple predictors, you can include multiple additive terms (or ordinary linear terms), or fit multivariable functions, e.g. $f(x,z)$ if you had predictors x, z. The complexity of the relationship is automatically selected by cross validation if you use the default methods, although there is a lot of flexibility here - see the gam help file if interested. | How do I test a nonlinear association? | ...the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature?
One way of doing this would be to fit $y$ as a semi-parametrically | How do I test a nonlinear association?
...the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature?
One way of doing this would be to fit $y$ as a semi-parametrically estimated function of $x$ using, for example, a generalized additive model and testing whether or not that functional estimate is constant, which would indicate no relationship between $y$ and $x$. This approach frees you from having to do polynomial regression and making sometimes arbitrary decisions about the order of the polynomial, etc.
Specifically, if you have observations, $(Y_i, X_i)$, you could fit the model:
$$ Y_i = \alpha + f(X_i) + \varepsilon_i $$
and test the hypothesis $H_{0} : f(x) = 0, \ \forall x$. In R, you can do this using the gam() function. If y is your outcome and x is your predictor, you could type:
library(mgcv)
g <- gam(y ~ s(x))
Typing summary(g) will give you the result of the hypothesis test above. As far as characterizing the nature of the relationship, this would be best done with a plot. One way to do this in R (assuming the code above has already been entered)
plot(g,scheme=2)
If your response variable is discrete (e.g. binary), you can accommodate that within this framework by fitting a logistic GAM (in R, you'd add family=binomial to your call to gam). Also, if you have multiple predictors, you can include multiple additive terms (or ordinary linear terms), or fit multivariable functions, e.g. $f(x,z)$ if you had predictors x, z. The complexity of the relationship is automatically selected by cross validation if you use the default methods, although there is a lot of flexibility here - see the gam help file if interested. | How do I test a nonlinear association?
...the relationship is nonlinear yet there is a clear relation between x and y, how can I test the association and label its nature?
One way of doing this would be to fit $y$ as a semi-parametrically |
5,343 | How do I test a nonlinear association? | If the nonlinear relationship had been monotonic, rank correlation (Spearman's rho) would be appropriate. In your example there is a clear small region where the curve changes from monotonically increasing to monotonically decreasing, like a parabola would do at the point where the first derivative equals $0$.
I think if you have some modeling knowledge (beyond the empirical information) of where that change point occurs (say at $x=a$), then you can characterize the correlation as positive and use Spearman's rho on the set of $(x,y)$ pairs where $x < a$ to provide an estimate of that correlation, and use another estimate of Spearman's correlation for $x>a$, where the correlation is negative. These two estimates then characterize the correlation structure between $x$ and $y$, and unlike a correlation estimate that would be near $0$ when estimated using all the data, these estimates will both be large and opposite in sign.
Some might argue that just the empirical information (i.e. the observed $(x,y)$ pairs) is enough to justify this. | How do I test a nonlinear association? | If the nonlinear relationship had been monotonic rank correlation (Spearman's rho) would be appropriate. In your example there is a clear small region where the curve changes from monotoncally increas | How do I test a nonlinear association?
If the nonlinear relationship had been monotonic, rank correlation (Spearman's rho) would be appropriate. In your example there is a clear small region where the curve changes from monotonically increasing to monotonically decreasing, like a parabola would do at the point where the first derivative equals $0$.
I think if you have some modeling knowledge (beyond the empirical information) of where that change point occurs (say at $x=a$), then you can characterize the correlation as positive and use Spearman's rho on the set of $(x,y)$ pairs where $x < a$ to provide an estimate of that correlation, and use another estimate of Spearman's correlation for $x>a$, where the correlation is negative. These two estimates then characterize the correlation structure between $x$ and $y$, and unlike a correlation estimate that would be near $0$ when estimated using all the data, these estimates will both be large and opposite in sign.
Some might argue that just the empirical information (i.e. the observed $(x,y)$ pairs) is enough to justify this. | How do I test a nonlinear association?
If the nonlinear relationship had been monotonic rank correlation (Spearman's rho) would be appropriate. In your example there is a clear small region where the curve changes from monotoncally increas |
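A small R sketch of the split-Spearman idea from the answer above; the data-generating curve, the sample size, and the change point a = 0 are made-up assumptions for illustration.
set.seed(42)
x <- runif(200, -1, 1)
y <- -x^2 + rnorm(200, sd = 0.05)   # inverted parabola: increasing, then decreasing
a <- 0                              # assumed change point

cor(x[x < a], y[x < a], method = "spearman")   # strongly positive
cor(x[x > a], y[x > a], method = "spearman")   # strongly negative
cor(x, y, method = "spearman")                 # near zero on the full data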
5,344 | How do I test a nonlinear association? | You can test any kind of dependence by using distance correlation tests. See here for more information about the distance correlation: Understanding distance correlation computations
And here is the original paper: https://arxiv.org/pdf/0803.4101.pdf
In R this is implemented in the energy package with the dcor.test function. | How do I test a nonlinear association? | You can test any kind of dependence by using distance correlation tests. See here for more informations about the distance correlation: Understanding distance correlation computations
And here the ori | How do I test a nonlinear association?
You can test any kind of dependence by using distance correlation tests. See here for more information about the distance correlation: Understanding distance correlation computations
And here is the original paper: https://arxiv.org/pdf/0803.4101.pdf
In R this is implemented in the energy package with the dcor.test function. | How do I test a nonlinear association?
You can test any kind of dependence by using distance correlation tests. See here for more informations about the distance correlation: Understanding distance correlation computations
And here the ori |
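A minimal usage sketch of the dcor.test function mentioned above; the data x2 and y2 are made up here, and R = 199 permutation replicates is an arbitrary choice.
# install.packages("energy")
library(energy)

set.seed(1)
x2 <- rnorm(100)
y2 <- x2^2 + rnorm(100, sd = 0.1)   # nonlinear, non-monotonic dependence

dcor(x2, y2)                 # distance correlation estimate
dcor.test(x2, y2, R = 199)   # permutation test of independence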
5,345 | How do I test a nonlinear association? | Someone correct me if my understanding is wrong here, but one way to deal with non-linear variables is to use a linear approximation. So, for example, taking the log of an exponential distribution should allow you to treat the variable as a normal distribution. It may then be used to solve the problem like any linear regression. | How do I test a nonlinear association? | Someone correct me if my understanding is wrong here but one way to deal with non- linear variables is to use a linear approximation. So, for example, taking log of exponential distribution should all | How do I test a nonlinear association?
Someone correct me if my understanding is wrong here, but one way to deal with non-linear variables is to use a linear approximation. So, for example, taking the log of an exponential distribution should allow you to treat the variable as a normal distribution. It may then be used to solve the problem like any linear regression. | How do I test a nonlinear association?
Someone correct me if my understanding is wrong here but one way to deal with non- linear variables is to use a linear approximation. So, for example, taking log of exponential distribution should all |
5,346 | How do I test a nonlinear association? | I used to implement the general additive model to detect the non-linear relationship between two variables, but recently I've found out about the non-linear correlation implemented via nlcor package in R, you can implement this method in the same way as Pearson correlation, the correlation coefficient is between 0 and 1 and not -1 and 1 as in Pearson correlation. A higher correlation coefficient implies the existence of a strong non-linear relationship. Let's assume two time series x2 and y2, the nonlinear correlation between the two time series is tested as follows
install.packages("devtools")
library(devtools)
install_github("ProcessMiner/nlcor")
library(nlcor)
c <- nlcor(x2, y2, plt = T)
c$cor.estimate
[1] 0.897205
The two variables seem to be strongly correlated via nonlinear relationship, you can also obtain the adjusted p-value for the correlation coefficient
c$adjusted.p.value
[1] 0
You can also plot the results
print(c$cor.plot)
You can view this link for more details | How do I test a nonlinear association? | I used to implement the general additive model to detect the non-linear relationship between two variables, but recently I've found out about the non-linear correlation implemented via nlcor package i | How do I test a nonlinear association?
I used to implement the general additive model to detect the non-linear relationship between two variables, but recently I've found out about the non-linear correlation implemented via nlcor package in R, you can implement this method in the same way as Pearson correlation, the correlation coefficient is between 0 and 1 and not -1 and 1 as in Pearson correlation. A higher correlation coefficient implies the existence of a strong non-linear relationship. Let's assume two time series x2 and y2, the nonlinear correlation between the two time series is tested as follows
install.packages("devtools")
library(devtools)
install_github("ProcessMiner/nlcor")
library(nlcor)
c <- nlcor(x2, y2, plt = T)
c$cor.estimate
[1] 0.897205
The two variables seem to be strongly correlated via nonlinear relationship, you can also obtain the adjusted p-value for the correlation coefficient
c$adjusted.p.value
[1] 0
You can also plot the results
print(c$cor.plot)
You can view this link for more details | How do I test a nonlinear association?
I used to implement the general additive model to detect the non-linear relationship between two variables, but recently I've found out about the non-linear correlation implemented via nlcor package i |
5,347 | Statistical test to tell whether two samples are pulled from the same population? | The tests that compare distributions are rule-out tests. They start with the null hypothesis that the 2 populations are identical, then try to reject that hypothesis. We can never prove the null to be true, just reject it, so these tests cannot really be used to show that 2 samples come from the same population (or identical populations).
This is because there could be minor differences in the distributions (meaning they are not identical), but so small that tests cannot really find the difference.
Consider 2 distributions, the first is uniform from 0 to 1, the second is a mixture of 2 uniforms, so it is 1 between 0 and 0.999, and also 1 between 9.999 and 10 (0 elsewhere). So clearly these distributions are different (whether the difference is meaningful is another question), but if you take a sample size of 50 from each (total 100) there is over a 90% chance that you will only see values between 0 and 0.999 and be unable to see any real difference.
There are ways to do what is called equivalence testing where you ask if the 2 distributions/populations are equivalent, but you need to define what you consider to be equivalent. It is usually that some measure of difference is within a given range, i.e. the difference in the 2 means is less than 5% of the average of the 2 means, or the KS statistic is below a given cut-off, etc. You can then calculate a confidence interval for the difference statistic (for a difference of means this could just be the t confidence interval; bootstrapping, simulation, or other methods may be needed for other statistics). If the entire confidence interval falls in the "equivalence region", then we consider the 2 populations/distributions to be "equivalent".
The hard part is figuring out what the equivalence region should be. | Statistical test to tell whether two samples are pulled from the same population? | The tests that compare distributions are rule-out tests. They start with the null hypothesis that the 2 populations are identical, then try to reject that hypothesis. We can never prove the null to | Statistical test to tell whether two samples are pulled from the same population?
The tests that compare distributions are rule-out tests. They start with the null hypothesis that the 2 populations are identical, then try to reject that hypothesis. We can never prove the null to be true, just reject it, so these tests cannot really be used to show that 2 samples come from the same population (or identical populations).
This is because there could be minor differences in the distributions (meaning they are not identical), but so small that tests cannot really find the difference.
Consider 2 distributions, the first is uniform from 0 to 1, the second is a mixture of 2 uniforms, so it is 1 between 0 and 0.999, and also 1 between 9.999 and 10 (0 elsewhere). So clearly these distributions are different (whether the difference is meaningful is another question), but if you take a sample size of 50 from each (total 100) there is over a 90% chance that you will only see values between 0 and 0.999 and be unable to see any real difference.
There are ways to do what is called equivalence testing where you ask if the 2 distributions/populations are equivalent, but you need to define what you consider to be equivalent. It is usually that some measure of difference is within a given range, i.e. the difference in the 2 means is less than 5% of the average of the 2 means, or the KS statistic is below a given cut-off, etc. You can then calculate a confidence interval for the difference statistic (for a difference of means this could just be the t confidence interval; bootstrapping, simulation, or other methods may be needed for other statistics). If the entire confidence interval falls in the "equivalence region", then we consider the 2 populations/distributions to be "equivalent".
The hard part is figuring out what the equivalence region should be. | Statistical test to tell whether two samples are pulled from the same population?
The tests that compare distributions are rule-out tests. They start with the null hypothesis that the 2 populations are identical, then try to reject that hypothesis. We can never prove the null to |
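As a rough illustration of the confidence-interval approach to equivalence described above, here is a minimal R sketch; the data, the equivalence margin delta (5% of the average mean), and the use of a 90% t-interval (the usual two-one-sided-tests convention) are all assumptions for the example.
set.seed(123)
x <- rnorm(100, mean = 10, sd = 2)
y <- rnorm(100, mean = 10.1, sd = 2)

delta <- 0.05 * mean(c(mean(x), mean(y)))        # margin: 5% of the average of the 2 means
ci <- t.test(x, y, conf.level = 0.90)$conf.int   # 90% CI <=> TOST at the 5% level
ci
ci[1] > -delta && ci[2] < delta                  # TRUE => declare "equivalent" within the margin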
5,348 | Statistical test to tell whether two samples are pulled from the same population? | http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
Assuming your sample values come from continuous distributions, I would suggest the Kolmogorov-Smirnov test. It can be used to test whether two samples come from different distributions (this is how I am interpreting your usage of population) based on their associated empirical distributions.
Directly from Wikipedia:
The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case)
The ks.test function in R can be used for this test.
While it is true the KS test does not test for homogeneity, I would argue that if you fail to reject with a large enough sample size (a high-powered test), you can claim the differences are not practically significant. You could infer that if differences do exist, they are likely not meaningful (again, assuming a large sample size). You cannot conclude they are from the same population, as others have correctly stated. All this being said, typically I would just graphically examine the two samples for similarity. | Statistical test to tell whether two samples are pulled from the same population? | http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
Assuming your sample values come from continuous distributions, I would suggest the Kolmogorov-Smirnov test. It can be used to test whethe | Statistical test to tell whether two samples are pulled from the same population?
http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
Assuming your sample values come from continuous distributions, I would suggest the Kolmogorov-Smirnov test. It can be used to test whether two samples come from different distributions (this is how I am interpreting your usage of population) based on their associated empirical distributions.
Directly from Wikipedia:
The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case)
The ks.test function in R can be used for this test.
While it is true the KS test does not test for homogeneity, I would argue that if you fail to reject with a large enough sample size (a high-powered test), you can claim the differences are not practically significant. You could infer that if differences do exist, they are likely not meaningful (again, assuming a large sample size). You cannot conclude they are from the same population, as others have correctly stated. All this being said, typically I would just graphically examine the two samples for similarity. | Statistical test to tell whether two samples are pulled from the same population?
http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
Assuming your sample values come from continuous distributions, I would suggest the Kolmogorov-Smirnov test. It can be used to test whethe |
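A minimal usage sketch of the two-sample ks.test mentioned above; the data are made up.
set.seed(7)
x <- rnorm(100)
y <- rnorm(100, mean = 0.2)

ks.test(x, y)   # two-sample Kolmogorov-Smirnov test; a small p-value rejects "same distribution"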
5,349 | Statistical test to tell whether two samples are pulled from the same population? | You can use a 'shift function' which checks whether the 2 distributions differ at at each decile. While its technically a test of whether they are from different populations rather than the same, if the distributions don't differ on any of the deciles then you can be reasonably sure they are from the same population, especially if the group sizes are large.
I would also visualize the 2 groups: overlay their distributions and see if they resemble each other, or better yet draw a couple of thousand bootstrap samples from each group and plot those, as this would give you an idea of whether they come from the same population, particularly if the population in question isn't normally distributed for your given variable. | Statistical test to tell whether two samples are pulled from the same population? | You can use a 'shift function' which checks whether the 2 distributions differ at at each decile. While its technically a test of whether they are from different populations rather than the same, if t | Statistical test to tell whether two samples are pulled from the same population?
You can use a 'shift function' which checks whether the 2 distributions differ at each decile. While it's technically a test of whether they are from different populations rather than the same, if the distributions don't differ on any of the deciles then you can be reasonably sure they are from the same population, especially if the group sizes are large.
I would also visualize the 2 groups: overlay their distributions and see if they resemble each other, or better yet draw a couple of thousand bootstrap samples from each group and plot those, as this would give you an idea of whether they come from the same population particularly if the population in question isnt normally distributed for you given variable. | Statistical test to tell whether two samples are pulled from the same population?
You can use a 'shift function' which checks whether the 2 distributions differ at at each decile. While its technically a test of whether they are from different populations rather than the same, if t |
5,350 | Statistical test to tell whether two samples are pulled from the same population? | I recently had to do something similar (although in my case I needed to know if two distributions were significantly different).
Our samples were very large (several hundred to tens of thousands) so KS test said everything was different (even for distributions that appeared nearly identical upon inspection of their histograms).
As AdamO mentions, this is a potential shortcoming of the KS test.
Essentially, the KS test examines the maximum difference between the cumulative distributions of each sample...
So if we have 2 distributions, A and B, one valid question we could ask is whether the difference between the cdf of A and the cdf of B is large compared with the difference between the cdf of a random sample taken from A and the cdf of the rest of A, or between the cdf of a random sample taken from B and the cdf of the rest of B.
So the solution I came up with was to take bootstraps of $A$ and $B$ ($A^*$ and $B^*$).
I then constructed an approximation of the null distribution by taking the KS statistic (D statistic, which is the maximum difference in cdf's) of $A^*$ vs $A$ and $B^*$ vs $B$, denoted $KS(A^*,A)$ and $KS(B^*,B)$ respectively. This gives an idea of the range of D values that would be expected if you compared additional samples from either distribution to itself.
Next I got the width of $KS(A^*,A)$ and $KS(B^*,B)$ by subtracting the value of the 1st percentile from that of the 99th percentile and dividing by two, to yield $sA^*$ and $sB^*$ respectively. E.g. $sA^* = (quantile(KS(A^*,A),.99)-quantile(KS(A^*,A),.01))/2 $
By adding $sA^* + sB^*$ to the maximum of $<KS(A^*,A)>$ (the expected value of the distribution of $KS(A^*,A)$) and $<KS(B^*,B)>$, I constructed a cutoff threshold for determining whether $KS(A,B)$ was significant, i.e. $Dcut=sA^*+sB^*+max(<KS(A^*,A)>,<KS(B^*,B)>)$
I then computed a confidence interval for the alternative by computing $KS(A^*,B)$ and $KS(B^*,A)$ and used the inner 99% of the resulting D values as a confidence interval for the alternative $KS(A,B)$. I.e. $$CI(KS(A,B)) \sim [quantile(KS(A^*,B) \cup KS(B^*,A),.01),quantile(KS(A^*,B) \cup KS(B^*,A),.99)]$$
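Putting the steps above together, a rough R sketch of this procedure might look as follows (the simulated samples, the number of bootstrap replicates, and the use of ks.test to extract the D statistic are all illustrative choices, not the original author's exact code):

```r
set.seed(1)
A <- rnorm(2000)                 # simulated stand-ins for the two large samples
B <- rnorm(2000, mean = 0.05)

D  <- function(x, y) unname(suppressWarnings(ks.test(x, y))$statistic)
nb <- 200                        # bootstrap replicates

KS_AA <- replicate(nb, D(sample(A, replace = TRUE), A))  # KS(A*, A)
KS_BB <- replicate(nb, D(sample(B, replace = TRUE), B))  # KS(B*, B)
KS_AB <- replicate(nb, D(sample(A, replace = TRUE), B))  # KS(A*, B)
KS_BA <- replicate(nb, D(sample(B, replace = TRUE), A))  # KS(B*, A)

sA   <- diff(quantile(KS_AA, c(0.01, 0.99))) / 2
sB   <- diff(quantile(KS_BB, c(0.01, 0.99))) / 2
Dcut <- sA + sB + max(mean(KS_AA), mean(KS_BB))          # threshold from the null

ci_alt <- quantile(c(KS_AB, KS_BA), c(0.01, 0.99))       # CI for the alternative
ci_alt[1] > Dcut  # TRUE would be declared a significant difference
```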
Finally, if the confidence interval for the alternative lay above the threshold from the null, $A$ and $B$ were determined to be significantly different (with 99% confidence). | Statistical test to tell whether two samples are pulled from the same population? | I recently had to do something similar (although in my case I needed to know if two distributions were significantly different).
Our samples were very large (several hundred to tens of thousands) so K | Statistical test to tell whether two samples are pulled from the same population?
I recently had to do something similar (although in my case I needed to know if two distributions were significantly different).
Our samples were very large (several hundred to tens of thousands) so KS test said everything was different (even for distributions that appeared nearly identical upon inspection of their histograms).
As AdamO mentions, this is a potential shortcoming of the KS test.
Essentially, the KS test examines the maximum difference between the cumulative distributions of each sample...
So if we have 2 distributions, A and B, one valid question we could ask is whether the difference between the cdf of A and the cdf of B is large compared with the difference between the cdf of a random sample taken from A and the cdf of the rest of A, or between the cdf of a random sample taken from B and the cdf of the rest of B.
So the solution I came up with was to take bootstraps of $A$ and $B$ ($A^*$ and $B^*$).
I then constructed an approximation of the null distribution by taking the KS statistic (D statistic, which is the maximum difference in cdf's) of $A^*$ vs $A$ and $B^*$ vs $B$, denoted $KS(A^*,A)$ and $KS(B^*,B)$ respectively. This gives an idea of the range of D values that would be expected if you compared additional samples from either distribution to itself.
Next I got the width of $KS(A^*,A)$ and $KS(B^*,B)$ by subtracting the value of the 1st percentile from that of the 99th percentile and dividing by two, to yield $sA^*$ and $sB^*$ respectively. E.g. $sA^* = (quantile(KS(A^*,A),.99)-quantile(KS(A^*,A),.01))/2 $
By adding $sA^* + sB^*$ to the maximum of $<KS(A^*,A)>$ (the expected value of the distribution of $KS(A^*,A)$) and $<KS(B^*,B)>$, I constructed a cutoff threshold for determining whether $KS(A,B)$ was significant, i.e. $Dcut=sA^*+sB^*+max(<KS(A^*,A)>,<KS(B^*,B)>)$
I then computed a confidence interval for the alternative by computing $KS(A^*,B)$ and $KS(B^*,A)$ and used the inner 99% of the resulting D values as a confidence interval for the alternative $KS(A,B)$. I.e. $$CI(KS(A,B)) \sim [quantile(KS(A^*,B) \cup KS(B^*,A),.01),quantile(KS(A^*,B) \cup KS(B^*,A),.99)]$$
Finally, if the confidence interval for the alternative lied above the threshold from the null, $A$ and $B$ were determined to be significantly different (with 99% confidence). | Statistical test to tell whether two samples are pulled from the same population?
I recently had to do something similar (although in my case I needed to know if two distributions were significantly different).
Our samples were very large (several hundred to tens of thousands) so K |
5,351 | Statistical test to tell whether two samples are pulled from the same population? | This question came up as the top related result when I was searching for a similar topic, namely, how to test whether two distributions had the same mean. My question is slightly different because two distributions could be different in ways other than mean (e.g. variance or skew or kurtosis or other "shape measures") but still have nearly equivalent means.
@GregSnow's comment got me on the right track - the term for this is an Equivalence Test.
Another key concept is that you actually cannot show that two sample distributions are the same - only that they are close within a bound that you have to pick a priori.
In my case, the Two One-Sided T-Test (TOST) is a recommended procedure and what I used. Once you grok it, it makes sense. It relies on the standard t-test methodology, but uses a trick to make the null hypothesis that the distributions are different (rather than the usual null hypothesis that the distributions are the same).
For my situation, KS was inappropriate because KS DOES test for shape measures, not just means, and I was only interested in means.
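For what it's worth, the bare TOST logic can be run with two calls to t.test in R; the simulated samples and the equivalence bound delta below are arbitrary illustrations (the bound must be chosen a priori):

```r
set.seed(7)
x <- rnorm(100, mean = 10.0, sd = 2)  # simulated group 1
y <- rnorm(100, mean = 10.1, sd = 2)  # simulated group 2
delta <- 0.5                          # largest mean difference still counted as equivalent

# Two one-sided tests on the mean difference (x - y)
p_lower <- t.test(x, y, mu = -delta, alternative = "greater")$p.value
p_upper <- t.test(x, y, mu =  delta, alternative = "less")$p.value

# Equivalence is claimed only if BOTH one-sided tests reject
max(p_lower, p_upper)  # overall TOST p-value
```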
Here is the link I used to understand how to implement TOST.
https://www.real-statistics.com/students-t-distribution/equivalence-testing-tost/ | Statistical test to tell whether two samples are pulled from the same population? | This question came up as the top related result when I was searching for a similar topic, namely, how to compare whether the means of two distributions had the same mean. My question is slightly diffe | Statistical test to tell whether two samples are pulled from the same population?
This question came up as the top related result when I was searching for a similar topic, namely, how to test whether two distributions had the same mean. My question is slightly different because two distributions could be different in ways other than mean (e.g. variance or skew or kurtosis or other "shape measures") but still have nearly equivalent means.
@GregSnow's comment got me on the right track - the term for this is an Equivalence Test.
Another key concept is that you actually cannot compare if two sample distributions are probably the same or not - only if they are close within a bound that you have to pick a priori.
In my case, the Two One-Sided T-Test (TOST) is a recommended procedure and what I used. Once you grok it, it makes sense. It relies on the standard t-test methodology, but uses a trick to make the null hypothesis that the distributions are different (rather than the usual null hypothesis that the distributions are the same).
For my situation, KS was inappropriate because KS DOES test for shape measures, not just means, and I was only interested in means.
Here is the link I used to understand how to implement TOST.
https://www.real-statistics.com/students-t-distribution/equivalence-testing-tost/ | Statistical test to tell whether two samples are pulled from the same population?
This question came up as the top related result when I was searching for a similar topic, namely, how to compare whether the means of two distributions had the same mean. My question is slightly diffe |
5,352 | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | Nice question (+1)!!
You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is $\sum_{i=1}^n \sigma^2 = n\sigma^2$, and the variance of $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is $n\sigma^2 / n^2 = \sigma^2/n$.
This is for the variance. To standardize a random variable, you divide it by its standard deviation. As you know, the expected value of $\bar{X}$ is $\mu$, so the variable
$$ \frac{\bar{X} - E\left( \bar{X} \right)}{\sqrt{ Var(\bar{X}) }} = \sqrt{n} \frac{\bar{X} - \mu}{\sigma}$$ has expected value 0 and variance 1. So if it tends to a Gaussian, it has to be the standard Gaussian $\mathcal{N}(0,\;1)$. Your formulation in the first equation is equivalent. By multiplying the left hand side by $\sigma$ you set the variance to $\sigma^2$.
Regarding your second point, I believe that the equation shown above illustrates that you have to divide by $\sigma$ and not $\sqrt{\sigma}$ to standardize the equation, explaining why you use $s_n$ (the estimator of $\sigma)$ and not $\sqrt{s_n}$.
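A quick simulation makes the same point (the exponential parent distribution, the sample sizes and the number of replicates are arbitrary choices): the standardised quantity $\sqrt{n}(\bar{X}-\mu)/\sigma$ has variance close to 1 whatever $n$ is.

```r
set.seed(1)
mu <- 1; sigma <- 1  # mean and sd of the Exp(1) distribution
for (n in c(10, 100, 1000)) {
  z <- replicate(10000, sqrt(n) * (mean(rexp(n)) - mu) / sigma)
  cat("n =", n, " var(z) =", round(var(z), 3), "\n")
}
```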
Addition: @whuber suggests discussing the why of the scaling by $\sqrt{n}$. He does it there, but because the answer is very long I will try to capture the essence of his argument (which is a reconstruction of de Moivre's thoughts).
If you add a large number $n$ of +1's and -1's, you can approximate the probability that the sum will be $j$ by elementary counting. The log of this probability is proportional to $-j^2/n$. So if we want the probability above to converge to a constant as $n$ goes large, we have to use a normalizing factor in $O(\sqrt{n})$.
Using modern (post de Moivre) mathematical tools, you can see the approximation mentioned above by noticing that the sought probability is
$$P(j) = \frac{{n \choose n/2+j}}{2^n} = \frac{n!}{2^n(n/2+j)!(n/2-j)!}$$
which we approximate by Stirling's formula
$$ P(j) \approx \frac{n^n e^{n/2+j} e^{n/2-j}}{2^n e^n (n/2+j)^{n/2+j} (n/2-j)^{n/2-j} } = \left(\frac{1}{1+2j/n}\right)^{n/2+j} \left(\frac{1}{1-2j/n}\right)^{n/2-j}. $$
$$ \log(P(j)) = -(n/2+j) \log(1+2j/n) - (n/2-j) \log(1-2j/n) \\
\sim -2j(n/2+j)/n + 2j(n/2-j)/n = -4j^2/n \propto -j^2/n.$$ | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | Nice question (+1)!!
You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
Nice question (+1)!!
You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is $\sum_{i=1}^n \sigma^2 = n\sigma^2$, and the variance of $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is $n\sigma^2 / n^2 = \sigma^2/n$.
This is for the variance. To standardize a random variable, you divide it by its standard deviation. As you know, the expected value of $\bar{X}$ is $\mu$, so the variable
$$ \frac{\bar{X} - E\left( \bar{X} \right)}{\sqrt{ Var(\bar{X}) }} = \sqrt{n} \frac{\bar{X} - \mu}{\sigma}$$ has expected value 0 and variance 1. So if it tends to a Gaussian, it has to be the standard Gaussian $\mathcal{N}(0,\;1)$. Your formulation in the first equation is equivalent. By multiplying the left hand side by $\sigma$ you set the variance to $\sigma^2$.
Regarding your second point, I believe that the equation shown above illustrates that you have to divide by $\sigma$ and not $\sqrt{\sigma}$ to standardize the equation, explaining why you use $s_n$ (the estimator of $\sigma)$ and not $\sqrt{s_n}$.
Addition: @whuber suggests discussing the why of the scaling by $\sqrt{n}$. He does it there, but because the answer is very long I will try to capture the essence of his argument (which is a reconstruction of de Moivre's thoughts).
If you add a large number $n$ of +1's and -1's, you can approximate the probability that the sum will be $j$ by elementary counting. The log of this probability is proportional to $-j^2/n$. So if we want the probability above to converge to a constant as $n$ goes large, we have to use a normalizing factor in $O(\sqrt{n})$.
Using modern (post de Moivre) mathematical tools, you can see the approximation mentioned above by noticing that the sought probability is
$$P(j) = \frac{{n \choose n/2+j}}{2^n} = \frac{n!}{2^n(n/2+j)!(n/2-j)!}$$
which we approximate by Stirling's formula
$$ P(j) \approx \frac{n^n e^{n/2+j} e^{n/2-j}}{2^n e^n (n/2+j)^{n/2+j} (n/2-j)^{n/2-j} } = \left(\frac{1}{1+2j/n}\right)^{n/2+j} \left(\frac{1}{1-2j/n}\right)^{n/2-j}. $$
$$ \log(P(j)) = -(n/2+j) \log(1+2j/n) - (n/2-j) \log(1-2j/n) \\
\sim -2j(n/2+j)/n + 2j(n/2-j)/n = -4j^2/n \propto -j^2/n.$$ | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
Nice question (+1)!!
You will remember that for independent random variables $X$ and $Y$, $Var(X+Y) = Var(X) + Var(Y)$ and $Var(a\cdot X) = a^2 \cdot Var(X)$. So the variance of $\sum_{i=1}^n X_i$ is |
5,353 | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | There is a nice theory of what kind of distributions can be limiting distributions of sums of random variables. The nice resource is the following book by Petrov, which I personally enjoyed immensely.
It turns out, that if you are investigating limits of this type
$$\frac{1}{a_n}\sum_{i=1}^n(X_i-b_i), \quad (1)$$ where $X_i$ are independent random variables, the distributions of limits are only certain distributions.
There is a lot of mathematics involved, which boils down to several theorems that completely characterize what happens in the limit. One such theorem is due to Feller:
Theorem Let $\{X_n;n=1,2,...\}$ be a sequence of independent random variables, $V_n(x)$ be the distribution function of $X_n$, and $a_n$ be a sequence of positive constants. In order that
$$\max_{1\le k\le n}P(|X_k|\ge\varepsilon a_n)\to 0, \text{ for every fixed } \varepsilon>0$$
and
$$\sup_x\left|P\left(a_n^{-1}\sum_{k=1}^nX_k<x\right)-\Phi(x)\right|\to 0$$
it is necessary and sufficient that
$$\sum_{k=1}^n\int_{|x|\ge \varepsilon a_n}dV_k(x)\to 0 \text{ for every fixed }\varepsilon>0,$$
$$a_n^{-2}\sum_{k=1}^n\left(\int_{|x|<a_n}x^2dV_k(x)-\left(\int_{|x|<a_n}xdV_k(x)\right)^2\right)\to 1$$
and
$$a_n^{-1}\sum_{k=1}^n\int_{|x|<a_n}xdV_k(x)\to 0.$$
This theorem then gives you an idea of what $a_n$ should look like.
The general theory in the book is constructed in such a way that the norming constant is not restricted in any way, but the final theorems, which give necessary and sufficient conditions, do not leave any room for a norming constant other than $\sqrt{n}$. | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | There is a nice theory of what kind of distributions can be limiting distributions of sums of random variables. The nice resource is the following book by Petrov, which I personally enjoyed immensely. | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
There is a nice theory of what kind of distributions can be limiting distributions of sums of random variables. The nice resource is the following book by Petrov, which I personally enjoyed immensely.
It turns out, that if you are investigating limits of this type
$$\frac{1}{a_n}\sum_{i=1}^n(X_i-b_i), \quad (1)$$ where $X_i$ are independent random variables, the distributions of limits are only certain distributions.
There is a lot of mathematics involved, which boils down to several theorems that completely characterize what happens in the limit. One such theorem is due to Feller:
Theorem Let $\{X_n;n=1,2,...\}$ be a sequence of independent random variables, $V_n(x)$ be the distribution function of $X_n$, and $a_n$ be a sequence of positive constants. In order that
$$\max_{1\le k\le n}P(|X_k|\ge\varepsilon a_n)\to 0, \text{ for every fixed } \varepsilon>0$$
and
$$\sup_x\left|P\left(a_n^{-1}\sum_{k=1}^nX_k<x\right)-\Phi(x)\right|\to 0$$
it is necessary and sufficient that
$$\sum_{k=1}^n\int_{|x|\ge \varepsilon a_n}dV_k(x)\to 0 \text{ for every fixed }\varepsilon>0,$$
$$a_n^{-2}\sum_{k=1}^n\left(\int_{|x|<a_n}x^2dV_k(x)-\left(\int_{|x|<a_n}xdV_k(x)\right)^2\right)\to 1$$
and
$$a_n^{-1}\sum_{k=1}^n\int_{|x|<a_n}xdV_k(x)\to 0.$$
This theorem then gives you an idea of what $a_n$ should look like.
The general theory in the book is constructed in such a way that the norming constant is not restricted in any way, but the final theorems, which give necessary and sufficient conditions, do not leave any room for a norming constant other than $\sqrt{n}$. | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
There is a nice theory of what kind of distributions can be limiting distributions of sums of random variables. The nice resource is the following book by Petrov, which I personally enjoyed immensely. |
5,354 | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | $s_n$ represents the sample standard deviation for the sample mean. $s_n$$^2$ is the sample variance for the sample mean and it equals $S_n^2/n$, where $S_n^2$ is the sample estimate of the population variance. Since $s_n =S_n/\sqrt{n}$ that explains how $\sqrt n$ appears in the first formula. Note there would be a $\sigma$ in the denominator if the limit were $N(0,1)$ but the limit is given as $N(0, \sigma^2)$. Since $S_n$ is a consistent estimate of $\sigma$ it is used in the second equation to take $\sigma$ out of the limit. | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | $s_n$ represents the sample standard deviation for the sample mean. $s_n$$^2$ is the sample variance for the sample mean and it equals $S_n^2/n$, where $S_n^2$ is the sample estimate of the population | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
$s_n$ represents the sample standard deviation for the sample mean. $s_n$$^2$ is the sample variance for the sample mean and it equals $S_n^2/n$, where $S_n^2$ is the sample estimate of the population variance. Since $s_n =S_n/\sqrt{n}$ that explains how $\sqrt n$ appears in the first formula. Note there would be a $\sigma$ in the denominator if the limit were $N(0,1)$ but the limit is given as $N(0, \sigma^2)$. Since $S_n$ is a consistent estimate of $\sigma$ it is used in the second equation to take $\sigma$ out of the limit. | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
$s_n$ represents the sample standard deviation for the sample mean. $s_n$$^2$ is the sample variance for the sample mean and it equals $S_n^2/n$, where $S_n^2$ is the sample estimate of the population |
5,355 | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | Intuitively, if $Z_n \to \mathcal N(0, \sigma^2)$ for some $\sigma^2$ we should expect that $\mbox{Var}(Z_n)$ is roughly equal to $\sigma^2$; it seems like a pretty reasonable expectation, though I don't think it is necessary in general. The reason for the $\sqrt n$ in the first expression is that the variance of $\bar X_n - \mu$ goes to $0$ like $\frac 1 n$ and so the $\sqrt n$ is inflating the variance so that the expression just has variance equal to $\sigma^2$. In the second expression, the term $s_n$ is defined to be $\sqrt{\sum_{i = 1} ^ n \mbox{Var}(X_i)}$ while the variance of the numerator grows like $\sum_{i = 1} ^ n \mbox{Var}(X_i)$, so we again have that the variance of the whole expression is a constant ($1$ in this case).
Essentially, we know something "interesting" is happening with the distribution of $\bar X_n := \frac 1 n \sum_i X_i$, but if we don't properly center and scale it we won't be able to see it. I've heard this described sometimes as needing to adjust the microscope. If we don't blow up (e.g.) $\bar X - \mu$ by $\sqrt n$ then we just have $\bar X_n - \mu \to 0$ in distribution by the weak law; an interesting result in it's own right but not as informative as the CLT. If we inflate by any factor $a_n$ which is dominated by $\sqrt n$, we still get $a_n(\bar X_n - \mu) \to 0$ while any factor $a_n$ which dominates $\sqrt n$ gives $a_n(\bar X_n - \mu) \to \infty$. It turns out $\sqrt n$ is just the right magnification to be able to see what is going on in this case (note: all convergence here is in distribution; there is another level of magnification which is interesting for almost sure convergence, which gives rise to the law of iterated logarithm). | Where does $\sqrt{n}$ come from in central limit theorem (CLT)? | Intuitively, if $Z_n \to \mathcal N(0, \sigma^2)$ for some $\sigma^2$ we should expect that $\mbox{Var}(Z_n)$ is roughly equal to $\sigma^2$; it seems like a pretty reasonable expectation, though I do | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
Intuitively, if $Z_n \to \mathcal N(0, \sigma^2)$ for some $\sigma^2$ we should expect that $\mbox{Var}(Z_n)$ is roughly equal to $\sigma^2$; it seems like a pretty reasonable expectation, though I don't think it is necessary in general. The reason for the $\sqrt n$ in the first expression is that the variance of $\bar X_n - \mu$ goes to $0$ like $\frac 1 n$ and so the $\sqrt n$ is inflating the variance so that the expression just has variance equal to $\sigma^2$. In the second expression, the term $s_n$ is defined to be $\sqrt{\sum_{i = 1} ^ n \mbox{Var}(X_i)}$ while the variance of the numerator grows like $\sum_{i = 1} ^ n \mbox{Var}(X_i)$, so we again have that the variance of the whole expression is a constant ($1$ in this case).
Essentially, we know something "interesting" is happening with the distribution of $\bar X_n := \frac 1 n \sum_i X_i$, but if we don't properly center and scale it we won't be able to see it. I've heard this described sometimes as needing to adjust the microscope. If we don't blow up (e.g.) $\bar X - \mu$ by $\sqrt n$ then we just have $\bar X_n - \mu \to 0$ in distribution by the weak law; an interesting result in it's own right but not as informative as the CLT. If we inflate by any factor $a_n$ which is dominated by $\sqrt n$, we still get $a_n(\bar X_n - \mu) \to 0$ while any factor $a_n$ which dominates $\sqrt n$ gives $a_n(\bar X_n - \mu) \to \infty$. It turns out $\sqrt n$ is just the right magnification to be able to see what is going on in this case (note: all convergence here is in distribution; there is another level of magnification which is interesting for almost sure convergence, which gives rise to the law of iterated logarithm). | Where does $\sqrt{n}$ come from in central limit theorem (CLT)?
Intuitively, if $Z_n \to \mathcal N(0, \sigma^2)$ for some $\sigma^2$ we should expect that $\mbox{Var}(Z_n)$ is roughly equal to $\sigma^2$; it seems like a pretty reasonable expectation, though I do |
5,356 | How to interpret and report eta squared / partial eta squared in statistically significant and non-significant analyses? | Effect sizes for group mean differences
In general, I find standardised group mean differences (e.g., Cohen's d) a more meaningful effect size measure within the context of group differences. Measures like eta squared are influenced by whether group sample sizes are equal, whereas Cohen's d is not. I also think that the meaning of d-based measures is more intuitive when what you are trying to quantify is a difference between group means.
The above point is particularly strong for the case where you only have two groups (e.g., the effect of treatment versus control). If you have more than two groups, then the situation is a little more complicated. I can see the argument for variance explained measures in this case. Alternatively, Cohen's $f^2$ is another option.
A third option is that within the context of experimental effects, even when there are more than two groups, the concept of effect is best conceptualised as a binary comparison (i.e., the effect of one condition relative to another). In this case, you can once again return to d-based measures. The d-based measure is not an effect size measure for the factor, but rather of one group relative to a reference group. The key is to define a meaningful reference group.
Finally, it is important to remember the broader aim of including effect size measures. It is to give the reader a sense of the size of the effect of interest. Any standardised measure of effect should assist the reader in this task. If the dependent variable is on an inherently meaningful scale, then don't shy away from interpreting the size of effect in terms of that scale. E.g., scales like reaction time, salary, height, weight, etc. are inherently meaningful. If you find, as I do, eta squared to be a bit unintuitive within the context of experimental effects, then perhaps choose another index.
Eta squared versus partial eta squared
Partial eta squared is the default effect size measure reported in several ANOVA procedures in SPSS. I assume this is why I frequently get questions about it.
If you only have one predictor variable, then partial eta squared is equivalent to eta squared.
This article explains the difference between eta squared and partial eta squared (Levine and Hullett Eta Squared, Partial Eta Squared..).
In summary, if you have more than one predictor, partial eta squared is the proportion of variance explained by a given variable out of the variance remaining after excluding the variance explained by the other predictors.
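As a sketch of how the two quantities come out of an ANOVA table in R (the simulated two-factor design below is purely illustrative):

```r
set.seed(9)
dat <- data.frame(A = gl(3, 40), B = gl(2, 20, 120))        # balanced 3 x 2 design
dat$y <- rnorm(120, mean = 2 * as.numeric(dat$A) + 3 * as.numeric(dat$B))

ss <- summary(aov(y ~ A + B, data = dat))[[1]][["Sum Sq"]]  # SS for A, B, residuals
eta_sq_A         <- ss[1] / sum(ss)                         # SS_A / SS_total
partial_eta_sq_A <- ss[1] / (ss[1] + ss[3])                 # SS_A / (SS_A + SS_error)
c(eta_sq_A, partial_eta_sq_A)
```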
Rules of thumb for eta squared and partial eta squared
If you only have one predictor then, eta squared and partial eta squared are the same and thus the same rules of thumb would apply.
If you have more than one predictor, then I think that the general rules of thumb for eta squared would apply more to partial eta squared than to eta squared. This is because partial eta squared in factorial ANOVA arguably more closely approximates what eta squared would have been for the factor had it been a one-way ANOVA; and it is presumably a one-way ANOVA which gave rise to Cohen's rules of thumb. In general, including other factors in an experimental design should typically reduce eta squared, but not necessarily partial eta squared due to the fact that the second factor, if it has an effect, increases variability in the dependent variable.
Despite what I say about rules of thumb for eta squared and partial eta squared, I reiterate that I'm not a fan of variance explained measures of effect size within the context of interpreting the size and meaning of experimental effects. Equally, rules of thumb are just that, rough, context dependent, and not to be taken too seriously.
Reporting effect size in the context of significant and non-significant results
In some sense an aim of your research is to estimate various quantitative estimates of the effects of your variables of interest in the population.
Effect sizes are one quantification of a point estimate of this effect. The bigger your sample size is, the closer, in general, your sample point estimate will be to the true population effect.
In broad terms, significance testing aims to rule out chance as an explanation of your results. Thus, the p-value tells you the probability of observing an effect size as or more extreme assuming the null hypothesis was true.
Ultimately, you want to rule out no effect and want to say something about the size of the true population effect. Confidence intervals and credibility intervals around effect sizes are two approaches that get at this issue more directly. However, reporting p-values and point estimates of effect size is quite common and much better than reporting only p-values or only effect size measures.
With regards to your specific question, if you have non-significant results, it is your decision as to whether you report effect size measures. I think if you have a table with many results then having an effect size column that is used regardless of significance makes sense. Even in non-significant contexts effect sizes with confidence intervals can be informative in indicating whether the non-significant findings could be due to inadequate sample size. | How to interpret and report eta squared / partial eta squared in statistically significant and non- | Effect sizes for group mean differences
In general, I find standardised group mean differences (e.g., Cohen's d) a more meaningful effect size measure within the context of group differences. Measure | How to interpret and report eta squared / partial eta squared in statistically significant and non-significant analyses?
Effect sizes for group mean differences
In general, I find standardised group mean differences (e.g., Cohen's d) a more meaningful effect size measure within the context of group differences. Measures like eta squared are influenced by whether group sample sizes are equal, whereas Cohen's d is not. I also think that the meaning of d-based measures is more intuitive when what you are trying to quantify is a difference between group means.
The above point is particularly strong for the case where you only have two groups (e.g., the effect of treatment versus control). If you have more than two groups, then the situation is a little more complicated. I can see the argument for variance explained measures in this case. Alternatively, Cohen's $f^2$ is another option.
A third option is that within the context of experimental effects, even when there are more than two groups, the concept of effect is best conceptualised as a binary comparison (i.e., the effect of one condition relative to another). In this case, you can once again return to d-based measures. The d-based measure is not an effect size measure for the factor, but rather of one group relative to a reference group. The key is to define a meaningful reference group.
Finally, it is important to remember the broader aim of including effect size measures. It is to give the reader a sense of the size of the effect of interest. Any standardised measure of effect should assist the reader in this task. If the dependent variable is on an inherently meaningful scale, then don't shy away from interpreting the size of effect in terms of that scale. E.g., scales like reaction time, salary, height, weight, etc. are inherently meaningful. If you find, as I do, eta squared to be a bit unintuitive within the context of experimental effects, then perhaps choose another index.
Eta squared versus partial eta squared
Partial eta squared is the default effect size measure reported in several ANOVA procedures in SPSS. I assume this is why I frequently get questions about it.
If you only have one predictor variable, then partial eta squared is equivalent to eta squared.
This article explains the difference between eta squared and partial eta squared (Levine and Hullett Eta Squared, Partial Eta Squared..).
In summary, if you have more than one predictor, partial eta squared is the variance explained by a given variable of the variance remaining after excluding variance explained by other predictors.
Rules of thumb for eta squared and partial eta squared
If you only have one predictor then, eta squared and partial eta squared are the same and thus the same rules of thumb would apply.
If you have more than one predictor, then I think that the general rules of thumb for eta squared would apply more to partial eta squared than to eta squared. This is because partial eta squared in factorial ANOVA arguably more closely approximates what eta squared would have been for the factor had it been a one-way ANOVA; and it is presumably a one-way ANOVA which gave rise to Cohen's rules of thumb. In general, including other factors in an experimental design should typically reduce eta squared, but not necessarily partial eta squared due to the fact that the second factor, if it has an effect, increases variability in the dependent variable.
Despite what I say about rules of thumb for eta squared and partial eta squared, I reiterate that I'm not a fan of variance explained measures of effect size within the context of interpreting the size and meaning of experimental effects. Equally, rules of thumb are just that, rough, context dependent, and not to be taken too seriously.
Reporting effect size in the context of significant and non-significant results
In some sense an aim of your research is to estimate various quantitative estimates of the effects of your variables of interest in the population.
Effect sizes are one quantification of a point estimate of this effect. The bigger your sample size is, the closer, in general, your sample point estimate will be to the true population effect.
In broad terms, significance testing aims to rule out chance as an explanation of your results. Thus, the p-value tells you the probability of observing an effect size as or more extreme assuming the null hypothesis was true.
Ultimately, you want to rule out no effect and want to say something about the size of the true population effect. Confidence intervals and credibility intervals around effect sizes are two approaches that get at this issue more directly. However, reporting p-values and point estimates of effect size is quite common and much better than reporting only p-values or only effect size measures.
With regards to your specific question, if you have non-significant results, it is your decision as to whether you report effect size measures. I think if you have a table with many results then having an effect size column that is used regardless of significance makes sense. Even in non-significant contexts effect sizes with confidence intervals can be informative in indicating whether the non-significant findings could be due to inadequate sample size. | How to interpret and report eta squared / partial eta squared in statistically significant and non-
Effect sizes for group mean differences
In general, I find standardised group mean differences (e.g., Cohen's d) a more meaningful effect size measure within the context of group differences. Measure |
5,357 | Can simple linear regression be done without using plots and linear algebra? | Yes, you're onto it. You have to keep playing around with the 2333 until you find the right one which minimizes the error. But there's a mathematical way to find the "right" one. Let's call that number $\beta$. $E$, the sum of the squared errors (SSE), is a function of $\beta$, since for each choice of $\beta$ you can calculate the amount each estimate is off, square it, and sum them together.
What $\beta$ minimizes the total sum of the squared errors? This is just a calculus problem. Take the derivative of $E$ with respect to $\beta$ and set it equal to zero. This gives an equation for $\beta$. Check that the second derivative is positive to know that it's a minimum. Thus you get an equation for $\beta$ which minimizes the error.
If you derive it this way, you will get $\beta$ as a summation. If you write out the linear algebra form of the estimate you will see that this is the same thing.
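A small R sketch of this idea, using a no-intercept model price = beta * area; the areas and prices below are made up purely for illustration:

```r
area  <- c(50, 75, 100, 120, 150)                   # square metres (made up)
price <- c(118000, 172000, 241000, 279000, 352000)  # prices (made up)

sse <- function(beta) sum((price - beta * area)^2)  # E as a function of beta

optimize(sse, interval = c(0, 10000))$minimum  # brute-force style numerical search

sum(area * price) / sum(area^2)  # the calculus answer: dE/dbeta = 0 gives beta = sum(xy)/sum(x^2)

coef(lm(price ~ 0 + area))       # same value from R's built-in no-intercept least squares
```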
Edit: Here's a link to some notes with this type of derivation. The math gets a little messy, but at its core it's just a calculus problem. | Can simple linear regression be done without using plots and linear algebra? | Yes your onto it. You have to keep playing around with the 2333 until you find the right one which minimizes the error. But there's a mathematical way to find the "right" one. Let's call that number $
Yes, you're onto it. You have to keep playing around with the 2333 until you find the right one which minimizes the error. But there's a mathematical way to find the "right" one. Let's call that number $\beta$. $E$, the sum of the squared errors (SSE), is a function of $\beta$, since for each choice of $\beta$ you can calculate the amount each estimate is off, square it, and sum them together.
What $\beta$ minimizes the total sum of the squared errors? This is just a calculus problem. Take the derivative of $E$ with respect to $\beta$ and set it equal to zero. This gives an equation for $\beta$. Check that the second derivative is positive to know that it's a minimum. Thus you get an equation for $\beta$ which minimizes the error.
If you derive it this way, you will get $\beta$ as a summation. If you write out the linear algebra form of the estimate you will see that this is the same thing.
Edit: Here's a link to some notes with this type of derivation. The math gets a little messy, but at it's core it's just a calculus problem. | Can simple linear regression be done without using plots and linear algebra?
Yes your onto it. You have to keep playing around with the 2333 until you find the right one which minimizes the error. But there's a mathematical way to find the "right" one. Let's call that number $ |
5,358 | Can simple linear regression be done without using plots and linear algebra? | Your understanding is close, but needs some extension: Simple linear regression is trying to find the formula that once you give X to it, would provide you with the closest estimation of Y based on a linear relation between X and Y.
Your example of house prices, when extended a bit, shows why you end up with scatter plots and the like. First, simply dividing the price by the area doesn't work in other cases, like land prices in my home town, where regulations on construction mean that simply owning a parcel of land upon which you can build a house has a high value. So land prices aren't simply proportional to areas. Each increase of parcel area might give the same increase in parcel value, but if you went all the way down to a (mythical) parcel of 0 area there would still be an associated apparent price that represents the value of just owning a parcel of land that's approved for building.
That's still a linear relation between area and value, but there is an intercept in the relation, representing the value of just owning a parcel. What makes this nevertheless a linear relation is that the change in value per unit change in area, the slope or the regression coefficient, is always the same regardless of the magnitudes of area or value.
So say that you already know somehow both the intercept and the slope that relate parcel areas to value, and you compare the values from that linear relation to the actual values represented by recent sales. You will find that the predicted and actual values seldom if ever coincide. These discrepancies represent the errors in your model, and result in a scatter of values around the predicted relation. You get a scatter plot of points clustered around your predicted straight-line relation between area and value.
In most practical examples you don't already know the intercept and the slope, so you have to try to estimate them from the data. That's what linear regression tries to do.
You may be better off thinking about linear regression and related modeling from the perspective of maximum-likelihood estimation, which is a search for the particular parameter values in your model that make the data the most probable. It's similar to the "brute-force" approach you propose in your question, but with a somewhat different measure of what you are trying to optimize. With modern computing methods and intelligent design of the search pattern, it can be done quite quickly.
Maximum-likelihood estimation can be conceptualized in ways that don't require a graphical plot and is similar to the way you already seem to be thinking. In the case of linear regression, both standard least-squares regression and maximum likelihood provide the same estimates of intercept and slope.
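A short simulated check of that last claim; the data, the starting values and the use of optim are illustrative choices, and optim is just one convenient way to maximize the likelihood:

```r
set.seed(42)
area  <- runif(200, 50, 200)                    # hypothetical parcel areas
value <- 30 + 1.5 * area + rnorm(200, sd = 20)  # intercept + slope + normal noise

coef(lm(value ~ area))  # least-squares intercept and slope

# Maximum likelihood: maximize the normal log-likelihood over (a, b, log sigma)
negloglik <- function(p) -sum(dnorm(value, mean = p[1] + p[2] * area,
                                    sd = exp(p[3]), log = TRUE))
fit <- optim(c(0, 0, log(sd(value))), negloglik, control = list(maxit = 5000))
fit$par[1:2]            # should come out very close to the lm() estimates
```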
Thinking in terms of maximum likelihood has the additional advantage that it extends better to other situations where there aren't strictly linear relations. A good example is logistic regression in which you try to estimate the probability of an event occurring based on predictor variables. That can be accomplished by maximum likelihood, but unlike standard linear regression there is no simple equation that produces the intercept and slopes in logistic regression. | Can simple linear regression be done without using plots and linear algebra? | Your understanding is close, but needs some extension: Simple linear regression is trying to find the formula that once you give X to it, would provide you with the closest estimation of Y based on a | Can simple linear regression be done without using plots and linear algebra?
Your understanding is close, but needs some extension: Simple linear regression is trying to find the formula that once you give X to it, would provide you with the closest estimation of Y based on a linear relation between X and Y.
Your example of house prices, when extended a bit, shows why you end up with scatter plots and the like. First, simply dividing the price by the area doesn't work in other cases, like land prices in my home town, where regulations on construction mean that simply owning a parcel of land upon which you can build a house has a high value. So land prices aren't simply proportional to areas. Each increase of parcel area might give the same increase in parcel value, but if you went all the way down to a (mythical) parcel of 0 area there would still be an associated apparent price that represents the value of just owning a parcel of land that's approved for building.
That's still a linear relation between area and value, but there is an intercept in the relation, representing the value of just owning a parcel. What makes this nevertheless a linear relation is that the change in value per unit change in area, the slope or the regression coefficient, is always the same regardless of the magnitudes of area or value.
So say that you already know somehow both the intercept and the slope that relate parcel areas to value, and you compare the values from that linear relation to the actual values represented by recent sales. You will find that the predicted and actual values seldom if ever coincide. These discrepancies represent the errors in your model, and result in a scatter of values around the predicted relation. You get a scatter plot of points clustered around your predicted straight-line relation between area and value.
In most practical examples you don't already know the intercept and the slope, so you have to try to estimate them from the data. That's what linear regression tries to do.
You may be better off thinking about linear regression and related modeling from the perspective of maximum-likelihood estimation, which is a search for the particular parameter values in your model that make the data the most probable. It's similar to the "brute-force" approach you propose in your question, but with a somewhat different measure of what you are trying to optimize. With modern computing methods and intelligent design of the search pattern, it can be done quite quickly.
Maximum-likelihood estimation can be conceptualized in ways that don't require a graphical plot and is similar to the way you already seem to be thinking. In the case of linear regression, both standard least-squares regression and maximum likelihood provide the same estimates of intercept and slope.
Thinking in terms of maximum likelihood has the additional advantage that it extends better to other situations where there aren't strictly linear relations. A good example is logistic regression in which you try to estimate the probability of an event occurring based on predictor variables. That can be accomplished by maximum likelihood, but unlike standard linear regression there is no simple equation that produces the intercept and slopes in logistic regression. | Can simple linear regression be done without using plots and linear algebra?
Your understanding is close, but needs some extension: Simple linear regression is trying to find the formula that once you give X to it, would provide you with the closest estimation of Y based on a |
5,359 | Can simple linear regression be done without using plots and linear algebra? | First of all, my compliments. It is difficult for everyone to struggle with statistics (I am a physician, so you can guess how hard it is for me)...
I can propose not a visual explanation to linear regression, but something very close: a tactile explanation to linear regression.
Imagine you are entering a room from a door. The room is more or less a square in shape, and the door is in the lower left corner. You wish to get to the next room, whose door you expect is going to be in the upper right corner, more or less. Imagine that you cannot tell exactly where the next door is (ever!), but there are some people scattered in the room, and they can tell you which way to go. They can't see either, but they can tell you what is there close to them. The final path you will take to reach the next door, guided by these people, is analogous to a regression line, which minimizes the distance between these people, and brings you toward the door, close to (if not on) the correct path. | Can simple linear regression be done without using plots and linear algebra? | First of all, my compliments. It is difficult for everyone to struggle with statistics (I am a physician, so you can guess how hard it is for me)...
I can propose not a visual explanation to linear re | Can simple linear regression be done without using plots and linear algebra?
First of all, my compliments. It is difficult for everyone to struggle with statistics (I am a physician, so you can guess how hard it is for me)...
I can propose not a visual explanation to linear regression, but something very close: a tactile explanation to linear regression.
Imagine you are entering a room from a door. The room is more or less a square in shape, and the door is in the lower left corner. You wish to get to the next room, whose door you expect is going to be in the upper right corner, more or less. Imagine that you cannot tell exactly where the next door is (ever!), but there are some people scattered in the room, and they can tell you which were to go. They can't see either, but they can tell you what is there close to them. The final path you will take to reach the next door, guided by this people, is analogous to a regression line, which minimizes the distance between these people, and brings you toward the door, close to (if not on) the correct path. | Can simple linear regression be done without using plots and linear algebra?
First of all, my compliments. It is difficult for everyone to struggle with statistics (I am a physician, so you can guess how hard it is for me)...
I can propose not a visual explanation to linear re |
5,360 | Can simple linear regression be done without using plots and linear algebra? | The reason why plots are universally used to introduce simple regression - a response predicted by a single predictor - is that they aid understanding.
However, I believe I can give something of the flavor that might aid in understanding what's going on. In this I'll mostly focus on trying to convey some of the understanding they give, which may help with some of the other aspects you'll typically encounter in reading about regression. So this answer will mainly deal with a particular aspect of your post.
Imagine you are seated before a large rectangular table such as a plain office desk, one a full arm-span long (perhaps 1.8 meters), by perhaps half that wide.
You are seated before the table in the usual position, in the middle of one long side. On this table a large number of nails (with fairly smooth heads) have been hammered into the top surface such that each pokes up a little way (enough to feel where they are, and enough to tie a string to them or attach a rubber band).
These nails are at varying distances from your edge of the desk, in such a way that toward one end (say the left end) they typically are closer to your edge of the desk and then as you move toward the other end the nail-heads tend to be further away from your edge.
Further imagine that it would be useful to have a sense of how far on average the nails are from your edge at any given position along your edge.
Choose some place along your edge of the desk and place your hand there, then reach forward directly across the table, gently dragging your hand directly back toward you, then away again, moving your hand back and forth over the nail heads. You encounter several dozen bumps from these nails - the ones within that narrow breadth of your hand (as it moves directly away from your edge, at constant distance from the left end of the desk), a section, or strip, roughly ten centimeters wide.
The idea is to figure out some average distance to a nail from your edge of the desk in that small section. Intuitively it's just the middle of the bumps we hit but if we measured each distance-to-a-nail in that hand-breadth-wide section of desk, we could compute those averages easily.
For example, we could make use of a T-square whose head slides along the edge of the desk and whose shaft runs toward the other side of the desk, but just above the desk so we don't hit the nails as it slides left or right - as we pass a given nail we can get its distance along the shaft of the T-square.
So at a progression of places along our edge we repeat this exercise of finding all the nails in a hand-width strip running toward and away from us and finding their average distance away. Perhaps we divide the desk up into hand-width strips along our edge (so every nail is encountered in exactly one strip).
Now imagine there were say 21 such strips, the first at the left edge and the last at the right edge. The means get further away from our desk-edge as we progress across the strips.
These means form a simple nonparametric regression estimator of the expectation of y (our distance-away) given x (distance along our edge from the left end), that is, E(y|x). Specifically, this is a binned nonparametric regression estimator, also called a regressogram
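In R, the strip means (and, for later comparison, the least-squares line discussed next) could be sketched like this; the simulated nail layout and the number of strips are arbitrary:

```r
set.seed(3)
x <- runif(500, 0, 180)                 # cm along your edge of the desk
y <- 20 + 0.4 * x + rnorm(500, sd = 8)  # distance of each nail from your edge

strips <- cut(x, breaks = seq(0, 180, length.out = 22))  # 21 hand-width strips
tapply(y, strips, mean)                 # binned estimate of E(y | x): the regressogram

coef(lm(y ~ x))                         # the straight-line (least squares) estimate
```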
If those strip means increased regularly - that is, the mean was typically increasing by about the same amount-per-strip as we moved across the strips - then we could better estimate our regression function by assuming that the expected value of y was a linear function of x - i.e. that the expected value of y given x was a constant plus a multiple of x. Here the constant represents where the nails tend to be when x is zero (often we might place this at the extreme left edge but it doesn't have to be), and the particular multiple of x represents how fast on average the mean changes as we move by one centimeter (say) to the right.
But how to find such a linear function?
Imagine that we loop one rubber band over each nail-head, and attach each to a long thin stick that lies just above the desk, on top of the nails, so that it lies somewhere near the "middle" of each strip we had before.
We attach the bands in such a way that they only stretch in the direction toward and away from us (not left or right) - left to themselves they would pull so as to make their direction of stretch at a right-angle with the stick, but here we prevent that, so that their direction of stretch remains only in the directions toward or away from our edge of the desk. Now we let the stick settle as the bands pull it toward each nail, with more distant nails (with more stretched rubber bands) pulling correspondingly harder than nails close to the stick.
Then the combined result of all the bands pulling on the stick would be (ideally, at least) to pull the stick to minimize the sum of squared lengths of the stretched rubber bands; in that direction directly across the table the distance from our edge of the table to the stick at any given x position would be our estimate of the expected value of y given x.
This is essentially a linear regression estimate.
Now, imagine that instead of nails, we have many fruits (like small apples perhaps) hanging from a large tree and we wish to find the average distance of fruits above the ground as it varies with position on the ground. Imagine that in this case the heights above the ground get larger as we go forward and slightly larger as we move right, again in a regular fashion, so each step forward typically changes the mean height by about the same amount, and each step to the right will also change the mean by a roughly constant amount (but this stepping-right amount of change in mean is different to the stepping-forward amount of change).
If we minimize the sum of squared vertical distances from the fruits to a thin flat sheet (perhaps a thin sheet of very stiff plastic) in order to figure out how the mean height changes as we move forward or step to the right, that would be a linear regression with two predictors - a multiple regression.
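A sketch of that two-predictor fit in R, with simulated fruit heights (all numbers are illustrative):

```r
set.seed(4)
forward <- runif(400, 0, 30)  # steps forward from a reference point (made up)
right   <- runif(400, 0, 30)  # steps to the right (made up)
height  <- 150 + 4 * forward + 1.5 * right + rnorm(400, sd = 10)  # cm above the ground

coef(lm(height ~ forward + right))  # intercept, mean change per step forward, per step right
```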
These are the only two cases that plots can help understand (they can show rapidly what I just described at length, but hopefully you now have a basis on which to conceptualize the same ideas). Beyond those simplest two cases, we're left with the mathematics only.
Now take your house price example; you can represent every house's area by a distance along your edge of the desk - represent the largest house size as a position near the right edge, every other house size will be some position further to the left where a certain number of centimeters will represent some number of square meters. Now the distance away represents sale price. Represent the most expensive house as some particular distance near the furthest edge of the desk (as always, the edge furthest from your chair), and every centimeter shifted away will represent some number of Rials.
For the present imagine that we chose the representation so that the left edge of the desk corresponds to a house area of zero and the near edge to a house price of 0. We then put in a nail for each house.
We probably won't have any nails near the left end of our edge (they might be mostly toward the right and away from us) because this isn't necessarily a good choice of scale but your choice of a no-intercept model makes this a better way to discuss it.
Now in your model you force the stick to pass through a loop of string at the left corner of the near edge of the desk - thus forcing the fitted model to have price zero for area zero, which might seem natural - but imagine if there are some fairly constant components of price which affected every sale. Then it would make sense to have the intercept different from zero.
In any case, with the addition of that loop, the same rubber-band exercise as before will find our least squares estimate of the line. | Can simple linear regression be done without using plots and linear algebra? | The reason why plots are universally used to introduce simple regression - a response predicted by a single predictor - is that they aid understanding.
5,361 | Can simple linear regression be done without using plots and linear algebra? | Nice example that can help for your question was provided by Andrew Gelman and David K. Park (2012). Let's stick to your example of predicting the price of a house $Y$ given its area $X$. For this we use the simple linear regression model
$$ Y = \beta_0 + \beta_1 X + \varepsilon $$
For the sake of simplicity, let's forget about the intercept $\beta_0$; you can check this thread to learn why it is important. This data can be visualized on a scatterplot. What is a scatterplot? Imagine a two-dimensional space (it could be a room) with the datapoints "scattered" around the place, where the values of the two variables mark their $x$-axis and $y$-axis positions. What you already know is that this somehow translates to the linear regression model.
To make it clear, let's simplify this example even more -- as Gelman and Park did. The simplification they proposed is to divide the $X$ variable, i.e. the area of the house, into three groups: "small", "medium", and "big" houses (they describe how to make such a split optimally, but this is of lesser importance). Next, calculate the average size of a "small" house and the average size of a "big" house, and also the average price of a "small" house and of a "big" one. Now reduce your data to two points -- the centers of the clouds of datapoints for small and big houses -- and drop all the datapoints for "medium" houses. You are left with two points in two-dimensional space. The regression line is the line that connects these points -- you can think of it as a direction from one point to the other. The slope $\beta_1$ of this line tells us about the amount of change in price between small and big houses.
The same happens when we have more points scattered around the space: the regression line finds its way by minimizing its squared distance to every point. So the line goes exactly through the center of the cloud of points scattered in the space. Instead of connecting two points, you can think of it as connecting an unlimited number of such central points.
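A rough numeric version of that idea in R, with simulated areas and prices (the grouping into thirds follows Gelman and Park; the data are made up):
set.seed(1)
x <- runif(300, 50, 300)                       # area
y <- 1e9 + 30e6 * x + rnorm(300, sd = 5e8)     # price
g <- cut(x, quantile(x, c(0, 1/3, 2/3, 1)), labels = c("small", "medium", "big"),
         include.lowest = TRUE)
# slope obtained by connecting the centers of the "small" and "big" clouds
(mean(y[g == "big"]) - mean(y[g == "small"])) / (mean(x[g == "big"]) - mean(x[g == "small"]))
coef(lm(y ~ x))["x"]                           # full least-squares slope, for comparison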
Gelman, A., & Park, D. K. (2012). Splitting a predictor at the upper quarter or third and the lower quarter or third. The American Statistician, 62(4), 1-8.
5,362 | Can simple linear regression be done without using plots and linear algebra? | The short answer is, yes. What line goes best through the middle of all points that comprise the entirety or just the surface of an airplane or javelin? Draw it; in your head or on a picture. You are looking for and at that solitary line from which every point (of interest, whether you plot them or not) that would contribute to total least (among points) deviation from that line. If you do it by eye, implicitly by common sense, you will approximate (remarkably well) a mathematically calculated result. For that there are formulae which bother the eye and may not make common sense. In similar formalized problems in engineering and science, the scatters still invite a preliminary appraisal by eye, but in those arenas one is supposed to come up with a "test" probability that a line is the line. It goes downhill from there. However, you are apparently trying to teach a machine to size up (in effect) the metes and bounds of (a) a sizeable barnyard and (b) scattered livestock inside it. If you give your machine what amounts to a picture (graphical, algebraic) of the real estate and occupants, it should be able to figure out (midline neatly dividing blob in two, calculated descatter into a line) what you want it to do. Any decent statistics textbook (ask teachers or professors to name more than one) should spell out both the whole point of linear regression in the first place, and how to do it in the simplest cases (ranging to cases that are not simple). A number of pretzels later, you'll have it down pat.
In re: Silverfish's comment to my post supra (there seems no simple way other than this to add a comment to that comment): yes, the OP is blind, is learning machine learning, and requested practicality without plots or graphs, but I assume that he is able to distinguish "visualizing" from "vision", visualizes and has veritable pictures in his head, and has a basic idea of all manner of physical objects in the world around him (houses, among others), so he can still "draw" both mathematically as well as otherwise in his head, and can probably put a good semblance of 2D and 3D to paper. A wide array of books and other texts nowadays is available in physical Braille as well as in electronic voice on one's own computer (such as for forums, dictionaries, etc.), and many schools for the blind have fairly complete curricula. Rather than airplane or javelin, sofa or cane would not necessarily be more appropriate, and statistics texts are probably available. He is less concerned with how machines might learn to plot, graph, or calculate regression than with how machines might learn to do something equivalent (and more basic) in order to grasp regression (whether a machine might display it, react to it, follow it, avoid it, or whatever). The essential thrust (for blind as well as sighted students) is still how to visualize what can be non-visual (such as the concept of linearity rather than an instance of a drawn line, since before Euclid and Pythagoras), and how to visualize the basic purpose of a special kind of linearity (regression, whose basic point is best fit with least deviation, since early in mathematics and statistics). A lineprinter's Fortran output of regression is scarcely "visual" till mentally assimilated, but even the basic point of regression is imaginary (a line that isn't there till it is made for a purpose).
5,363 | Can simple linear regression be done without using plots and linear algebra? | Have you encountered the sort of toaster you often get in hotels. You put bread on a conveyor belt at one end and it comes out as toast at the other.
Unfortunately, in the toaster at this cheap hotel, the heaters have all got moved to random heights and distances from the entrance to the toaster. You cannot move the heaters or bend the path of the belt (which is straight, by the way - this is where the linear bit comes in), but you can alter the HEIGHT and TILT of the belt.
Given the positions of all the heaters, linear regression will tell you the correct height and angle to place the belt to get the most heat overall. This is because linear regression will minimise the average squared distance between the toast and the heaters.
My first holiday job was doing linear regressions by hand. The guy who said you don't want to do that is RIGHT!!!
5,364 | Can simple linear regression be done without using plots and linear algebra? | My favorite explanation of linear regression is geometric, but not visual. It treats the data set as a single point in a high-dimensional space, rather than breaking it up into a cloud of points in two-dimensional space.
The area $a$ and price $p$ of a house are a pair of numbers, which you can think of as the coordinates of a point $(a, p)$ in two-dimensional space. The areas $a_1, \ldots, a_{1000}$ and prices $p_1, \ldots, p_{1000}$ of a thousand houses are a thousand pairs of numbers, which you can think of as the coordinates of a point
$$D = (a_1, \ldots, a_{1000}, p_1, \ldots, p_{1000})$$
in two-thousand-dimensional space. For convenience, I'll call two-thousand-dimensional space "data space." Your data set $D$ is a single point in data space.
If the relationship between area and price were perfectly linear, the point $D$ would sit in a very special region of data space, which I'll call the "linear sheet." It consists of the points
$$M(\rho, \beta) = (a_1, \ldots, a_{1000}, \rho a_1 + \beta, \ldots, \rho a_{1000} + \beta).$$
The numbers $\rho$ and $\beta$ are allowed to vary, but $a_1, \ldots, a_{1000}$ are fixed to be the same areas that appear in your data set. I'm calling the linear sheet a "sheet" because it's two-dimensional: a point on it is specified by the two coordinates $\rho$ and $\beta$. If you want to get a sense of how the linear sheet is shaped, imagine a thin, straight wire stretched across three-dimensional space. The linear sheet is like that: it's perfectly flat, and its dimension is very low compared to the dimension of the space it sits inside.
In a real neighborhood, the relationship between area and price won't be perfectly linear, so the point $D$ won't sit exactly on the linear sheet. However, it might sit very close to the linear sheet. The goal of linear regression is to find the point $M(\rho, \beta)$ on the linear sheet which sits the closest to the data point $D$. That point is the best linear model for the data.
Using the Pythagorean theorem, you can figure out that the square of the distance between $D$ and $M(\rho, \beta)$ is
$$[p_1 - (\rho a_1 + \beta)]^2 + \ldots + [p_{1000} - (\rho a_{1000} + \beta)]^2.$$
In other words, the distance between the data point and the model point is the total squared error of the model! Minimizing the total squared error of a model is the same thing as minimizing the distance between the model and the data in data space.
As Chris Rackauckas pointed out, calculus gives a very practical way to find the coordinates $\rho$ and $\beta$ that minimize the distance between $D$ and $M(\rho, \beta)$.
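To see that the distance-in-data-space picture and the usual residual computation agree, here is a short check in R with simulated areas and prices (the numbers are placeholders):
set.seed(1)
a <- runif(1000, 50, 250)                        # areas
p <- 3000 * a + 1e5 + rnorm(1000, sd = 5e4)      # prices
fit <- lm(p ~ a)
sqrt(sum(residuals(fit)^2))                      # square root of the total squared error
sqrt(sum((c(a, p) - c(a, fitted(fit)))^2))       # Euclidean distance from D to the fitted model point - same number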
5,365 | Can simple linear regression be done without using plots and linear algebra? | @Chris Rackauckas and @EDM's answers are spot on. There are many ways to approach simple linear regression that don't require plotting or visual explanations of ordinary least squares estimation, and they give very solid explanations of what actually happens when you're running OLS.
I might add that when learning any kind of new modeling procedure, whether it's an old-school parametric model, advanced machine learning stuff, or a Bayesian algorithm, graphing (for instance with scatterplots) can help cut down on the time it takes to learn what a particular algorithm does.
Graphing is also very important for exploratory data analysis when you are first beginning to work with a new dataset. I have had situations where I collected lots of data, worked out the theory, carefully planned out my model, and then ran it, only to end up with results that essentially had no predictive power. Plotting bivariate relationships can take out some of the guesswork: in your example, it's possible that home price is linearly related to area, but maybe the relationship isn't linear. Scatterplots help you decide if you need higher order terms in your regression, or if you want to use a different method than linear regression, or if you want to use some sort of nonparametric method.
5,366 | Can simple linear regression be done without using plots and linear algebra? | Google for Anscombe Quartet.
It shows 4 sets of data which on inspecting numerically do not show much difference.
However, on creating a visual scatter plot, the differences become dramatically visible.
It gives a pretty clear view why you should always plot your data, regression or no regression :-)
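The quartet ships with base R as the anscombe data set, so the point is easy to verify numerically - the summaries barely differ across the four pairs even though the plots look nothing alike:
data(anscombe)
sapply(1:4, function(i) c(mean_x = mean(anscombe[[paste0("x", i)]]),
                          mean_y = mean(anscombe[[paste0("y", i)]]),
                          cor_xy = cor(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]])))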
5,367 | Can simple linear regression be done without using plots and linear algebra? | We want to have a solution that minimizes the difference between the predicted and actual values.
We assume that $y=bx+a$, i.e. there is a linear relationship.
We don't care whether the difference between predicted and actual $y$ is positive or negative; we assume that the distribution of errors of $y$ possesses certain properties.
If we assume that the distribution of errors is normally distributed, it turns out that there is an analytical solution to this minimization problem, and the sum of squares of differences is the best value to minimize for a best fit. But normality is not required in the general case.
There isn't much more to it really.
The geometrical interpretation comes in handy because the sum of squares can be read as the sum of squared vertical distances of the dots on the scatter plot from the $y=bx+a$ line. And the human eye is very good at approximating the line that corresponds to the best fit. So it was handy before we had computers to find the fit quickly.
Nowadays it is kept more as a comprehension aid, but it is not necessary for understanding linear regression.
EDIT: replaced the normality-of-errors assumption with a correct but less concise list. Normality was required to have an analytical solution; it can be assumed in many practical cases, and in that case the sum of squares is optimal not only among linear estimators but also maximizes the likelihood.
If, further, the assumption of normality of the error distribution holds, then the sum of squares is optimal among both linear and non-linear estimators and maximizes the likelihood.
5,368 | Can simple linear regression be done without using plots and linear algebra? | I'm coming late to this conversation but I just want to add something that I think might aid understanding.
The computation for finding the OLS estimator relies on math from linear algebra involving matrices. It's my understanding there are a few different ways to do this; one is the QR decomposition, A = QR, which factors the design matrix into an orthonormal matrix and an upper triangular matrix.
However, in the case of simple linear regression for an ordinary least squares estimate, meaning regression with only one independent and one dependent variable, you can use a shortcut equation in order to figure out the slope and the intercept of the line instead of leaning on the more complicated linear algebra math.
In the case of simple linear regression, the slope and intercept follow these neat closed-form equations: the slope can be calculated by multiplying the correlation r by the quotient of the standard deviation of y over the standard deviation of x. In the equation below, a refers to the slope, and sy and sx refer to the standard deviation of y and the standard deviation of x, respectively.
a = r * ( sy / sx )
The intercept of the line of best fit for ordinary least squares simple linear regression can be calculated easily after you calculate the slope. You do this by multiplying the slope by the mean of x and subtracting the result from the mean of y. In the equation below, i refers to the y-intercept, and the straight line over the x and y values is a way of referring to the mean of x and y respectively; we refer to these terms as x-bar and y-bar.
i = y-bar - r * (sy / sx) * x-bar
Our R code looks like this:
r_slope <- ( sd(mtcars$mpg) / sd(mtcars$wt) ) * cor(mtcars$mpg, mtcars$wt)
r_intercept <- mean(mtcars$mpg) - r_slope * mean(mtcars$wt)
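As a quick sanity check, the same numbers should come out of R's built-in lm():
coef(lm(mpg ~ wt, data = mtcars))              # (Intercept), wt
c(intercept = r_intercept, slope = r_slope)    # should match up to floating-point error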
I know this only covers the case of simple linear regression with one x and one y but I hope it's helpful in visualizing this case. In simple linear regression, you can definitely find the slope and intercept of the line without knowing linear algebra.
5,369 | What is the relationship between the mean squared error and the residual sum of squares function? | Actually it's mentioned in the Regression section of Mean squared error in Wikipedia:
In regression analysis, the term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom.
You can also find some information here: Errors and residuals in statistics
It says the expression mean squared error may have different meanings in different cases, which is tricky sometimes.
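A short R illustration of the two conventions on simulated data (the data are made up; only the relationship between the quantities matters):
set.seed(42)
x <- rnorm(50); y <- 2 + 3 * x + rnorm(50)
fit <- lm(y ~ x)
rss <- sum(residuals(fit)^2)
rss / length(y)         # "MSE" as the plain average of squared residuals
rss / df.residual(fit)  # unbiased estimate of the error variance: RSS / (n - 2)
summary(fit)$sigma^2    # R's residual standard error squared equals the second version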
5,370 | What is the relationship between the mean squared error and the residual sum of squares function? | But be aware that Sum of Squared Errors (SSE) and Residual Sum of Squares (RSS) are sometimes used interchangeably, thus confusing readers. For instance, check this URL out.
Strictly speaking, from a statistical point of view, errors and residuals are completely different concepts. Errors mainly refer to the difference between the actual observed sample values and your predicted values, and are used mostly in statistical metrics like Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). In contrast, residuals refer exclusively to the differences between the dependent variable and the estimates from the linear regression.
5,371 | What is the relationship between the mean squared error and the residual sum of squares function? | I don't think this is correct here if we consider MSE to be the square of RMSE. For instance, you have a series of sampled data on predictions and observations, and now you try to do a linear regression: Observation (O) = a + b × Prediction (P). In this case, the MSE is the sum of squared differences between O and P divided by the sample size N.
But if you want to measure how the linear regression performs, you need to calculate the Mean Squared Residual (MSR). In the same case, you would first calculate the Residual Sum of Squares (RSS), which corresponds to the sum of squared differences between the actual observation values and the predicted observations derived from the linear regression. Then RSS is divided by N-2 to get the MSR.
Simply put, in the example, MSE cannot be estimated using RSS/N, since the RSS component is no longer the same as the component used to calculate the MSE.
5,372 | Is regression with L1 regularization the same as Lasso, and with L2 regularization the same as ridge regression? And how to write "Lasso"? | Yes.
Yes.
LASSO is actually an acronym (least absolute shrinkage and selection operator), so it ought to be capitalized, but modern writing is the lexical equivalent of Mad Max. On the other hand, Amoeba writes that even the statisticians who coined the term LASSO now use the lower-case rendering (Hastie, Tibshirani and Wainwright, Statistical Learning with Sparsity). One can only speculate as to the motivation for the switch. If you're writing for an academic press, they typically have a style guide for this sort of thing. If you're writing on this forum, either is fine, and I doubt anyone really cares.
The $L$ notation is a reference to Minkowski norms and $L^p$ spaces. These just generalize the notion of taxicab and Euclidean distances to $p>0$ in the following expression:
$$
\|x\|_p=(|x_1|^p+|x_2|^p+...+|x_n|^p)^{\frac{1}{p}}
$$
Importantly, only $p\ge 1$ defines a metric distance; $0<p<1$ does not satisfy the triangle inequality, so it is not a distance by most definitions.
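A tiny sketch of that point in R (the vectors are arbitrary):
lp_norm <- function(x, p) sum(abs(x)^p)^(1 / p)           # Minkowski "norm" for any p > 0
u <- c(1, 0); v <- c(0, 1)
lp_norm(u + v, 2)   <= lp_norm(u, 2)   + lp_norm(v, 2)    # TRUE: triangle inequality holds for p >= 1
lp_norm(u + v, 0.5) <= lp_norm(u, 0.5) + lp_norm(v, 0.5)  # FALSE: it fails for p < 1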
I'm not sure when the connection between ridge and LASSO was realized.
As for why there are multiple names, it's just a matter that these methods developed in different places at different times. A common theme in statistics is that concepts often have multiple names, one for each sub-field in which it was independently discovered (kernel functions vs covariance functions, Gaussian process regression vs Kriging, AUC vs $c$-statistic). Ridge regression should probably be called Tikhonov regularization, since I believe he has the earliest claim to the method. Meanwhile, LASSO was only introduced in 1996, much later than Tikhonov's "ridge" method!
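If you want to see the two penalties side by side in code, one common implementation is the glmnet package, which indexes the penalty by alpha (alpha = 1 gives the L1/LASSO penalty, alpha = 0 the L2/ridge penalty); the data below are simulated:
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
fit_lasso <- glmnet(x, y, alpha = 1)   # L1 penalty
fit_ridge <- glmnet(x, y, alpha = 0)   # L2 penalty
coef(fit_lasso, s = 0.1)               # many coefficients exactly zero
coef(fit_ridge, s = 0.1)               # coefficients shrunk but not zeroed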
5,373 | Logistic regression model does not converge | glm() uses an iterative re-weighted least squares algorithm. The algorithm hit the maximum number of allowed iterations before signalling convergence. The default, documented in ?glm.control is 25. You pass control parameters as a list in the glm call:
delay.model <- glm(BigDelay ~ ArrDelay, data=flights, family=binomial,
control = list(maxit = 50))
As @Conjugate Prior says, you seem to be predicting the response with the data used to generate it. You have complete separation as any ArrDelay < 10 will predict FALSE and any ArrDelay >= 10 will predict TRUE. The other warning message tells you that the fitted probabilities for some observations were effectively 0 or 1 and that is a good indicator you have something wrong with the model.
The two warnings can go hand in hand. The likelihood function can be quite flat when some $\hat{\beta}_i$ get large, as in your example. If you allow more iterations, the model coefficients will diverge further if you have a separation issue.
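Both warnings are easy to reproduce with made-up data that are completely separated, which is essentially what happens when the response is built from the predictor:
set.seed(1)
ArrDelay <- rnorm(200, mean = 10, sd = 20)
BigDelay <- ArrDelay >= 10                       # response is a deterministic function of the predictor
m <- glm(BigDelay ~ ArrDelay, family = binomial) # warns: fitted probabilities 0 or 1, possible non-convergence
coef(m)                                          # the slope keeps growing if you raise maxit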
5,374 | Logistic regression model does not converge | You could try to check if Firth's bias reduction works with your dataset. It is a penalized likelihood approach that can be useful for datasets which produce divergences using the standard glm package. Sometimes it can be used instead of eliminating that variable which produces complete/almost complete separation.
For the formulation of the bias reduction (the $O(n^{-1})$-term in the asymptotic expansion of the bias of the maximum likelihood estimator is removed using classical cumulants expansion as motivating example) please check
http://biomet.oxfordjournals.org/content/80/1/27.abstract
Firth's bias reduction is implemented in the R-package logistf:
http://cran.r-project.org/web/packages/logistf/logistf.pdf
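A minimal usage sketch, with a completely separated toy dataset (the variable names are just for illustration):
library(logistf)
set.seed(2)
d <- data.frame(x = rnorm(100))
d$y <- as.numeric(d$x > 0)             # completely separated outcome
fit_firth <- logistf(y ~ x, data = d)  # penalized (Firth) likelihood
summary(fit_firth)                     # finite estimates despite the separation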
You could try to check if Firth's bias reduction works with your dataset. It is a penalized likelihood approach that can be useful for datasets which produce divergences using the standard glm package. Sometimes it can be used instead of eliminating that variable which produces complete/almost complete separation.
For the formulation of the bias reduction (the $O(n^{-1})$ term in the asymptotic expansion of the bias of the maximum likelihood estimator is removed, using the classical cumulant expansion as a motivating example), please see
http://biomet.oxfordjournals.org/content/80/1/27.abstract
Firth's bias reduction is implemented in the R-package logistf:
http://cran.r-project.org/web/packages/logistf/logistf.pdf | Logistic regression model does not converge
You could try to check if Firth's bias reduction works with your dataset. It is a penalized likelihood approach that can be useful for datasets which produce divergences using the standard glm package |
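As a rough sketch of how the logistf package is called on a completely separated toy dataset (the data and variable names here are made up):
# install.packages("logistf")                    # if needed
library(logistf)
set.seed(1)
toy <- data.frame(x = rnorm(50))
toy$y <- as.integer(toy$x > 0)                   # complete separation in x
firth.fit <- logistf(y ~ x, data = toy)
summary(firth.fit)                               # finite, penalized estimates where glm() would diverge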
5,375 | scale a number between a range [duplicate] | Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000].
So let
$r_{\text{min}}$ denote the minimum of the range of your measurement
$r_{\text{max}}$ denote the maximum of the range of your measurement
$t_{\text{min}}$ denote the minimum of the range of your desired target scaling
$t_{\text{max}}$ denote the maximum of the range of your desired target scaling
$m\in[r_{\text{min}},r_{\text{max}}]$ denote your measurement to be scaled
Then
$$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}}\times (t_{\text{max}}-t_{\text{min}}) + t_{\text{min}}$$
will scale $m$ linearly into $[t_{\text{min}},t_{\text{max}}]$ as desired.
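For instance, a small R helper implementing exactly this map (the argument names are mine):
rescale <- function(m, r_min, r_max, t_min, t_max) {
  (m - r_min) / (r_max - r_min) * (t_max - t_min) + t_min
}
rescale(200, r_min = 0, r_max = 10000, t_min = 0, t_max = 1)   # 0.02
rescale(200, r_min = 0, r_max = 200,   t_min = 0, t_max = 1)   # 1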
To go step by step,
$ m\mapsto m-r_{\text{min}}$ maps $m$ to $[0,r_{\text{max}}-r_{\text{min}}]$.
Next,
$$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}} $$
maps $m$ to the interval $[0,1]$, with $m=r_{\text{min}}$ mapped to $0$ and $m=r_{\text{max}}$ mapped to $1$.
Multiplying this by $(t_{\text{max}}-t_{\text{min}})$ maps $m$ to $[0,t_{\text{max}}-t_{\text{min}}]$.
Finally, adding $t_{\text{min}}$ shifts everything and maps $m$ to $[t_{\text{min}},t_{\text{max}}]$ as desired. | scale a number between a range [duplicate] | Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000].
So let
| scale a number between a range [duplicate]
Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000].
So let
$r_{\text{min}}$ denote the minimum of the range of your measurement
$r_{\text{max}}$ denote the maximum of the range of your measurement
$t_{\text{min}}$ denote the minimum of the range of your desired target scaling
$t_{\text{max}}$ denote the maximum of the range of your desired target scaling
$m\in[r_{\text{min}},r_{\text{max}}]$ denote your measurement to be scaled
Then
$$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}}\times (t_{\text{max}}-t_{\text{min}}) + t_{\text{min}}$$
will scale $m$ linearly into $[t_{\text{min}},t_{\text{max}}]$ as desired.
To go step by step,
$ m\mapsto m-r_{\text{min}}$ maps $m$ to $[0,r_{\text{max}}-r_{\text{min}}]$.
Next,
$$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}} $$
maps $m$ to the interval $[0,1]$, with $m=r_{\text{min}}$ mapped to $0$ and $m=r_{\text{max}}$ mapped to $1$.
Multiplying this by $(t_{\text{max}}-t_{\text{min}})$ maps $m$ to $[0,t_{\text{max}}-t_{\text{min}}]$.
Finally, adding $t_{\text{min}}$ shifts everything and maps $m$ to $[t_{\text{min}},t_{\text{max}}]$ as desired. | scale a number between a range [duplicate]
Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000].
So let
|
5,376 | scale a number between a range [duplicate] | In general, to scale your variable $x$ into a range $[a,b]$ you can use:
$$
x_{normalized} = (b-a)\frac{x - min(x)}{max(x) - min(x)} + a
$$ | scale a number between a range [duplicate] | In general, to scale your variable $x$ into a range $[a,b]$ you can use:
$$
x_{normalized} = (b-a)\frac{x - min(x)}{max(x) - min(x)} + a
$$ | scale a number between a range [duplicate]
In general, to scale your variable $x$ into a range $[a,b]$ you can use:
$$
x_{normalized} = (b-a)\frac{x - min(x)}{max(x) - min(x)} + a
$$ | scale a number between a range [duplicate]
In general, to scale your variable $x$ into a range $[a,b]$ you can use:
$$
x_{normalized} = (b-a)\frac{x - min(x)}{max(x) - min(x)} + a
$$ |
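A quick R sketch of this version, where the minimum and maximum are taken from the data vector itself:
rescale_vec <- function(x, a, b) (b - a) * (x - min(x)) / (max(x) - min(x)) + a
rescale_vec(c(200, 350, 1000), a = 0, b = 1)     # 0.0000 0.1875 1.0000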
5,377 | Why don't linear regression assumptions matter in machine learning? | It’s because statistics puts an emphasis on model inference, while machine learning puts an emphasis on accurate predictions.
We like normal residuals in linear regression because then the usual $\hat{\beta}=(X^TX)^{-1}X^Ty$ is a maximum likelihood estimator.
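As a small illustration on simulated data, the closed-form estimate above matches lm() whether or not the errors are normal:
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 1 + 2 * x + (rexp(n) - 1)                   # decidedly non-normal errors
X <- cbind(1, x)                                 # design matrix with intercept
beta_hat <- solve(t(X) %*% X, t(X) %*% y)        # (X'X)^{-1} X'y
cbind(beta_hat, coef(lm(y ~ x)))                 # same estimates, column by column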
We like uncorrelated predictors because then we get tighter confidence intervals on the parameters than we would if the predictors were correlated.
In machine learning, we often don’t care about how we get the answer, just that the result has a tight fit both in- and out-of-sample.
Leo Breiman has a famous article on the "two cultures" of modeling: Breiman, Leo. "Statistical modeling: The two cultures (with comments and a rejoinder by the author)." Statistical science 16.3 (2001): 199-231. | Why don't linear regression assumptions matter in machine learning? | It’s because statistics puts an emphasis on model inference, while machine learning puts an emphasis on accurate predictions.
We like normal residuals in linear regression because then the usual $\hat | Why don't linear regression assumptions matter in machine learning?
It’s because statistics puts an emphasis on model inference, while machine learning puts an emphasis on accurate predictions.
We like normal residuals in linear regression because then the usual $\hat{\beta}=(X^TX)^{-1}X^Ty$ is a maximum likelihood estimator.
We like uncorrelated predictors because then we get tighter confidence intervals on the parameters than we would if the predictors were correlated.
In machine learning, we often don’t care about how we get the answer, just that the result has a tight fit both in- and out-of-sample.
Leo Breiman has a famous article on the "two cultures" of modeling: Breiman, Leo. "Statistical modeling: The two cultures (with comments and a rejoinder by the author)." Statistical science 16.3 (2001): 199-231. | Why don't linear regression assumptions matter in machine learning?
It’s because statistics puts an emphasis on model inference, while machine learning puts an emphasis on accurate predictions.
We like normal residuals in linear regression because then the usual $\hat |
5,378 | Why don't linear regression assumptions matter in machine learning? | The typical linear regression assumptions are required mostly to make sure your inferences are right.
For instance, suppose you want to check if a certain predictor is associated with your target variable. In a linear regression setting, you would calculate the p-value associated with the coefficient of that predictor. In order to get this p-value correct, you need to satisfy all the assumptions.
In ML, on the other hand, you only want a model that can fit and generalize the patterns in your data: it's all about prediction, not inference. One would mostly care about how well the linear regression generalizes to unseen data, and this can be checked by assessing MSE on train-test split data or by cross-validation, with no need for parametric assumptions.
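A minimal sketch of that predictive check on simulated data (the 70/30 split fraction is an arbitrary choice):
set.seed(42)
dat <- data.frame(x = rnorm(200))
dat$y <- 1 + 2 * dat$x + rt(200, df = 3)         # heavy-tailed, non-normal errors
train <- sample(200, 140)
fit <- lm(y ~ x, data = dat[train, ])
pred <- predict(fit, newdata = dat[-train, ])
mean((dat$y[-train] - pred)^2)                   # out-of-sample MSE; no distributional assumption used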
Of course this is not as black and white as I put it, for instance, one can use parametric assumptions to derive error estimates for predictions on new data. This can still be interesting in a ML setting. Still, you are correct in noticing that these assumptions are, in general, very important from a Stats point of view and not such a big deal in ML and that's the reason: the focus on inference vs. the focus on prediction. | Why don't linear regression assumptions matter in machine learning? | The typical linear regression assumptions are required mostly to make sure your inferences are right.
For instance, suppose you want to check if a certain predictor is associated with your target vari | Why don't linear regression assumptions matter in machine learning?
The typical linear regression assumptions are required mostly to make sure your inferences are right.
For instance, suppose you want to check if a certain predictor is associated with your target variable. In a linear regression setting, you would calculate the p-value associated with the coefficient of that predictor. In order to get this p-value correct, you need to satisfy all the assumptions.
In ML, on the other hand, you only want a model that can fit and generalize the patterns in your data: it's all about prediction, not inference. One would mostly care about how well the linear regression generalizes to unseen data, and this can be checked by assessing MSE on train-test split data or by cross-validation, with no need for parametric assumptions.
Of course this is not as black and white as I put it, for instance, one can use parametric assumptions to derive error estimates for predictions on new data. This can still be interesting in a ML setting. Still, you are correct in noticing that these assumptions are, in general, very important from a Stats point of view and not such a big deal in ML and that's the reason: the focus on inference vs. the focus on prediction. | Why don't linear regression assumptions matter in machine learning?
The typical linear regression assumptions are required mostly to make sure your inferences are right.
For instance, suppose you want to check if a certain predictor is associated with your target vari |
5,379 | Why don't linear regression assumptions matter in machine learning? | A linear regression is a statistical procedure that can be interpreted from both perspectives. Instead I will tackle the question of comparing linear regression (and its assumptions) to other methods.
A linear regression takes the form
$$ Y_i = X_i'\beta + \varepsilon_i$$
Textbooks usually ask you to check (i) Exogeneity $\mathbb{E}[\varepsilon_i \mid X_i] = 0$, (ii) Non-collinearity: $\mathbb{E}[X_iX_i']$ is invertible, and (iii) homoskedasticity, $\mathbb{E}[\varepsilon_i^2 \mid X_i] = \sigma^2$. Only (i) and (ii) are considered identifying assumptions, and (iii) can be replaced by much weaker assumptions. Normality of residuals sometimes appears in introductory texts, but it has been shown to be unnecessary for understanding the large-sample behavior. Why do we need these assumptions?
$$ \widehat{\beta} = \beta + {\underbrace{\left(\frac{X'X}{n}\right)}_{\to^p \mathbb{E}[X_iX_i']}}^{-1} \ \underbrace{\left(\frac{X'\varepsilon}{n}\right)}_{\to^p \mathbb{E}[X_i\varepsilon_i]}$$
Condition (i) makes the second term zero, (ii) makes sure that the matrix is invertible, (iii) or some version of it guarantees the validity of the weak law of large numbers. Similar ideas are used to compute standard errors. The estimated prediction is $X_i'\widehat{\beta}$ which converges to $X_i'\beta$.
A typical machine learning (ML) algorithm attempts a more complicated functional form
$$ Y_i = g(X_i) + \varepsilon_i $$
The "regression" function is defined as $g(x) = \mathbb{E}[Y_i \mid X_i = x]$. By construction
$$\mathbb{E}[\varepsilon_i \mid X_i] = \mathbb{E}[Y_i - g(X_i) \mid X_i] = 0$$
Assumption (i) is automatically satisfied if the ML method is sufficiently flexible to describe the data. Assumption (ii) is still needed, with some caveats. Non-collinearity is a special case of a regularization condition. It says that your model can't be too complex relative to the sample size or include redundant information. ML methods also have that issue, but typically adjust it via a "tuning parameter". The problem is still there; it's just that some state-of-the-art ML methods push the complexity to squeeze more information from the data. Versions of (iii) are still technically there for convergence, but are usually easy to satisfy in both linear regressions and ML models.
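A toy sketch of the contrast, with loess() standing in for a flexible ML fit and its span playing the role of the tuning parameter:
set.seed(7)
x <- runif(300, -3, 3)
y <- sin(2 * x) + rnorm(300, sd = 0.3)           # true g(x) = sin(2x)
fit_lin  <- lm(y ~ x)                            # imposes a linear form
fit_flex <- loess(y ~ x, span = 0.3)             # flexible fit; span is the tuning parameter
grid <- data.frame(x = seq(-2.5, 2.5, length.out = 100))
mean((sin(2 * grid$x) - predict(fit_lin,  grid))^2)   # large approximation error
mean((sin(2 * grid$x) - predict(fit_flex, grid))^2)   # much smaller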
It is also worth noting that some problems in experimental analyses involve latent variables (partially unobserved $X_i$). This sometimes changes the interpretation of the exogeneity condition in both linear regression and ML models. Off-the-shelf ML just makes the most out of observed data, but state-of-the-art research adapts ML for causal models with latent variables as well.
*PS: In the linear regression $\mathbb{E}[X_i\varepsilon_i] = 0$ can replace (i). | Why don't linear regression assumptions matter in machine learning? | A linear regression is a statistical procedure that can be interpreted from both perspectives. Instead I will tackle the question of comparing linear regression (and its assumptions) to other methods. | Why don't linear regression assumptions matter in machine learning?
A linear regression is a statistical procedure that can be interpreted from both perspectives. Instead I will tackle the question of comparing linear regression (and its assumptions) to other methods.
A linear regression takes the form
$$ Y_i = X_i'\beta + \varepsilon_i$$
Textbooks usually ask you to check (i) Exogeneity $\mathbb{E}[\varepsilon_i \mid X_i] = 0$, (ii) Non-collinearity: $\mathbb{E}[X_iX_i']$ is invertible, and (iii) homoskedasticity, $\mathbb{E}[\varepsilon_i^2 \mid X_i] = \sigma^2$. Only (i) and (ii) are considered identifying assumptions, and (iii) can be replaced by much weaker assumptions. Normality of residuals sometimes appears in introductory texts, but it has been shown to be unnecessary for understanding the large-sample behavior. Why do we need these assumptions?
$$ \widehat{\beta} = \beta + {\underbrace{\left(\frac{X'X}{n}\right)}_{\to^p \mathbb{E}[X_iX_i']}}^{-1} \ \underbrace{\left(\frac{X'\varepsilon}{n}\right)}_{\to^p \mathbb{E}[X_i\varepsilon_i]}$$
Condition (i) makes the second term zero, (ii) makes sure that the matrix is invertible, (iii) or some version of it guarantees the validity of the weak law of large numbers. Similar ideas are used to compute standard errors. The estimated prediction is $X_i'\widehat{\beta}$ which converges to $X_i'\beta$.
A typical machine learning (ML) algorithm attempts a more complicated functional form
$$ Y_i = g(X_i) + \varepsilon_i $$
The "regression" function is defined as $g(x) = \mathbb{E}[Y_i \mid X_i = x]$. By construction
$$\mathbb{E}[\varepsilon_i \mid X_i] = \mathbb{E}[Y_i - g(X_i) \mid X_i] = 0$$
Assumption (i) is automatically satisfied if the ML method is sufficiently flexible to describe the data. Assumption (ii) is still needed, with some caveats. Non-collinearity is a special case of a regularization condition. It says that your model can't be too complex relative to the sample size or include redundant information. ML methods also have that issue, but typically adjust it via a "tuning parameter". The problem is still there; it's just that some state-of-the-art ML methods push the complexity to squeeze more information from the data. Versions of (iii) are still technically there for convergence, but are usually easy to satisfy in both linear regressions and ML models.
It is also worth noting that some problems in experimental analyses involve latent variables (partially unobserved $X_i$). This sometimes changes the interpretation of the exogeneity condition in both linear regression and ML models. Off-the-shelf ML just makes the most out of observed data, but state-of-the-art research adapts ML for causal models with latent variables as well.
*PS: In the linear regression $\mathbb{E}[X_i\varepsilon_i] = 0$ can replace (i). | Why don't linear regression assumptions matter in machine learning?
A linear regression is a statistical procedure that can be interpreted from both perspectives. Instead I will tackle the question of comparing linear regression (and its assumptions) to other methods. |
5,380 | Why don't linear regression assumptions matter in machine learning? | Assumptions do matter for regression whether it is used for inference (as is most common in statistics) or prediction (as is most common in machine learning). However, the sets of assumptions are not the same; successful prediction requires less restrictive assumptions than sensible inference does. The post "T-consistency vs. P-consistency" illustrates one of the assumptions that is needed for predictive success. If the so-called predictive consistency fails, prediction with regression will fail.
Why is so little attention paid to assumptions in machine learning context? I am not sure. Perhaps the assumptions for successful prediction are quite often satisfied (at least approximately), so they are less important. Also, it might be a historical reason, but we might also see some more discussion of assumptions in future texts (who knows). | Why don't linear regression assumptions matter in machine learning? | Assumptions do matter for regression whether it is used for inference (as is most common in statistics) or prediction (as is most common in machine learning). However, the sets of assumptions are not | Why don't linear regression assumptions matter in machine learning?
Assumptions do matter for regression whether it is used for inference (as is most common in statistics) or prediction (as is most common in machine learning). However, the sets of assumptions are not the same; successful prediction requires less restrictive assumptions than sensible inference does. The post "T-consistency vs. P-consistency" illustrates one of the assumptions that is needed for predictive success. If the so-called predictive consistency fails, prediction with regression will fail.
Why is so little attention paid to assumptions in machine learning context? I am not sure. Perhaps the assumptions for successful prediction are quite often satisfied (at least approximately), so they are less important. Also, it might be a historical reason, but we might also see some more discussion of assumptions in future texts (who knows). | Why don't linear regression assumptions matter in machine learning?
Assumptions do matter for regression whether it is used for inference (as is most common in statistics) or prediction (as is most common in machine learning). However, the sets of assumptions are not |
5,381 | Why don't linear regression assumptions matter in machine learning? | Even ignoring inference, the normality assumption matters for machine learning. In predictive modeling, the conditional distributions of the target variable are important. Gross non-normality indicates alternative models and/or methods are needed.
My post just focuses on the assumption of normality of the dependent (or target) variable; cases can be made for all the other regression assumptions as well.
Examples:
The data are very discrete. In the most extreme case, the data have only two possible values, in which case you should be using logistic regression for your predictive model. Similarly, with only a small number of ordinal values, you should use ordinal regression, and with only a small number of nominal values, you should use multinomial regression.
The data are censored. You might realize, in the process of investigating normality, that there is an upper bound. In some cases the upper bound is not really data, just an indication that the true data value is higher. In this case, ordinary predictive models must not be used because of gross biases. Censored data models must be used instead.
In the process of investigating normality (eg using q-q plots) it may become apparent that there are occasional extreme outlier observations (part of the process that you are studying) that will grossly affect ordinary predictive models. In such cases it would be prudent to use a predictive model that minimizes something other than squared errors, such as median regression, or (the negative of) a likelihood function that assumes heavy-tailed distributions. Similarly, you should evaluate predictive ability in such cases using something other than squared errors.
If you do use an ordinary predictive model, you would often like to bound the prediction error in some way for any particular prediction. The usual 95% bound $\hat Y \pm 1.96 \hat \sigma$ is valid for normal distributions (assuming that $\hat \sigma$ correctly estimates the conditional standard deviation), but not otherwise. With non-normal conditional distributions, the interval should be asymmetric and/or a different multiplier is needed.
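A quick simulated example of that last point: with skewed errors the usual symmetric bound has roughly the right overall coverage but misses almost entirely on one side:
set.seed(11)
x <- runif(5000)
y <- 1 + x + (rexp(5000) - 1)                    # skewed, mean-zero errors
fit <- lm(y ~ x)
s <- summary(fit)$sigma
mean(abs(y - fitted(fit)) <= 1.96 * s)           # ~0.95 overall
mean(y - fitted(fit) >  1.96 * s)                # ~0.05: essentially all exceedances in the upper tail
mean(y - fitted(fit) < -1.96 * s)                # ~0: a symmetric interval is the wrong shape here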
All that having been said, there is no "thou shalt check normality" commandment. You don't have to do it at all. It's just that in certain cases, you can do better by using alternative methods when the conditional distributions are grossly non-normal. | Why don't linear regression assumptions matter in machine learning? | Even ignoring inference, the normality assumption matters for machine learning. In predictive modeling, the conditional distributions of the target variable are important. Gross non-normality indicate | Why don't linear regression assumptions matter in machine learning?
Even ignoring inference, the normality assumption matters for machine learning. In predictive modeling, the conditional distributions of the target variable are important. Gross non-normality indicates alternative models and/or methods are needed.
My post just focuses on the assumption of normality of the dependent (or target) variable; cases can be made for all the other regression assumptions as well.
Examples:
The data are very discrete. In the most extreme case, the data have only two possible values, in which case you should be using logistic regression for your predictive model. Similarly, with only a small number of ordinal values, you should use ordinal regression, and with only a small number of nominal values, you should use multinomial regression.
The data are censored. You might realize, in the process of investigating normality, that there is an upper bound. In some cases the upper bound is not really data, just an indication that the true data value is higher. In this case, ordinary predictive models must not be used because of gross biases. Censored data models must be used instead.
In the process of investigating normality (eg using q-q plots) it may become apparent that there are occasional extreme outlier observations (part of the process that you are studying) that will grossly affect ordinary predictive models. In such cases it would be prudent to use a predictive model that minimizes something other than squared errors, such as median regression, or (the negative of) a likelihood function that assumes heavy-tailed distributions. Similarly, you should evaluate predictive ability in such cases using something other than squared errors.
If you do use an ordinary predictive model, you would often like to bound the prediction error in some way for any particular prediction. The usual 95% bound $\hat Y \pm 1.96 \hat \sigma$ is valid for normal distributions (assuming that $\hat \sigma$ correctly estimates the conditional standard deviation), but not otherwise. With non-normal conditional distributions, the interval should be asymmetric and/or a different multiplier is needed.
All that having been said, there is no "thou shalt check normality" commandment. You don't have to do it at all. It's just that in certain cases, you can do better by using alternative methods when the conditional distributions are grossly non-normal. | Why don't linear regression assumptions matter in machine learning?
Even ignoring inference, the normality assumption matters for machine learning. In predictive modeling, the conditional distributions of the target variable are important. Gross non-normality indicate |
5,382 | Why don't linear regression assumptions matter in machine learning? | The real answer is because most people peddling machine learning are deceptive con artists.
The curse of dimensionality precludes most complex regressions that have any sort of chaotic relationship, since you are trying to build a surface of best fit over an N-1 dimensional space. See Page 41 of David Kristjanson Duvenaud's PhD thesis. Tools like Facebook Prophet provide a great delusion to the user since they just ignore all mathematical verification and give users "what they want".
Classification models are typically easier because the surface has more potential fits that yield meaningful separation in the data. Most regression fits are not "meaningful". It is likely when 2 people see the same thing, they are actually identifying it with different separation procedures in their "neural nets".
You should think long and hard about your assumptions and try to poke holes in any failure you can imagine, because mathematical proofs are still few and far between in this protoscience.
EDIT: I've written a fairly simple proof of why the SMOTE algorithm actually makes no sense. I would suggest it be immediately abandoned as a legitimate method in Machine Learning. The work is here: https://mikaeltamillow96.medium.com/smote-ml-hocus-pocus-ddee12506b39 | Why don't linear regression assumptions matter in machine learning? | The real answer is because most people peddling machine learning are deceptive con artists.
The curse of dimensionality precludes most complex regressions that have any sort of chaotic relationship, s | Why don't linear regression assumptions matter in machine learning?
The real answer is because most people peddling machine learning are deceptive con artists.
The curse of dimensionality precludes most complex regressions that have any sort of chaotic relationship, since you are trying to build a surface of best fit over an N-1 dimensional space. See Page 41 of David Kristjanson Duvenaud's PhD thesis. Tools like Facebook Prophet provide a great delusion to the user since they just ignore all mathematical verification and give users "what they want".
Classification models are typically easier because the surface has more potential fits that yield meaningful separation in the data. Most regression fits are not "meaningful". It is likely when 2 people see the same thing, they are actually identifying it with different separation procedures in their "neural nets".
You should think long and hard about your assumptions and try to poke holes in any failure you can imagine, because mathematical proofs are still few and far between in this protoscience.
EDIT: I've written a fairly simple proof of why the SMOTE algorithm actually makes no sense. I would suggest it be immediately abandoned as a legitimate method in Machine Learning. The work is here: https://mikaeltamillow96.medium.com/smote-ml-hocus-pocus-ddee12506b39 | Why don't linear regression assumptions matter in machine learning?
The real answer is because most people peddling machine learning are deceptive con artists.
The curse of dimensionality precludes most complex regressions that have any sort of chaotic relationship, s |
5,383 | Why do people use p-values instead of computing probability of the model given data? | Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivity of the Bayesian definition of a probability. The truth of a particular hypothesis is not a random variable, it is either true or it isn't and has no long run frequency. It is indeed more natural to be interested in the probability of the truth of the hypothesis, which is IMHO why p-values are often misinterpreted as the probability that the null hypothesis is true. Part of the difficulty is that from Bayes rule, we know that to compute the posterior probability that a hypothesis is true, you need to start with a prior probability that the hypothesis is true.
A Bayesian would compute the probability that the hypothesis is true, given the data (and his/her prior belief).
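For example, for the 14-heads-in-20-tosses case from the question, a Bayesian with a flat Beta(1,1) prior could report something like the following (the 0.45-0.55 "practically fair" band is an arbitrary illustrative choice):
heads <- 14; tosses <- 20
a <- 1 + heads; b <- 1 + tosses - heads          # Beta(15, 7) posterior for the bias
pbeta(0.55, a, b) - pbeta(0.45, a, b)            # P(bias within 0.05 of one half | data)
1 - pbeta(0.5, a, b)                             # P(coin favours heads | data)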
Essentially, deciding between the frequentist and Bayesian approaches is a choice of whether the supposed subjectivity of the Bayesian approach is more abhorrent than the fact that the frequentist approach generally does not give a direct answer to the question you actually want to ask - but there is room for both.
In the case of asking whether a coin is fair, i.e. the probability of a head is equal to the probability of a tail, we also have an example of a hypothesis that we know in the real world is almost certainly false right from the outset. The two sides of the coin are non-symmetric, so we should expect a slight asymmetry in the probabilities of heads and tails, so if the coin "passes" the test, it just means we don't have enough observations to be able to conclude what we already know to be true - that the coin is very slightly biased! | Why do people use p-values instead of computing probability of the model given data? | Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivit | Why do people use p-values instead of computing probability of the model given data?
Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivity of the Bayesian definition of a probability. The truth of a particular hypothesis is not a random variable, it is either true or it isn't and has no long run frequency. It is indeed more natural to be interested in the probability of the truth of the hypothesis, which is IMHO why p-values are often misinterpreted as the probability that the null hypothesis is true. Part of the difficulty is that from Bayes rule, we know that to compute the posterior probability that a hypothesis is true, you need to start with a prior probability that the hypothesis is true.
A Bayesian would compute the probability that the hypothesis is true, given the data (and his/her prior belief).
Essentially, deciding between the frequentist and Bayesian approaches is a choice of whether the supposed subjectivity of the Bayesian approach is more abhorrent than the fact that the frequentist approach generally does not give a direct answer to the question you actually want to ask - but there is room for both.
In the case of asking whether a coin is fair, i.e. the probability of a head is equal to the probability of a tail, we also have an example of a hypothesis that we know in the real world is almost certainly false right from the outset. The two sides of the coin are non-symmetric, so we should expect a slight asymmetry in the probabilities of heads and tails, so if the coin "passes" the test, it just means we don't have enough observations to be able to conclude what we already know to be true - that the coin is very slightly biased! | Why do people use p-values instead of computing probability of the model given data?
Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivit |
5,384 | Why do people use p-values instead of computing probability of the model given data? | Nothing like answering a really old question, but here goes....
p-values are almost valid hypothesis tests. This is a slightly adapted excerpt taken from Jaynes's 2003 probability theory book (Repetitive experiments: probability and frequency). Suppose we have a null hypothesis $H_0$ that we wish to test. We have data $D$ and prior information $I$. Suppose that there is some unspecified hypothesis $H_A$ that we will test $H_0$ against. The posterior odds ratio for $H_A$ against $H_0$ is then given by:
$$\frac{P(H_A|DI)}{P(H_0|DI)}=\frac{P(H_A|I)}{P(H_0|I)}\times\frac{P(D|H_AI)}{P(D|H_0I)}$$
Now the first term on the right hand side is independent of the data, so the data can only influence the result via the second term. Now, we can always invent an alternative hypothesis $H_A$ such that $P(D|H_AI)=1$ - a "perfect fit" hypothesis. Thus we can use $\frac{1}{P(D|H_0I)}$ as a measure of how well the data could support any alternative hypothesis over the null. There is no alternative hypothesis that the data could support over $H_0$ by greater than $\frac{1}{P(D|H_0I)}$. We can also restrict the class of alternatives, and the change is that the $1$ is replaced by the maximised likelihood (including normalising constants) within that class. If $P(D|H_0I)$ starts to become too small, then we begin to doubt the null, because the number of alternatives between $H_0$ and $H_A$ grows (including some with non-negligible prior probabilities). But this is so very nearly what is done with p-values, but with one exception: we don't calculate the probability for $t(D)>t_0$ for some statistic $t(D)$ and some "bad" region of the statistic. We calculate the probability for $D$ - the information we actually have, rather than some subset of it, $t(D)$.
Another reason people use p-values is that they often amount to a "proper" hypothesis test, but may be easier to calculate. We can show this with the very simple example of testing the normal mean with known variance. We have data $D\equiv\{x_1,\dots,x_N\}$ with an assumed model $x_i\sim Normal(\mu,\sigma^2)$ (part of the prior information $I$). We want to test $H_0:\mu=\mu_0$. Then we have, after a little calculation:
$$P(D|H_0I)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{N\left[s^2+(\overline{x}-\mu_0)^2\right]}{2\sigma^2}\right)$$
Where $\overline{x}=\frac{1}{N}\sum_{i=1}^{N}x_i$ and $s^2=\frac{1}{N}\sum_{i=1}^{N}(x_i-\overline{x})^2$. This shows that the maximum value of $P(D|H_0I)$ will be achieved when $\mu_0=\overline{x}$. The maximised value is:
$$P(D|H_AI)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)$$
So we take the ratio of these two, and we get:
$$\frac{P(D|H_AI)}{P(D|H_0I)}=\frac{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2+N(\overline{x}-\mu_0)^2}{2\sigma^2}\right)}=\exp\left(\frac{z^2}{2}\right)$$
Where $z=\sqrt{N}\frac{\overline{x}-\mu_0}{\sigma}$ is the "Z-statistic". Large values of $|z|$ cast doubt on the null hypothesis, relative to the hypothesis about the normal mean which is most strongly supported by the data. We can also see that $\overline{x}$ is the only part of the data that is needed, and thus is a sufficient statistic for the test.
The p-value approach to this problem is almost the same, but in reverse. We start with the sufficient statistic $\overline{x}$, and we calculate its sampling distribution, which is easily shown to be $\overline{X}\sim Normal\left(\mu,\frac{\sigma^2}{N}\right)$ - where I have used a capital letter to distinguish the random variable $\overline{X}$ from the observed value $\overline{x}$. Now we need to find a region which casts doubt on the null hypothesis: this is easily seen to be those regions where $|\overline{X}-\mu_0|$ is large. So we can calculate the probability that $|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0|$ as a measure of how far away the observed data is from the null hypothesis. As before, this is a simple calculation, and we get:
$$\text{p-value}=P(|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0||H_0)$$
$$=1-P\left[-\sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}\leq\sqrt{N}\frac{\overline{X}-\mu_0}{\sigma}\leq \sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}|H_0\right]$$
$$=1-P(-|z|\leq Z\leq |z||H_0)=2\left[1-\Phi(|z|)\right]$$
Now, we can see that the p-value is a monotonic decreasing function of $|z|$, which means we essentially get the same answer as the "proper" hypothesis test. Rejecting when the p-value is below a certain threshold is the same thing as rejecting when the posterior odds is above a certain threshold. However, note that in doing the proper test, we had to define the class of alternatives, and we had to maximise a probability over that class. For the p-value, we have to find a statistic, and calculate its sampling distribution, and evaluate this at the observed value. In some sense choosing a statistic is equivalent to defining the alternative hypothesis that you are considering.
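A short numerical sketch of this correspondence (made-up data, with $\mu_0=0$ and $\sigma=1$ treated as known):
set.seed(3)
N <- 25
x <- rnorm(N, mean = 0.4, sd = 1)
z <- sqrt(N) * (mean(x) - 0) / 1
2 * (1 - pnorm(abs(z)))                          # the p-value 2[1 - Phi(|z|)]
exp(z^2 / 2)                                     # the maximised evidence ratio against H0
exp(qnorm(1 - 0.10 / 2)^2 / 2)                   # ~3.87: the figure quoted at the end of this answer for p = 0.1
exp(qnorm(1 - 0.05 / 2)^2 / 2)                   # ~6.83: the corresponding figure for p = 0.05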
Although they are both easy things to do in this example, they are not always so easy in more complicated cases. In some cases it may be easier to choose the right statistic to use and calculate its sampling distribution. In others it may be easier to define the class of alternatives, and maximise over that class.
This simple example accounts for a large amount of p-value based testing, simply because so many hypothesis tests are of the "approximate normal" variety. It provides an approximate answer to your coin problem as well (by using the normal approximation to the binomial). It also shows that p-values in this case will not lead you astray, at least in terms of testing a single hypothesis. In this case, we can say that a p-value is a measure of evidence against the null hypothesis.
However, p-values have a less interpretable scale than the Bayes factor - the link between the p-value and the "amount" of evidence against the null is complex. p-values get too small too quickly, which makes them difficult to use properly. They tend to overstate the support against the null provided by the data. If we interpret p-values as probabilities against the null, then $0.1$ in odds form is $9$, when the actual evidence is $3.87$, and $0.05$ in odds form is $19$ when the actual evidence is $6.83$. Or, to put it another way, using a p-value as the probability that the null is false here is equivalent to setting the prior odds. So for a p-value of $0.1$ the implied prior odds against the null are $2.33$, and for a p-value of $0.05$ the implied prior odds against the null are $2.78$. | Why do people use p-values instead of computing probability of the model given data? | Nothing like answering a really old question, but here goes....
p-values are almost valid hypothesis tests. This is a slightly adapted exerpt taken from Jaynes's 2003 probability theory book (Repetit | Why do people use p-values instead of computing probability of the model given data?
Nothing like answering a really old question, but here goes....
p-values are almost valid hypothesis tests. This is a slightly adapted excerpt taken from Jaynes's 2003 probability theory book (Repetitive experiments: probability and frequency). Suppose we have a null hypothesis $H_0$ that we wish to test. We have data $D$ and prior information $I$. Suppose that there is some unspecified hypothesis $H_A$ that we will test $H_0$ against. The posterior odds ratio for $H_A$ against $H_0$ is then given by:
$$\frac{P(H_A|DI)}{P(H_0|DI)}=\frac{P(H_A|I)}{P(H_0|I)}\times\frac{P(D|H_AI)}{P(D|H_0I)}$$
Now the first term on the right hand side is independent of the data, so the data can only influence the result via the second term. Now, we can always invent an alternative hypothesis $H_A$ such that $P(D|H_AI)=1$ - a "perfect fit" hypothesis. Thus we can use $\frac{1}{P(D|H_0I)}$ as a measure of how well the data could support any alternative hypothesis over the null. There is no alternative hypothesis that the data could support over $H_0$ by greater than $\frac{1}{P(D|H_0I)}$. We can also restrict the class of alternatives, and the change is that the $1$ is replaced by the maximised likelihood (including normalising constants) within that class. If $P(D|H_0I)$ starts to become too small, then we begin to doubt the null, because the number of alternatives between $H_0$ and $H_A$ grows (including some with non-negligible prior probabilities). But this is so very nearly what is done with p-values, but with one exception: we don't calculate the probability for $t(D)>t_0$ for some statistic $t(D)$ and some "bad" region of the statistic. We calculate the probability for $D$ - the information we actually have, rather than some subset of it, $t(D)$.
Another reason people use p-values is that they often amount to a "proper" hypothesis test, but may be easier to calculate. We can show this with the very simple example of testing the normal mean with known variance. We have data $D\equiv\{x_1,\dots,x_N\}$ with an assumed model $x_i\sim Normal(\mu,\sigma^2)$ (part of the prior information $I$). We want to test $H_0:\mu=\mu_0$. Then we have, after a little calculation:
$$P(D|H_0I)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{N\left[s^2+(\overline{x}-\mu_0)^2\right]}{2\sigma^2}\right)$$
Where $\overline{x}=\frac{1}{N}\sum_{i=1}^{N}x_i$ and $s^2=\frac{1}{N}\sum_{i=1}^{N}(x_i-\overline{x})^2$. This shows that the maximum value of $P(D|H_0I)$ will be achieved when $\mu_0=\overline{x}$. The maximised value is:
$$P(D|H_AI)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)$$
So we take the ratio of these two, and we get:
$$\frac{P(D|H_AI)}{P(D|H_0I)}=\frac{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2+N(\overline{x}-\mu_0)^2}{2\sigma^2}\right)}=\exp\left(\frac{z^2}{2}\right)$$
Where $z=\sqrt{N}\frac{\overline{x}-\mu_0}{\sigma}$ is the "Z-statistic". Large values of $|z|$ cast doubt on the null hypothesis, relative to the hypothesis about the normal mean which is most strongly supported by the data. We can also see that $\overline{x}$ is the only part of the data that is needed, and thus is a sufficient statistic for the test.
The p-value approach to this problem is almost the same, but in reverse. We start with the sufficient statistic $\overline{x}$, and we calculate its sampling distribution, which is easily shown to be $\overline{X}\sim Normal\left(\mu,\frac{\sigma^2}{N}\right)$ - where I have used a capital letter to distinguish the random variable $\overline{X}$ from the observed value $\overline{x}$. Now we need to find a region which casts doubt on the null hypothesis: this is easily seen to be those regions where $|\overline{X}-\mu_0|$ is large. So we can calculate the probability that $|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0|$ as a measure of how far away the observed data is from the null hypothesis. As before, this is a simple calculation, and we get:
$$\text{p-value}=P(|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0||H_0)$$
$$=1-P\left[-\sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}\leq\sqrt{N}\frac{\overline{X}-\mu_0}{\sigma}\leq \sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}|H_0\right]$$
$$=1-P(-|z|\leq Z\leq |z||H_0)=2\left[1-\Phi(|z|)\right]$$
Now, we can see that the p-value is a monotonic decreasing function of $|z|$, which means we essentially get the same answer as the "proper" hypothesis test. Rejecting when the p-value is below a certain threshold is the same thing as rejecting when the posterior odds is above a certain threshold. However, note that in doing the proper test, we had to define the class of alternatives, and we had to maximise a probability over that class. For the p-value, we have to find a statistic, and calculate its sampling distribution, and evaluate this at the observed value. In some sense choosing a statistic is equivalent to defining the alternative hypothesis that you are considering.
Although they are both easy things to do in this example, they are not always so easy in more complicated cases. In some cases it may be easier to choose the right statistic to use and calculate its sampling distribution. In others it may be easier to define the class of alternatives, and maximise over that class.
This simple example accounts for a large amount of p-value based testing, simply because so many hypothesis tests are of the "approximate normal" variety. It provides an approximate answer to your coin problem as well (by using the normal approximation to the binomial). It also shows that p-values in this case will not lead you astray, at least in terms of testing a single hypothesis. In this case, we can say that a p-value is a measure of evidence against the null hypothesis.
However, p-values have a less interpretable scale than the Bayes factor - the link between the p-value and the "amount" of evidence against the null is complex. p-values get too small too quickly, which makes them difficult to use properly. They tend to overstate the support against the null provided by the data. If we interpret p-values as probabilities against the null, then $0.1$ in odds form is $9$, when the actual evidence is $3.87$, and $0.05$ in odds form is $19$ when the actual evidence is $6.83$. Or, to put it another way, using a p-value as the probability that the null is false here is equivalent to setting the prior odds. So for a p-value of $0.1$ the implied prior odds against the null are $2.33$, and for a p-value of $0.05$ the implied prior odds against the null are $2.78$. | Why do people use p-values instead of computing probability of the model given data?
Nothing like answering a really old question, but here goes....
p-values are almost valid hypothesis tests. This is a slightly adapted exerpt taken from Jaynes's 2003 probability theory book (Repetit |
5,385 | Why do people use p-values instead of computing probability of the model given data? | Your question is a great example of frequentist reasoning and is, actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to predict the results of a coin flip. No matter what the result, I record a "correct" guess. We do this repeatedly until the class becomes suspicious.
Now, they have a null model in their head. They assume the coin is fair. Given that assumption of 50% correct when everything is fair, every successive correct guess arouses more suspicion that the fair coin model is incorrect. A few correct guesses and they accept the role of chance. After 5 or 10 correct guesses, the class always begins to suspect that the chance of a fair coin is low. Thus it is with the nature of hypothesis testing under the frequentist model.
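The arithmetic behind that growing suspicion is just powers of one half:
k <- 1:10
round(0.5^k, 4)                                  # P(k correct guesses in a row | fair coin)
                                                 # ~0.031 after 5 guesses, ~0.001 after 10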
It is a clear and intuitive representation of the frequentist take on hypothesis testing. It is the probability of the observed data given that the null is true. It is actually quite natural as demonstrated by this easy experiment. We take it for granted that the model is 50-50 but as evidence mounts, I reject that model and suspect that there is something else at play.
So, if the probability of what I observe is low given the model I assume (the p-value) then I have some confidence in rejecting my assumed model. Thus, a p-value is a useful measure of evidence against my assumed model taking into account the role of chance.
A disclaimer: I took this exercise from a long forgotten article in, what I recall, was one of the ASA journals. | Why do people use p-values instead of computing probability of the model given data? | Your question is a great example of frequentist reasoning and is, actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to | Why do people use p-values instead of computing probability of the model given data?
Your question is a great example of frequentist reasoning and is, actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to predict the results of a coin flip. No matter what the result, I record a "correct" guess. We do this repeatedly until the class becomes suspicious.
Now, they have a null model in their head. They assume the coin is fair. Given that assumption of 50% correct when everything is fair, every successive correct guess arouses more suspicion that the fair coin model is incorrect. A few correct guesses and they accept the role of chance. After 5 or 10 correct guesses, the class always begins to suspect that the chance of a fair coin is low. Thus it is with the nature of hypothesis testing under the frequentist model.
It is a clear and intuitive representation of the frequentist take on hypothesis testing. It is the probability of the observed data given that the null is true. It is actually quite natural as demonstrated by this easy experiment. We take it for granted that the model is 50-50 but as evidence mounts, I reject that model and suspect that there is something else at play.
So, if the probability of what I observe is low given the model I assume (the p-value) then I have some confidence in rejecting my assumed model. Thus, a p-value is a useful measure of evidence against my assumed model taking into account the role of chance.
A disclaimer: I took this exercise from a long forgotten article in, what I recall, was one of the ASA journals. | Why do people use p-values instead of computing probability of the model given data?
Your question is a great example of frequentist reasoning and is, actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to |
5,386 | Why do people use p-values instead of computing probability of the model given data? | As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundationally, but maybe that is not as necessary as we like to think when we're thinking academically.
In the world of data, we're surrounded by a literally infinite number of possible things to look into next. With p-value computations, all you need is an idea of what is uninteresting and a numerical heuristic for what sort of data might be interesting (well, plus a probability model for the uninteresting). Then individually or collectively we can scan things pretty simply, rejecting the bulk of the uninteresting. The p-value allows us to say "If I don't put much priority on thinking about this otherwise, this data gives me no reason to change".
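A toy sketch of that screening use (simulated data; in practice you would also correct for multiple testing, e.g. with p.adjust):
set.seed(5)
n <- 100; p <- 50
X <- matrix(rnorm(n * p), n, p)
y <- 0.8 * X[, 1] + rnorm(n)                     # only column 1 is genuinely interesting
pvals <- apply(X, 2, function(col) summary(lm(y ~ col))$coefficients[2, 4])
which(pvals < 0.01)                              # the bulk of the uninteresting columns drops out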
I agree p-values can be misinterpreted and overinterpreted, but they're still an important part of statistics. | Why do people use p-values instead of computing probability of the model given data? | As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundat | Why do people use p-values instead of computing probability of the model given data?
As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundationally, but maybe that is not as necessary as we like to think when we're thinking academically.
In the world of data, we're surrounded by a literally infinite number of possible things to look into next. With p-value computations, all you need is an idea of what is uninteresting and a numerical heuristic for what sort of data might be interesting (well, plus a probability model for the uninteresting). Then individually or collectively we can scan things pretty simply, rejecting the bulk of the uninteresting. The p-value allows us to say "If I don't put much priority on thinking about this otherwise, this data gives me no reason to change".
I agree p-values can be misinterpreted and overinterpreted, but they're still an important part of statistics. | Why do people use p-values instead of computing probability of the model given data?
As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundat |
5,387 | Why do people use p-values instead of computing probability of the model given data? | "Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)."
but it doesn't. Not even roughly - this fudges an essential distinction.
The model is not specified, as Raskolnikov points out, but let's assume you mean a binomial model (independent coin tosses, fixed unknown coin bias). The hypothesis is the claim that the relevant parameter in this model, the bias or probability of heads, is 0.5.
"Having this probability (p-value) we want to judge our hypothesis (how likely it is)"
We may indeed want to make this judgement but a p-value will not (and was not designed to) help us do so.
"But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?"
Perhaps it would. See all the discussion of Bayes above.
"[...] Now we calculate the p-value, that is equal to the probability to get 14 or more heads in 20 flips of coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how is it likely that we have a fair coin)."
'of our hypothesis, assuming our model to be true', but essentially: yes. Large p-values indicate that the coin's behaviour is consistent with the hypothesis that it is fair. (They are also typically consistent with the hypothesis being false but so close to being true we do not have enough data to tell; see 'statistical power'.)
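For reference, the 0.058 figure quoted above is just the binomial tail probability:
sum(dbinom(14:20, size = 20, prob = 0.5))        # P(14 or more heads in 20 fair flips) ~ 0.0577
pbinom(13, size = 20, prob = 0.5, lower.tail = FALSE)   # same quantity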
"But if we want to estimate the probability of the model, why we do not calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?"
We actually don't calculate the probability of the experimental results given the hypothesis in this setup. After all, the probability is only about 0.176 of seeing exactly 10 heads when the hypothesis is true, and that's the most probable value. This isn't a quantity of interest at all.
It is also relevant that we don't usually estimate the probability of the model either. Both frequentist and Bayesian answers typically assume the model is true and make their inferences about its parameters. Indeed, not all Bayesians would even in principle be interested in the probability of the model, that is: the probability that the whole situation was well modelled by a binomial distribution. They might do a lot of model checking, but never actually ask how likely the binomial was in the space of other possible models. Bayesians who care about Bayes Factors are interested, others not so much. | Why do people use p-values instead of computing probability of the model given data? | "Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)."
but it doesn't. Not even roughly - this fudges an essential distinction.
The mode | Why do people use p-values instead of computing probability of the model given data?
"Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)."
but it doesn't. Not even roughly - this fudges an essential distinction.
The model is not specified, as Raskolnikov points out, but let's assume you mean a binomial model (independent coin tosses, fixed unknown coin bias). The hypothesis is the claim that the relevant parameter in this model, the bias or probability of heads, is 0.5.
"Having this probability (p-value) we want to judge our hypothesis (how likely it is)"
We may indeed want to make this judgement but a p-value will not (and was not designed to) help us do so.
"But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?"
Perhaps it would. See all the discussion of Bayes above.
"[...] Now we calculate the p-value, that is equal to the probability to get 14 or more heads in 20 flips of coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how is it likely that we have a fair coin)."
'of our hypothesis, assuming our model to be true', but essentially: yes. Large p-values indicate that the coin's behaviour is consistent with the hypothesis that it is fair. (They are also typically consistent with the hypothesis being false but so close to being true we do not have enough data to tell; see 'statistical power'.)
"But if we want to estimate the probability of the model, why we do not calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?"
We actually don't calculate the probability of the experimental results given the hypothesis in this setup. After all, the probability is only about 0.176 of seeing exactly 10 heads when the hypothesis is true, and that's the most probable value. This isn't a quantity of interest at all.
It is also relevant that we don't usually estimate the probability of the model either. Both frequentist and Bayesian answers typically assume the model is true and make their inferences about its parameters. Indeed, not all Bayesians would even in principle be interested in the probability of the model, that is: the probability that the whole situation was well modelled by a binomial distribution. They might do a lot of model checking, but never actually ask how likely the binomial was in the space of other possible models. Bayesians who care about Bayes Factors are interested, others not so much. | Why do people use p-values instead of computing probability of the model given data?
"Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)."
but it doesn't. Not even roughly - this fudges an essential distinction.
The mode |
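As a quick check on the numbers quoted in the answer above, both the one-sided p-value for 14 or more heads in 20 tosses of a fair coin and the probability of exactly 10 heads can be reproduced from the binomial distribution in base R (the printed values are rounded):
# P(X >= 14) for X ~ Binomial(20, 0.5): the quoted p-value of about 0.058
pbinom(13, size = 20, prob = 0.5, lower.tail = FALSE)
#> 0.05765915
# P(X = 10), the single most probable outcome under the hypothesis, about 0.176
dbinom(10, size = 20, prob = 0.5)
#> 0.1761971
# binom.test() wraps the same tail probability as a formal test
binom.test(14, 20, p = 0.5, alternative = "greater")$p.value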
5,388 | Why do people use p-values instead of computing probability of the model given data? | A side note to the other excellent answers: on occasion there are times we don't. For example, up until very recently, they were outright banned at the journal Epidemiology - now they are merely "strongly discouraged" and the editorial board devoted a tremendous amount of space to a discussion of them here: http://journals.lww.com/epidem/pages/collectiondetails.aspx?TopicalCollectionId=4 | Why do people use p-values instead of computing probability of the model given data? | A side note to the other excellent answers: on occasion there are times we don't. For example, up until very recently, they were outright banned at the journal Epidemiology - now they are merely "stro | Why do people use p-values instead of computing probability of the model given data?
A side note to the other excellent answers: on occasion there are times we don't. For example, up until very recently, they were outright banned at the journal Epidemiology - now they are merely "strongly discouraged" and the editorial board devoted a tremendous amount of space to a discussion of them here: http://journals.lww.com/epidem/pages/collectiondetails.aspx?TopicalCollectionId=4 | Why do people use p-values instead of computing probability of the model given data?
A side note to the other excellent answers: on occasion there are times we don't. For example, up until very recently, they were outright banned at the journal Epidemiology - now they are merely "stro |
5,389 | Why do people use p-values instead of computing probability of the model given data? | I will only add a few remarks; I agree with you that the overuse of $p$-values is harmful.
Some people in applied stats misinterpret $p$-values, notably by understanding them as the probability that the null hypothesis is true; cf. these papers: P Values are not Error Probabilities and Why We Don’t Really Know What "Statistical Significance" Means: A Major Educational Failure.
Another common misconception is that $p$-values reflect the size of the effect detected, or its potential for classification, when in fact they reflect both the sample size and the effect size. This leads some people to write papers explaining why variables shown to be "strongly associated" with a trait (i.e. with very small p-values) can still be poor classifiers, like this one...
To conclude, my opinion is that $p$-values are so widely used because of publication standards. In applied areas (biostats...) their size is sometimes the sole concern of some reviewers. | Why do people use p-values instead of computing probability of the model given data? | I will only add a few remarks; I agree with you that the overuse of $p$-values is harmful.
Some people in applied stats misinterpret $p$-values, notably understanding them as the probability that the | Why do people use p-values instead of computing probability of the model given data?
I will only add a few remarks; I agree with you that the overuse of $p$-values is harmful.
Some people in applied stats misinterpret $p$-values, notably by understanding them as the probability that the null hypothesis is true; cf. these papers: P Values are not Error Probabilities and Why We Don’t Really Know What "Statistical Significance" Means: A Major Educational Failure.
Another common misconception is that $p$-values reflect the size of the effect detected, or its potential for classification, when in fact they reflect both the sample size and the effect size. This leads some people to write papers explaining why variables shown to be "strongly associated" with a trait (i.e. with very small p-values) can still be poor classifiers, like this one...
To conclude, my opinion is that $p$-values are so widely used because of publication standards. In applied areas (biostats...) their size is sometimes the sole concern of some reviewers. | Why do people use p-values instead of computing probability of the model given data?
I will only add a few remarks; I agree with you that the overuse of $p$-values is harmful.
Some people in applied stats misinterpret $p$-values, notably understanding them as the probability that the |
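The point about $p$-values mixing effect size with sample size is easy to demonstrate with a small simulation; the 0.02 standard-deviation shift and the group size below are arbitrary choices for illustration, not taken from the cited papers:
set.seed(1)
n <- 1e6                          # very large groups
x <- rnorm(n, mean = 0)           # group 1
y <- rnorm(n, mean = 0.02)        # group 2: a tiny shift
t.test(x, y)$p.value              # essentially zero, i.e. "highly significant"
# ...yet the variable is nearly useless as a classifier of group membership
group <- c(rep(0, n), rep(1, n))
cor(group, c(x, y))^2             # proportion of variance explained, about 1e-4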
5,390 | Why do people use p-values instead of computing probability of the model given data? | Define probability. I mean it. Before we progress any further, we need to settle on terms.
An intuitive definition of probability is a measure of uncertainty. We are uncertain whether the next coin toss will come up heads or tails. That is uncertainty in the data $D$. We are also uncertain whether the coin is fair or not. That is uncertainty about the model $M$... or you can call uncertainty about the state of the world.
To arrive at the conditional distribution $P(M|D)$, you need to have the joint distribution $P(M,D)$ -- i.e., the knowledge of the whole population of coins in circulation, how many of them are forged, and how forged coins behave (which may depend on the way the coins are spun and caught in the air).
In the particular example of coins, this is at least conceptually possible -- the government figures are available on the coins that are supposed to be fair ($28\cdot10^9$ per year), or at least those with stable characteristics. As far as forged coins go, the scale of production of less than a million is probably not worth talking about, so $10^6/28\cdot10^9$ may be a probability that the coin you got from a cashier's register is unfair. Then you need to come up with a model of how the unfair coin works... and obtain the joint distribution, and condition on the data.
In practical real-world problems, say with medical conditions and the way they work, you may not be able to come up with any of these components of the joint distribution, and so you can't condition.
Bayesian modeling provides a way to simplify the models and come up with these joints $P(M,D)$. But the devil is in the details. If you say that the fair coin is the one with $p=0.5$, and then go ahead and specify a traditional Beta prior, and get the Beta conjugate posterior, then... surprise, surprise! $P(p=0.5)=0$ for either of these continuous distributions, no matter if your prior is $B(0.5,0.5)$ or $B(1000,1000)$. So you'd have to incorporate a point mass at $0.5$, give it a prior mass ($28\cdot10^9/(28\cdot10^9 + 10^6)$, say), and see if your data moves the posterior away from that point mass. This is a more complicated calculation that involve Metropolis-Hastings sampling rather than the more traditional Gibbs sampling.
Besides the difficulties in talking about what exactly the right models are, Bayesian methods have limited ways of dealing with model misspecification. If you don't like Gaussian errors, or you don't believe in independence of coin tosses (your hand gets tired after the first 10,000 or so tosses, so you don't toss it as high as the first 1,000 or so times, which may affect the probabilities), all that you can do in the Bayesian world is to build a more complicated model -- stick-breaking priors for normal mixtures, splines in probabilities over time, whatever. But there is no direct analogue to Huber sandwich standard errors that explicitly acknowledge that the model may be misspecified, and are prepared to account for that.
Going back to my first paragraph -- again, define probability. The formal definition is the trio $<\Omega,{\mathcal F},P>$. $\Omega$ is the space of possible outcomes (combinations of models and data). $\mathcal F$ is the $\sigma$-algebra of what can be measured on that space. $P$ is the probability measure / density attached to subsets $A\subset \Omega$, $A\in\mathcal F$ -- which have to be measureable for the mathematics of probability to work. In finite dimensions, most reasonable sets are measurable -- see Borel sets, I am not going to bore you with details. With the more interesting infinite spaces (those of curves and trajectories, for instance), things get hairy very quickly. If you have a random process $X_t, t\in[0,1]$ on a unit interval in time, then the set $\{ X_t > 0, t\in[0,0.5]\}$ is not measurable, despite its apparent simplicity. (Sets like $\{ X_t > 0, t\in\{t_1, t_2, \ldots, t_k\}\}$ are measurable for finite $k$, and in fact generate the required $\sigma$-algebra. But that is not enough, apparently.) So probabilities in large dimensions may get tricky even at the level of definitions, let alone computations. | Why do people use p-values instead of computing probability of the model given data? | Define probability. I mean it. Before we progress any further, we need to settle on terms.
An intuitive definition of probability is a measure of uncertainty. We are uncertain whether the next coin to | Why do people use p-values instead of computing probability of the model given data?
Define probability. I mean it. Before we progress any further, we need to settle on terms.
An intuitive definition of probability is a measure of uncertainty. We are uncertain whether the next coin toss will come up heads or tails. That is uncertainty in the data $D$. We are also uncertain whether the coin is fair or not. That is uncertainty about the model $M$... or you can call uncertainty about the state of the world.
To arrive at the conditional distribution $P(M|D)$, you need to have the joint distribution $P(M,D)$ -- i.e., the knowledge of the whole population of coins in circulation, how many of them are forged, and how forged coins behave (which may depend on the way the coins are spun and caught in the air).
In the particular example of coins, this is at least conceptually possible -- the government figures are available on the coins that are supposed to be fair ($28\cdot10^9$ per year), or at least those with stable characteristics. As far as forged coins go, the scale of production of less than a million is probably not worth talking about, so $10^6/28\cdot10^9$ may be a probability that the coin you got from a cashier's register is unfair. Then you need to come up with a model of how the unfair coin works... and obtain the joint distribution, and condition on the data.
In practical real-world problems, say with medical conditions and the way they work, you may not be able to come up with any of these components of the joint distribution, and so you can't condition.
Bayesian modeling provides a way to simplify the models and come up with these joints $P(M,D)$. But the devil is in the details. If you say that the fair coin is the one with $p=0.5$, and then go ahead and specify a traditional Beta prior, and get the Beta conjugate posterior, then... surprise, surprise! $P(p=0.5)=0$ for either of these continuous distributions, no matter if your prior is $B(0.5,0.5)$ or $B(1000,1000)$. So you'd have to incorporate a point mass at $0.5$, give it a prior mass ($28\cdot10^9/(28\cdot10^9 + 10^6)$, say), and see if your data moves the posterior away from that point mass. This is a more complicated calculation that involve Metropolis-Hastings sampling rather than the more traditional Gibbs sampling.
Besides the difficulties in talking about what exactly the right models are, Bayesian methods have limited ways of dealing with model misspecification. If you don't like Gaussian errors, or you don't believe in independence of coin tosses (your hand gets tired after the first 10,000 or so tosses, so you don't toss it as high as the first 1,000 or so times, which may affect the probabilities), all that you can do in the Bayesian world is to build a more complicated model -- stick-breaking priors for normal mixtures, splines in probabilities over time, whatever. But there is no direct analogue to Huber sandwich standard errors that explicitly acknowledge that the model may be misspecified, and are prepared to account for that.
Going back to my first paragraph -- again, define probability. The formal definition is the trio $<\Omega,{\mathcal F},P>$. $\Omega$ is the space of possible outcomes (combinations of models and data). $\mathcal F$ is the $\sigma$-algebra of what can be measured on that space. $P$ is the probability measure / density attached to subsets $A\subset \Omega$, $A\in\mathcal F$ -- which have to be measureable for the mathematics of probability to work. In finite dimensions, most reasonable sets are measurable -- see Borel sets, I am not going to bore you with details. With the more interesting infinite spaces (those of curves and trajectories, for instance), things get hairy very quickly. If you have a random process $X_t, t\in[0,1]$ on a unit interval in time, then the set $\{ X_t > 0, t\in[0,0.5]\}$ is not measurable, despite its apparent simplicity. (Sets like $\{ X_t > 0, t\in\{t_1, t_2, \ldots, t_k\}\}$ are measurable for finite $k$, and in fact generate the required $\sigma$-algebra. But that is not enough, apparently.) So probabilities in large dimensions may get tricky even at the level of definitions, let alone computations. | Why do people use p-values instead of computing probability of the model given data?
Define probability. I mean it. Before we progress any further, we need to settle on terms.
An intuitive definition of probability is a measure of uncertainty. We are uncertain whether the next coin to |
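For the coin example sketched in this answer, the point-mass-plus-continuous prior does not actually require MCMC: with a Beta prior on a forged coin's bias, the marginal likelihood is beta-binomial and the posterior probability of fairness has a closed form. In the sketch below the prior mass comes from the mint figures quoted above, while the Beta(1,1) prior and the data (60 heads in 100 tosses) are purely illustrative assumptions:
k <- 60; n <- 100                         # hypothetical data: 60 heads in 100 tosses
prior_fair <- 28e9 / (28e9 + 1e6)         # prior mass on "fair", as in the answer
# marginal likelihood of the data under each hypothesis
m_fair   <- dbinom(k, n, 0.5)                      # point mass at p = 0.5
m_forged <- choose(n, k) * beta(k + 1, n - k + 1)  # Beta(1,1) prior integrated out
post_fair <- prior_fair * m_fair /
  (prior_fair * m_fair + (1 - prior_fair) * m_forged)
post_fair                                 # stays essentially 1: the data barely move the point mass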
5,391 | Why do people use p-values instead of computing probability of the model given data? | But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment?
Because we don't know how. There is an infinite number of possible models, and their probability space is not defined.
Here's a practical example. Let's say I want to forecast US GDP. I get the time series, and fit a model. What is the probability that this model is true?
So, let's actually fit a random walk model into GDP series:
$$\Delta\ln y_t=\mu+e_t$$
where $\mu$ is the growth rate and $e_t$ is a random error. My code below does just that; it also produces the forecast (red) and compares it to historical data (blue).
However, who said that GDP is a random walk process? What if it was a trend process? So, let's fit the trend instead: $$\ln y_t = c t+ e_t$$
where $c$ is the slope of the time trend. The forecast using a trend model is shown on the same chart (yellow).
Now, how would you calculate the probability that my random walk model is true? Within MLE we could calculate the likelihood of the drift $\mu$ given the data set, but that's not the probability. Second, and more importantly, how would you calculate the probability that the model is a random walk with this drift, knowing that it could also be a trend model? Any number of other models could produce this kind of dynamic. | Why do people use p-values instead of computing probability of the model given data? | But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment?
Because we don't know how. There's infinite number of model possible | Why do people use p-values instead of computing probability of the model given data?
But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment?
Because we don't know how. There is an infinite number of possible models, and their probability space is not defined.
Here's a practical example. Let's say I want to forecast US GDP. I get the time series, and fit a model. What is the probability that this model is true?
So, let's actually fit a random walk model into GDP series:
$$\Delta\ln y_t=\mu+e_t$$
where $\mu$ is the growth rate and $e_t$ is a random error. My code below does just that; it also produces the forecast (red) and compares it to historical data (blue).
However, who said that GDP is a random walk process? What if it was a trend process? So, let's fit the trend instead: $$\ln y_t = c t+ e_t$$
where $c$ is the slope of the time trend. The forecast using a trend model is shown on the same chart (yellow).
Now, how would you calculate the probability that my random walk model is true? Within MLE we could calculate the likelihood of the drift $\mu$ given the data set, but that's not the probability. Second, and more importantly, how would you calculate the probability that the model is a random walk with this drift, knowing that it could also be a trend model? Any number of other models could produce this kind of dynamic. | Why do people use p-values instead of computing probability of the model given data?
But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment?
Because we don't know how. There's infinite number of model possible |
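The code the answer refers to is not reproduced in this extract. A minimal base-R sketch of the two competing fits is given below; it is not the author's original code, and the gdp series is a simulated stand-in (any positive quarterly series could be substituted):
set.seed(42)
gdp  <- 100 * exp(cumsum(rnorm(120, mean = 0.005, sd = 0.01)))  # stand-in for US GDP levels
lgdp <- log(gdp)
h    <- 8                                    # forecast horizon in quarters
# random walk with drift: Delta log y_t = mu + e_t
mu    <- mean(diff(lgdp))
fc_rw <- tail(lgdp, 1) + mu * (1:h)
# deterministic trend: log y_t = c * t + intercept + e_t
tt    <- seq_along(lgdp)
fit_t <- lm(lgdp ~ tt)
fc_tr <- predict(fit_t, newdata = data.frame(tt = length(lgdp) + 1:h))
# two very different models, similar in-sample fit, and no probability attached to either
plot(tt, lgdp, type = "l", xlim = c(1, length(lgdp) + h), col = "blue", ylab = "log GDP")
lines(length(lgdp) + 1:h, fc_rw, col = "red")     # random walk forecast
lines(length(lgdp) + 1:h, fc_tr, col = "yellow")  # trend forecast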
5,392 | Why do people use p-values instead of computing probability of the model given data? | IMHO, confidence intervals are a better method of expressing results. This is especially true when comparing results to be included in meta analysis and for "not significant" answers. This avoids the all too common misrepresentation of not significant results as significantly insignificant. I don't know in which "camp" that puts me, frequentist or Bayesian, and frankly don't care. What I am saying is that it makes a rather important distinction just how wide a 95% confidence interval is, for example, when comparing whether the 115 mm Hg mean blood pressure of a test series is different from a (control) mean of 120 mm Hg, it makes a world of difference if the 95% confidence interval for the difference is $\pm$ 100 mm Hg or $\pm$ 10 mm Hg, because when $\pm$ 100 mm Hg we haven't excluded anything practical; a blood pressure of 20 mm Hg is only achievable several minutes after the heart has stopped, and a blood pressure of 220 mm Hg is also problematic. Only in the latter case, does our result, 110 to 130 mm Hg as a CI, exclude anything pragmatic.
One might be taken aback to realize just how many published results from averaging three wildly different values do not seem to be different from some assumed value when the confidence in saying so admits the entire range of possible values, and a not significant $p$-value does not tell that story, such that the discussion centering on interpretation of H0 seems non-contributory to me. What are your thoughts? | Why do people use p-values instead of computing probability of the model given data? | IMHO, confidence intervals are a better method of expressing results. This is especially true when comparing results to be included in meta analysis and for "not significant" answers. This avoids the | Why do people use p-values instead of computing probability of the model given data?
IMHO, confidence intervals are a better method of expressing results. This is especially true when comparing results to be included in meta analysis and for "not significant" answers. This avoids the all too common misrepresentation of not significant results as significantly insignificant. I don't know in which "camp" that puts me, frequentist or Bayesian, and frankly don't care. What I am saying is that it makes a rather important distinction just how wide a 95% confidence interval is, for example, when comparing whether the 115 mm Hg mean blood pressure of a test series is different from a (control) mean of 120 mm Hg, it makes a world of difference if the 95% confidence interval for the difference is $\pm$ 100 mm Hg or $\pm$ 10 mm Hg, because when $\pm$ 100 mm Hg we haven't excluded anything practical; a blood pressure of 20 mm Hg is only achievable several minutes after the heart has stopped, and a blood pressure of 220 mm Hg is also problematic. Only in the latter case, does our result, 110 to 130 mm Hg as a CI, exclude anything pragmatic.
One might be taken aback to realize just how many published results from averaging three wildly different values do not seem to be different from some assumed value when the confidence in saying so admits the entire range of possible values, and a not significant $p$-value does not tell that story, such that the discussion centering on interpretation of H0 seems non-contributory to me. What are your thoughts? | Why do people use p-values instead of computing probability of the model given data?
IMHO, confidence intervals are a better method of expressing results. This is especially true when comparing results to be included in meta analysis and for "not significant" answers. This avoids the |
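A small R illustration of the point being made in this answer, with simulated readings (all numbers hypothetical): both samples are centred at 115 mm Hg, but only the second yields an interval narrow enough to say anything pragmatic about the comparison with a 120 mm Hg control mean.
set.seed(7)
control  <- 120
bp_small <- rnorm(3,  mean = 115, sd = 40)  # three wildly different readings
bp_large <- rnorm(30, mean = 115, sd = 8)   # a tighter, larger series
t.test(bp_small, mu = control)$conf.int     # roughly +/- 100 mm Hg: excludes nothing practical
t.test(bp_large, mu = control)$conf.int     # a few mm Hg wide: pragmatically informative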
5,393 | How do I avoid overlapping labels in an R plot? [closed] | Check out the new package ggrepel.
ggrepel provides geoms for ggplot2 to repel overlapping text labels. It works both for geom_text and geom_label.
Figure is taken from this blog post. | How do I avoid overlapping labels in an R plot? [closed] | Check out the new package ggrepel.
ggrepel provides geoms for ggplot2 to repel overlapping text labels. It works both for geom_text and geom_label.
Figure is taken from this blog post. | How do I avoid overlapping labels in an R plot? [closed]
Check out the new package ggrepel.
ggrepel provides geoms for ggplot2 to repel overlapping text labels. It works both for geom_text and geom_label.
Figure is taken from this blog post. | How do I avoid overlapping labels in an R plot? [closed]
Check out the new package ggrepel.
ggrepel provides geoms for ggplot2 to repel overlapping text labels. It works both for geom_text and geom_label.
Figure is taken from this blog post. |
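A minimal usage sketch (the mtcars example is my own, not taken from the package announcement):
library(ggplot2)
library(ggrepel)
d <- mtcars
d$name <- rownames(mtcars)
ggplot(d, aes(wt, mpg, label = name)) +
  geom_point() +
  geom_text_repel()    # or geom_label_repel() for boxed labels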
5,394 | How do I avoid overlapping labels in an R plot? [closed] | The directlabels package does that. From its web page:
This package is an attempt to make direct labeling a reality in
everyday statistical practice by making available a body of useful
functions that make direct labeling of common plots easy to do with
high-level plotting systems such as lattice and ggplot2.
It might not always be possible for dense plots, though.
Here is a short example:
library(lattice)       # provides xyplot()
library(directlabels)  # provides direct.label()
set.seed(123)
a <- c(rnorm(10,-3,2),rnorm(10,3,2))
b <- c(rnorm(10,-3,2),rnorm(10,3,2))
dfr <- data.frame(a,b)
dfr$t <- c(paste("A",1:10,sep=""),paste("B",1:10,sep=""))
direct.label(xyplot(b~a,dfr,groups=t, col="black"))
I did manage to get rid of the point colouring with col="black", but not the labels. | How do I avoid overlapping labels in an R plot? [closed] | The directlabels package does that. From its web page:
This package is an attempt to make direct labeling a reality in
everyday statistical practice by making available a body of useful
functions | How do I avoid overlapping labels in an R plot? [closed]
The directlabels package does that. From its web page:
This package is an attempt to make direct labeling a reality in
everyday statistical practice by making available a body of useful
functions that make direct labeling of common plots easy to do with
high-level plotting systems such as lattice and ggplot2.
It might not always be possible for dense plots, though.
Here is a short example:
library(lattice)       # provides xyplot()
library(directlabels)  # provides direct.label()
set.seed(123)
a <- c(rnorm(10,-3,2),rnorm(10,3,2))
b <- c(rnorm(10,-3,2),rnorm(10,3,2))
dfr <- data.frame(a,b)
dfr$t <- c(paste("A",1:10,sep=""),paste("B",1:10,sep=""))
direct.label(xyplot(b~a,dfr,groups=t, col="black"))
I did manage to get rid of the point colouring with col="black", but not the labels. | How do I avoid overlapping labels in an R plot? [closed]
The directlabels package does that. From its web page:
This package is an attempt to make direct labeling a reality in
everyday statistical practice by making available a body of useful
functions |
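Since the quoted blurb mentions ggplot2 as well as lattice, the same idea carries over; a rough sketch reusing the dfr object defined above (treat this as an illustration of the intended usage rather than tested code):
library(ggplot2)
library(directlabels)
p <- ggplot(dfr, aes(a, b, colour = t)) + geom_point()
direct.label(p)    # one label per group, placed next to its points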
5,395 | How do I avoid overlapping labels in an R plot? [closed] | I'd suggest you take a look at the wordcloud package. I know this package focuses not exactly on the points but on the labels themselves, and also the style seems to be rather fixed. But still, the results I got from using it were pretty stunning. Also note that the package version in question was released about the time you asked the question, so it's still very new.
http://blog.fellstat.com/?cat=11 | How do I avoid overlapping labels in an R plot? [closed] | I'd suggest you take a look at the wordcloud package. I know this package focuses not exactly on the points but on the labels themselves, and also the style seems to be rather fixed. But still, the re | How do I avoid overlapping labels in an R plot? [closed]
I'd suggest you take a look at the wordcloud package. I know this package focuses not exactly on the points but on the labels themselves, and also the style seems to be rather fixed. But still, the results I got from using it were pretty stunning. Also note that the package version in question was released about the time you asked the question, so it's still very new.
http://blog.fellstat.com/?cat=11 | How do I avoid overlapping labels in an R plot? [closed]
I'd suggest you take a look at the wordcloud package. I know this package focuses not exactly on the points but on the labels themselves, and also the style seems to be rather fixed. But still, the re |
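If I recall the package's interface correctly, the function relevant to labelling scatterplot points is textplot(), which lays the labels out so that they do not overlap; the argument names below are an assumption from memory, so check ?textplot before relying on them:
library(wordcloud)
set.seed(1)
x   <- rnorm(30)
y   <- rnorm(30)
lab <- paste0("item", 1:30)
# textplot() draws the labels with a layout algorithm that avoids overlaps
textplot(x, y, lab, cex = 0.8)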
5,396 | How do I avoid overlapping labels in an R plot? [closed] | I ran into a similar problem with several of the plots I have been working with and wrote a basic package that uses force field simulation to adjust object locations. The advantage over some of the above-cited solutions is the dynamic adjustment for relative object proximity in 2D. While much improvement is possible, including heuristics and integration with ggplot, etc. it seems to get the task accomplished. The following illustrates the functionality:
install.packages("FField", type = "source")
install.packages("ggplot2")
install.packages("gridExtra")
library(FField)
FFieldPtRepDemo()
For now there are no heuristics for a variety of areas and point distributions, as the solution met my needs and I wanted to get something helpful to folks out quickly, but I'll add these in the medium term. At this time I recommend scaling charts to 100x100 and back and slightly tweaking the default attraction and repulsion parameters as warranted. | How do I avoid overlapping labels in an R plot? [closed] | I ran into a similar problem with several of the plots I have been working with and wrote a basic package that uses force field simulation to adjust object locations. The advantage over some of the ab | How do I avoid overlapping labels in an R plot? [closed]
I ran into a similar problem with several of the plots I have been working with and wrote a basic package that uses force field simulation to adjust object locations. The advantage over some of the above-cited solutions is the dynamic adjustment for relative object proximity in 2D. While much improvement is possible, including heuristics and integration with ggplot, etc. it seems to get the task accomplished. The following illustrates the functionality:
install.packages("FField", type = "source")
install.packages("ggplot2")
install.packages("gridExtra")
library(FField)
FFieldPtRepDemo()
For now there are no heuristics for a variety of areas and point distributions, as the solution met my needs and I wanted to get something helpful to folks out quickly, but I'll add these in the medium term. At this time I recommend scaling charts to 100x100 and back and slightly tweaking the default attraction and repulsion parameters as warranted. | How do I avoid overlapping labels in an R plot? [closed]
I ran into a similar problem with several of the plots I have been working with and wrote a basic package that uses force field simulation to adjust object locations. The advantage over some of the ab |
5,397 | How do I avoid overlapping labels in an R plot? [closed] | In the event that you simply cannot get the labels to work correctly as produced by R, keep in mind you can always save the graphs in a vector format (like .pdf) and pull them into an editing program like InkScape or Adobe Illustrator. | How do I avoid overlapping labels in an R plot? [closed] | In the event that you simply cannot get the labels to work correctly as produced by R, keep in mind you can always save the graphs in a vector format (like .pdf) and pull them into an editing program | How do I avoid overlapping labels in an R plot? [closed]
In the event that you simply cannot get the labels to work correctly as produced by R, keep in mind you can always save the graphs in a vector format (like .pdf) and pull them into an editing program like InkScape or Adobe Illustrator. | How do I avoid overlapping labels in an R plot? [closed]
In the event that you simply cannot get the labels to work correctly as produced by R, keep in mind you can always save the graphs in a vector format (like .pdf) and pull them into an editing program |
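For completeness, a short sketch of that export step (file names are placeholders); the vector file can then be opened directly in Inkscape or Illustrator and the labels nudged by hand:
# base graphics: open a PDF device, draw, close it
pdf("labelled_plot.pdf", width = 7, height = 5)
plot(1:10, 1:10)
text(1:10, 1:10, labels = paste("point", 1:10), pos = 3)
dev.off()
# ggplot2 equivalent: ggsave() picks the vector format from the file extension
# ggsave("labelled_plot.pdf", width = 7, height = 5)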
5,398 | How do I avoid overlapping labels in an R plot? [closed] | A couple of additional tools to look at in R:
The spread.labels function in the plotrix package
thigmophobe.labels in the plotrix package
the spread.labs function in the TeachingDemos package
the TkIdentify function in the TeachingDemos package
These won't do everything for you, but one of them may be part of a solution. | How do I avoid overlapping labels in an R plot? [closed] | A couple of additional tools to look at in R:
The spread.labels function in the plotrix package
thigmophobe.labels in the plotrix package
the spread.labs function in the TeachingDemos package
the Tk | How do I avoid overlapping labels in an R plot? [closed]
A couple of additional tools to look at in R:
The spread.labels function in the plotrix package
thigmophobe.labels in the plotrix package
the spread.labs function in the TeachingDemos package
the TkIdentify function in the TeachingDemos package
These won't do everything for you, but one of them may be part of a solution. | How do I avoid overlapping labels in an R plot? [closed]
A couple of additional tools to look at in R:
The spread.labels function in the plotrix package
thigmophobe.labels in the plotrix package
the spread.labs function in the TeachingDemos package
the Tk |
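A small sketch of the plotrix route on made-up data (the data and label names are invented for illustration):
library(plotrix)
set.seed(2)
x   <- rnorm(15)
y   <- rnorm(15)
lab <- paste0("gene", 1:15)
plot(x, y, pch = 19)
# thigmophobe.labels() places each label on the side of its point
# that faces away from the nearest neighbouring point
thigmophobe.labels(x, y, labels = lab)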
5,399 | AIC guidelines in model selection | AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger difference in either AIC or BIC indicates stronger evidence for one model over the other (the lower the better). It's just that the AIC doesn't penalize the number of parameters as strongly as BIC. There is also a correction to the AIC (the AICc) that is used for smaller sample sizes. More information on the comparison of AIC/BIC can be found here. | AIC guidelines in model selection | AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger difference in either AIC or BIC indicates stronger evidence for one model over the other (the lower the bette | AIC guidelines in model selection
AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger difference in either AIC or BIC indicates stronger evidence for one model over the other (the lower the better). It's just that the AIC doesn't penalize the number of parameters as strongly as BIC. There is also a correction to the AIC (the AICc) that is used for smaller sample sizes. More information on the comparison of AIC/BIC can be found here. | AIC guidelines in model selection
AIC and BIC hold the same interpretation in terms of model comparison. That is, the larger difference in either AIC or BIC indicates stronger evidence for one model over the other (the lower the bette |
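As a concrete illustration of the comparison being described, with two nested regressions on the built-in mtcars data standing in for the asker's models:
m1 <- lm(mpg ~ wt,      data = mtcars)
m2 <- lm(mpg ~ wt + hp, data = mtcars)
AIC(m1, m2)   # lower AIC is preferred
BIC(m1, m2)   # BIC penalizes the extra parameter more heavily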
5,400 | AIC guidelines in model selection | You are talking about two different things and you are mixing them up. In the first case you have two models (1 and 2) and you obtained their AICs, say $AIC_1$ and $AIC_2$. If you want to compare these two models based on their AICs, then the model with the lower AIC would be the preferred one, i.e. if $AIC_1< AIC_2$ then you pick model 1, and vice versa.
In the 2nd case, you have a set of candidate models like models $(1, 2, ..., n)$ and for each model you calculate AIC differences as $\Delta_i= AIC_i- AIC_{min}$, where $AIC_i$ is the AIC for the $i$th model and $AIC_{min}$ is the minimum AIC among all the models. Now a model with $\Delta_i >10$ has no support and can be omitted from further consideration, as explained in Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach by Kenneth P. Burnham, David R. Anderson, page 71. So the larger the $\Delta_i$, the weaker the support for your model. Here the best model has $\Delta_i\equiv\Delta_{min}\equiv0.$ | AIC guidelines in model selection | You are talking about two different things and you are mixing them up. In the first case you have two models (1 and 2) and you obtained their AIC like $AIC_1$ and $AIC_2$. IF you want to compare these | AIC guidelines in model selection
You are talking about two different things and you are mixing them up. In the first case you have two models (1 and 2) and you obtained their AICs, say $AIC_1$ and $AIC_2$. If you want to compare these two models based on their AICs, then the model with the lower AIC would be the preferred one, i.e. if $AIC_1< AIC_2$ then you pick model 1, and vice versa.
In the 2nd case, you have a set of candidate models like models $(1, 2, ..., n)$ and for each model you calculate AIC differences as $\Delta_i= AIC_i- AIC_{min}$, where $AIC_i$ is the AIC for the $i$th model and $AIC_{min}$ is the minimum AIC among all the models. Now a model with $\Delta_i >10$ has no support and can be omitted from further consideration, as explained in Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach by Kenneth P. Burnham, David R. Anderson, page 71. So the larger the $\Delta_i$, the weaker the support for your model. Here the best model has $\Delta_i\equiv\Delta_{min}\equiv0.$ | AIC guidelines in model selection
You are talking about two different things and you are mixing them up. In the first case you have two models (1 and 2) and you obtained their AIC like $AIC_1$ and $AIC_2$. IF you want to compare these |
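The $\Delta_i$ bookkeeping described above is a few lines of R once the candidate set is fitted; the models below are arbitrary examples, and the Akaike weights are added only as the usual companion quantity from the same book:
candidates <- list(
  m1 = lm(mpg ~ wt,             data = mtcars),
  m2 = lm(mpg ~ wt + hp,        data = mtcars),
  m3 = lm(mpg ~ wt + hp + qsec, data = mtcars)
)
aic   <- sapply(candidates, AIC)
delta <- aic - min(aic)                              # Delta_i = AIC_i - AIC_min; best model has 0
w     <- exp(-0.5 * delta) / sum(exp(-0.5 * delta))  # Akaike weights
round(cbind(AIC = aic, delta = delta, weight = w), 3)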