Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
3,201
Why is the validation accuracy fluctuating?
Adding to the answer by @dk14: if you are still seeing fluctuations after properly regularising your model, these are possible reasons. Using a random sample from your validation set: your validation set is different at each evaluation step, and so is your validation loss. Using a weighted loss function (common for highly imbalanced class problems): at the training step you weight the loss function by the class weights, while at the dev step you compute the unweighted loss. In that case, even though your network is converging, you may see large fluctuations in the validation loss after each training step; if you look at the bigger picture, you can see that the network is actually converging to a minimum, with the fluctuations dying out (see the attached images for one such example).
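A minimal base-R sketch of the second point (the predictions, labels, and class weights below are made up for illustration): the same predictions produce different loss values when class weights are applied at the training step but not at the validation step.

    bce <- function(p, y, w = rep(1, length(y)))        # (weighted) binary cross-entropy
      -sum(w * (y * log(p) + (1 - y) * log(1 - p))) / sum(w)

    y <- c(1, 0, 0, 0, 0)              # imbalanced mini-batch: one positive, four negatives
    p <- c(0.6, 0.2, 0.1, 0.3, 0.2)    # some model predictions
    w <- ifelse(y == 1, 4, 1)          # up-weight the rare positive class at train time

    bce(p, y, w)                       # weighted loss, as computed at the training step
    bce(p, y)                          # unweighted loss, as computed at the validation step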
3,202
Why is the validation accuracy fluctuating?
Your validation accuracy on a (presumably) binary classification problem is "fluctuating" around 50%, which means your model is giving completely random predictions (sometimes it guesses a few samples more correctly, sometimes a few fewer). In other words, your model is no better than flipping a coin. The reason the validation loss is more stable is that it is a continuous function: it can distinguish that a prediction of 0.9 for a positive sample is more correct than a prediction of 0.51. For accuracy, you round these continuous predictions to $\{0;1\}$ and simply compute the percentage of correct predictions. Now, since your model is guessing, it is most likely predicting values near 0.5 for all samples; say a sample gets 0.49 after one epoch and 0.51 in the next. From the loss perspective the incorrectness of the prediction barely changed, whereas the accuracy is sensitive even to these small differences. Anyway, as others have already pointed out, your model is experiencing severe overfitting. My guess is that your problem is too complicated, i.e. it is very difficult to extract the desired information from your data, and such a simple end-to-end trained 4-layer conv-net has no chance of learning it.
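A tiny base-R illustration (with made-up numbers) of why the accuracy jumps while the loss barely moves when predictions hover around 0.5:

    log_loss <- function(p, y) -mean(y * log(p) + (1 - y) * log(1 - p))
    y <- 1                                           # a single positive sample
    c(loss_at_0.49 = log_loss(0.49, y),              # ~0.71
      loss_at_0.51 = log_loss(0.51, y))              # ~0.67: the loss barely changes
    c(acc_at_0.49 = as.numeric(round(0.49) == y),    # 0: counted as wrong
      acc_at_0.51 = as.numeric(round(0.51) == y))    # 1: counted as right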
3,203
Why is the validation accuracy fluctuating?
Definitely overfitting. The gap between accuracy on the training data and the test data shows you have overfitted on the training set. Maybe regularization can help.
3,204
Why is the validation accuracy fluctuating?
There are a few things to try in your situation. First, try increasing the batch size, which keeps mini-batch SGD from wandering as wildly. Second, tune the learning rate, probably setting it smaller. Third, try a different optimizer, for instance Adam or RMSProp, which adapt the learning rate per parameter. If possible, try augmenting your data. Lastly, try Bayesian neural networks via the dropout approximation, a very interesting work by Yarin Gal: https://arxiv.org/abs/1506.02158
3,205
Why is the validation accuracy fluctuating?
Have you tried a smaller network? Considering your training accuracy can reach >.99, your network seems to have enough connections to fully model your data, but you may have extraneous connections that are learning randomly (i.e. overfitting). In my experience, I've gotten the holdout validation accuracy to stabilize with a smaller network by trying various networks such as ResNet, VGG, and even simpler networks.
3,206
What is the difference between a neural network and a deep belief network?
"Neural networks" is a term usually used to refer to feedforward neural networks. Deep Neural Networks are feedforward Neural Networks with many layers. A Deep belief network is not the same as a Deep Neural Network. As you have pointed out a deep belief network has undirected connections between some layers. This means that the topology of the DNN and DBN is different by definition. The undirected layers in the DBN are called Restricted Boltzmann Machines. This layers can be trained using an unsupervised learning algorithm (Contrastive Divergence) that is very fast (Here's a link! with details). Some more comments: The solutions obtained with deeper neural networks correspond to solutions that perform worse than the solutions obtained for networks with 1 or 2 hidden layers. As the architecture gets deeper, it becomes more difficult to obtain good generalization using a Deep NN. In 2006 Hinton discovered that much better results could be achieved in deeper architectures when each layer (RBM) is pre-trained with an unsupervised learning algorithm (Contrastive Divergence). Then the Network can be trained in a supervised way using backpropagation in order to "fine-tune" the weights.
3,207
What is the difference between a neural network and a deep belief network?
"A Deep Neural Network is a feed-forward, artificial neural network that has more than one layer of hidden units between its inputs and its outputs. Each hidden unit, $j$, typically uses the logistic function to map its total input from the layer below,$x_j$, to the scalar state, $y_j$ that it sends to the layer above. (Ref. (1))". That said, as mentioned by David: "deep belief networks have a undirected connections between the top two layers, like in an RBM", which is in contrast to standard feed-forward neural networks. In general, the main issue in a DNN regards the training of it that is definitely more involved that a single layer NN. (I am not working on NNs it just happened I read the paper recently.) Reference: 1. Deep Neural Networks for Acoustic Modeling in Speech Recognition, by Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath,, and Brian Kingsbury in the IEEE Signal Processing Magazine [82] Nov. 2012 (Link to Original Paper in MSR)
3,208
Proper way of using recurrent neural network for time series analysis
What you describe is in fact a "sliding time window" approach and is different from recurrent networks. You can use this technique with any regression algorithm. There is a huge limitation to this approach: events in the inputs can only be correlated with other inputs/outputs which lie at most t timesteps apart, where t is the size of the window. E.g., you can think of it as a Markov chain of order t. RNNs don't suffer from this in theory; in practice, however, learning is difficult.

It is best to illustrate an RNN in contrast to a feedforward network. Consider the (very) simple feedforward network $y = Wx$ where $y$ is the output, $W$ is the weight matrix, and $x$ is the input. Now, we use a recurrent network. Now we have a sequence of inputs, so we will denote the inputs by $x^{i}$ for the $i$th input. The corresponding $i$th output is then calculated via $y^{i} = Wx^i + W_ry^{i-1}$. Thus, we have another weight matrix $W_r$ which incorporates the output at the previous step linearly into the current output.

This is of course a simple architecture. Most common is an architecture where you have a hidden layer which is recurrently connected to itself. Let $h^i$ denote the hidden layer at timestep $i$. The formulas are then: $$h^0 = 0$$ $$h^i = \sigma(W_1x^i + W_rh^{i-1})$$ $$y^i = W_2h^i$$ Where $\sigma$ is a suitable non-linearity/transfer function like the sigmoid. $W_1$ and $W_2$ are the connecting weights between the input and the hidden layer, and between the hidden and the output layer. $W_r$ represents the recurrent weights. (A diagram of the structure accompanies the original answer.)
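To make the formulas concrete, here is a minimal base-R sketch of that forward pass; the shapes and random example values are assumptions of this sketch, not part of the original answer.

    simple_rnn <- function(X, W1, Wr, W2) {
      # X: T x d matrix whose rows are x^1 ... x^T; W1: k x d; Wr: k x k; W2: m x k
      sigma <- function(z) 1 / (1 + exp(-z))   # sigmoid transfer function
      h <- rep(0, nrow(W1))                    # h^0 = 0
      Y <- matrix(NA_real_, nrow(X), nrow(W2))
      for (i in seq_len(nrow(X))) {
        h <- sigma(W1 %*% X[i, ] + Wr %*% h)   # h^i = sigma(W1 x^i + Wr h^{i-1})
        Y[i, ] <- W2 %*% h                     # y^i = W2 h^i
      }
      Y
    }

    set.seed(1)
    simple_rnn(X  = matrix(rnorm(20), 10, 2),  # 10 timesteps of 2-dimensional input
               W1 = matrix(rnorm(6), 3, 2),
               Wr = matrix(rnorm(9), 3, 3),
               W2 = matrix(rnorm(3), 1, 3))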
3,209
Proper way of using recurrent neural network for time series analysis
You may also consider simply using a number of transforms of the time series for the input data. Just for one example, the inputs could be:
- the most recent interval value (7)
- the next most recent interval value (6)
- the delta between most recent and next most recent (7-6=1)
- the third most recent interval value (5)
- the delta between the second and third most recent (6-5=1)
- the average of the last three intervals ((7+6+5)/3=6)
So, if your inputs to a conventional neural network were these six pieces of transformed data, it would not be a difficult task for an ordinary backpropagation algorithm to learn the pattern. You would have to code for the transforms that take the raw data and turn it into the above 6 inputs to your neural network, however.
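A small base-R sketch of such a transform step (the example series and feature names are made up for illustration):

    series <- c(3, 4, 5, 6, 7)                    # raw data, most recent value last
    n <- length(series)
    features <- c(
      last   = series[n],                         # most recent interval value (7)
      prev   = series[n - 1],                     # next most recent interval value (6)
      delta1 = series[n] - series[n - 1],         # 7 - 6 = 1
      third  = series[n - 2],                     # third most recent interval value (5)
      delta2 = series[n - 1] - series[n - 2],     # 6 - 5 = 1
      mean3  = mean(series[(n - 2):n])            # (7 + 6 + 5) / 3 = 6
    )
    features                                      # the six inputs for the network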
3,210
Proper way of using recurrent neural network for time series analysis
Another possibility is Historical Consistent Neural Networks (HCNN). This architecture might be more appropriate for the above-mentioned setup because it eliminates the often arbitrary distinction between input and output variables and instead tries to replicate the full underlying dynamics of the whole system via training with all observables. When I was working for Siemens I published a paper on this architecture in a book by Springer Verlag: Zimmermann, Grothmann, Tietz, von Jouanne-Diedrich: Market Modeling, Forecasting and Risk Analysis with Historical Consistent Neural Networks.

Just to give an idea of the paradigm, here is a short excerpt:

In this article, we present a new type of recurrent NN, called historical consistent neural network (HCNN). HCNNs allow the modeling of highly-interacting non-linear dynamical systems across multiple time scales. HCNNs do not draw any distinction between inputs and outputs, but model observables embedded in the dynamics of a large state space. [...] The RNN is used to model and forecast an open dynamic system using a non-linear regression approach. Many real-world technical and economic applications must however be seen in the context of large systems in which various (non-linear) dynamics interact with each other in time. Projected on a model, this means that we do not differentiate between inputs and outputs but speak about observables. Due to the partial observability of large systems, we need hidden states to be able to explain the dynamics of the observables. Observables and hidden variables should be treated by the model in the same manner. The term observables embraces the input and output variables (i.e. $Y_\tau := (y_\tau, u_\tau)$). If we are able to implement a model in which the dynamics of all of the observables can be described, we will be in a position to close the open system.

...and from the conclusion:

The joint modeling of hidden and observed variables in large recurrent neural networks provides new prospects for planning and risk management. The ensemble approach based on HCNN offers an alternative approach to forecasting of future probability distributions. HCNNs give a perfect description of the dynamic of the observables in the past. However, the partial observability of the world results in a non-unique reconstruction of the hidden variables and thus, different future scenarios. Since the genuine development of the dynamic is unknown and all paths have the same probability, the average of the ensemble may be regarded as the best forecast, whereas the bandwidth of the distribution describes the market risk. Today, we use HCNN forecasts to predict prices for energy and precious metals to optimize the timing of procurement decisions. Work currently in progress concerns the analysis of the properties of the ensemble and the implementation of these concepts in practical risk management and financial market applications.

The paper is now available in full here: Zimmermann, Grothmann, Tietz, von Jouanne-Diedrich: Market Modeling, Forecasting and Risk Analysis with Historical Consistent Neural Networks.
3,211
Regression with multiple dependent variables?
Yes, it is possible. What you're interested in is called "Multivariate Multiple Regression" or just "Multivariate Regression". I don't know what software you are using, but you can do this in R. Here's a link that provides examples.
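For concreteness, a minimal R sketch (with hypothetical variable names and simulated data) of a multivariate multiple regression fit:

    set.seed(1)
    d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    d$y1 <-  1 + 2.0 * d$x1 - 1.0 * d$x2 + rnorm(100)
    d$y2 <- -1 + 1.0 * d$x1 + 0.5 * d$x2 + rnorm(100)

    fit <- lm(cbind(y1, y2) ~ x1 + x2, data = d)   # both outcomes on the left-hand side
    summary(fit)                                   # one coefficient table per outcome
    anova(fit)                                     # multivariate (MANOVA-type) tests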
3,212
Regression with multiple dependent variables?
@Brett's response is fine. If you are interested in describing your two-block structure, you could also use PLS regression. Basically, it is a regression framework which relies on the idea of building successive (orthogonal) linear combinations of the variables belonging to each block such that their covariance is maximal. Here we consider that one block $X$ contains explanatory variables and the other block $Y$ contains response variables (a diagram accompanies the original answer). We seek "latent variables" that account for a maximum of the information (in a linear fashion) contained in the $X$ block while allowing us to predict the $Y$ block with minimal error. The $u_j$ and $v_j$ are the loadings (i.e., linear combinations) associated with each dimension. The optimization criterion reads $$ \max_{\mid u_h\mid =1,\mid v_h\mid =1}\text{cov}(X_{h-1}u_h,Yv_h)\quad \big(\equiv \max\text{cov}(\xi_h,\omega_h)\big) $$ where $X_{h-1}$ stands for the deflated (i.e., residualized) $X$ block after the $h^\text{th}$ regression. The correlation between the factorial scores on the first dimension ($\xi_1$ and $\omega_1$) reflects the magnitude of the $X$-$Y$ link.
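A small base-R sketch of the first PLS dimension, assuming centred blocks: under the unit-norm constraints, the first weight pair maximising $\text{cov}(Xu, Yv)$ is given by the leading singular vectors of $X^\top Y$ (the simulated data below are made up).

    set.seed(1)
    X <- scale(matrix(rnorm(100 * 5), 100, 5), scale = FALSE)       # centred X block
    Y <- scale(cbind(X %*% rnorm(5) + rnorm(100),
                     X %*% rnorm(5) + rnorm(100)), scale = FALSE)   # centred Y block

    s  <- svd(crossprod(X, Y))         # SVD of t(X) %*% Y
    u1 <- s$u[, 1]                     # |u1| = 1
    v1 <- s$v[, 1]                     # |v1| = 1
    xi1    <- X %*% u1                 # first X score, xi_1
    omega1 <- Y %*% v1                 # first Y score, omega_1
    cor(xi1, omega1)                   # strength of the X-Y link on the first dimension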
3,213
Regression with multiple dependent variables?
Multivariate regression is done in SPSS using the GLM-multivariate option. Put all your outcomes (DVs) into the outcomes box and all your continuous predictors into the covariates box. You don't need anything in the factors box. Look at the multivariate tests. The univariate tests will be the same as separate multiple regressions. As someone else said, you can also specify this as a structural equation model, but the tests are the same. (Interestingly, well, I think it's interesting, there's a bit of a UK-US difference on this: in the UK, multiple regression is not usually considered a multivariate technique, hence multivariate regression is only multivariate when you have multiple outcomes/DVs.)
3,214
Regression with multiple dependent variables?
I would do this by first transforming the regression variables into PCA-derived variables, and then I would do the regression on those PCA-derived variables. Of course I would store the eigenvectors, so that I can calculate the corresponding PCA scores when I have a new instance I want to classify.
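A minimal R sketch of that workflow (the variable names and the 3-component cutoff are made up for illustration):

    set.seed(1)
    X <- matrix(rnorm(200 * 6), 200, 6, dimnames = list(NULL, paste0("x", 1:6)))
    y <- drop(X %*% c(1, -1, 0.5, 0, 0, 0)) + rnorm(200)

    pca    <- prcomp(X, center = TRUE, scale. = TRUE)   # rotation (eigenvectors) is stored
    scores <- pca$x[, 1:3]                              # keep, say, the first 3 components
    fit    <- lm(y ~ scores)

    x_new      <- matrix(rnorm(6), 1, 6, dimnames = list(NULL, paste0("x", 1:6)))
    scores_new <- predict(pca, newdata = x_new)[, 1:3, drop = FALSE]  # same rotation
    y_hat      <- cbind(1, scores_new) %*% coef(fit)    # prediction for the new instance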
3,215
Regression with multiple dependent variables?
As mentioned by caracal, you can use the mvtnorm package in R. Assuming you fitted an lm model (named model) for one of the responses, here is how to obtain the multivariate predictive distribution of several responses resp1, resp2, resp3, stored in matrix form as Y:

    library(mvtnorm)

    # Fake model, used only to extract the design matrix X
    model <- lm(resp1 ~ 1 + x + x1 + x2, datas)

    Y <- as.matrix(datas[, c("resp1", "resp2", "resp3")])
    X <- model.matrix(delete.response(terms(model)), datas, model$contrasts)

    XprimeX    <- t(X) %*% X
    XprimeXinv <- solve(XprimeX)
    hatB       <- XprimeXinv %*% t(X) %*% Y
    A          <- t(Y - X %*% hatB) %*% (Y - X %*% hatB)

    F  <- ncol(X)           # number of parameters per equation
    M  <- ncol(Y)           # number of responses
    N  <- nrow(Y)           # number of observations
    nu <- N - (M + F) + 1   # nu must be positive

    # x0: the new factor setting, a 1 x F row vector
    C_1      <- c(1 + x0 %*% XprimeXinv %*% t(x0))
    varY     <- A / nu
    postmean <- drop(x0 %*% hatB)

    nsim <- 2000
    ysim <- rmvt(n = nsim, sigma = C_1 * varY, df = nu, delta = postmean)

Now, quantiles of ysim are beta-expectation tolerance intervals from the predictive distribution; you can of course use the sampled distribution directly to do whatever you want. To answer Andrew F., the degrees of freedom are hence nu = N - (M + F) + 1, N being the number of observations, M the number of responses and F the number of parameters per equation; nu must be positive. (You may want to read my work on this in this document :-) )
3,216
Regression with multiple dependent variables?
Did you already come across the term "canonical correlation"? There you have sets of variables on the independent as well as on the dependent side. But maybe there are more modern concepts available; the descriptions I have are all from the eighties/nineties...
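Base R has a canonical correlation routine; here is a hedged sketch with made-up variable blocks:

    set.seed(1)
    X <- matrix(rnorm(100 * 3), 100, 3)           # block of "independent" variables
    Y <- cbind(X %*% rnorm(3) + rnorm(100),
               X %*% rnorm(3) + rnorm(100))       # block of "dependent" variables
    cc <- cancor(X, Y)
    cc$cor                                        # the canonical correlations
    cc$xcoef                                      # weights for the X-side canonical variates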
3,217
Regression with multiple dependent variables?
For Bayesian multivariate regression, one can use the R package BNSP. For example, the dataset ami that comes with the package includes 3 responses and 3 covariates.

    # First, load the package and dataset
    require(BNSP)
    data(ami)

    # Second, centre and scale variables - this is specific to the dataset
    sc <- function(x) (x - mean(x)) / sd(x)
    ami$ratio <- sc(log(ami$ami) - log(ami$tot))
    ami$tot   <- sc(log(ami$tot))
    ami$amt   <- sc(log(ami$amt))
    ami$pr    <- sc(ami$pr)
    ami$qrs   <- sc(ami$qrs)
    ami$bp    <- sc(ami$bp)

    # Third, define the model: on the left of ~ are the 3 responses, separated by |.
    # On the right of ~ is the model for the mean, which includes smooth functions sm
    # of the 3 covariates: amt, tot, and ratio
    model <- pr | qrs | bp ~ sm(amt, k = 5) + sm(tot, k = 5) + sm(ratio, k = 5)

    # Fourth, fit the model
    multiv <- mvrm(formula = model, data = ami, sweeps = 10000, burn = 5000,
                   thin = 2, seed = 1, StorageDir = getwd())

    # And last, plot the fitted curves and the estimated correlation matrix
    plot(multiv, nrow = 3)
    plotCorr(multiv)

Results are shown in the figures accompanying the original answer. For the correlation matrix, the plot on the left shows posterior means and the one on the right posterior credible intervals.
3,218
Regression with multiple dependent variables?
It's called a structural equation model or a simultaneous equation model.
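As a hedged sketch of what that can look like in practice, here is a minimal example using the lavaan package in R; the package choice, variable names, and simulated data are assumptions of this sketch, not part of the original answer.

    library(lavaan)
    set.seed(1)
    d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    d$y1 <- 1 + 0.5 * d$x1 - 0.3 * d$x2 + rnorm(200)
    d$y2 <-     0.2 * d$x1 + 0.8 * d$x2 + rnorm(200)

    model <- '
      y1 ~ x1 + x2
      y2 ~ x1 + x2
      y1 ~~ y2        # let the residuals of the two outcomes correlate
    '
    fit <- sem(model, data = d)
    summary(fit)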
3,219
What are the worst (commonly adopted) ideas/principles in statistics?
I'll present one novice error (in this answer) and perhaps one error committed by more seasoned people. Very often, even on this website, I see people lamenting that their data are not normally distributed and so t-tests or linear regression are out of the question. Even stranger, I will see people try to rationalize their choice for linear regression because their covariates are normally distributed. I don't have to tell you that regression assumptions are about the conditional distribution, not the marginal. My absolute favorite way to demonstrate this flaw in thinking is to essentially compute a t-test with linear regression as I do here.
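A compact R illustration of that kind of demonstration (the simulated data and group labels are made up): a two-sample t-test is literally a linear regression on a group indicator, and both give identical p-values.

    set.seed(1)
    g <- rep(c("A", "B"), each = 30)
    y <- rnorm(60, mean = ifelse(g == "A", 0, 0.8))       # normality holds conditional on group
    t.test(y ~ g, var.equal = TRUE)$p.value               # classic two-sample t-test
    summary(lm(y ~ g))$coefficients["gB", "Pr(>|t|)"]     # same p-value from the regression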
3,220
What are the worst (commonly adopted) ideas/principles in statistics?
Post hoc power analysis

That is, using power analysis after a study has been completed rather than before, and in particular plugging in the observed effect size estimate, sample size, etc.

Some people have the intuition that post hoc power analysis could be informative because it could help explain why they attained a non-significant result. Specifically, they think maybe their failure to attain a significant result doesn't mean their theory is wrong... instead maybe it's just that the study didn't have a large enough sample size or an efficient enough design to detect the effect. So then a post hoc power analysis should indicate low power, and we can just blame it on low power, right?

The problem is that the post hoc power analysis does not actually add any new information. It is a simple transformation of the p-value you already computed. If you got a non-significant result, then it's a mathematical necessity that post hoc power will be low. And conversely, post hoc power is high when and only when the observed p-value is small. So post hoc power cannot possibly provide any support for the hopeful line of reasoning mentioned above. For another way to think about the conceptual problem with these kinds of post hoc power (PHP) exercises, see this paper by Russ Lenth.

Note that the problem here is not the chronological issue of running a power analysis after the study is completed per se: it is possible to run after-the-fact power analysis in a way that is informative and sensible by varying some of the observed statistics, for example to estimate what would have happened if you had run the study in a different way. The key problem with "post hoc power analysis" as defined in this post is simply plugging in all of the observed statistics when doing the power analysis.

The vast majority of the time that someone does this, the problem they are attempting to solve is better solved by just computing some sort of confidence interval around their observed effect size estimate. That is, if someone wants to argue that the reason they failed to reject the null is not because their theory is wrong but just because the design was highly sub-optimal, then a more statistically sound way to make that argument is to compute the confidence interval around their observed estimate and point out that while it does include 0, it also includes large effect size values: basically the interval is too wide to conclude very much about the true effect size, and thus it is not a very strong disconfirmation.
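A small R check of the "simple transformation of the p-value" claim (the simulation settings are arbitrary and the null is true throughout): across studies, the observed p-value and the "observed power" computed from the observed effect size are perfectly rank-correlated.

    set.seed(1)
    n <- 30
    res <- t(replicate(200, {
      x <- rnorm(n); y <- rnorm(n)                     # the null is true here
      p <- t.test(x, y, var.equal = TRUE)$p.value
      php <- power.t.test(n = n,
                          delta = abs(mean(x) - mean(y)),
                          sd = sqrt((var(x) + var(y)) / 2))$power
      c(p = p, post.hoc.power = php)
    }))
    cor(res[, "p"], res[, "post.hoc.power"], method = "spearman")
    # -1: post hoc power is a monotone function of the p-value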
3,221
What are the worst (commonly adopted) ideas/principles in statistics?
The idea that because something is not statistically significant, it is not interesting and should be ignored.
3,222
What are the worst (commonly adopted) ideas/principles in statistics?
Removing Outliers

It seems that many individuals have the idea that they not only can, but should disregard data points that are some number of standard deviations away from the mean. Even when there is no reason to suspect that the observation is invalid, or any conscious justification for identifying/removing outliers, this strategy is often considered a staple of data preprocessing.
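A two-line R illustration of how much perfectly valid data such a rule throws away (the 2-standard-deviation cutoff is just an example):

    set.seed(1)
    x <- rnorm(1e5)                        # clean, well-behaved data: no "bad" points at all
    mean(abs(x - mean(x)) > 2 * sd(x))     # ~0.046: the rule still flags about 5% of it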
3,223
What are the worst (commonly adopted) ideas/principles in statistics?
Not addressing multiple hypothesis testing problems. Just because you aren't performing a t-test on 1,000,000 genes doesn't mean you're safe from it. One place it notably pops up is in studies that test an effect conditional on a previous effect being significant: often the authors identify a significant effect of something and then, conditional on it being significant, perform further tests to better understand it, without adjusting for that procedural analysis approach. I recently read a paper specifically about the pervasiveness of this problem in experiments, "Multiple hypothesis testing in experimental economics", and it was quite a good read.
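A quick R simulation of the basic issue (the 20-test setup is arbitrary): with 20 true-null tests per study, the chance of at least one "significant" result is far above 5%, and a standard correction such as Holm's (via p.adjust) brings it back down.

    set.seed(1)
    m <- 20                                    # 20 hypotheses per "study", all nulls true
    raw <- replicate(5000, runif(m))           # under the null, p-values are uniform
    mean(apply(raw, 2, min) < 0.05)            # P(at least one false positive) ~ 0.64
    adj <- apply(raw, 2, p.adjust, method = "holm")
    mean(apply(adj, 2, min) < 0.05)            # ~ 0.05 after the Holm correction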
3,224
What are the worst (commonly adopted) ideas/principles in statistics?
This seems like low-hanging fruit, but stepwise regression is one error which I see pretty frequently, even from some stats people. Even if you haven't read some of the very well-written answers on this site which address the approach and its flaws, I think if you just took a moment to understand what is happening (that you are essentially testing with the data that generated the hypothesis) it would be clear that stepwise is a bad idea. Edit: This answer refers to inference problems. Prediction is something different. In my own (limited) experiments, stepwise seems to perform on par with other methods in terms of RMSE.
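A short R demonstration of the inference problem (pure-noise data; the dimensions are arbitrary): backward stepwise selection by AIC on predictors that are unrelated to the outcome still returns a model with retained terms, whose apparent significance is an artifact of selecting and testing on the same data.

    set.seed(1)
    n <- 100; p <- 20
    d <- as.data.frame(matrix(rnorm(n * p), n))    # 20 pure-noise predictors V1..V20
    d$y <- rnorm(n)                                # outcome unrelated to all of them
    fit <- step(lm(y ~ ., data = d), trace = 0)    # backward stepwise by AIC
    summary(fit)$coefficients                      # p-values biased: same data selected and tested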
3,225
What are the worst (commonly adopted) ideas/principles in statistics?
Regression towards the mean is a far more common problem than is often realised. It is also one of those things that is actually quite simple but appears to be quite nebulous on closer inspection, and this is partly due to the narrow way that it is usually taught. Sometimes it is attributed entirely to measurement error, and that can be quite misleading. It is often "defined" in terms of extreme events - for example, if a variable is sampled and an extreme value observed, the next measurement tends to be less extreme. But this is also misleading because it implies that it is the same variable being measured. Not only may RTM arise where the subsequent measures are on different variables, but it may arise for measures that are not even repeated measures on the same subject. For example, some people recognise RTM from the original "discovery" by Galton, who realised that the children of tall parents also tend to be tall but less tall than their parents, while children of short parents also tend to be short but less short than their parents.

Fundamentally, RTM is a consequence of imperfect correlation between two variables. Hence, the question shouldn't be about when RTM occurs - it should be about when RTM doesn't occur. Often the impact may be small, but sometimes it can lead to completely spurious conclusions. A very simple one is the observation of a "placebo effect" in clinical trials. Another more subtle one, but potentially much more damaging, is the inference of "growth trajectories" in life-course studies where conditioning on the outcome has implicitly taken place.
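A minimal sketch of the "imperfect correlation" view (the correlation of 0.5 and the top-decile cutoff are arbitrary choices): two measurements share the same marginal distribution, no measurement-error story is needed, and selecting units that are extreme on the first measurement guarantees they look less extreme on the second.

# Regression to the mean from imperfect correlation alone
set.seed(3)
r <- 0.5; n <- 100000
first  <- rnorm(n)
second <- r * first + sqrt(1 - r^2) * rnorm(n)  # same marginal distribution, correlation r
top <- first > quantile(first, 0.9)             # "extreme" on the first measure
mean(first[top])    # ~1.75
mean(second[top])   # ~0.88, i.e. r times as far from the mean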
3,226
What are the worst (commonly adopted) ideas/principles in statistics?
You have a nice answer to one that I posted a few weeks ago, "Debunking wrong CLT statement". False claim: the central limit theorem says that the empirical distribution converges to a normal distribution. As the answers to my question show, that claim is utterly preposterous (unless the population is normal), yet the answers also tell me that this is a common misconception.
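A small sketch of the distinction (exponential data and the sample size of 50 are arbitrary): the empirical distribution of an exponential sample stays exponential no matter how large the sample gets; it is the sampling distribution of the mean that becomes approximately normal.

# Empirical distribution vs. sampling distribution of the mean
set.seed(4)
x <- rexp(100000)                            # one huge sample: still right-skewed
mean((x - mean(x))^3) / sd(x)^3              # skewness ~2, nowhere near 0
means <- replicate(10000, mean(rexp(50)))    # means of many samples of size 50
mean((means - mean(means))^3) / sd(means)^3  # skewness close to 0
hist(x); hist(means)                         # exponential shape vs. near-normal shape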
3,227
What are the worst (commonly adopted) ideas/principles in statistics?
Equating a high $R^2$ with a "good model" (or equivalently, lamenting - or, in the case of referees of papers, criticizing - that $R^2$ is "too" low). More discussion is provided, e.g. here and here. As should be universally appreciated, $R^2$ increases (more precisely, never decreases, see here) in the number of regressors in the model, and can hence always be made equal to 1 by including sufficiently many powers and interaction terms in the model (see the related illustration below). That is, of course, a very bad idea because the resulting model will strongly overfit and hence predict very poorly out of sample. Also, when you regress something onto itself, $R^2$ will be 1 by construction (as residuals are zero), but you have of course learnt nothing. Yet, praising high $R^2$ in similar setups (e.g., this year's GDP as a function of last year's, which in view of growth rates of around 2% is more or less the same) is not uncommon. Conversely, a regression with a small $R^2$ can be highly interesting when the effect that is responsible for that $R^2$ is one that you can actually act upon (i.e., is causalish).

# R^2 increases even if you regress on pure noise
n <- 15
regressors <- n-1 # enough, as we'll also fit a constant
y <- rnorm(n)
X <- matrix(rnorm(regressors*n), ncol=regressors)
collectionR2s <- rep(NA, regressors)
for (i in 1:regressors){
  collectionR2s[i] <- summary(lm(y~X[,1:i]))$r.squared
}
plot(1:regressors, collectionR2s, col="purple", pch=19, type="b", lwd=2)
abline(h=1, lty=2)
3,228
What are the worst (commonly adopted) ideas/principles in statistics?
ARIMA!!! - a marvel of theoretical rigor and mathematical elegance that is almost useless for any realistic business time series. Ok, that is an exaggeration: ARIMA and similar models like GARCH are occasionally useful. But ARIMA is not nearly as general purpose a model as most people seem to think it is. Most competent Data Scientists and ML Engineers who are generalists (in the sense that they don't specialize in time series forecasting or econometrics), as well as MBA types and people with solid general statistics backgrounds, will default to ARIMA as the baseline model for a time series forecasting problem. Most of the time they end up sticking with it. When they do evaluate it against other models, it is usually against more exotic entities like Deep Learning Models, XGBoost, etc... On the other hand, most time series specialists, supply chain analysts, experienced demand forecasting analysts, etc...stay away from ARIMA. The accepted baseline model and the one that is still very hard to beat is Holt-Winters, or Triple Exponential Smoothing. See for example "Why the damped trend works" by E S Gardner Jr & E McKenzie. Beyond academic forecasting, many enterprise grade forecasting solutions in the demand forecasting and the supply chain space still use some variation of Holt-Winters. This isn't corporate inertia or bad design, it is simply the case that Holt-Winters or Damped Holt-Winters is still the best overall approach in terms of robustness and average overall accuracy. A brief history lesson: Some history might be useful here: Exponential Smoothing models, Simple ES, Holt's model, and Holt-Winters, were developed in the 50s. They proved to be very useful and pragmatic, but were completely "ad-hoc". They had no underlying statistical theory or first principles - they were more of a case of: How can we extrapolate time series into the future? Moving averages are a good first step, but we need to make the moving average more responsive to recent observations. Why don't we just add an $\alpha$ parameter that gives more importance to recent observation? - This was how simple exponential smoothing was invented. Holt and Holt-Winters were simply the same idea, but with the trend and seasonality split up and then estimated with their own weighted moving average models (hence the additional $\beta$ and $\gamma$ parameters). In fact, in the original formulations of ES, the parameters $\alpha$, $\beta$, and $\gamma$ were chosen manually based on their gut feeling and domain knowledge. Even today, I occasionally have to respond to requests of the type "The sales for this particular product division are highly reactive, can you please override the automated model selection process and set $\alpha$ to 0.95 for us" (Ahhh - thinking to myself - why don't y'all set it to a naive forecast then??? But I am an engineer, so I can't say things like that to a business person). Anyway, ARIMA, which was proposed in the 1970s, was in some ways a direct response to Exponential Smoothing models. While engineers loved ES models, statisticians were horrified by them. They yearned for a model that had at least some theoretical justification to it. And that is exactly what Box and Jenkins did when they came up with ARIMA models. Instead of the ad-hoc pragmatism of ES models, the ARIMA approach was built from the ground up using sound first principles and highly rigorous theoretical considerations. And ARIMA models are indeed very elegant and theoretically compelling. 
Even if you don't ever deploy a single ARIMA model to production in your whole life, I still highly recommend that anyone interested in time series forecasting dedicate some time to fully grasping the theory behind how ARIMA works, because it will give a very good understanding of how time series behave in general. But ARIMA never did well empirically, see here. Hyndman writes (and quotes others): Many of the discussants seem to have been enamoured with ARIMA models.

“It is amazing to me, however, that after all this exercise in identifying models, transforming and so on, that the autoregressive moving averages come out so badly. I wonder whether it might be partly due to the authors not using the backwards forecasting approach to obtain the initial errors”. — W.G. Gilchrist

“I find it hard to believe that Box-Jenkins, if properly applied, can actually be worse than so many of the simple methods”. — Chris Chatfield

At times, the discussion degenerated to questioning the competency of the authors:

“Why do empirical studies sometimes give different answers? It may depend on the selected sample of time series, but I suspect it is more likely to depend on the skill of the analyst … these authors are more at home with simple procedures than with Box-Jenkins”. — Chris Chatfield

When ARIMA performs well, it does so only because the models selected are equivalent to Exponential Smoothing models (there is some overlap between the ARIMA family and the ES family for $ARIMA(p,d,q)$ with low values of $p$, $d$, and $q$ - see here and here for details). I recall once working with a very smart business forecaster who had a strong statistics background and who was unhappy that our production system was using exponential smoothing, and wanted us to shift to ARIMA instead. So he and I worked together to test some ARIMA models. He shared with me that in his previous jobs, there was some informal wisdom around the fact that ARIMA models should never have values of $p$, $d$, or $q$ higher than 2. Ironically, this meant that the ARIMA models we were testing were all identical to or very close to ES models. It is not my colleague's fault though that he missed this irony.

Most introductory graduate and MBA level material on time series modeling focuses significantly or entirely on ARIMA and implies (even if it doesn't explicitly say so) that it is the end-all-be-all of statistical forecasting. This is likely a holdover from the mindset that Hyndman referred to in the 70s, of academic forecasting experts being "enamored" with ARIMA. Additionally, the general framework that unifies ARIMA and ES models is a relatively recent development, isn't always covered in introductory texts, and is also significantly more involved mathematically than the basic formulations of both ARIMA and ES models (I have to confess I haven't completely wrapped my head around it yet myself).

Ok, why does ARIMA perform so poorly? Several reasons, listed in no particular order of importance:

ARIMA requires polynomial trends: Differencing is used to remove the trend from a time series in order to make it mean stationary, so that autoregressive models are applicable. See this previous post for details. Consider a time series $$Y(t)=L(t)+T(t)$$ with $L$ the level and $T$ the trend (most of what I am saying is applicable to seasonal time series as well, but for simplicity's sake I will stick to the trend-only case). Removing the trend amounts to applying a transformation that will map $T(t)$ to a constant $T=c$.
Intuitively, the differencing component of ARIMA is the discrete time equivalent of differentiation. That is, for a discrete time series $Y$ that has an equivalent continuous time series $Y_c$, setting $d = 1$ ($Y_n'= Y_n - Y_{n-1}$) is equivalent to calculating $$\frac{dY_c}{dt}$$ and setting $d=2$ is equivalent to $$\frac{d^2Y_c}{dt^2}$$ etc. Now consider what type of continuous curves can be transformed into constants by successive differentiation? Only polynomials of the form $T(t)=a_nt^n+a_{n-1}t^{n-1}+...+a_1t+a_0$ (only? It's been a while since I studied calculus...) - note that a linear trend is the special case where $T(t)=a_1t+a_0$. For all other curves, no number of successive differentiations will lead to a constant value (consider an exponential curve or a sine wave, etc.). Same thing for discrete time differencing: it only transforms the series into a mean stationary one if the trend is polynomial. But how many real world time series will have a higher order ($n>2$) polynomial trend? Very few if any at all. Hence selecting an order $d>2$ is a recipe for overfitting (and manually selected ARIMA models do indeed overfit often). And for lower order trends, $d=0,1,2$, you're in exponential smoothing territory (again, see the equivalence table here).

ARIMA models assume a very specific data generating process: Data generating process generally refers to the "true" model that describes our data if we were able to observe it directly without errors or noise. For example an $ARIMA(2,0,0)$ model can be written as $$Y_t = a_1Y_{t-1}+a_2Y_{t-2}+c+ \epsilon_t$$ with $\epsilon_t$ modeling the errors and noise and the true model being $$\hat{Y}_t = a_1\hat{Y}_{t-1}+a_2\hat{Y}_{t-2}+c,$$ but very few business time series have such a "true model", e.g. why would a sales demand signal or a DC capacity time series ever have a DGP that corresponds to $$\hat{Y}_t = a_1\hat{Y}_{t-1}+a_2\hat{Y}_{t-2}+c??$$ If we look a little bit deeper into the structure of ARIMA models, we realize that they are in fact very complex models. An ARIMA model first removes the trend and the seasonality, and then looks at the residuals and tries to model them as a linear regression against past values (hence "auto"-regression) - this will only work if the residuals do indeed have some complex underlying deterministic process. But many (most) business time series barely have enough signal in them to properly capture the trend and the seasonality, let alone remove them and then find additional autoregressive structure in the residuals. Most univariate business time series data is either too noisy or too sparse for that. That is why Holt-Winters, and more recently Facebook Prophet, are so popular: they do away with looking for any complex pattern in the residuals and just model them as a moving average, or don't bother modeling them at all (in Prophet's case), and focus mainly on capturing the dynamics of the seasonality and the trend. In short, ARIMA models are actually pretty complex, and complexity often leads to overfitting.

Sometimes autoregressive processes are justified. But because of stationarity requirements, ARIMA AR processes are very weird and counterintuitive: Let's try to look at what types of processes correspond in fact to an auto-regressive process - i.e. what time series would actually have an underlying DGP that corresponds to an $AR(p)$ model.
This is possible for example with a cell population growth model, where each cell reproduces by dividing into 2, and hence the population $P(t_n)$ could reasonably be approximated by $P_n = 2P_{n-1}+\epsilon_t$. Because here $a=2$ ($>1$), the process is not stationary and can't be modeled using ARIMA. Nor are most "natural" $AR(p)$ models that have a true model of the form $$\hat{Y}_t = a_1\hat{Y}_{t-1}+a_2\hat{Y}_{t-2}+...+a_p\hat{Y}_{t-p}+c.$$ This is because of the stationarity requirement: in order for the mean to remain constant, there are very stringent requirements on the values of $a_1,a_2,...,a_p$ (see this previous post) to ensure that $\hat{Y}_t$ never strays too far from the mean. Basically, $a_1,a_2,...,a_p$ have to sort of cancel each other out, $$\sum_{j=1}^pa_j<1,$$ otherwise the model is not stationary (this is what all that stuff about unit roots and Z-transforms is about). This implication leads to very weird DGPs if we were to consider them as "true models" of a business time series: e.g. we have a sales time series or an electricity load time series, etc. What type of causal relationships would have to occur in order to ensure that $$\sum_{j=1}^pa_j<1?$$ E.g., what type of economic or social process could ever lead to a situation where the detrended sales for 3 weeks ago are always equal to negative the sum of the sales from 2 weeks ago and the sales from last week? Such a process would be outlandish to say the least.

To recap: while there are real world processes that can correspond to an autoregressive model, they are almost never stationary (if anyone can think of a counterexample - that is, a naturally occurring stationary AR(p) process - please share, I've been searching for one for a while). A stationary AR(p) process behaves in weird and counterintuitive ways (more or less oscillating around the mean) that make it very hard to fit to business time series data in a naturally explainable way. Hyndman mentions this (using stronger words than mine) in the aforementioned paper:

This reveals a view commonly held (even today) that there is some single model that describes the data generating process, and that the job of a forecaster is to find it. This seems patently absurd to me — real data come from processes that are much more complicated, non-linear and non-stationary than any model we might dream up — and George Box himself famously dismissed it saying, “All models are wrong but some are useful”.

But what about the 'good' ARIMA tools? At this point I would point out some modern tools and packages that use ARIMA and perform very well on most reasonable time series (not too noisy or too sparse), such as auto.arima() from the R Forecast package or BigQuery ARIMA. These tools in fact rely on sophisticated model selection procedures which do a pretty good job of ensuring that the $p,d,q$ orders selected are optimal (BigQuery ARIMA also uses far more sophisticated seasonality and trend modeling than the standard ARIMA and SARIMA models do). In other words, they are not your grandparent's ARIMA (nor the one taught in most introductory graduate texts...) and will usually generate models with low $p,d,q$ values anyway (after proper pre-processing of course). In fact, now that I think of it, I don't recall ever using auto.arima() on a work related time series and getting $p,d,q > 1$, although I did get a value of $q=3$ once using auto.arima() on the Air Passengers time series.

Conclusion: Learn traditional ARIMA models inside and out, but don't use them.
Stick to state space models (the incredibly sophisticated descendants of ES) or use modern automated ARIMA packages (which are very similar to state space models under the hood anyway).
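One way to see the ES/ARIMA overlap mentioned above, using only base R (the simulated series is arbitrary): simple exponential smoothing and ARIMA(0,1,1) are the same model, with the SES smoothing parameter related to the MA coefficient by alpha = 1 + theta under R's sign convention for MA terms.

# Simple exponential smoothing vs. ARIMA(0,1,1) on the same series
set.seed(5)
x <- ts(cumsum(rnorm(300)) + rnorm(300))            # random walk plus noise
ses <- HoltWinters(x, beta = FALSE, gamma = FALSE)  # SES, alpha chosen by least squares
fit <- arima(x, order = c(0, 1, 1))                 # ARIMA(0,1,1), fitted by ML
ses$alpha                 # SES smoothing parameter
1 + coef(fit)["ma1"]      # implied alpha from the ARIMA fit: roughly the same number

The two estimates differ slightly because the fitting criteria differ, but they are estimates of the same underlying model.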
3,229
What are the worst (commonly adopted) ideas/principles in statistics?
Forgetting that bootstrapping requires special care when examining distributions of non-pivotal quantities (e.g., for estimating their confidence intervals), even though that has been known since the beginning.
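An extreme but standard illustration (the uniform maximum; the sample size and bootstrap settings are arbitrary): the naive percentile bootstrap interval for the upper endpoint never contains the true value, because every bootstrap maximum is at most the observed maximum.

# Percentile bootstrap for a non-pivotal quantity: the sample maximum of U(0,1)
set.seed(6)
n <- 50; B <- 2000; true_max <- 1
covered <- replicate(1000, {
  x <- runif(n)
  boot_max <- replicate(B, max(sample(x, n, replace = TRUE)))
  ci <- quantile(boot_max, c(0.025, 0.975))
  ci[1] <= true_max && true_max <= ci[2]
})
mean(covered)  # 0: the interval's upper end is <= max(x), which is < 1 almost surely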
3,230
What are the worst (commonly adopted) ideas/principles in statistics?
"A complex model is better than a simple one". Or a variation thereof: "We need a model that can model nonlinearities." Especially often heard in forecasting. There is a strong preconception that a more complex model will forecast better than a simple one. That is very often not the case.
3,231
What are the worst (commonly adopted) ideas/principles in statistics?
Doing statistical inference with a - most certainly - biased convenience sample. (And then caring primarily about normality instead of addressing bias...)
3,232
What are the worst (commonly adopted) ideas/principles in statistics?
Assuming that controlling for covariates is equivalent to eliminating their causal impact - this is false. The original example given by Pearl is that of qualifications, gender, and hiring. We hope that qualifications affect hiring, and want to know if gender does too. Gender can affect qualifications (unequal opportunity to education, for example). If an average man with a given education is more likely to be hired than an average woman who happens to have that same level of education, that is evidence of sexism, right? Wrong. The conclusion of sexism would only be justifiable if there were no confounders between Qualifications and Hiring. On the contrary, it may be that the women who happened to have the same level of education came from wealthy families, and the interviewer was biased against them for that reason. In other words, controlling for covariates can open back-door paths. In many cases, adjusting for covariates is the best we can do, but when other back-door paths are likely to exist, the evidence for causal conclusions should be considered weak.
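A simulation sketch of this exact story (all coefficients are invented, and gender is coded as a 0/1 indicator): hiring depends only on qualifications and family wealth, not on gender at all, yet "controlling for qualifications" manufactures a significant gender coefficient, because qualifications is a collider on the path gender -> qualifications <- wealth -> hiring.

# No direct gender effect on hiring, yet adjusting for qualifications creates one
set.seed(8)
n <- 100000
wealth <- rnorm(n)
female <- rbinom(n, 1, 0.5)
qualif <- wealth - 0.5 * female + rnorm(n)  # unequal opportunity affects qualifications
hire   <- qualif - wealth + rnorm(n)        # interviewer biased against wealth; no gender term
round(coef(summary(lm(hire ~ qualif + female)))["female", ], 3)
# female coefficient ~ -0.25 with a tiny p-value, although the true direct effect is exactly 0
round(coef(summary(lm(hire ~ qualif + female + wealth)))["female", ], 3)
# ~0 once the back-door path through wealth is closed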
3,233
What are the worst (commonly adopted) ideas/principles in statistics?
What does a p-value mean? ALERT TO NEWCOMERS: THE FOLLOWING QUOTE IS EXTREMELY FALSE. “The probability that the null hypothesis is true, duh! Come on, Dave, you’re a professional statistician, and that’s Statistics 101.” I get the appeal of this one, and it would be really nice to have a simple measure of the probability of the null hypothesis, but no.
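A sketch of why not (the prevalence and power numbers are invented): simulate a literature where 10% of tested effects are real and each study has roughly 65-70% power. Among results with p < 0.05, around 40% are true nulls, which is nothing like "p tells me the probability the null is true".

# Share of true nulls among "significant" findings
set.seed(9)
m <- 20000; prop_real <- 0.1; n <- 25; effect <- 0.5; alpha <- 0.05
is_real <- rbinom(m, 1, prop_real) == 1
pvals <- vapply(seq_len(m), function(i) {
  t.test(rnorm(n, mean = if (is_real[i]) effect else 0))$p.value  # one-sample test vs 0
}, numeric(1))
sig <- pvals < alpha
mean(sig[is_real])   # power of an individual study, ~0.67
mean(!is_real[sig])  # ~0.4 of all p < 0.05 results come from true nulls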
3,234
What are the worst (commonly adopted) ideas/principles in statistics?
It's not purely statistics but statistical modeling in the broad sense: a very common misconception, which I have also heard in some university courses, is that Random Forests cannot overfit. Here is a question where exactly this was asked, and I tried to explain why it isn't true and where the misconception comes from.
3,235
What are the worst (commonly adopted) ideas/principles in statistics?
People often assume that the uniform prior is uninformative. This is usually false.
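A two-line sketch of why: a uniform prior on a probability p is strongly informative on other natural scales of the same quantity, for example the log-odds.

# A "flat" prior on p is peaked on the log-odds scale
set.seed(10)
p <- runif(100000)                    # draws from a uniform prior on a probability
hist(log(p / (1 - p)), breaks = 60)   # standard logistic shape: peaked at 0
mean(abs(log(p / (1 - p))) < 1)       # ~0.46 of the prior mass within +/- 1 log-odds unit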
3,236
What are the worst (commonly adopted) ideas/principles in statistics?
In the medical community especially, and somewhat less often in psychology, the "change from baseline" is usually analyzed by modelling the change as a function of covariates. Doug Altman and Martin Bland have a really great paper on why this is probably not a good idea and argue that an ANCOVA (post measure ~ covariates + baseline) is better. Frank Harrell also does a really great job of compiling some hidden assumptions behind the change-from-baseline approach.
3,237
What are the worst (commonly adopted) ideas/principles in statistics?
Not realizing to what extent functional form assumptions and parametrizations are buying information in your analysis. In economics, you get these models that seem really interesting and give you a new way to potentially identify some effect of interest, but sometimes you read them and realize that without that last normality assumption, the one that delivered point identification, the model only identifies bounds that are infinitely wide, and so it really isn't giving you anything helpful.
3,238
What are the worst (commonly adopted) ideas/principles in statistics?
When analysing change, that it is OK to create change scores (followup - baseline, or a percent change from baseline) and then regress them on baseline. It's not (mathematical coupling). ANCOVA is often suggested as the best approach, and it might be in the case of randomisation to groups, such as in clinical trials, but if the groups are unbalanced, as is often the case in observational studies, ANCOVA can also be biased.
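A minimal sketch of the mathematical-coupling problem: with two completely independent measurements, the change score is strongly and "significantly" related to baseline purely by construction.

# Change scores regressed on baseline: built-in correlation
set.seed(11)
n <- 5000
baseline <- rnorm(n); followup <- rnorm(n)  # independent, so nothing is really going on
change <- followup - baseline
cor(change, baseline)                       # ~ -0.71 (i.e. -1/sqrt(2)) by construction
summary(lm(change ~ baseline))              # slope ~ -1 with an enormous t statistic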
3,239
What are the worst (commonly adopted) ideas/principles in statistics?
Using interaction (product) terms in regressions without using curvilinear (quadratic) terms. A few years ago I started thinking about this (after seeing a few papers in economic/management fields that were doing it), and I realized that if, in the true model, the outcome variable depends on the square of some or all of the variables in the model, yet those squares are not included and instead an interaction is included in the examined model, the researcher may find that the interaction has an effect, while in fact it does not. I then searched to see if there is an academic paper that addressed this, and I did find one (there could be more, but that is what I found): https://psycnet.apa.org/fulltext/1998-04950-001.html You might say that it is a novice mistake, and that a real statistician should know to first try to include all terms and interactions of a certain degree in the regression. But still, this specific mistake seems to be quite common in many fields that apply statistics, and the above linked article demonstrates the misleading results it may lead to.
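A sketch of the scenario described above (the coefficients and the correlation between predictors are made up): the true model contains only squared terms, but with correlated predictors the product term soaks up the curvature and tests as highly "significant" - until the quadratic terms are added.

# A spurious interaction created by omitted quadratic terms
set.seed(12)
n <- 500
x1 <- rnorm(n)
x2 <- 0.7 * x1 + sqrt(1 - 0.7^2) * rnorm(n)      # correlated predictors
y  <- x1^2 + x2^2 + rnorm(n)                     # truth: squares only, no interaction
summary(lm(y ~ x1 * x2))$coefficients["x1:x2", ]                      # "significant"
summary(lm(y ~ x1 * x2 + I(x1^2) + I(x2^2)))$coefficients["x1:x2", ]  # it vanishes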
3,240
What are the worst (commonly adopted) ideas/principles in statistics?
I vote for "specification tests," e.g., White's test for heteroscedasticity, Hausman's tests, etc. These are common in econometrics and elsewhere, to the point where many people think they comprise the actual definition of the assumptions tested rather than a means to evaluate them. You would think the recent ASA statements on p-values would have dampened the enthusiasm for these methods. However, a Google scholar search for "Hausman test" turns up 17,200 results since 2019 and 8,300 since 2020; i.e., they are not fading away.
3,241
What are the worst (commonly adopted) ideas/principles in statistics?
“Correlation does not imply causation.” This is a true statement. Even if there is causation, it could be in the opposite direction of what is asserted. What I have seen happen is that, when the correlation is inconvenient, people take this to mean that correlation precludes causation. I don’t see professional statisticians making this mistake, but I have seen it happen when people use that phrase to sound quantitative and rigorous in their analysis, only to botch the meaning.
What are the worst (commonly adopted) ideas/principles in statistics?
“Correlation does not imply causation.” This is a true statement. Even if there is causation, it could be in the opposite direction of what is asserted. What I have seen happen is that, when the corre
What are the worst (commonly adopted) ideas/principles in statistics? “Correlation does not imply causation.” This is a true statement. Even if there is causation, it could be in the opposite direction of what is asserted. What I have seen happen is that, when the correlation is inconvenient, people take this to mean that correlation precludes causation. I don’t see professional statisticians making this mistake, but I have seen it happen when people use that phrase to sound quantitative and rigorous in their analysis, only to botch the meaning.
What are the worst (commonly adopted) ideas/principles in statistics? “Correlation does not imply causation.” This is a true statement. Even if there is causation, it could be in the opposite direction of what is asserted. What I have seen happen is that, when the corre
3,242
What are the worst (commonly adopted) ideas/principles in statistics?
Using statistical significance (usually at $1\%$, $5\%$ or $10\%$) of explanatory variables / regressors as a criterion in model building for explanatory or predictive purposes. In explanatory modelling, both subject-matter and statistical validity are needed; see e.g. the probabilistic reduction approach to model building by Aris Spanos described in "Effects of model selection and misspecification testing on inference: Probabilistic Reduction approach (Aris Spanos)" and references therein. Statistical validity of parameter estimators amounts to certain statistical assumptions being satisfied by the data. E.g. for OLS estimators in linear regression models, this is homoskedasticity and zero autocorrelation of errors, among other things. There are corresponding tests to be applied on model residuals to yield insight on whether the assumptions are violated in a particular way. There is no assumption that the explanatory variables be statistically significant, however. Yet many practitioners apply statistical significance of individual regressors or groups thereof as a criterion of model validity in model building, just as they apply the diagnostic tests mentioned above. In my experience, this is a rather common practice, but it is unjustified and thus a bad idea. In predictive modelling, variable selection on the basis of statistical significance may be sensible. If one aims to maximize out-of-sample likelihood, AIC-based feature selection implies a cutoff level corresponding to a $p$-value of around $16\%$. But the commonly used thresholds of $1\%$, $5\%$ and $10\%$ are suboptimal for most purposes. Hence, using statistical significance of explanatory variables at common levels of $1\%$, $5\%$ and $10\%$ as a selection criterion is a bad idea in predictive model building as well.
What are the worst (commonly adopted) ideas/principles in statistics?
Using statistical significance (usually at $1\%$, $5\%$ or $10\%$) of explanatory variables / regressors as a criterion in model building for explanatory or predictive purposes. In explanatory modelli
What are the worst (commonly adopted) ideas/principles in statistics? Using statistical significance (usually at $1\%$, $5\%$ or $10\%$) of explanatory variables / regressors as a criterion in model building for explanatory or predictive purposes. In explanatory modelling, both subject-matter and statistical validity are needed; see e.g. the probabilistic reduction approach to model building by Aris Spanos described in "Effects of model selection and misspecification testing on inference: Probabilistic Reduction approach (Aris Spanos)" and references therein. Statistical validity of parameter estimators amounts to certain statistical assumptions being satisfied by the data. E.g. for OLS estimators in linear regression models, this is homoskedasticity and zero autocorrelation of errors, among other things. There are corresponding tests to be applied on model residuals to yield insight on whether the assumptions are violated in a particular way. There is no assumption that the explanatory variables be statistically significant, however. Yet many practitioners apply statistical significance of individual regressors or groups thereof as a criterion of model validity in model building, just as they apply the diagnostic tests mentioned above. In my experience, this is a rather common practice, but it is unjustified and thus a bad idea. In predictive modelling, variable selection on the basis of statistical significance may be sensible. If one aims to maximize out-of-sample likelihood, AIC-based feature selection implies a cutoff level corresponding to a $p$-value of around $16\%$. But the commonly used thresholds of $1\%$, $5\%$ and $10\%$ are suboptimal for most purposes. Hence, using statistical significance of explanatory variables at common levels of $1\%$, $5\%$ and $10\%$ as a selection criterion is a bad idea in predictive model building as well.
What are the worst (commonly adopted) ideas/principles in statistics? Using statistical significance (usually at $1\%$, $5\%$ or $10\%$) of explanatory variables / regressors as a criterion in model building for explanatory or predictive purposes. In explanatory modelli
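A small illustration of the AIC remark in the answer above (hedged: for a single extra coefficient, AIC retains it exactly when the likelihood-ratio statistic exceeds 2, which corresponds to a chi-squared(1) tail probability of about 0.157):

pchisq(2, df = 1, lower.tail = FALSE)           # ~ 0.157: the implicit AIC "p-value" cutoff for 1 df

set.seed(42)                                     # quick check on simulated data; x3 is pure noise
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 0.5 * x1 - 0.3 * x2 + rnorm(n)
full    <- lm(y ~ x1 + x2 + x3)
reduced <- lm(y ~ x1 + x2)
AIC(full) - AIC(reduced)                         # usually positive here, so AIC drops x3
summary(full)$coefficients["x3", "Pr(>|t|)"]     # its p-value is typically well above 0.05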
3,243
What are the worst (commonly adopted) ideas/principles in statistics?
The 'rule of thumb' that the standard deviation $S$ of a normal sample can be usefully approximated as sample range $D$ divided by $4$ (or $5$ or $6).$ The rule is typically "illustrated" by an example, contrived so the 'rule' gives a reasonable answer. In fact, the appropriate divisor depends crucially on sample size $n.$ A simulation in R for $n = 100$:

n = 100
set.seed(2020)
s = replicate(10^5, sd(rnorm(n)))
set.seed(2020)   # same samples again
d = replicate(10^5, diff(range(rnorm(n))))
mean(d/s)
[1] 5.029495
summary(d/s)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  3.581   4.678   4.984   5.029   5.330   7.756

For $n = 25,$ dividing the range by $4$ works pretty well, and without great variation. For $n = 100$ and $500,$ the respective denominators are on average $5$ and $6,$ but with increasingly wide variation across individual samples as the sample size grows. Note: The idea of approximating $S$ as $D/c_n$ is not completely useless: For $n < 15,$ dividing the range by some constant $c_n$ (different for each $n)$ works well enough that makers of control charts often use the range divided by the appropriate constant to get $S$ for chart boundaries.
What are the worst (commonly adopted) ideas/principles in statistics?
The 'rule of thumb' that the standard deviation $S$ of a normal sample can be usefully approximated as sample range $D$ divided by $4$ (or $5$ or $6).$ The rule is typically "illustrated" by an exampl
What are the worst (commonly adopted) ideas/principles in statistics? The 'rule of thumb' that the standard deviation $S$ of a normal sample can be usefully approximated as sample range $D$ divided by $4$ (or $5$ or $6).$ The rule is typically "illustrated" by an example, contrived so the 'rule' gives a reasonable answer. In fact, the appropriate divisor depends crucially on sample size $n.$ A simulation in R for $n = 100$:

n = 100
set.seed(2020)
s = replicate(10^5, sd(rnorm(n)))
set.seed(2020)   # same samples again
d = replicate(10^5, diff(range(rnorm(n))))
mean(d/s)
[1] 5.029495
summary(d/s)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  3.581   4.678   4.984   5.029   5.330   7.756

For $n = 25,$ dividing the range by $4$ works pretty well, and without great variation. For $n = 100$ and $500,$ the respective denominators are on average $5$ and $6,$ but with increasingly wide variation across individual samples as the sample size grows. Note: The idea of approximating $S$ as $D/c_n$ is not completely useless: For $n < 15,$ dividing the range by some constant $c_n$ (different for each $n)$ works well enough that makers of control charts often use the range divided by the appropriate constant to get $S$ for chart boundaries.
What are the worst (commonly adopted) ideas/principles in statistics? The 'rule of thumb' that the standard deviation $S$ of a normal sample can be usefully approximated as sample range $D$ divided by $4$ (or $5$ or $6).$ The rule is typically "illustrated" by an exampl
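A short extension of the simulation in the answer above, looping over several sample sizes to show how the average range-to-SD ratio grows with $n$ (the quoted values are approximate and will vary slightly between runs):

divisor <- function(n, reps = 2e4) {
  mean(replicate(reps, { x <- rnorm(n); diff(range(x)) / sd(x) }))
}
set.seed(2020)
round(sapply(c(10, 25, 100, 500), divisor), 2)   # roughly 3.1, 3.9, 5.0, 6.1 -- no single divisor fits all n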
3,244
What are the worst (commonly adopted) ideas/principles in statistics?
The idea that, because we have in mind an "average" result, a sequence of data that falls below or above that average means a particular result "is due". The examples are things like rolling a die, where a large number of "no six" outcomes are observed - surely a six is due soon!
What are the worst (commonly adopted) ideas/principles in statistics?
The idea that because we have in mind an "average" result, that a sequence of data that is either below or above the average means that a particular result "is due". The examples are things like rolli
What are the worst (commonly adopted) ideas/principles in statistics? The idea that, because we have in mind an "average" result, a sequence of data that falls below or above that average means a particular result "is due". The examples are things like rolling a die, where a large number of "no six" outcomes are observed - surely a six is due soon!
What are the worst (commonly adopted) ideas/principles in statistics? The idea that because we have in mind an "average" result, that a sequence of data that is either below or above the average means that a particular result "is due". The examples are things like rolli
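A quick sanity-check simulation of the point above, under assumed settings (a fair die, runs of five consecutive non-sixes): the next roll is still a six about 1/6 of the time.

set.seed(7)
rolls  <- sample(1:6, 1e6, replace = TRUE)
no_six <- as.integer(rolls != 6)
prev5  <- stats::filter(no_six, rep(1, 5), sides = 1)   # number of non-sixes among the last 5 rolls
idx    <- which(prev5 == 5) + 1                          # positions right after 5 straight non-sixes
idx    <- idx[idx <= length(rolls)]
mean(rolls[idx] == 6)                                    # ~ 1/6: the six is never "due"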
3,245
What are the worst (commonly adopted) ideas/principles in statistics?
My favorite stats malpractice: permuting features instead of samples in a permutation test. In genomics, it's common to get a big list of differentially expressed, or differentially methylated, or differentially accessible genes (or similar). Often this is full of unfamiliar items, because nobody knows the literature on all 30k human genes, let alone transcript variants or non-coding regions. So, it's common to interpret these lists by using tools like Enrichr to test for overlap with databases of biological systems or previous experiments. Most such analyses yield p-values assuming that features (genes or transcripts) are exchangeable under some null hypothesis. This null hypothesis is much more restrictive than it seems at first, and I've never seen a case where it's a) biologically realistic or b) defended by any sort of diagnostic. (Fortunately, there are tools that don't make this mistake. Look up MAST or CAMERA.)
What are the worst (commonly adopted) ideas/principles in statistics?
My favorite stats malpractice: permuting features instead of samples in a permutation test. In genomics, it's common to get a big list of differentially expressed, or differentially methylated, or dif
What are the worst (commonly adopted) ideas/principles in statistics? My favorite stats malpractice: permuting features instead of samples in a permutation test. In genomics, it's common to get a big list of differentially expressed, or differentially methylated, or differentially accessible genes (or similar). Often this is full of unfamiliar items, because nobody knows the literature on all 30k human genes, let alone transcript variants or non-coding regions. So, it's common to interpret these lists by using tools like Enrichr to test for overlap with databases of biological systems or previous experiments. Most such analyses yield p-values assuming that features (genes or transcripts) are exchangeable under some null hypothesis. This null hypothesis is much more restrictive than it seems at first, and I've never seen a case where it's a) biologically realistic or b) defended by any sort of diagnostic. (Fortunately, there are tools that don't make this mistake. Look up MAST or CAMERA.)
What are the worst (commonly adopted) ideas/principles in statistics? My favorite stats malpractice: permuting features instead of samples in a permutation test. In genomics, it's common to get a big list of differentially expressed, or differentially methylated, or dif
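A hedged toy sketch of the distinction in the answer above (simulated data, no real genomics here): the null distribution of a gene-set score should come from permuting sample labels, which preserves gene-gene correlation; treating genes as exchangeable ignores that correlation and tends to be anti-conservative.

set.seed(3)
n_genes <- 1000; n_samp <- 20
grp  <- rep(c(0, 1), each = n_samp / 2)
expr <- matrix(rnorm(n_genes * n_samp), n_genes, n_samp)
expr[1:50, ] <- expr[1:50, ] + matrix(rnorm(n_samp), 50, n_samp, byrow = TRUE)  # correlated gene set, no true group effect

set_idx   <- 1:50
set_score <- function(labels) {
  mean(rowMeans(expr[set_idx, labels == 1]) - rowMeans(expr[set_idx, labels == 0]))
}
obs <- set_score(grp)

null_samp <- replicate(2000, set_score(sample(grp)))         # correct: permute sample labels
mean(abs(null_samp) >= abs(obs))                              # a valid p-value for this true-null data set

gene_diff <- rowMeans(expr[, grp == 1]) - rowMeans(expr[, grp == 0])
null_gene <- replicate(2000, mean(sample(gene_diff, 50)))     # mistake: treat genes as exchangeable
mean(abs(null_gene) >= abs(obs))                              # typically much smaller, i.e. spurious "enrichment"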
3,246
What are the worst (commonly adopted) ideas/principles in statistics?
Examining the t-test for each variable in a regression, but not the F-tests for multiple variables. A common practice in many fields that apply statistics is to use a regression with many covariates in order to determine the effect of the covariates on the outcome(s) of interest. In these studies it is common to use a t-test for each of the covariates in order to determine whether we can say that this variable has an effect on the outcome or not. (I'm putting aside the issue of how to identify a causal relation ("effect") - for now let's assume that there are reasonable identification assumptions. Or alternatively, the researcher is interested only in finding correlation; I just find it easier to speak of an "effect".) It could be that two or more variables are highly correlated, and as a result including them both in the regression will yield a high p-value in each of their t-tests, yet examining their combined contribution to the model with an F-test may show that these variables, or at least one of them, contribute a great deal to the model. Some studies do not check for this, and may therefore disregard some very important factors that affect the outcome variable, because they rely only on t-tests.
What are the worst (commonly adopted) ideas/principles in statistics?
Examining the t-test for each variable in a regression, but not the F-tests for multiple variables. A common practice in many fields that apply statistics, is to use a regression with many covariates
What are the worst (commonly adopted) ideas/principles in statistics? Examining the t-test for each variable in a regression, but not the F-tests for multiple variables. A common practice in many fields that apply statistics is to use a regression with many covariates in order to determine the effect of the covariates on the outcome(s) of interest. In these studies it is common to use a t-test for each of the covariates in order to determine whether we can say that this variable has an effect on the outcome or not. (I'm putting aside the issue of how to identify a causal relation ("effect") - for now let's assume that there are reasonable identification assumptions. Or alternatively, the researcher is interested only in finding correlation; I just find it easier to speak of an "effect".) It could be that two or more variables are highly correlated, and as a result including them both in the regression will yield a high p-value in each of their t-tests, yet examining their combined contribution to the model with an F-test may show that these variables, or at least one of them, contribute a great deal to the model. Some studies do not check for this, and may therefore disregard some very important factors that affect the outcome variable, because they rely only on t-tests.
What are the worst (commonly adopted) ideas/principles in statistics? Examining the t-test for each variable in a regression, but not the F-tests for multiple variables. A common practice in many fields that apply statistics, is to use a regression with many covariates
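A small R illustration of the answer above with made-up data: two nearly collinear predictors can each fail their individual t-test while the F-test for the pair is overwhelming.

set.seed(11)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)                # nearly collinear with x1
y  <- 2 * x1 + rnorm(n)                        # the pair clearly matters

fit <- lm(y ~ x1 + x2)
summary(fit)$coefficients[, "Pr(>|t|)"]        # the individual t-tests are often unimpressive
anova(lm(y ~ 1), fit)                          # joint F-test for the pair: highly significant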
3,247
What are the worst (commonly adopted) ideas/principles in statistics?
Post-selection inference, i.e. model building and doing inference on the same data set where the inference does not account for the model building stage. Either: Given a data set and no predetermined model, a model is built based on the patterns found in the data set. Or: Given a data set and a model, the model is often found to be inadequate. The model is adjusted based on the patterns in the data set. Then: The model is used for inference such as null hypothesis significance testing. The problem: The inference cannot be taken at face value as it is conditional on the data set due to the model-building stage. Unfortunately, this fact often gets neglected in practice.
What are the worst (commonly adopted) ideas/principles in statistics?
Post-selection inference, i.e. model building and doing inference on the same data set where the inference does not account for the model building stage. Either: Given a data set and no predetermined
What are the worst (commonly adopted) ideas/principles in statistics? Post-selection inference, i.e. model building and doing inference on the same data set where the inference does not account for the model building stage. Either: Given a data set and no predetermined model, a model is built based on the patterns found in the data set. Or: Given a data set and a model, the model is often found to be inadequate. The model is adjusted based on the patterns in the data set. Then: The model is used for inference such as null hypothesis significance testing. The problem: The inference cannot be taken at face value as it is conditional on the data set due to the model-building stage. Unfortunately, this fact often gets neglected in practice.
What are the worst (commonly adopted) ideas/principles in statistics? Post-selection inference, i.e. model building and doing inference on the same data set where the inference does not account for the model building stage. Either: Given a data set and no predetermined
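A compact simulation of the problem described above, under assumed settings (20 pure-noise candidate predictors; the "model building" step keeps the one most correlated with the response): the naive post-selection p-value rejects far more often than the nominal 5%.

set.seed(5)
pvals <- replicate(2000, {
  n <- 100; k <- 20
  X <- matrix(rnorm(n * k), n, k)
  y <- rnorm(n)                                    # no predictor has any real effect
  best <- which.max(abs(cor(X, y)))                # model building on the same data
  summary(lm(y ~ X[, best]))$coefficients[2, 4]    # naive p-value for the selected predictor
})
mean(pvals < 0.05)                                 # far above 0.05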
3,248
What are the worst (commonly adopted) ideas/principles in statistics?
Calling type I assertion probability the "type I error rate" when it is neither a rate nor the probability of making an error. It is the probability of making an assertion of an effect when there is no effect. Calling type I assertion probability the "false positive rate" when it is not the probability of a false positive result. It is the probability of making an assertion of an effect in a setting where any assertion of an effect is by definition wrong. The probability of a false + result is the probability that an effect is not there given that the evidence was + for such a finding. That is a Bayesian posterior probability, not $\alpha$. Thinking that controlling $\alpha$ has to do with limiting decision errors.
What are the worst (commonly adopted) ideas/principles in statistics?
Calling type I assertion probability the "type I error rate" when it is neither a rate nor the probability of making an error. It is the probability of making an assertion of an effect when there is
What are the worst (commonly adopted) ideas/principles in statistics? Calling type I assertion probability the "type I error rate" when it is neither a rate nor the probability of making an error. It is the probability of making an assertion of an effect when there is no effect. Calling type I assertion probability the "false positive rate" when it is not the probability of a false positive result. It is the probability of making an assertion of an effect in a setting where any assertion of an effect is by definition wrong. The probability of a false + result is the probability that an effect is not there given that the evidence was + for such a finding. That is a Bayesian posterior probability, not $\alpha$. Thinking that controlling $\alpha$ has to do with limiting decision errors.
What are the worst (commonly adopted) ideas/principles in statistics? Calling type I assertion probability the "type I error rate" when it is neither a rate nor the probability of making an error. It is the probability of making an assertion of an effect when there is
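A tiny worked example of the distinction, with assumed inputs (10% of tested hypotheses have a real effect, 80% power, $\alpha = 0.05$): the probability that a positive finding is false is a posterior quantity, not $\alpha$.

prior_effect <- 0.10                                    # assumed share of tested hypotheses with a real effect
power        <- 0.80                                    # assumed power when the effect is real
alpha        <- 0.05
p_positive   <- power * prior_effect + alpha * (1 - prior_effect)
alpha * (1 - prior_effect) / p_positive                 # ~ 0.36 = P(no effect | assertion made), not 0.05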
3,249
Statistics interview questions
Not sure what the job is, but I think "Explain x to a novice" would probably be good- a) because they will probably need to do this in the job b) it's a good test of understanding, I reckon.
Statistics interview questions
Not sure what the job is, but I think "Explain x to a novice" would probably be good- a) because they will probably need to do this in the job b) it's a good test of understanding, I reckon.
Statistics interview questions Not sure what the job is, but I think "Explain x to a novice" would probably be good- a) because they will probably need to do this in the job b) it's a good test of understanding, I reckon.
Statistics interview questions Not sure what the job is, but I think "Explain x to a novice" would probably be good- a) because they will probably need to do this in the job b) it's a good test of understanding, I reckon.
3,250
Statistics interview questions
Standard Q where I work is along the lines of: Have a look at this output of a multiple logistic regression from a statistical package you claim to have used (preferably one we use too). XXX is the independent variable of principal interest. How would you interpret the results for a colleague with knowledge of the subject matter but no formal statistical training? (If necessary prompt for separate interpretation of point estimate, CI, p-value).
Statistics interview questions
Standard Q where I work is along the lines of: Have a look at this output of a multiple logistic regression from a statistical package you claim to have used (preferably one we use too). XXX is the i
Statistics interview questions Standard Q where I work is along the lines of: Have a look at this output of a multiple logistic regression from a statistical package you claim to have used (preferably one we use too). XXX is the independent variable of principal interest. How would you interpret the results for a colleague with knowledge of the subject matter but no formal statistical training? (If necessary prompt for separate interpretation of point estimate, CI, p-value).
Statistics interview questions Standard Q where I work is along the lines of: Have a look at this output of a multiple logistic regression from a statistical package you claim to have used (preferably one we use too). XXX is the i
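One hedged sketch of the kind of translation being asked for, using R's built-in mtcars data as a stand-in (the real interview would use whatever package and output are at hand): report odds ratios and intervals rather than raw logit coefficients.

fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)      # toy outcome: manual transmission
summary(fit)$coefficients                                        # point estimates, SEs, p-values
round(cbind(OR = exp(coef(fit)), exp(confint.default(fit))), 2)  # odds ratios with Wald CIs
# plain-language reading: an odds ratio below 1 for wt means heavier cars are less likely
# to have a manual transmission, holding hp fixed; the interval is the range of odds ratios
# reasonably compatible with the data; the p-value says how surprising an estimate this
# extreme would be if the true odds ratio were exactly 1.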
3,251
Statistics interview questions
You might also want to reflect on whether the interview is the best medium for measuring the construct of interest. If you want to measure prior knowledge of probability or statistics, you might be better off relying more on a written test. You can ask more questions, and thus increase reliability of measurement. It's more standardised both in administration, and in scoring. And once the instrument is developed, it probably uses fewer resources to administer. You could then use the interview as a more focussed tool looking at factors such as verbal and interpersonal skills.
Statistics interview questions
You might also want to reflect on whether the interview is the best medium for measuring the construct of interest. If you want to measure prior knowledge of probability or statistics, you might be be
Statistics interview questions You might also want to reflect on whether the interview is the best medium for measuring the construct of interest. If you want to measure prior knowledge of probability or statistics, you might be better off relying more on a written test. You can ask more questions, and thus increase reliability of measurement. It's more standardised both in administration, and in scoring. And once the instrument is developed, it probably uses fewer resources to administer. You could then use the interview as a more focussed tool looking at factors such as verbal and interpersonal skills.
Statistics interview questions You might also want to reflect on whether the interview is the best medium for measuring the construct of interest. If you want to measure prior knowledge of probability or statistics, you might be be
3,252
Statistics interview questions
Many questions/answers on this site could give ideas for good questions. I will give a list with some such links that I think are good. Posts where I answered are overrepresented, because I know those posts better, not because they necessarily are the best! I give short comments to each link, so you can decide if you want to follow the link. What is the intuition behind SVD? "Can you explain to one of our clients how the SVD works?" Maximum Likelihood Estimation (MLE) in layman terms "Can you explain in nontechnical language the idea of maximum likelihood estimation?" Taleb and the Black Swan "Tell me, what is a black swan, and why is that relevant? When is it relevant?" Statistical inference when the sample "is" the population "What can you say about statistical inference when the sample is the whole population?" Goodness of fit and which model to choose linear regression or Poisson "We have a regression problem where the response is a count variable. Which would you choose in this context, ordinary least squares or Poisson regression (or maybe some other)? Explain your choice, what is the main differences between these models?" What is the difference between finite and infinite variance "Can you explain, in as simple a language as is possible, what it means for a random variable to have infinite expectation or infinite variance? What is the practical importance of this distinction? Explain with an example." What are modern, easily used alternatives to stepwise regression? "How would you build a complex regression model when there are many possible predictor variables? Describe different possible strategies, and tell about the problems with each of them" How to deal with perfect separation in logistic regression? "What is the problem of separation in logistic regression, its causes, symptoms? What can you do to solve it, if it is really a problem?" Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite? and What does a non positive definite covariance matrix tell me about my data? "Explain why a covariance matrix must be positive (semi) definite, and what that means. How can that fact be used?" What are the multidimensional versions of median "Can you propose some way to generalize the median to multivariate data?" Interpreting interaction terms in logit regression with categorical variables and What are best practices in identifying interaction effects? and Two negative main effects yet positive interaction effect? and Including the interaction but not the main effects in a model and How to interpret main effects when the interaction effect is not significant? "Explain what is meant by interaction in regression models. Specifically, what does it mean if interaction is significant while main effects are not? Is there some difference in interpretation of interaction between ordinary linear regression and logistic regression?" What could be the reason for using square root transformation on data? and Appropriate data transformation "When, how and why do you transform the response variable in a regression (or ANOVA) model? Are there any alternatives? Can I trust ANOVA results for a non-normally distributed DV? "How would you treat an ANOVA with non-normal residuals? Why is statistics useful when many things that matter are one shot things? How can I efficiently model the sum of Bernoulli random variables? When to use generalized estimating equations vs. mixed effects models? 
What is happening here, when I use squared loss in logistic regression setting? "Why do we use maximum likelihood for logistic regression? Why not least squares?" What is the difference between linear regression on y with x and x with y?
Statistics interview questions
Many questions/answers on this site could give ideas for good questions. I will give a list with some such links that I think are good. Posts where I answered are overrepresented, because I know thos
Statistics interview questions Many questions/answers on this site could give ideas for good questions. I will give a list with some such links that I think are good. Posts where I answered are overrepresented, because I know those posts better, not because they necessarily are the best! I give short comments to each link, so you can decide if you want to follow the link. What is the intuition behind SVD? "Can you explain to one of our clients how the SVD works?" Maximum Likelihood Estimation (MLE) in layman terms "Can you explain in nontechnical language the idea of maximum likelihood estimation?" Taleb and the Black Swan "Tell me, what is a black swan, and why is that relevant? When is it relevant?" Statistical inference when the sample "is" the population "What can you say about statistical inference when the sample is the whole population?" Goodness of fit and which model to choose linear regression or Poisson "We have a regression problem where the response is a count variable. Which would you choose in this context, ordinary least squares or Poisson regression (or maybe some other)? Explain your choice, what is the main differences between these models?" What is the difference between finite and infinite variance "Can you explain, in as simple a language as is possible, what it means for a random variable to have infinite expectation or infinite variance? What is the practical importance of this distinction? Explain with an example." What are modern, easily used alternatives to stepwise regression? "How would you build a complex regression model when there are many possible predictor variables? Describe different possible strategies, and tell about the problems with each of them" How to deal with perfect separation in logistic regression? "What is the problem of separation in logistic regression, its causes, symptoms? What can you do to solve it, if it is really a problem?" Why does correlation matrix need to be positive semi-definite and what does it mean to be or not to be positive semi-definite? and What does a non positive definite covariance matrix tell me about my data? "Explain why a covariance matrix must be positive (semi) definite, and what that means. How can that fact be used?" What are the multidimensional versions of median "Can you propose some way to generalize the median to multivariate data?" Interpreting interaction terms in logit regression with categorical variables and What are best practices in identifying interaction effects? and Two negative main effects yet positive interaction effect? and Including the interaction but not the main effects in a model and How to interpret main effects when the interaction effect is not significant? "Explain what is meant by interaction in regression models. Specifically, what does it mean if interaction is significant while main effects are not? Is there some difference in interpretation of interaction between ordinary linear regression and logistic regression?" What could be the reason for using square root transformation on data? and Appropriate data transformation "When, how and why do you transform the response variable in a regression (or ANOVA) model? Are there any alternatives? Can I trust ANOVA results for a non-normally distributed DV? "How would you treat an ANOVA with non-normal residuals? Why is statistics useful when many things that matter are one shot things? How can I efficiently model the sum of Bernoulli random variables? When to use generalized estimating equations vs. mixed effects models? 
What is happening here, when I use squared loss in logistic regression setting? "Why do we use maximum likelihood for logistic regression? Why not least squares?" What is the difference between linear regression on y with x and x with y?
Statistics interview questions Many questions/answers on this site could give ideas for good questions. I will give a list with some such links that I think are good. Posts where I answered are overrepresented, because I know thos
3,253
Statistics interview questions
Two questions I've been asked: 1) You fit a multiple regression to examine the effect of a particular variable a worker in another department is interested in. The variable comes back insignificant, but your co-worker says that this is impossible as it is known to have an effect. What would you say/do? 2) You have 1000 variables and 100 observations. You would like to find the significant variables for a particular response. What would you do?
Statistics interview questions
Two questions I've been asked: 1) You fit a multiple regression to examine the effect of a particular variable a worker in another department is interested in. The variable comes back insignificant,
Statistics interview questions Two questions I've been asked: 1) You fit a multiple regression to examine the effect of a particular variable a worker in another department is interested in. The variable comes back insignificant, but your co-worker says that this is impossible as it is known to have an effect. What would you say/do? 2) You have 1000 variables and 100 observations. You would like to find the significant variables for a particular response. What would you do?
Statistics interview questions Two questions I've been asked: 1) You fit a multiple regression to examine the effect of a particular variable a worker in another department is interested in. The variable comes back insignificant,
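For question 2 (1000 variables, 100 observations), one commonly given answer is penalized regression rather than 1000 separate significance tests. A hedged sketch, assuming the glmnet package is available (simulated data; "lambda.1se" is one common rule of thumb):

library(glmnet)
set.seed(9)
n <- 100; p <- 1000
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))             # only the first 5 variables truly matter
y <- as.vector(X %*% beta + rnorm(n))

cvfit <- cv.glmnet(X, y, alpha = 1)             # lasso path with 10-fold cross-validation
b <- as.numeric(coef(cvfit, s = "lambda.1se"))[-1]
which(b != 0)                                    # hopefully (most of) 1:5, perhaps plus a few false picks

Other defensible answers include univariate screening with false-discovery-rate control or domain-driven dimension reduction; the main point is recognizing that 1000 naive significance tests on 100 observations will not do.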
3,254
Statistics interview questions
Here is a big data set. What is your plan for dealing with outliers? How about missing values? How about transformations? Can they deal with real-world data?
Statistics interview questions
Here is a big data set. What is your plan for dealing with outliers? How about missing values? How about transformations? Can they deal with real-world data?
Statistics interview questions Here is a big data set. What is your plan for dealing with outliers? How about missing values? How about transformations? Can they deal with real-world data?
Statistics interview questions Here is a big data set. What is your plan for dealing with outliers? How about missing values? How about transformations? Can they deal with real-world data?
3,255
Statistics interview questions
I was asked once how I would explain the relevance of the central limit theorem to a class of freshmen in the social sciences that barely have knowledge about statistics.
Statistics interview questions
I was asked once how I would explain the relevance of the central limit theorem to a class of freshmen in the social sciences that barely have knowledge about statistics.
Statistics interview questions I was asked once how I would explain the relevance of the central limit theorem to a class of freshmen in the social sciences that barely have knowledge about statistics.
Statistics interview questions I was asked once how I would explain the relevance of the central limit theorem to a class of freshmen in the social sciences that barely have knowledge about statistics.
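One standard classroom demonstration (a sketch; the lecture itself would be mostly words and pictures): averages of skewed draws look more and more normal as the sample size grows.

set.seed(1)
one_mean <- function(n) mean(rexp(n, rate = 1))           # a clearly skewed parent distribution
par(mfrow = c(1, 3))
for (n in c(1, 5, 50)) {
  hist(replicate(5000, one_mean(n)), breaks = 40,
       main = paste("n =", n), xlab = "sample mean")
}
# the histograms become increasingly bell-shaped, centred at the true mean 1,
# with spread shrinking roughly like 1/sqrt(n)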
3,256
Statistics interview questions
How do you prevent over-fitting when you are creating a statistical model? Good answer: cross-validation
Statistics interview questions
How do you prevent over-fitting when you are creating a statistical model? Good answer: cross-validation
Statistics interview questions How do you prevent over-fitting when you are creating a statistical model? Good answer: cross-validation
Statistics interview questions How do you prevent over-fitting when you are creating a statistical model? Good answer: cross-validation
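A bare-bones sketch of the "good answer" in base R (simulated data, 10 folds, polynomial degree as the tuning knob): the out-of-fold error, not the training fit, is what guards against overfitting.

set.seed(2)
n <- 200
x <- runif(n, -2, 2)
y <- sin(x) + rnorm(n, sd = 0.3)
folds <- sample(rep(1:10, length.out = n))                # random 10-fold assignment

cv_mse <- function(degree) {
  mean(sapply(1:10, function(k) {
    fit  <- lm(y ~ poly(x, degree), subset = folds != k)
    pred <- predict(fit, newdata = data.frame(x = x[folds == k]))
    mean((y[folds == k] - pred)^2)
  }))
}
round(sapply(1:8, cv_mse), 3)                             # error stops improving once the model is flexible enough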
3,257
Statistics interview questions
How do you numericize something that is not numerical? Example, "Automatic Feature Extraction for Classifying Audio Data" Rationale: Can they figure out how to analyze something statistically that is not already in a big table?
Statistics interview questions
How do you numericize something that is not numerical? Example, "Automatic Feature Extraction for Classifying Audio Data" Rationale: Can they figure out how to analyze something statistically that is
Statistics interview questions How do you numericize something that is not numerical? Example, "Automatic Feature Extraction for Classifying Audio Data" Rationale: Can they figure out how to analyze something statistically that is not already in a big table?
Statistics interview questions How do you numericize something that is not numerical? Example, "Automatic Feature Extraction for Classifying Audio Data" Rationale: Can they figure out how to analyze something statistically that is
3,258
Statistics interview questions
I often ask "how would you define/explain what forecasting is?" The answer to that type of very general question helps me to see whether people are tied to one particular case of forecasting. There is no single right answer, but answering it concisely during an interview is not always easy :)
Statistics interview questions
I often ask "how would you define/explain what forecasting is?" Answer to that type of very general question helps me to see if people are connected to a particular case of forecasting. There is not
Statistics interview questions I often ask "how would you define/explain what forecasting is?" The answer to that type of very general question helps me to see whether people are tied to one particular case of forecasting. There is no single right answer, but answering it concisely during an interview is not always easy :)
Statistics interview questions I often ask "how would you define/explain what forecasting is?" Answer to that type of very general question helps me to see if people are connected to a particular case of forecasting. There is not
3,259
Statistics interview questions
For an observational data context: Consider this regression model applied to this substantive problem. What, if anything, in it can be interpreted causally? [Further probe] What would you need to learn to change your opinion?
Statistics interview questions
For an observational data context: Consider this regression model applied to this substantive problem. What, if anything, in it can be interpreted causally? [Further probe] What would you need to l
Statistics interview questions For an observational data context: Consider this regression model applied to this substantive problem. What, if anything, in it can be interpreted causally? [Further probe] What would you need to learn to change your opinion?
Statistics interview questions For an observational data context: Consider this regression model applied to this substantive problem. What, if anything, in it can be interpreted causally? [Further probe] What would you need to l
3,260
Statistics interview questions
How would you count the number of sandalwood trees in Bangalore?
Statistics interview questions
How would you count the number of sandalwood trees in Bangalore?
Statistics interview questions How would you count the number of sandalwood trees in Bangalore?
Statistics interview questions How would you count the number of sandalwood trees in Bangalore?
3,261
Statistics interview questions
Under the heading Causation vs correlation: It's common to use customer/user engagement as features for a predictive model. For example, people who click on this button are more likely to subscribe than people who don't. People who shop on Mondays are more likely to shop again than those who shop on Tuesdays. If we take this to an extreme: Users who click "purchase" are more likely to purchase a product than users who don't click purchase. But obviously that's not very helpful in explaining why some users subscribe and some do not. How would you go about balancing using customer features which explain why they subscribe vs. those that are highly correlated with subscription, but are necessary to accomplish the task?
Statistics interview questions
Under the heading Causation vs correlation: It's common to use customer/user engagement as features for a predictive model. For example, people who click on this button are more likely to subscribe tha
Statistics interview questions Under the heading Causation vs correlation: It's common to use customer/user engagement as features for a predictive model. For example, people who click on this button are more likely to subscribe than people who don't. People who shop on Mondays are more likely to shop again than those who shop on Tuesdays. If we take this to an extreme: Users who click "purchase" are more likely to purchase a product than users who don't click purchase. But obviously that's not very helpful in explaining why some users subscribe and some do not. How would you go about balancing using customer features which explain why they subscribe vs. those that are highly correlated with subscription, but are necessary to accomplish the task?
Statistics interview questions Under the heading Causation vs correlation: It's common to use customer/user engagement as features for a predictive model. For example, people who click on this button are more likely to subscribe tha
3,262
Statistics interview questions
A lot of the questions we ask are similar to those that have already been described. But some that I haven't read yet, that are used: you might be asked to sketch out a program on a whiteboard to do something like: simulate a dice rolling or other probability problem, or calculate a series of prime numbers (e.g. all the prime numbers that are less than 1,000,000) - you'd be able to do this in whatever language you wanted, but most people choose R, and some choose Python (I believe), but I guess you could choose Stata, SAS, SPSS, Matlab, etc. You'd probably be asked questions to probe the depth of your knowledge of your programming language of choice - why use apply instead of a for loop in R, for example. You also might be asked to design an experiment or other study to investigate something - usually something practical - sometimes this will be related to the work that we do, but often not. (You're not supposed to have knowledge of the work that we do, but you should be able to grasp the gist of a problem you haven't heard of and speculate on it intelligently, even if given certain domain knowledge you'd know that was wrong - that's OK, you're not expected to have domain knowledge). You might be asked to take things like power into account.
Statistics interview questions
A lot of the questions we ask are similar to those that have already been described. But some that I haven't read yet, that are used: you might be asked to sketch out a program on a whiteboard to do s
Statistics interview questions A lot of the questions we ask are similar to those that have already been described. But some that I haven't read yet, that are used: you might be asked to sketch out a program on a whiteboard to do something like: simulate a dice rolling or other probability problem, or calculate a series of prime numbers (e.g. all the prime numbers that are less than 1,000,000) - you'd be able to do this in whatever language you wanted, but most people choose R, and some choose Python (I believe), but I guess you could choose Stata, SAS, SPSS, Matlab, etc. You'd probably be asked questions to probe the depth of your knowledge of your programming language of choice - why use apply instead of a for loop in R, for example. You also might be asked to design an experiment or other study to investigate something - usually something practical - sometimes this will be related to the work that we do, but often not. (You're not supposed to have knowledge of the work that we do, but you should be able to grasp the gist of a problem you haven't heard of and speculate on it intelligently, even if given certain domain knowledge you'd know that was wrong - that's OK, you're not expected to have domain knowledge). You might be asked to take things like power into account.
Statistics interview questions A lot of the questions we ask are similar to those that have already been described. But some that I haven't read yet, that are used: you might be asked to sketch out a program on a whiteboard to do s
3,263
Statistics interview questions
Here is a TinkerToy set. Show me how Euclidean distance works in three dimensions. Now show me how multiple regression works. Can they explain how statistics works in the physical world?
Statistics interview questions
Here is a TinkerToy set. Show me how Euclidean distance works in three dimensions. Now show me how multiple regression works. Can they explain how statistics works in the physical world?
Statistics interview questions Here is a TinkerToy set. Show me how Euclidean distance works in three dimensions. Now show me how multiple regression works. Can they explain how statistics works in the physical world?
Statistics interview questions Here is a TinkerToy set. Show me how Euclidean distance works in three dimensions. Now show me how multiple regression works. Can they explain how statistics works in the physical world?
3,264
Statistics interview questions
We are running a customer service centre. We are getting 1 million calls per month. How do we reduce it to ten thousand ?
Statistics interview questions
We are running a customer service centre. We are getting 1 million calls per month. How do we reduce it to ten thousand ?
Statistics interview questions We are running a customer service centre. We are getting 1 million calls per month. How do we reduce it to ten thousand ?
Statistics interview questions We are running a customer service centre. We are getting 1 million calls per month. How do we reduce it to ten thousand ?
3,265
Statistics interview questions
When testing the independence of two categorical variables, if some expected cell counts are small (below about 5), the chi-squared approximation is unreliable, so Fisher's exact test is used instead.
Statistics interview questions
When testing the independence of two categorical variables, if some expected cell counts are small (below about 5), the chi-squared approximation is unreliable, so Fisher's exact test is used instead.
Statistics interview questions When testing the independence of two categorical variables, if some expected cell counts are small (below about 5), the chi-squared approximation is unreliable, so Fisher's exact test is used instead.
Statistics interview questions When testing the independence of two categorical variables, if some expected cell counts are small (below about 5), the chi-squared approximation is unreliable, so Fisher's exact test is used instead.
3,266
Statistics interview questions
The average paid attendance at Yankees games last year was 55,000. You randomly ask a bunch of people in NYC if they went to a Yankees game last season, and if they did, you record the paid attendance. What is the average paid attendance for the games that the people you asked who went to a game attended? I'll give you hint for my answer (hint was not provided): length-biased sampling. I scored a home run on that, but it wasn't enough to win the game, ha ha. Note: I mentioned many caveats pertaining to how the sampling was done, and the interviewer told me to disregard all of them.
Statistics interview questions
The average paid attendance at Yankees games last year was 55,000. You randomly ask a bunch of people in NYC if they went to a Yankees game last season, and if they did, you record the paid attendanc
Statistics interview questions The average paid attendance at Yankees games last year was 55,000. You randomly ask a bunch of people in NYC if they went to a Yankees game last season, and if they did, you record the paid attendance. What is the average paid attendance for the games that the people you asked who went to a game attended? I'll give you hint for my answer (hint was not provided): length-biased sampling. I scored a home run on that, but it wasn't enough to win the game, ha ha. Note: I mentioned many caveats pertaining to how the sampling was done, and the interviewer told me to disregard all of them.
Statistics interview questions The average paid attendance at Yankees games last year was 55,000. You randomly ask a bunch of people in NYC if they went to a Yankees game last season, and if they did, you record the paid attendanc
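A small simulation of the length-biased (size-biased) sampling idea behind the answer above, with made-up attendance figures: averaging over surveyed fans over-weights the well-attended games, so it exceeds the per-game average.

set.seed(8)
games      <- 81                                           # hypothetical home schedule
attendance <- round(runif(games, min = 30000, max = 80000))
mean(attendance)                                           # per-game average, ~ 55,000

fan_game <- sample(1:games, 1e5, replace = TRUE, prob = attendance)   # fans sampled in proportion to attendance
mean(attendance[fan_game])                                 # fans' average: noticeably higher

In expectation the fans' average equals E[A^2]/E[A] = mean + variance/mean, which is why it is pulled upward.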
3,267
What is so cool about de Finetti's representation theorem?
De Finetti's Representation Theorem gives in a single take, within the subjectivistic interpretation of probabilities, the raison d'être of statistical models and the meaning of parameters and their prior distributions. Suppose that the random variables $X_1,\dots,X_n$ represent the results of successive tosses of a coin, with values $1$ and $0$ corresponding to the results "Heads" and "Tails", respectively. Analyzing, within the context of a subjectivistic interpretation of the probability calculus, the meaning of the usual frequentist model under which the $X_i$'s are independent and identically distributed, De Finetti observed that the condition of independence would imply, for example, that $$ P\{X_n=x_n\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = P\{X_n=x_n\} \, , $$ and, therefore, the results of the first $n-1$ tosses would not change my uncertainty about the result of $n$-th toss. For example, if I believe $\textit{a priori}$ that this is a balanced coin, then, after getting the information that the first $999$ tosses turned out to be "Heads", I would still believe, conditionally on that information, that the probability of getting "Heads" on toss 1000 is equal to $1/2$. Effectively, the hypothesis of independence of the $X_i$'s would imply that it is impossible to learn anything about the coin by observing the results of its tosses. This observation led De Finetti to the introduction of a condition weaker than independence that resolves this apparent contradiction. The key to De Finetti's solution is a kind of distributional symmetry known as exchangeability. $\textbf{Definition.}$ For a given finite set $\{X_i\}_{i=1}^n$ of random objects, let $\mu_{X_1,\dots,X_n}$ denote their joint distribution. This finite set is exchangeable if $\mu_{X_1,\dots,X_n} = \mu_{X_{\pi(1)},\dots,X_{\pi(n)}}$, for every permutation $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$. A sequence $\{X_i\}_{i=1}^\infty$ of random objects is exchangeable if each of its finite subsets are exchangeable. Supposing only that the sequence of random variables $\{X_i\}_{i=1}^\infty$ is exchangeable, De Finetti proved a notable theorem that sheds light on the meaning of commonly used statistical models. In the particular case when the $X_i$'s take the values $0$ and $1$, De Finetti's Representation Theorem says that $\{X_i\}_{i=1}^\infty$ is exchangeable if and only if there is a random variable $\Theta:\Omega\to[0,1]$, with distribution $\mu_\Theta$, such that $$ P\{X_1=x_1,\dots,X_n=x_n\} = \int_{[0,1]} \theta^s(1-\theta)^{n-s}\,d\mu_\Theta(\theta) \, , $$ in which $s=\sum_{i=1}^n x_i$. Moreover, we have that $$ \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i \xrightarrow[n\to\infty]{} \Theta \qquad \textrm{almost surely}, $$ which is known as De Finetti's Strong Law of Large Numbers. This Representation Theorem shows how statistical models emerge in a Bayesian context: under the hypothesis of exchangeability of the observables $\{X_i\}_{i=1}^\infty$, $\textbf{there is}$ a $\textit{parameter}$ $\Theta$ such that, given the value of $\Theta$, the observables are $\textit{conditionally}$ independent and identically distributed. Moreover, De Finetti's Strong law shows that our prior opinion about the unobservable $\Theta$, represented by the distribution $\mu_\Theta$, is the opinion about the limit of $\bar{X}_n$, before we have information about the values of the realizations of any of the $X_i$'s. 
The parameter $\Theta$ plays the role of a useful subsidiary construction, which allows us to obtain conditional probabilities involving only observables through relations like $$ P\{X_n=1\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = \mathrm{E}\left[\Theta\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\right] \, . $$
What is so cool about de Finetti's representation theorem?
De Finetti's Representation Theorem gives in a single take, within the subjectivistic interpretation of probabilities, the raison d'être of statistical models and the meaning of parameters and their p
What is so cool about de Finetti's representation theorem? De Finetti's Representation Theorem gives in a single take, within the subjectivistic interpretation of probabilities, the raison d'être of statistical models and the meaning of parameters and their prior distributions. Suppose that the random variables $X_1,\dots,X_n$ represent the results of successive tosses of a coin, with values $1$ and $0$ corresponding to the results "Heads" and "Tails", respectively. Analyzing, within the context of a subjectivistic interpretation of the probability calculus, the meaning of the usual frequentist model under which the $X_i$'s are independent and identically distributed, De Finetti observed that the condition of independence would imply, for example, that $$ P\{X_n=x_n\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = P\{X_n=x_n\} \, , $$ and, therefore, the results of the first $n-1$ tosses would not change my uncertainty about the result of $n$-th toss. For example, if I believe $\textit{a priori}$ that this is a balanced coin, then, after getting the information that the first $999$ tosses turned out to be "Heads", I would still believe, conditionally on that information, that the probability of getting "Heads" on toss 1000 is equal to $1/2$. Effectively, the hypothesis of independence of the $X_i$'s would imply that it is impossible to learn anything about the coin by observing the results of its tosses. This observation led De Finetti to the introduction of a condition weaker than independence that resolves this apparent contradiction. The key to De Finetti's solution is a kind of distributional symmetry known as exchangeability. $\textbf{Definition.}$ For a given finite set $\{X_i\}_{i=1}^n$ of random objects, let $\mu_{X_1,\dots,X_n}$ denote their joint distribution. This finite set is exchangeable if $\mu_{X_1,\dots,X_n} = \mu_{X_{\pi(1)},\dots,X_{\pi(n)}}$, for every permutation $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$. A sequence $\{X_i\}_{i=1}^\infty$ of random objects is exchangeable if each of its finite subsets are exchangeable. Supposing only that the sequence of random variables $\{X_i\}_{i=1}^\infty$ is exchangeable, De Finetti proved a notable theorem that sheds light on the meaning of commonly used statistical models. In the particular case when the $X_i$'s take the values $0$ and $1$, De Finetti's Representation Theorem says that $\{X_i\}_{i=1}^\infty$ is exchangeable if and only if there is a random variable $\Theta:\Omega\to[0,1]$, with distribution $\mu_\Theta$, such that $$ P\{X_1=x_1,\dots,X_n=x_n\} = \int_{[0,1]} \theta^s(1-\theta)^{n-s}\,d\mu_\Theta(\theta) \, , $$ in which $s=\sum_{i=1}^n x_i$. Moreover, we have that $$ \bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i \xrightarrow[n\to\infty]{} \Theta \qquad \textrm{almost surely}, $$ which is known as De Finetti's Strong Law of Large Numbers. This Representation Theorem shows how statistical models emerge in a Bayesian context: under the hypothesis of exchangeability of the observables $\{X_i\}_{i=1}^\infty$, $\textbf{there is}$ a $\textit{parameter}$ $\Theta$ such that, given the value of $\Theta$, the observables are $\textit{conditionally}$ independent and identically distributed. Moreover, De Finetti's Strong law shows that our prior opinion about the unobservable $\Theta$, represented by the distribution $\mu_\Theta$, is the opinion about the limit of $\bar{X}_n$, before we have information about the values of the realizations of any of the $X_i$'s. 
The parameter $\Theta$ plays the role of a useful subsidiary construction, which allows us to obtain conditional probabilities involving only observables through relations like $$ P\{X_n=1\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\} = \mathrm{E}\left[\Theta\mid X_1=x_1,\dots,X_{n-1}=x_{n-1}\right] \, . $$
What is so cool about de Finetti's representation theorem? De Finetti's Representation Theorem gives in a single take, within the subjectivistic interpretation of probabilities, the raison d'être of statistical models and the meaning of parameters and their p
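A short simulation in the spirit of the theorem (an assumed Beta(2, 2) prior, for illustration only): generate an exchangeable sequence by first drawing $\Theta$ and then tossing i.i.d. given it; the running mean settles on the realized $\Theta$, and the Beta-Binomial predictive probability tracks it.

set.seed(6)
a <- 2; b <- 2                           # assumed prior: Theta ~ Beta(2, 2)
theta <- rbeta(1, a, b)                  # one realization of the "parameter"
x     <- rbinom(1e4, size = 1, prob = theta)

running_mean <- cumsum(x) / seq_along(x)
c(theta = theta, xbar_10000 = tail(running_mean, 1))   # the running mean approaches the realized Theta

n <- 999; s <- sum(x[1:n])
(a + s) / (a + b + n)                    # P(X_1000 = 1 | first 999 tosses) = E[Theta | data]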
3,268
What is so cool about de Finetti's representation theorem?
Everything is mathematically correct in Zen's answer. However I disagree on some points. Please be aware that I don't claim/believe my point of view is the right one; on the contrary I feel these points are not entirely clear for me yet. These are somewhat philosophical questions that I like to discuss (and a good English exercise for me), and I am also interested in any advice. About the example with $999$ "Heads", Zen comments: "the hypothesis of independence of the $X_i$'s would imply that it is impossible to learn anything about the coin by observing the results of its tosses." This is not true from the frequentist perspective: learning about the coin means learning about $\theta$, which is possible by estimating (point-estimate or confidence interval) $\theta$ from the previous $999$ results. If the frequentist observes $999$ "Heads" then he/she concludes that $\theta$ is likely close to $1$, and so, consequently, is $\Pr(X_n=1)$. By the way, in this coin-tossing example, what is the random $\Theta$ ? Imagining that each of two people plays a coin-tossing game an infinite number of times with the same coin, why would they find a different $\theta = \bar X_\infty$ ? I have in mind that the characteristic of the coin-tossing is the fixed $\theta$ which is the common value of $\bar X_\infty$ for any gamer ("almost any gamer" for technical mathematical reasons). A more concrete example for which there's no interpretable random $\Theta$ is the case of a random sampling with replacement in a finite population of $0$ and $1$. About Schervish's book and the question raised by the OP, I think (quickly speaking) Schervish means that exchangeability is a "cool" assumption and then de Finetti's theorem is "cool" because it says that every exchangeable model has a parametric representation. Of course I totally agree. However if I assume an exchangeable model such as $(X_i\mid\Theta=\theta)\sim_\text{iid} \text{Bernoulli}(\theta)$ and $\Theta \sim \text{Beta}(a,b)$ then I would be interested in performing inference about $a$ and $b$, not about the realization of $\Theta$. If I am only interested in the realization of $\Theta$ then I don't see any interest in assuming exchangeability. It's late...
What is so cool about de Finetti's representation theorem?
You guys might be interested in a paper on this subject (journal subscription required for access - try accessing it from your university): O'Neill, B. (2011) Exchangeability, correlation and Bayes' Effect. International Statistical Review 77(2), pp. 241-250. This paper discusses the representation theorem as the basis for both Bayesian and frequentist IID models, and also applies it to a coin-tossing example. It should clear up the discussion of the assumptions of the frequentist paradigm. It actually uses a broader extension to the representation theorem going beyond the binomial model, but it should still be useful.
What is so cool about de Finetti's representation theorem?
I'll try to counter the assertion that the theorem isn't directly useful with a topical example: COVID modeling. I think we've seen that models that try to replicate reality in all its detail have proven hard to steer during this crisis, leading to poor predictions despite noble and urgent efforts to recalibrate them. On the other hand, overly stylized compartmental models have run headlong into paradoxes, such as Sweden's herd immunity.

The theorem of de Finetti inspires a different approach. We identify orbits in the space of models that leave unchanged the key decision-making quantities we care about. We use mixtures of IID models to span the orbits. The question then becomes: can we find the right orbit? That's a lot easier than finding the "right" model. The orbit can be located using convexity adjustments. For more details I'll refer you to the blog article or working paper.
Are there cases where PCA is more suitable than t-SNE?
$t$-SNE is a great piece of Machine Learning, but one can find many reasons to use PCA instead of it. Off the top of my head, I will mention five. As with most other computational methodologies in use, $t$-SNE is no silver bullet and there are quite a few reasons that make it a suboptimal choice in some cases. Let me mention some points in brief:

Stochasticity of the final solution. PCA is deterministic; $t$-SNE is not. One gets a nice visualisation, then her colleague gets another visualisation, and then they get artistic about which looks better and whether a difference of $0.03\%$ in the $KL(P||Q)$ divergence is meaningful... In PCA the correct answer to the question posed is guaranteed. $t$-SNE might have multiple minima that might lead to different solutions. This necessitates multiple runs as well as raising questions about the reproducibility of the results.

Interpretability of the mapping. This relates to the above point, but let's assume that a team has agreed on a particular random seed/run. Now the question becomes what this shows... $t$-SNE tries to map only local neighbours correctly, so our insights from that embedding should be treated very cautiously; global trends are not accurately represented (and that can potentially be a great thing for visualisation). On the other hand, PCA is just a diagonal rotation of our initial covariance matrix and the eigenvectors represent a new axial system in the space spanned by our original data. We can directly explain what a particular PCA does.

Application to new/unseen data. $t$-SNE is not learning a function from the original space to the new (lower-dimensional) one, and that's a problem. On that matter, $t$-SNE is a non-parametric learning algorithm, so approximating it with a parametric algorithm is an ill-posed problem. The embedding is learned by directly moving the data across the low-dimensional space. That means one does not get an eigenvector or a similar construct to use on new data. In contrast, with PCA the eigenvectors offer a new axis system that can be directly used to project new data (a small sketch after this answer illustrates the difference). [Apparently one could try training a deep network to learn the $t$-SNE mapping (you can hear Dr. van der Maaten at ~46' of this video suggesting something along these lines), but clearly no easy solution exists.]

Incomplete data. Natively, $t$-SNE does not deal with incomplete data. In fairness, PCA does not deal with it either, but numerous extensions of PCA for incomplete data (e.g. probabilistic PCA) are out there and are almost standard modelling routines. $t$-SNE currently cannot handle incomplete data (aside from, obviously, training a probabilistic PCA first and passing the PC scores to $t$-SNE as inputs).

The $k$ is not (too) small case. $t$-SNE solves a problem known as the crowding problem, effectively that somewhat similar points in higher dimensions collapse on top of each other in lower dimensions (more here). Now as you increase the dimensions used, the crowding problem gets less severe, i.e. the problem you are trying to solve through the use of $t$-SNE gets attenuated. You can work around this issue, but it is not trivial. Therefore, if you need a $k$-dimensional vector as the reduced set and $k$ is not quite small, the optimality of the produced solution is in question. PCA, on the other hand, always offers the $k$ best linear combinations in terms of variance explained. (Thanks to @amoeba for noticing I made a mess when first trying to outline this point.)

I do not mention issues about computational requirements (e.g. speed or memory size) nor issues about selecting relevant hyperparameters (e.g. perplexity). I think these are internal issues of the $t$-SNE methodology and are irrelevant when comparing it to another algorithm.

To summarise, $t$-SNE is great, but like all algorithms it has its limitations when it comes to its applicability. I use $t$-SNE on almost any new dataset I get my hands on as an exploratory data analysis tool. I think though that it has certain limitations that do not make it nearly as applicable as PCA. Let me stress that PCA is not perfect either; for example, the PCA-based visualisations are often inferior to those of $t$-SNE.
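To illustrate the new/unseen-data point in R (a sketch assuming the Rtsne package for t-SNE; the iris split is an arbitrary stand-in): a fitted PCA projects fresh rows with a single predict() call, while a fitted t-SNE embedding offers no such mapping and has to be recomputed with the new rows included.

    library(Rtsne)                                      # assumed t-SNE implementation

    X_train <- as.matrix(iris[1:100, 1:4])
    X_new   <- as.matrix(iris[101:150, 1:4])

    pca        <- prcomp(X_train, center = TRUE, scale. = TRUE)
    scores_new <- predict(pca, newdata = X_new)[, 1:2]  # unseen data projected directly

    emb <- Rtsne(X_train, dims = 2, perplexity = 30, check_duplicates = FALSE)
    # emb$Y holds coordinates for X_train only; there is no predict() step for X_new --
    # the embedding must be re-run on rbind(X_train, X_new).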
Are there cases where PCA is more suitable than t-SNE?
https://stats.stackexchange.com/a/249520/7828 is an excellent general answer. I'd like to focus a bit more on your problem. You apparently want to see how your samples relate with respect to your 7 input variables. That is something t-SNE doesn't do. The idea of SNE and t-SNE is to place neighbors close to each other, (almost) completely ignoring the global structure. This is excellent for visualization, because similar items can be plotted next to each other (and not on top of each other, c.f. crowding). This is not good for further analysis. Global structure is lost, some objects may have been blocked from moving to their neighbors, and separation between different groups is not preserved quantitatively. Which is largely why, e.g., clustering on the projection usually does not work very well. PCA is quite the opposite. It tries to preserve the global properties (eigenvectors with high variance) while it may lose low-variance deviations between neighbors.
Are there cases where PCA is more suitable than t-SNE?
There are many very good points which have been given already here. However, there are some that I would like to stress. One is that PCA will preserve things that tSNE will not. This may be good or bad, depending on what you are trying to achieve. For example, tSNE will not preserve cluster sizes, while PCA will (see the pictures from the tSNE vs PCA comparison, not reproduced here). As a heuristic, you can keep in mind that PCA will preserve large distances between points, while tSNE will preserve points which are close to each other in its representation. Therefore, the performance of each method will depend greatly on the dataset!
Are there cases where PCA is more suitable than t-SNE?
To give one applied angle, PCA and t-SNE are not mutually exclusive. In some fields of biology we are dealing with high-dimensional data where t-SNE simply does not scale. Therefore, we use PCA first to reduce the dimensionality of the data and then, taking the top principal components, we apply t-SNE (or a similar non-linear dimensionality reduction approach like UMAP) for visualisation.
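A minimal R sketch of that two-step pipeline (again assuming the Rtsne package; the dataset and the number of retained components are arbitrary stand-ins for, say, the top 50 PCs of an expression matrix):

    library(Rtsne)                         # assumed t-SNE implementation

    X       <- scale(as.matrix(USArrests)) # stand-in for a high-dimensional matrix
    pca     <- prcomp(X)
    top_pcs <- pca$x[, 1:4]                # keep the leading principal components

    emb <- Rtsne(top_pcs, dims = 2, perplexity = 10, check_duplicates = FALSE)
    plot(emb$Y, xlab = "t-SNE 1", ylab = "t-SNE 2")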
Does it make sense to add a quadratic term but not the linear term to a model?
1. Why include the linear term?

It is illuminating to notice that a quadratic relationship can be written in two ways: $$y = a_0 + a_1 x + a_2 x^2 = a_2(x - b)^2 + c$$ (where, equating coefficients, we find $-2a_2 b = a_1$ and $a_2 b^2 + c = a_0$). The value $x=b$ corresponds to a global extremum of the relationship (geometrically, it locates the vertex of a parabola). If you do not include the linear term $a_1 x$, the possibilities are reduced to $$y = a_0 + a_2 x^2 = a_2(x - 0)^2 + c$$ (where now, obviously, $c = a_0$ and it is assumed the model contains a constant term $a_0$). That is, you force $b=0$. In light of this, question #1 comes down to whether you are certain that the global extremum must occur at $x=0$. If you are, then you may safely omit the linear term $a_1 x$. Otherwise, you must include it.

2. How to understand changes in significance as terms are included or excluded?

This is discussed in great detail in a related thread at https://stats.stackexchange.com/a/28493. In the present case, the significance of $a_2$ indicates there is curvature in the relationship and the significance of $a_1$ indicates that $b$ is nonzero: it sounds like you need to include both terms (as well as the constant, of course).
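A small R sketch of the two parameterisations (simulated data, so the exact numbers are only illustrative): with a true vertex at $b = 1$, the model without the linear term is forced to put its extremum at $x = 0$ and fits noticeably worse.

    set.seed(42)
    x <- runif(200, -1, 3)
    y <- 2 * (x - 1)^2 + 1 + rnorm(200, sd = 0.5)   # true vertex at b = 1

    full   <- lm(y ~ x + I(x^2))   # vertex free to lie anywhere
    no_lin <- lm(y ~ I(x^2))       # vertex forced to x = 0

    b_hat <- -coef(full)["x"] / (2 * coef(full)["I(x^2)"])   # estimated vertex, -a1/(2 a2)
    c(vertex_hat = unname(b_hat), AIC_full = AIC(full), AIC_no_linear = AIC(no_lin))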
Does it make sense to add a quadratic term but not the linear term to a model?
@whuber has given a really excellent answer here. I just want to add a small complementary point. The question states that "a linear relation of predictor and data is not interpretable". This hints at a common misunderstanding, although I usually hear it on the other end ('what is the interpretation of the squared [cubic, etc.] term?').

When we have a model with multiple different covariates, each beta [term] can generally be afforded its own interpretation. For example, if: $$ \widehat{\text{GPA}}_{college}=\beta_0+\beta_1\text{GPA}_{highschool}+\beta_2\text{class rank}+\beta_3\text{SAT}, $$ (GPA means grade point average; rank is the ordering of a student's GPA relative to other students at the same high school; & SAT means 'scholastic aptitude test', a standard, nationwide test for students going to university) then we can assign separate interpretations to each beta/term. For instance, if a student's high school GPA were 1 point higher--all else being equal--we would expect their college GPA to be $\beta_1$ points higher.

It is important to note, however, that it is not always permissible to interpret a model in this manner. One obvious case is when there is an interaction amongst some of the variables, as it would not be possible for the individual term to differ and still have all else held constant--of necessity, the interaction term would change as well. Thus, when there is an interaction, we do not interpret main effects but only simple effects, as is well understood.

The situation with power terms is directly analogous, but unfortunately does not seem to be widely understood. Consider the following model: $$ \hat{y}=\beta_0+\beta_1x+\beta_2x^2 $$ (In this situation, $x$ is intended to represent a prototypical continuous covariate.) It is not possible for $x$ to change without $x^2$ changing also, and vice versa. Simply put, when there are polynomial terms in a model, the various terms based on the same underlying covariate are not afforded separate interpretations. The $x^2$ ($x$, $x^{17}$, etc.) term does not have any independent meaning. The fact that a $p$-power polynomial term is 'significant' in a model indicates that there are $p-1$ 'bends' in the function relating $x$ and $y$.

It is unfortunate, but unavoidable, that when curvature exists, the interpretation becomes more complicated, and possibly less intuitive. To assess the change in $\hat{y}$ as $x$ changes, we have to use calculus. The derivative of the above model is: $$ \frac{dy}{dx}=\beta_1+2\beta_2x $$ which is the instantaneous rate of change in the expected value of $y$ as $x$ changes, all else being equal. This is not so clean as the interpretation of the very top model; importantly, the instantaneous rate of change in $y$ depends on the level of $x$ from which the change is assessed. Furthermore, the rate of change in $y$ is an instantaneous rate; that is, it is itself continuously changing throughout the interval from $x_{old}$ to $x_{new}$. This is simply the nature of a curvilinear relationship.
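A brief R illustration of that derivative (simulated data with arbitrary coefficients): fit the quadratic and evaluate the instantaneous slope $\beta_1 + 2\beta_2 x$ at a few values of $x$ to see that the 'effect of $x$' depends on the level of $x$.

    set.seed(7)
    x <- runif(300, 0, 10)
    y <- 1 + 0.5 * x - 0.08 * x^2 + rnorm(300)

    fit <- lm(y ~ x + I(x^2))
    b   <- coef(fit)

    slope_at <- function(x0) unname(b["x"] + 2 * b["I(x^2)"] * x0)  # dy/dx = beta1 + 2*beta2*x
    sapply(c(1, 5, 9), slope_at)   # the instantaneous rate of change differs at x = 1, 5, 9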
Does it make sense to add a quadratic term but not the linear term to a model?
@whuber's answer above is right on target in pointing out that omitting the linear term in the "usual" quadratic model is equivalent to saying, "I am absolutely certain that the extremum is at $x=0$." However, you also need to check whether the software you are using has a "gotcha". Some software may automatically center the data when fitting a polynomial and testing its coefficients unless you turn off polynomial centering. That is, it may fit an equation that looks something like $Y = b_0 + b_2(x - \bar{x})^2$ where $\bar{x}$ is the mean of your $x$s. That would force the extremum to be at $x=\bar{x}$.

Your statement that both the linear and quadratic terms are significant when both are entered needs some clarification. For example, SAS may report a Type I and/or a Type III test for that example. Type I tests the linear term before putting in the quadratic. Type III tests the linear term with the quadratic in the model.
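As a related aside for R users (my own addition, not the SAS behaviour described above): R's poly() builds orthogonal polynomial terms by default rather than the raw powers written in the model, so the coefficient on the degree-1 term and its test do not refer to $x$ itself unless raw = TRUE is requested; it is worth checking which parameterisation your output reflects before interpreting the linear term.

    set.seed(3)
    x <- runif(100, 0, 4)
    y <- (x - 2)^2 + rnorm(100, sd = 0.3)

    coef(summary(lm(y ~ poly(x, 2))))              # orthogonal polynomial terms (default)
    coef(summary(lm(y ~ poly(x, 2, raw = TRUE))))  # raw x and x^2, as in the written model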
Does it make sense to add a quadratic term but not the linear term to a model?
Brambor, Clark and Golder (2006) (which comes with an internet appendix) have a very clear take on how to understand interaction models and how to avoid the common pitfalls, including why you should (almost) always include the lower-order terms ("constitutive terms") in interaction models. Analysts should include all constitutive terms when specifying multiplicative interaction models except in very rare circumstances. By constitutive terms, we mean each of the elements that constitute the interaction term. [..] The reader should note, though, that multiplicative interaction models can take a variety of forms and may involve quadratic terms such as $X^2$ or higher-order interaction terms such as $XZJ$. No matter what form the interaction term takes, all constitutive terms should be included. Thus, $X$ should be included when the interaction term is $X^2$ and $X$, $Z$, $J$, $XZ$, $XJ$, and $ZJ$ should be included when the interaction term is $XZJ$. Failure to do so may result in an underspecified model that would lead to biased estimates. This may lead to inferential errors. If this is the case and $Z$ is correlated with either $XZ$ (or $X$) as will occur in virtually any social science circumstance, then omitting the constitutive term $Z$ will result in biased (and inconsistent) estimates of $\beta_0$, $\beta_1$, and $\beta_3$. Although not always recognized as such, this is a straightforward case of omitted variable bias (Greene 2003, pp. 148–149).
What are good basic statistics to use for ordinal data?
A frequency table is a good place to start. You can do the count and the relative frequency for each level. Also, the total count and the number of missing values may be of use. You can also use a contingency table to compare two variables at once, and display it using a mosaic plot too.
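For example, in base R (with simulated 5-point responses, purely for illustration):

    set.seed(2)
    likert <- factor(sample(1:5, 200, replace = TRUE, prob = c(.10, .20, .30, .25, .15)),
                     levels = 1:5, ordered = TRUE)
    group  <- sample(c("A", "B"), 200, replace = TRUE)

    table(likert)                        # counts per level
    round(prop.table(table(likert)), 2)  # relative frequencies
    xt <- table(group, likert)           # contingency table of two variables
    xt
    mosaicplot(xt, main = "Group by response")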
What are good basic statistics to use for ordinal data?
I'm going to argue from an applied perspective that the mean is often the best choice for summarising the central tendency of a Likert item. Specifically, I'm thinking of contexts such as student satisfaction surveys, market research scales, employee opinion surveys, personality test items, and many social science survey items. In such contexts, consumers of research often want answers to questions like: Which statements have more or less agreement relative to others? Which groups agreed more or less with a given statement? Over time, has agreement gone up or down? For these purposes, the mean has several benefits:

1. Mean is easy to calculate: It is easy to see the relationship between the raw data and the mean. It is pragmatically easy to calculate. Thus, the mean can be easily embedded into reporting systems. It also facilitates comparability across contexts and settings.

2. Mean is relatively well understood and intuitive: The mean is often used to report the central tendency of Likert items. Thus, consumers of research are more likely to understand the mean (and thus trust it, and act on it). Some researchers prefer the, arguably, even more intuitive option of reporting the percentage of the sample answering 4 or 5; i.e., it has the relatively intuitive interpretation of "percentage agreement". In essence, this is just an alternative form of the mean, with 0, 0, 0, 1, 1 coding. Also, over time, consumers of research build up frames of reference. For example, when you're comparing your teaching performance from year to year, or across subjects, you build up a nuanced sense of what a mean of 3.7, 3.9, or 4.1 indicates.

3. The mean is a single number: A single number is particularly valuable when you want to make claims like "students were more satisfied with Subject X than Subject Y." I also find, empirically, that a single number is actually the main information of interest in a Likert item. The standard deviation tends to be related to the extent to which the mean is close to the central score (e.g., 3.0). Of course, empirically, this may not apply in your context. For example, I read somewhere that when YouTube ratings had the star system, there were a large number of either the lowest or the highest rating. For this reason, it is important to inspect category frequencies.

4. It doesn't make much difference: Although I have not formally tested it, I would hypothesise that for the purpose of comparing central tendency ratings across items, or groups of participants, or over time, any reasonable choice of scaling for generating the mean would yield similar conclusions.
What are good basic statistics to use for ordinal data?
For basic summaries, I agree that reporting frequency tables and some indication of central tendency is fine. For inference, a recent article published in PARE discussed t- vs. MWW-tests: Five-Point Likert Items: t test versus Mann-Whitney-Wilcoxon. For a more elaborate treatment, I would recommend reading Agresti's review of ordered categorical variables: Liu, Y and Agresti, A (2005). The analysis of ordered categorical data: An overview and a survey of recent developments. Sociedad de Estadística e Investigación Operativa Test, 14(1), 1-73. It largely extends beyond the usual statistics, covering threshold-based models (e.g. the proportional odds model), and is worth reading in place of Agresti's CDA book.

Below I show a picture of three different ways of treating a Likert item; from top to bottom, the "frequency" (nominal) view, the "numerical" view, and the "probabilistic" view (a Partial Credit Model). The data come from the Science data in the ltm package, where the item concerned technology ("New technology does not depend on basic scientific research", with responses "strongly disagree" to "strongly agree", on a four-point scale).
What are good basic statistics to use for ordinal data?
Conventional practice is to use the non-parametric statistics rank sum and mean rank to describe ordinal data. Here's how they work:

Rank sum: assign a rank to each member in each group (e.g., suppose you are looking at goals for each player on two opposing football teams; then rank each member on both teams from first to last); calculate the rank sum by adding the ranks per group. The magnitude of the rank sum tells you how close together the ranks are for each group.

Mean rank: this is a more sophisticated statistic than the rank sum because it compensates for unequal sizes of the groups you are comparing. Hence, in addition to the steps above, you divide each sum by the number of members in the group.

Once you have these two statistics, you can, for instance, z-test the rank sum to see if the difference between the two groups is statistically significant (I believe that's known as the Wilcoxon rank sum test, which is interchangeable with, i.e. functionally equivalent to, the Mann-Whitney U test).

R functions for these statistics (the ones I know about, anyway): wilcox.test in the standard R installation, and meanranks in the cranks package.
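A short R sketch with simulated ordinal scores for two groups (the scores and group sizes are arbitrary), computing the rank sums, the mean ranks, and the rank-sum test:

    set.seed(11)
    a <- sample(1:5, 30, replace = TRUE, prob = c(.05, .15, .30, .30, .20))  # group A
    b <- sample(1:5, 25, replace = TRUE, prob = c(.20, .30, .25, .15, .10))  # group B

    g          <- rep(c("A", "B"), c(length(a), length(b)))
    rank_all   <- rank(c(a, b))                 # ranks across both groups, ties averaged
    rank_sums  <- tapply(rank_all, g, sum)      # rank sum per group
    mean_ranks <- rank_sums / table(g)          # mean rank per group
    rank_sums
    mean_ranks

    wilcox.test(a, b)   # Wilcoxon rank-sum / Mann-Whitney U test of the group difference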
What are good basic statistics to use for ordinal data?
Based on the abstract, this article may be helpful for comparing several variables that are Likert-scale. It compares two types of non-parametric multiple comparison tests: one based on ranks and one based on a test by Chacko. It includes simulations.
What are good basic statistics to use for ordinal data?
I usually like to use a mosaic plot. You can create them by incorporating other covariates of interest (such as sex, stratification factors, etc.).
What are good basic statistics to use for ordinal data?
I agree with Jeromy Anglim's evaluation. Remember that Likert responses are estimates — you are not using a perfectly reliable ruler to measure a physical object with stable dimensions. The mean is a powerful measure when using reasonable sample sizes. In business and product R&D, the mean is by far the most common statistic used with Likert scales. When using Likert scales I have usually chosen a measure that ideally fits the research question. For instance, if you are talking about "preference" or "attitudes" you can use multiple Likert-based indicators, with each indicator providing slightly different insight. To evaluate the question "how will people in segment $i$ react to service offering $X$," I may look at (1) arithmetic mean, (2) exact median, (3) percentage most favorable response (top box), (4) % top two boxes, (5) ratio of top two boxes to bottom two boxes, (6) percentage within mid-range boxes... etc. Each measure tells a different piece of the story. In a very critical project, I use multiple Likert-based indicators. I will also use multiple indicators with small samples and when a specific cross tab has an "interesting" structure or looks information-rich. Ahhh... the art of statistics.
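A sketch of several of those indicators in R, on one simulated 5-point item (the "top two boxes" convention of counting 4s and 5s is the usual one, but it is of course a choice):

    set.seed(5)
    x <- sample(1:5, 400, replace = TRUE, prob = c(.05, .15, .30, .30, .20))

    c(mean          = mean(x),
      median        = median(x),
      top_box       = mean(x == 5),              # % most favourable response
      top_two_box   = mean(x >= 4),              # % in the top two boxes
      top_vs_bottom = sum(x >= 4) / sum(x <= 2), # ratio of top two to bottom two boxes
      mid_range     = mean(x == 3))              # % in the mid-range box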
What are good basic statistics to use for ordinal data?
"Box scores" are often used to summarize ordinal data, particularly when it comes with meaningful verbal anchors. In other words, you might report "top 2 box", the percentage that chose either "agree" or "strongly agree".
When are Log scales appropriate?
This is a very interesting question, and one that too few people think about. There are several different ways that a log scale can be appropriate. The first and most well known is that mentioned by Macro in his comment: log scales allow a large range to be displayed without small values being compressed down into the bottom of the graph. A different reason for preferring a log scaling is in circumstances where the data are more naturally expressed geometrically. An example is when the data represent the concentration of a biological mediator. Concentrations cannot be negative and the variability almost invariably scales with the mean (i.e. there is heteroscedastic variance). Using a logarithmic scale or, equivalently, using the log concentration as the primary measure both 'fixes' the uneven variability and gives a scale that is unbounded at both ends. The concentrations are probably log-normally distributed and so a log scaling gives us a very convenient result that is arguably 'natural'. In pharmacology we use a logarithmic scale for drug concentrations far more often than not, and in many cases linear scales are only the product of non-pharmacologists dabbling with drugs ;-) Another good reason for a log scale, probably the one that you are interested in for time-series data, comes from the ability of a log scale to make fractional changes equivalent. Imagine a display of the long-term performance of your retirement investments. It should be growing roughly exponentially because tomorrow's interest depends on today's investment (roughly speaking). Thus even if the performance in percentage terms has been fairly constant, a graph of the funds will appear to have grown most rapidly at the right-hand end. With a logarithmic scale a constant percentage change is seen as a constant vertical distance, so a constant growth rate is seen as a straight line. That is often a substantial advantage. Another slightly more esoteric reason for choosing a log scale comes in circumstances where values can be reasonably expressed either as $x$ or $1/x$. An example from my own research is vascular resistance, which can also be sensibly expressed as the reciprocal, vascular conductance. (It is also sensible in some circumstances to think of the diameter of the blood vessels, which scales as a power of resistance or conductance.) Neither of those measures has any more reality than the other and both can be found in research papers. If they are scaled logarithmically then they are simply the negative of each other and the choice of one or the other makes no substantive difference. (Vascular diameter will differ from resistance and conductance by a constant multiplier when they are all log scaled.)
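A minimal sketch of the "constant growth rate becomes a straight line" point, using an invented 7% yearly growth series and matplotlib (the numbers are purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(0, 31)
balance = 1000 * 1.07 ** years          # hypothetical 7% annual growth

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3.5))

ax_lin.plot(years, balance)
ax_lin.set_title("Linear scale: growth looks fastest at the end")

ax_log.plot(years, balance)
ax_log.set_yscale("log")                # constant % growth becomes a straight line
ax_log.set_title("Log scale: constant growth rate is a straight line")

for ax in (ax_lin, ax_log):
    ax.set_xlabel("year")
    ax.set_ylabel("balance")

plt.tight_layout()
plt.show()
```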
3,288
When are Log scales appropriate?
Some real life examples I had to hand as an addition to @Michael Lew's very good answer. First, the two time series plots below show monthly visitor arrivals to New Zealand, available from Statistics New Zealand. Both plots have their purpose, but I find the one with the vertical axis on a logarithmic scale spectacularly useful for many more purposes than the first one. For example, you can see that the seasonality in arrivals stays roughly proportional to the scale of the arrivals; and you can see the significant changes in growth rate (eg during the second world war) which are just invisible on the original scale. Second, the plots below show total trip-related spend by tourists to New Zealand, compared to the spend while they are actually in New Zealand. The source is the International Visitors Survey by Ministry of Economic Development. The difference is pre-trip expenditure, eg hotels or packages paid in advance. The first plot, on the original scale, can be used for few purposes other than a very crude (but important) impression of the data being grouped in the bottom left corner. The second plot sacrifices some immediate interpretability, particularly for non-statisticians (because of this, I would normally now actually use a logarithmic scale on the axes, rather than transform the data and have the scale showing the logarithmic value), but gives a lot more visual differentiation. For example, you can clearly spot the few outliers (which turned out to be data editing errors) where total spend was less than spend in New Zealand. Perhaps more importantly, you could use this graph with different colors or faceting to show how different market countries or purpose of visit (eg holiday v. visiting friends and family) occupy different parts of the expenditure "space" - something that would just be invisible on the original axes. Turning this plot into something useful would involve somehow dealing with the high density data (eg by adding some transparency to the points, or replacing points with hexagonal bins colored according to density) but any useful visual solution will almost certainly involve logarithmic axes. edit / addition Another plot to illustrate what I meant by the hexagonal bins, using color to represent density when there is a large dataset (in this case, about 12000 respondents to a survey about Rugby World Cup experiences in New Zealand). Note again this is another example where I've used a logarithmic scale for expenditure.
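The original figures are not reproduced here, but the hexagonal-binning idea with logarithmic axes can be sketched with synthetic log-normal "expenditure" data; every number below is made up and merely stands in for the survey variables:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n = 12000
spend_in_country = rng.lognormal(mean=7, sigma=1, size=n)       # synthetic stand-in
total_spend = spend_in_country * rng.lognormal(0.3, 0.4, size=n)

fig, ax = plt.subplots(figsize=(5, 5))
hb = ax.hexbin(total_spend, spend_in_country,
               xscale="log", yscale="log",   # logarithmic axes, as in the answer
               gridsize=40, bins="log", mincnt=1)
fig.colorbar(hb, label="log10(count)")
ax.set_xlabel("total trip-related spend")
ax.set_ylabel("spend while in country")
plt.show()
```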
3,289
When are Log scales appropriate?
One other nifty thing about log scales is that they make ratios appear symmetric: a ratio of 2 and its reciprocal 1/2 sit the same distance from 1, just on opposite sides, so increases and decreases by the same factor get equal visual weight.
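A tiny numerical sketch of that symmetry (the ratios are arbitrary examples):

```python
import numpy as np

# Fold-changes and their reciprocals, e.g. treatment/control ratios
ratios = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
print(np.log2(ratios))   # [-2. -1.  0.  1.  2.]  -- r and 1/r are mirror images around 1
```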
3,290
How can I help ensure testing data does not leak into training data?
You are right, this is a significant problem in machine learning/statistical modelling. Essentially the only way to really solve this problem is to retain an independent test set and keep it held out until the study is complete and use it for final validation. However, inevitably people will look at the results on the test set and then change their model accordingly; however this won't necessarily result in an improvement in generalisation performance as the difference in performance of different models may be largely due to the particular sample of test data that we have. In this case, in making a choice we are effectively over-fitting the test error. The way to limit this is to make the variance of the test error as small as possible (i.e. the variability in test error we would see if we used different samples of data as the test set, drawn from the same underlying distribution). This is most easily achieved using a large test set if that is possible, or e.g. bootstrapping or cross-validation if there isn't much data available. I have found that this sort of over-fitting in model selection is a lot more troublesome than is generally appreciated, especially with regard to performance estimation; see G. C. Cawley and N. L. C. Talbot, "Over-fitting in model selection and subsequent selection bias in performance evaluation", Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. This sort of problem especially affects the use of benchmark datasets, which have been used in many studies, and each new study is implicitly affected by the results of earlier studies, so the observed performance is likely to be an over-optimistic estimate of the true performance of the method. The way I try to get around this is to look at many datasets (so the method isn't tuned to one specific dataset) and also use multiple random test/training splits for performance estimation (to reduce the variance of the estimate). However the results still need the caveat that these benchmarks have been over-fit. Another example where this does occur is in machine learning competitions with a leader-board based on a validation set. Inevitably some competitors keep tinkering with their model to get further up the leader board, but then end up towards the bottom of the final rankings. The reason for this is that their multiple choices have over-fitted the validation set (effectively learning the random variations in the small validation set). If you can't keep a statistically pure test set, then I'm afraid the two best options are (i) collect some new data to make a new statistically pure test set or (ii) make the caveat that the new model was based on a choice made after observing the test set error, so the performance estimate is likely to have an optimistic bias.
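As a rough sketch of "reduce the variance of the estimate by using many splits", repeated cross-validation in scikit-learn looks like the following; the toy dataset and model are invented for the example:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # toy data

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# Many random splits give a less variable (and so less "over-fittable") estimate
# than a single small hold-out split would.
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```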
3,291
How can I help ensure testing data does not leak into training data?
One way to ensure this is to make sure you have coded up all of the things you do to fit the model, even "tinkering". This way, when you run the process repeatedly, say via cross-validation, you are keeping things consistent between runs. This ensures that all of the potential sources of variation are captured by the cross-validation process. The other vitally important thing is to ensure that you have a representative sample, in both data sets. If your data set is not representative of the kind of data you expect to be using to predict, then there is not much that you can do. All modelling rests on an assumption that "induction" works - the things we haven't observed behave like the things we have observed. As a general rule, stay away from complex model fitting procedures unless (i) you know what you are doing, and (ii) you have tried the simpler methods, and found that they don't work, and how the complex method fixes the problems with the simple method. "Simple" and "complex" are meant in the sense of "simple" or "complex" to the person doing the fitting. The reason this is so important is that it allows you to apply what I like to call a "sniff test" to the results. Does the result look right? You can't "smell" the results from a procedure that you don't understand. NOTE: the next, rather long part of my answer is based on my experience, which is in the $N>>p$ area, with $p$ possibly large. I am almost certain that what follows below would not apply to the $N\approx p$ or $N<p$ cases When you have a large sample, the difference between using and not using a given observation is very small, provided your modelling is not too "local". This is because the influence of a given data point is generally the order of $\frac{1}{N}$. So in large data sets, the residuals you get from "holding out" the test data set are basically the same as the residuals you get from using it in the training data set. You can show this using ordinary least squares. The residual you get from excluding the $i$th observation (i.e. what the test set error would be if we put the observation in the test set) is $e_i^{test}=(1-h_{ii})^{-1}e_i^\mathrm{train}$, where $e_i^\mathrm{train}$ is the training residual, and $h_{ii}$ is the leverage of the $i$th data point. Now we have that $\sum_ih_{ii}=p$, where $p$ is the number of variables in the regression. Now if $N>>p$, then it is extremely difficult for any $h_{ii}$ to be large enough to make an appreciable difference between the test set and training set errors. We can take a simplified example, suppose $p=2$ (intercept and $1$ variable), $N\times p$ design matrix is $X$ (both training and testing sets), and the leverage is $$h_{ii}=x_i^T(X^TX)^{-1}x_i=\frac{1}{Ns_x^2} \begin{pmatrix}1 & x_i \end{pmatrix} \begin{pmatrix}\overline{x^2} & -\overline{x}\\ -\overline{x} & 1\end{pmatrix} \begin{pmatrix}1 \\ x_i\end{pmatrix} =\frac{1+\tilde{x}_i^2}{N}$$ Where $\overline{x}=N^{-1}\sum_ix_i$, $\overline{x^2}=N^{-1}\sum_ix_i^2$, and $s_x^2=\overline{x^2}-\overline{x}^2$. Finally, $\tilde{x}_i=\frac{x_i-\overline{x}}{s_x}$ is the standardised predictor variable, and measures how many standard deviations $x_i$ is from the mean. So, we know from the beginning that the test set error will be much larger than the training set error for observations "at the edge" of the training set. But this is basically that representative issue again - observations "at the edge" are less representative than observations "in the middle". Additionally, this is to order $\frac{1}{N}$. 
So if you have $100$ observations, even if $\tilde{x}_i=5$ (an outlier in x-space by most definitions), this means $h_{ii}=\frac{26}{100}$, and the test error is understated by a factor of just $1-\frac{26}{100}=\frac{74}{100}$. If you have a large data set, say $10000$, it is even smaller,$1-\frac{26}{10000}$, which is less than $1\text{%}$. In fact, for $10000$ observations, you would require an observation of $\tilde{x}=50$ in order to make a $25\text{%}$ under-estimate of the test set error, using the training set error. So for big data sets, using a test set is not only inefficient, it is also unnecessary, so long as $N>>p$. This applies for OLS and also approximately applies for GLMs (details are different for GLM, but the general conclusion is the same). In more than $2$ dimensions, the "outliers" are defined by the observations with large "principal component" scores. This can be shown by writing $h_{ii}=x_i^TEE^T(X^TX)^{-1}EE^Tx_i$ Where $E$ is the (orthogonal) eigenvector matrix for $X^TX$, with eigenvalue matrix $\Lambda$. We get $h_{ii}=z_i^T\Lambda^{-1}z_i=\sum_{j=1}^p\frac{z_{ji}^2}{\Lambda_{jj}}$ where $z_i=E^Tx_i$ is the principal component scores for $x_i$. If your test set has $k$ observations, you get a matrix version ${\bf{e}}_{\{k\}}^\mathrm{test}=(I_k-H_{\{k\}})^{-1}{\bf{e}}_{\{k\}}^\mathrm{train}$, where $H_{\{k\}}=X_{\{k\}}(X^TX)^{-1}X_{\{k\}}^T$ and $X_{\{k\}}$ is the rows of the design matrix in the test set. So, for OLS regression, you already know what the "test set" errors would have been for all possible splits of the data into training and testing sets. In this case ($N>>p$), there is no need to split the data at all. You can report "best case" and "worst case" test set errors of almost any size without actually having to split the data. This can save a lot of PC time and resources. Basically, this all reduces to using a penalty term, to account for the difference between training and testing errors, such as BIC or AIC. This effectively achieves the same result as what using a test set does, however you aren't forced to throw away potentially useful information. With the BIC, you are approximating the evidence for the model, which looks mathematically like: $$p(D|M_iI)=p(y_1y_2\dots y_N|M_iI)$$ Note that in this procedure, we cannot estimate any internal parameters - each model $M_i$ must be fully specified or have its internal parameters integrated out. However, we can make this look like cross validation (using a specific loss function) by repeatedly using the product rule, and then taking the log of the result: $$p(D|M_iI)=p(y_1|M_iI)p(y_2\dots y_N|y_1M_iI)$$ $$=p(y_1|M_iI)p(y_2|y_1M_iI)p(y_3\dots y_N|y_1y_2M_iI)$$ $$=\dots=\prod_{i=1}^{N}p(y_i|y_1\dots y_{i-1}M_iI)$$ $$\implies\log\left[p(D|M_iI)\right]=\sum_{i=1}^{N}\log\left[p(y_i|y_1\dots y_{i-1}M_iI)\right]$$ This suggests a form of cross validation, but where the training set is constantly being updated, one observation at a time from the test set - similar to the Kalman Filter. We predict the next observation from the test set using the current training set, measure the deviation from the observed value using the conditional log-likelihood, and then update the training set to include the new observation. But note that this procedure fully digests all of the available data, while at the same time making sure that every observation is tested as an "out-of-sample" case. 
It is also invariant, in that it does not matter what you call "observation 1" or "observation 10"; the result is the same (calculations may be easier for some permutations than others). The loss function is also "adaptive" in that if we define $L_i=\log\left[p(y_i|y_1\dots y_{i-1}M_iI)\right]$, then the sharpness of $L_i$ depends on $i$, because the loss function is constantly being updated with new data. I would suggest that assessing predictive models this way would work quite well.
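The leave-one-out identity $e_i^{\mathrm{test}}=(1-h_{ii})^{-1}e_i^{\mathrm{train}}$ quoted above is easy to verify numerically. The following sketch, using simulated data, refits the regression without each observation in turn and compares the direct result with the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
x = rng.normal(size=N)
X = np.column_stack([np.ones(N), x])          # intercept + one predictor (p = 2)
y = 1.5 + 2.0 * x + rng.normal(size=N)

# Full-sample OLS fit, training residuals and leverages
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e_train = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)                                 # leverages; sum(h) == p == 2

# Claimed identity: leave-one-out ("test") residual = training residual / (1 - h_ii)
e_loo_formula = e_train / (1 - h)

# Brute-force check: refit without observation i, then predict it
e_loo_direct = np.empty(N)
for i in range(N):
    mask = np.arange(N) != i
    b_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    e_loo_direct[i] = y[i] - X[i] @ b_i

print(np.allclose(e_loo_formula, e_loo_direct))   # True
print(h.sum())                                    # approximately 2, i.e. p
```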
3,292
How can I help ensure testing data does not leak into training data?
I suppose the only way to guarantee this is that someone else has the test data. In a client-consultant relationship this can be managed fairly easily: the client gives the consultant the training set upon which to build the models, and within this training set the consultant can split the data in whatever way necessary to ensure that overfitting doesn't occur; subsequently the models are given back to the client to use on their test data. For an individual researcher, it stands to reason that best practice would therefore be to mimic this setup. This would mean hiving off some of the data to test on, after all model selection has been performed. Unfortunately, as you say, this is not practised by many people, and it even happens to people who should know better! However ultimately it depends on what the model is being used for. If you're only ever interested in prediction on that single dataset, then maybe you can overfit all you like? However if you are trying to promote your model as one that generalises well, or use the model in some real world application, then of course this is of great significance. There is a side issue which I thought I should mention, which is that even if you follow all the procedures correctly, you can still end up with models that are overfitted, due to the data not being truly i.i.d. For example, if there are temporal correlations in the data and you take all of your training data from times 1-3 and test on time 4, you may find that the prediction error is larger than expected. Alternatively there could be experiment-specific artefacts, such as the measurement device being used, or the pool of subjects in human experiments, that cause the generalisation of the models to be worse than expected.
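For the temporal-correlation point at the end, one way to sketch a time-ordered evaluation (training data always precede the test block, mimicking "train on times 1-3, test on time 4") is scikit-learn's TimeSeriesSplit; the twelve "time points" below are placeholders:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# 12 time-ordered observations (e.g. monthly measurements), as a column vector
X = np.arange(12).reshape(-1, 1)

# Each split trains only on earlier observations and tests on the next block
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    print("train:", train_idx, "test:", test_idx)
```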
3,293
How can I help ensure testing data does not leak into training data?
This is a very good question and a very subtle problem. Of course there are the ill-intentioned mistakes, which derive from someone trying to deceive you. But there is a deeper question of how to avoid accidental leaking and honest mistakes. Let me list some operational good practices; they all stem from honest mistakes I've made at some point. (1) Separate your data into three groups: train, validate and test. (2) Understand the problem setup so you can argue what is reasonable and what isn't. (3) Understand the problem: many times a subtle misunderstanding of what the data represent can lead to leaks. For example, while no one would train and test on the same frame of one video, it is more subtle when two frames of the same video fall in different folds; two frames of the same video probably share the same individuals, the same lighting, and so on. (4) Be extremely careful with previously written cross-validation procedures, even more so with ones not written by you (LIBSVM is a big offender here). (5) Repeat every experiment at least twice before reporting anything, even if only reporting to your office mate. (6) Version control is your friend: before running an experiment, commit and write down which version of the code you're running. (7) Be very careful when normalizing your data; doing it over the full dataset amounts to assuming you will have all the data you want to test on available at the same time, which is often not realistic.
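Points (3) and (7) can be guarded against with grouped cross-validation and by fitting any scaling inside the resampling loop. A minimal scikit-learn sketch, with all data and video IDs invented:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # e.g. per-frame features (synthetic)
y = rng.integers(0, 2, size=200)
video_id = rng.integers(0, 20, size=200)   # 20 hypothetical source videos

# GroupKFold keeps all frames of a video in the same fold;
# the scaler inside the pipeline is refit on the training folds only.
model = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(model, X, y,
                         cv=GroupKFold(n_splits=5),
                         groups=video_id)
print(scores)
```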
3,294
How can I help ensure testing data does not leak into training data?
Many important points have been covered in the excellent answers that are already given. Lately, I've developed this personal check list for statistical independence of test data. (1) Split data at the highest level of the data hierarchy (e.g. patient-wise splitting). (2) Split also independently for known or suspected confounders, such as day-to-day variation in instruments etc. (DoE should take care of a random sequence of measurements**). (3) All calculation steps beginning with the first (usually pre-processing) step that involves more than one patient* need to be redone for each surrogate model in resampling validation. For hold-out / independent test set validation, test patients need to be separated before this step. This holds regardless of whether the calculation is called preprocessing or is considered part of the actual model. Typical culprits: mean centering, variance scaling (usually only mild influence), dimensionality reduction such as PCA or PLS (can cause heavy bias, e.g. underestimating the number of errors by an order of magnitude). (4) Any kind of data-driven optimization or model selection needs another (outer) testing to independently validate the final model. (5) There are some types of generalization performance that can only be measured by particular independent test sets, e.g. how predictive performance deteriorates for cases measured in the future (I'm not dealing with time series forecasting, just with instrument drift). But this needs a properly designed validation study. (6) There's another peculiar type of data leak in my field: we do spatially resolved spectroscopy of biological tissues. The reference labelling of the test spectra needs to be blinded against the spectroscopic information, even if it is tempting to use a cluster analysis and then just find out which class each cluster belongs to (that would be semi-supervised test data, which isn't independent at all). (7) Last but certainly not least: when coding resampling validation, I actually check that the calculated indices into the data set do not lead to grabbing test rows from training patients, days, etc. Note that the mistakes of splitting in a way that does not ensure independence, and of splitting only after calculations that involve more than one case, can also happen with testing that claims to use an independent test set, and the latter can happen even if the data analyst is blinded to the reference of the test cases. These mistakes cannot happen if the test data is withheld until the final model is presented. * I'm using patients as the topmost hierarchy in the data just for ease of description. ** I'm an analytical chemist: instrument drift is a known problem. In fact, part of the validation of chemical analysis methods is determining how often calibrations need to be checked against validation samples, and how often the calibration needs to be redone. FWIW: In practice, I deal with applications where $p$ is of the order of magnitude of $10^2 - 10^3$, $n_{\text{rows}}$ is usually larger than $p$, but $n_{\text{biol. replicates}}$ or $n_{\text{patients}}$ is $\ll p$ (order of magnitude: $10^0 - 10^1$, rarely $10^2$). Depending on the spectroscopic measurement method, all rows of one, say, patient may be very similar or rather dissimilar, because different types of spectra have signal-to-noise ratios (instrument error) that also vary by an order of magnitude or so. Personally, I've yet to meet the application where for classifier development I get enough independent cases to allow setting aside a proper independent test set. 
Thus, I've come to the conclusion that properly done resampling validation is the better alternative while the method is still under development. Proper validation studies will need to be done eventually, but it is a huge waste of resources (or results will carry no useful information because of variance) doing that while the method development is in a stage where things still change.
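Items (1) and (3) of the checklist (patient-wise splitting, and redoing centering/scaling and PCA for each surrogate model) correspond closely to putting the preprocessing inside a pipeline that is refit on every resampling training partition. A sketch with invented spectra, dimensions, and patient IDs:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_spectra, n_wavelengths, n_patients = 300, 500, 30   # p >> n_patients, as in the answer
X = rng.normal(size=(n_spectra, n_wavelengths))
y = rng.integers(0, 2, size=n_spectra)
patient = rng.integers(0, n_patients, size=n_spectra)

# Centering, scaling and PCA are refit on each training partition,
# and the split is done patient-wise via the groups argument.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      LogisticRegression(max_iter=1000))
cv = GroupShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, groups=patient)
print(f"{scores.mean():.2f} +/- {scores.std():.2f}")
```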
3,295
How can I help ensure testing data does not leak into training data?
If I remember correctly, some of the forecasting contests (such as Netflix or the ones on Kaggle) use this scheme: There is a training set, with the "answers". There is test set #1, for which the researcher provides answers. The researcher finds out their score. There is test set #2, for which the researcher provides answers, BUT the researcher does not find out their score. The researcher doesn't know which prediction cases are in #1 and #2. At some point, set #2 has to become visible, but you've at least limited the contamination.
3,296
How can I help ensure testing data does not leak into training data?
In some cases, such as biological sequence-based predictors, it is not enough to ensure that cases do not appear in more than one set. You still need to worry about dependency between the sets. For example, for sequence-based predictors, one needs to remove redundancy by ensuring that sequences in different sets (including the different cross-validation sets) do not share a high level of sequence similarity.
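A very rough sketch of that redundancy-reduction idea: cluster sequences by a crude pairwise identity and then split at the cluster level rather than the sequence level. The identity measure, the threshold, and the toy sequences below are simplistic placeholders; in practice a dedicated tool such as CD-HIT or BLAST-based clustering would be used:

```python
import numpy as np

def identity(a: str, b: str) -> float:
    """Crude identity: fraction of matching positions over the shorter length."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def greedy_clusters(seqs, threshold=0.4):
    """Assign each sequence to the first cluster whose representative it
    resembles above `threshold`, otherwise start a new cluster."""
    reps, labels = [], []
    for s in seqs:
        for k, r in enumerate(reps):
            if identity(s, r) >= threshold:
                labels.append(k)
                break
        else:
            reps.append(s)
            labels.append(len(reps) - 1)
    return np.array(labels)

seqs = ["MKTAYIAKQR", "MKTAYIAKQK", "GGGSSPLATT", "GGGSSPLAAT", "QQWERTYHHH"]
clusters = greedy_clusters(seqs)
print(clusters)   # [0 0 1 1 2]: split train/test by cluster id, not by sequence
```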
3,297
How can I help ensure testing data does not leak into training data?
I'd say "k-fold cross validation" is the right answer from the theoretical point of view, but your question seems more about organizational and teaching matters, so I'll answer differently. When people are "still learning", it's often treated as if they are learning how to "quickly and dirtily" apply the algorithms, and all the "extra" knowledge (problem motivation, dataset preparation, validation, error analysis, practical gotchas and so on) will be learned "later" when they're "more prepared". This is utterly wrong. If we want a student or whoever to understand the difference between a test set and a training set, the worst thing we can do is give the two sets to two different people, as if we think that "at this stage" the "extra knowledge" is harmful. This is like the waterfall approach in software development: a few months of pure design, then a few months of pure coding, then a few months of pure testing, and a pitiful throwaway result at the end. Learning should not follow a waterfall. All parts of learning - problem motivation, algorithm, practical gotchas, result evaluation - must come together in small steps (like the agile approach in software development). Perhaps everyone here has gone through Andrew Ng's ml-class.org - I'd hold his course up as an example of a robust "agile", if you will, style of learning - one which would never give rise to the question of "how to ensure that test data doesn't leak into training data". Note that I may have completely misunderstood your question, so apologies! :)
3,298
How can I help ensure testing data does not leak into training data?
How can I help ensure testing data does not leak into training data? If you are looking for a practical way to check that the testing data is not the same as the training data, I would recommend using Excel's VLOOKUP function or a SQL query. If the dataset is small enough, you could use an Excel VLOOKUP to check whether the same records exist in the training data. https://exceljet.net/excel-functions/excel-vlookup-function Simple SQL queries can also be run to show where the data is the same in two tables. https://stackoverflow.com/questions/15938180/sql-check-if-entry-in-table-a-exists-in-table-b
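If you prefer to run the same kind of duplicate check in code rather than a spreadsheet, here is a rough pandas sketch along the lines of the VLOOKUP/SQL approach above; the file names and the 'id' key column are placeholders for whatever your data actually uses.

```python
import pandas as pd

# Placeholder file names and key column - substitute your own.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Test rows whose key also appears in the training set (VLOOKUP-style check).
leaked = test[test["id"].isin(train["id"])]
print(f"{len(leaked)} test rows share an id with the training set")

# Or compare whole rows: an inner merge on all common columns returns
# rows that are identical in both files.
overlap = pd.merge(train, test, how="inner")
print(f"{len(overlap)} fully identical rows appear in both sets")
```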
How can I help ensure testing data does not leak into training data?
How can I help ensure testing data does not leak into training data? If you are looking for a practical way to check that the testing data is not the same as the training data I would recommend use of
How can I help ensure testing data does not leak into training data? How can I help ensure testing data does not leak into training data? If you are looking for a practical way to check that the testing data is not the same as the training data, I would recommend using Excel's VLOOKUP function or a SQL query. If the dataset is small enough, you could use an Excel VLOOKUP to check whether the same records exist in the training data. https://exceljet.net/excel-functions/excel-vlookup-function Simple SQL queries can also be run to show where the data is the same in two tables. https://stackoverflow.com/questions/15938180/sql-check-if-entry-in-table-a-exists-in-table-b
How can I help ensure testing data does not leak into training data? How can I help ensure testing data does not leak into training data? If you are looking for a practical way to check that the testing data is not the same as the training data I would recommend use of
3,299
Do we need a global test before post hoc tests?
Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so. "An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177) Will the results of post tests be valid if the overall P value for the ANOVA is greater than 0.05? Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means. The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check if the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, individual comparisons should not be made. But this protected LSD test is outmoded, and no longer recommended. Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant? Yes, it is possible. The exception is Scheffe's test. It is intertwined with the overall F test. If the overall ANOVA has a P value greater than 0.05, then Scheffe's test won't find any significant post tests. In this case, performing post tests following an overall nonsignificant ANOVA is a waste of time but won't lead to invalid conclusions. But other multiple comparison tests can find significant differences (sometimes) even when the overall ANOVA showed no significant differences among groups. How can I understand the apparent contradiction between an ANOVA saying, in effect, that all group means are identical and a post test finding differences? The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, so any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means. The post tests are more focused, so they have power to find differences between groups even when the overall ANOVA reports that the differences among the means are not statistically significant. Are the results of the overall ANOVA useful at all? ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- does the data provide convincing evidence that the means are not all identical -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results. Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
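As a small illustration of the point that post tests can be computed and inspected whatever the overall F test says, here is a hedged sketch assuming NumPy, SciPy, and statsmodels are available; the simulated group means are arbitrary, and Tukey's HSD stands in for whichever multiple comparison procedure you actually use.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 30)   # arbitrary simulated groups
b = rng.normal(0.3, 1.0, 30)
c = rng.normal(0.6, 1.0, 30)

# Overall one-way ANOVA F test.
f_stat, p_global = stats.f_oneway(a, b, c)
print(f"overall ANOVA: F = {f_stat:.2f}, p = {p_global:.3f}")

# Pairwise Tukey HSD comparisons - computed from the same data whether or
# not the global test came out significant.
values = np.concatenate([a, b, c])
groups = np.repeat(["a", "b", "c"], 30)
print(pairwise_tukeyhsd(values, groups))
```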
Do we need a global test before post hoc tests?
Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so. "An unfortunate common practice is to pursue multiple compa
Do we need a global test before post hoc tests? Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so. "An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177) Will the results of post tests be valid if the overall P value for the ANOVA is greater than 0.05? Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means. The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check if the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, individual comparisons should not be made. But this protected LSD test is outmoded, and no longer recommended. Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant? Yes, it is possible. The exception is Scheffe's test. It is intertwined with the overall F test. If the overall ANOVA has a P value greater than 0.05, then Scheffe's test won't find any significant post tests. In this case, performing post tests following an overall nonsignificant ANOVA is a waste of time but won't lead to invalid conclusions. But other multiple comparison tests can find significant differences (sometimes) even when the overall ANOVA showed no significant differences among groups. How can I understand the apparent contradiction between an ANOVA saying, in effect, that all group means are identical and a post test finding differences? The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, so any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means. The post tests are more focused, so they have power to find differences between groups even when the overall ANOVA reports that the differences among the means are not statistically significant. Are the results of the overall ANOVA useful at all? ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- does the data provide convincing evidence that the means are not all identical -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results. Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
Do we need a global test before post hoc tests? Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so. "An unfortunate common practice is to pursue multiple compa
3,300
Do we need a global test before post hoc tests?
(1) Post hoc tests might or might not achieve the nominal global Type I error rate, depending on (a) whether the analyst is adjusting for the number of tests and (b) to what extent the post-hoc tests are independent of one another. Applying a global test first is pretty solid protection against the risk of (even inadvertently) uncovering spurious "significant" results from post-hoc data snooping. (2) There is a problem of power. It is well known that a global ANOVA F test can detect a difference of means even in cases where no individual t-test of any of the pairs of means will yield a significant result. In other words, in some cases the data can reveal that the true means likely differ but it cannot identify with sufficient confidence which pairs of means differ.
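To make point (2) concrete, the snippet below (a hedged sketch with made-up numbers; SciPy assumed) simply computes both quantities side by side - the global F test's p-value and the Bonferroni-adjusted pairwise t-test p-values - so you can check on your own data whether the first falls below 0.05 while none of the second do.

```python
from itertools import combinations
from scipy import stats

# Made-up group values purely so the snippet runs; substitute real data.
groups = {
    "g1": [4.1, 5.0, 4.6, 5.2, 4.8],
    "g2": [5.0, 5.6, 5.3, 5.9, 5.4],
    "g3": [4.5, 5.8, 5.1, 6.2, 4.9],
}

# Global one-way ANOVA F test across all groups.
f_stat, p_global = stats.f_oneway(*groups.values())
print(f"global F test: p = {p_global:.4f}")

# Pairwise t-tests with a Bonferroni adjustment for the number of pairs.
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t_stat, p_raw = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(1.0, p_raw * len(pairs))
    print(f"{g1} vs {g2}: adjusted p = {p_adj:.4f}")
```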
Do we need a global test before post hoc tests?
(1) Post hoc tests might or might not achieve the nominal global Type I error rate, depending on (a) whether the analyst is adjusting for the number of tests and (b) to what extent the post-hoc tests
Do we need a global test before post hoc tests? (1) Post hoc tests might or might not achieve the nominal global Type I error rate, depending on (a) whether the analyst is adjusting for the number of tests and (b) to what extent the post-hoc tests are independent of one another. Applying a global test first is pretty solid protection against the risk of (even inadvertently) uncovering spurious "significant" results from post-hoc data snooping. (2) There is a problem of power. It is well known that a global ANOVA F test can detect a difference of means even in cases where no individual t-test of any of the pairs of means will yield a significant result. In other words, in some cases the data can reveal that the true means likely differ but it cannot identify with sufficient confidence which pairs of means differ.
Do we need a global test before post hoc tests? (1) Post hoc tests might or might not achieve the nominal global Type I error rate, depending on (a) whether the analyst is adjusting for the number of tests and (b) to what extent the post-hoc tests