4,701 | Help me understand the quantile (inverse CDF) function | All this may sound complicated at first, but it is essentially about something very simple.
By cumulative distribution function we denote the function that returns probabilities of $X$ being smaller than or equal to some value $x$,
$$ \Pr(X \le x) = F(x).$$
This function takes as input $x$ and returns values from the $[0, 1]$ interval (probabilities)—let's denote them as $p$. The inverse of the cumulative distribution function (or quantile function) tells you what $x$ would make $F(x)$ return some value $p$,
$$ F^{-1}(p) = x.$$
This is illustrated in the diagram below, which uses the normal cumulative distribution function (and its inverse) as an example.
Example
As a simple example, you can take the standard Gumbel distribution. Its cumulative distribution function is
$$ F(x) = e^{-e^{-x}} $$
and it can be easily inverted: recall that the natural logarithm is the inverse of the exponential function, so it is instantly obvious that the quantile function for the Gumbel distribution is
$$ F^{-1}(p) = -\ln(-\ln(p)), \quad p \in (0, 1). $$
As you can see, the quantile function, in line with its alternative name, "inverts" the behaviour of the cumulative distribution function.
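To make this concrete, here is a minimal Python sketch (my illustration, not part of the original answer) checking that the two functions invert each other:
import numpy as np

def gumbel_cdf(x):
    # F(x) = exp(-exp(-x)) for the standard Gumbel distribution
    return np.exp(-np.exp(-x))

def gumbel_quantile(p):
    # F^{-1}(p) = -log(-log(p)), valid for p in (0, 1)
    return -np.log(-np.log(p))

p = np.array([0.1, 0.5, 0.9])
x = gumbel_quantile(p)
print(np.allclose(gumbel_cdf(x), p))  # True: F(F^{-1}(p)) == p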
Generalized inverse distribution function
Not every function has an inverse. That is why the quotation you refer to says "monotonically increasing function". Recall that, by definition, a function assigns exactly one output to each input; for an inverse to exist, it must also map distinct inputs to distinct outputs, i.e. be strictly monotonic. Cumulative distribution functions of continuous random variables typically satisfy this, since they are strictly increasing over the support. Cumulative distribution functions of discrete random variables are step functions, neither continuous nor strictly increasing, so instead we use generalized inverse distribution functions, which only require $F$ to be non-decreasing. More formally, the generalized inverse distribution function is defined as
$$ F^{-1}(p) = \inf \big\{x \in \mathbb{R}: F(x) \ge p \big\}. $$
The definition, translated to plain English, says that for a given probability value $p$ we look for some $x$ that makes $F(x)$ return a value greater than or equal to $p$; since there can be multiple values of $x$ that meet this condition (e.g., $F(x) \ge 0$ is true for any $x$), we take the smallest such $x$.
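To illustrate the definition (a minimal Python sketch of my own, not part of the original answer), take the number of heads in two fair coin tosses:
import numpy as np

values = np.array([0, 1, 2])        # number of heads in two fair coin tosses
cdf = np.cumsum([0.25, 0.5, 0.25])  # F(0)=0.25, F(1)=0.75, F(2)=1.0

def gen_quantile(p):
    # smallest x with F(x) >= p, i.e. the infimum in the definition above
    return values[np.searchsorted(cdf, p)]

print(gen_quantile(0.25))  # 0: F(0) = 0.25 already reaches p
print(gen_quantile(0.5))   # 1: the median number of heads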
Functions with no inverses
In general, there are no inverses for functions that can return the same value for different inputs, for example density functions (e.g., the standard normal density function is symmetric, so it returns the same values for $-2$ and $2$, etc.). The normal distribution is an interesting example for one more reason: it is one of the examples of cumulative distribution functions that do not have a closed-form inverse. Not every cumulative distribution function has to have a closed-form inverse! Fortunately, in such cases the inverse can be found using numerical methods.
Use-case
The quantile function can be used for random generation, as described in How does the inverse transform method work?
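For instance, a minimal inverse-transform sketch (my illustration, not part of the original answer) that turns uniform draws into Gumbel draws via the quantile function above:
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)   # U ~ Uniform(0, 1)
x = -np.log(-np.log(u))         # F^{-1}(U) is Gumbel-distributed

# mean of a standard Gumbel is the Euler-Mascheroni constant, ~0.5772
print(x.mean())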
4,702 | Help me understand the quantile (inverse CDF) function | Tim had a very thorough answer. Good job!
I'd like to add one more remark. Not every monotonically increasing function has an inverse function. Actually only strictly monotonically increasing/decreasing functions have inverse functions.
For monotonically increasing cdfs which are not strictly monotonically increasing, we have a quantile function, which is also called the inverse cumulative distribution function. You can find more details here.
Both inverse functions (for strictly increasing cdfs) and quantile functions (for cdfs that are monotonically increasing but not strictly so) can be denoted $F^{-1}$, which can be confusing sometimes.
4,703 | Help me understand the quantile (inverse CDF) function | Chapter 2 of the book "Statistical Distributions" by Forbes, Evans, Hastings, and Peacock has a concise summary with consistent notation.
A quantile is any possible value (e.g., in the context of a random draw) of a variable, that is, a variate.
The authors give an example of the sample space of tossing 2 coins as the set {HH, HT, TH, TT}. The number of heads in that sample is a quantile of the ordered set {0, 1, 2}.
For a probability distribution or mass function, you plot the variate on the x-axis and the probability on the y-axis. If you knew the probability and the function and wanted to deduce the variate on the x-axis from it, you would invert the function, or approximate an inversion of it, to get x knowing y.
The discrete or continuous values along the y-axis for a discrete or continuous pdf might not be increasing, and there may be multiple x's which would result in the same y.
The CDF (cumulative distribution function) is more convenient, as the function plotted is non-decreasing along both the x-axis and the y-axis. Extracting the quantile, that is, the variate, from the CDF is usually easier math.
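In software the quantile function is usually exposed directly; for example (a SciPy sketch of my own, not from the book), the percent-point function ppf is the inverse CDF:
from scipy import stats

# for the normal distribution the inverse CDF must be computed numerically
print(stats.norm.ppf(0.975))             # ~1.96: the x with F(x) = 0.975

# for the two-coin example: number of heads ~ Binomial(2, 0.5)
print(stats.binom.ppf(0.5, n=2, p=0.5))  # 1.0: the median number of heads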
There are a few diagrams in chapter 2 of the book demonstrating properties of the discrete probability distribution and the CDF, and similar ones are shown in the answers posted to your question above this one (though I can't see them while I'm typing this answer).
Table 2.1 has a concise summary of many terms, and item 4 is the inverse distribution function, or quantile function (of probability alpha); it refers to determining x from the inverse function, which takes the probability as an argument.
The book is a practical handbook on the subject with examples, though implementing the inverse functions requires other resources, like pre-computed tables findable at NIST or published approximation algorithms (https://www.itl.nist.gov/div898/handbook/eda/section3/eda367.htm).
(NOTE: everything past the 1st sentence was added in response to the comment from gung.)
4,704 | Why downsample? | Most classification models in fact don't yield a binary decision, but rather a continuous decision value (for instance, logistic regression models output a probability, SVMs output a signed distance to the hyperplane, ...). Using the decision values we can rank test samples, from 'almost certainly positive' to 'almost certainly negative'.
Based on the decision value, you can always assign some cutoff that configures the classifier in such a way that a certain fraction of data is labeled as positive. Determining an appropriate threshold can be done via the model's ROC or PR curves. You can play with the decision threshold regardless of the balance used in the training set. In other words, techniques like up- or downsampling are orthogonal to this.
Assuming the model is better than random, you can intuitively see that increasing the threshold for positive classification (which leads to fewer positive predictions) increases the model's precision at the cost of lower recall, and vice versa.
Consider the SVM as an intuitive example: the main challenge is to learn the orientation of the separating hyperplane. Up- or downsampling can help with this (I recommend preferring upsampling over downsampling). When the orientation of the hyperplane is good, we can play with the decision threshold (e.g., the signed distance to the hyperplane) to get a desired fraction of positive predictions.
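As a small illustration of playing with the threshold (my sketch with scikit-learn, not code from the original answer):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# imbalanced toy problem: ~10% positives
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

scores = clf.predict_proba(X)[:, 1]        # continuous decision values
for cutoff in (0.5, 0.2, 0.1):
    pred = (scores >= cutoff).astype(int)  # lower cutoff -> more positives
    print(cutoff, pred.mean())             # fraction labeled positive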
4,705 | Why downsample? | The real problem here is your choice of metric: % accuracy is a poor measure of a model's success on an unbalanced dataset (for exactly the reason you mention: it's trivial to achieve 99% accuracy in this case).
Balancing your dataset before fitting the model is a bad solution as it biases your model and (even worse) throws out potentially useful data.
You're much better off balancing your accuracy metric, rather than balancing your data. For example, you could use balanced accuracy when evaluating your model: (accuracy on the positive class + accuracy on the negative class)/2. If you predict all positive or all negative, this metric will be 50%, which is a nice property.
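A minimal sketch of the metric (my illustration; scikit-learn also ships it as sklearn.metrics.balanced_accuracy_score):
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # average of the per-class accuracies (recall on each class)
    classes = np.unique(y_true)
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in classes])

y_true = np.array([0] * 99 + [1])
y_pred = np.zeros(100, dtype=int)          # predict all negative
print(balanced_accuracy(y_true, y_pred))   # 0.5, as described above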
In my opinion, the only reason to down-sample is when you have too much data and can't fit your model. Many classifiers (logistic regression, for example) will do fine on unbalanced data.
4,706 | Why downsample? | As always, @Marc Claesen has a great answer.
I'd just add that the key concept that seems to be missing is that of a cost function. In any model you have an implicit or explicit cost ratio of false negatives to false positives (FN/FP). For the unbalanced data described, one is often willing to have a 5:1 or 10:1 ratio. There are many ways of introducing cost functions into models. A traditional method is to impose a probability cut-off on the probabilities produced by a model; this works well for logistic regression.
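As an illustration (my sketch of the standard decision-theoretic cutoff, not code from this answer), a FN:FP cost ratio maps directly to a probability cut-off:
# classify as positive when the expected cost of doing so is lower:
# (1 - p) * c_fp < p * c_fn  =>  p > c_fp / (c_fp + c_fn)
c_fn, c_fp = 10.0, 1.0          # a 10:1 cost ratio, as in the answer
cutoff = c_fp / (c_fp + c_fn)
print(cutoff)                   # ~0.091: be liberal about flagging positives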
A method used for strict classifiers that do not naturally output probability estimates is to undersample the majority class at a ratio that will induce the cost function you are interested in. Note that if you sample at 50/50 you are inducing an arbitrary cost function. The cost function is different, but just as arbitrary, as if you sampled at the prevalence rate. You can often predict an appropriate sampling ratio that corresponds to your cost function (it is usually not 50/50), but most practitioners I've talked to just try a couple of sampling ratios and choose the one closest to their cost function.
4,707 | Why downsample? | Answering Jessica's question directly - one reason for downsampling is when you're working with a large dataset and facing memory limits on your computer or simply want to reduce processing time. Downsampling (i.e., taking a random sample without replacement) from the negative cases reduces the dataset to a more manageable size.
You mentioned using a "classifier" in your question but didn't specify which one. One classifier you may want to avoid is decision trees. When running a simple decision tree on rare-event data, I often find the tree builds only a single root, given it has difficulty splitting so few positive cases into categories. There may be more sophisticated methods to improve the performance of trees for rare events; I don't know of any off the top of my head.
Therefore, using a logistic regression, which returns a continuous predicted probability value, as suggested by Marc Claesen, is a better approach. If you're performing a logistic regression on the data, the coefficients remain unbiased despite there being fewer records. You will have to adjust the intercept, $\beta_0$, from your downsampled regression according to the formula from Hosmer and Lemeshow (2000):
$$\beta_c=\beta_0 - \log\left(\frac{p_+}{1-p_+}\right)$$
where $p_+$ is the fraction of positive cases in your pre-downsampling population.
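A tiny numeric sketch of that adjustment (my illustration; it simply implements the displayed formula, and the numbers are hypothetical):
import numpy as np

p_plus = 0.01   # fraction of positives in the pre-downsampling population
beta_0 = -0.3   # hypothetical intercept from the downsampled fit

beta_c = beta_0 - np.log(p_plus / (1 - p_plus))  # the formula above
print(beta_c)   # corrected intercept to use on the original population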
Finding your preferred spam ID threshold with the ROC can be done by first scoring the complete dataset with the model coefficients trained on the downsampled dataset, and then ranking the records from highest to lowest predicted probability of being spam. Next, take the top $n$ scored records, where $n$ is whatever threshold you want to set (100, 500, 1000, etc.), and then calculate the percentage of false positive cases in the top $n$ cases and the percentage of false negative cases in the remaining lower tier of $N - n$ cases in order to find the right balance of sensitivity/specificity that serves your needs.
4,708 | Why downsample? | Of course classifying everything as 'not spam' allows you to say that, given 100 mails, it classifies 99 of them correctly, but it also classifies as 'not spam' the only one labelled as spam (a 100% false-negative rate on the spam class).
It turns out that the metric you chose to evaluate the algorithm is not appropriate here.
This video exemplifies the concept.
Roughly speaking, balancing the dataset allows you to weight the misclassification errors. An algorithm that uses an unbalanced training set presumably will not learn to discriminate from the features, because it would not give much importance to the fact that it misclassifies the data of the rare class.
4,709 | Why downsample? | I would not go for either downsampling or upsampling, as both trick the learning algorithm. However, if the data is imbalanced, the accuracy measure becomes invalid or uninformative, so it is better to use precision and recall measures. Both depend mainly on the TP count (the correctly classified spams in your case), which gives a good idea of the real performance of your system at detecting spam, regardless of the number of negative examples.
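For reference, a small sketch of the two measures (my illustration, not from the original answer):
import numpy as np

def precision_recall(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)   # flagged spam that really is spam
    recall = tp / (tp + fn)      # real spam that was caught
    return precision, recall

y_true = np.array([1, 1, 0, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
print(precision_recall(y_true, y_pred))  # (0.666..., 0.666...)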
4,710 | Machine Learning using Python | About the scikit-learn option: 100k (sparse) features and 10k samples is small enough to fit in memory, hence perfectly doable with scikit-learn (the same size as the 20 newsgroups dataset).
Here is a tutorial I gave at PyCon 2011 with a chapter on text classification with exercises and solutions:
http://scikit-learn.github.com/scikit-learn-tutorial/ (online HTML version)
https://github.com/downloads/scikit-learn/scikit-learn-tutorial/scikit_learn_tutorial.pdf (PDF version)
https://github.com/scikit-learn/scikit-learn-tutorial (source code + exercises)
I also gave a talk on the topic, which is an updated version of the one I gave at PyCon FR. Here are the slides (and the embedded video in the comments):
http://www.slideshare.net/ogrisel/statistical-machine-learning-for-text-classification-with-scikitlearn-and-nltk
As for feature selection, have a look at this answer on quora where all the examples are based on the scikit-learn documentation:
http://www.quora.com/What-are-some-feature-selection-methods/answer/Olivier-Grisel
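As a condensed illustration of such a text pipeline (my sketch with current scikit-learn names, not the tutorial's exact code; it downloads the 20 newsgroups data on first use):
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # sparse tf-idf features
    SelectKBest(chi2, k=1000),              # chi-squared feature selection
    LogisticRegression(max_iter=1000),
)
clf.fit(data.data, data.target)
print(clf.score(data.data, data.target))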
We don't have collocation feature extraction in scikit-learn yet. Use nltk and nltk-trainer to do this in the meantime:
https://github.com/japerk/nltk-trainer
4,711 | Machine Learning using Python | In terms of working with text, have a look at NLTK. Very, very well supported & documented (there's even a book online, or in paper if you prefer) and it will do the preprocessing you require. You might find Gensim useful as well; the emphasis is on vector space modeling and it's got scalable implementations of LSI and LDA (pLSI too, I think) if those are of interest. It will also do selection by tf-idf; I'm not sure that NLTK does. I've used pieces of these on corpora of ~50k without much difficulty.
NLTK:
http://www.nltk.org/
Gensim:
http://nlp.fi.muni.cz/projekty/gensim/
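As a small illustration of that preprocessing (my sketch; it assumes the NLTK tokenizer and stopword resources have been downloaded):
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# nltk.download("punkt"); nltk.download("stopwords")  # one-time setup
stemmer = PorterStemmer()
stop = set(stopwords.words("english"))

tokens = nltk.word_tokenize("The cats are chasing the mice quickly")
cleaned = [stemmer.stem(t.lower()) for t in tokens if t.lower() not in stop]
print(cleaned)  # e.g. ['cat', 'chase', 'mice', 'quickli']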
Unfortunately, as to the main thrust of your question I'm not familiar with the specific libraries you reference (although I've used bits of scikits-learn before).
4,712 | Machine Learning using Python | Python has a wide range of ML libraries (check out mloss.org as well). However, I always have the feeling that it's of more use for ML researchers than for ML practitioners.
Numpy/SciPy and matplotlib are excellent tools for scientific work with Python. If you are not afraid to hack in most of the math formulas yourself, you will not be disappointed. Also, it is very easy to use the GPU with cudamat or gnumpy - experiments that took days before are now completed in hours or even minutes.
The latest kid on the block is probably Theano. It is a symbolic language for mathematical expressions that comes with optimizations, GPU implementations and the über-feature automatic differentiation, which is nothing short of awesome for gradient-based methods.
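As a tiny illustration of that feature (a sketch using the classic Theano API; note the project is no longer actively maintained):
import theano
import theano.tensor as T

x = T.dscalar("x")
y = x ** 2
dy = T.grad(y, x)             # symbolic derivative, computed automatically
f = theano.function([x], dy)  # compile the expression (optionally for the GPU)
print(f(3.0))                 # 6.0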
Also, as far as I know the NLTK mentioned by JMS is basically the number one open source natural language library out there.
Python is the right tool for machine learning.
4,713 | Machine Learning using Python | Let me suggest Orange
comprehensive
Yes
scalable (100k features, 10k examples)
Yes
well supported libraries for doing ML in Python out there?
Yes
library that has a good collection of classifiers, feature selection methods (Information Gain, Chi-Squared etc.),
All of these work out of box in Orange
and text pre-processing capabilities (stemming, stopword removal, tf-idf etc.).
I have never used Orange for text processing, though
4,714 | Machine Learning using Python | Not sure if this is particularly useful, but there's a guide for programmers to learn statistics in Python available online. http://www.greenteapress.com/thinkstats/
It seems pretty good from my brief scan, and it appears to talk about some machine learning methods, so it might be a good place to start.
4,715 | Machine Learning using Python | Check out libsvm.
4,716 | Machine Learning using Python | SHOGUN (将軍) is a large scale machine learning toolbox, which seems promising.
4,717 | Machine Learning using Python | open source python ml library
PySpark MLlib https://spark.apache.org/docs/0.9.0/mllib-guide.html
proprietary ml library with free trial
GraphLab Create https://dato.com/products/create/
4,718 | Machine Learning using Python | As @ogrisel highlighted, scikit-learn is one of the best machine learning packages out there for Python. It is well suited for data-sets with 100k (sparse) features and 10k samples, and even for marginally bigger data-sets that may contain over 200k rows. Basically, any dataset that fits in memory.
But if you are looking for a highly scalable Python machine learning framework, I'd highly recommend PySpark MLlib. Since datasets these days can grow exponentially (given the big data and deep learning wave), you often need a platform that scales well and runs fast not just in the model training phase, but also during the feature engineering phase (feature transformation, feature selection). Let's look at all three criteria you are interested in for the Spark MLlib platform:
Scalability:
If your dataset can fit in the memory, scikit-learn should be your choice. If it's too big to fit in the memory, Spark is the way to go. The important thing to note here is that Spark works faster only in a distributed setting.
Comprehensiveness:
Sklearn is far richer in terms of decent implementations of a large number of commonly used algorithms as compared to Spark MLlib. The support for data manipulation and transformation is also richer in scikit-learn. Spark MLlib has sufficient data transformation modules that do the trick the majority of the time. So, in case you end up with Spark MLlib over scalability concerns, you will still be able to get the job done. It has support for correlation analysis, feature extraction (tf-idf, word2vec, CountVectorizer) and feature transformation (Tokenizer, StopWordsRemover, n-gram, Binarizer, PCA, etc.). For a detailed list see the link below:
Extracting, transforming and selecting features in Spark mllib
Classification:
Spark mllib has all the major algorithms' implementation that you'd be using majority of the times (including algos that work well for text classification). For a detailed overview of what algorithms are available through mllib, see the link below.
Mllib Classification and regression
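As a quick illustration (a hedged sketch in the pyspark.ml API; the toy data and column names are my own, not from the linked docs, and it assumes a local Spark installation):
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("spam spam offer", 1.0), ("meeting at noon", 0.0)], ["text", "label"]
)
pipe = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),   # tf-idf feature extraction
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipe.fit(df)  # the whole pipeline runs distributed on a cluster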
Bonus: Apache Spark has support for Python, R, Java, and Scala. So, if tomorrow you decide to experiment with a different language (as a personal choice or for professional reasons), you won't have to learn an entirely new framework.
4,719 | Machine Learning using Python | I don't know if you are still looking for some advice (you asked this question 5 months ago...). I just started this book and so far it is pretty good:
https://www.amazon.com.mx/dp/1491962291/ref=cm_cr_ryp_prd_ttl_sol_3
The author shows code, examples and explains some theory and math "behind the scenes" of ML algorithms. I'm finding this very instructive. Hope this could be the same for you.
4,720 | Prediction in Cox regression | Following the Cox model, the estimated hazard for individual $i$ with covariate vector $x_i$ has the form
$$\hat{h}_i(t) = \hat{h}_0(t) \exp(x_i' \hat{\beta}),$$
where $\hat{\beta}$ is found by maximising the partial likelihood, while $\hat{h}_0$ follows from the Nelson-Aalen estimator,
$$
\hat{h}_0(t_i) = \frac{d_i}{\sum_{j:t_j \geq t_i} \exp(x_j' \hat{\beta})}
$$
with $t_1$, $t_2, \dotsc$ the distinct event times and $d_i$ the number of deaths at $t_i$
(see, e.g., Section 3.6).
Similarly,
$$\hat{S}_i(t) = \hat{S}_0(t)^{\exp(x_i' \hat{\beta})}$$
with $\hat{S}_0(t) = \exp(- \hat{\Lambda}_0(t))$ and
$$\hat{\Lambda}_0(t) = \sum_{j:t_j \leq t} \hat{h}_0(t_j).$$
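As an illustration of these formulas, here is a minimal numpy sketch (my own, assuming $\hat{\beta}$ has already been fitted; all variable names are hypothetical):
import numpy as np

def baseline_hazard(time, event, X, beta):
    # h0(t_i) = d_i / sum_{j: t_j >= t_i} exp(x_j' beta)
    risk = np.exp(X @ beta)
    t_events = np.unique(time[event == 1])          # distinct event times
    d = np.array([np.sum(event[time == t]) for t in t_events])
    denom = np.array([risk[time >= t].sum() for t in t_events])
    return t_events, d / denom

def survival(t, x_new, t_events, h0, beta):
    # S(t) = exp(-Lambda0(t)) ** exp(x' beta), with Lambda0 the cumulative sum
    Lambda0 = h0[t_events <= t].sum()
    return np.exp(-Lambda0) ** np.exp(x_new @ beta)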
EDIT:
This might also be of interest :-)
4,721 | Prediction in Cox regression | Maybe you would also like to try something like this? Fit a Cox proportional hazards model and use it to get the predicted survival curve for a new instance.
Taken from the help file for survfit.coxph in R (I just added the lines part):
library(survival)  # provides coxph, survfit, and the ovarian data

# fit a Cox proportional hazards model and plot the
# predicted survival for a 60 year old
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
plot(survfit(fit, newdata = data.frame(age = 60)),
     xscale = 365.25, xlab = "Years", ylab = "Survival", conf.int = FALSE)
# add the predicted survival for a 70 year old to the same plot
# (lines() draws on the existing axes, so xlab/ylab are not needed here)
lines(survfit(fit, newdata = data.frame(age = 70)), xscale = 365.25)
You should keep in mind though that for the proportional hazards assumption to still hold for your prediction, the patient for which you predict should be from a group that is qualitatively the same as the one used to derive the Cox proportional hazards model you used for the prediction.
4,722 | Prediction in Cox regression | The function predictSurvProb in the pec package can give you absolute risk estimates for new data based on an existing Cox model, if you use R.
The mathematical details I cannot explain.
EDIT: The function provides survival probabilities, which I have so far taken as 1-(Event probability).
EDIT 2:
One can do without the pec package. Using only the survival package, the following function returns absolute risk based on a Cox model
risk <- function(model, newdata, time) {
  # survival probability at `time` for each row of `newdata`,
  # converted to an absolute risk as 1 - S(time)
  fit <- survfit(model, newdata = newdata, se.fit = FALSE, conf.int = FALSE)
  as.numeric(1 - summary(fit, times = time)$surv)
}
4,723 | Prediction in Cox regression | The basehaz function of the survival package provides the baseline hazard at the event time points. From that you can work your way up the math that ocram provides and include the hazard ratios (the exponentiated coefficients) from your coxph estimates.
4,724 | Prediction in Cox regression | The whole point of the Cox model is the proportional hazards assumption and the use of the partial likelihood. The partial likelihood has the baseline hazard function eliminated, so you do not need to specify one. That is the beauty of it!
4,725 | Class imbalance in Supervised Machine Learning | There are many frameworks and approaches. This is a recurrent issue.
Examples:
Undersampling. Select a subsample of the set of zeros such that its size matches the set of ones. There is an obvious loss of information, unless you use a more complex framework (for instance, I would split the first set into 9 smaller, mutually exclusive subsets, train a model on each one of them, and ensemble the models; see the sketch after this list).
Oversampling. Produce artificial ones until the proportion is 50%/50%. My previous employer used this by default. There are many frameworks for this (I think SMOTE is the most popular, but I prefer simpler tricks like Noisy PCA).
One Class Learning. Just assume your data has a few real points (the ones) and lots of random noise that doesn't physically exist leaked into the dataset (anything that is not a one is noise). Use an algorithm to denoise the data instead of a classification algorithm.
Cost-Sensitive Training. Use an asymmetric cost function to artificially balance the training process.
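Here is the undersampling-ensemble idea from the first item as a minimal scikit-learn sketch (my illustration; the function names are hypothetical):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def undersample_ensemble(X, y, n_splits=9, seed=0):
    # split the zeros into disjoint subsets, pair each subset with all
    # the ones, train one model per pair, and average the ensemble
    rng = np.random.default_rng(seed)
    zeros, ones = np.where(y == 0)[0], np.where(y == 1)[0]
    models = []
    for chunk in np.array_split(rng.permutation(zeros), n_splits):
        idx = np.concatenate([chunk, ones])
        models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
    return models

def ensemble_proba(models, X):
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
models = undersample_ensemble(X, y)
print(ensemble_proba(models, X[:3]))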
Some lit reviews, in increasing order of technical complexity/level of detail:
On the Classification of Imbalanced Datasets
ACM SIGKDD Explorations Newsletter - Special issue on learning from imbalanced datasets (read at least the editorial, it will be enlightening)
Data Mining for Imbalanced Datasets: An Overview
Oh, and by the way, 90%/10% is not unbalanced. Card transaction fraud datasets often are split 99.97%/0.03%. This is unbalanced. | Class imbalance in Supervised Machine Learning | There are many frameworks and approaches. This is a recurrent issue.
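A minimal sketch of the two resampling ideas in plain R (toy data; packages such as smotefamily implement SMOTE proper):
set.seed(1)
d <- data.frame(x1 = rnorm(1000), y = factor(rep(c(0, 1), c(900, 100))))
zeros <- which(d$y == 0); ones <- which(d$y == 1)
under <- d[c(sample(zeros, length(ones)), ones), ]                    # undersampling: shrink the zeros to 100
over  <- d[c(zeros, sample(ones, length(zeros), replace = TRUE)), ]   # oversampling: resample the ones to 900
table(under$y); table(over$y)                                         # 100/100 and 900/900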
4,726 | Class imbalance in Supervised Machine Learning | This heavily depends on the learning method. Most general-purpose approaches have one (or several) ways to deal with this. A common fix is to assign a higher misclassification penalty to the minority class, forcing the classifier to recognize it (SVM, logistic regression, neural networks, ...).
Changing the sampling is also a possibility, as you mention. In this case, oversampling the minority class is usually a better solution than undersampling the majority class.
Some methods, like random forests, don't need any modifications.
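As a sketch of the penalty route, the svm function in e1071 takes per-class misclassification weights (toy data; the 9:1 weight simply mirrors a 90%/10% split):
library(e1071)
set.seed(1)
d <- data.frame(x1 = c(rnorm(900), rnorm(100, mean = 2)),
                y = factor(rep(c(0, 1), c(900, 100))))
fit <- svm(y ~ x1, data = d, class.weights = c("0" = 1, "1" = 9))
table(predicted = predict(fit), actual = d$y)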
4,727 | Class imbalance in Supervised Machine Learning | Often the problem is not the class frequency but the absolute number of cases in the minority class. If you do not have enough variation in the target compared with the variation in the features, the algorithm may not be able to classify things very accurately.
One option is to apply the misclassification penalty at the classification step rather than at the parameter-estimation step, if there is one. Some methods have no concept of parameters at all; they just produce class labels or class probabilities outright.
When you have a probabilistic estimator, you can make the classification decision on information-theoretic grounds or in combination with business value, i.e., by choosing a decision threshold that reflects the relative misclassification costs.
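For instance, with a probabilistic estimator the decision threshold can come straight from the costs (a sketch with made-up data and a made-up 9:1 cost ratio):
set.seed(1)
d <- data.frame(x1 = c(rnorm(900), rnorm(100, mean = 2)),
                y = rep(c(0, 1), c(900, 100)))
p <- predict(glm(y ~ x1, data = d, family = binomial), type = "response")
cost_fn <- 9; cost_fp <- 1                  # a missed "1" hurts 9x as much as a false alarm
threshold <- cost_fp / (cost_fp + cost_fn)  # Bayes decision rule: predict 1 when p > 0.1
table(predicted = as.numeric(p > threshold), actual = d$y)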
4,728 | Class imbalance in Supervised Machine Learning | Two more tricks:
1. Use the CDF: count the class frequency in your training data, or in a very large validation set (this works if your test set will not change, and the validation set must have the same distribution as the training set). Then sort your predictions and assign the top X% (the frequency you counted before) to the "one" class and the rest to the other class.
2. Use weighted samples: the model will lean toward the up-weighted class. You can derive the weights from the sample variance $v$, e.g. $w_i = \frac{1}{2}\left(1 - \frac{v_{\max} - v_i}{v_{\max}}\right)$.
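A sketch of trick 1 in R (made-up scores, just to show the mechanics):
set.seed(1)
train_y <- rep(c(0, 1), c(900, 100))         # training labels: 10% ones
scores  <- runif(1000)                       # model scores on new data
pos_rate <- mean(train_y == 1)               # the frequency counted on the training data
cutoff   <- quantile(scores, 1 - pos_rate)   # label the top 10% of scores as "one"
pred     <- as.numeric(scores > cutoff)
mean(pred)                                   # ~0.10 by construction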
4,729 | Why do we care so much about normally distributed error terms (and homoskedasticity) in linear regression when we don't have to? | In Econometrics, we would say that non-normality violates the conditions of the Classical Normal Linear Regression Model, while heteroskedasticity violates both the assumptions of the CNLR and of the Classical Linear Regression Model.
But those that say "...violates OLS" are also justified: the name Ordinary Least-Squares comes from Gauss directly and essentially refers to normal errors. In other words, "OLS" is not an acronym for least-squares estimation (which is a much more general principle and approach), but for the CNLR.
Ok, this was history, terminology and semantics. I understand the core of the OP's question as follows: "Why should we emphasize the ideal, if we have found solutions for the case when it is not present?" (Because the CNLR assumptions are ideal, in the sense that they provide excellent least-square estimator properties "off-the-shelf", and without the need to resort to asymptotic results. Remember also that OLS is maximum likelihood when the errors are normal).
As an ideal, it is a good place to start teaching. This is what we always do in teaching any kind of subject: "simple" situations are "ideal" situations, free of the complexities one will actually encounter in real life and real research, and for which no definite solutions exist.
And this is what I find problematic about the OP's post: he writes about robust standard errors and the bootstrap as though they were "superior alternatives", or foolproof solutions to the absence of the assumptions under discussion, about which, moreover, the OP writes
"..assumptions that people do not have to meet"
Why? Because there are some methods of dealing with the situation, methods that have some validity, of course, but are far from ideal? The bootstrap and heteroskedasticity-robust standard errors are not the solutions: if they indeed were, they would have become the dominant paradigm, sending the CLR and the CNLR to the history books. But they are not.
So we start from the set of assumptions that guarantees those estimator properties we have deemed important (it is another discussion whether the properties designated as desirable are indeed the ones that should be), so that we keep visible that any violation of them has consequences which cannot be fully offset through the methods we have found to deal with the absence of these assumptions. It would be really dangerous, scientifically speaking, to convey the feeling that "we can bootstrap our way to the truth of the matter", because, simply, we cannot.
So, they remain imperfect solutions to a problem, not an alternative and/or definitely superior way to do things. Therefore, we have first to teach the problem-free situation, then point to the possible problems, and then discuss possible solutions. Otherwise, we would elevate these solutions to a status they don't really have.
4,730 | Why do we care so much about normally distributed error terms (and homoskedasticity) in linear regression when we don't have to? | If we had time, in the class where we first introduce regression models, to discuss bootstrapping and the other techniques that you mentioned (including all their assumptions, pitfalls, etc.), then I would agree with you that it is not necessary to talk about the normality and homoscedasticity assumptions. But in truth, when regression is first introduced we do not have the time to talk about all those other things, so we would rather have the students be conservative and check for things that may not be needed, and consult a statistician (or take another stats class or 2 or 3, ...) when the assumptions don't hold.
If you tell students that those assumptions don't matter except when ..., then most will only remember the "don't matter" part and not the important "when" parts.
If we have a case with unequal variances, then yes, we can still fit a least squares line, but is it still the "best" line? Or would it be better to consult someone with more experience/training on how to fit lines in that case? Even if we are happy with the least squares line, shouldn't we acknowledge that predictions will have different properties for different values of the predictor(s)? So checking for unequal variances is good for later interpretation, even if we don't need it for the tests/intervals/etc. that we are using.
4,731 | Why do we care so much about normally distributed error terms (and homoskedasticity) in linear regression when we don't have to? | 1) Rarely do people only want to estimate. Usually inference - CIs, PIs, tests - is the aim, or at least part of it (even if it is sometimes done relatively informally).
2) Things like the Gauss-Markov theorem aren't necessarily much help -- if the distribution is sufficiently far from normal, a linear estimator is not much use. There's no point in getting the BLUE if no linear estimator is very good.
3) Things like sandwich estimators involve a large number of implicit parameters. That may still be okay if you have a lot of data, but often people don't.
4) Prediction intervals rely on the shape of the conditional distribution, including having a good handle on the variance at the observation - you can't quite so easily wave the details away with a PI.
5) Things like bootstrapping are often handy for very large samples. They sometimes struggle in small samples -- and even in moderately sized samples, we frequently find that the actual coverage properties are nothing like advertised.
Which is to say -- few things are the sort of panacea people would like them to be. All of those things have their place, and there are certainly plenty of cases where (say) normality is not required, and where estimation and inference (tests and CIs) can reasonably be done without needing normality, constant variance and so on.
One thing that often seems to be forgotten is the other parametric assumptions that could be made instead. Often people know enough about a situation to make a fairly decent parametric assumption (e.g., that the conditional response will tend to be right-skew with s.d. pretty much proportional to the mean might lead us to consider, say, a gamma or lognormal model); often this deals with both the heteroskedasticity and the non-normality in one go.
A very useful tool is simulation -- with it we can examine the properties of our tools in situations very like those our data appear to have arisen from, and so either use them in the comforting knowledge that they have good properties in those cases or, sometimes, see that they don't work as well as we might hope.
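As a small illustration of that last point, a quick simulation in R of how the usual OLS interval can miss its nominal coverage when the error s.d. grows with x (the numbers are made up for illustration):
set.seed(1)
cover <- replicate(2000, {
  x <- runif(100)
  y <- 1 + 2 * x + rnorm(100, sd = 2 * x)   # heteroskedastic errors; true slope = 2
  ci <- confint(lm(y ~ x))["x", ]
  ci[1] < 2 && 2 < ci[2]
})
mean(cover)   # typically noticeably below the nominal 0.95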
2) Things like the Gauss Markov | Why do we care so much about normally distributed error terms (and homoskedasticity) in linear regression when we don't have to?
1) rarely do people only want to estimate. Usually inference - CIs, PIs, tests - is the aim, or at least part of it (even if sometimes it's done relatively informally)
2) Things like the Gauss Markov theorem isn't necessarily much help -- if the distribution is sufficiently far from normal, a linear estimator is not much use. There's no point in getting the BLUE if no linear estimator is very good.
3) things like sandwich estimators involve a large number of implicit parameters. It may still be okay if you have a lot of data, but many times people don't.
4) Prediction intervals rely on the conditional distribution's shape including having a good handle on the variance at the observation - you can't quite so easily wave the details away with a PI.
5) things like bootstrapping are often handy for very large samples. They sometimes struggle in small samples -- and even in moderately sized samples, frequently we find that the actual coverage properties are nothing like advertized.
Which is to say -- few things are the sort of panacea people would like them to be. All of those things have their place, and there are certainly plenty of cases where (say) normality is not required, and where estimation and inference (tests and CIs) can reasonably be done without necessarily needing normality, constant variance and so on.
One thing that often seems to be forgotten is other parametric assumptions that could be made instead. Often people know enough about a situation to make a fairly decent parametric assumption (e.g. say... that the conditional response will tend to be right skew with s.d. pretty much proportional to mean might lead us to consider say a gamma or lognormal model); often this may deal with both the heteroskedasticity and the non-normality in one go.
A very useful tool is simulation -- with that we can examine the properties of our tools in situations very like those it appears our data may have arisen from, and so either use them in the comforting knowledge that they have good properties in those cases (or, sometimes, see that they don't work as well as we might hope). | Why do we care so much about normally distributed error terms (and homoskedasticity) in linear regre
1) rarely do people only want to estimate. Usually inference - CIs, PIs, tests - is the aim, or at least part of it (even if sometimes it's done relatively informally)
2) Things like the Gauss Markov |
4,732 | What is the difference between NaN and NA? | ?is.nan
?is.na
?NA
?NaN
These should answer your question.
But, in short:
NaN is what results from operations like $\frac{0}{0}$ -- it stands for "Not a Number".
NA is generally interpreted as a missing value and has various forms - NA_integer_, NA_real_, etc.
Therefore, NaN $\neq$ NA, and there is a need for both NaN and NA.
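A quick console illustration:
> 0/0          # NaN: an arithmetic result that is not a number
[1] NaN
> NA > 1       # NA: a missing value; comparisons propagate it
[1] NA
> is.na(NaN)   # NaN counts as missing...
[1] TRUE
> is.nan(NA)   # ...but NA is not NaN
[1] FALSE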
?is.na
?NA
?NaN
Should answer your question.
But, in short:
NaN means $\frac {0} {0}$ -- Stands for Not a Number
NA is generally interpreted as a missing value and has various forms - NA_int | What is the difference between NaN and NA?
?is.nan
?is.na
?NA
?NaN
Should answer your question.
But, in short:
NaN means $\frac {0} {0}$ -- Stands for Not a Number
NA is generally interpreted as a missing value and has various forms - NA_integer_, NA_real_, etc.
Therefore, NaN $\neq$ NA and there is a need for NaN and NA. | What is the difference between NaN and NA?
?is.nan
?is.na
?NA
?NaN
Should answer your question.
But, in short:
NaN means $\frac {0} {0}$ -- Stands for Not a Number
NA is generally interpreted as a missing value and has various forms - NA_int |
4,733 | What is the difference between NaN and NA? | NA is for missing data. NaN, as J.M. said, is for arithmetic purposes. NaN is usually the product of some arithmetic operation, such as 0/0. NA is usually declared in advance, or results from an operation when you try to access something that is not there:
> a <- c(1,2)
> a[3]
[1] NA
4,734 | What is the difference between NaN and NA? | I think of NA as standing for 'Not Available', while NaN is 'Not a Number', although this is more a mnemonic than an explanation. By the way, I know of no language other than R (perhaps Splus?) that has both. Matlab, for example, has only NaN.
4,735 | What is the difference between NaN and NA? | NA means the error was already there when you imported the spreadsheet into R. NaN means you caused the error after importing the data. It's the third type of error that's really hard to catch.
4,736 | What is the difference between NaN and NA? | NA = Not Available
NaN = Not a Number
I think once we expand the acronyms, it should be self-explanatory.
4,737 | Understanding Naive Bayes | I'm going to run through the whole Naive Bayes process from scratch, since it's not totally clear to me where you're getting hung up.
We want to find the probability that a new example belongs to each class: $P(class|feature_1, feature_2, ..., feature_n)$. We then compute that probability for each class, and pick the most likely class. The problem is that we usually don't have those probabilities. However, Bayes' Theorem lets us rewrite that equation in a more tractable form.
Bayes' Theorem is simply $$P(A|B)=\frac{P(B|A) \cdot P(A)}{P(B)}$$ or, in terms of our problem:
$$P(class|features)=\frac{P(features|class) \cdot P(class)}{P(features)}$$
We can simplify this by removing $P(features)$. We can do this because we're going to rank $P(class|features)$ for each value of $class$; $P(features)$ will be the same every time--it doesn't depend on $class$. This leaves us with
$$ P(class|features) \propto P(features|class) \cdot P(class)$$
The prior probabilities, $P(class)$, can be calculated as you described in your question.
That leaves $P(features|class)$. We want to eliminate the massive, and probably very sparse, joint probability $P(feature_1, feature_2, ..., feature_n|class)$. If the features are (conditionally) independent, then $$P(feature_1, feature_2, ..., feature_n|class) = \prod_i{P(feature_i|class)}$$ Even if they're not actually independent, we can assume they are (that's the "naive" part of naive Bayes). I personally think it's easier to think this through for discrete (i.e., categorical) variables, so let's use a slightly different version of your example. Here, I've divided each feature dimension into two categorical variables.
[Figure: the example points, with $feature_1$ split into regions A and B and $feature_2$ split into regions X and Y.]
Example: Training the classifier
To train the classifier, we count up various subsets of points and use them to compute the prior and conditional probabilities.
The priors are trivial: There are sixty total points, forty are green while twenty are red. Thus $$P(class=green)=\frac{40}{60} = 2/3 \text{ and } P(class=red)=\frac{20}{60}=1/3$$
Next, we have to compute the conditional probabilities of each feature-value given a class. Here, there are two features: $feature_1$ and $feature_2$, each of which takes one of two values (A or B for one, X or Y for the other). We therefore need to know the following:
$P(feature_1=A|class=red)$
$P(feature_1=B|class=red)$
$P(feature_1=A|class=green)$
$P(feature_1=B|class=green)$
$P(feature_2=X|class=red)$
$P(feature_2=Y|class=red)$
$P(feature_2=X|class=green)$
$P(feature_2=Y|class=green)$
(in case it's not obvious, this is all possible pairs of feature-value and class)
These are easy to compute by counting and dividing too. For example, for $P(feature_1=A|class=red)$, we look only at the red points and count how many of them are in the 'A' region for $feature_1$. There are twenty red points, all of which are in the 'A' region, so $P(feature_1=A|class=red)=20/20=1$. None of the red points are in the B region, so $P(feature_1=B|class=red)=0/20=0$. Next, we do the same, but consider only the green points. This gives us $P(feature_1=A|class=green)=5/40=1/8$ and $P(feature_1=B|class=green)=35/40=7/8$. We repeat that process for $feature_2$ to round out the probability table. Assuming I've counted correctly, we get
$P(feature_1=A|class=red)=1$
$P(feature_1=B|class=red)=0$
$P(feature_1=A|class=green)=1/8$
$P(feature_1=B|class=green)=7/8$
$P(feature_2=X|class=red)=3/10$
$P(feature_2=Y|class=red)=7/10$
$P(feature_2=X|class=green)=8/10$
$P(feature_2=Y|class=green)=2/10$
Those ten probabilities (the two priors plus the eight conditionals) are our model.
Classifying a New Example
Let's classify the white point from your example. It's in the "A" region for $feature_1$ and the "Y" region for $feature_2$. We want to find the probability that it's in each class. Let's start with red. Using the formula above, we know that:
$$P(class=red|example) \propto P(class=red) \cdot P(feature_1=A|class=red) \cdot P(feature_2=Y|class=red)$$
Subbing in the probabilities from the table, we get
$$P(class=red|example) \propto \frac{1}{3} \cdot 1 \cdot \frac{7}{10} = \frac{7}{30}$$
We then do the same for green:
$$P(class=green|example) \propto P(class=green) \cdot P(feature_1=A|class=green) \cdot P(feature_2=Y|class=green) $$
Subbing in those values gets us $\frac{1}{60}$ ($\frac{2}{3} \cdot \frac{1}{8} \cdot \frac{2}{10}$). Finally, we look to see which class gave us the highest probability. In this case, it's clearly the red class ($\frac{7}{30} > \frac{1}{60}$), so that's where we assign the point.
Notes
In your original example, the features are continuous. In that case, you need to find some way of assigning $P(feature=value|class)$ for each class. You might consider fitting them to a known probability distribution (e.g., a Gaussian). During training, you would find the mean and variance for each class along each feature dimension. To classify a point, you'd find $P(feature=value|class)$ by plugging in the appropriate mean and variance for each class. Other distributions might be more appropriate, depending on the particulars of your data, but a Gaussian would be a decent starting point.
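To make the continuous case concrete, here is a hand-rolled Gaussian sketch in R (toy data; the naiveBayes function in the e1071 package does this properly):
set.seed(1)
train <- data.frame(f1 = c(rnorm(40, 0), rnorm(20, 3)),
                    f2 = c(rnorm(40, 0), rnorm(20, 3)),
                    cls = rep(c("green", "red"), c(40, 20)))
# Per class: the prior, plus a Gaussian mean/sd for each feature dimension
model <- lapply(split(train, train$cls), function(d)
  list(prior = nrow(d) / nrow(train),
       mu = sapply(d[1:2], mean), s = sapply(d[1:2], sd)))
# Score a new point: prior * prod_i P(feature_i = x_i | class), then pick the max
x <- c(f1 = 2.5, f2 = 2.0)
scores <- sapply(model, function(m) m$prior * prod(dnorm(x, m$mu, m$s)))
names(which.max(scores))   # the predicted class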
I'm not too familiar with the DARPA data set, but you'd do essentially the same thing. You'll probably end up computing something like P(service=finger|attack=TRUE), P(service=finger|attack=FALSE), P(service=ftp|attack=TRUE), etc., and then combining them in the same way as in the example. As a side note, part of the trick here is to come up with good features. Source IP, for example, is probably going to be hopelessly sparse--you'll probably only have one or two examples for a given IP. You might do much better if you geolocated the IP and used "Source_in_same_building_as_dest (true/false)" or something as a feature instead.
I hope that helps more. If anything needs clarification, I'd be happy to try again!
4,738 | Understanding Naive Bayes | Simplifying the notation with $D$ denoting the data, we want to find which of the various $P(C_j\mid D)$ is the largest. Now, Bayes' formula gives
$$P(C_j\mid D) = \frac{P(D\mid C_j)P(C_j)}{P(D)}, ~ j = 1, 2, \ldots$$
where the denominator on the right is the same for all $j$. If we want to
find which of $P(C_1\mid D)$, $P(C_2\mid D), \ldots$ is the largest, we can,
of course, compute each $P(C_j\mid D)$ and compare the values. But note
that the comparisons are not really affected by the value of $P(D)$ which
is the same in all cases. We could equally well compute all the $P(D\mid C_j)P(C_j)$
and compare (that is, without bothering to divide each $P(D\mid C_j)P(C_j)$
by $P(D)$ before the comparisons), and the
same $C_j$ will be chosen as having the largest posterior probability.
Put another way, the posterior probability $P(C_j\mid D)$ is
proportional to the likelihood $P(D\mid C_j)$ times the prior probability $P(C_j)$
$$P(C_j\mid D) \propto P(D\mid C_j)P(C_j).$$ Finally, when the data $D$ is a
collection of (conditionally) independent observations $(x_1, x_2, \ldots, x_d)$ given
$C_j$, we have that
$$\begin{align*}
P(D\mid C_j) &= P(x_1, x_2, \ldots, x_d\mid C_j)\\
&= P(x_1\mid C_j)P(x_2\mid C_j)\cdots P(x_d\mid C_j)\\
&= \prod_{i=1}^d P(x_i\mid C_j)
\end{align*}$$
4,739 | Understanding Naive Bayes | The main assumption behind the naive Bayes model is that each feature (x_i) is conditionally independent of all other features given the class. This assumption is what allows us to write the likelihood as a simple product (as you have shown).
This is also what helps the naive Bayes model generalize well in practice. Consider the training phase: if we did not make this assumption, then learning would involve estimating a complicated, high-dimensional distribution, p(x1, x2, ..., xn, c), in which all of the features are jointly distributed. Instead, we can train by estimating p(x1, c), p(x2, c), ..., p(xn, c), since knowing the value of c makes the values of all the other features irrelevant (they provide no additional information about x_i).
I don't know a good way to visualize this (besides the standard graphical model notation), but to make it more concrete you can write some code to learn a naive Bayes model (you can grab some example data here). Train and test. Now drop the conditional independence assumption and modify the code. Train, test, and compare to the previous model.
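For instance, a compact version with the e1071 package (iris used as stand-in example data):
library(e1071)
fit <- naiveBayes(Species ~ ., data = iris)   # one Gaussian per feature per class
mean(predict(fit, iris) == iris$Species)      # in-sample accuracy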
4,740 | Gradient Boosting for Linear Regression - why does it not work? | What am I missing here?
I don't think you're really missing anything!
Another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well (adding all intercepts and corresponding coefficients) so I cannot imagine how that could ever improve the model. The last observation is that a linear regression (the most typical approach) is using sum of squared residuals as a loss function - the same one that GB is using.
Seems to me that you nailed it right there, and gave a short sketch of a proof that linear regression just beats boosting linear regressions in this setting.
To be pedantic, both methods are attempting to solve the following optimization problem
$$ \hat \beta = \text{argmin}_\beta (y - X \beta)^t (y - X \beta) $$
Linear regression just observes that you can solve it directly, by finding the solution to the linear equation
$$ X^t X \beta = X^t y $$
This automatically gives you the best possible value of $\beta$ out of all possibilities.
Boosting, whether your weak learner is a one-variable or multi-variable regression, gives you a sequence of coefficient vectors $\beta_1, \beta_2, \ldots$. The final model prediction is, as you observe, a sum, and has the same functional form as the full linear regressor
$$ X \beta_1 + X \beta_2 + \cdots + X \beta_n = X (\beta_1 + \beta_2 + \cdots + \beta_n) $$
Each of these steps is chosen to further decrease the sum of squared errors. But we could have found the minimum possible sum of squared errors within this functional form by just performing a full linear regression to begin with.
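A quick empirical check of this in plain R (nu is the learning rate; the data are a toy setup): boosting OLS fits on the residuals converges geometrically to the single OLS solution.
set.seed(1)
x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
X <- cbind(1, x)
beta <- c(0, 0); nu <- 0.1
for (m in 1:500) {
  r <- as.vector(y - X %*% beta)          # residuals of the current ensemble
  beta <- beta + nu * coef(lm(r ~ x))     # weak learner: OLS fit to the residuals
}
rbind(boosted = beta, ols = coef(lm(y ~ x)))   # the two rows are essentially identical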
A possible defense of boosting in this situation could be the implicit regularization it provides. Possibly (I haven't played with this) you could use the early-stopping feature of a gradient booster, along with cross-validation, to stop short of the full linear regression. This would provide regularization for your regression, and possibly help with overfitting. This is not particularly practical, as one has very efficient and well-understood options like ridge regression and the elastic net in this setting.
Boosting shines when there is no terse functional form around. Boosting decision trees lets the functional form of the regressor/classifier evolve slowly to fit the data, often resulting in complex shapes one could not have dreamed up by hand and eye. When a simple functional form is desired, boosting is not going to help you find it (or at least is probably a rather inefficient way to find it).
I don't think you're really missing anything!
Another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well | Gradient Boosting for Linear Regression - why does it not work?
What am I missing here?
I don't think you're really missing anything!
Another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well (adding all intercepts and corresponding coefficients) so I cannot imagine how that could ever improve the model. The last observation is that a linear regression (the most typical approach) is using sum of squared residuals as a loss function - the same one that GB is using.
Seems to me that you nailed it right there, and gave a short sketch of a proof that linear regression just beats boosting linear regressions in this setting.
To be pedantic, both methods are attempting to solve the following optimization problem
$$ \hat \beta = \text{argmin}_\beta (y - X \beta)^t (y - X \beta) $$
Linear regression just observes that you can solve it directly, by finding the solution to the linear equation
$$ X^t X \beta = X^t y $$
This automatically gives you the best possible value of $\beta$ out of all possibilities.
Boosting, whether your weak classifier is a one variable or multi variable regression, gives you a sequence of coefficient vectors $\beta_1, \beta_2, \ldots$. The final model prediction is, as you observe, a sum, and has the same functional form as the full linear regressor
$$ X \beta_1 + X \beta_2 + \cdots + X \beta_n = X (\beta_1 + \beta_2 + \cdots + \beta_n) $$
Each of these steps is chosen to further decrease the sum of squared errors. But we could have found the minimum possible sum of square errors within this functional form by just performing a full linear regression to begin with.
A possible defense of boosting in this situation could be the implicit regularization it provides. Possibly (I haven't played with this) you could use the early stopping feature of a gradient booster, along with a cross validation, to stop short of the full linear regression. This would provide a regularization to your regression, and possibly help with overfitting. This is not particularly practical, as one has very efficient and well understood options like ridge regression and the elastic net in this setting.
Boosting shines when there is no terse functional form around. Boosting decision trees lets the functional form of the regressor/classifier evolve slowly to fit the data, often resulting in complex shapes one could not have dreamed up by hand and eye. When a simple functional form is desired, boosting is not going to help you find it (or at least is probably a rather inefficient way to find it). | Gradient Boosting for Linear Regression - why does it not work?
What am I missing here?
I don't think you're really missing anything!
Another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well |
4,741 | Gradient Boosting for Linear Regression - why does it not work? | The least squares projection matrix is given by
$X(X^{T}X)^{-1}X^{T}$
We can use this to obtain our predicted values $\hat{y}$ directly, i.e.
$\hat{y} = X(X^{T}X)^{-1}X^{T}y $
Let's say you fit a regression and subsequently you calculate your residuals
$e = y - \hat{y} = y - X(X^{T}X)^{-1}X^{T}y $
And then you use this residual vector e as your new dependent variable in the next regression. Use the projection matrix again to directly calculate the predictions of this second regression, and call these new predictions $\hat{y}_{2}$:
$\hat{y}_{2} = X(X^{T}X)^{-1}X^{T}e \\
\quad = X(X^{T}X)^{-1}X^{T} (y - X(X^{T}X)^{-1}X^{T}y) \\
\quad = X(X^{T}X)^{-1}X^{T}y - X(X^{T}X)^{-1}X^{T}X(X^{T}X)^{-1}X^{T}y \\
\quad = X(X^{T}X)^{-1}X^{T}y - X(X^{T}X)^{-1}X^{T}y \\
\quad = 0 $
The reason for this is that, by construction, the residual vector e from the initial regression is orthogonal to the X space, i.e. $\hat{y}$ is an orthogonal projection of y onto the X space (you'll find nice pictures visualizing this in the literature).
This means the simple approach of fitting a regression and then fitting a new regression on the residuals of the first regression will not produce anything meaningful, because X is entirely uncorrelated with e.
I write this because you said there is not really a new line to fit, which corresponds to the derivation above.
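This is easy to verify numerically (plain R, toy data):
set.seed(1)
x <- rnorm(50); y <- 1 + 2 * x + rnorm(50)
e <- resid(lm(y ~ x))    # residuals of the first regression
coef(lm(e ~ x))          # both coefficients are zero up to machine precision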
$X(X^{T}X)^{-1}X^{T}$
We can use this to directly obtain our predicted values $\hat{y}$, e.g.
$\hat{y} = X(X^{T}X)^{-1}X^{T}y $
Let's say you fit a regr | Gradient Boosting for Linear Regression - why does it not work?
The least squares projection matrix is given by
$X(X^{T}X)^{-1}X^{T}$
We can use this to directly obtain our predicted values $\hat{y}$, e.g.
$\hat{y} = X(X^{T}X)^{-1}X^{T}y $
Let's say you fit a regression and subsequently you calculate your residuals
$e = y - \hat{y} = y - X(X^{T}X)^{-1}X^{T}y $
And then you use this residual vector e as your new dependent variable in the next regression. Use the projection matrix again to directly calculate the predictions of this second regression and call these new predictions $\hat{y}_{2}$ :
$\hat{y}_{2} = X(X^{T}X)^{-1}X^{T}e \\
\quad = X(X^{T}X)^{-1}X^{T} (y - X(X^{T}X)^{-1}X^{T}y) \\
\quad = X(X^{T}X)^{-1}X^{T}y - X(X^{T}X)^{-1}X^{T}X(X^{T}X)^{-1}X^{T}y \\
\quad = X(X^{T}X)^{-1}X^{T}y - X(X^{T}X)^{-1}X^{T}y \\
\quad = 0 $
A reason for this is that by construction the residual vector e from the initial regression is orthogonal to the X Space i. e. $\hat{y}$ is a orthogonal projection from y onto the X space (you'll find nice pictures visualizing this in the literature).
This means the simple approach of fitting a regression and then fitting a new regression on the residuals from the first regression will not result in anything senseful because X is entirely uncorrelated with e.
I write this because you said there is not really a new line to fit which corresponds to the derivations above. | Gradient Boosting for Linear Regression - why does it not work?
The least squares projection matrix is given by
$X(X^{T}X)^{-1}X^{T}$
We can use this to directly obtain our predicted values $\hat{y}$, e.g.
$\hat{y} = X(X^{T}X)^{-1}X^{T}y $
Let's say you fit a regr |
4,742 | Gradient Boosting for Linear Regression - why does it not work? | The OP is absolutely right. However, I came across an algorithm called RegBoost by Li et al. (2020) which attempts to adapt Linear Regression (LR) for use as the weak learner in Gradient Boosting by combining it with a non-linearity. This is done by constructing a decision tree of LR models based on the sign (+ve or -ve) of the error at each stage. They report results on par with Gradient Boosted Decision Trees (LightGBM) on 3 datasets, which seems promising. I haven't used/implemented it myself yet, but it seems interesting, so I will give a brief rundown of the core algorithm below.
Algorithm
Train an LR model on training data
Split the regression outputs into 2 categories: those above the target and those below the target
Train an LR model on each of these splits with the new target being the residual error from the previous model
Repeat steps 2 and 3 until a predetermined maximum depth is reached
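A toy sketch of the training recursion in R, based only on the steps listed above (my reading, not the authors' reference implementation; the learning rate, the KNN routing at prediction time, and the feature-selection step are all omitted):
regboost <- function(X, y, depth) {
  node <- list(fit = lm(y ~ ., data = X))       # steps 1/3: an LR model on this split
  if (depth > 0) {
    e <- y - predict(node$fit, X)               # the residual error becomes the new target
    pos <- e >= 0                               # step 2: split by the sign of the error
    if (any(pos) && any(!pos)) {
      node$up   <- regboost(X[pos, , drop = FALSE],  e[pos],  depth - 1)
      node$down <- regboost(X[!pos, , drop = FALSE], e[!pos], depth - 1)
    }
  }
  node
}
tree <- regboost(data.frame(x1 = rnorm(200)), rnorm(200), depth = 2)   # toy call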
Additional information
The two key hyperparameters for tuning are the maximum depth of the tree and the learning rate (excluded here for simplicity)
In order to determine the split when the target is not known (e.g. at inference time), the authors use a KNN model and assign a positive or negative class depending on the $k$ nearest neighbours from the training set at that particular split point
In order to reduce dimensionality, the authors run a stepwise regression as a feature-selection step prior to fitting each LR model. I personally think this step should not be part of the core algorithm. Feature selection may not be necessary, depending on your number of features, and you should be able to use any feature-selection technique you want without affecting the algorithm as depicted above.
4,743 | Do we have a problem of "pity upvotes"? | You could use a multistate model or Markov chain (the msm package in R is one way to fit these). You could then look to see if the transition probability from -1 to 0 is greater than from 0 to 1, 1 to 2, etc. You can also look at the average time at -1 compared to the others to see if it is shorter. | Do we have a problem of "pity upvotes"? | You could use a multistate model or Markov chain (the msm package in R is one way to fit these). You could then look to see if the transition probability from -1 to 0 is greater than from 0 to 1, 1 t | Do we have a problem of "pity upvotes"?
You could use a multistate model or Markov chain (the msm package in R is one way to fit these). You could then look to see if the transition probability from -1 to 0 is greater than from 0 to 1, 1 to 2, etc. You can also look at the average time at -1 compared to the others to see if it is shorter. | Do we have a problem of "pity upvotes"?
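A hedged sketch of how this could look with msm (the data frame votes and its columns are hypothetical; scores are recoded because msm expects states numbered from 1):
```r
library(msm)
# votes: one row per observed score, columns post (id), score (-1/0/1), time
votes$state <- votes$score + 2                           # states 1, 2, 3
Q0 <- rbind(c(0, 0.1, 0), c(0.1, 0, 0.1), c(0, 0.1, 0))  # +/-1 moves only
fit <- msm(state ~ time, subject = post, data = votes, qmatrix = Q0)
pmatrix.msm(fit, t = 1)  # compare P(-1 -> 0) with P(0 -> +1), etc.
sojourn.msm(fit)         # mean time spent in each state
```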
You could use a multistate model or Markov chain (the msm package in R is one way to fit these). You could then look to see if the transition probability from -1 to 0 is greater than from 0 to 1, 1 t |
4,744 | Do we have a problem of "pity upvotes"? | Conduct an experiment. Randomly downvote half of the new posts at a particular time every day. | Do we have a problem of "pity upvotes"? | Conduct an experiment. Randomly downvote half of the new posts at a particular time every day. | Do we have a problem of "pity upvotes"?
Conduct an experiment. Randomly downvote half of the new posts at a particular time every day. | Do we have a problem of "pity upvotes"?
Conduct an experiment. Randomly downvote half of the new posts at a particular time every day. |
4,745 | Do we have a problem of "pity upvotes"? | Summary of my answer. I like the Markov chain modeling but it misses the "temporal" aspect. On the other end, focusing on the temporal aspect (e.g. average time at $-1$) misses the "transition" aspect. I would go into the following general modelling (which, with suitable assumptions, can lead to a Markov process). Also there is a lot of "censored" statistics behind this problem (which is certainly a classical problem of software reliability?). The last equation of my answer gives the maximum likelihood estimator of the voting intensity (up with "+" and down with "-") for a given state of vote. As we can see from the equation, it is intermediate between the case when you only estimate transition probabilities and the case when you only measure time spent at a given state. Hope this helps.
General Modelling (to restate the question and assumptions).
Let $(VD_i)_{i\geq 1}$ and $(S_{i})_{i\geq 1}$ be random variables modelling respectively the voting dates and the associated vote sign (+1 for upvote, -1 for downvote). The voting process is simply
$$Y_{t}=Y^+_t-Y^-_t$$ where
$$Y^+_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=1} \;\text{ and } \;Y^-_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=-1}$$
The important quantity here is the intensity of the $\epsilon$-jump
$$\lambda^{\epsilon}_t=\lim_{dt\rightarrow 0} \frac{1}{dt} P(Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1|\mathcal{F}_t) $$
where $\epsilon$ can be $-$ or $+$ and $\mathcal{F}_t$ is a good filtration; in the general case, without other knowledge, it would be:
$$\mathcal{F}_t=\sigma \left (Y^+_t,Y^-_t,VD_1,\dots,VD_{Y^+_t+Y^-_t},S_{1},\dots,S_{Y^+_t+Y^-_t} \right )$$.
but along the lines of your question, I think you implicitly assume that
$$ P \left ( Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1 | \mathcal{F}_t \right )= P \left (Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1| Y_t \right ) $$
This means that for $\epsilon=+,-$ there exists a deterministic sequence $(\mu^{\epsilon}_i)_{i\in \mathbb{Z}}$ such that $\lambda^{\epsilon}_t=\mu^{\epsilon}_{Y_t}$.
Within this formalism, your question can be restated as: "is it likely that $ \mu^{+}_{-1} -\mu^{+}_{0}>0$?" (or at least, is the difference larger than a given threshold).
Under this assumption, it is easy to show that $Y_t$ is a homogeneous Markov process on $\mathbb{Z}$ with generator $Q$ given by
$$\forall i,j \in \mathbb{Z}\;\;\; Q_{i,i+1}=\mu^{+}_{i}\;\; Q_{i,i-1}=\mu^{-}_{i}\;\; Q_{ii}=-(\mu^{+}_{i}+\mu^{-}_{i}) \;\; Q_{ij}=0 \text{ if } |i-j|>1$$
Answering the question (through proposing a maximum likelihood estimator for the statistical problem)
From this reformulation, solving the problem is done by estimating $(\mu^{+}_i)$ and building a test upon its values. Let us fix and forget the $i$ index without loss of generality. Estimation of $\mu^+$ (and $\mu^-$) can be done upon the observation of
$(T^{1},\eta^1),\dots,(T^{p},\eta^p)$ where $T^j$ are the lengths of the $j^{th}$ of the $p$ periods spent in state $i$ (i.e. successive times with $Y_t=i$) and $\eta^j$ is $+1$ if the question was upvoted, $-1$ if it was downvoted and $0$ if it was the last state of observation.
If you forget the case with the last state of observation, the mentioned couples are iid from a distribution that depends on $\mu_i^+$ and $\mu_i^-$: they are distributed as $(\min(Exp(\mu_i^+),Exp(\mu_i^-)),\eta)$ (where $Exp$ denotes an exponentially distributed random variable and $\eta$ is $+1$ or $-1$ depending on which of the two realizes the minimum).
Then, you can use the following simple lemma (the proof is straightforward):
Lemma If $X_+\leadsto Exp(\mu_+)$ and $X_{-} \leadsto Exp(\mu_{-})$ then $T=\min(X_+,X_-)\leadsto Exp(\mu_++\mu_-)$ and $P(X_+<X_-)=\frac{\mu_+}{\mu_++\mu_-}$.
This implies that the density $f(t,\epsilon)$ of $(T,\eta)$ is given by:
$$ f(t,\epsilon)=g_{\mu_++\mu_-}(t)\left ( \frac{1(\epsilon=+1)\,\mu_++1(\epsilon=-1)\,\mu_-}{\mu_++\mu_-}\right )$$
where $g_a$ for $a>0$ is the density function of an exponential random variable with parameter $a$. From this expression, it is easy to derive the maximum likelihood estimator of $\mu_+$ and $\mu_-$:
$$(\hat{\mu}_+,\hat{\mu}_-)=\operatorname{argmin}\left\{ (\mu_-+\mu_+)\sum_{i=1}^p T^i - p_+ \ln \left (\mu_+\right ) - p_-\ln\left (\mu_-\right ) \right\}$$
where $p_-=|\{i:\eta^i=-1\}|$ and $p_+=|\{i:\eta^i=+1\}|$.
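Setting the derivatives of this objective to zero gives the closed forms $\hat{\mu}_+ = p_+/\sum_{i=1}^p T^i$ and $\hat{\mu}_- = p_-/\sum_{i=1}^p T^i$ (counts over total exposure), which a quick simulation can sanity-check (a minimal R sketch; values arbitrary):
```r
# Simulate competing exponential clocks and compare the closed-form MLE
# to the true intensities.
set.seed(7)
mu_plus <- 0.8; mu_minus <- 0.3; p <- 5000
x_plus <- rexp(p, mu_plus); x_minus <- rexp(p, mu_minus)
T_i <- pmin(x_plus, x_minus)
eta <- ifelse(x_plus < x_minus, 1, -1)
c(mu_plus_hat  = sum(eta == 1)  / sum(T_i),   # ~0.8
  mu_minus_hat = sum(eta == -1) / sum(T_i))   # ~0.3
```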
Comments for more advanced approaches
If you want to take into account cases where $i$ is the last observed state (certainly smarter, because when you go through $-1$, it is often your last score...) you have to modify the reasoning a little bit. The corresponding censoring is relatively classical...
Possible other approaches may include the possibility of
Having an intensity that decreases with time
Having an intensity that decreases with the time spent since the last vote (I prefer this one. In this case there are classical ways of modelling how the density decreases...)
You may want to assume that $\mu_i^+$ is a smooth function of $i$
.... you can propose other ideas ! | Do we have a problem of "pity upvotes"? | Summary of my answer. I like the Markov chain modeling but it misses the "temporal" aspect. On the other end, focusing on the temporal aspect (e.g. average time at $-1$) misses the "transition" aspect | Do we have a problem of "pity upvotes"?
Summary of my answer. I like the Markov chain modeling but it misses the "temporal" aspect. On the other end, focusing on the temporal aspect (e.g. average time at $-1$) misses the "transition" aspect. I would go into the following general modelling (which, with suitable assumptions, can lead to a Markov process). Also there is a lot of "censored" statistics behind this problem (which is certainly a classical problem of software reliability?). The last equation of my answer gives the maximum likelihood estimator of the voting intensity (up with "+" and down with "-") for a given state of vote. As we can see from the equation, it is intermediate between the case when you only estimate transition probabilities and the case when you only measure time spent at a given state. Hope this helps.
General Modelling (to restate the question and assumptions).
Let $(VD_i)_{i\geq 1}$ and $(S_{i})_{i\geq 1}$ be random variables modelling respectively the voting dates and the associated vote sign (+1 for upvote, -1 for downvote). The voting process is simply
$$Y_{t}=Y^+_t-Y^-_t$$ where
$$Y^+_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=1} \;\text{ and } \;Y^-_t=\sum_{i=0}^{\infty}1_{VD_i\leq t,S_i=-1}$$
The important quantity here is the intensity of the $\epsilon$-jump
$$\lambda^{\epsilon}_t=\lim_{dt\rightarrow 0} \frac{1}{dt} P(Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1|\mathcal{F}_t) $$
where $\epsilon$ can be $-$ or $+$ and $\mathcal{F}_t$ is a good filtration; in the general case, without other knowledge, it would be:
$$\mathcal{F}_t=\sigma \left (Y^+_t,Y^-_t,VD_1,\dots,VD_{Y^+_t+Y^-_t},S_{1},\dots,S_{Y^+_t+Y^-_t} \right )$$.
but along the lines of your question, I think you implicitly assume that
$$ P \left ( Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1 | \mathcal{F}_t \right )= P \left (Y^{\epsilon}_{t+dt}-Y^{\epsilon}_t=1| Y_t \right ) $$
This means that for $\epsilon=+,-$ there exists a deterministic sequence $(\mu^{\epsilon}_i)_{i\in \mathbb{Z}}$ such that $\lambda^{\epsilon}_t=\mu^{\epsilon}_{Y_t}$.
Within this formalism, your question can be restated as: "is it likely that $ \mu^{+}_{-1} -\mu^{+}_{0}>0$?" (or at least, is the difference larger than a given threshold).
Under this assumption, it is easy to show that $Y_t$ is a homogeneous Markov process on $\mathbb{Z}$ with generator $Q$ given by
$$\forall i,j \in \mathbb{Z}\;\;\; Q_{i,i+1}=\mu^{+}_{i}\;\; Q_{i,i-1}=\mu^{-}_{i}\;\; Q_{ii}=-(\mu^{+}_{i}+\mu^{-}_{i}) \;\; Q_{ij}=0 \text{ if } |i-j|>1$$
Answering the question (through proposing a maximum likelihood estimator for the statistical problem)
From this reformulation, solving the problem is done by estimating $(\mu^{+}_i)$ and building a test upon its values. Let us fix and forget the $i$ index without loss of generality. Estimation of $\mu^+$ (and $\mu^-$) can be done upon the observation of
$(T^{1},\eta^1),\dots,(T^{p},\eta^p)$ where $T^j$ are the lengths of the $j^{th}$ of the $p$ periods spent in state $i$ (i.e. successive times with $Y_t=i$) and $\eta^j$ is $+1$ if the question was upvoted, $-1$ if it was downvoted and $0$ if it was the last state of observation.
If you forget the case with the last state of observation, the mentioned couples are iid from a distribution that depends on $\mu_i^+$ and $\mu_i^-$: they are distributed as $(\min(Exp(\mu_i^+),Exp(\mu_i^-)),\eta)$ (where $Exp$ denotes an exponentially distributed random variable and $\eta$ is $+1$ or $-1$ depending on which of the two realizes the minimum).
Then, you can use the following simple lemma (the proof is straightforward):
Lemma If $X_+\leadsto Exp(\mu_+)$ and $X_{-} \leadsto Exp(\mu_{-})$ then $T=\min(X_+,X_-)\leadsto Exp(\mu_++\mu_-)$ and $P(X_+<X_-)=\frac{\mu_+}{\mu_++\mu_-}$.
This implies that the density $f(t,\epsilon)$ of $(T,\eta)$ is given by:
$$ f(t,\epsilon)=g_{\mu_++\mu_-}(t)\left ( \frac{1(\epsilon=+1)\,\mu_++1(\epsilon=-1)\,\mu_-}{\mu_++\mu_-}\right )$$
where $g_a$ for $a>0$ is the density function of an exponential random variable with parameter $a$. From this expression, it is easy to derive the maximum likelihood estimator of $\mu_+$ and $\mu_-$:
$$(\hat{\mu}_+,\hat{\mu}_-)=\operatorname{argmin}\left\{ (\mu_-+\mu_+)\sum_{i=1}^p T^i - p_+ \ln \left (\mu_+\right ) - p_-\ln\left (\mu_-\right ) \right\}$$
where $p_-=|\{i:\eta^i=-1\}|$ and $p_+=|\{i:\eta^i=+1\}|$.
Comments for more advanced approaches
If you want to take into account cases where $i$ is the last observed state (certainly smarter, because when you go through $-1$, it is often your last score...) you have to modify the reasoning a little bit. The corresponding censoring is relatively classical...
Possible other approaches may include the possibility of
Having an intensity that decreases with time
Having an intensity that decreases with the time spent since the last vote (I prefer this one. In this case there are classical ways of modelling how the density decreases...)
You may want to assume that $\mu_i^+$ is a smooth function of $i$
.... you can propose other ideas ! | Do we have a problem of "pity upvotes"?
Summary of my answer. I like the Markov chain modeling but it misses the "temporal" aspect. On the other end, focusing on the temporal aspect (e.g. average time at $-1$) misses the "transition" aspect |
4,746 | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | "Is this a statement about the feature as a whole or about specific values within the feature?"
"Global" variable importance is the mean decrease of accuracy over all out-of-bag cross validated predictions, when a given variable is permuted after training, but before prediction. "Global" is implicit. Local variable importance is the mean decrease of accuracy by each individual out-of-bag cross validated prediction. Global variable importance is the most popular, as it is a single number per variable, easier to understand, and more robust as it is averaged over all predictions.
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model?"
train forest
measure out-of-bag CV accuracy → OOB_acc_base
permute variable i
measure out-of-bag CV accuracy → OOB_acc_perm_i
VI_i = - (OOB_acc_perm_i - OOB_acc_base)
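A minimal R sketch of this recipe using the randomForest package's built-in permutation importance (the iris example here is mine):
```r
library(randomForest)
set.seed(42)
rf <- randomForest(Species ~ ., data = iris, importance = TRUE)
importance(rf, type = 1)   # mean decrease in accuracy per variable
```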
-"Does this mean that removing Petal.Length from the model would only result in an additional misclassification of 8 or so observations on average?"
Yep. Both Petal.length and Petal.width alone have almost perfect linear separation. Thus the variables share redundant information, and permuting only one does not obstruct the model.
"How could the Mean Decrease in Accuracy for Petal.Length be so low, given that it's the highest in this measure, and thus the other variables have even lower values on this measure?"
When a robust/regularized model is trained on redundant variables, it is quite resistant to permutations in single variables.
Use variable importance mainly to rank the usefulness of your variables. A clear interpretation of the absolute values of variable importance is hard to do well.
GINI:
GINI importance measures the average gain of purity by splits of a given variable. If the variable is useful, it tends to split mixed labeled nodes into pure single class nodes. Splitting by a permuted variable tends neither to increase nor decrease node purities. Permuting a useful variable tends to give a relatively large decrease in mean gini-gain. GINI importance is closely related to the local decision function that random forest uses to select the best available split. Therefore, it does not take much extra time to compute. On the other hand, mean gini-gain in local splits is not necessarily what is most useful to measure, in contrast to the change of overall model performance. Gini importance is overall inferior to (permutation based) variable importance, as it is relatively more biased, more unstable and tends to answer a more indirect question. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | "Is this a statement about the feature as a whole or about specific values within the feature?"
"Global" variable importance is the mean decrease of accuracy over all out-of-bag cross validated predi | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
"Is this a statement about the feature as a whole or about specific values within the feature?"
"Global" variable importance is the mean decrease of accuracy over all out-of-bag cross validated predictions, when a given variable is permuted after training, but before prediction. "Global" is implicit. Local variable importance is the mean decrease of accuracy by each individual out-of-bag cross validated prediction. Global variable importance is the most popular, as it is a single number per variable, easier to understand, and more robust as it is averaged over all predictions.
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model?"
train forest
measure out-of-bag CV accuracy → OOB_acc_base
permute variable i
measure out-of-bag CV accuracy → OOB_acc_perm_i
VI_i = - (OOB_acc_perm_i - OOB_acc_base)
-"Does this mean that removing Petal.Length from the model would only result in an additional misclassification of 8 or so observations on average?"
Yep. Both Petal.length and Petal.width alone have almost perfect linear separation. Thus the variables share redundant information, and permuting only one does not obstruct the model.
"How could the Mean Decrease in Accuracy for Petal.Length be so low, given that it's the highest in this measure, and thus the other variables have even lower values on this measure?"
When a robust/regularized model is trained on redundant variables, it is quite resistant to permutations in single variables.
Use variable importance mainly to rank the usefulness of your variables. A clear interpretation of the absolute values of variable importance is hard to do well.
GINI:
GINI importance measures the average gain of purity by splits of a given variable. If the variable is useful, it tends to split mixed labeled nodes into pure single class nodes. Splitting by a permuted variable tends neither to increase nor decrease node purities. Permuting a useful variable tends to give a relatively large decrease in mean gini-gain. GINI importance is closely related to the local decision function that random forest uses to select the best available split. Therefore, it does not take much extra time to compute. On the other hand, mean gini-gain in local splits is not necessarily what is most useful to measure, in contrast to the change of overall model performance. Gini importance is overall inferior to (permutation based) variable importance, as it is relatively more biased, more unstable and tends to answer a more indirect question. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
"Is this a statement about the feature as a whole or about specific values within the feature?"
"Global" variable importance is the mean decrease of accuracy over all out-of-bag cross validated predi |
4,747 | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | Here is the description of the mean decrease in accuracy (MDA) from the help manual of randomForest:
The first measure is computed from permuting OOB data: For each tree, the prediction error on the out-of-bag portion of the data is recorded (error rate for classification, MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case).
According to the description, the "accuracy" in MDA actually refers to the accuracy of single tree models, regardless of the fact that we are more concerned with the error rate of the forest. So,
"Does this mean that removing Petal.Length from the model would only result in an additional misclassification of 8 or so observations on average?"
First, the MDA (scaled by default) as defined above is more like a test statistic:
$$
\frac{\text{Mean(Decreases in Accuracy of Trees)}} {\text{StandardDeviation(Decreases in Accuracy of Trees)}}
$$
The scale is neither a percentage nor a count of observations.
Second, even the unscaled MDA, i.e. $\text{Mean(Decreases in Accuracy of Trees)}$, doesn't tell anything about the accuracy of the forest model (trees as a whole by voting).
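If the randomForest package is what you are using, both versions are accessible (a hedged sketch; the iris example is mine):
```r
library(randomForest)
set.seed(42)
rf <- randomForest(Species ~ ., data = iris, importance = TRUE)
importance(rf, type = 1)                 # scaled: mean / sd over trees
importance(rf, type = 1, scale = FALSE)  # unscaled mean decrease
```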
In summary, the MDA output by the randomForest package is neither an error rate nor an error count, but is better interpreted as a test statistic for the hypothesis test:
$$
H_0: \text{Nodes constructed by predictor } i \text{ are useless in any single tree}
$$
versus
$$
H_1: \text{Nodes constructed by predictor } i \text{ are useful}
$$
As a remark, the MDA procedure described by Soren is different from the implementation of randomForest package. It is closer to what we desire from an MDA: the accuracy decrease of the whole forest model. However, the model will probably be fitted differently without Petal.Length and rely more on other predictors. Thus Soren's MDA would be too pessimistic. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | Here is the description of the mean decrease in accuracy (MDA) from the help manual of randomForest:
The first measure is computed from permuting OOB data: For each tree, the prediction error on the | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
Here is the description of the mean decrease in accuracy (MDA) from the help manual of randomForest:
The first measure is computed from permuting OOB data: For each tree, the prediction error on the out-of-bag portion of the data is recorded (error rate for classification, MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case).
According to the description, the "accuracy" in MDA actually refers to the accuracy of single tree models, regardless of the fact that we are more concerned with the error rate of the forest. So,
"Does this mean that removing Petal.Length from the model would only result in an additional misclassification of 8 or so observations on average?"
First, the MDA (scaled by default) as defined above is more like a test statistic:
$$
\frac{\text{Mean(Decreases in Accuracy of Trees)}} {\text{StandardDeviation(Decreases in Accuracy of Trees)}}
$$
The scale is neither a percentage nor a count of observations.
Second, even the unscaled MDA, i.e. $\text{Mean(Decreases in Accuracy of Trees)}$, doesn't tell anything about the accuracy of the forest model (trees as a whole by voting).
In summary, the MDA output by the randomForest package is neither an error rate nor an error count, but is better interpreted as a test statistic for the hypothesis test:
$$
H_0: \text{Nodes constructed by predictor } i \text{ are useless in any single tree}
$$
versus
$$
H_1: \text{Nodes constructed by predictor } i \text{ are useful}
$$
As a remark, the MDA procedure described by Soren is different from the implementation of randomForest package. It is closer to what we desire from an MDA: the accuracy decrease of the whole forest model. However, the model will probably be fitted differently without Petal.Length and rely more on other predictors. Thus Soren's MDA would be too pessimistic. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
Here is the description of the mean decrease in accuracy (MDA) from the help manual of randomForest:
The first measure is computed from permuting OOB data: For each tree, the prediction error on the |
4,748 | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | A recent blog post from a team at the University of San Francisco shows that default importance strategies in both R (randomForest) and Python (scikit) are unreliable in many data scenarios. Particularly, mean decrease in impurity importance metrics are biased when potential predictor variables vary in their scale of measurement or their number of categories.
The papers and blog post demonstrate how continuous and high cardinality variables are preferred in mean decrease in impurity importance rankings, even if they are equally uninformative compared to variables with fewer categories. The authors suggest using permutation importance instead of the default in these cases. If the predictor variables in your model are highly correlated, conditional permutation importance is suggested.
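A small, hedged R illustration of that bias (my own toy setup): a pure-noise factor with many levels can look important under the Gini measure while permutation importance keeps it near zero.
```r
library(randomForest)
set.seed(1)
d <- iris
d$noise <- factor(sample(letters[1:20], nrow(d), replace = TRUE))
rf <- randomForest(Species ~ ., data = d, importance = TRUE)
importance(rf, type = 2)  # Gini: the noise factor can rank deceptively high
importance(rf, type = 1)  # permutation: noise stays near zero
```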
The impurity measure is biased because each time a break point is selected in a variable, every level of the variable is tested to find the best break point. Continuous or high cardinality variables will have many more split points, which results in the “multiple testing” problem: there is a higher probability that the variable happens to predict the outcome well by chance, so variables where more splits are tried will appear more often in the tree. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | A recent blog post from a team at the University of San Francisco shows that default importance strategies in both R (randomForest) and Python (scikit) are unreliable in many data scenarios. Particula | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
A recent blog post from a team at the University of San Francisco shows that default importance strategies in both R (randomForest) and Python (scikit) are unreliable in many data scenarios. Particularly, mean decrease in impurity importance metrics are biased when potential predictor variables vary in their scale of measurement or their number of categories.
The papers and blog post demonstrate how continuous and high cardinality variables are preferred in mean decrease in impurity importance rankings, even if they are equally uninformative compared to variables with fewer categories. The authors suggest using permutation importance instead of the default in these cases. If the predictor variables in your model are highly correlated, conditional permutation importance is suggested.
The impurity measure is biased because each time a break point is selected in a variable, every level of the variable is tested to find the best break point. Continuous or high cardinality variables will have many more split points, which results in the “multiple testing” problem: there is a higher probability that the variable happens to predict the outcome well by chance, so variables where more splits are tried will appear more often in the tree. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
A recent blog post from a team at the University of San Francisco shows that default importance strategies in both R (randomForest) and Python (scikit) are unreliable in many data scenarios. Particula |
4,749 | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | Relative Mean Decrease Accuracy?
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model?"
train forest
measure out-of-bag CV accuracy → OOB_acc_base
permute variable i
measure out-of-bag CV accuracy → OOB_acc_perm_i
VI_i = - (OOB_acc_perm_i - OOB_acc_base)
So, Ugo Mela, if this shows an absolute count of incorrectly classified observations, then by dividing the Mean Decrease in Accuracy (MDA) by the total number of observations with that feature we can get a mean relative importance of each feature?
1000 observations of feature A;
MDA(feature A) = 500;
500/1000 * 100 = 50%.
With this we can see which features, even if present in low amounts, greatly improve the model. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models | Relative Mean Decrease Accuracy?
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
Relative Mean Decrease Accuracy?
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model?"
train forest
measure out-of-bag CV accuracy → OOB_acc_base
permute variable i
measure out-of-bag CV accuracy → OOB_acc_perm_i
VI_i = - (OOB_acc_perm_i - OOB_acc_base)
So, Ugo Mela, if this shows an absolute count of incorrectly classified observations, then by dividing the Mean Decrease in Accuracy (MDA) by the total number of observations with that feature we can get a mean relative importance of each feature?
1000 observations of feature A;
MDA(feature A) = 500;
500/1000 * 100 = 50%.
With this we can see which features, even if present in low amounts, greatly improve the model. | How to interpret Mean Decrease in Accuracy and Mean Decrease GINI in Random Forest models
Relative Mean Decrease Accuracy?
"In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the |
4,750 | Would PCA work for boolean (binary) data types? | I would like to suggest a relatively recent technique for automatic structure extraction from categorical variable data (this includes binary). The method is called CorEx, from Greg Ver Steeg at the University of Southern California. The idea is to use the notion of Total Correlation, based on entropy measures. It is appealing due to its simplicity and no tuning of a large number of hyperparameters.
The paper about hierarchical representations (the most recent; it builds on top of the previous measures):
http://arxiv.org/pdf/1410.7404.pdf | Would PCA work for boolean (binary) data types? | I would like to suggest you a relatively recent technique for automatic structure extraction from categorical variable data (this includes binary). The method is called CorEx from Greg van Steeg from | Would PCA work for boolean (binary) data types?
I would like to suggest a relatively recent technique for automatic structure extraction from categorical variable data (this includes binary). The method is called CorEx, from Greg Ver Steeg at the University of Southern California. The idea is to use the notion of Total Correlation, based on entropy measures. It is appealing due to its simplicity and no tuning of a large number of hyperparameters.
The paper about hierarchical representations (the most recent; it builds on top of the previous measures):
http://arxiv.org/pdf/1410.7404.pdf | Would PCA work for boolean (binary) data types?
I would like to suggest you a relatively recent technique for automatic structure extraction from categorical variable data (this includes binary). The method is called CorEx from Greg van Steeg from |
4,751 | Would PCA work for boolean (binary) data types? | You can also use Multiple Correspondence Analysis (MCA), which is an extension of principal component analysis when the variables to be analyzed are categorical instead of quantitative (which is the case here with your binary variables). See for instance Husson et al. (2010), or Abdi and Valentin (2007). An excellent R package to perform MCA (and hierarchical clustering on PCs) is FactoMineR. | Would PCA work for boolean (binary) data types? | You can also use Multiple Correspondence Analysis (MCA), which is an extension of principal component analysis when the variables to be analyzed are categorical instead of quantitative (which is the c | Would PCA work for boolean (binary) data types?
You can also use Multiple Correspondence Analysis (MCA), which is an extension of principal component analysis when the variables to be analyzed are categorical instead of quantitative (which is the case here with your binary variables). See for instance Husson et al. (2010), or Abdi and Valentin (2007). An excellent R package to perform MCA (and hierarchical clustering on PCs) is FactoMineR. | Would PCA work for boolean (binary) data types?
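A minimal FactoMineR sketch (the data frame d is hypothetical; MCA expects the columns to be factors):
```r
library(FactoMineR)
# d: data.frame whose columns are factors (binary variables coded as factors)
res <- MCA(d, graph = TRUE)
res$eig   # variance explained by each dimension
```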
You can also use Multiple Correspondence Analysis (MCA), which is an extension of principal component analysis when the variables to be analyzed are categorical instead of quantitative (which is the c |
4,752 | Would PCA work for boolean (binary) data types? | If you think of PCA as an exploratory technique to give you a way to visualise the relationships between variables (and in my opinion this is the only way to think about it) then yes, there is no reason why you can't put in binary variables. For example, here is a biplot of your data
It seems reasonably useful. For example, you can see that Doc and Bashful are very similar; that HR is rather unlike the three other variables; Sleepy and Sneezy are very dissimilar, etc. | Would PCA work for boolean (binary) data types? | If you think of PCA as an exploratory technique to give you a way to visualise the relationships between variables (and in my opinion this is the only way to think about it) then yes, there is no reas | Would PCA work for boolean (binary) data types?
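For reference, a biplot like the one described can be produced along these lines (a sketch; d stands in for the 0/1 table from the question):
```r
pca <- prcomp(d, scale. = TRUE)  # d: data frame of 0/1 columns
biplot(pca)                      # variables as arrows, observations as points
```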
If you think of PCA as an exploratory technique to give you a way to visualise the relationships between variables (and in my opinion this is the only way to think about it) then yes, there is no reason why you can't put in binary variables. For example, here is a biplot of your data
It seems reasonably useful. For example, you can see that Doc and Bashful are very similar; that HR is rather unlike the three other variables; Sleepy and Sneezy are very dissimilar, etc. | Would PCA work for boolean (binary) data types?
If you think of PCA as an exploratory technique to give you a way to visualise the relationships between variables (and in my opinion this is the only way to think about it) then yes, there is no reas |
4,753 | Would PCA work for boolean (binary) data types? | Although PCA is often used for binary data, it is argued that PCA assumptions are not appropriate for binary or count data (see e.g. Collins 2002 for an explanation), and generalizations exist: the strategy is similar in spirit to the development of generalized linear models to perform regression analysis for data belonging to the exponential family.
An implementation in R of different methods can be found in the logisticPCA package, with a tutorial on this page.
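A minimal sketch with that package (parameter values are illustrative only):
```r
library(logisticPCA)
# X: an n x p matrix of 0/1 values
fit <- logisticPCA(X, k = 2, m = 4)  # k components; m tunes the approximation
head(fit$PCs)                        # low-dimensional scores
```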
Ref. Collins, M., Dasgupta, S., & Schapire, R. E. (2002). A generalization of principal components analysis to the exponential family. In Advances in neural information processing systems (pp. 617-624). | Would PCA work for boolean (binary) data types? | Although PCA is often used for binary data, it is argued that PCA assumptions are not appropriate for binary or count data (see e.g. Collins 2002 for an explanation) and generalizations exists: the st | Would PCA work for boolean (binary) data types?
Although PCA is often used for binary data, it is argued that PCA assumptions are not appropriate for binary or count data (see e.g. Collins 2002 for an explanation), and generalizations exist: the strategy is similar in spirit to the development of generalized linear models to perform regression analysis for data belonging to the exponential family.
An implementation in R of different methods can be found in the logisticPCA package, with a tutorial on this page.
Ref. Collins, M., Dasgupta, S., & Schapire, R. E. (2002). A generalization of principal components analysis to the exponential family. In Advances in neural information processing systems (pp. 617-624). | Would PCA work for boolean (binary) data types?
Although PCA is often used for binary data, it is argued that PCA assumptions are not appropriate for binary or count data (see e.g. Collins 2002 for an explanation) and generalizations exists: the st |
4,754 | What does the term saturating nonlinearities mean? | Intuition
A saturating activation function squeezes the input.
Definitions
$f$ is non-saturating iff $ (|\lim_{z\to-\infty} f(z)| = +\infty) \vee (|\lim_{z\to+\infty} f(z)| = +\infty) $
$f$ is saturating iff $f$ is not non-saturating.
These definitions are not specific to convolutional neural networks.
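A tiny numeric illustration of these definitions (a sketch in R):
```r
sigmoid <- function(x) 1 / (1 + exp(-x))
relu    <- function(x) pmax(0, x)
sigmoid(c(-1e3, 1e3))  # ~0 and ~1: bounded in both directions, saturating
relu(c(-1e3, 1e3))     # 0 and 1000: unbounded above, non-saturating
```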
Examples
The Rectified Linear Unit (ReLU) activation function, which is defined as $f(x)=\max(0,x)$, is non-saturating because $\lim_{z\to+\infty} f(z) = +\infty$:
The sigmoid activation function, which is defined as $f(x) = \frac{1}{1 + e^{-x}}$, is saturating because it squashes real numbers into the range $[0,1]$:
The tanh (hyperbolic tangent) activation function is saturating as it squashes real numbers into the range $[-1,1]$:
(figures are from CS231n, MIT License) | What does the term saturating nonlinearities mean? | Intuition
A saturating activation function squeezes the input.
Definitions
$f$ is non-saturating iff $ (|\lim_{z\to-\infty} f(z)| = +\infty) \vee (|\lim_{z\to+\infty} f(z)| = +\infty) $
$f$ is satu | What does the term saturating nonlinearities mean?
Intuition
A saturating activation function squeezes the input.
Definitions
$f$ is non-saturating iff $ (|\lim_{z\to-\infty} f(z)| = +\infty) \vee (|\lim_{z\to+\infty} f(z)| = +\infty) $
$f$ is saturating iff $f$ is not non-saturating.
These definitions are not specific to convolutional neural networks.
Examples
The Rectified Linear Unit (ReLU) activation function, which is defined as $f(x)=\max(0,x)$, is non-saturating because $\lim_{z\to+\infty} f(z) = +\infty$:
The sigmoid activation function, which is defined as $f(x) = \frac{1}{1 + e^{-x}}$, is saturating because it squashes real numbers into the range $[0,1]$:
The tanh (hyperbolic tangent) activation function is saturating as it squashes real numbers into the range $[-1,1]$:
(figures are from CS231n, MIT License) | What does the term saturating nonlinearities mean?
Intuition
A saturating activation function squeezes the input.
Definitions
$f$ is non-saturating iff $ (|\lim_{z\to-\infty} f(z)| = +\infty) \vee (|\lim_{z\to+\infty} f(z)| = +\infty) $
$f$ is satu |
4,755 | What does the term saturating nonlinearities mean? | In the neural network context, the phenomenon of saturation refers to the state in which a neuron predominantly outputs values close to the asymptotic ends of the bounded activation function.
Measuring Saturation in Neural Networks (2015)
So, saturation refers to the behaviour of a neuron in a neural network after a given period of training or for a given range of input, and only neurons with bounded limits are susceptible to saturation (and, by extension, such functions are sometimes referred to as 'saturating' even if in a particular instance they have not 'saturated').
Saturating functions include:
Limited as $x$ approaches both $+\infty$ and $-\infty$: sigmoid, tanh
Limited only in one direction: $\max(x,c)$
Non-saturating functions include:
Unbounded functions: identity, $\sinh$, abs
Periodic functions: sin, cos
So in your example, a "non-saturating nonlinearity" means a "non-linear function with no limit as x approaches infinity". | What does the term saturating nonlinearities mean? | In the neural network context, the phenomenon of saturation refers to the state in which a neuron predominantly outputs values close to the asymptotic ends of the bounded activation function.
Measur | What does the term saturating nonlinearities mean?
In the neural network context, the phenomenon of saturation refers to the state in which a neuron predominantly outputs values close to the asymptotic ends of the bounded activation function.
Measuring Saturation in Neural Networks (2015)
So, saturation refers to the behaviour of a neuron in a neural network after a given period of training or for a given range of input, and only neurons with bounded limits are susceptible to saturation (and, by extension, such functions are sometimes referred to as 'saturating' even if in a particular instance they have not 'saturated').
Saturating functions include:
Limited as $x$ approaches both $+\infty$ and $-\infty$: sigmoid, tanh
Limited only in one direction: $\max(x,c)$
Non-saturating functions include:
Unbounded functions: identity, $\sinh$, abs
Periodic functions: sin, cos
So in your example, a "non-saturating nonlinearity" means a "non-linear function with no limit as x approaches infinity". | What does the term saturating nonlinearities mean?
In the neural network context, the phenomenon of saturation refers to the state in which a neuron predominantly outputs values close to the asymptotic ends of the bounded activation function.
Measur |
4,756 | What does the term saturating nonlinearities mean? | The most common activation functions are the logistic (LOG) and TanH functions. These functions have a compact range, meaning that they compress the neural response into a bounded subset of the real numbers. The LOG compresses inputs to outputs between 0 and 1, the TanH between -1 and 1. These functions display limiting behavior at the boundaries.
Near these boundaries the gradient of the output with respect to the input, $\partial y_j/\partial x_j$, is very small. So the gradient is small, hence the steps towards convergence are small, hence convergence takes longer. | What does the term saturating nonlinearities mean? | The most common activation functions are LOG and TanH. These functions have a compact range, meaning that they compress the neural response into a bounded subset of the real numbers. The LOG compresse | What does the term saturating nonlinearities mean?
The most common activation functions are the logistic (LOG) and TanH functions. These functions have a compact range, meaning that they compress the neural response into a bounded subset of the real numbers. The LOG compresses inputs to outputs between 0 and 1, the TanH between -1 and 1. These functions display limiting behavior at the boundaries.
Near these boundaries the gradient of the output with respect to the input, $\partial y_j/\partial x_j$, is very small. So the gradient is small, hence the steps towards convergence are small, hence convergence takes longer. | What does the term saturating nonlinearities mean?
The most common activation functions are LOG and TanH. These functions have a compact range, meaning that they compress the neural response into a bounded subset of the real numbers. The LOG compresse |
4,757 | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | There do not appear to be "standards". For example:
The Nature style guide refers to "P value"
This APA style guide refers to "p value"
The Blood style guide says:
Capitalize and italicize the P that introduces a P value
Italicize the p that represents the Spearman rank correlation test
Wikipedia uses "p-value" (with hyphen and italicized "p")
My brief, unscientific survey suggests that the most common combination is lower-case, italicized p without a hyphen. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | There do not appear to be "standards". For example:
The Nature style guide refers to "P value"
This APA style guide refers to "p value"
The Blood style guide says:
Capitalize and italicize the P t | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
There do not appear to be "standards". For example:
The Nature style guide refers to "P value"
This APA style guide refers to "p value"
The Blood style guide says:
Capitalize and italicize the P that introduces a P value
Italicize the p that represents the Spearman rank correlation test
Wikipedia uses "p-value" (with hyphen and italicized "p")
My brief, unscientific survey suggests that the most common combination is lower-case, italicized p without a hyphen. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
There do not appear to be "standards". For example:
The Nature style guide refers to "P value"
This APA style guide refers to "p value"
The Blood style guide says:
Capitalize and italicize the P t |
4,758 | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | This seems to be a style issue with different journals and publishers adopting different conventions (or allowing a mixed muddle of styles depending on authors' preferences). My own preference, for what it's worth, is p-value, hyphenated with no italics and no capitalization. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | This seems to be a style issue with different journals and publishers adopting different conventions (or allowing a mixed muddle of styles depending on authors' preferences). My own preference, for wh | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
This seems to be a style issue with different journals and publishers adopting different conventions (or allowing a mixed muddle of styles depending on authors' preferences). My own preference, for what it's worth, is p-value, hyphenated with no italics and no capitalization. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
This seems to be a style issue with different journals and publishers adopting different conventions (or allowing a mixed muddle of styles depending on authors' preferences). My own preference, for wh |
4,759 | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | The ASA House Style seems to recommend italicizing the p with hyphen: p-value. A google scholar search shows varied spellings. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | The ASA House Style seems to recommend italicizing the p with hyphen: p-value. A google scholar search shows varied spellings. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
The ASA House Style seems to recommend italicizing the p with hyphen: p-value. A google scholar search shows varied spellings. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
The ASA House Style seems to recommend italicizing the p with hyphen: p-value. A google scholar search shows varied spellings. |
4,760 | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | A P value, from a theoretical point of view, is a realization of a random variable.
There is some standard (in probability) to use upper case letters for random variables and lower case for realizations.
In table headers we should use P (maybe italicized); in text together with its value, p=0.0012; and in text describing, for example, methodology, p-value. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | P value from theoretical point of view is some realization of random variable.
There is some standard (in probability) to use upper case letters for random variables and lower case for realizations.
I | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
A P value, from a theoretical point of view, is a realization of a random variable.
There is some standard (in probability) to use upper case letters for random variables and lower case for realizations.
In table headers we should use P (maybe italicized); in text together with its value, p=0.0012; and in text describing, for example, methodology, p-value. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
P value from theoretical point of view is some realization of random variable.
There is some standard (in probability) to use upper case letters for random variables and lower case for realizations.
I |
4,761 | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | Omitting the hyphen can sometimes change the meaning of sentences or at least they can become ambiguous. This can occur especially in papers that describe statistical tests or introduce algorithms to evaluate p-values, but one may also describe methods that have nothing to do with statistics, and still calculate p values from t tests (but not the p-values using statistical t-tests). In this kind of context, the hyphens would really be necessary, even if writers usually try to avoid notations that could get easily confused.
Example (with a bad choice of notations): We would like to find a set of strong association patterns and evaluate the probability that the result would have occurred by chance. In the first phase, we search for the z best patterns with some goodness score. So, after the search phase, we will have z scores (but not the z-scores). Then we evaluate the best patterns with a randomization test. We generate t random data sets and evaluate the score of the z:th best pattern in each data set. So, we perform t tests (but not the t-tests) and output the score of the z:th best pattern. We find out that p values (but not the p-values) of all t score values are better than the original z:th best pattern had. Therefore, we could estimate that the probability of getting z so good patterns by chance is p/t. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | Omitting the hyphen can sometimes change the meaning of sentences or at least they can become ambiguous. This can occur especially in papers that describe statistical tests or introduce algorithms to | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
Omitting the hyphen can sometimes change the meaning of sentences or at least they can become ambiguous. This can occur especially in papers that describe statistical tests or introduce algorithms to evaluate p-values, but one may also describe methods that have nothing to do with statistics, and still calculate p values from t tests (but not the p-values using statistical t-tests). In this kind of context, the hyphens would really be necessary, even if writers usually try to avoid notations that could get easily confused.
Example (with a bad choice of notations): We would like to find a set of strong association patterns and evaluate the probability that the result would have occurred by chance. In the first phase, we search for the z best patterns with some goodness score. So, after the search phase, we will have z scores (but not the z-scores). Then we evaluate the best patterns with a randomization test. We generate t random data sets and evaluate the score of the z:th best pattern in each data set. So, we perform t tests (but not the t-tests) and output the score of the z:th best pattern. We find out that p values (but not the p-values) of all t score values are better than the original z:th best pattern had. Therefore, we could estimate that the probability of getting z so good patterns by chance is p/t. | Correct spelling (capitalization, italicization, hyphenation) of "p-value"?
Omitting the hyphen can sometimes change the meaning of sentences or at least they can become ambiguous. This can occur especially in papers that describe statistical tests or introduce algorithms to |
4,762 | What are posterior predictive checks and what makes them useful? | Posterior predictive checks are, in simple words, "simulating replicated data under the
fitted model and then comparing these to the observed data" (Gelman and Hill, 2007, p. 158). So, you use the posterior predictive distribution to "look for systematic discrepancies between real and simulated data" (Gelman et al. 2004, p. 169).
The argument about "using the data twice" is that you use your data for estimating the model and then for checking whether the model fits the data, while generally this is a bad idea and it would be better to validate your model on external data that was not used for estimation.
Posterior predictive checks are helpful in assessing whether your model gives you "valid" predictions about reality - do they fit the observed data or not? They are a helpful phase of model building and checking. They do not give you a definite answer as to whether your model is "ok" or whether it is "better" than another model; however, they can help you to check whether your model makes sense.
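For a concrete picture, here is a minimal R sketch of such a check for a simple normal model (everything here - the data and the way posterior draws are produced - is made up for illustration):
```r
set.seed(42)
y <- rnorm(50, 10, 2)                       # stand-in for observed data
# Pretend these are posterior draws of (mu, sigma) from a fitted model:
draws <- cbind(mu = rnorm(1000, mean(y), 0.3),
               sigma = abs(rnorm(1000, sd(y), 0.2)))
y_rep <- apply(draws, 1, function(d) rnorm(length(y), d["mu"], d["sigma"]))
T_obs <- max(y)                             # a discrepancy measure on y
T_rep <- apply(y_rep, 2, max)               # same measure on each y_rep
mean(T_rep >= T_obs)                        # posterior predictive p-value
```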
This is nicely described in LaplacesDemon vignette Bayesian Inference:
Comparing the predictive distribution $y^\text{rep}$ to the observed
data $y$ is generally termed a "posterior predictive check". This type
of check includes the uncertainty associated with the estimated
parameters of the model, unlike frequentist statistics.
Posterior predictive checks (via the predictive distribution) involve
a double-use of the data, which violates the likelihood principle.
However, arguments have been made in favor of posterior predictive
checks, provided that usage is limited to measures of discrepancy to
study model adequacy, not for model comparison and inference (Meng
1994).
Gelman recommends at the most basic level to compare $y^\text{rep}$ to $y$,
looking for any systematic differences, which could indicate potential
failings of the model (Gelman et al. 2004, p. 159). It is often first
recommended to compare graphical plots, such as the distribution of
$y$ and $y^\text{rep}$. | What are posterior predictive checks and what makes them useful? | Posterior predictive checks are, in simple words, "simulating replicated data under the
fitted model and then comparing these to the observed data" (Gelman and Hill, 2007, p. 158). So, you use posteri | What are posterior predictive checks and what makes them useful?
Posterior predictive checks are, in simple words, "simulating replicated data under the
fitted model and then comparing these to the observed data" (Gelman and Hill, 2007, p. 158). So, you use the posterior predictive distribution to "look for systematic discrepancies between real and simulated data" (Gelman et al. 2004, p. 169).
The argument about "using the data twice" is that you use your data for estimating the model and then for checking whether the model fits the data, while generally this is a bad idea and it would be better to validate your model on external data that was not used for estimation.
Posterior predictive checks are helpful in assessing whether your model gives you "valid" predictions about reality - do they fit the observed data or not? They are a helpful phase of model building and checking. They do not give you a definite answer as to whether your model is "ok" or whether it is "better" than another model; however, they can help you to check whether your model makes sense.
This is nicely described in LaplacesDemon vignette Bayesian Inference:
Comparing the predictive distribution $y^\text{rep}$ to the observed
data $y$ is generally termed a "posterior predictive check". This type
of check includes the uncertainty associated with the estimated
parameters of the model, unlike frequentist statistics.
Posterior predictive checks (via the predictive distribution) involve
a double-use of the data, which violates the likelihood principle.
However, arguments have been made in favor of posterior predictive
checks, provided that usage is limited to measures of discrepancy to
study model adequacy, not for model comparison and inference (Meng
1994).
Gelman recommends at the most basic level to compare $y^\text{rep}$ to $y$,
looking for any systematic differences, which could indicate potential
failings of the model (Gelman et al. 2004, p. 159). It is often first
recommended to compare graphical plots, such as the distribution of
$y$ and $y^\text{rep}$. | What are posterior predictive checks and what makes them useful?
Posterior predictive checks are, in simple words, "simulating replicated data under the
fitted model and then comparing these to the observed data" (Gelman and Hill, 2007, p. 158). So, you use posteri |
4,763 | Should you ever standardise binary variables? | A binary variable with values 0, 1 can (usually) be scaled to (value - mean) / SD, which is presumably your z-score.
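In R this is just (a trivial sketch):
```r
x <- c(0, 1, 1, 0, 1)
scale(x)   # (x - mean(x)) / sd(x); NaN if sd(x) is zero
```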
The most obvious constraint on that is that if you happen to get all zeros or all ones then plugging in SD blindly would mean that the z-score is undefined. There is a case for assigning zero too in so far as value - mean is identically zero. But many statistical things won't make much sense if a variable is really a constant. More generally, however, if the SD is small, there is more risk that scores are unstable and/or not well determined.
A problem over giving a better answer to your question is precisely what "machine learning algorithm" you are considering. It sounds as if it's an algorithm that combines data for several variables, and so it usually will make sense to supply them on similar scales.
(LATER) As the original poster adds comments one by one, their question is morphing. I still consider that (value - mean) / SD makes sense (i.e. is not nonsensical) for binary variables so long as the SD is positive. However, logistic regression was later named as the application and for this there is no theoretical or practical gain (and indeed some loss of simplicity) to anything other than feeding in binary variables as 0, 1. Your software should be able to cope well with that; if not, abandon that software in favour of a program that can. In terms of the title question: can, yes; should, no. | Should you ever standardise binary variables? | A binary variable with values 0, 1 can (usually) be scaled to (value - mean) / SD, which is presumably your z-score.
The most obvious constraint on that is that if you happen to get all zeros or all o | Should you ever standardise binary variables?
A binary variable with values 0, 1 can (usually) be scaled to (value - mean) / SD, which is presumably your z-score.
The most obvious constraint on that is that if you happen to get all zeros or all ones then plugging in SD blindly would mean that the z-score is undefined. There is a case for assigning zero too in so far as value - mean is identically zero. But many statistical things won't make much sense if a variable is really a constant. More generally, however, if the SD is small, there is more risk that scores are unstable and/or not well determined.
A problem over giving a better answer to your question is precisely what "machine learning algorithm" you are considering. It sounds as if it's an algorithm that combines data for several variables, and so it usually will make sense to supply them on similar scales.
(LATER) As the original poster adds comments one by one, their question is morphing. I still consider that (value - mean) / SD makes sense (i.e. is not nonsensical) for binary variables so long as the SD is positive. However, logistic regression was later named as the application and for this there is no theoretical or practical gain (and indeed some loss of simplicity) to anything other than feeding in binary variables as 0, 1. Your software should be able to cope well with that; if not, abandon that software in favour of a program that can. In terms of the title question: can, yes; should, no. | Should you ever standardise binary variables?
A binary variable with values 0, 1 can (usually) be scaled to (value - mean) / SD, which is presumably your z-score.
The most obvious constraint on that is that if you happen to get all zeros or all o |
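As a minimal illustration of the scaling described above, with a guard for the zero-SD case the answer warns about (numpy assumed available):

import numpy as np

x = np.array([0, 0, 1, 1, 1])  # a 0/1 variable
sd = x.std(ddof=1)
# the z-score only makes sense when the SD is positive
z = (x - x.mean()) / sd if sd > 0 else np.zeros(len(x))
print(z)  # one z value per category: all 0s map to one score, all 1s to another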
4,764 | Should you ever standardise binary variables? | Standardizing binary variables does not make any sense. The values are arbitrary; they don't mean anything in and of themselves. There may be a rationale for choosing some values like 0 & 1, with respect to numerical stability issues, but that's it. | Should you ever standardise binary variables? | Standardizing binary variables does not make any sense. The values are arbitrary; they don't mean anything in and of themselves. There may be a rationale for choosing some values like 0 & 1, with re | Should you ever standardise binary variables?
Standardizing binary variables does not make any sense. The values are arbitrary; they don't mean anything in and of themselves. There may be a rationale for choosing some values like 0 & 1, with respect to numerical stability issues, but that's it. | Should you ever standardise binary variables?
Standardizing binary variables does not make any sense. The values are arbitrary; they don't mean anything in and of themselves. There may be a rationale for choosing some values like 0 & 1, with re |
4,765 | Should you ever standardise binary variables? | One nice example where it can be useful to standardize in a slightly different way is given in section 4.2
of Gelman and Hill (http://www.stat.columbia.edu/~gelman/arm/). This is mostly when the interpretation of the coefficients is of interest, and perhaps when there are not many predictors.
There, they standardize a binary variable (with equal proportion of 0 and 1) by
$$
\frac{x-\mu_x}{2\sigma_x},
$$
instead of the usual $\sigma$. Then the standardized variable takes on the values $\pm 0.5$, so the coefficient reflects a comparison between $x=0$ and $x=1$ directly. If scaled by $\sigma$ instead, the coefficient would correspond to half the difference between the possible values of $x$. | Should you ever standardise binary variables? | One nice example where it can be useful to standardize in a slightly different way is given in section 4.2
of Gelman and Hill (http://www.stat.columbia.edu/~gelman/arm/). This is mostly when the inter | Should you ever standardise binary variables?
One nice example where it can be useful to standardize in a slightly different way is given in section 4.2
of Gelman and Hill (http://www.stat.columbia.edu/~gelman/arm/). This is mostly when the interpretation of the coefficients is of interest, and perhaps when there are not many predictors.
There, they standardize a binary variable (with equal proportion of 0 and 1) by
$$
\frac{x-\mu_x}{2\sigma_x},
$$
instead of the usual $\sigma$. Then the standardized variable takes on the values $\pm 0.5$, so the coefficient reflects a comparison between $x=0$ and $x=1$ directly. If scaled by $\sigma$ instead, the coefficient would correspond to half the difference between the possible values of $x$. | Should you ever standardise binary variables?
One nice example where it can be useful to standardize in a slightly different way is given in section 4.2
of Gelman and Hill (http://www.stat.columbia.edu/~gelman/arm/). This is mostly when the inter |
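A small numeric sketch of the $2\sigma$ rescaling above (illustrative values only): with equal proportions of 0 and 1, the rescaled variable lands exactly on $\pm 0.5$.

import numpy as np

x = np.array([0] * 50 + [1] * 50)       # equal proportions of 0 and 1
x_2sd = (x - x.mean()) / (2 * x.std())  # Gelman & Hill rescaling by 2 SD
print(np.unique(x_2sd))                 # [-0.5  0.5]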
4,766 | Should you ever standardise binary variables? | What do you want to standardize, a binary random variable, or a proportion?
It makes no sense to standardize a binary random variable. A random variable is a function that assigns a real value to an event, $Y:S\rightarrow \mathbb{R}$. In this case, 0 for failure and 1 for success, i.e. $Y\in \lbrace 0,1\rbrace$.
In the case of a proportion, this is not a binary random variable, this is a continuous variable $X\in[0,1]$, $x\in \mathbb{R}^+$. | Should you ever standardise binary variables? | What do you want to standardize, a binary random variable, or a proportion?
It makes no sense to standardize a binary random variable. A random variable is a function that assigns a real value to an e | Should you ever standardise binary variables?
What do you want to standardize, a binary random variable, or a proportion?
It makes no sense to standardize a binary random variable. A random variable is a function that assigns a real value to an event, $Y:S\rightarrow \mathbb{R}$. In this case, 0 for failure and 1 for success, i.e. $Y\in \lbrace 0,1\rbrace$.
In the case of a proportion, this is not a binary random variable, this is a continuous variable $X\in[0,1]$, $x\in \mathbb{R}^+$. | Should you ever standardise binary variables?
What do you want to standardize, a binary random variable, or a proportion?
It makes no sense to standardize a binary random variable. A random variable is a function that assigns a real value to an e |
4,767 | Should you ever standardise binary variables? | In logistic regression, binary variables may be standardised for combining them with continuous variables when you want to give all of them a non-informative prior such as N(0,5) or Cauchy(0,5).
The standardisation is advised to be as follows:
Take the total count and give
1 = proportion of 1's
0 = 1 - proportion of 1's.
-----
Edit: Actually I was not right at all: it is not a standardisation but a shift, so that the variable is centered at 0 and its two values differ by 1. Let's say that a population is 30% with company A and 70% other; we can define a centered "Company A" variable to take on the values -0.3 and 0.7. | Should you ever standardise binary variables? | In logistic regression, binary variables may be standardised for combining them with continuous variables when you want to give all of them a non-informative prior such as N(0,5) or Cauchy(0,5).
The stan | Should you ever standardise binary variables?
In logistic regression, binary variables may be standardised for combining them with continuous variables when you want to give all of them a non-informative prior such as N(0,5) or Cauchy(0,5).
The standardisation is advised to be as follows:
Take the total count and give
1 = proportion of 1's
0 = 1 - proportion of 1's.
-----
Edit: Actually I was not right at all: it is not a standardisation but a shift, so that the variable is centered at 0 and its two values differ by 1. Let's say that a population is 30% with company A and 70% other; we can define a centered "Company A" variable to take on the values -0.3 and 0.7. | Should you ever standardise binary variables?
In logistic regression, binary variables may be standardised for combining them with continuous variables when you want to give all of them a non-informative prior such as N(0,5) or Cauchy(0,5).
The stan |
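A quick numeric check of the Company A example above (the 30%/70% split is the one given in the answer):

import numpy as np

x = np.array([1] * 30 + [0] * 70)  # 30% Company A, 70% other
centered = x - x.mean()
print(np.unique(centered))         # [-0.3  0.7]
print(centered.mean())             # 0.0: the shifted variable is centered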
4,768 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | One big difference is that regression "controls for" those characteristics in a linear fashion. Matching by propensity scores eliminates the linearity assumption, but, as some observations may not be matched, you may not be able to say anything about certain groups.
For example, if you are studying a worker training program, you may have all the enrollees be men, while the control, non-participant population is composed of men and women. Using regression, you could regress income, say, on a participation indicator variable and a male indicator. You would use all your data and could estimate the income of a female had she participated in the program.
If you were doing matching, you could only match men to men. As a result, you wouldn't be using any women in your analysis and your results wouldn't pertain to them.
Regression can extrapolate using the linearity assumption, but matching cannot. All the other assumptions are essentially the same between regression and matching. The benefit of matching over regression is that it is non-parametric (except you do have to assume that you have the right propensity score, if that is how you are doing your matching).
For more discussion, see my page here for a course that was heavily focused on matching methods. See especially Causal Effects Estimation Strategy Assumptions.
Also, be sure to check out the Rosenbaum and Rubin (1983) article that outlines propensity score matching.
Lastly, matching has come a long way since 1983. Check out Jas Sekhon's webpage to learn about his genetic matching algorithm. | How are propensity scores different from adding covariates in a regression, and when are they prefer | One big difference is that regression "controls for" those characteristics in a linear fashion. Matching by propensity scores eliminates the linearity assumption, but, as some observations may not be | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
One big difference is that regression "controls for" those characteristics in a linear fashion. Matching by propensity scores eliminates the linearity assumption, but, as some observations may not be matched, you may not be able to say anything about certain groups.
For example, if you are studying a worker training program, you may have all the enrollees be men, while the control, non-participant population is composed of men and women. Using regression, you could regress income, say, on a participation indicator variable and a male indicator. You would use all your data and could estimate the income of a female had she participated in the program.
If you were doing matching, you could only match men to men. As a result, you wouldn't be using any women in your analysis and your results wouldn't pertain to them.
Regression can extrapolate using the linearity assumption, but matching cannot. All the other assumptions are essentially the same between regression and matching. The benefit of matching over regression is that it is non-parametric (except you do have to assume that you have the right propensity score, if that is how you are doing your matching).
For more discussion, see my page here for a course that was heavily focused on matching methods. See especially Causal Effects Estimation Strategy Assumptions.
Also, be sure to check out the Rosenbaum and Rubin (1983) article that outlines propensity score matching.
Lastly, matching has come a long way since 1983. Check out Jas Sekhon's webpage to learn about his genetic matching algorithm. | How are propensity scores different from adding covariates in a regression, and when are they prefer
One big difference is that regression "controls for" those characteristics in a linear fashion. Matching by propensity scores eliminates the linearity assumption, but, as some observations may not be |
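For readers who want to see the two-step logic in code, here is a hedged sketch of propensity score estimation plus nearest-neighbour matching on simulated data; the data-generating numbers and the one-to-one matching rule (with replacement, no caliper) are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
age = rng.normal(40, 10, n)
male = rng.integers(0, 2, n)
# treatment assignment depends on the covariates (confounding by design)
treat = rng.random(n) < 1 / (1 + np.exp(-(-3 + 0.05 * age + 1.0 * male)))

# Step 1: estimate the propensity score from the covariates alone
X = np.column_stack([age, male])
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the closest score
t_idx, c_idx = np.where(treat)[0], np.where(~treat)[0]
matched = c_idx[np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]).argmin(axis=1)]
print("age, treated vs matched controls:", age[t_idx].mean(), age[matched].mean())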
4,769 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | The short answer is that propensity scores are not any better than the equivalent ANCOVA model, particularly with regard to causal interpretation.
Propensity scores are best understood as a data reduction method. They are an effective means to reduce many covariates into a single score that can be used to adjust an effect of interest for a set of variables. In doing so, you save degrees of freedom by adjusting for a single propensity score rather than multiple covariates. This presents a statistical advantage, certainly, but nothing more.
One question which may arise when using regression adjustment with
propensity scores is whether there is any gain in using the propensity
score rather than performing a regression adjustment with all of the
covariates used to estimate the propensity score included in the
model. Rosenbaum and Rubin showed that the "point estimate of the
treatment effect from an analysis of covariance adjustment for
multivariate X is equal to the estimate obtained from a univariate
covariance adjustment for the sample linear discriminant based on X,
whenever the same sample covariance matrix is used for both the
covariance adjustment and the discriminant analysis". Thus, the
results from both methods should lead to the same conclusions.
However, one advantage to performing the two-step procedure is that
one can fit a very complicated propensity score model with interactions
and higher order terms first. Since the goal of this propensity score
model is to obtain the best estimated probability of treatment
assignment, one is not concerned with over-parameterizing this model.
From:
PROPENSITY SCORE METHODS FOR BIAS REDUCTION IN THE COMPARISON OF A TREATMENT TO A NON-RANDOMIZED CONTROL GROUP
D'Agostino (quoting Rosenbaum and Rubin)
D’Agostino, R.B. 1998. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine 17: 2265–2281. | How are propensity scores different from adding covariates in a regression, and when are they prefer | The short answer is that propensity scores are not any better than the equivalent ANCOVA model, particularly with regard to causal interpretation.
Propensity scores are best understood as a data reduc | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
The short answer is that propensity scores are not any better than the equivalent ANCOVA model, particularly with regard to causal interpretation.
Propensity scores are best understood as a data reduction method. They are an effective means to reduce many covariates into a single score that can be used to adjust an effect of interest for a set of variables. In doing so, you save degrees of freedom by adjusting for a single propensity score rather than multiple covariates. This presents a statistical advantage, certainly, but nothing more.
One question which may arise when using regression adjustment with
propensity scores is whether there is any gain in using the propensity
score rather than performing a regression adjustment with all of the
covariates used to estimate the propensity score included in the
model. Rosenbaum and Rubin showed that the "point estimate of the
treatment effect from an analysis of covariance adjustment for
multivariate X is equal to the estimate obtained from a univariate
covariance adjustment for the sample linear discriminant based on X,
whenever the same sample covariance matrix is used for both the
covariance adjustment and the discriminant analysis". Thus, the
results from both methods should lead to the same conclusions.
However, one advantage to performing the two-step procedure is that
one can fit a very complicated propensity score model with interactions
and higher order terms first. Since the goal of this propensity score
model is to obtain the best estimated probability of treatment
assignment, one is not concerned with over-parameterizing this model.
From:
PROPENSITY SCORE METHODS FOR BIAS REDUCTION IN THE COMPARISON OF A TREATMENT TO A NON-RANDOMIZED CONTROL GROUP
D'Agostino (quoting Rosenbaum and Rubin)
D’Agostino, R.B. 1998. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Statistics in Medicine 17: 2265–2281. | How are propensity scores different from adding covariates in a regression, and when are they prefer
The short answer is that propensity scores are not any better than the equivalent ANCOVA model, particularly with regard to causal interpretation.
Propensity scores are best understood as a data reduc |
4,770 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | A likely obtuse reference, but if you by chance have access to it I would recommend reading this book chapter (Apel and Sweeten, 2010). It is aimed at social scientists and so perhaps not as mathematically rigorous as you seem to want, but it should go into enough depth to be more than a satisfactory answer to your question.
There are a few different ways people treat propensity scores that can result in different conclusions from simply including covariates in a regression model. When one matches scores one does not necessarily have common support for all observations (i.e. one has some observations that appear to never have the chance to be in the treatment group, and some that are always in the treatment group). Also one can weight observations in various ways that can result in different conclusions.
In addition to the answers here I would also suggest you check out the answers to the question chl cited. There is more substance behind propensity scores than simply a statistical trick to achieve covariate balance. If you read and understand the highly cited articles by Rosenbaum and Rubin it will be more clear why the approach is different than simply adding in covariates in a regression model. I think a more satisfactory answer to your question is not necessarily in the math behind propensity scores but in their logic. | How are propensity scores different from adding covariates in a regression, and when are they prefer | A likely obtuse reference, but if you by chance have access to it I would recommend reading this book chapter (Apel and Sweeten, 2010). It is aimed at social scientists and so perhaps not as mathemati | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
A likely obtuse reference, but if you by chance have access to it I would recommend reading this book chapter (Apel and Sweeten, 2010). It is aimed at social scientists and so perhaps not as mathematically rigorous as you seem to want, but it should go into enough depth to be more than a satisfactory answer to your question.
There are a few different ways people treat propensity scores that can result in different conclusions from simply including covariates in a regression model. When one matches scores one does not necessarily have common support for all observations (i.e. one has some observations that appear to never have the chance to be in the treatment group, and some that are always in the treatment group). Also one can weight observations in various ways that can result in different conclusions.
In addition to the answers here I would also suggest you check out the answers to the question chl cited. There is more substance behind propensity scores than simply a statistical trick to achieve covariate balance. If you read and understand the highly cited articles by Rosenbaum and Rubin it will be more clear why the approach is different than simply adding in covariates in a regression model. I think a more satisfactory answer to your question is not necessarily in the math behind propensity scores but in their logic. | How are propensity scores different from adding covariates in a regression, and when are they prefer
A likely obtuse reference, but if you by chance have access to it I would recommend reading this book chapter (Apel and Sweeten, 2010). It is aimed at social scientists and so perhaps not as mathemati |
4,771 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | I like to think of PS as a design portion of the study which is completely separated from the analysis. That is, you might want to think in terms of design (PS) and analysis (regression etc...). Also, PS provides a means of supporting exchangeability for binary treatment; maybe others can comment on whether including the covariates in the outcome model can actually support exchangeability, or whether one assumes exchangeability prior to including the covariates in the outcome model. | How are propensity scores different from adding covariates in a regression, and when are they prefer | I like to think of PS as a design portion of the study which is completely separated from the analysis. That is, you might want to think in terms of design (PS) and analysis (regression etc...). Also, PS | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
I like to think of PS as a design portion of the study which is completely separated from the analysis. That is, you might want to think in terms of design (PS) and analysis (regression etc...). Also, PS provides a means of supporting exchangeability for binary treatment; maybe others can comment on whether including the covariates in the outcome model can actually support exchangeability, or whether one assumes exchangeability prior to including the covariates in the outcome model. | How are propensity scores different from adding covariates in a regression, and when are they prefer
I like to think of PS as a design portion of the study which is completely separated from the analysis. That is, you might want to think in terms of design (PS) and analysis (regression etc...). Also, PS
4,772 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | Like the person who asked the question, I am relatively new to propensity score analysis. However, my scientific collaborator has deep knowledge and expertise in biostatistics and clinical trial analysis, so I posed this question to him. His answer provides additional insight beyond what was already posted:
When using observational data to estimate the causal effect of a primary exposure (e.g. a drug treatment or intervention) on a health outcome, it is important to balance the treatment and control groups for any other exposures. This balancing attempts to mimic the effect of randomization, which is not possible with observational data. Some exposures are associated with both the treatment and the outcome and thus, meet the definition of confounders. Confounders can be adjusted in multivariable regression models of the health outcome. However, other exposures are associated with the treatment, but NOT the health outcome, and cannot be adjusted in regression models of the outcome. Systematic differences between the treatment and control groups can lead to treatment selection bias.
The proper way to balance the exposures between the treatment and control groups is to use propensity score matching, without consideration of the health outcome variable. Propensity score matching balances the exposures between the treatment and control groups above and beyond what can be accomplished by multivariable regression modeling of the health outcome.
Here is a helpful tutorial, with references, on propensity score analysis from Columbia University's Mailman School of Public Health: https://www.publichealth.columbia.edu/research/population-health-methods/propensity-score-analysis.
Best wishes,
Dave | How are propensity scores different from adding covariates in a regression, and when are they prefer | Like the person who asked the question, I am relatively new to propensity score analysis. However, my scientific collaborator has deep knowledge and expertise in biostatistics and clinical trial anal | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
Like the person who asked the question, I am relatively new to propensity score analysis. However, my scientific collaborator has deep knowledge and expertise in biostatistics and clinical trial analysis, so I posed this question to him. His answer provides additional insight beyond what was already posted:
When using observational data to estimate the causal effect of a primary exposure (e.g. a drug treatment or intervention) on a health outcome, it is important to balance the treatment and control groups for any other exposures. This balancing attempts to mimic the effect of randomization, which is not possible with observational data. Some exposures are associated with both the treatment and the outcome and thus, meet the definition of confounders. Confounders can be adjusted in multivariable regression models of the health outcome. However, other exposures are associated with the treatment, but NOT the health outcome, and cannot be adjusted in regression models of the outcome. Systematic differences between the treatment and control groups can lead to treatment selection bias.
The proper way to balance the exposures between the treatment and control groups is to use propensity score matching, without consideration of the health outcome variable. Propensity score matching balances the exposures between the treatment and control groups above and beyond what can be accomplished by multivariable regression modeling of the health outcome.
Here is a helpful tutorial, with references, on propensity score analysis from Columbia University's Mailman School of Public Health: https://www.publichealth.columbia.edu/research/population-health-methods/propensity-score-analysis.
Best wishes,
Dave | How are propensity scores different from adding covariates in a regression, and when are they prefer
Like the person who asked the question, I am relatively new to propensity score analysis. However, my scientific collaborator has deep knowledge and expertise in biostatistics and clinical trial anal |
4,773 | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter? | Stat Methods Med Res. 2016 Apr 19.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model. | How are propensity scores different from adding covariates in a regression, and when are they prefer | Stat Methods Med Res. 2016 Apr 19.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Propensity score methods are commonly used to adjust for observed confounding when | How are propensity scores different from adding covariates in a regression, and when are they preferred to the latter?
Stat Methods Med Res. 2016 Apr 19.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model. | How are propensity scores different from adding covariates in a regression, and when are they prefer
Stat Methods Med Res. 2016 Apr 19.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Propensity score methods are commonly used to adjust for observed confounding when |
4,774 | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | If $X$ and $Y$ are uncorrelated random variables with equal variance $\sigma^2$, then we have that
$$\begin{align}
\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(-Y)\\
&= \operatorname{var}(X) + \operatorname{var}(Y)\\
&=2\sigma^2,\\
\operatorname{cov}(X, X-Y) &= \operatorname{cov}(X,X) - \operatorname{cov}(X,Y)
& \text{bilinearity of covariance operator}\\
&= \operatorname{var}(X) - 0 & 0 ~\text{because}~X ~\text{and}~ Y ~\text{are
uncorrelated}\\
&= \sigma^2.
\end{align}$$
Consequently, $$\rho_{X,X-Y} = \frac{\operatorname{cov}(X, X-Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(X-Y)}}= \frac{\sigma^2}{\sqrt{\sigma^2\cdot2\sigma^2}} = \frac{1}{\sqrt{2}}.$$
So, when you find
$$\frac{\sum_{i=1}^n\left(x_i - \bar{x}\right)
\left((x_i-y_i) - (\bar{x}-\bar{y})\right)}{
\sqrt{\sum_{i=1}^n\left(x_i - \bar{x}\right)^2
\sum_{i=1}^n\left((x_i-y_i) - (\bar{x}-\bar{y})\right)^2}} $$
the sample correlation of $x$ and $x-y$ for a large data set $\{(x_i,y_i)\colon 1 \leq i \leq n\}$ drawn from a population with these properties,
which includes "random numbers" as a special case, the result tends to
be close to the population correlation value $\frac{1}{\sqrt{2}} \approx 0.7071\ldots$ | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | If $X$ and $Y$ are uncorrelated random variables with equal variance $\sigma^2$, then we have that
$$\begin{align}
\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(-Y)\\
&= \opera | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
If $X$ and $Y$ are uncorrelated random variables with equal variance $\sigma^2$, then we have that
$$\begin{align}
\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(-Y)\\
&= \operatorname{var}(X) + \operatorname{var}(Y)\\
&=2\sigma^2,\\
\operatorname{cov}(X, X-Y) &= \operatorname{cov}(X,X) - \operatorname{cov}(X,Y)
& \text{bilinearity of covariance operator}\\
&= \operatorname{var}(X) - 0 & 0 ~\text{because}~X ~\text{and}~ Y ~\text{are
uncorrelated}\\
&= \sigma^2.
\end{align}$$
Consequently, $$\rho_{X,X-Y} = \frac{\operatorname{cov}(X, X-Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(X-Y)}}= \frac{\sigma^2}{\sqrt{\sigma^2\cdot2\sigma^2}} = \frac{1}{\sqrt{2}}.$$
So, when you find
$$\frac{\sum_{i=1}^n\left(x_i - \bar{x}\right)
\left((x_i-y_i) - (\bar{x}-\bar{y})\right)}{
\sqrt{\sum_{i=1}^n\left(x_i - \bar{x}\right)^2
\sum_{i=1}^n\left((x_i-y_i) - (\bar{x}-\bar{y})\right)^2}} $$
the sample correlation of $x$ and $x-y$ for a large data set $\{(x_i,y_i)\colon 1 \leq i \leq n\}$ drawn from a population with these properties,
which includes "random numbers" as a special case, the result tends to
be close to the population correlation value $\frac{1}{\sqrt{2}} \approx 0.7071\ldots$ | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
If $X$ and $Y$ are uncorrelated random variables with equal variance $\sigma^2$, then we have that
$$\begin{align}
\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(-Y)\\
&= \opera |
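A two-line simulation confirms the value (any two independent, equal-variance variables will do; uniforms are used here just as an example):

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000)  # independent draws with equal variance
y = rng.random(100_000)
print(np.corrcoef(x, x - y)[0, 1])  # ~0.7071, i.e. 1/sqrt(2)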
4,775 | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | A geometrical-statistical explanation.
Imagine you make an "inside-out" scatterplot where the $n$ subjects are the axes and the $2$ variables $X$ and $Y$ are the points. This is called a subject space plot (as opposed to usual variable space plot). Because there is only 2 points to plot, all dimensions in such a space except just any two arbitrary dimensions that are able to support the 2 points plus the origin, are redundant and can be safely dropped. And so we are left with a plane. We draw vector arrows from the origin to the points: these are our variables $X$ and $Y$ as vectors in the subject space of the data.
Now, if the variables were centered then, in a subject space, the cosine of the angle between their vectors is their correlation coefficient. On the pic below $X$ and $Y$ vectors are orthogonal: their $r=0$. Uncorrelatedness was a prerequisite outlined by @Dilip in their answer.
Also for variables centered, their vector lengths in a subject space are their standard deviations. On the pic, $X$ and $Y$ are of equal length, - equal variances was also a prerequisite made by @Dilip.
To draw the variable $X-Y$ or variable $X+Y$ we just use vector addition or subtraction that we've forgotten since school (move Y vector over to the end of X vector and invert direction in case of subtraction, - this is shown by grey arrows on the pic, - then draw a vector to where the grey arrow points).
It becomes very clear that the length of $X-Y$ or $X+Y$ vectors (the standard deviation of these variables) is, by Pythagorean theorem, $\sqrt{2\sigma^2}$, and the angle between $X$ and $X-Y$ or $X+Y$ is 45 degrees, which cosine - the correlation - is $0.707...$ | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | A geometrical-statistical explanation.
Imagine you make an "inside-out" scatterplot where the $n$ subjects are the axes and the $2$ variables $X$ and $Y$ are the points. This is called a subject space | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
A geometrical-statistical explanation.
Imagine you make an "inside-out" scatterplot where the $n$ subjects are the axes and the $2$ variables $X$ and $Y$ are the points. This is called a subject space plot (as opposed to usual variable space plot). Because there is only 2 points to plot, all dimensions in such a space except just any two arbitrary dimensions that are able to support the 2 points plus the origin, are redundant and can be safely dropped. And so we are left with a plane. We draw vector arrows from the origin to the points: these are our variables $X$ and $Y$ as vectors in the subject space of the data.
Now, if the variables were centered then, in a subject space, the cosine of the angle between their vectors is their correlation coefficient. On the pic below $X$ and $Y$ vectors are orthogonal: their $r=0$. Uncorrelatedness was a prerequisite outlined by @Dilip in their answer.
Also for variables centered, their vector lengths in a subject space are their standard deviations. On the pic, $X$ and $Y$ are of equal length, - equal variances was also a prerequisite made by @Dilip.
To draw the variable $X-Y$ or variable $X+Y$ we just use vector addition or subtraction that we've forgotten since school (move Y vector over to the end of X vector and invert direction in case of subtraction, - this is shown by grey arrows on the pic, - then draw a vector to where the grey arrow points).
It becomes very clear that the length of $X-Y$ or $X+Y$ vectors (the standard deviation of these variables) is, by Pythagorean theorem, $\sqrt{2\sigma^2}$, and the angle between $X$ and $X-Y$ or $X+Y$ is 45 degrees, which cosine - the correlation - is $0.707...$ | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
A geometrical-statistical explanation.
Imagine you make an "inside-out" scatterplot where the $n$ subjects are the axes and the $2$ variables $X$ and $Y$ are the points. This is called a subject space |
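The 45-degree geometry can also be checked numerically; the sketch below builds centered data vectors and computes the cosine of the angle between $x$ and $x-y$ directly (simulated data, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000); x = x - x.mean()  # centered vectors in subject space
y = rng.normal(size=10_000); y = y - y.mean()
d = x - y
cos_angle = x @ d / (np.linalg.norm(x) * np.linalg.norm(d))
print(cos_angle, np.degrees(np.arccos(cos_angle)))  # ~0.707 and ~45 degrees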
4,776 | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | I believe that there's a simple intuition based on symmetry here, too. Since X and Y have the same distributions and have a covariance of 0, the relationship of X ± Y with X should "explain" half of the variation in X ± Y; the other half should be explained by Y. So R² should be 1/2, which means R is 1/√2 ≈ 0.707. | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | I believe that there's a simple intuition based on symmetry here, too. Since X and Y have the same distributions and have a covariance of 0, the relationship of X ± Y with X should "explain" half of t | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
I believe that there's a simple intuition based on symmetry here, too. Since X and Y have the same distributions and have a covariance of 0, the relationship of X ± Y with X should "explain" half of the variation in X ± Y; the other half should be explained by Y. So R² should be 1/2, which means R is 1/√2 ≈ 0.707. | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
I believe that there's a simple intuition based on symmetry here, too. Since X and Y have the same distributions and have a covariance of 0, the relationship of X ± Y with X should "explain" half of t |
4,777 | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | Here's a simple way to think about why there's a correlation here at all.
Imagine what goes on when you subtract two distributions. If the value of x is low then, on average, x - y will be a lower value than if the value of x is high. As x increases, x - y increases on average, and thus there is a positive correlation. | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7 | Here's a simple way to think about why there's a correlation here at all.
Imagine what goes on when you subtract two distributions. If the value of x is low then, on average, x - y will be a lower va | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
Here's a simple way to think about why there's a correlation here at all.
Imagine what goes on when you subtract two distributions. If the value of x is low then, on average, x - y will be a lower value than if the value of x is high. As x increases, x - y increases on average, and thus there is a positive correlation. | Why does the correlation coefficient between X and X-Y random variables tend to be 0.7
Here's a simple way to think about why there's a correlation here at all.
Imagine what goes on when you subtract two distributions. If the value of x is low then, on average, x - y will be a lower va |
4,778 | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | A more conventional notation is
$$y(\mu, \sigma) = \int\Phi\left(\frac{x-\mu}{\sigma}\right)\phi(x) dx = \Phi\left(\frac{-\mu}{\sqrt{1+\sigma^2}}\right).$$
This can be found by differentiating the integral with respect to $\mu$ and $\sigma$, producing elementary integrals which can be expressed in closed form:
$$\frac{\partial y}{\partial \mu}(\mu, \sigma) = -\frac{1}{\sqrt{2 \pi } \sqrt{\sigma ^2+1}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}},$$
$$\frac{\partial y}{\partial \sigma}(\mu, \sigma) = \frac{\mu\sigma }{\sqrt{2 \pi } \left(\sigma ^2+1\right)^{3/2}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}}.$$
This system can be integrated, beginning with the initial condition $y(0,1)$ = $\int\Phi(x)\phi(x)dx$ = $1/2$, to obtain the given solution (which is easily checked by differentiation). | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | A more conventional notation is
$$y(\mu, \sigma) = \int\Phi\left(\frac{x-\mu}{\sigma}\right)\phi(x) dx = \Phi\left(\frac{-\mu}{\sqrt{1+\sigma^2}}\right).$$
This can be found by differentiating the int | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
A more conventional notation is
$$y(\mu, \sigma) = \int\Phi\left(\frac{x-\mu}{\sigma}\right)\phi(x) dx = \Phi\left(\frac{-\mu}{\sqrt{1+\sigma^2}}\right).$$
This can be found by differentiating the integral with respect to $\mu$ and $\sigma$, producing elementary integrals which can be expressed in closed form:
$$\frac{\partial y}{\partial \mu}(\mu, \sigma) = -\frac{1}{\sqrt{2 \pi } \sqrt{\sigma ^2+1}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}},$$
$$\frac{\partial y}{\partial \sigma}(\mu, \sigma) = \frac{\mu\sigma }{\sqrt{2 \pi } \left(\sigma ^2+1\right)^{3/2}}e^{-\frac{1}{2}\frac{\mu ^2}{\sigma ^2+1}}.$$
This system can be integrated, beginning with the initial condition $y(0,1)$ = $\int\Phi(x)\phi(x)dx$ = $1/2$, to obtain the given solution (which is easily checked by differentiation). | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
A more conventional notation is
$$y(\mu, \sigma) = \int\Phi\left(\frac{x-\mu}{\sigma}\right)\phi(x) dx = \Phi\left(\frac{-\mu}{\sqrt{1+\sigma^2}}\right).$$
This can be found by differentiating the int |
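The identity is easy to verify numerically; a quick check with scipy (the values of $\mu$ and $\sigma$ below are arbitrary example choices):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 1.3, 0.7
lhs, _ = quad(lambda x: norm.cdf((x - mu) / sigma) * norm.pdf(x), -np.inf, np.inf)
rhs = norm.cdf(-mu / np.sqrt(1 + sigma**2))
print(lhs, rhs)  # the two values agree to quadrature precision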
4,779 | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | Let $X$ and $Y$ be independent normal random variables with $X \sim N(a,b^2)$ and $Y$ a standard normal random variable. Then, $$P\{X \leq Y \mid Y = w\} = P\{X \leq w\} = \Phi\left(\frac{w-a}{b}\right).$$ So, using the law of total probability, we get that
$$P\{X \leq Y\}
= \int_{-\infty}^\infty P\{X \leq Y \mid Y = w\}\phi(w)\,\mathrm dw
= \int_{-\infty}^\infty \Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw.$$
Now, $P\{X \leq Y\} = P\{X-Y \leq 0\}$ can be expressed in terms of $\Phi(\cdot)$ by noting that $X-Y \sim N(a,b^2+1)$, and thus we get
$$\int_{-\infty}^\infty \Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw = \Phi\left(\frac{-a}{\sqrt{b^2+1}}\right)$$
which is the same as the result in whuber's answer. | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | Let $X$ and $Y$ be independent normal random variables with $X \sim N(a,b^2)$ and $Y$ a standard normal random variable. Then, $$P\{X \leq Y \mid Y = w\} = P\{X \leq w\} = \Phi\left(\frac{w-a}{b}\rig | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
Let $X$ and $Y$ be independent normal random variables with $X \sim N(a,b^2)$ and $Y$ a standard normal random variable. Then, $$P\{X \leq Y \mid Y = w\} = P\{X \leq w\} = \Phi\left(\frac{w-a}{b}\right).$$ So, using the law of total probability, we get that
$$P\{X \leq Y\}
= \int_{-\infty}^\infty P\{X \leq Y \mid Y = w\}\phi(w)\,\mathrm dw
= \int_{-\infty}^\infty \Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw.$$
Now, $P\{X \leq Y\} = P\{X-Y \leq 0\}$ can be expressed in terms of $\Phi(\cdot)$ by noting that $X-Y \sim N(a,b^2+1)$, and thus we get
$$\int_{-\infty}^\infty \Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw = \Phi\left(\frac{-a}{\sqrt{b^2+1}}\right)$$
which is the same as the result in whuber's answer. | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
Let $X$ and $Y$ be independent normal random variables with $X \sim N(a,b^2)$ and $Y$ a standard normal random variable. Then, $$P\{X \leq Y \mid Y = w\} = P\{X \leq w\} = \Phi\left(\frac{w-a}{b}\rig |
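The probabilistic argument also suggests a direct Monte Carlo check of $P\{X \leq Y\}$ (the sample size and parameter values below are arbitrary):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b = 1.3, 0.7
X = rng.normal(a, b, 1_000_000)   # X ~ N(a, b^2)
Y = rng.normal(0, 1, 1_000_000)   # Y standard normal, independent of X
print((X <= Y).mean(), norm.cdf(-a / np.sqrt(b**2 + 1)))  # the two agree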
4,780 | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | Here is another solution: We define
\begin{align*}
I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx,
\end{align*}
which we can evaluate at $\gamma=-\xi\mu$ to obtain our desired expression.
We know at least one function value of $I(\gamma)$, e.g., $I(0)=1/2$
due to symmetry. We take the derivative with respect to $\gamma$
\begin{align*}
\frac{dI}{d\gamma} & =\int_{-\infty}^{\infty}\mathcal{N}((\xi x+\gamma)|0,1)\mathcal{N}(x|0,\sigma^{2})dx\\
& =\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\xi x+\gamma\right)^{2}\right)\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx.
\end{align*}
and complete the square
\begin{align*}
\left(\xi x+\gamma\right)^{2}+\frac{x^{2}}{\sigma^{2}} & =\underbrace{\left(\xi^{2}+\sigma^{-2}\right)}_{=a}x^{2}+\underbrace{2\gamma\xi}_{=b}x+\underbrace{\gamma^{2}}_{=c} \\
 & =a\left(x+\frac{b}{2a}\right)^{2}+\left(c-\frac{b^{2}}{4a}\right),\\
c-\frac{b^{2}}{4a} & =\gamma^{2}-\frac{4\gamma^{2}\xi^{2}}{4\left(\xi^{2}+\sigma^{-2}\right)}\\
 & =\gamma^{2}\left(1-\frac{\xi^{2}}{\xi^{2}+\sigma^{-2}}\right)\\
 & =\gamma^{2}\left(\frac{1}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
Thus,
\begin{align*}
\frac{dI}{d\gamma} & =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\int_{-\infty}^{\infty}\sqrt{\frac{a}{2\pi}}\exp\left(-\frac{1}{2}a\left(x+\frac{b}{2a}\right)^{2}\right)dx\\
& =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\\
&=\frac{1}{\sqrt{2\pi\sigma^{2}a}}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\\
& =\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{\gamma^{2}}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
and integration yields
$$
\begin{align*}
I(\gamma)
&=\int_{-\infty}^{\gamma}\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{z^{2}}{1+\xi^{2}\sigma^{2}}\right)dz\\
&=\Phi\left(\frac{\gamma}{\sqrt{1+\xi^{2}\sigma^{2}}}\right)
\end{align*}
$$
which implies
$$
\begin{align*}
\int_{-\infty}^{\infty}\Phi(\xi x)\mathcal{N}(x|\mu,\sigma^{2})dx
&=I(\xi\mu)\\
&=\Phi\left(\frac{\xi\mu}{\sqrt{1+\xi^{2}\sigma^{2}}}\right).
\end{align*}
$$ | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ | Here is another solution: We define
\begin{align*}
I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx,
\end{align*}
which we can evaluate $\gamma=-\xi\mu$ to obtain ou | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
Here is another solution: We define
\begin{align*}
I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx,
\end{align*}
which we can evaluate at $\gamma=-\xi\mu$ to obtain our desired expression.
We know at least one function value of $I(\gamma)$, e.g., $I(0)=1/2$
due to symmetry. We take the derivative with respect to $\gamma$
\begin{align*}
\frac{dI}{d\gamma} & =\int_{-\infty}^{\infty}\mathcal{N}((\xi x+\gamma)|0,1)\mathcal{N}(x|0,\sigma^{2})dx\\
& =\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\xi x+\gamma\right)^{2}\right)\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx.
\end{align*}
and complete the square
\begin{align*}
\left(\xi x+\gamma\right)^{2}+\frac{x^{2}}{\sigma^{2}} & =\underbrace{\left(\xi^{2}+\sigma^{-2}\right)}_{=a}x^{2}+\underbrace{2\gamma\xi}_{=b}x+\underbrace{\gamma^{2}}_{=c} \\
 & =a\left(x+\frac{b}{2a}\right)^{2}+\left(c-\frac{b^{2}}{4a}\right),\\
c-\frac{b^{2}}{4a} & =\gamma^{2}-\frac{4\gamma^{2}\xi^{2}}{4\left(\xi^{2}+\sigma^{-2}\right)}\\
 & =\gamma^{2}\left(1-\frac{\xi^{2}}{\xi^{2}+\sigma^{-2}}\right)\\
 & =\gamma^{2}\left(\frac{1}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
Thus,
\begin{align*}
\frac{dI}{d\gamma} & =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\int_{-\infty}^{\infty}\sqrt{\frac{a}{2\pi}}\exp\left(-\frac{1}{2}a\left(x+\frac{b}{2a}\right)^{2}\right)dx\\
& =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\\
&=\frac{1}{\sqrt{2\pi\sigma^{2}a}}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\\
& =\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{\gamma^{2}}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
and integration yields
$$
\begin{align*}
I(\gamma)
&=\int_{-\infty}^{\gamma}\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{z^{2}}{1+\xi^{2}\sigma^{2}}\right)dz\\
&=\Phi\left(\frac{\gamma}{\sqrt{1+\xi^{2}\sigma^{2}}}\right)
\end{align*}
$$
which implies
$$
\begin{align*}
\int_{-\infty}^{\infty}\Phi(\xi x)\mathcal{N}(x|\mu,\sigma^{2})dx
&=I(\xi\mu)\\
&=\Phi\left(\frac{\xi\mu}{\sqrt{1+\xi^{2}\sigma^{2}}}\right).
\end{align*}
$$ | How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$
Here is another solution: We define
\begin{align*}
I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx,
\end{align*}
which we can evaluate $\gamma=-\xi\mu$ to obtain ou |
4,781 | Most confusing statistical terms | "Significant" is the biggest one I run into, because it has both a common English-use meaning and that meaning will crop up in the discussion of research results. I even find myself mixing in "significant" to mean important in the same sentence where I've talked about statistical results.
That way lies madness. | Most confusing statistical terms | "Significant" is the biggest one I run into, because it has both a common English-use meaning and that meaning will crop up in the discussion of research results. I even find myself mixing in "signifi | Most confusing statistical terms
"Significant" is the biggest one I run into, because it has both a common English-use meaning and that meaning will crop up in the discussion of research results. I even find myself mixing in "significant" to mean important in the same sentence where I've talked about statistical results.
That way lies madness. | Most confusing statistical terms
"Significant" is the biggest one I run into, because it has both a common English-use meaning and that meaning will crop up in the discussion of research results. I even find myself mixing in "signifi |
4,782 | Most confusing statistical terms | I would suggest adding Linear to the list.
I asked a question
on math.SE about what I, as an engineer, think of as linear
minimum mean square error estimation of a random variable $Y$
given the value of a random variable $X$ (meaning estimating
$Y$ as $\hat{Y} = aX+b$ with $a$ and $b$ being chosen so as to
minimize $E[(Y-aX-b)^2]$), and gave a partial answer. One of
the comments on the question said
"I am somewhat uncomfortable with your language, since I fear that this way of using the word "linear" might feed into the popular misunderstanding that the reason why linear regression in called linear regression is that one is fitting a line. People who think that then find it confusing when a statistician insists that one is doing linear regression when one fits a parabola or a sine wave, etc."
So, what does linear regression mean to a statistician? | Most confusing statistical terms | I would suggest adding Linear to the list.
I asked a question
on math.SE about what I, as an engineer, think of as linear
minimum mean square error estimation of a random variable $Y$
given the v | Most confusing statistical terms
I would suggest adding Linear to the list.
I asked a question
on math.SE about what I, as an engineer, think of as linear
minimum mean square error estimation of a random variable $Y$
given the value of a random variable $X$ (meaning estimating
$Y$ as $\hat{Y} = aX+b$ with $a$ and $b$ being chosen so as to
minimize $E[(Y-aX-b)^2]$), and gave a partial answer. One of
the comments on the question said
"I am somewhat uncomfortable with your language, since I fear that this way of using the word "linear" might feed into the popular misunderstanding that the reason why linear regression in called linear regression is that one is fitting a line. People who think that then find it confusing when a statistician insists that one is doing linear regression when one fits a parabola or a sine wave, etc."
So, what does linear regression mean to a statistician? | Most confusing statistical terms
I would suggest adding Linear to the list.
I asked a question
on math.SE about what I, as an engineer, think of as linear
minimum mean square error estimation of a random variable $Y$
given the v |
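One way to see the statistician's sense of "linear" is that the model only has to be linear in the coefficients; a sketch fitting a parabola by ordinary linear least squares (simulated data, coefficient values invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 1 + 2 * x - 0.5 * x**2 + rng.normal(0, 0.3, 200)

# "linear" = linear in the coefficients: the design matrix may contain x^2
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly (1, 2, -0.5), recovered by a *linear* regression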
4,783 | Most confusing statistical terms | "Confidence"
It's very hard to dissuade non-statisticians that their confidence interval is not (directly) a statement about the credibility of different parameter values.
To have confidence, in the technical meaning of the term, we need to imagine some set of repeated experiments, each one computing an interval in some pre-specified way. For the result to be a 95% confidence interval, 95% of these uses of the formula must trap the relevant parameter of interest.
But non-statisticians routinely interpret "95% confidence" to be a statement about plausible parameter values, based on one experiment alone. Typically, they assume that the interval covers 95% of some posterior beliefs about the parameter, i.e. "we're pretty certain the parameter is between $a$ and $b$". This instead defines a credible interval.
(There are of course situations when the two notions agree, either approximately or exactly. But in general they don't, and numerical agreement doesn't remove the problem of misuse of technical terms.) | Most confusing statistical terms | "Confidence"
It's very hard to dissuade non-statisticians that their confidence interval is not (directly) a statement about the credibility of different parameter values.
To have confidence, in the | Most confusing statistical terms
"Confidence"
It's very hard to dissuade non-statisticians that their confidence interval is not (directly) a statement about the credibility of different parameter values.
To have confidence, in the technical meaning of the term, we need to imagine some set of repeated experiments, each one computing an interval in some pre-specified way. For the result to be a 95% confidence interval, 95% of these uses of the formula must trap the relevant parameter of interest.
But non-statisticians routinely interpret "95% confidence" to be a statement about plausible parameter values, based on one experiment alone. Typically, they assume that the interval covers 95% of some posterior beliefs about the parameter, i.e. "we're pretty certain the parameter is between $a$ and $b$". This instead defines a credible interval.
(There are of course situations when the two notions agree, either approximately or exactly. But in general they don't, and numerical agreement doesn't remove the problem of misuse of technical terms.) | Most confusing statistical terms
"Confidence"
It's very hard to dissuade non-statisticians that their confidence interval is not (directly) a statement about the credibility of different parameter values.
To have confidence, in the |
4,784 | Most confusing statistical terms | probability
It seems to me that most of the problems associated with interpreting hypothesis tests and confidence intervals stem from the application of a Bayesian definition of "probability" when the procedure is based on a frequentist one. For example, taking the p-value to be the probability that the null hypothesis is true, when AFAICS no probability can be associated with the truth of a particular hypothesis in a frequentist setting.
4,785 | Most confusing statistical terms | "Likelihood" -- it is synonymous with "probability" in everyday speech, but in statistics it has a special meaning: it is a function of the parameters of a statistical model, given a particular data situation, whose value is the probability of the observed outcome under those parameter values.
4,786 | Most confusing statistical terms | Error.
In statistics, an "error" is a deviation of an actual data value from the prediction of a model.
In real life, an error is a spllng mstake or other goof.
4,787 | Most confusing statistical terms | "Inference"
One of the hardest things for me to understand at first was the difference between a population and a sample. Statisticians write these fancy population level regression equations and then all of a sudden drop down into sample level work and the $\beta$s become $b$s. It took me a long time to realize that you were using the sample level data and regression equations to estimate the population level parameters.
Another important part about inference is the central limit theorem. Once you realize that you are simply sampling from a population -- although sampling is another complicated feature akin to inference -- then you understand that even if the sample mean holds one value, that value isn't necessarily the same mean as in the population.
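A toy simulation of that point (my own sketch; the exponential population is an arbitrary choice): each sample mean is only an estimate of the one fixed population mean.

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=5.0, size=1_000_000)  # true mean ~ 5
sample_means = [rng.choice(population, size=50).mean() for _ in range(1_000)]
print(population.mean())      # the parameter
print(np.mean(sample_means))  # the estimates center on it
print(np.std(sample_means))   # but any single sample mean varies around it
```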
Perhaps I took a relatively loose reading of your question, but once someone understands inference, or the differences between a sample and the population, the entirety of statistics opens up to them.
4,788 | Most confusing statistical terms | To us (or at least me), "randomness" of a "sample" suggests that it is representative of the "population".
To others, "randomness" sometimes implies that a person/thing is unusual. | Most confusing statistical terms | To us (or at least me), "randomness" of a "sample" suggests that it is representative of the "population".
To others, "randomness" sometimes implies that a person/thing is unusual. | Most confusing statistical terms
To us (or at least me), "randomness" of a "sample" suggests that it is representative of the "population".
To others, "randomness" sometimes implies that a person/thing is unusual. | Most confusing statistical terms
To us (or at least me), "randomness" of a "sample" suggests that it is representative of the "population".
To others, "randomness" sometimes implies that a person/thing is unusual. |
4,789 | Most confusing statistical terms | I think one should distinguish between terms confusing the public and terms confusing statisticians. The above suggestions are mostly terms well understood by statisticians and (possibly) misunderstood by the public. I wish to add to the list some terms misunderstood by statisticians:
Bayesian: Originally referred to what is now known as subjective Bayes (a.k.a. epistemic, De Finetti). Today the term will be used anytime Bayes' rule shows up, rarely in the context of subjective beliefs, which is considered decision theory.
Empirical Bayes: Originally referred to a frequentist setup with a nonparametric prior. Today, it will typically mean that the parameters of the parametric (objective) prior are estimated and not known a priori, i.e., what was once known as type-II maximum likelihood.
Nonparametric: Sometimes refers to "model free". Sometimes to "distribution free". It has become practically uninformative in the days when "parametric" models might include millions of parameters.
Type III error: Sometimes refers to a sign error, sometimes to a mis-specification of the model.
4,790 | Most confusing statistical terms | Ecological, commonly used to refer to biological systems, but also a statistical fallacy. From Wikipedia:
An ecological fallacy (or ecological inference fallacy) is an error in the interpretation of statistical data in an ecological study, whereby inferences about the nature of specific individuals are based solely upon aggregate statistics collected for the group to which those individuals belong. This fallacy assumes that individual members of a group have the average characteristics of the group at large.
4,791 | Most confusing statistical terms | Is a "survey" a type of math ("survey sampling") or a piece of paper ("questionnaire")?
I haven't conducted a survey on this, but I suspect that much of the public considers a "survey" to be the latter. I suspect further that they don't think about the former.
4,792 | Most confusing statistical terms | "Loadings", "Coefficients" and "Weights"; when talking about Principal Component Analysis.
I usually find people being quite ad hoc when using them, employing them interchangeably without first explicitly defining what they mean, and I have come across papers that refer to "loading vectors" and sometimes mean the PCs themselves, other times the "weights" associated with a specific PC.
Probably the fact that Jolliffe's excellent reference on Principal Components states at the end of section 1.1 "Some authors distinguish between the terms ‘loadings’ and ‘coefficients,’ depending on the normalization constraint used, but they will be used interchangeably in this book." just made people think they have a free pass to mix and match terminology to their liking....
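To make the ambiguity concrete, here is one common convention, sketched with scikit-learn (my own example; scaling the unit-norm eigenvectors by the PC standard deviations is just one of the "loadings" conventions in use):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
pca = PCA(n_components=2).fit(X)

weights = pca.components_  # unit-norm eigenvectors ("coefficients"/"weights")
loadings = weights.T * np.sqrt(pca.explained_variance_)  # one "loadings" convention
print(np.linalg.norm(weights, axis=1))  # [1., 1.]
print(loadings.shape)                   # (4, 2): variables x components
```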
4,793 | Most confusing statistical terms | Additive model. Still not really sure what this means. I think it refers to a model without interaction terms. But then I will come across an article where they're using it to refer to something else, i.e. a spline model.
4,794 | Most confusing statistical terms | Consistency
First, many other people read into this a notion of something like "does not have any (internal) contradictions", which is related, but surely not equivalent, to definitions used in statistics.
Second, even within statistics, it has more than one meaning, such as consistency of an estimator, consistency of a test or consistent model selection criteria.
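For the estimator sense, a one-liner simulation sketch (my own illustration): the sample mean concentrates around the true mean as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(7)
for n in (10, 100, 10_000, 1_000_000):
    # Consistency: the estimates approach the true mean (3.0) as n increases.
    print(n, rng.normal(loc=3.0, scale=1.0, size=n).mean())
```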
4,795 | Most confusing statistical terms | One of the terms that I find most confusing is the "confusion matrix".
Of course, it is the term itself that is confusing, not the concept.
I tried to track the history of the term and it is quite interesting too.
The confusion matrix was invented in 1904 by Karl Pearson (http://en.wikipedia.org/wiki/Karl_Pearson), who used the term "contingency table" (http://en.wikipedia.org/wiki/Contingency_table). It appeared in Karl Pearson, F.R.S. (1904). Mathematical contributions to the theory of evolution (PDF). Dulau and Co. http://ia600408.us.archive.org/18/items/cu31924003064833/cu31924003064833.pdf
During World War 2, detection theory (https://en.wikipedia.org/wiki/Detection_theory) was developed as an investigation of the relations between stimulus and response. The confusion matrix was used there.
Through detection theory, the term entered psychology, and from there it reached machine learning.
It seems that though the concept was invented in statistics, a field closely related to machine learning, it reached machine learning only after a detour of about 100 years.
For some references on the usage of the term, see:
What is the origin of the term confusion matrix?
4,796 | Most confusing statistical terms | "Statistics"
To the general public, a substitute for "now I'm about to lie to you and speak in a way you don't understand."
4,797 | How to assess the similarity of two histograms? | A recent paper that may be worth reading is:
Cao, Y. and Petzold, L., "Accuracy limitations and the measurement of errors in the stochastic simulation of chemically reacting systems," 2006.
Although this paper's focus is on comparing stochastic simulation algorithms, the main idea is essentially how to compare two histograms.
You can access the pdf from the author's webpage.
4,798 | How to assess the similarity of two histograms? | There are plenty of distance measures between two histograms. You can read a good categorization of these measures in:
K. Meshgi and S. Ishii, "Expanding Histogram of Colors with Gridding to Improve Tracking Accuracy," in Proc. of MVA'15, Tokyo, Japan, May 2015.
The most popular distance functions are listed here for your convenience:
$L_0$ or Hamming Distance
$D_{L0} = \sum\limits_{i} \mathbb{1}\left[ h_1(i) \neq h_2(i) \right]$
$L_1$, Manhattan, or City Block Distance
$D_{L1} = \sum_{i}\lvert h_1(i) - h_2(i) \rvert $
$L_2$ or Euclidean Distance
$D_{L2} = \sqrt{\sum_{i}\left( h_1(i) - h_2(i) \right) ^2 }$
$L_{\infty}$ or Chebyshev Distance
$D_{L\infty} = \max_{i}\lvert h_1(i) - h_2(i) \rvert $
$L_p$ or Fractional Distance (part of the Minkowski distance family)
$D_{Lp} = \left(\sum\limits_{i}\lvert h_1(i) - h_2(i) \rvert ^p \right)^{1/p}$ and $0<p<1$
Histogram Intersection
$D_{\cap} = 1 - \frac{\sum_{i} \min\left(h_1(i),h_2(i)\right)}{\min\left(\vert h_1(i)\vert,\vert h_2(i) \vert \right)}$
Cosine Distance
$D_{CO} = 1 - \sum_i h_1(i)h_2(i)$
Canberra Distance
$D_{CB} = \sum_i \frac{\lvert h_1(i)-h_2(i) \rvert}{\min\left( \lvert h_1(i)\rvert,\lvert h_2(i)\rvert \right)}$
Pearson's Correlation Coefficient
$ D_{CR} = \frac{\sum_i \left(h_1(i)- \frac{1}{n} \right)\left(h_2(i)- \frac{1}{n} \right)}{\sqrt{\sum_i \left(h_1(i)- \frac{1}{n} \right)^2\sum_i \left(h_2(i)- \frac{1}{n} \right)^2}} $
Kolmogorov-Smirnov Divergence
$ D_{KS} = \max_{i}\lvert h_1(i) - h_2(i) \rvert $
Match Distance
$D_{MA} = \sum\limits_{i}\lvert h_1(i) - h_2(i) \rvert $
Cramer-von Mises Distance
$D_{CM} = \sum\limits_{i}\left( h_1(i) - h_2(i) \right)^2$
$\chi^2$ Statistics
$D_{\chi^2} = \sum_i \frac{\left(h_1(i) - h_2(i)\right)^2}{h_1(i) + h_2(i)}$
Bhattacharyya Distance
$ D_{BH} = \sqrt{1-\sum_i \sqrt{h_1(i)h_2(i)}} $ (this form is also known as the Hellinger distance)
Squared Chord
$ D_{SC} = \sum_i\left(\sqrt{h_1(i)}-\sqrt{h_2(i)}\right)^2 $
Kullback-Leibler Divergence
$D_{KL} = \sum_i h_1(i)\log\frac{h_1(i)}{m(i)}$ with $m(i) = h_2(i)$ for the plain KL divergence; in the Jeffrey divergence below, $m(i) = \frac{h_1(i)+h_2(i)}{2}$
Jeffrey Divergence
$D_{JD} = \sum_i \left(h_1(i)\log\frac{h_1(i)}{m(i)}+h_2(i)\log\frac{h_2(i)}{m(i)}\right)$
Earth Mover's Distance (this is the first member of the Transportation distance family, which embeds binning information $A$ into the distance; for more information please refer to the above mentioned paper or the Wikipedia entry)
$ D_{EM} = \frac{\min_{f_{ij}}\sum_{i,j}f_{ij}A_{ij}}{\sum_{i,j}f_{ij}}$
$ \sum_j f_{ij} \leq h_1(i) , \sum_i f_{ij} \leq h_2(j) , \sum_{i,j} f_{ij} = \min\left( \sum_i h_1(i), \sum_j h_2(j) \right) $ and $f_{ij}$ represents the flow from $i$ to $j$
Quadratic Distance
$D_{QU} = \sqrt{\sum_{i,j} A_{ij}\left(h_1(i) - h_2(j)\right)^2}$
Quadratic-Chi Distance
$D_{QC} = \sqrt{\sum_{i,j} A_{ij}\left(\frac{h_1(i) - h_2(i)}{\left(\sum_c A_{ci}\left(h_1(c)+h_2(c)\right)\right)^m}\right)\left(\frac{h_1(j) - h_2(j)}{\left(\sum_c A_{cj}\left(h_1(c)+h_2(c)\right)\right)^m}\right)}$ and $\frac{0}{0} \equiv 0$
A Matlab implementation of some of these distances is available from my GitHub repository. Also, you can search for people like Yossi Rubner, Ofir Pele, Marco Cuturi, and Haibin Ling for more state-of-the-art distances.
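For orientation, here is a minimal Python sketch of a few of the bin-to-bin distances above, plus the 1-D Earth Mover's Distance via scipy (my own implementation of the listed formulas; the example histograms are made up):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def l1(h1, h2):
    # Manhattan / City Block distance
    return np.abs(h1 - h2).sum()

def chi2(h1, h2):
    # Chi-squared statistic; bins where both histograms are zero are skipped
    m = (h1 + h2) > 0
    return ((h1[m] - h2[m]) ** 2 / (h1[m] + h2[m])).sum()

def bhattacharyya(h1, h2):
    # D_BH above (the Hellinger form), for normalized histograms
    return np.sqrt(max(0.0, 1.0 - np.sqrt(h1 * h2).sum()))

h1 = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
h2 = np.array([0.3, 0.3, 0.2, 0.1, 0.1])
print(l1(h1, h2), chi2(h1, h2), bhattacharyya(h1, h2))

# 1-D EMD: bin centers as the values, bin masses as the weights.
centers = np.arange(5.0)
print(wasserstein_distance(centers, centers, u_weights=h1, v_weights=h2))
```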
Update: Alternative definitions of some of these distances appear here and there in the literature, so I list them here for the sake of completeness.
Canberra distance (another version)
$D_{CB}=\sum_i \frac{|h_1(i)-h_2(i)|}{|h_1(i)|+|h_2(i)|}$
Bray-Curtis Dissimilarity, Sorensen Distance (since the histograms each sum to one, this equals half of $D_{L1}$)
$D_{BC} = 1 - \frac{2 \sum_i \min\left(h_1(i), h_2(i)\right)}{\sum_i h_1(i) + \sum_i h_2(i)}$
Jaccard Distance (i.e. intersection over union, another version)
$D_{IOU} = 1 - \frac{\sum_i \min(h_1(i),h_2(i))}{\sum_i \max(h_1(i),h_2(i))}$
4,799 | How to assess the similarity of two histograms? | The standard answer to this question is the chi-squared test. The KS test is for unbinned data, not binned data. (If you have the unbinned data, then by all means use a KS-style test, but if you only have the histogram, the KS test is not appropriate.)
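A sketch of one way to run it (my own example, assuming raw bin counts and scipy): stack the two count vectors as a 2 x K contingency table.

```python
import numpy as np
from scipy.stats import chi2_contingency

counts1 = np.array([12, 30, 45, 30, 13])  # bin counts of histogram 1
counts2 = np.array([20, 35, 30, 25, 10])  # bin counts of histogram 2
# Tests whether the bin proportions differ between the two histograms.
chi2, p, dof, expected = chi2_contingency(np.vstack([counts1, counts2]))
print(chi2, p)
```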
4,800 | How to assess the similarity of two histograms? | You're looking for the Kolmogorov-Smirnov test. Don't forget to divide the bar heights by the sum of all observations of each histogram.
Note that the KS test also reports a difference if, e.g., the means of the distributions are shifted relative to one another. If translation of the histogram along the x-axis is not meaningful in your application, you may want to subtract the mean from each histogram first.
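If you do have the raw samples, a minimal sketch with scipy (my own example; the shifted-mean samples are made up to show the point about translation):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(0.3, 1.0, 500)           # same shape, shifted mean
print(ks_2samp(a, b).pvalue)             # small: KS flags the shift
print(ks_2samp(a - a.mean(), b - b.mean()).pvalue)  # large after centering
```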